Artificial Neural Network Building Blocks
- Neural networks are built from simpler modules or building blocks, just as matter is built from atoms and electronic circuits from logic gates.
- An artificial neural network depends on the following three building blocks:
- Network Topology
- Adjustments of weights or learning
- Activation functions
Network Topology
- The topology of a neural network refers to the way its neurons are connected, and it is a significant factor in how the network functions and learns. A common topology in unsupervised learning is a direct mapping of inputs to a group of units.
- All input values are connected to all neurons in the hidden layer, the outputs of the hidden neurons are connected to all neurons in the output layer, and the activation functions of the output neurons determine the output of the whole network.
- Such networks are popular partly because they are theoretically known to be universal function approximators, given a suitable non-linear activation function such as a sigmoid or Gaussian.
Feedforward Network
- The development of layered feedforward networks began in the late 1950s with Rosenblatt's perceptron and Widrow's Adaptive Linear Element (ADALINE). The perceptron and ADALINE are single-layer networks and are usually referred to as single-layer perceptrons.
- Single-layer perceptrons can only solve linearly separable problems.
- MLP networks overcome many limitations of single-layer perceptrons and can be trained with the backpropagation algorithm. The backpropagation method was invented independently several times.
- In 1974, Werbos developed a backpropagation training algorithm, but his work remained largely unknown in the scientific community. In 1985, Parker rediscovered the technique, and soon after Parker published his findings, Rumelhart, Hinton, and Williams also rediscovered the method. It was the efforts of Rumelhart and the other members of the Parallel Distributed Processing (PDP) group that made backpropagation a pillar of neurocomputing.
Single-Layer Feedforward Network
- Rosenblatt first constructed the single-layer feedforward network in the late 1950s and early 1960s. In this architecture there is just one weighted layer: the inputs are connected directly to the output neurons, as in the sketch below.
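- The following is a minimal sketch of such a single weighted layer: a perceptron trained on the linearly separable AND function. The data, learning rate, and epoch count are illustrative assumptions, not from the original text.

```python
import numpy as np

# Single-layer perceptron: one weighted layer mapping inputs straight to output.
# Trained on the linearly separable AND function (illustrative data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1  # assumed learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step activation
        w += lr * (target - pred) * xi     # perceptron learning rule
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```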
Multilayer Feedforward Network
- A multilayer feedforward neural network is an interconnection of perceptrons in which information and calculations flow in one direction, from the input data to the outputs. The number of layers in the network equals the number of layers of perceptrons.
- The simplest such network has a single input layer and an output layer of perceptrons. A multilayer feedforward network has more than one weighted layer; since it has at least one layer between the input and the output layer, that layer is called the hidden layer. A minimal forward pass is sketched below.
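- This sketch shows a forward pass through one hidden layer; the 2-3-1 layer sizes and the random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass of a 2-3-1 multilayer feedforward network
# (layer sizes and random weights are illustrative assumptions).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # input -> hidden weights
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # hidden -> output weights

def forward(x):
    h = sigmoid(W1 @ x + b1)     # hidden layer activations
    return sigmoid(W2 @ h + b2)  # output of the whole network

print(forward(np.array([0.5, -1.0])))
```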
Feedback Network
- A feedback-based prediction is an approximation of an outcome computed iteratively, where each iteration depends on the current outcome. Feedback is a common way of making predictions in different fields, ranging from control theory to psychology.
- A feedback network has feedback paths, which means the signal can flow in both directions through loops. This makes it a non-linear dynamic system, which changes continuously until it reaches an equilibrium state.
Feedback networks may be divided into the following types:
Recurrent Network
- A recurrent neural network is a network of neurons with feedback connections; the human brain itself is such a network. Recurrent networks can learn many behaviors, sequence-processing tasks, algorithms, and programs that are not learnable by conventional techniques, which explains the rapidly growing interest in artificial recurrent networks for technical applications.
Fully Recurrent Network
- The most straightforward form of a fully recurrent neural network is a Multi-Layer Perceptron (MLP) with the previous set of hidden unit activations feeding back along with the inputs. It is among the simplest neural network designs because all nodes are connected to all other nodes, and every single node works as both input and output.
- Time t has to be discretized, with the activations updated at each time step. The time scale may correspond to the activity of real neurons, or, for artificial systems, any step size appropriate for the given problem can be used. A delay unit must be introduced to hold activations until they are processed at the next time step, as in the sketch below.
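- A minimal sketch of this discretized update, where the delayed hidden activations h(t-1) feed back alongside the input x(t); the sizes and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4                     # illustrative sizes
W_x = rng.normal(size=(n_hid, n_in))   # input -> hidden weights
W_h = rng.normal(size=(n_hid, n_hid))  # hidden -> hidden feedback weights

h = np.zeros(n_hid)  # delay unit: holds activations for the next time step
for t, x_t in enumerate(rng.normal(size=(5, n_in))):  # five time steps
    h = np.tanh(W_x @ x_t + W_h @ h)  # h(t) depends on x(t) and h(t-1)
    print(t, h)
```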
Jordan Network
- The Jordan network is a simple recurrent structure in which only one value of the process input signal (from the previous sampling instant) and only one delayed value of the model's output signal (from the previous sampling instant) are used as the inputs of the network.
- In a basic MPC (Model Predictive Control) algorithm, the nonlinear Jordan neural model is repeatedly linearized online around an operating point, which leads to a quadratic optimization problem. The adequacy of the described MPC algorithm can then be compared with that of a nonlinear MPC scheme in which nonlinear optimization is performed online at each sampling instant.
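- A sketch of the Jordan structure described above: the network input is the pair [u(k-1), y(k-1)], the previous process input and the previous (delayed) model output. The hidden size and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 2))  # hidden weights over [u(k-1), y(k-1)]
w_out = rng.normal(size=3)      # hidden -> output weights

def jordan_step(u_prev, y_prev):
    h = np.tanh(W_in @ np.array([u_prev, y_prev]))  # hidden layer
    return w_out @ h                                # model output y(k)

u_prev, y_prev = 0.0, 0.0
for u in [0.5, 1.0, -0.3, 0.8]:      # illustrative input sequence
    y = jordan_step(u_prev, y_prev)  # the delayed output feeds back
    print(y)
    u_prev, y_prev = u, y
```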
Adjustments of Weights or Learning:
- Learning in an ANN is the process of adjusting the weights of the connections between the neurons of a given network. Learning in artificial neural networks falls into three categories: supervised learning, unsupervised learning, and reinforcement learning.
Supervised Learning
- Supervised learning consists of two words, supervised and learning. To supervise means to guide: a supervisor's duty is to guide and show the way. Here the machine or program learns with the help of an existing, labeled data set.
- That is, the existing data set acts as a supervisor for handling new data. A basic example is electronic gadget price prediction: the price of a gadget is predicted from the observed prices of other, similar gadgets.
- During the training of an artificial neural network under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired output vector, and an error signal is generated if the two differ. Based on this error signal, the weights are adjusted until the actual output matches the desired output, as in the sketch below.
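- A minimal sketch of this error-driven weight adjustment using the delta (LMS) rule on a single linear unit; the data, target function, and learning rate are illustrative assumptions.

```python
import numpy as np

# The desired outputs act as the supervisor: compare actual and desired
# outputs, form an error signal, and adjust the weights accordingly.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])  # illustrative inputs
d = np.array([5.0, 4.0, 9.0])                       # desired outputs (d = x1 + 2*x2)
w = np.zeros(2)
lr = 0.05  # assumed learning rate

for epoch in range(200):
    for xi, target in zip(X, d):
        error = target - xi @ w  # error signal
        w += lr * error * xi     # weight adjustment driven by the error

print(w)  # approaches [1.0, 2.0]
```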
Unsupervised Learning
- As the name suggests, unsupervised learning means predicting something without any supervision from labeled data. In unsupervised learning, the data are grouped according to similar characteristics. There is no existing labeled data to provide direction; in other words, there is no supervisor.
- During the training of an artificial neural network under unsupervised learning, input vectors of a similar type are combined to form clusters.
- When a new input pattern is presented, the neural network gives an output response indicating the class to which the input pattern belongs, as in the sketch below.
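- A minimal sketch of this clustering behavior using winner-take-all competitive learning; the two-cluster data, unit count, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two illustrative clusters of input vectors, around (0, 0) and (1, 1).
data = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
                  rng.normal(1.0, 0.1, size=(20, 2))])
centers = data[[0, 20]].copy()  # one weight vector per cluster unit
lr = 0.1

for epoch in range(10):
    for x in data:
        winner = np.argmin(np.linalg.norm(centers - x, axis=1))  # nearest unit wins
        centers[winner] += lr * (x - centers[winner])            # move toward input

print(centers)  # the units settle near the two cluster means

# A new input pattern is assigned the class of its nearest unit:
x_new = np.array([0.9, 1.1])
print(np.argmin(np.linalg.norm(centers - x_new, axis=1)))  # -> 1
```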
Reinforcement Learning
- Reinforcement Learning (RL) is a technique for solving control optimization problems: identifying the best action in each state visited by the system so as to optimize some objective function.
- Reinforcement learning becomes valuable when the system has a huge number of states and a complex stochastic structure that is not amenable to closed-form analysis. If a problem has a relatively small number of states and a relatively simple stochastic structure, one can use dynamic programming instead.
- As the name suggests, this kind of learning is used to strengthen the network based on critic information. The learning procedure resembles supervised learning, but we may have very little information. During training, the network receives some feedback from its environment.
- This makes it fairly similar to supervised learning. However, the feedback obtained here is evaluative, not instructive, which means there is no teacher as in supervised learning. After receiving the feedback, the network adjusts its weights to obtain better critic feedback in the future, as in the sketch below.
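- A minimal sketch of evaluative feedback driving value updates: tabular Q-learning on a toy five-state chain. The environment, rewards, and parameters are illustrative assumptions.

```python
import numpy as np

# Toy 5-state chain: action 0 = left, 1 = right; reward 1 only on
# reaching the goal state 4. The reward is evaluative feedback: it
# scores the outcome without instructing which action was correct.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9  # assumed step size and discount
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != 4:
        a = int(rng.integers(n_actions))  # explore with random actions
        s2 = max(s - 1, 0) if a == 0 else min(s + 1, 4)
        r = 1.0 if s2 == 4 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # value update
        s = s2

print(np.argmax(Q, axis=1))  # greedy policy: move right in states 0-3
```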
Activation Function:
- An activation function computes a neuron's output from the weighted sum of its inputs and bias, and it is used to decide whether the neuron fires or not. It also shapes how information flows through gradient-based training, normally gradient descent, and it produces the output of the neural network from the parameters in the data.
- An activation function can be either linear or non-linear, depending on the function it represents. Activation functions are used to control the outputs of neural networks across various areas, such as speech recognition, segmentation, fingerprint detection, and cancer detection systems.
- In an artificial neural network, we apply activation functions to the weighted input to get a precise output. The following are some activation functions commonly used in ANNs.
Linear Activation Function
- The equation of the linear activation function is the same as the equation of a straight line, i.e., f(x) = ax.
- If we have many layers and all of them are linear in nature, then the final activation of the last layer is just a linear function of the input to the first layer, so the stacked layers collapse into one. The range of a linear function is -infinity to +infinity.
- The linear activation function is typically used in only one place: the output layer. The sketch below checks the collapse claim.
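- A small check of the claim that stacking linear layers is equivalent to a single linear layer; the weight matrices here are arbitrary illustrative values.

```python
import numpy as np

# Two stacked linear layers collapse into one linear map:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so extra linear layers add no power.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 3))
x = rng.normal(size=2)
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True
```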
Sigmoid Function
- The sigmoid function is a function whose graph is an S-shaped curve.
- This function is non-linear; its output lies between 0 and 1, and it changes most steeply for values of x roughly between -2 and +2.
- Within this steep region, the value of y is very sensitive to x: a slight change in the value of x brings about a noticeable change in the value of y, as the sketch below shows.
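- A minimal sketch of the sigmoid, using its standard form σ(x) = 1 / (1 + e^(-x)); the sample points are illustrative.

```python
import numpy as np

def sigmoid(x):
    # S-shaped curve: output always lies between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

for x in [-5.0, -2.0, 0.0, 2.0, 5.0]:
    print(x, sigmoid(x))  # the steepest change occurs roughly for -2 < x < 2
```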
Tanh Function
- The Tanh function, also known as the hyperbolic tangent function, is an activation function that is often more effective than the sigmoid function.
- It is mathematically a scaled and shifted version of the sigmoid function; the sigmoid and tanh functions are similar and can be derived from each other.
- This function is non-linear, and its value range lies between -1 and +1, as the sketch below verifies.
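- A small check of the relationship between the two functions, tanh(x) = 2σ(2x) - 1; the sample points are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# tanh can be derived from the sigmoid: tanh(x) = 2*sigmoid(2x) - 1,
# with output in (-1, 1) instead of (0, 1).
xs = np.linspace(-3.0, 3.0, 7)
print(np.allclose(np.tanh(xs), 2 * sigmoid(2 * xs) - 1))  # True
```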