An artificial neural network (ANN), usually called simply a neural network (NN), is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. There are many types of artificial neural networks. Each is a computational simulation of a biological neural network model: such networks mimic the real-life behaviour of neurons and the electrical signals they pass between input (such as from the eyes or nerve endings in the hand), processing by the brain, and the brain's final output (such as reacting to light, touch, or heat). Other ANNs are adaptive systems used to model things such as environments and populations. Artificial neural network systems can be purpose-built hardware-and-software systems, or purely software-based models run on a computer.

Types of Artificial Neural Networks (ANN)

  • Feedforward Neural Network – The feedforward neural network was the first and arguably the simplest type of artificial neural network devised. In this network, information moves in only one direction, forward: data passes from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network.
  • Radial Basis Function (RBF) Neural Network – Radial basis functions are powerful techniques for interpolation in multidimensional space. An RBF is a function with a built-in distance criterion with respect to a center: its value depends on the distance of the input from that center. RBF neural networks have the advantage of not suffering from local minima in the same way as multi-layer perceptrons, but the disadvantage of requiring good coverage of the input space by radial basis functions.
  • Kohonen Self-organizing Neural Network – The self-organizing map (SOM) performs a form of unsupervised learning. A set of artificial neurons learns to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM attempts to preserve the topological relationships of the input.
  • Learning Vector Quantization Neural Network – Learning Vector Quantization (LVQ) can also be interpreted as a neural network architecture. In LVQ, prototypical representatives of the classes, together with an appropriate distance measure, parameterize a distance-based classification scheme.
  • Recurrent Neural Networks – Recurrent neural networks (RNNs) are models with bi-directional data flow: unlike a feedforward network, which propagates data only from input to output, an RNN also feeds outputs from later processing stages back to earlier stages. This makes RNNs usable as general sequence processors. Variants include the fully recurrent network (e.g. the Hopfield network and the Boltzmann machine), simple recurrent networks, echo state networks, long short-term memory (LSTM) networks, bi-directional RNNs, hierarchical RNNs, and stochastic neural networks.
  • Modular Neural Network – Biological studies have shown that the human brain functions not as a single massive network, but as a collection of small networks. This realization gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.
  • Physical Neural Network – A physical neural network includes electrically adjustable resistance material to simulate artificial synapses.
  • Other Special Types of Neural Networks
    • Holographic associative memory – Holographic associative memory represents a family of analog, correlation-based, associative, stimulus-response memories in which information is mapped onto the phase orientation of complex numbers.
    • Instantaneously Trained Neural Networks – Instantaneously trained neural networks (ITNNs) were inspired by the phenomenon of short-term learning that seems to occur instantaneously.
    • Spiking Neural Networks – Spiking neural networks (SNNs) are models which explicitly take into account the timing of inputs. The network input and output are usually represented as series of spikes (delta functions or more complex shapes). SNNs have the advantage of being able to process information in the time domain (signals that vary over time).
    • Dynamic Neural Networks – Dynamic neural networks not only deal with nonlinear multivariate behaviour, but also include (learning of) time-dependent behaviour such as various transient phenomena and delay effects.
    • Cascading Neural Networks – Cascade-Correlation is an architecture and supervised learning algorithm. Instead of merely adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure.
    • Neuro-Fuzzy Neural Networks – A neuro-fuzzy network is a fuzzy inference system in the body of an artificial neural network. Depending on the FIS type, there are several layers that simulate the processes involved in a fuzzy inference like fuzzification, inference, aggregation and defuzzification. Embedding an FIS in a general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system.
    • Compositional Pattern-producing Neural Networks – Compositional pattern-producing networks (CPPNs) are a variation of ANNs which differ in their set of activation functions and how they are applied. While typical ANNs often contain only sigmoid functions (and sometimes Gaussian functions), CPPNs can include both types of functions and many others.
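The strictly one-directional flow of the feedforward network described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a trained model: the layer sizes, weights, and sigmoid activation are all assumptions chosen for clarity.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, layers):
    """Propagate activations strictly forward through each (weights, biases)
    layer; there are no cycles or loops, matching the feedforward definition."""
    a = x
    for W, b in layers:
        a = [sigmoid(sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i)
             for row, b_i in zip(W, b)]
    return a

# one hidden layer with two units and one output unit (weights chosen by hand)
hidden = ([[0.5, -0.5], [0.3, 0.8]], [0.0, 0.1])
output = ([[1.0, -1.0]], [0.0])
y = feedforward([1.0, 2.0], [hidden, output])
```

Because each layer only reads the previous layer's activations, adding or removing hidden layers changes nothing about the control flow, only the length of the `layers` list.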
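The distance criterion that defines a radial basis function can likewise be made concrete. The Gaussian form and the hand-picked centers and weights below are illustrative assumptions; an actual RBF network would fit them to data.

```python
import math

def gaussian_rbf(x, center, width=1.0):
    """The value depends only on the distance of x from the center,
    which is the RBF's built-in distance criterion."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2.0 * width ** 2))

def rbf_network(x, centers, weights, width=1.0):
    """The network output is a weighted sum of radial basis activations."""
    return sum(w * gaussian_rbf(x, c, width) for w, c in zip(weights, centers))

# the activation peaks at 1.0 exactly when the input coincides with the center
peak = gaussian_rbf([1.0, 2.0], [1.0, 2.0])
```

The "good coverage" disadvantage noted above corresponds to needing enough centers that every likely input lies close to at least one of them.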
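The unsupervised learning performed by a Kohonen self-organizing map can be sketched as the classic two-step loop: find the best-matching unit, then pull its grid neighbours toward the input. The grid size, learning rate, and neighbourhood radius here are illustrative choices, not recommended values.

```python
import math
import random

def train_som(data, grid_w, grid_h, dim, epochs=20, lr=0.5, radius=1.0, seed=0):
    """Train a grid_w x grid_h self-organizing map on vectors of length dim."""
    rng = random.Random(seed)
    weights = {(i, j): [rng.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for _ in range(epochs):
        for x in data:
            # best-matching unit: the node whose weight vector is closest to x
            bmu = min(weights, key=lambda n: sum((w - xi) ** 2
                                                 for w, xi in zip(weights[n], x)))
            for n, w in weights.items():
                # neighbourhood function: nodes near the BMU on the grid move more
                grid_d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = math.exp(-grid_d2 / (2.0 * radius ** 2))
                for k in range(dim):
                    w[k] += lr * h * (x[k] - w[k])
    return weights

som = train_som([[0.0, 0.0], [1.0, 1.0]], grid_w=2, grid_h=2, dim=2)
```

Because the neighbourhood is measured on the output grid while the match is measured in the input space, nearby grid nodes end up responding to nearby inputs, which is the topology preservation described above.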
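The distance-based classification scheme of LVQ reduces to nearest-prototype lookup, with the simple LVQ1 rule as one way to adapt the prototypes. The labels, prototype positions, and learning rate below are hypothetical.

```python
def dist_sq(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def lvq_classify(x, prototypes):
    """Assign x the label of its nearest prototype (the distance-based scheme)."""
    label, _ = min(((lbl, dist_sq(x, p)) for lbl, p in prototypes),
                   key=lambda t: t[1])
    return label

def lvq1_update(x, true_label, prototypes, lr=0.1):
    """LVQ1: move the winning prototype toward x if it classified x
    correctly, and away from x otherwise."""
    winner = min(prototypes, key=lambda lp: dist_sq(x, lp[1]))
    sign = 1.0 if winner[0] == true_label else -1.0
    for k in range(len(winner[1])):
        winner[1][k] += sign * lr * (x[k] - winner[1][k])

protos = [("a", [0.0, 0.0]), ("b", [1.0, 1.0])]
label = lvq_classify([0.2, 0.1], protos)
```

The prototypes play the role of the "prototypical representatives of the classes", and `dist_sq` is the "appropriate distance measure" from the description above.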
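The feedback loop that distinguishes recurrent networks shows up clearly in a simple Elman-style step, where the hidden state computed at one time step is fed back in at the next. The weight values below are illustrative, not trained.

```python
import math

def elman_step(x, h, W_in, W_rec, b):
    """One step of a simple recurrent (Elman) network: the previous hidden
    state h re-enters the computation alongside the new input x."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(w_in, x)) +
                      sum(wr * hi for wr, hi in zip(w_rec, h)) + b_j)
            for w_in, w_rec, b_j in zip(W_in, W_rec, b)]

def run_sequence(xs, W_in, W_rec, b):
    """Process a whole sequence, carrying the hidden state forward in time."""
    h = [0.0] * len(b)
    states = []
    for x in xs:
        h = elman_step(x, h, W_in, W_rec, b)
        states.append(h)
    return states

W_in = [[0.5], [-0.3]]          # input-to-hidden weights (2 hidden units)
W_rec = [[0.1, 0.2], [0.0, 0.4]]  # hidden-to-hidden (recurrent) weights
b = [0.0, 0.1]
states = run_sequence([[1.0], [0.5], [-1.0]], W_in, W_rec, b)
```

It is the `W_rec` term that gives the network memory of earlier inputs and makes it a general sequence processor; setting it to zero recovers a feedforward layer applied independently at each step.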
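Finally, the time-domain character of spiking neural networks can be illustrated with a leaky integrate-and-fire neuron, a common spiking model; the threshold, leak factor, and constant input current here are assumptions for illustration.

```python
def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential leaks each step,
    accumulates input current, and a spike time is recorded (with a reset)
    whenever the potential crosses the threshold."""
    v = 0.0
    spike_times = []
    for t, current in enumerate(currents):
        v = leak * v + current
        if v >= threshold:
            spike_times.append(t)
            v = 0.0  # reset after firing
    return spike_times

# a constant input current produces a regular spike train
times = lif_spikes([0.3] * 8)  # → spikes at t = 3 and t = 7
```

Here the output is exactly the "series of spikes" mentioned above: information is carried by when the neuron fires, not by a continuous activation value.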