2.2.3 Types and Suitability of Neural Networks

Artificial neural networks are usually selected based on their mathematical functions and output parameters. Among the many types of artificial neural networks, some of the most important kinds are discussed in this section.

A feed-forward neural network (FFNN) is one of the simplest types of artificial neural network: the input data travels in only one direction, with no loop or cycle formation. In an FFNN, every neuron (perceptron) in one layer is connected to each node in the next layer, so every node is fully connected, and this systematic arrangement generates the output through the output layer. Any number of hidden layers may be arranged between the input and output layers; the hidden layers have no connection with the outer environment, and these networks may or may not have a hidden layer at all. Common applications are pattern recognition, speech recognition, data compression, computer vision, and so on. If an FFNN uses more than one hidden layer, it is called a deep feed-forward network. Adding more hidden layers can reduce overfitting and improve generalization. Based on the order of the synaptic operation in a hidden neuron, ANNs are classified as first-, second-, third-, or higher-order networks [14]. Back loops are absent in an FFNN. To reduce the prediction error, the back-propagation algorithm may be used: the weights between the input, hidden, and output layers are adjusted using a learning rate and momentum, and the error value is propagated back from the output layer to the input layer [15], as in the sketch below. The back-propagation algorithm has been widely adopted for forecasting problems [16, 17].

Radial basis networks (RBNs) behave as feed-forward networks but use a different function, the radial basis function, to activate the network. An RBN determines the gap between the generated output and the target output. The logistic (sigmoid) function produces an output value between 0 and 1, suited to yes-or-no decisions, and cannot be used for continuous values; the radial basis function instead considers the distance of a point from a center. The main advantages of RB neural networks are universal approximation and a faster learning rate.
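To make the back-propagation step concrete, the following is a minimal NumPy sketch of a one-hidden-layer feed-forward network trained with back propagation using a learning rate and a momentum term. The XOR task, layer sizes, and hyperparameter values are illustrative assumptions, not values from the text.

import numpy as np

rng = np.random.default_rng(0)

# Toy XOR problem (an assumed example task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for the input->hidden and hidden->output connections.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
# Velocity terms used by the momentum update.
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

lr, momentum = 0.5, 0.9  # assumed hyperparameters
for _ in range(10000):
    # Forward pass: data flows in one direction, input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error is propagated from the output layer
    # back toward the input layer.
    d_out = (out - y) * out * (1 - out)     # gradient at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer

    # Weight updates combine the learning rate with a momentum term.
    vW2 = momentum * vW2 - lr * (h.T @ d_out)
    vb2 = momentum * vb2 - lr * d_out.sum(0)
    vW1 = momentum * vW1 - lr * (X.T @ d_hid)
    vb1 = momentum * vb1 - lr * d_hid.sum(0)
    W2 += vW2; b2 += vb2
    W1 += vW1; b1 += vb1

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

The momentum term accumulates a velocity, so successive updates pointing in the same direction reinforce one another and speed up convergence.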

A recurrent neural network (RNN) uses information from previous iterations in the current one. The principle of the RNN is to keep the output and feed it back to the input layer to help estimate the output of the layer. In this type of neural network, every neuron in the hidden layers receives an input with a specific time delay. Recurrent neural networks require more time and have low computational speed. They can be used in time-series anomaly detection, speech recognition, speech synthesis, and robot control. Long short-term memory (LSTM) networks use a memory cell to process data in RNNs. Gated recurrent units (GRUs) differ from LSTMs yet have similar models and produce equally good results. Extreme learning machines (ELMs) determine the output weights by choosing the hidden nodes randomly: the output weights are learned in only one step, and the randomly assigned hidden weights are never updated. These algorithms work faster than other general neural network algorithms, as the sketch below illustrates.
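As an illustration of this one-step learning, here is a minimal ELM sketch in NumPy, assuming a toy sine-regression task and a hidden-layer size chosen only for the example: the hidden weights and biases are drawn at random and never updated, and the output weights are obtained in a single step via the Moore-Penrose pseudoinverse.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, y = sin(x) (an assumed example task).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

# Step 1: assign the hidden weights and biases randomly; they are never updated.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)

# Step 2: compute the hidden-layer activations for all training samples.
H = np.tanh(X @ W + b)

# Step 3: learn the output weights in one step via the Moore-Penrose pseudoinverse.
beta = np.linalg.pinv(H) @ y

# Prediction is a single forward pass through the fixed hidden layer.
y_hat = np.tanh(X @ W + b) @ beta
print("training MSE:", np.mean((y - y_hat) ** 2))

Because no iterative gradient descent is involved, the training cost is dominated by one matrix factorization, which is why ELMs train faster than gradient-based networks.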

Convolutional neural networks (CNNs) are primarily used for image classification, image clustering, time-series forecasting, and object recognition. Deconvolutional networks (DCNs) are CNNs that work in the opposite direction. The major drawbacks of conventional neural networks are a low learning rate and the need to tune all parameters iteratively. In a Hopfield network (HN), every neuron is directly connected with every other neuron. HNs are used to store memories and patterns and are also applied to optimization problems. A Kohonen neural network (KN), also known as a self-organizing map, is an unsupervised algorithm that is very useful for multidimensional scattered data. Because it produces output in only one or two dimensions, it is treated as a method of dimensionality reduction. The self-organizing process has different phases: first, a small weight is initialized for each neuron; in the second phase, the neuron closest to the input point is the "winning neuron," and the neurons connected to the winning neuron also move toward that point. KN networks use competitive learning rather than error-correction learning, as sketched below.
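To make the competitive-learning phases concrete, here is a simplified self-organizing map sketch in NumPy. The grid size, neighborhood function, decay schedules, and random 3-D input data are illustrative assumptions; full SOM implementations add further refinements.

import numpy as np

rng = np.random.default_rng(0)

# Toy 3-D inputs mapped onto a 10x10 grid (a 2-D output), i.e., dimensionality reduction.
data = rng.random((500, 3))
grid_h, grid_w, dim = 10, 10, 3
weights = rng.random((grid_h, grid_w, dim))  # phase 1: small random weights

# Grid coordinates, used to measure neighborhood distances on the map.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_epochs, lr0, sigma0 = 20, 0.5, 3.0  # assumed schedules
for epoch in range(n_epochs):
    lr = lr0 * (1 - epoch / n_epochs)              # decaying learning rate
    sigma = sigma0 * (1 - epoch / n_epochs) + 0.5  # shrinking neighborhood radius
    for x in data:
        # Phase 2: competitive step -- the neuron closest to the input wins.
        dists = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(np.argmin(dists), dists.shape)
        # The winner and its grid neighbors are pulled toward the input point.
        grid_dist = np.linalg.norm(coords - np.array(winner), axis=-1)
        influence = np.exp(-grid_dist**2 / (2 * sigma**2))[..., None]
        weights += lr * influence * (x - weights)

After training, each input can be summarized by the grid coordinates of its winning neuron, which is what makes the map act as a one- or two-dimensional projection of the data.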
