
2.6 Neural Network Model Development, Calibration and Validation

2.6.1 Materials and Methods

2.6.1.1 ANN Model Design


The data extracted from the abovementioned maps are divided into two data sets, one for training and one for testing, so that the network results can be analyzed and the models evaluated. The typical architecture of the three-layered MLFF perceptron used is shown in Figure 2.7. The five derived yield factors (NDVI, surface temperature, APAR, crop water stress index, and average yield) are taken as the neurons of the input layer. The output layer has a single neuron, i.e., yield.
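As a rough illustration of this split (the variable names below are hypothetical placeholders, since the original data files are not part of this text, and the 75/25 ratio is an assumption), the map-derived factors and observed yields can be partitioned in MATLAB as follows:

% Hypothetical sketch: X holds the five factors (NDVI, surface temperature,
% APAR, crop water stress index, average yield) as rows, one column per sample;
% T holds the observed yield for each sample.
nSamples = size(X, 2);
idx      = randperm(nSamples);        % shuffle the samples
nTrain   = round(0.75 * nSamples);    % assumed 75/25 split (ratio not stated in the text)
trainIdx = idx(1:nTrain);
testIdx  = idx(nTrain+1:end);
Xtrain = X(:, trainIdx);  Ttrain = T(trainIdx);
Xtest  = X(:, testIdx);   Ttest  = T(testIdx);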

The hidden layer is tested with different numbers of hidden neurons to find the optimum. The optimum number of neurons in the hidden layer and the parameters of the model are determined by trial and error. Wij is the connecting weight between the ith input layer neuron and the jth hidden layer neuron, and Vjk is the weight between the jth hidden layer neuron and the kth output layer neuron (in this case k = 1). Momentum and learning rate are the two main training parameters, which govern the convergence of the steepest-descent procedure [71]. The final weighting factors are used to simulate the relationship between crop yield and the corresponding crop growth factors, and the weights generated by the trained network are saved for estimation on new data. In the developed models, the number of hidden layer neurons was varied between 1 and 30. A sigmoidal transfer function is used in the hidden layer and a linear activation function in the output layer. The code to develop the neural network is written in the MATLAB programming language; a minimal sketch of this configuration is given below.
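The following is only a sketch of the setup described above (sigmoid hidden layer, linear output layer, gradient descent with momentum, and a trial-and-error search over 1 to 30 hidden neurons), not the chapter's original code. It uses MATLAB Neural Network Toolbox functions; the learning rate, momentum, and epoch values are assumptions, and Xtrain, Ttrain, Xtest, Ttest are the hypothetical splits from the previous sketch.

bestErr = Inf;
for h = 1:30                               % trial-and-error search over the hidden layer size
    net = feedforwardnet(h, 'traingdm');   % MLFF network trained by gradient descent with momentum
    net.layers{1}.transferFcn = 'logsig';  % sigmoid transfer function in the hidden layer
    net.layers{2}.transferFcn = 'purelin'; % linear activation in the output layer
    net.trainParam.lr     = 0.1;           % learning rate (assumed value)
    net.trainParam.mc     = 0.9;           % momentum coefficient (assumed value)
    net.trainParam.epochs = 1000;          % assumed stopping criterion
    net.divideFcn = 'dividetrain';         % use only the externally prepared training set
    net = train(net, Xtrain, Ttrain);      % adjust the weights Wij and Vjk by back-propagation
    err = mse(net, Ttest, net(Xtest));     % evaluate on the held-out testing set
    if err < bestErr
        bestErr = err;  bestNet = net;     % keep the best network and its final weights
    end
end

The weights stored in bestNet then play the role of the final weighting factors that are saved and reused for estimating yield from new data.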

Figure 2.7 Architecture of the proposed FFBPNN model (original figure).

The hidden layer receives data from the input layer neurons. In the hidden layer, the inputs are multiplied by suitable weights and summed, and the sigmoid transfer function is applied before the result is passed to the output layer. The mathematical expression of the linear transfer function used at the output layer is

$f(x) = x$ (2.5)

The output y is expressed as:

$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$ (2.6)

where f is the neuron activation or transfer function. The transfer function of each hidden neuron is a sigmoid, given as

$f(x) = \frac{1}{1 + e^{-x}}$ (2.7)

Figure 2.8 Relative error between observed and predicted crop yields for the training and testing data of the paddy crop in the 2015 Kharif season (original figure).

The neuron activation function is shown in Figure 2.8. The final form of the FFBPNN model, with the weights substituted, is given as

$Y = \sum_{j=1}^{q} V_j \, f\left(\sum_{i=1}^{n} W_{ij} X_i + b_j\right) + c$ (2.8)

where Y = yield per unit area, q = number of nodes in the hidden layer, Vj = weight coefficient between the jth hidden node and the output node, Wij and Xi = the input-to-hidden weights and input factors, bj = threshold of the jth hidden node, and c = threshold of the output node.
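Purely as an illustration of Equation (2.8), and not code from the chapter, the saved weights can be applied to a new set of factor values in a few lines of MATLAB. Here W, b, V, and c are hypothetical placeholders for the stored hidden-layer weights, hidden thresholds, hidden-to-output weights, and output threshold:

% x : 5x1 vector of input factors for one location (NDVI, temperature, APAR, CWSI, average yield)
% W : q x 5 hidden-layer weight matrix (Wij),  b : q x 1 hidden thresholds (bj)
% V : 1 x q output weight vector (Vj),         c : output threshold
sigmoid = @(s) 1 ./ (1 + exp(-s));   % sigmoid transfer function, Eq. (2.7)
hidden  = sigmoid(W * x + b);        % weighted sums passed through the hidden layer
Y       = V * hidden + c;            % estimated yield per unit area, Eq. (2.8)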

