3.3.1 Adaptation and Iterated Predictions

The basic predictor training configuration for the FIR network is shown in Figure 3.9, with the known value y(k − 1) as the input and the single‐step estimate ŷ(k) of the true series value y(k) as the output. During training, the squared error is minimized by using the temporal backpropagation algorithm to adapt the network, with y(k) acting as the desired response. Training therefore consists of finding a least‐squares solution. In a stochastic framework, the optimal neural network mapping is simply the conditional running mean


Figure 3.9 Network prediction configuration.

$$\hat{y}(k) = N^{*}\!\left[y(k-1), \ldots, y(k-M)\right] = E\!\left[\,y(k) \mid y(k-1), \ldots, y(k-M)\,\right] \qquad (3.46)$$

where y(k) is viewed as a stationary ergodic process, and the expectation is taken over the joint distribution of y(k) through y(k − M). N* represents a closed‐form optimal solution that can only be approximated, owing to finite training data and constraints in the network topology.
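Since the book gives no code for this step, the following is a minimal sketch of single‐step predictor training, with the FIR neural network replaced by a plain linear FIR predictor so that the least‐squares solution of the delay‐line regression can be computed in closed form; the names embed, series, M, and w are illustrative assumptions, not from the text.

```python
import numpy as np

def embed(series, M):
    """Build the regression pair (X, d):
    each row of X is the delay vector [y(k-1), ..., y(k-M)] (newest first),
    and the corresponding entry of d is the desired response y(k)."""
    X = np.array([series[k - M : k][::-1] for k in range(M, len(series))])
    d = series[M:]
    return X, d

# Toy stand-in for a recorded time series (noisy sinusoid).
rng = np.random.default_rng(0)
series = np.sin(0.2 * np.arange(500)) + 0.05 * rng.standard_normal(500)

M = 10                                   # delay-line length (order of the predictor)
X, d = embed(series, M)
w, *_ = np.linalg.lstsq(X, d, rcond=None)  # closed-form least-squares solution
print("training MSE:", np.mean((X @ w - d) ** 2))
```

A nonlinear network trained by temporal backpropagation would replace the lstsq call with gradient descent on the same squared error, but the input/target pairing is identical.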

Iterated predictions: Once the network is trained, iterated prediction is achieved by taking the estimate ŷ(k) and feeding it back as an input to the network:

$$\hat{y}(k) = N\!\left[\hat{y}(k-1), \hat{y}(k-2), \ldots, \hat{y}(k-M)\right] \qquad (3.47)$$

as illustrated in Figure 3.9. Equation (3.47) can now be iterated forward in time to produce predictions as far into the future as desired. Suppose, for example, that we were given only N points of some time series of interest. We would train the network on those N points. The single‐step estimate ŷ(N + 1), based on known values of the series, would then be fed back to produce the estimate ŷ(N + 2), and continued iterations would yield predictions further into the future.
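Continuing the sketch above, here is a hedged illustration of the feedback iteration in (3.47): each one‐step estimate is pushed into the front of the delay line and reused as an input on the next step. The function iterate_predictions and its arguments are hypothetical names introduced for this example.

```python
def iterate_predictions(w, last_M_values, n_steps):
    """last_M_values holds [y(N), y(N-1), ..., y(N-M+1)], newest first,
    matching the delay-vector ordering used during training."""
    buf = list(last_M_values)
    preds = []
    for _ in range(n_steps):
        y_hat = float(np.dot(w, buf))   # one-step estimate from the current delay line
        preds.append(y_hat)
        buf = [y_hat] + buf[:-1]        # feed the estimate back, drop the oldest value
    return preds

# Usage: predict 50 steps beyond the end of the training record.
recent = series[-1 : -M - 1 : -1]       # last M known values, newest first
future = iterate_predictions(w, recent, n_steps=50)
print("first three iterated predictions:", future[:3])
```

Note that prediction errors compound as estimates replace known values in the delay line, which is why iterated predictions degrade with the forecast horizon.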
