Artificial Intelligence and Quantum Computing for Advanced Wireless Networks - Savo G. Glisic
3.2 FIR Architecture

3.2.1 Spatial Temporal Representations
Most often in engineering, the input signals to a neural network have gone through some form of filtering before becoming members of the observation set. This also coincides with the form of the potential maintained at the axon hillock region of the neural cell. With this in mind, we may modify Eq. (3.1) as
(3.15)  $s(t) = \sum_{i=1}^{N} \int_{0}^{\infty} w_i(\tau)\, x_i(t-\tau)\, d\tau$
By adding filtering operations, we have included the equally important temporal dimension in the static model. For our purposes, we will now be interested in adapting the filters. To this end, we assume a discrete FIR representation for each filter. This yields
(3.16)  $s(k) = \sum_{i=1}^{N} \sum_{n=0}^{M} w_i(n)\, x_i(k-n)$
with k being the discrete time index for some sampling rate Δt, and wi(n) being the coefficients for the FIR filters. In the following, we will represent the vector wi = [wi(0), wi(1), … , wi(M)] and the delayed states as xi(k) = [xi(k), xi(k − 1), … , xi(k − M)]. Now, a filter operation is written as the vector dot product wixi(k), with time implicitly included in the notation.
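The tap delay line and the dot-product form of the filter operation can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's code; the function name `fir_synapse` and the coefficient values are chosen only for the example.

```python
import numpy as np

def fir_synapse(w, x_delayed):
    """One FIR synapse: the vector dot product w_i . x_i(k)."""
    return np.dot(w, x_delayed)

# Streaming use: maintain the delayed states x(k) = [x(k), ..., x(k-M)].
w = np.array([0.5, 0.3, 0.2])        # illustrative coefficients, M = 2
taps = np.zeros(3)
outputs = []
for x_k in [1.0, 0.0, 0.0, 0.0]:     # unit impulse at k = 0
    taps = np.roll(taps, 1)
    taps[0] = x_k                    # newest sample enters the front of the line
    outputs.append(fir_synapse(w, taps))
# Driving the line with an impulse recovers the coefficients:
# outputs == [0.5, 0.3, 0.2, 0.0]
```

The loop makes explicit what the notation $\mathbf{w}_i \mathbf{x}_i(k)$ hides: at every step k the delay line shifts by one sample before the dot product is taken.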
The top part of Figure 3.5 shows the standard representation of an FIR filter as a tap delay line. Although this filter models several biological processes, as well as many engineering solutions, for ease of analogy with a real neural network we will refer to an FIR filter as a synaptic filter or simply a synapse. The output of the neuron is, as before, y(k) = f(s(k)) with f(x) = tanh(x); we have only added the time index k.
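Putting the synaptic filters and the static nonlinearity together gives the full neuron output y(k) = tanh(s(k)). The sketch below is a hedged illustration under the section's definitions; the function name `fir_neuron` and the numbers are assumptions made for the example.

```python
import numpy as np

def fir_neuron(W, X_delayed):
    """y(k) = f(s(k)) with f = tanh and s(k) = sum_i w_i . x_i(k).

    W         : (N, M+1) array; row i holds the synaptic filter w_i
    X_delayed : (N, M+1) array; row i holds the delayed states x_i(k)
    """
    s_k = float(np.sum(W * X_delayed))   # N dot products, summed over synapses
    return np.tanh(s_k)

# Two synapses (N = 2) of order M = 1, illustrative values only.
W = np.array([[0.5, 0.0],
              [0.0, 0.5]])
X = np.array([[1.0, 0.0],    # x_1(k), x_1(k-1)
              [0.0, 1.0]])   # x_2(k), x_2(k-1)
y_k = fir_neuron(W, X)       # tanh(0.5*1.0 + 0.5*1.0) = tanh(1.0)
```

Note that the nonlinearity remains static; all of the temporal behavior lives in the synaptic filters.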
We use the same approach to network modeling as in the previous section. Each link in the network is now created using an FIR filter (see Figure 3.5). The neural network no longer performs a simple static mapping from input to output; internal memory has been added to it. At the same time, since there are no feedback loops, the overall network is still FIR [2–5]. The notation now becomes $x_j^{l+1}(k) = f\big(\sum_i \mathbf{w}_{ij}^{l} \cdot \mathbf{x}_i^{l}(k)\big)$.
For all filters in a given layer l, we will assume that the order Ml is the same. The activation value representing the output of neuron i in layer l is $x_i^l(k)$, and the corresponding vector of delayed activations is written as $\mathbf{x}_i^l(k) = [x_i^l(k), x_i^l(k-1), \ldots, x_i^l(k-M_l)]$. Again, at the edges the layer-0 activations are the network inputs and the final-layer activations are the network outputs. Instead of Table 3.1, a complete set of definitions is summarized in Table 3.2. The form of the two tables demonstrates a high level of similarity.
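A single forward time step through a layered FIR network can be sketched as follows. This is a minimal NumPy sketch under the assumptions of this section (equal filter order M_l within a layer, tanh activations, no feedback loops); the class name `FIRLayer`, the layer sizes, and the random weights are illustrative, not the book's implementation.

```python
import numpy as np

class FIRLayer:
    """One layer of FIR synapses followed by a static tanh nonlinearity.

    All synapses in the layer share the same order M, so the filters can be
    stored as one array W of shape (n_out, n_in, M+1), and the delayed
    activations x_i^l(k) as a buffer of shape (n_in, M+1).
    """
    def __init__(self, n_in, n_out, M, rng):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in, M + 1))
        self.buf = np.zeros((n_in, M + 1))   # [x_i(k), ..., x_i(k-M)] per input

    def step(self, x_k):
        # Shift every tap delay line by one sample; insert newest activations.
        self.buf = np.roll(self.buf, 1, axis=1)
        self.buf[:, 0] = x_k
        # s_j(k) = sum_i w_ij . x_i(k), then the static nonlinearity.
        s = np.einsum('jim,im->j', self.W, self.buf)
        return np.tanh(s)

# A 2-4-1 network of FIR synapses, advanced one time step.
rng = np.random.default_rng(0)
net = [FIRLayer(2, 4, M=3, rng=rng), FIRLayer(4, 1, M=3, rng=rng)]
x = np.array([1.0, -1.0])
for layer in net:
    x = layer.step(x)     # feedforward pass for time index k
```

Because each layer only shifts and reads its own delay buffers, the whole network remains feedforward, which is exactly why it stays FIR despite the added memory.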