
2.4.2 Design Strategy of DNA Perceptron


In 2009, Liu et al. [5] exploited the massive parallelism of DNA molecules to design a perceptron model, which greatly reduces the running time of the algorithm. The structure of the perceptron is presented in Figure 2.11. It has two layers: the input layer carries n input signals and the output layer produces m output signals. In addition to the n input signals, there is one further signal, termed the bias signal. The ith input neuron is connected to the jth output neuron by a connection weight denoted $w_{ij}$. Each of the m output neurons receives the n signals from the input layer and combines them with the corresponding n weight coefficients. The weighted sum for each output neuron can be expressed by the following equation:

$$y_j^k = \sum_{i=0}^{n} w_{ij}\, x_i^k \qquad (2.6)$$

where

$k$ ≡ $k$th sample of the training set;

$y_j^k$ ≡ output value of the $j$th output neuron;

$w_{ij}$ ≡ weight value joining the $i$th input and $j$th output neuron;

$x_i^k$ ≡ input value of the $i$th input neuron, with $x_0^k$ denoting the bias signal.


Figure 2.11 Structure of perceptron [5].
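As an aid to reading Equation (2.6), the following is a minimal in-silico sketch of the weighted-sum computation, assuming the bias signal is modeled as a constant extra input $x_0 = 1$. The names (weighted_sum, W) and the use of NumPy are illustrative choices, not from [5].

```python
import numpy as np

# Minimal sketch of Equation (2.6): each of the m output neurons
# combines the n input signals (plus the bias signal x_0 = 1) with
# its own column of weights. Names here are illustrative, not from [5].
def weighted_sum(x, W):
    """x: length-n input vector; W: (n+1) x m weight matrix whose
    row 0 holds the bias weights. Returns the m outputs y_j."""
    x_b = np.concatenate(([1.0], x))  # prepend the bias signal x_0 = 1
    return x_b @ W                    # y_j = sum_i w_ij * x_i

# Example: n = 2 inputs, m = 1 output neuron.
W = np.array([[0.5],    # w_0j (bias weight)
              [1.0],    # w_1j
              [-1.0]])  # w_2j
print(weighted_sum(np.array([1.0, 0.0]), W))  # -> [1.5]
```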

The designed algorithm for the perceptron categorizer model involves two processes: a training process and a category process.

 • Training process: In this process, the ideal input values, $x_i^k$, and the ideal output values, $y_j^k$, are used to train the weight coefficients and obtain the set of weights (a sketch of this search appears after this list). The sample vector is represented by Equation (2.7):

$$s^k = \left(x_1^k, x_2^k, \ldots, x_n^k;\; y_1^k, y_2^k, \ldots, y_m^k\right) \qquad (2.7)$$

The set of weights, $w^k$, consisting of all weight matrices that satisfy Equation (2.6) for the $k$th sample, is represented by the following expression:

$$w^k = \left\{ (w_{ij}) \;\middle|\; y_j^k = \sum_{i=0}^{n} w_{ij}\, x_i^k,\ j = 1, 2, \ldots, m \right\} \qquad (2.8)$$

After determining $w^k$ for every training sample, the final weight set $w$ can be calculated by taking the intersection of all the $w^k$, i.e., $w = \bigcap_{k} w^k$.

 • Category process: If an unknown vector is given as input, the perceptron model categorizes it using the weight set $w$ computed in the training process (see the usage example below).
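In the DNA implementation, every candidate weight assignment is encoded as a strand and all candidates are evaluated simultaneously in the test tube; the sequential Python sketch below imitates that exhaustive search over a small discrete weight space and retains exactly the candidates that lie in every $w^k$, as in Equation (2.8). The threshold activation and the grid of weight values are assumptions made for illustration; they are not specified in the excerpt.

```python
import itertools
import numpy as np

# Hedged sketch of the training process: enumerate a discrete space of
# candidate weight matrices (the DNA model evaluates these in parallel)
# and retain those consistent with every training sample, i.e. the
# intersection of the per-sample sets w^k of Equation (2.8).
# The threshold output and the weight grid (-1, 0, 1) are assumptions.
def train_by_intersection(samples, n, m, weight_values=(-1, 0, 1)):
    """samples: list of (x, y) pairs, x of length n, y of length m.
    Returns all weight matrices that classify every sample correctly."""
    def output(x, W):
        x_b = np.concatenate(([1.0], x))  # bias signal x_0 = 1
        return (x_b @ W > 0).astype(int)  # assumed threshold activation

    w = []
    for flat in itertools.product(weight_values, repeat=(n + 1) * m):
        W = np.array(flat, dtype=float).reshape(n + 1, m)
        # W survives only if it belongs to w^k for every sample k
        if all(np.array_equal(output(x, W), y) for x, y in samples):
            w.append(W)
    return w
```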
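Continuing the sketch above, the category process simply applies any weight matrix from the trained set $w$ to an unknown input vector. The AND-gate training data below is an invented toy example, not from [5].

```python
import numpy as np

# Category process, continuing the training sketch above: classify an
# unknown vector with a weight matrix from the trained set w.
samples = [(np.array([0, 0]), np.array([0])),
           (np.array([0, 1]), np.array([0])),
           (np.array([1, 0]), np.array([0])),
           (np.array([1, 1]), np.array([1]))]
w = train_by_intersection(samples, n=2, m=1)  # defined in the sketch above
W = w[0]                                      # any surviving weight matrix
x_b = np.concatenate(([1.0], np.array([1, 1])))
print((x_b @ W > 0).astype(int))              # -> [1]
```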

