
History


The CNN dates back to the papers by Hubel and Wiesel in the late 1960s (Hubel and Wiesel 1968), in which they reported that the visual cortex of cats and monkeys contains neurons that respond individually to oriented structures. A visual stimulus falling within a restricted region of the visual field, known as the receptive field, affects the response of a single neuron. Adjacent cells have similar and overlapping receptive fields, whose sizes and positions vary so as to tile the visual field into a complete spatial map. This biological finding justifies the use of local receptive fields in CNNs.

In 1980, the neocognitron was proposed, marking the birth of the CNN; it introduced the concept of the receptive field into artificial neural networks (Fukushima 1980).

In 1988, the shift-invariant neural network was proposed to improve the performance of the CNN, enabling successful object recognition in the presence of displacements or slight deformations of objects (Waibel et al 1989). The feed-forward CNN architecture was then extended in the neural abstraction pyramid with lateral and feedback connections. The resulting recurrent convolutional network allows contextual information to be incorporated so that local ambiguities are resolved iteratively. In contrast to the previous models, it generates image-like outputs at high resolution.

Finally, in 2005, a GPU implementation of the CNN was reported, making CNNs much more effective and efficient (Steinkraus et al 2005). As a result, the CNN entered its prime.

