2.4 Final remarks
Tomographic image reconstruction is an inverse problem. Due to non-ideal factors in data acquisition, such as noise and errors introduced in practice, as well as possible data incompleteness, image reconstruction can be a highly ill-posed problem. Thanks to Bayesian inference, prior knowledge can be utilized to solve the inverse problem. Compared with the simplest iterative algorithms, which only address the data fidelity term, regularized/penalized reconstruction techniques take advantage of the posterior probability and naturally incorporate prior knowledge into the reconstruction process. Consequently, a regularized solution is confined to a much narrower search space of the unknown variables. In determining how to extract the prior knowledge, or in other words the intrinsic statistical properties of images, the theory of natural image statistics and the human visual system (HVS) give us great inspiration. Both anatomical and neurophysiological studies indicate a multi-layer HVS architecture that supports sparse representation, eliminating image redundancy as much as possible and using statistically independent features when we perceive natural scenes. Inspired by this discovery, machine learning methods, such as a typical single-layer neural network or dictionary learning, are capable of extracting this prior information and representing it sparsely. The building block of a neural network is the artificial neuron, which takes multiple variables as input, computes an inner product between the input vector and a weight vector, and thresholds the result in some way to produce the neuron's output.
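To make the role of the prior concrete, the regularized reconstruction mentioned above can be summarized by a maximum a posteriori (MAP) estimate. The symbols below are generic placeholders (a system matrix A, measured data y, and a regularizer R weighted by λ) rather than notation fixed in this chapter, and the least-squares form assumes a Gaussian noise model together with a prior of the form p(x) ∝ exp(−λR(x)):

```latex
\hat{\mathbf{x}}
  = \arg\max_{\mathbf{x}} \, p(\mathbf{x} \mid \mathbf{y})
  = \arg\max_{\mathbf{x}} \, p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x})
  = \arg\min_{\mathbf{x}} \;
      \underbrace{\tfrac{1}{2}\,\lVert \mathbf{A}\mathbf{x} - \mathbf{y} \rVert_2^2}_{\text{data fidelity}}
    + \underbrace{\lambda\, R(\mathbf{x})}_{\text{prior (regularizer)}}
```

The data fidelity term alone corresponds to the simplest iterative algorithms; it is the added prior term that narrows the search space of the solution.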
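The artificial neuron just described can be sketched in a few lines of Python. This is a minimal illustration under our own choices (NumPy, a bias term, and a ReLU-style threshold), not the formal treatment given in the next chapter:

```python
import numpy as np

def neuron(x, w, b, activation=lambda z: np.maximum(z, 0.0)):
    """A single artificial neuron.

    Computes the inner product between the input vector x and the
    weight vector w, adds a bias b, and passes the result through a
    thresholding (activation) function -- a ReLU by default.
    """
    z = np.dot(w, x) + b          # inner product plus bias
    return activation(z)          # thresholded output

# Example: a neuron with three inputs
x = np.array([0.2, -1.0, 0.5])    # input variables
w = np.array([1.5,  0.3, -0.8])   # weights
b = 0.1
print(neuron(x, w, b))            # scalar output after thresholding
```

Replacing the activation with a step function or a sigmoid yields other classical thresholding behaviors.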
Furthermore, a multi-layer neural mechanism exists in the HVS, which has inspired the development of multi-layer neural networks, commonly known as deep learning. Along this direction, a family of non-linear machine learning algorithms has been developed for modeling complex data representations. It is the multi-layer artificial neural architecture that allows high-level representations of data to be learned through multi-scale analysis, from low-level primitives to semantic features. It is reassuring that this type of multi-layer neural network resembles the multi-layer mechanism of the HVS. In the next chapter, we will cover the basics of artificial neural networks.
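As a brief preview of that discussion, stacking such neurons layer by layer yields a multi-layer network in which each layer re-represents the output of the previous one. The layer sizes, random weights, and ReLU activation below are arbitrary choices for illustration only:

```python
import numpy as np

def layer(x, W, b):
    """One fully connected layer: each row of W is one neuron's weight vector."""
    return np.maximum(W @ x + b, 0.0)   # inner products plus biases, then thresholding

rng = np.random.default_rng(0)
x = rng.normal(size=8)                            # low-level input (e.g. pixels of a patch)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # first layer: low-level features
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # second layer: more abstract features
h = layer(x, W1, b1)                              # first-layer representation
y = layer(h, W2, b2)                              # second-layer representation
print(y.shape)                                    # (4,)
```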