Machine Learning for Tomographic Imaging, by Professor Ge Wang

2.3.1 Statistical iterative reconstruction (SIR) framework

X-ray photons recorded by the detectors can be regarded as evidence, and a reconstructed image can be seen as a hypothesis. In the Bayesian framework, given our statistical knowledge and the evidence, we should select the hypothesis that is most plausible.

X-ray photons are emitted from an x-ray source. During their propagation, they interact with tissues inside a patient. As a result, some x-ray photons are absorbed or redirected, while the rest penetrate the patient and reach an x-ray detector. Assuming a monochromatic x-ray beam, Beer's law is expressed as

$$I_i = I_0\, e^{-\int_{L_i} x(l)\,dl}, \quad (2.13)$$

where $I_i$ represents the recorded x-ray intensity at the $i$th detector element, $I_0$ is the initial flux, and $x$ is the linear attenuation coefficient along the $i$th x-ray path $L_i$. Taking the negative logarithm converts this equation into a line integral, which is discretized as

$$b_i = \sum_{j=1}^{J} a_{ij} x_j, \quad (2.14)$$

where $b_i = \ln(I_0/I_i) = \int_{L_i} x(l)\,dl$ is the line integral value along the $i$th x-ray path, and $a_{ij}$ denotes the contribution of the $j$th pixel to the $i$th line integral $b_i$. Then, we have a discretized version of Beer's law

$$I_i = I_0 \exp(-b_i) = I_0 \exp(-[Ax]_i), \quad (2.15)$$

where $A = \{a_{ij} \mid i = 1, 2, \ldots, I;\ j = 1, 2, \ldots, J\}$ and $[Ax]_i$ denotes the $i$th element of $Ax$.
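The discretized forward model of equations (2.14) and (2.15) can be sketched numerically. In the snippet below, the random matrix `A`, the toy sizes, and the attenuation values are illustrative assumptions standing in for a real ray-tracing geometry:

```python
import numpy as np

# Toy forward model for eqs. (2.14)-(2.15). The random matrix A stands in
# for real ray-pixel intersection lengths a_ij; all sizes and values here
# are illustrative assumptions, not a physical scanner geometry.
rng = np.random.default_rng(0)
num_rays, num_pixels = 8, 16                    # I rays, J pixels (toy sizes)
A = rng.uniform(0.0, 1.0, (num_rays, num_pixels))
x = rng.uniform(0.0, 0.05, num_pixels)          # attenuation coefficients

I0 = 1.0e5                                      # incident photon flux
b = A @ x                                       # line integrals, eq. (2.14)
intensity = I0 * np.exp(-b)                     # discretized Beer's law, eq. (2.15)
```

Each recorded intensity is the incident flux attenuated exponentially by the line integral along that ray, so all entries of `intensity` lie strictly between 0 and `I0`.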

With a CT scanner, the original signal contains inherent quantum noise. To a good approximation, the measured data follow a Poisson distribution

$$I_i \sim \mathrm{Poisson}(\bar{I}_i), \quad (2.16)$$

where $\bar{I}_i$ denotes the noise-free mean intensity along the $i$th path and $I_i$ a specific noisy measurement of it.
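The Poisson model (2.16) is straightforward to simulate, with means given by the discretized Beer's law; the toy line-integral values below are an illustrative assumption:

```python
import numpy as np

# Simulate Poisson-distributed detector counts, eq. (2.16), with means
# given by the discretized Beer's law. Sizes and values are illustrative.
rng = np.random.default_rng(3)
I0 = 1.0e5
b = rng.uniform(0.5, 3.0, 8)            # toy line integrals [Ax]_i
mean_counts = I0 * np.exp(-b)           # noise-free means, I-bar_i
measured = rng.poisson(mean_counts)     # noisy measurements I_i
```

Because the standard deviation of a Poisson variable is the square root of its mean, the relative noise grows along more attenuating paths, which is exactly the effect the statistical weighting later in this section accounts for.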

Since the measured data along different paths are statistically independent of each other, according to the Poisson model the joint probability distribution of the measured data can be written as follows,

$$P(I \mid x) = \prod_{i=1}^{I} \frac{\bar{I}_i^{\,I_i} \exp(-\bar{I}_i)}{I_i!}, \quad (2.17)$$

and the corresponding log-likelihood function is expressed as follows:

$$L(I \mid x) = \log P(I \mid x) = \sum_{i=1}^{I} \left( I_i \log \bar{I}_i - \bar{I}_i - \log I_i! \right), \quad (2.18)$$

or

$$L(I \mid x) = \sum_{i=1}^{I} \left( I_i \log\left(I_0 e^{-[Ax]_i}\right) - I_0 e^{-[Ax]_i} - \log I_i! \right). \quad (2.19)$$

Ignoring the terms that do not depend on $x$, we rewrite the above formula as

$$L(I \mid x) = -\sum_{i=1}^{I} \left( I_i [Ax]_i + I_0 e^{-[Ax]_i} \right). \quad (2.20)$$
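The step from (2.19) to (2.20) can be checked numerically: the two expressions differ only by terms that are constant in the image. The toy system and simulated counts below are illustrative assumptions:

```python
import numpy as np
from math import lgamma  # log(I_i!) = lgamma(I_i + 1)

rng = np.random.default_rng(1)
I0 = 1.0e5
A = rng.uniform(0.0, 1.0, (8, 16))                  # toy system matrix
counts = rng.integers(100, 5000, 8).astype(float)   # simulated measurements I_i
log_fact = np.array([lgamma(c + 1.0) for c in counts])

def loglik_full(x):
    """Eq. (2.19): sum_i I_i log(I0 e^{-[Ax]_i}) - I0 e^{-[Ax]_i} - log(I_i!)."""
    Ax = A @ x
    return float(np.sum(counts * np.log(I0 * np.exp(-Ax))
                        - I0 * np.exp(-Ax) - log_fact))

def loglik_simplified(x):
    """Eq. (2.20): the terms constant in x have been dropped."""
    Ax = A @ x
    return float(-np.sum(counts * Ax + I0 * np.exp(-Ax)))

# The gap between full and simplified log-likelihood is the same for any x.
x1 = rng.uniform(0.0, 0.05, 16)
x2 = rng.uniform(0.0, 0.05, 16)
diff1 = loglik_full(x1) - loglik_simplified(x1)
diff2 = loglik_full(x2) - loglik_simplified(x2)
```

Since the gap is independent of $x$, both expressions have the same maximizer, which is why the constants can be discarded.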

Statistically, it makes perfect sense to maximize the posterior probability:

$$P(x \mid I) = P(I \mid x) P(x) / P(I). \quad (2.21)$$

Since $P(I)$ does not depend on $x$, and considering the monotonically increasing property of the natural logarithm, the image reconstruction process is equivalent to maximizing the following objective function:

$$\tilde{\Phi}(x) = L(I \mid x) + \log P(x). \quad (2.22)$$

In equation (2.22), $\log P(x)$ denotes the prior information, and it is convenient to write $-\log P(x)$ as $\beta\psi(x)$. Then, the image reconstruction problem can be expressed as the following minimization problem:

$$\Phi(x) = -L(I \mid x) - \log P(x) = \sum_{i=1}^{I} \left( I_i [Ax]_i + I_0 e^{-[Ax]_i} \right) + \beta\psi(x). \quad (2.23)$$

Here, $\psi(x)$ is the regularizer and $\beta$ is the regularization parameter. Because this objective function is not easy to optimize numerically, we replace the first term with its second-order Taylor expansion around the measurement-based estimate of the line integral, $b_i = \ln(I_0/I_i)$, and obtain a simplified objective function

$$\Phi(x) = \sum_{i=1}^{I} \frac{w_i}{2} \left( [Ax]_i - b_i \right)^2 + \beta\psi(x), \quad (2.24)$$

where $w_i = I_i$ is known as the statistical weight. For the derivation, see Elbakri and Fessler (2002).
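The origin of the weight $w_i = I_i$ can be verified numerically: the per-ray term $f_i(t) = I_i t + I_0 e^{-t}$ in equation (2.23) has second derivative $I_0 e^{-t}$, which at the expansion point $t = b_i = \ln(I_0/I_i)$ equals $I_i$. A small finite-difference check, with illustrative photon counts:

```python
import numpy as np

# Curvature of the per-ray negative log-likelihood term from eq. (2.23),
# f(t) = I_i * t + I0 * exp(-t), at the expansion point t = b_i = ln(I0/I_i).
# Analytically, f''(b_i) = I0 * exp(-b_i) = I_i, the statistical weight w_i.
I0 = 1.0e5
Ii = 2.0e3                      # measured counts along one ray (illustrative)
bi = np.log(I0 / Ii)            # measurement-based line-integral estimate

f = lambda t: Ii * t + I0 * np.exp(-t)
h = 1e-4                        # finite-difference step
curvature = (f(bi + h) - 2 * f(bi) + f(bi - h)) / h**2
```

The central second difference recovers the analytic curvature $I_i$, which is the coefficient of the quadratic term in (2.24).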

This objective function has two terms, for data fidelity and image regularization, respectively. In the fidelity term, the statistical weight scales the discrepancy between the measured and estimated line integrals along each x-ray path. Heuristically, fewer photons survive a more attenuating path, so the line integral along that path is estimated with greater uncertainty and is assigned a smaller weight; conversely, a less attenuating path is given a larger weight. Through the regularization term, a statistical model can be enforced to guide the image reconstruction process, so that the reconstructed image looks more like what we expect based on a statistical model of images.
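As a minimal sketch of minimizing the objective (2.24), the snippet below runs plain gradient descent with a simple quadratic regularizer $\psi(x) = \|x\|^2$. This smooth prior, the toy system, the uniform weights, and the fixed step size are illustrative assumptions, not the edge-preserving priors or accelerated solvers used in practice:

```python
import numpy as np

def pwls_gradient_descent(A, b, w, beta=1e-3, n_iter=5000):
    """Minimize sum_i (w_i/2)([Ax]_i - b_i)^2 + beta * ||x||^2, i.e.
    eq. (2.24) with an illustrative quadratic regularizer."""
    x = np.zeros(A.shape[1])
    # Fixed step size from an upper bound on the Hessian's spectral norm.
    step = 1.0 / (np.linalg.norm(A.T @ (w[:, None] * A), 2) + 2.0 * beta)
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - b)) + 2.0 * beta * x   # gradient of (2.24)
        x -= step * grad
    return x

def objective(A, b, w, x, beta=1e-3):
    return 0.5 * np.sum(w * (A @ x - b) ** 2) + beta * np.sum(x ** 2)

# Toy problem: noiseless line integrals with uniform statistical weights.
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, (20, 10))
x_true = rng.uniform(0.0, 0.05, 10)
b = A @ x_true
w = np.full(20, 1.0e3)          # stand-in for the weights w_i = I_i
x_hat = pwls_gradient_descent(A, b, w)
```

In a real SIR implementation, $w_i$ would be set from the measured counts along each ray, so that noisy, heavily attenuated rays contribute less to the fit, and a nonsmooth edge-preserving $\psi$ would typically replace the quadratic one used here.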
