36.2.1 Typical Recursive Estimation Framework
In a typical recursive estimation framework, the system is represented using a process model and one (or more) observation models. The process model represents the internal dynamics of the system and can be expressed as a nonlinear, stochastic difference equation of the form

$$\mathbf{x}_k = \mathbf{f}\left(\mathbf{x}_{k-1}, \mathbf{w}_{k-1}\right) \qquad (36.5)$$
where $\mathbf{x}_k$ is the state vector at time $k \in \mathbb{N}$, and $\mathbf{w}_{k-1}$ is the process noise random vector at time $k-1$. External observations regarding the system state are represented by an observation model. The generalized observation model is a function of both the system state and a random vector representing the observation errors:

$$\mathbf{z}_k = \mathbf{h}\left(\mathbf{x}_k, \mathbf{v}_k\right) \qquad (36.6)$$
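To make the general notation concrete, the sketch below implements a hypothetical scalar process model $\mathbf{f}$ and observation model $\mathbf{h}$ of the forms in Eqs. 36.5 and 36.6. The specific dynamic, measurement function, and noise levels are illustrative assumptions, not models from the text.

```python
import numpy as np

# Hypothetical scalar example of Eqs. 36.5 and 36.6 (illustrative only):
# a weakly nonlinear dynamic observed through a nonlinear measurement.
rng = np.random.default_rng(0)

def process_model(x_prev, w_prev):
    """Nonlinear stochastic difference equation x_k = f(x_{k-1}, w_{k-1})."""
    return 0.9 * x_prev + 0.1 * np.sin(x_prev) + w_prev

def observation_model(x_k, v_k):
    """Generalized observation model z_k = h(x_k, v_k)."""
    return x_k**2 + v_k

# Simulate one epoch of truth and one measurement.
x_prev = 1.0
w_prev = rng.normal(scale=0.05)   # sampled process noise w_{k-1}
v_k = rng.normal(scale=0.10)      # sampled observation error v_k
x_k = process_model(x_prev, w_prev)
z_k = observation_model(x_k, v_k)
```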
In the above equation, $\mathbf{z}_k$ is the observation at time $k$, and $\mathbf{v}_k$ is the random observation error vector at time $k$. The objective of the recursive estimator is to estimate the posterior pdf of the state vector, conditioned on the observations

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_k\right) \qquad (36.7)$$
where $\mathbb{Z}_k$ is the collection of observations up to, and including, time $k$. This is accomplished by performing two types of transformations on the state pdf: propagation and update. The result is a filter cycle given by

$$\cdots \; p\left(\mathbf{x}_{k-1} \mid \mathbb{Z}_{k-1}\right) \xrightarrow{\;\text{propagate}\;} p\left(\mathbf{x}_{k} \mid \mathbb{Z}_{k-1}\right) \xrightarrow{\;\text{update}\;} p\left(\mathbf{x}_{k} \mid \mathbb{Z}_{k}\right) \; \cdots \qquad (36.8)$$
Note the introduction of the a priori pdf given by
$$p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right) \qquad (36.9)$$
Further examination of the propagation and update cycle in Eq. 36.8 provides insights into how our system knowledge and observations are incorporated into our understanding of the state vector. To begin, we consider the propagation step from epoch $k-1$ to $k$. Time propagation begins with the posterior pdf $p\left(\mathbf{x}_{k-1} \mid \mathbb{Z}_{k-1}\right)$. The process model defined in Eq. 36.5 is used to define the transition pdf $p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right)$, which can then be used to calculate the a priori pdf at time $k$ via the Chapman–Kolmogorov equation [2]:

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right) = \int p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbb{Z}_{k-1}\right) p\left(\mathbf{x}_{k-1} \mid \mathbb{Z}_{k-1}\right) \, d\mathbf{x}_{k-1} \qquad (36.10)$$
Examination of the process model (Eq. 36.5) shows that the propagated state vector is a first-order Gauss–Markov random process and is dependent only on the previous state vector and the process noise vector. As a result, we can express the transition probability, which is independent of the prior observations, as

$$p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbb{Z}_{k-1}\right) = p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) \qquad (36.11)$$
Substituting Eq. 36.11 into Eq. 36.10 results in the propagation relationship

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right) = \int p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) p\left(\mathbf{x}_{k-1} \mid \mathbb{Z}_{k-1}\right) \, d\mathbf{x}_{k-1} \qquad (36.12)$$
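For intuition, the propagation integral in Eq. 36.12 can be evaluated numerically with a simple point-mass (grid) approximation. The sketch below assumes the scalar dynamic $x_k = 0.9\,x_{k-1} + w_{k-1}$ with Gaussian process noise; the grid, noise variance, and initial posterior are illustrative choices, not part of the chapter.

```python
import numpy as np
from scipy.stats import norm

# Point-mass (grid) evaluation of the Chapman-Kolmogorov propagation in
# Eq. 36.12 for an assumed scalar dynamic x_k = 0.9*x_{k-1} + w_{k-1},
# w_{k-1} ~ N(0, q). All numerical choices are illustrative.
grid = np.linspace(-5.0, 5.0, 401)       # discretized state space
dx = grid[1] - grid[0]
q = 0.05**2                              # assumed process noise variance

# Posterior at k-1, p(x_{k-1} | Z_{k-1}), initialized here as N(1, 0.2^2).
posterior_prev = norm.pdf(grid, loc=1.0, scale=0.2)
posterior_prev /= posterior_prev.sum() * dx

# Transition pdf p(x_k | x_{k-1}) on the grid (rows: x_k, columns: x_{k-1}).
transition = norm.pdf(grid[:, None], loc=0.9 * grid[None, :], scale=np.sqrt(q))

# Discrete form of the integral in Eq. 36.12: sum over x_{k-1}.
prior_k = transition @ posterior_prev * dx
prior_k /= prior_k.sum() * dx            # renormalize against discretization error
```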
An observation at time $k$ can be incorporated by considering the posterior pdf $p\left(\mathbf{x}_k \mid \mathbb{Z}_k\right)$, which, given the definition of our observation sequence in Eq. 36.3, can be expressed equivalently as

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_k\right) = p\left(\mathbf{x}_k \mid \mathbf{z}_k, \mathbb{Z}_{k-1}\right) \qquad (36.13)$$
Applying Bayes’ rule to Eq. 36.13 yields

$$p\left(\mathbf{x}_k \mid \mathbf{z}_k, \mathbb{Z}_{k-1}\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k, \mathbb{Z}_{k-1}\right) p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right)}{p\left(\mathbf{z}_k \mid \mathbb{Z}_{k-1}\right)} \qquad (36.14)$$
Examining the form of the previously defined observation model (Eq. 36.6) shows that $\mathbf{z}_k$, conditioned on $\mathbf{x}_k$, is independent of $\mathbb{Z}_{k-1}$, and thus Eq. 36.14 can be simplified to

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_k\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right)}{p\left(\mathbf{z}_k \mid \mathbb{Z}_{k-1}\right)} \qquad (36.15)$$
As a final note, we observe that the normalizing term in the denominator, known as the evidence, can be expressed in a more explicit form by expanding it as a marginalization over the state vector, which yields the complete update relationship

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_k\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right)}{\int p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right) \, d\mathbf{x}_k} \qquad (36.16)$$
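Continuing the same illustrative grid approximation, the update of Eq. 36.16 multiplies the a priori density by the measurement likelihood and divides by the evidence integral. The sketch below assumes a direct observation $z_k = x_k + v_k$ with Gaussian noise purely to keep the likelihood simple; it is not the chapter's algorithm.

```python
import numpy as np
from scipy.stats import norm

# Grid-based Bayesian measurement update (Eqs. 36.15-36.16) for an assumed
# direct observation z_k = x_k + v_k, v_k ~ N(0, r). Illustrative only.
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]
r = 0.1**2                                    # assumed measurement noise variance

# Stand-in a priori pdf p(x_k | Z_{k-1}) on the grid.
prior_k = norm.pdf(grid, loc=0.9, scale=0.25)
prior_k /= prior_k.sum() * dx

z_k = 1.05                                    # an assumed realized measurement
likelihood = norm.pdf(z_k, loc=grid, scale=np.sqrt(r))   # p(z_k | x_k)

evidence = np.sum(likelihood * prior_k) * dx  # denominator of Eq. 36.16
posterior_k = likelihood * prior_k / evidence # a posteriori pdf p(x_k | Z_k)
```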
Thus, we have presented the mathematical form of both the propagation (Eq. 36.12) and update (Eq. 36.16) actions on the pdf representing the state random vector.
For a specific class of problems (e.g. linear Gaussian systems), the above equations can be solved in closed form. In this case, the generalized process model (Eq. 36.5) simplifies to

$$\mathbf{x}_k = \boldsymbol{\Phi}_{k-1} \mathbf{x}_{k-1} + \mathbf{w}_{k-1} \qquad (36.17)$$
where $\boldsymbol{\Phi}_{k-1}$ is the state transition matrix from time $k-1$ to $k$, and $\mathbf{w}_{k-1}$ is a zero-mean, white Gaussian sequence with covariance $\mathbf{Q}_k$. Similarly, the generalized observation model (Eq. 36.6) simplifies to

$$\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k \qquad (36.18)$$
where $\mathbf{H}_k$ is the observation influence matrix at time $k$, and $\mathbf{v}_k$ is a zero-mean, white Gaussian sequence with covariance $\mathbf{R}_k$.
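As a concrete (hypothetical) instance of the linear Gaussian models in Eqs. 36.17 and 36.18, the sketch below builds the matrices for a one-dimensional constant-velocity target with position-only measurements; the sampling interval and noise levels are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical constant-velocity model in the form of Eqs. 36.17 and 36.18.
# State: [position, velocity]; measurement: position only. Values are assumed.
dt = 1.0                                       # sampling interval (s)

Phi = np.array([[1.0, dt],                     # state transition matrix
                [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                     # observation influence matrix

sigma_a = 0.1                                  # assumed acceleration noise level
Q = sigma_a**2 * np.array([[dt**4 / 4, dt**3 / 2],
                           [dt**3 / 2, dt**2]])  # process noise covariance
R = np.array([[0.5**2]])                       # measurement noise covariance
```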
Thus, both the a priori and posterior pdfs can be represented as the following Gaussian densities, respectively:
$$p\left(\mathbf{x}_k \mid \mathbb{Z}_{k-1}\right) = \mathcal{N}\left(\mathbf{x}_k;\, \hat{\mathbf{x}}_k^{-}, \mathbf{P}_k^{-}\right) \qquad (36.19)$$

$$p\left(\mathbf{x}_k \mid \mathbb{Z}_{k}\right) = \mathcal{N}\left(\mathbf{x}_k;\, \hat{\mathbf{x}}_k^{+}, \mathbf{P}_k^{+}\right) \qquad (36.20)$$
where $\mathcal{N}\left(\,\cdot\,;\, \boldsymbol{\mu}, \boldsymbol{\Lambda}\right)$ represents a Gaussian density with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Lambda}$. In addition, the minus and plus superscripts are used to express an a priori or a posteriori quantity, respectively. Substituting the linear process model (Eq. 36.17) into our propagation relationship (Eq. 36.12) results in the linear Kalman filter propagation equations
$$\hat{\mathbf{x}}_k^{-} = \boldsymbol{\Phi}_{k-1} \hat{\mathbf{x}}_{k-1}^{+} \qquad (36.21)$$

$$\mathbf{P}_k^{-} = \boldsymbol{\Phi}_{k-1} \mathbf{P}_{k-1}^{+} \boldsymbol{\Phi}_{k-1}^{T} + \mathbf{Q}_k \qquad (36.22)$$
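A direct transcription of Eqs. 36.21 and 36.22 into code is given below; the function name and interface are mine, not the text's.

```python
import numpy as np

def kf_propagate(x_post, P_post, Phi, Q):
    """Linear Kalman filter propagation (Eqs. 36.21 and 36.22).

    Takes the a posteriori estimate and covariance at k-1 and returns the
    a priori estimate and covariance at k.
    """
    x_prior = Phi @ x_post                     # Eq. 36.21
    P_prior = Phi @ P_post @ Phi.T + Q         # Eq. 36.22
    return x_prior, P_prior
```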
Furthermore, substituting the linear observation model (Eq. 36.18) into our update relationship (Eq. 36.16) results in the linear Kalman filter update equations:
$$\hat{\mathbf{x}}_k^{+} = \hat{\mathbf{x}}_k^{-} + \mathbf{K}_k \left(\mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^{-}\right) \qquad (36.23)$$

$$\mathbf{P}_k^{+} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_k^{-} \qquad (36.24)$$
where $\mathbf{z}_k$ is the realized measurement observation, and $\mathbf{K}_k$ is the Kalman gain at time $k$:

$$\mathbf{K}_k = \mathbf{P}_k^{-} \mathbf{H}_k^{T} \mathbf{S}_k^{-1} \qquad (36.25)$$

and $\mathbf{S}_k$ is the residual covariance matrix, given by

$$\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_k^{-} \mathbf{H}_k^{T} + \mathbf{R}_k \qquad (36.26)$$
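Similarly, the update equations (Eqs. 36.23-36.26) transcribe directly into code; again the function name and interface are my own.

```python
import numpy as np

def kf_update(x_prior, P_prior, z, H, R):
    """Linear Kalman filter update (Eqs. 36.23-36.26)."""
    S = H @ P_prior @ H.T + R                  # residual covariance, Eq. 36.26
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain, Eq. 36.25
    x_post = x_prior + K @ (z - H @ x_prior)   # state update, Eq. 36.23
    I = np.eye(P_prior.shape[0])
    P_post = (I - K @ H) @ P_prior             # covariance update, Eq. 36.24
    return x_post, P_post
```

Alternating kf_propagate and kf_update with matrices such as those sketched after Eq. 36.18 realizes the propagate/update cycle of Eq. 36.8 for the linear Gaussian case.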
In many cases, systems can be accurately represented by linear Gaussian models. Unfortunately, there are a number of systems for which these models are not adequate. This motivates the development of algorithms that attempt to solve the propagation and update equations for broader classes of problems.
In the next section, we will present the fundamental concepts which will be used to derive various recursive nonlinear estimators.