Seismic Reservoir Modeling - Dario Grana - Page 28

1.6 Inverse Theory


In many subsurface modeling problems, we cannot directly measure the properties of interest but can only collect data indirectly related to them. For example, in reservoir modeling, we cannot directly measure porosity far away from the well, but we can acquire seismic data that depend on porosity and other rock and fluid properties. Geophysics generally provides the physical models that link the unknown property, such as porosity, to the measured data, such as seismic velocities. Therefore, the estimation of the unknown model from the measured data is, from the mathematical point of view, an inverse problem.

If m represents the unknown physical variables (i.e. the model), d represents the measurements (i.e. the data), and f is the set of physical equations (i.e. the forward operator) that links the model to the data, then the problem can be formulated as:

d = f(m) + ε (1.46)

where ε is the measurement error associated with the data. The data d can be a function of time and/or space, or a set of discrete observations. When m and d are vectors of size nm and nd, respectively, then f is a function from ℝ^nm to ℝ^nd. When m and d are functions, then f is an operator. The operator f can be a linear or non‐linear system of algebraic equations, ordinary or partial differential equations, or it might involve an algorithm for which there is no explicit analytical formulation. The forward problem is to compute d given m. Our focus is on the inverse problem of finding m given d and assessing the uncertainty of the predictions. In other words, we aim to predict the posterior distribution of m∣d, i.e. of the model given the data.
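As a concrete illustration of Eq. (1.46), the sketch below simulates the forward problem for a hypothetical non‐linear forward operator (the exponential decay model and all parameter values are illustrative choices, not from the book): given a model m, it computes the predicted data and adds random measurement error ε.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward operator f: R^nm -> R^nd (illustrative choice,
# not from the book): data predicted as m[0] * exp(-m[1] * t).
def forward(m, t):
    return m[0] * np.exp(-m[1] * t)

t = np.linspace(0.0, 1.0, 20)              # nd = 20 observation points
m_true = np.array([2.0, 3.0])              # nm = 2 model parameters
eps = 0.01 * rng.standard_normal(t.size)   # measurement error
d = forward(m_true, t) + eps               # Eq. (1.46): d = f(m) + eps

print(d.shape)   # (20,)
```

Computing d from m is the (easy) forward problem; the inverse problem discussed next is recovering m from the noisy d.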

In the case of a linear inverse problem with a finite number of measurements nd, we can write Eq. (1.46) as a linear system of algebraic equations:

d = Fm + ε (1.47)

where F is the matrix of size nd × nm associated with the linear operator f. A common approach to find the solution of the inverse problem associated with Eq. (1.47) is to estimate the model m that gives the minimum misfit between the data d and the theoretical predictions of the forward problem, by minimizing the L2‐norm (also called the Euclidean norm) ‖r‖₂ of the residuals r = d − Fm:

m* = argmin_m ‖d − Fm‖₂² (1.48)

The model m* that minimizes the L2‐norm is called the least‐squares solution because it minimizes the sum of the squares of the differences between measured and predicted data, and it is given by the following equation, generally called the normal equation (Aster et al. 2018):

m* = (FᵀF)⁻¹Fᵀd (1.49)
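A minimal NumPy sketch of the normal equation (1.49) on synthetic data (the operator F, model size, and noise level are arbitrary illustrative choices). In practice, solving the linear system FᵀFm = Fᵀd is preferable to explicitly inverting FᵀF, and `np.linalg.lstsq` solves the same least‐squares problem with a numerically more stable algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

nd, nm = 50, 3
F = rng.standard_normal((nd, nm))                  # linear forward operator
m_true = np.array([1.0, -2.0, 0.5])
d = F @ m_true + 0.01 * rng.standard_normal(nd)    # Eq. (1.47): d = Fm + eps

# Least-squares solution via the normal equation (1.49),
# solving F^T F m = F^T d instead of forming the inverse explicitly:
m_star = np.linalg.solve(F.T @ F, F.T @ d)

# Same problem solved with NumPy's stable least-squares routine:
m_lstsq, *_ = np.linalg.lstsq(F, d, rcond=None)

print(np.allclose(m_star, m_lstsq))   # True
print(np.round(m_star, 2))
```

With low noise, m* recovers the true model closely; as the noise grows or FᵀF becomes ill‐conditioned, the estimate degrades, which motivates the statistical view that follows.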

If we consider the data points to be imperfect measurements with random errors, the inverse problem associated with Eq. (1.47) can be seen, from a statistical point of view, as a maximum likelihood estimation problem. Given a model m, we assign to each observation di a PDF fi(di∣m) for i = 1, … , nd and we assume that the observations are independent. The joint probability density of the vector of independent observations d is then:

f(d∣m) = ∏ᵢ₌₁…ₙd fᵢ(dᵢ∣m) (1.50)

The expression in Eq. (1.50) is generally called the likelihood function. In maximum likelihood estimation, we select the model m that maximizes the likelihood function. If we assume a discrete linear inverse problem with independent and Gaussian distributed data errors (εᵢ ∼ 𝒩(0, σᵢ²) for i = 1, … , nd), then the maximum likelihood solution is equivalent to the least‐squares solution. Indeed, under these assumptions, Eq. (1.50) can be written as:

f(d∣m) = ∏ᵢ₌₁…ₙd (1 / (√(2π) σᵢ)) exp(−(dᵢ − (Fm)ᵢ)² / (2σᵢ²)) (1.51)

and the maximization of Eq. (1.51) is equivalent to the minimization of Eq. (1.48) (Tarantola 2005; Aster et al. 2018).
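The equivalence can be checked numerically: with equal data variances, the negative log of the Gaussian likelihood differs from the squared‐residual misfit only by a positive scale factor and an additive constant, so both are minimized by the same model. A small sketch (the operator, noise level, and candidate models are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
nd, nm = 30, 2
F = rng.standard_normal((nd, nm))
d = F @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(nd)
sigma = 0.1   # common standard deviation of the data errors

def neg_log_likelihood(m):
    # Negative log of Eq. (1.51) with sigma_i = sigma for all i
    r = d - F @ m
    return 0.5 * np.sum((r / sigma) ** 2) + nd * np.log(np.sqrt(2 * np.pi) * sigma)

def l2_misfit(m):
    # Squared L2-norm of the residuals, as in Eq. (1.48)
    return np.sum((d - F @ m) ** 2)

m_ls = np.linalg.solve(F.T @ F, F.T @ d)   # least-squares solution
m_other = m_ls + 0.1                       # any perturbed model

# Both criteria rank the two models identically, so they share a minimizer.
assert neg_log_likelihood(m_ls) < neg_log_likelihood(m_other)
assert l2_misfit(m_ls) < l2_misfit(m_other)
print("same minimizer")
```

With unequal variances σᵢ, the same argument yields the weighted least‐squares solution instead.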

The L2‐norm is not the only misfit measure that can be used in inverse problems. For example, to reduce the influence of data points inconsistent with the chosen mathematical model (namely, outliers), the L1‐norm is generally preferable to the L2‐norm, because it penalizes large residuals less severely. However, from a mathematical point of view, the L2‐norm is often preferable because of the analytical tractability of the associated Gaussian distribution.
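The robustness of the L1‐norm can be seen in the simplest possible inverse problem, fitting a constant model to the data: the L2 minimizer is the sample mean, while the L1 minimizer is the median. A single outlier drags the mean substantially but barely moves the median (the data values below are illustrative):

```python
import numpy as np

# Data clustered near 1.0, plus one outlier inconsistent with the model.
d = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 10.0])

m_l2 = d.mean()      # minimizes sum_i (d_i - m)^2  (L2-norm)
m_l1 = np.median(d)  # minimizes sum_i |d_i - m|    (L1-norm)

print(round(m_l2, 3), round(m_l1, 3))   # 2.5 1.025
```

The L2 estimate is pulled toward the outlier, while the L1 estimate stays near the bulk of the data.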

In science and engineering applications, many inverse problems are not linear; therefore, an analytical solution of the inverse problem might not be available. For non‐linear inverse problems, several mathematical algorithms are available, including gradient‐based deterministic methods, such as Gauss–Newton, Levenberg–Marquardt, and conjugate gradient; Markov chain Monte Carlo methods, such as Metropolis, Metropolis–Hastings, and Gibbs sampling; and stochastic optimization algorithms, such as simulated annealing, particle swarm optimization, and genetic algorithms. For detailed descriptions of these methods we refer the reader to Tarantola (2005), Sen and Stoffa (2013), and Aster et al. (2018).
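To make the gradient‐based family concrete, here is a minimal Gauss–Newton sketch for a small non‐linear problem (the exponential forward model, starting guess, and noise level are illustrative assumptions, not from the book). At each iteration the forward model is linearized via its Jacobian, and a linear least‐squares problem is solved for the model update:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative non-linear forward model: d_i = m0 * exp(-m1 * t_i)
def forward(m, t):
    return m[0] * np.exp(-m[1] * t)

def jacobian(m, t):
    # Partial derivatives of the forward model w.r.t. m0 and m1
    e = np.exp(-m[1] * t)
    return np.column_stack([e, -m[0] * t * e])

t = np.linspace(0.0, 2.0, 40)
m_true = np.array([2.0, 1.5])
d = forward(m_true, t) + 0.005 * rng.standard_normal(t.size)

# Gauss-Newton: linearize f around the current model, then solve the
# normal equation of the linearized problem for the update dm.
m = np.array([1.0, 1.0])   # initial guess
for _ in range(20):
    r = d - forward(m, t)
    J = jacobian(m, t)
    dm = np.linalg.solve(J.T @ J, J.T @ r)
    m = m + dm

print(np.round(m, 2))
```

On well‐behaved problems like this one the iteration converges close to the true model; in general, Gauss–Newton can diverge for poor starting guesses, which is what damped variants such as Levenberg–Marquardt address.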
