
1.1 Introduction


In mathematical physics many problems are characterized by a second-order partial differential equation for a function u(x, y) of the form

$$A\,u_{xx} + 2B\,u_{xy} + C\,u_{yy} + D\,u_{x} + E\,u_{y} + F\,u = f(x, y) \qquad (1.1)$$

and u(x, y) is the function to be solved for a given excitation f(x, y), where


$$u_{x} = \frac{\partial u}{\partial x},\quad u_{y} = \frac{\partial u}{\partial y},\quad u_{xx} = \frac{\partial^{2} u}{\partial x^{2}},\quad u_{xy} = \frac{\partial^{2} u}{\partial x\,\partial y},\quad u_{yy} = \frac{\partial^{2} u}{\partial y^{2}} \qquad (1.2)$$

When B² − AC < 0 and assuming u_xy = u_yx, (1.1) is called an elliptic partial differential equation. This class of problems arises in the solution of boundary value problems. In this case, the solution u(x, y) is known only over a boundary {or, equivalently, a contour B(x, y)} and the goal is to continue the given solution u(x, y) from the boundary to the entire region of the real plane ℜ(x, y).
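For instance, Laplace's equation for a static potential,

$$u_{xx} + u_{yy} = 0, \qquad A = C = 1,\; B = 0,\; B^{2} - AC = -1 < 0,$$

is elliptic, and its solution inside a region is completely determined by the values prescribed on the boundary of that region.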

When B² − AC = 0 we obtain a parabolic partial differential equation from (1.1), which arises in the solution of the diffusion equation or of acoustic propagation in the ocean. Such applications are characterized as initial value problems. The solution is given for the initial condition u(x, y = 0) and the goal is to find the solution u(x, y) for all values of x and y.
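For instance, the one-dimensional diffusion (heat) equation, with y playing the role of time,

$$\kappa\, u_{xx} - u_{y} = 0, \qquad A = \kappa,\; B = C = 0,\; B^{2} - AC = 0,$$

is parabolic: the initial profile u(x, y = 0) marches forward to determine u(x, y) for all y > 0.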

Finally, when B² − AC > 0, we obtain a hyperbolic partial differential equation. This type of equation arises from the solution of the wave equation. The characteristic of the wave equation is that if a disturbance is made in the initial data, then not every point of space feels the disturbance at once; the disturbance propagates at a finite speed. This feature makes it distinct from the elliptic and parabolic partial differential equations, for which a disturbance of the initial data is felt at once by all points in the domain. Even though these equations have significantly different mathematical properties, the solution methodology, just as for every numerical method used in the solution of an operator equation, is essentially the same and exploits the principle of analytic continuation.
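As an example of the hyperbolic case, the one-dimensional wave equation, again with y playing the role of time,

$$c^{2} u_{xx} - u_{yy} = 0, \qquad A = c^{2},\; B = 0,\; C = -1,\; B^{2} - AC = c^{2} > 0,$$

and an initial disturbance spreads no faster than the finite speed c.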

The solution u of these equations is obtained in a straightforward fashion by assuming it to be of the form

$$u(x, y) \approx \sum_{i=1}^{N} \alpha_{i}\, \phi_{i}(x, y) \qquad (1.3)$$

where ϕi(x, y) are some known basis functions, and the final solution is composed of these functions multiplied by constants αi, which are the unknowns to be determined from the specific boundary conditions given for the problem. The solution procedure thus translates the solution of a functional equation into the solution of a matrix equation, and solving for the unknown constants is much easier to address. The methodology starts by substituting (1.3) into (1.1) and then solving for the unknown coefficients αi from the boundary conditions of the problem if the equation is in differential form, or by integrating if it is an integral equation. Once the unknown coefficients αi are determined, the general solution of the problem can be obtained using (1.3).
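As a minimal sketch of this procedure (the one-dimensional model problem u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0, the polynomial basis ϕi(x) = x^i(1 − x), and the collocation points below are illustrative choices, not taken from the text), the expansion (1.3) reduces the differential equation to a small matrix equation for the αi:

```python
import numpy as np

# Illustrative model problem (not from the text): u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0.  For the chosen f the exact solution is u = sin(pi x).
def f(x):
    return -np.pi**2 * np.sin(np.pi * x)

N = 8                                  # number of basis functions
xc = np.linspace(0.1, 0.9, N)          # interior collocation points

def phi(i, x):                         # phi_i(x) = x^(i+1) (1 - x): satisfies the BCs
    return x**(i + 1) * (1.0 - x)

def d2phi(i, x):                       # analytic second derivative of phi_i
    k = i + 1
    return k * (k - 1) * x**(k - 2) * (1.0 - x) - 2.0 * k * x**(k - 1)

# Substituting u = sum_i alpha_i phi_i into u'' = f and enforcing the equation
# at the collocation points yields the matrix equation  M @ alpha = f(xc).
M = np.array([[d2phi(i, xj) for i in range(N)] for xj in xc])
alpha = np.linalg.solve(M, f(xc))

# Recover the approximate solution from the expansion (1.3) and check it.
xt = np.linspace(0.0, 1.0, 11)
u_approx = sum(alpha[i] * phi(i, xt) for i in range(N))
print(np.max(np.abs(u_approx - np.sin(np.pi * xt))))  # maximum error, should be small
```

The same pattern carries over to a two-dimensional equation such as (1.1): only the operator applied to the basis functions and the points (or weighting functions) at which the equation is enforced change.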

A question that now arises is: what is the optimum way to choose the known basis functions ϕi, since the quality of the final solution depends on the choice of ϕi? It is well known in the numerical community that the best choices for the basis functions are the eigenfunctions of the operator that characterizes the system. Since in most examples one is dealing with a real-life system, the operators are, in general, linear time-invariant (LTI) and have a bounded-input bounded-output (BIBO) response, resulting in a second-order differential equation, which is the case for Maxwell's equations. In the general case, the eigenfunctions of these operators are the complex exponentials, and in the transformed domain they form a ratio of two polynomials. Therefore, our goal is to fit the given data for an LTI system either by a sum of complex exponentials or, in the transformed domain, to approximate it by a ratio of polynomials. Next, it is illustrated how the eigenfunctions are used through a bias-variance tradeoff in reduced-rank modelling [1, 2].
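As a hedged illustration of the first of these two fitting problems, the short sketch below estimates the exponentials from uniformly sampled data using a classical Prony-type (linear-prediction) procedure; the synthetic signal, the model order M, and the algorithm itself are illustrative choices and not the specific estimators developed in this book:

```python
import numpy as np

# Illustrative sketch (not the book's estimator): fit uniformly sampled data by
# a sum of M complex exponentials  x[n] ~ sum_k c_k * z_k**n  via a classical
# Prony / linear-prediction procedure.

# Synthetic test signal built from two damped complex exponentials.
z_true = np.array([0.9 * np.exp(1j * 0.5), 0.7 * np.exp(1j * 1.2)])
c_true = np.array([1.0 + 0.5j, 0.8 - 0.3j])
n = np.arange(40)
x = (c_true[:, None] * z_true[:, None] ** n).sum(axis=0)

M = 2                                   # assumed number of exponentials
# Step 1: linear-prediction coefficients a_m with  x[n] = sum_m a_m x[n - m].
A = np.column_stack([x[M - m: len(x) - m] for m in range(1, M + 1)])
a, *_ = np.linalg.lstsq(A, x[M:], rcond=None)

# Step 2: the poles z_k are the roots of  z^M - a_1 z^(M-1) - ... - a_M.
z_est = np.roots(np.concatenate(([1.0], -a)))

# Step 3: the residues c_k follow from a linear least-squares fit of the amplitudes.
V = z_est[None, :] ** n[:, None]        # V[n, k] = z_k**n
c_est, *_ = np.linalg.lstsq(V, x, rcond=None)

print(np.sort_complex(z_est))           # should recover z_true (up to ordering)
```

Estimating the poles z_k and residues c_k in this way is equivalent, in the transformed domain, to fitting the sampled response by a ratio of polynomials.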

