From Euclidean to Hilbert Spaces - Edoardo Provenzi
1.6. Orthogonal projection in inner product spaces
The definition of orthogonal projection can be extended by examining the geometric and algebraic properties of this operation in ℝ² and ℝ³. Let us begin with ℝ².
In the Euclidean space ℝ², the inner product of a vector v with a unit vector evidently gives us the signed length of the orthogonal projection of v in the direction defined by this unit vector; taking e₁ to be the unit vector of the x axis, the projection itself is Pₓv = 〈v, e₁〉e₁, as shown in Figure 1.2 with an orthogonal projection along the x axis.
The properties verified by this projection are as follows:
1) projecting onto the x axis a second time, the vector Pₓv obviously remains unchanged, given that it is already on the x axis, i.e. Pₓ(Pₓv) = Pₓv. Put differently, the operator Pₓ restricted to the x axis is the identity on this axis;
2) the difference vector between v and its projection, v − Pₓv, is orthogonal to the x axis, as we see from Figure 1.3;
Figure 1.2. Orthogonal projection Pₓv and diagonal projections of a vector v ∈ ℝ² onto the x axis. For a color version of this figure, see www.iste.co.uk/provenzi/spaces.zip
Figure 1.3. Visualization of property 2 in ℝ2. For a color version of this figure, see www.iste.co.uk/provenzi/spaces.zip
3) Pₓv minimizes the distance between the terminal point of v and the x axis. In Figure 1.2, AB and AD are, in fact, the hypotenuses of the right-angled triangles ABC and ACD; AC, on the other hand, is a leg of both triangles, and is therefore smaller than AB and AD. AC is the distance between the terminal point of v and the terminal point of Pₓv, while AB and AD are the distances between the terminal point of v and the diagonal projections of v onto the x axis rooted at B and D, respectively.
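These three properties can be checked numerically. Below is a minimal sketch in Python with NumPy; the test vector v and the axis direction e1 are arbitrary choices for illustration, not taken from the text:

```python
import numpy as np

e1 = np.array([1.0, 0.0])   # unit vector along the x axis
v = np.array([3.0, 4.0])    # arbitrary test vector

def P_x(w):
    """Orthogonal projection onto the x axis: <w, e1> e1."""
    return np.dot(w, e1) * e1

p = P_x(v)

# Property 1: idempotence -- projecting a second time changes nothing
assert np.allclose(P_x(p), p)

# Property 2: the residual v - P_x(v) is orthogonal to the x axis
assert np.isclose(np.dot(v - p, e1), 0.0)

# Property 3: P_x(v) minimizes the distance from v to points t*e1 on the axis
for t in np.linspace(-10.0, 10.0, 101):
    assert np.linalg.norm(v - p) <= np.linalg.norm(v - t * e1) + 1e-12
```

The loop in property 3 only samples the axis, but the inequality holds for every t since ‖v − t·e₁‖² = (3 − t)² + 4² ≥ 4² = ‖v − Pₓv‖² for this choice of v.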
We wish to define an orthogonal projection operation for an abstract inner product space of dimension n which retains these same geometric properties.
Analyzing orthogonal projections in ℝ³ helps us to establish an idea of the algebraic definition of this operation. Figure 1.4 shows a vector v ∈ ℝ³ and the plane generated by the orthogonal unit vectors u₁ and u₂. We see that the projection p of v onto this plane is the vector sum of the orthogonal projections p₁ = 〈v, u₁〉u₁ and p₂ = 〈v, u₂〉u₂ of v onto the two vectors u₁ and u₂ taken separately, i.e. p = p₁ + p₂.
Figure 1.4. Orthogonal projection p of a vector in ℝ³ onto the plane produced by two unit vectors. For a color version of this figure, see www.iste.co.uk/provenzi/spaces.zip
Generalization should now be straightforward: consider an inner product space (V, 〈, 〉) of dimension n and an orthogonal family of non-zero vectors F = {u₁, . . . , uₘ}, m ≤ n, uᵢ ≠ 0V ∀i = 1, . . . , m.
The vector subspace of V produced by all linear combinations of the vectors of F shall be written S = Span(F):

Span(F) = {α₁u₁ + · · · + αₘuₘ : α₁, . . . , αₘ scalars}.
The orthogonal projection operator, or orthogonal projector, of a vector v ∈ V onto S is defined as the following map, which is obviously linear:

PS : V → S,  PS(v) = Σᵢ₌₁ᵐ (〈v, uᵢ〉/‖uᵢ‖²) uᵢ.
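As a concrete illustration, this projector can be sketched in Python with NumPy. The orthogonal family and the test vector below are hypothetical examples; the formula implemented is PS(v) = Σᵢ (〈v, uᵢ〉/‖uᵢ‖²) uᵢ, which does not require the uᵢ to be unit vectors:

```python
import numpy as np

def orthogonal_projector(F):
    """Return the projector P_S onto Span(F), for an orthogonal
    family F = [u_1, ..., u_m] of non-zero vectors:
    P_S(v) = sum_i <v, u_i> / ||u_i||^2 * u_i.
    """
    F = [np.asarray(u, dtype=float) for u in F]
    def P_S(v):
        v = np.asarray(v, dtype=float)
        return sum(np.dot(v, u) / np.dot(u, u) * u for u in F)
    return P_S

# Hypothetical orthogonal (not unit) family spanning the plane z = 0 in R^3
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
P = orthogonal_projector([u1, u2])

v = np.array([2.0, 3.0, 5.0])
p = P(v)                         # projection of v onto the plane z = 0
assert np.allclose(p, [2.0, 3.0, 0.0])
```

For this particular plane the projection simply drops the z component, which makes the result easy to check by hand.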
Theorem 1.12 shows that the orthogonal projection defined above retains all of the properties of the orthogonal projection demonstrated for ℝ².
THEOREM 1.12.– Using the same notation as before, we have:
1) if s ∈ S then PS(s) = s, i.e. the action of PS on the vectors in S is the identity;
2) ∀v ∈ V and s ∈ S, the residual vector of the projection, i.e. v − PS(v), is ⊥ to S:

〈v − PS(v), s〉 = 0  ∀s ∈ S;
3) ∀v ∈ V and s ∈ S: ‖v − PS(v)‖ ≤ ‖v − s‖, and the equality holds if and only if s = PS(v). We write:

‖v − PS(v)‖ = min {‖v − s‖ : s ∈ S}.
PROOF.–
1) Let s ∈ S, i.e. s = Σⱼ₌₁ᵐ αⱼuⱼ for some scalars αⱼ. By the orthogonality of F, 〈s, uᵢ〉 = αᵢ‖uᵢ‖², so:

PS(s) = Σᵢ₌₁ᵐ (〈s, uᵢ〉/‖uᵢ‖²) uᵢ = Σᵢ₌₁ᵐ αᵢuᵢ = s.
2) Consider the inner product of PS(v) and a fixed vector uⱼ, j ∈ {1, . . . , m}: by the orthogonality of F,

〈PS(v), uⱼ〉 = Σᵢ₌₁ᵐ (〈v, uᵢ〉/‖uᵢ‖²) 〈uᵢ, uⱼ〉 = (〈v, uⱼ〉/‖uⱼ‖²) ‖uⱼ‖² = 〈v, uⱼ〉

hence:

〈v − PS(v), uⱼ〉 = 〈v, uⱼ〉 − 〈PS(v), uⱼ〉 = 0  ∀j ∈ {1, . . . , m}.

Lemma 1.1 guarantees that v − PS(v) ⊥ Span(F) = S.
3) It is helpful to rewrite the difference v − s as v − PS(v) + PS(v) − s. From property 2, v − PS(v) ⊥ S; moreover, PS(v), s ∈ S, so PS(v) − s ∈ S. Hence (v − PS(v)) ⊥ (PS(v) − s). The generalized Pythagorean theorem implies that:

‖v − s‖² = ‖v − PS(v)‖² + ‖PS(v) − s‖² ≥ ‖v − PS(v)‖²

hence ‖v − s‖ ≥ ‖v − PS(v)‖ ∀v ∈ V, s ∈ S.
Evidently, ‖PS(v) − s‖2 = 0 if and only if s = PS(v), and in this case ‖v − s‖2 = ‖v − PS(v)‖2.□
The theorem demonstrated above tells us that the vector in the vector subspace S ⊆ V which is the most “similar” to v ∈ V (in the sense of the norm induced by the inner product) is given by the orthogonal projection. The generalization of this result to infinite-dimensional Hilbert spaces will be discussed in Chapter 5.
As already seen for the projection operator in ℝ² and ℝ³, the non-negative scalar quantity |〈v, uᵢ〉|/‖uᵢ‖ gives a measure of the importance of uᵢ in the reconstruction of the best approximation of v in S via the formula PS(v) = Σᵢ₌₁ᵐ (〈v, uᵢ〉/‖uᵢ‖²) uᵢ: if this quantity is large, then uᵢ is very important to reconstruct PS(v); otherwise, in some circumstances, it may be ignored. In applications to signal compression, a usual strategy consists of reordering the summation that defines PS(v) in descending order of the quantities |〈v, uᵢ〉|/‖uᵢ‖ and trying to eliminate as many small terms as possible without degrading the signal quality.
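This reordering strategy can be sketched numerically. The code below is a toy illustration with a randomly generated orthonormal family (obtained via a QR factorization) and a random signal, all hypothetical choices; for an orthonormal family the coefficients reduce to cᵢ = 〈v, uᵢ〉, and the squared error of a k-term reconstruction is exactly the sum of the squared discarded coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3

# Columns of Q form a random orthonormal family u_1, ..., u_n of R^n
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
v = rng.standard_normal(n)          # hypothetical "signal"

coeffs = Q.T @ v                    # c_i = <v, u_i> (unit vectors)
order = np.argsort(-np.abs(coeffs)) # indices in descending order of |c_i|
kept = order[:k]                    # keep only the k largest terms

v_k = Q[:, kept] @ coeffs[kept]     # compressed k-term reconstruction
full = Q @ coeffs                   # keeping all terms recovers v exactly
assert np.allclose(full, v)

# The squared error equals the energy of the discarded coefficients
err = np.linalg.norm(v - v_k) ** 2
assert np.isclose(err, np.sum(coeffs[order[k:]] ** 2))
```

Sorting before truncating guarantees that, for a fixed number k of retained terms, the discarded energy (and hence the reconstruction error) is as small as possible in this basis.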
This observation is crucial to understanding the significance of the Fourier decomposition, which will be examined in both discrete and continuous contexts in the following chapters.
Finally, note that the seemingly trivial equation v = v − s + s is, in fact, far more meaningful than it first appears when s = PS(v): in this case, we know that v − s and s are orthogonal.
The decomposition of a vector as the sum of a component belonging to a subspace S and a component belonging to its orthogonal complement is known as the orthogonal projection theorem.
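The orthogonal decomposition v = PS(v) + (v − PS(v)) and the Pythagorean identity it induces can also be checked numerically; a minimal sketch with NumPy, using a hypothetical orthogonal family in ℝ³ and a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthogonal family spanning a plane S in R^3
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])

v = rng.standard_normal(3)
# s = P_S(v) via the projection formula sum_i <v,u_i>/||u_i||^2 u_i
s = sum(np.dot(v, u) / np.dot(u, u) * u for u in (u1, u2))
r = v - s                            # residual component, r ⊥ S

# r is orthogonal to every generator of S, hence to S itself
assert np.isclose(np.dot(r, u1), 0.0)
assert np.isclose(np.dot(r, u2), 0.0)

# Pythagorean identity induced by the decomposition v = s + r
assert np.isclose(np.linalg.norm(v) ** 2,
                  np.linalg.norm(s) ** 2 + np.linalg.norm(r) ** 2)
```

The last assertion is exactly the generalized Pythagorean theorem applied to the two orthogonal components of v.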
This decomposition is unique, and its generalization to infinite dimensions, alongside its consequences for the geometric structure of Hilbert spaces, will be examined in detail in Chapter 5.