
1.4. Constrained optimization problems

1.4.1. Problems with equality constraints


Let fk : ℝn → ℝ, k = 0,1, … , m, be differentiable functions of n real variables. The constrained optimization problem (with equality constraints) is the problem

f0(x) → extr,   fk(x) = 0,  k = 1, … , m.   [1.2]

The points x ∈ ℝn which satisfy the equations fk(x) = 0, k = 1, 2, … , m, are called admissible in the problem [1.2]. An admissible point x̂ gives a local minimum (maximum) of the problem [1.2] if there exists a number δ > 0 such that for all admissible x ∈ ℝn that satisfy the conditions fk(x) = 0, k = 1, 2, … , m, and the condition ∥x − x̂∥ < δ, the following inequality

f0(x) ≥ f0(x̂)   (f0(x) ≤ f0(x̂))

holds true.

The main method of solving constrained optimization problems is the method of indeterminate Lagrange multipliers. It is based on the fact that the solution of the constrained optimization problem [1.2] is attained at points that are critical points of the unconstrained optimization problem

L(x, λ, λ0) → extr,

where L(x, λ, λ0) = λ0 f0(x) + λ1 f1(x) + … + λm fm(x) is the Lagrange function, and λ0, … , λm are the Lagrange multipliers.
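For example, in the problem x1² + x2² → min, x1 + x2 − 1 = 0 (a simple illustrative problem that will serve as a running example below), the Lagrange function is L(x, λ, λ0) = λ0(x1² + x2²) + λ1(x1 + x2 − 1).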

THEOREM 1.15.– (Lagrange theorem) Let x̂ be a point of local extremum of the problem [1.2], and let the functions fi(x), i = 0, 1, … , m, be continuously differentiable in a neighborhood U of the point x̂. Then there exist Lagrange multipliers λ0, … , λm, not all equal to zero, such that the stationarity condition in x of the Lagrange function is fulfilled:

Lx(x̂, λ, λ0) = λ0 f0′(x̂) + λ1 f1′(x̂) + … + λm fm′(x̂) = 0.

In order to have λ0 ≠ 0, it is sufficient that the vectors f1′(x̂), … , fm′(x̂) be linearly independent.

To prove the theorem, we use the inverse function theorem in a finite-dimensional space.

THEOREM 1.16.– (Inverse function theorem) Let F1(x1, … , xs), … , Fs(x1, … , xs) be s functions of s variables that are continuously differentiable in a neighborhood U of the point x̂ = (x̂1, … , x̂s), and let the Jacobian

det( ∂Fi(x̂)/∂xj ),  i, j = 1, … , s,

be not equal to zero. Then there exist numbers ε > 0, δ > 0, K > 0 such that for any y = (y1, … , ys), ∥y∥ ≤ ε, we can find x = (x1, … , xs) which satisfies the conditions ∥x − x̂∥ ≤ δ, Fi(x) = Fi(x̂) + yi, i = 1, … , s, and ∥x − x̂∥ ≤ K∥y∥.

PROOF.– We prove the Lagrange theorem by contradiction. Suppose that the stationarity condition

λ0 f0′(x̂) + λ1 f1′(x̂) + … + λm fm′(x̂) = 0

does not hold for any multipliers λ0, λ1, … , λm that are not all equal to zero. Then the vectors fi′(x̂), i = 0, 1, … , m, are linearly independent, which means that the rank of the matrix

A = ( ∂fi(x̂)/∂xj ),  i = 0, 1, … , m,  j = 1, … , n,

is equal to m + 1. Therefore, there exists a submatrix of the matrix A of size (m + 1) × (m + 1) whose determinant is not zero. Let this be the matrix composed of the first m + 1 columns of the matrix A. We construct the function F : ℝm+1 → ℝm+1 with the help of the functions fk(x), k = 0, … , m. Let

F1(x1, … , xm+1) = f0(x1, … , xm+1, x̂m+2, … , x̂n) − f0(x̂),
Fk+1(x1, … , xm+1) = fk(x1, … , xm+1, x̂m+2, … , x̂n),  k = 1, … , m.

Here x1, … , xm+1 are variables, and x̂m+2, … , x̂n are fixed quantities. If x̂ is a solution of the constrained optimization problem, then F1(x̂1, … , x̂m+1) = … = Fm+1(x̂1, … , x̂m+1) = 0. The functions Fk(x1, … , xm+1), k = 1, … , m + 1, satisfy the conditions of the inverse function theorem at the point (x̂1, … , x̂m+1). Take y = (ε, 0, … , 0). For sufficiently small values of ε, there exists a vector (x1(ε), … , xm+1(ε)) such that

Fk(x1(ε), … , xm+1(ε)) = yk,  k = 1, … , m + 1,

that is,

f0(x(ε)) = f0(x̂) + ε,  fk(x(ε)) = 0,  k = 1, … , m,

where x(ε) = (x1(ε), … , xm+1(ε), x̂m+2, … , x̂n) and ∥x(ε) − x̂∥ ≤ K|ε|. This contradicts the fact that x̂ is a solution to the constrained optimization problem [1.2], since both for positive and negative values of ε there are admissible vectors x(ε) close to x̂ on which the function f0 takes values both smaller and larger than f0(x̂). □

Thus, to determine the m + n + 1 unknowns λ0, λ1, … , λm, x̂1, … , x̂n, we have n + m equations

∂L(x̂, λ, λ0)/∂xj = 0,  j = 1, … , n,    fk(x̂) = 0,  k = 1, … , m.
Note that the Lagrange multipliers are determined up to proportionality. If it is known that λ0 ≠ 0, then we can choose λ0 = 1. Then the number of equations and the number of unknowns are the same.
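In the running example x1² + x2² → min, x1 + x2 − 1 = 0, we have n = 2, m = 1, and f1′(x) = (1, 1) ≠ 0, so we may set λ0 = 1. The system then consists of the three equations 2x1 + λ1 = 0, 2x2 + λ1 = 0, x1 + x2 − 1 = 0 in the three unknowns x̂1, x̂2, λ1, with the unique solution x̂ = (1/2, 1/2), λ1 = −1.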

Linear independence of the vectors of derivatives f1′(x̂), … , fm′(x̂) is the regularity condition that guarantees λ0 ≠ 0. However, verification of this condition is often more complicated than checking directly that λ0 cannot be equal to zero.

Since the time of Lagrange, for almost a century, the method of Lagrange multipliers was used with λ0 = 1, despite the fact that in the general case this is incorrect.
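A simple example of this phenomenon: in the problem f0(x) = x1 → min, f1(x) = x1² + x2² = 0, the only admissible point is x̂ = (0, 0), which is therefore the solution. The stationarity condition λ0 f0′(x̂) + λ1 f1′(x̂) = λ0(1, 0) + λ1(0, 0) = 0 forces λ0 = 0, so the Lagrange rule is satisfied only with λ0 = 0, and setting λ0 = 1 would produce no stationary points at all.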

As in the case of the unconstrained optimization problem, the stationary points of the constrained optimization problem need not be its solutions. There also exist necessary and sufficient conditions for optimality in terms of the second derivatives. Denote by

Lxx(x̂, λ, λ0) = ( ∂²L(x̂, λ, λ0)/∂xi∂xj ),  i, j = 1, … , n,

the matrix of the second-order partial derivatives of the Lagrange function L(x, λ, λ0).

THEOREM 1.17.– Let the functions fi(x), i = 0, 1, … , m, be twice differentiable at a point x̂ and continuously differentiable in some neighborhood U of the point x̂, and let the gradients f1′(x̂), … , fm′(x̂) be linearly independent. If x̂ is a point of local minimum of the problem [1.2], then

⟨Lxx(x̂, λ, λ0) h, h⟩ ≥ 0
for all λ, λ0 satisfying the condition

λ0 ≥ 0,   Lx(x̂, λ, λ0) = λ0 f0′(x̂) + λ1 f1′(x̂) + … + λm fm′(x̂) = 0,
and all h ∈ ℝn such that

⟨fk′(x̂), h⟩ = 0,  k = 1, … , m.
THEOREM 1.18.– Let the functions fi(x), i = 0, 1, … , m, be twice differentiable at a point x̂ ∈ ℝn satisfying the conditions

fk(x̂) = 0,  k = 1, … , m.
Assume that for some λ, λ0 the following condition holds:

Lx(x̂, λ, λ0) = 0,   λ0 > 0,
and, in addition,

⟨Lxx(x̂, λ, λ0) h, h⟩ > 0
for all non-zero h ∈ ℝn satisfying the condition

⟨fk′(x̂), h⟩ = 0,  k = 1, … , m.

Then x̂ is a point of local minimum of the problem [1.2].
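In the running example, the stationary point found above is x̂ = (1/2, 1/2) with λ0 = 1, λ1 = −1, and Lxx(x̂, λ, λ0) = 2I. Hence ⟨Lxx(x̂, λ, λ0) h, h⟩ = 2(h1² + h2²) > 0 for every non-zero h with ⟨f1′(x̂), h⟩ = h1 + h2 = 0, and Theorem 1.18 confirms that x̂ is a point of local minimum (in fact, the global minimum, with f0(x̂) = 1/2).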

We can formulate the following rule of indeterminate Lagrange multipliers for solving constrained optimization problems with equality constraints:

1) Write the Lagrange function L(x, λ, λ0) = λ0 f0(x) + λ1 f1(x) + … + λm fm(x).

2) Write the necessary conditions for an extremum of the function L, that is,

∂L(x, λ, λ0)/∂xj = 0,  j = 1, … , n,    fk(x) = 0,  k = 1, … , m.

3) Find the stationary points, that is, the solutions of these equations, provided that not all Lagrange multipliers λ0, λ1, … , λm are zero.

4) Find a solution to the problem among the stationary points or prove that the problem has no solutions (a short computational sketch of these steps is given below).
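As an illustration of this rule, here is a minimal computational sketch in Python (using the sympy library) for the running example above: f0(x) = x1² + x2² → min, f1(x) = x1 + x2 − 1 = 0. Since f1′(x) = (1, 1) never vanishes, the regularity condition holds and we set λ0 = 1.

import sympy as sp

# Step 1: the Lagrange function with lambda0 = 1 (regular case).
x1, x2, lam1 = sp.symbols('x1 x2 lambda1', real=True)
f0 = x1**2 + x2**2                 # objective
f1 = x1 + x2 - 1                   # equality constraint
L = f0 + lam1 * f1                 # L(x, lambda, 1)

# Step 2: necessary conditions dL/dx1 = 0, dL/dx2 = 0 together with f1 = 0.
equations = [sp.diff(L, x1), sp.diff(L, x2), f1]

# Step 3: stationary points.
stationary = sp.solve(equations, [x1, x2, lam1], dict=True)
print(stationary)                  # expected: [{x1: 1/2, x2: 1/2, lambda1: -1}]

# Step 4: second-order check (Theorem 1.18): the Hessian of L in x must be
# positive definite on the subspace {h : f1'(x)h = 0} at the stationary point.
print(sp.hessian(L, (x1, x2)))     # expected: Matrix([[2, 0], [0, 2]])

The Hessian 2I is positive definite on all of ℝ2, in particular on the subspace h1 + h2 = 0, so x̂ = (1/2, 1/2) is a point of local (in fact, global) minimum with f0(x̂) = 1/2.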
