1.3 Single‐Objective Optimization Using Newton’s Method

Let us consider a method to find the extrema of an objective function f(x). Let us focus our attention on the case where f(x) ∈ ℝ and x ∈ ℝⁿ. Algorithms to solve this problem include gradient methods, Newton’s method, conjugate direction methods, quasi‐Newton methods, and the Nelder–Mead simplex method, to name a few. Let us focus on Newton’s method as being somewhat representative.

In order to set the stage for Newton’s method, let us first define some operators. The first derivative or gradient of our objective function is denoted ∇f(x) and is defined as

\nabla f(\mathbf{x}) = \left[ \frac{\partial f}{\partial x_1} \;\; \frac{\partial f}{\partial x_2} \;\; \cdots \;\; \frac{\partial f}{\partial x_n} \right]^{\mathrm{T}}   (1.3-1)

The second derivative or Hessian of f(x), denoted F(x), is defined as

\mathbf{F}(\mathbf{x}) = \nabla^2 f(\mathbf{x}) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}   (1.3-2)
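As a brief illustration of these two operators (an example of ours, not the book’s), consider f(x) = x₁² + x₁x₂ + 2x₂². Applying (1.3-1) and (1.3-2) gives

\nabla f(\mathbf{x}) = \begin{bmatrix} 2x_1 + x_2 \\ x_1 + 4x_2 \end{bmatrix}, \qquad \mathbf{F}(\mathbf{x}) = \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix}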

If x* is a local minimizer of f, and if x* is in the interior of the feasible set Ω over which x is allowed to vary, it can be shown that

\nabla f(\mathbf{x}^*) = \mathbf{0}   (1.3-3)

and that

\mathbf{F}(\mathbf{x}^*) \ge 0   (1.3-4)

Note that the statement F(x*) ≥ 0 means that F(x*) is positive semi‐definite, which is to say that yᵀF(x*)y ≥ 0 ∀ y ∈ ℝⁿ. This second condition serves to distinguish a minimum from a maximum. It is important to understand that the conditions (1.3-3) and (1.3-4) are necessary but not sufficient conditions for x* to be a minimizer, unless f is a convex function. If f is convex, (1.3-3) and (1.3-4) are necessary and sufficient conditions for x* to be a global minimizer.
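To see how sufficiency can fail, consider the scalar example f(x) = x³ (our illustration, not the book’s):

\nabla f(0) = 3x^2 \big|_{x=0} = 0, \qquad \mathbf{F}(0) = 6x \big|_{x=0} = 0 \ge 0

Both (1.3-3) and (1.3-4) hold at x* = 0, yet the origin is an inflection point rather than a minimizer.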

At this point, we are poised to set forth Newton’s method of finding function minimizers. This method is iterative and is based on a kth estimate of the solution denoted x[k]. Then an update formula is applied to generate a (hopefully) improved estimate x[k + 1] of the minimizer. The update formula is derived by first approximating f as a Taylor series about the current estimated solution x[k]. In particular,

f(\mathbf{x}) = f(\mathbf{x}[k]) + \nabla^{\mathrm{T}} f(\mathbf{x}[k])\,(\mathbf{x} - \mathbf{x}[k]) + \frac{1}{2}(\mathbf{x} - \mathbf{x}[k])^{\mathrm{T}}\,\mathbf{F}(\mathbf{x}[k])\,(\mathbf{x} - \mathbf{x}[k]) + H   (1.3-5)

where H denotes higher‐order terms. Neglecting these higher‐order terms and finding the gradient of f(x) based on (1.3-5), we obtain

\nabla f(\mathbf{x}) = \nabla f(\mathbf{x}[k]) + \mathbf{F}(\mathbf{x}[k])\,(\mathbf{x} - \mathbf{x}[k])   (1.3-6)

In accordance with the necessary condition (1.3-3), we next set the right‐hand side of (1.3-6) equal to zero and replace x with our improved estimate x[k + 1]. Solving the resulting expression for x[k + 1] yields

\mathbf{x}[k+1] = \mathbf{x}[k] - \mathbf{F}^{-1}(\mathbf{x}[k])\,\nabla f(\mathbf{x}[k])   (1.3-7)

which is Newton’s method.
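A minimal computational sketch of the iteration (1.3-7), assuming NumPy and user‐supplied gradient and Hessian routines (the code and all names in it are ours, not the book’s), might read:

import numpy as np

def newtons_method(grad, hess, x1, tol=1e-8, max_iter=50):
    # Repeatedly apply the Newton update (1.3-7), starting from the
    # initial estimate x[1]; grad and hess evaluate (1.3-1) and (1.3-2).
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop once (1.3-3) is (nearly) satisfied
            break
        # Solve F(x[k]) d = grad f(x[k]) rather than forming the inverse explicitly
        x = x - np.linalg.solve(hess(x), g)
    return x

# The quadratic example used above: f(x) = x1^2 + x1*x2 + 2*x2^2
grad = lambda x: np.array([2.0*x[0] + x[1], x[0] + 4.0*x[1]])
hess = lambda x: np.array([[2.0, 1.0], [1.0, 4.0]])
print(newtons_method(grad, hess, [1.0, 1.0]))  # lands on [0, 0] in one step

Because this f is quadratic, the Taylor series (1.3-5) is exact with H = 0, so a single Newton step reaches the minimizer; for a general f, several iterations are typically required.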

Clearly, Newton’s method requires f ∈ C², which is to say that f is in the set of twice continuously differentiable functions. Note that the selection of the initial solution, x[1], can have a significant impact on which (if any) local solution is found. Also note that the method is equally likely to yield a local maximizer as a local minimizer, since the update (1.3-7) merely seeks points at which the gradient vanishes.
