Example 2.9
Consider the following optimization problem:

$$\begin{aligned}
\min_{x,y} \quad & 10x^{2} + 15y^{2} + e^{x+y} \\
\text{subject to} \quad & x + y = 5
\end{aligned} \tag{2.55}$$
Its corresponding Lagrangian is presented below:

$$\mathcal{L}(x,y,z) = 10x^{2} + 15y^{2} + e^{x+y} + z\,(x + y - 5) \tag{2.56}$$

where $z$ denotes the Lagrange multiplier associated with the equality constraint.
and the optimality conditions form the following set of algebraic equations:

$$\nabla\mathcal{L}(x,y,z) =
\begin{bmatrix}
20x + e^{x+y} + z \\
30y + e^{x+y} + z \\
x + y - 5
\end{bmatrix}
=
\begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix} \tag{2.57}$$
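Since the constraint fixes $x + y = 5$, the exponential term takes the constant value $e^{5}$ at any feasible point, so the stationary point can also be checked in closed form (a short derivation added here for reference, not one of the book's numbered equations):

$$20x = 30y, \qquad x + y = 5 \;\Longrightarrow\; x = 3,\quad y = 2,\quad z = -\left(60 + e^{5}\right) \approx -208.4$$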
The corresponding Jacobian is the following matrix:
$$J(x,y,z) =
\begin{bmatrix}
20 + e^{x+y} & e^{x+y} & 1 \\
e^{x+y} & 30 + e^{x+y} & 1 \\
1 & 1 & 0
\end{bmatrix} \tag{2.58}$$
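Each iteration of Newton's method solves a linear system with this Jacobian and updates the current point accordingly (a standard statement of the Newton step, included here for clarity):

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix}_{k+1}
=
\begin{bmatrix} x \\ y \\ z \end{bmatrix}_{k}
- J\!\left(x_k, y_k, z_k\right)^{-1} \nabla\mathcal{L}\!\left(x_k, y_k, z_k\right)$$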
It is possible to formulate Newton’s method using the information described above. The algorithm implemented in Python is presented below:
import numpy as np

def Fobj(x, y):
    "Objective function"
    return 10*x**2 + 15*y**2 + np.exp(x+y)

def Grad(x, y, z):
    "Gradient of the Lagrangian"
    dx = 20*x + np.exp(x+y) + z
    dy = 30*y + np.exp(x+y) + z
    dz = x + y - 5
    return np.array([dx, dy, dz])

def Jac(x, y, z):
    "Jacobian of Grad"
    p = np.exp(x+y)
    return np.array([[20+p, p, 1],
                     [p, 30+p, 1],
                     [1, 1, 0]])

(x, y, z) = (10, 10, 1)               # initial condition
G = Grad(x, y, z)
while np.linalg.norm(G) >= 1E-8:      # stop when the gradient norm is small
    J = Jac(x, y, z)
    step = -np.linalg.solve(J, G)     # Newton step: solve J*step = -G
    (x, y, z) = (x, y, z) + step
    G = Grad(x, y, z)

print('Gradient: ', np.linalg.norm(G))
print('Optimum point: ', np.round([x, y, z], 2))
print('Objective function: ', Fobj(x, y))
In this case, we used a tolerance of 10−8 on the norm of the gradient. The algorithm achieves convergence in a few iterations, as the reader can check by running the code.
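As an optional cross-check (not part of the original example; it assumes SciPy is installed), the same constrained minimum can be obtained with a general-purpose solver such as scipy.optimize.minimize, using the SLSQP method to handle the equality constraint:

import numpy as np
from scipy.optimize import minimize

# Objective and equality constraint, matching problem (2.55)
fobj = lambda v: 10*v[0]**2 + 15*v[1]**2 + np.exp(v[0] + v[1])
constraint = {'type': 'eq', 'fun': lambda v: v[0] + v[1] - 5}

# SLSQP supports equality constraints directly
res = minimize(fobj, x0=[10, 10], constraints=[constraint], method='SLSQP')
print('Optimum point: ', np.round(res.x, 2))
print('Objective function: ', res.fun)

The reported point and objective value should agree with those produced by the Newton iteration above, up to the solver tolerances.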