This is a demonstration of the medium-scale algorithms in the Optimization Toolbox. It closely follows the Tutorial section of the User's Guide.
All the principles outlined in this demonstration apply to the other nonlinear solvers as well: FGOALATTAIN, FMINIMAX, LSQNONLIN, and FSOLVE.
The routines differ from the Tutorial section examples in the User's Guide only in that some objectives are anonymous functions instead of M-file functions.
Consider initially the problem of finding a minimum of the function:
   f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
Define the objective to be minimized as an anonymous function:
fun = @(x) exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
fun =
    @(x) exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
Take a guess at the solution:
x0 = [-1; 1];
Set optimization options: turn off the large-scale algorithms (the default):
options = optimset('LargeScale','off');
Call the unconstrained minimization function:
[x, fval, exitflag, output] = fminunc(fun, x0, options);
Optimization terminated: relative infinity-norm of gradient less than options.TolFun.
The optimizer has found a solution at:
x
x =
    0.5000   -1.0000
The function value at the solution is:
fval
fval = 1.0983e-015
The total number of function evaluations was:
output.funcCount
ans = 66
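As a quick sanity check (not part of the original demo), we can evaluate the anonymous function fun defined above at the reported minimizer, where the objective attains its minimum value of zero, and at a nearby point, where it is strictly larger:

```matlab
% Sanity check: the polynomial factor 4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1
% vanishes at [0.5; -1], so the objective is zero there and positive nearby.
xsol = [0.5; -1];
fun(xsol)           % essentially zero (up to rounding)
fun(xsol + 1e-2)    % small but strictly positive
```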
Consider the above problem with two additional constraints:
   minimize    f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)

   subject to  1.5 + x(1)*x(2) - x(1) - x(2) <= 0
               -x(1)*x(2) <= 10
The objective function this time is contained in an M-file, objfun.m:
type objfun
function f = objfun(x)
% Objective function
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.5.4.2 $  $Date: 2004/04/06 01:10:28 $

f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
The constraints are also defined in an M-file, confun.m:
type confun
function [c, ceq] = confun(x)
% Nonlinear inequality constraints:
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.6.4.2 $  $Date: 2004/04/06 01:10:16 $

c = [1.5 + x(1)*x(2) - x(1) - x(2);
     -x(1)*x(2) - 10];
% No nonlinear equality constraints:
ceq = [];
Take a guess at the solution:
x0 = [-1 1];
Set optimization options: turn off the large-scale algorithms (the default) and turn on the display of results at each iteration:
options = optimset('LargeScale','off','Display','iter');
Call the optimization algorithm. We have no linear equalities or inequalities or bounds, so we pass [] for those arguments:
[x,fval,exitflag,output] = fmincon(@objfun,x0,[],[],[],[],[],[],@confun,options);
                                   max                   Directional  First-order
 Iter F-count       f(x)       constraint   Step-size    derivative    optimality  Procedure
    0      3       1.8394          0.5                                             Infeasible start point
    1      7      1.85127     -0.09197           1          -0.027         0.778
    2     11     0.300167         9.33           1          -0.825         0.313   Hessian modified twice
    3     15     0.529835       0.9209           1           0.302         0.232
    4     20     0.186965       -1.517         0.5          -0.437          0.13
    5     24    0.0729085       0.3313           1         -0.0715         0.054
    6     28    0.0353323     -0.03303           1          -0.026        0.0271
    7     32    0.0235566     0.003184           1        -0.00963       0.00587
    8     36    0.0235504   9.032e-008           1      -6.22e-006     8.51e-007

Optimization terminated: first-order optimality measure less than options.TolFun
 and maximum constraint violation is less than options.TolCon.

Active inequalities (to within options.TolCon = 1e-006):
  lower      upper     ineqlin   ineqnonlin
                                      1
                                      2
A solution to this problem has been found at:
x
x =
   -9.5474    1.0474
The function value at the solution is:
fval
fval = 0.0236
Both inequality constraints are satisfied (and active) at the solution:
[c, ceq] = confun(x)
c =
  1.0e-007 *
   -0.9032
    0.9032

ceq =
     []
The total number of function evaluations was:
output.funcCount
ans = 36
Consider the previous problem with additional bound constraints:
   minimize    f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)

   subject to  1.5 + x(1)*x(2) - x(1) - x(2) <= 0
               -x(1)*x(2) <= 10
               x(1) >= 0
               x(2) >= 0
As in the previous example, the objective and constraint functions are defined in M-files. The file objfun.m contains the objective:
type objfun
function f = objfun(x)
% Objective function
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.5.4.2 $  $Date: 2004/04/06 01:10:28 $

f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
The file confun.m contains the constraints:
type confun
function [c, ceq] = confun(x)
% Nonlinear inequality constraints:
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.6.4.2 $  $Date: 2004/04/06 01:10:16 $

c = [1.5 + x(1)*x(2) - x(1) - x(2);
     -x(1)*x(2) - 10];
% No nonlinear equality constraints:
ceq = [];
Set the bounds on the variables:
lb = zeros(1,2);  % Lower bounds x >= 0
ub = [];          % No upper bounds
Again, make a guess at the solution:
x0 = [-1 1];
Set optimization options: turn off the large-scale algorithms (the default). This time we do not turn on the Display option.
options = optimset('LargeScale','off');
Run the optimization algorithm:
[x,fval,exitflag,output] = fmincon(@objfun,x0,[],[],[],[],lb,ub,@confun,options);
Optimization terminated: first-order optimality measure less than options.TolFun
 and maximum constraint violation is less than options.TolCon.

Active inequalities (to within options.TolCon = 1e-006):
  lower      upper     ineqlin   ineqnonlin
    1                                 1
The solution to this problem has been found at:
x
x =
         0    1.5000
The function value at the solution is:
fval
fval = 8.5000
The constraint values at the solution are:
[c, ceq] = confun(x)
c =
     0
   -10

ceq =
     []
The total number of function evaluations was:
output.funcCount
ans = 15
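The reported optimum is easy to verify by hand: at x = [0 1.5] the factor exp(0) is 1, so f = 2*(1.5)^2 + 2*(1.5) + 1 = 4.5 + 3 + 1 = 8.5, matching fval above. Assuming objfun.m as listed earlier:

```matlab
% Evaluate the objective at the bound-constrained solution x = [0 1.5]:
% exp(0) * (0 + 2*2.25 + 0 + 3 + 1) = 8.5
objfun([0 1.5])
```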
Optimization problems can be solved more efficiently and accurately if the user supplies gradients. This example shows how to supply them. We again solve the inequality-constrained problem
   minimize    f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)

   subject to  1.5 + x(1)*x(2) - x(1) - x(2) <= 0
               -x(1)*x(2) <= 10
The objective function and its gradient are defined in the M-file objfungrad.m:
type objfungrad
function [f, G] = objfungrad(x)
% Objective function:
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.6.4.1 $  $Date: 2004/02/07 19:13:23 $

f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
% Gradient (partial derivatives) of the objective function:
t = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
G = [ t + exp(x(1)) * (8*x(1) + 4*x(2)), exp(x(1))*(4*x(1)+4*x(2)+2)];
The constraints and their partial derivatives are contained in the M-file confungrad:
type confungrad
function [c, ceq, dc, dceq] = confungrad(x)
% Nonlinear inequality constraints:
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.5.4.1 $  $Date: 2004/02/07 19:12:54 $

c = [1.5 + x(1)*x(2) - x(1) - x(2);
     -x(1)*x(2) - 10];
% Gradient (partial derivatives) of nonlinear inequality constraints:
dc = [x(2)-1, -x(2);
      x(1)-1, -x(1)];
% No nonlinear equality constraints (and gradients):
ceq = [];
dceq = [];
Make a guess at the solution:
x0 = [-1; 1];
Set optimization options: since we are supplying the gradients, we could use either the medium- or the large-scale algorithm; we continue with the medium-scale algorithm for comparison purposes.
options = optimset('LargeScale','off');
We also set options to use the gradient information in the objective and constraint functions. Note: these options MUST be turned on or the gradient information will be ignored.
options = optimset(options,'GradObj','on','GradConstr','on');
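When supplying analytic derivatives it is easy to introduce sign or indexing errors, so it can be worth comparing them against finite-difference estimates first. One possible sketch, using the DerivativeCheck option of this toolbox version (the variable name chkopts is ours, not from the demo):

```matlab
% Optional derivative verification (sketch): with DerivativeCheck on,
% the solver compares the user-supplied gradients against
% finite-difference approximations before proceeding.
chkopts = optimset(options,'DerivativeCheck','on');
```

The run below uses the original options, so this optional check does not alter the results shown.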
Call the optimization algorithm:
[x,fval,exitflag,output] = fmincon(@objfungrad,x0,[],[],[],[],[],[], ...
@confungrad,options);
Optimization terminated: first-order optimality measure less than options.TolFun
 and maximum constraint violation is less than options.TolCon.

Active inequalities (to within options.TolCon = 1e-006):
  lower      upper     ineqlin   ineqnonlin
                                      1
                                      2
As before, the solution to this problem has been found at:
x
x =
   -9.5474    1.0474
The function value at the solution is:
fval
fval = 0.0236
Both inequality constraints are active at the solution:
[c, ceq] = confungrad(x)
c =
  1.0e-007 *
   -0.9032
    0.9032

ceq =
     []
The total number of function evaluations was:
output.funcCount
ans = 18
Consider the above objective with a nonlinear equality constraint together with one of the inequality constraints:
   minimize    f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)

   subject to  x(1)^2 + x(2) = 1
               -x(1)*x(2) <= 10
As in previous examples, the objective function is defined in the M-file objfun.m:
type objfun
function f = objfun(x)
% Objective function
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.5.4.2 $  $Date: 2004/04/06 01:10:28 $

f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
The M-file confuneq.m contains the equality and inequality constraints:
type confuneq
function [c, ceq] = confuneq(x)
% Nonlinear inequality constraints:
% Copyright 1990-2004 The MathWorks, Inc.
% $Revision: 1.7.4.1 $  $Date: 2004/02/07 19:12:53 $

c = -x(1)*x(2) - 10;
% Nonlinear equality constraint:
ceq = x(1)^2 + x(2) - 1;
Again, make a guess at the solution:
x0 = [-1 1];
Set optimization options: turn off the large-scale algorithms (the default):
options = optimset('LargeScale','off');
Call the optimization algorithm:
[x,fval,exitflag,output] = fmincon(@objfun,x0,[],[],[],[],[],[],@confuneq,options);
Optimization terminated: first-order optimality measure less than options.TolFun
 and maximum constraint violation is less than options.TolCon.

No active inequalities.
The solution to this problem has been found at:
x
x =
   -0.7529    0.4332
The function value at the solution is:
fval
fval = 1.5093
The constraint values at the solution are:
[c, ceq] = confuneq(x)
c =
   -9.6739

ceq =
  6.3038e-009
The total number of function evaluations was:
output.funcCount
ans = 27
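The equality constraint can also be checked from the displayed digits: (-0.7529)^2 + 0.4332 = 0.5669 + 0.4332 ≈ 1.0001, i.e. 1 to display precision, consistent with x(1)^2 + x(2) = 1. With the x returned above:

```matlab
% Residual of the nonlinear equality constraint at the solution;
% it should be zero to within options.TolCon.
x(1)^2 + x(2) - 1
```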
Consider the original unconstrained problem we solved first:
   minimize    f(x) = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
This time we will trade some accuracy for speed by overriding the default termination criteria (options.TolX and options.TolFun) with looser values, so the solver stops sooner.
Create an anonymous function of the objective to be minimized:
fun = @(x) exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
fun =
    @(x) exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
Again, make a guess at the solution:
x0 = [-1; 1];
Set optimization options: turn off the large-scale algorithms (the default):
options = optimset('LargeScale','off');
Override the default termination criteria:
% Termination tolerance on X and f.
options = optimset(options,'TolX',1e-3,'TolFun',1e-3);
Call the optimization algorithm:
[x, fval, exitflag, output] = fminunc(fun, x0, options);
Optimization terminated: relative infinity-norm of gradient less than options.TolFun.
The optimizer has found a solution at:
x
x =
    0.4998   -0.9997
The function value at the solution is:
fval
fval = 2.0368e-007
The total number of function evaluations was:
output.funcCount
ans = 60
Set optimization options: turn off the large-scale algorithms (the default):
options = optimset('LargeScale','off');
If we want a tabular display of each iteration we can set options.Display = 'iter' as follows:
options = optimset(options, 'Display','iter');
[x, fval, exitflag, output] = fminunc(fun, x0, options);
                                                       Gradient's
 Iteration  Func-count        f(x)        Step-size   infinity-norm
     0           3           1.8394                       0.736
     1           9          1.72428       0.368157        0.257
     2          27        0.0845289        22.5704        0.923
     3          51         0.072564      0.0012394         1.05
     4          54       0.00450951              1         0.29
     5          57     1.15036e-005              1        0.014
     6          60     2.03677e-007              1      0.00107
     7          63     3.47346e-012              1    9.34e-006
     8          66     1.09827e-015              1    7.37e-008

Optimization terminated: relative infinity-norm of gradient less than options.TolFun.
At each major iteration the table displays: (i) the iteration number, (ii) the cumulative number of function evaluations, (iii) the current function value, (iv) the step length used in the line search, and (v) the infinity-norm of the gradient.