Minimization
This module performs the optimization given a step proposal.

Classes Summary

Optimizer – Performs optimization

Classes

class fides.minimize.Optimizer(fun, ub, lb, verbose=10, options=None, funargs=None, hessian_update=None)
Performs optimization.
- Variables
  fun – objective function
  funargs – keyword arguments that are passed to the function
  lb – lower optimization boundaries
  ub – upper optimization boundaries
  options – options that configure convergence checks
  delta – trust region radius
  x – current optimization variables
  fval – objective function value at x
  grad – objective function gradient at x
  hess – objective function Hessian (approximation) at x
  hessian_update – object that performs Hessian updates
  starttime – time at which optimization was started
  iteration – current iteration
  converged – flag indicating whether optimization has converged
__init__(fun, ub, lb, verbose=10, options=None, funargs=None, hessian_update=None)
Create an optimizer object.

- Parameters
  fun (typing.Callable) – Objective function. If a hessian_update is provided, this function must return a tuple (fval, grad); otherwise it must return a tuple (fval, grad, Hessian).
  ub (numpy.ndarray) – Upper optimization boundaries. Individual entries can be set to np.inf for the respective variable to have no upper bound.
  lb (numpy.ndarray) – Lower optimization boundaries. Individual entries can be set to -np.inf for the respective variable to have no lower bound.
  verbose (typing.Optional[int]) – Verbosity level, pick from logging.[DEBUG, INFO, WARNING, ERROR].
  options (typing.Optional[typing.Dict]) – Options that control termination of optimization. See minimize for details.
  funargs (typing.Optional[typing.Dict]) – Additional keyword arguments that are passed to fun for evaluation.
  hessian_update (typing.Optional[fides.hessian_approximation.HessianApproximation]) – Subclass of fides.hessian_approximation.HessianApproximation that performs the Hessian updates in every iteration.
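As a sketch of the two objective calling conventions, here is a simple quadratic written both ways. The function names are illustrative, not part of fides; the sketch assumes the convention that the full Hessian must be supplied only when no hessian_update object is given.

```python
import numpy as np

A = np.diag([2.0, 10.0])  # positive definite -> unique minimum at x = 0

def quadratic_full(x):
    """Objective for use WITHOUT a hessian_update:
    must return the tuple (fval, grad, Hessian)."""
    fval = 0.5 * x @ A @ x
    grad = A @ x
    hess = A
    return fval, grad, hess

def quadratic_grad_only(x):
    """Objective for use WITH a hessian_update (e.g. a BFGS
    approximation): must return only (fval, grad)."""
    return 0.5 * x @ A @ x, A @ x
```

Either function would then be passed as the fun argument of Optimizer, together with lb and ub arrays of matching dimension.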
check_continue()
Checks whether minimization should continue based on convergence, iteration count and remaining computational budget.

- Return type: bool
- Returns: flag indicating whether minimization should continue
check_convergence(fval, x, grad)
Check whether optimization has converged.

- Parameters
  fval – updated objective function value
  x – updated optimization variables
  grad – updated objective function gradient
check_finite()
Checks whether objective function value, gradient and Hessian (approximation) have finite values and optimization can continue.

- Raises
  RuntimeError – if any of the variables have non-finite entries
check_in_bounds(x=None)
Checks whether the current optimization variables are all within the specified boundaries.

- Raises
  RuntimeError – if any of the variables are not within the boundaries
get_affine_scaling()
Computes the vector v and dv, the diagonal of its Jacobian. For the definition of v, see Definition 2 in [Coleman-Li1994].

- Returns
  v: scaling vector; dv: diagonal of the Jacobian of v wrt x
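A hedged sketch of this scaling, following Definition 2 of Coleman & Li (1994): each v_i measures the distance to the bound that the negative gradient points toward, or is a constant if that direction is unbounded. The function name and branch ordering are illustrative and may not match fides' internal implementation.

```python
import numpy as np

def affine_scaling(x, grad, lb, ub):
    """Coleman-Li scaling vector v and dv, the diagonal of its
    Jacobian wrt x. Sketch only, after Definition 2 in
    Coleman & Li (1994); not necessarily fides' exact code."""
    v = np.ones_like(x)
    dv = np.zeros_like(x)
    for i in range(len(x)):
        if grad[i] < 0:
            if np.isfinite(ub[i]):
                v[i] = x[i] - ub[i]   # distance to the upper bound
                dv[i] = 1.0
            else:
                v[i] = -1.0           # direction is unbounded above
        else:
            if np.isfinite(lb[i]):
                v[i] = x[i] - lb[i]   # distance to the lower bound
                dv[i] = 1.0
            else:
                v[i] = 1.0            # direction is unbounded below
    return v, dv
```

Note that dv is zero exactly where the relevant bound is infinite, so the scaling degenerates to a constant there.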
log_header()
Prints the header for diagnostic information; should complement Optimizer.log_step().
log_step(accepted, steptype, normdx)
Prints diagnostic information about the current step to the log.
make_non_degenerate(eps=2.220446049250313e-14)
Ensures that x is non-degenerate; this should only be necessary for initial points.

- Parameters
  eps – degeneracy threshold
minimize(x0)
Minimize the objective function using the interior trust-region reflective algorithm described by [ColemanLi1994] and [ColemanLi1996].

Convergence with respect to function value is achieved when \(|f_{k+1} - f_k| <\) options[fatol] \(+ \, |f_k| \, \cdot\) options[frtol]. Similarly, convergence with respect to optimization variables is achieved when \(||x_{k+1} - x_k|| <\) options[xatol] \(+ \, ||x_k|| \, \cdot\) options[xrtol]. Convergence with respect to the gradient is achieved when \(||g_k|| <\) options[gatol] or \(||g_k|| <\) options[grtol] \(\cdot \, f_k\). Other than that, optimization can be terminated when iterations exceed options[maxiter] or the elapsed time is expected to exceed options[maxtime] on the next iteration.

- Parameters
  x0 (numpy.ndarray) – initial guess
- Returns
  fval: final function value; x: final optimization variable values; grad: final gradient; hess: final Hessian (approximation)
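The termination criteria above can be sketched as plain comparisons. This is a sketch only: the helper name check_termination is hypothetical, and fides' actual bookkeeping lives in check_convergence and check_continue.

```python
import math

def check_termination(f_old, f_new, x_old, x_new, g_new, options):
    """Evaluate the convergence criteria described above.
    `options` holds fatol/frtol/xatol/xrtol/gatol/grtol;
    a sketch, not fides' internal implementation."""
    # |f_{k+1} - f_k| < fatol + |f_k| * frtol
    f_conv = abs(f_new - f_old) < options['fatol'] + abs(f_old) * options['frtol']
    # ||x_{k+1} - x_k|| < xatol + ||x_k|| * xrtol
    dx_norm = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_new, x_old)))
    x_norm = math.sqrt(sum(a * a for a in x_old))
    x_conv = dx_norm < options['xatol'] + x_norm * options['xrtol']
    # ||g_k|| < gatol  or  ||g_k|| < grtol * f_k
    g_norm = math.sqrt(sum(g * g for g in g_new))
    g_conv = g_norm < options['gatol'] or g_norm < options['grtol'] * f_new
    return f_conv or x_conv or g_conv
```

Any one criterion suffices; the maxiter and maxtime budget checks are handled separately (see check_continue).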
update_tr_radius(fval, grad, step_sx, dv, qppred)
Update the trust region radius.

- Parameters
  fval – new function value if the step defined by step_sx is taken
  grad – new gradient value if the step defined by step_sx is taken
  step_sx – proposed scaled step
  dv – derivative of the scaling vector v wrt x
  qppred – predicted objective function value according to the quadratic approximation
- Return type: bool
- Returns: flag indicating whether the proposed step should be accepted
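A generic trust-region acceptance and radius rule of the kind this method implements: compare the realized decrease against the decrease predicted by the quadratic model. The function name, the thresholds 0.25/0.75 and the factors 0.25/2 are conventional textbook choices, not necessarily fides' exact values, and the sketch ignores the scaled-step and dv details of the real signature.

```python
def update_radius(f_old, f_new, qppred, delta, step_norm,
                  eta=0.25, mu=0.75):
    """Generic trust-region update: compare actual vs. predicted
    decrease. Conventional constants; fides' rule may differ."""
    actual = f_old - f_new                   # realized decrease
    predicted = f_old - qppred               # decrease promised by the model
    ratio = actual / predicted if predicted != 0 else 0.0
    if ratio < eta:
        delta = 0.25 * step_norm             # poor model fit: shrink
    elif ratio > mu:
        delta = max(delta, 2.0 * step_norm)  # very good fit: allow expansion
    accept = ratio > 0.0                     # accept any actual decrease
    return accept, delta
```

The ratio close to 1 means the quadratic model predicted the objective well, which justifies growing the radius; a small or negative ratio means the model overpromised and the radius must shrink.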