Minimization¶
This module performs the optimization given a step proposal.
Classes Summary
| Optimizer | Performs optimization |
Classes
- class fides.minimize.Optimizer(fun, ub, lb, verbose=10, options=None, funargs=None, hessian_update=None)[source]¶
Performs optimization
- Variables
fun – Objective function
funargs – Keyword arguments that are passed to the function
lb – Lower optimization boundaries
ub – Upper optimization boundaries
options – Options that configure convergence checks
delta_iter – Trust region radius that was used for the current step
delta – Updated trust region radius
x – Current optimization variables
fval – Objective function value at x
grad – Objective function gradient at x
x_min – Optimal optimization variables
fval_min – Objective function value at x_min
grad_min – Objective function gradient at x_min
hess – Objective function Hessian (approximation) at x
hessian_update – Object that performs Hessian updates
starttime – Time at which optimization was started
iteration – Current iteration
converged – Flag indicating whether optimization has converged
exitflag – ExitFlag to indicate reason for termination
verbose – Verbosity level for logging
logger – Logger instance
- __init__(fun, ub, lb, verbose=10, options=None, funargs=None, hessian_update=None)[source]¶
Create an optimizer object
- Parameters
  - fun (typing.Callable) – Objective function. If no hessian_update is provided, this function must return a tuple (fval, grad); otherwise it must return a tuple (fval, grad, Hessian).
  - ub (numpy.ndarray) – Upper optimization boundaries. Individual entries can be set to np.inf for the respective variable to have no upper bound.
  - lb (numpy.ndarray) – Lower optimization boundaries. Individual entries can be set to -np.inf for the respective variable to have no lower bound.
  - verbose (typing.Optional[int]) – Verbosity level, pick from logging.[DEBUG, INFO, WARNING, ERROR].
  - options (typing.Optional[typing.Dict]) – Options that control termination of optimization. See minimize for details.
  - funargs (typing.Optional[typing.Dict]) – Additional keyword arguments that are passed to fun for evaluation.
  - hessian_update (typing.Optional[fides.hessian_approximation.HessianApproximation]) – Subclass of fides.hessian_approximation.HessianApproximation that performs the Hessian updates in every iteration.
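As a concrete illustration of the fun contract described above, here is a sketch using the Rosenbrock function; the objective itself is an arbitrary example, not part of fides:

```python
import numpy as np

def rosenbrock(x):
    """Objective returning (fval, grad, hess), as required when no
    hessian_update is provided; with a hessian_update, returning
    (fval, grad) would suffice."""
    fval = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    grad = np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])
    hess = np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])
    return fval, grad, hess

# The optimizer would then be constructed as, e.g.:
# opt = fides.Optimizer(rosenbrock, ub=np.full(2, np.inf),
#                       lb=np.full(2, -np.inf))
```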
- check_continue()[source]¶
Checks whether minimization should continue based on convergence, iteration count, and remaining computational budget.
- Returns
  flag indicating whether minimization should continue
- check_convergence(step, fval, grad)[source]¶
Check whether optimization has converged.
- Parameters
  - step (fides.steps.Step) – update to optimization variables
  - fval (float) – updated objective function value
  - grad (numpy.ndarray) – updated objective function gradient
- check_finite(grad=None, hess=None)[source]¶
Checks whether objective function value, gradient, and Hessian (approximation) have finite values and optimization can continue.
- Parameters
  - grad (typing.Optional[numpy.ndarray]) – gradient to be checked for finiteness; if not provided, the current one will be checked
  - hess (typing.Optional[numpy.ndarray]) – Hessian (approximation) to be checked for finiteness; if not provided, the current one will be checked
- Raises
RuntimeError if any of the variables have non-finite entries
- check_in_bounds(x=None)[source]¶
Checks whether the current optimization variables are all within the specified boundaries
- Raises
RuntimeError if any of the variables are not within boundaries
- get_affine_scaling()[source]¶
Computes the vector v and dv, the diagonal of its Jacobian. For the definition of v, see Definition 2 in [Coleman-Li1994]
- Returns
  v: scaling vector; dv: diagonal of the Jacobian of v wrt x
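For reference, the scaling vector of Definition 2 in [Coleman-Li1994] can be sketched directly from the published definition; the function below is a standalone illustration of that definition, not fides' internal get_affine_scaling:

```python
import numpy as np

def affine_scaling(x, grad, lb, ub):
    """Sketch of Definition 2 in [Coleman-Li1994]: v_i is the signed
    distance to the bound the gradient points towards, and dv_i is the
    diagonal of the Jacobian of v wrt x."""
    v = np.ones_like(x)
    dv = np.zeros_like(x)
    for i, g in enumerate(grad):
        if g < 0 and np.isfinite(ub[i]):
            v[i] = x[i] - ub[i]   # descending towards the upper bound
            dv[i] = 1.0
        elif g >= 0 and np.isfinite(lb[i]):
            v[i] = x[i] - lb[i]   # descending towards the lower bound
            dv[i] = 1.0
        elif g < 0:               # no upper bound in this direction
            v[i] = -1.0
        else:                     # no lower bound in this direction
            v[i] = 1.0
    return v, dv
```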
- log_header()[source]¶
Prints the header for diagnostic information; should complement Optimizer.log_step().
- log_step(accepted, step, fval)[source]¶
Prints diagnostic information about the current step to the log
- make_non_degenerate(eps=2.220446049250313e-14)[source]¶
Ensures that x is non-degenerate; this should only be necessary for initial points.
- Parameters
eps – degeneracy threshold
- minimize(x0)[source]¶
Minimize the objective function using the interior trust-region reflective algorithm described by [ColemanLi1994] and [ColemanLi1996]. Convergence with respect to function value is achieved when \(|f_{k+1} - f_k|\) < options[fatol] - \(f_k\) · options[frtol]. Similarly, convergence with respect to optimization variables is achieved when \(||x_{k+1} - x_k||\) < options[xtol] · \(x_k\) (note that this is checked in transformed coordinates that account for distance to boundaries). Convergence with respect to the gradient is achieved when \(||g_k||\) < options[gatol] or \(||g_k||\) < options[grtol] · \(f_k\). Other than that, optimization can be terminated when iterations exceed options[maxiter] or when the elapsed time is expected to exceed options[maxtime] on the next iteration.
- Parameters
  x0 (numpy.ndarray) – initial guess
- Returns
  fval: final function value; x: final optimization variable values; grad: final gradient; hess: final Hessian (approximation)
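The termination criteria described above can be restated as a small helper; everything here (the function name, the required options keys, and the use of raw rather than transformed coordinates in the xtol check) is a hypothetical paraphrase for illustration, not fides' implementation:

```python
import numpy as np

def check_termination(f_new, f_old, x_new, x_old, g_new, options):
    # Hypothetical paraphrase of the documented criteria. Note that the
    # xtol check below uses raw coordinates, whereas fides checks it in
    # transformed coordinates that account for distance to boundaries.
    gnorm = np.linalg.norm(g_new)
    if abs(f_new - f_old) < options['fatol'] - f_old * options['frtol']:
        return 'ftol'   # function-value convergence
    if np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)) \
            < options['xtol'] * np.linalg.norm(x_old):
        return 'xtol'   # optimization-variable convergence
    if gnorm < options['gatol'] or gnorm < options['grtol'] * f_new:
        return 'gtol'   # gradient convergence
    return None         # keep iterating
```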
- track_minimum(x_new, fval_new, grad_new)[source]¶
Tracks the optimization variables that have the minimal function value, independent of whether the step is accepted.
- Parameters
  - x_new (numpy.ndarray) – New optimization variable values
  - fval_new (float) – Objective function value at x_new
  - grad_new (numpy.ndarray) – Objective function gradient at x_new
- update(step, x_new, fval_new, grad_new, hess_new=None)[source]¶
Update the optimizer state according to the employed step.
- Parameters
  - step (fides.steps.Step) – Employed step
  - x_new (numpy.ndarray) – New optimization variable values
  - fval_new (float) – Objective function value at x_new
  - grad_new (numpy.ndarray) – Objective function gradient at x_new
  - hess_new (typing.Optional[numpy.ndarray]) – (Approximate) objective function Hessian at x_new
- update_tr_radius(fval, grad, step, dv)[source]¶
Update the trust region radius
- Parameters
  - fval (float) – new function value if the step defined by step_sx is taken
  - grad (numpy.ndarray) – new gradient value if the step defined by step_sx is taken
  - step (fides.steps.Step) – step
  - dv (numpy.ndarray) – derivative of scaling vector v wrt x
- Returns
  flag indicating whether the proposed step should be accepted
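For context, a textbook trust-region radius update looks like the following; the ratio test, all constants, and the function itself are generic illustrative choices, not fides' actual rule:

```python
import numpy as np

def update_radius(delta, fval, fval_old, pred_decrease, step_norm,
                  eta=0.25, gamma_dec=0.25, gamma_inc=2.0, mu=0.75):
    """Generic trust-region update: compare the actual decrease to the
    decrease predicted by the local model, then shrink or grow the
    radius and accept or reject the step accordingly."""
    actual = fval_old - fval
    ratio = actual / pred_decrease if pred_decrease > 0 else -np.inf
    if ratio < eta:
        delta = gamma_dec * step_norm            # poor agreement: shrink
        accept = False
    else:
        if ratio > mu:
            delta = max(delta, gamma_inc * step_norm)  # good agreement: grow
        accept = True
    return delta, accept
```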