Hessian Update Strategies
This module provides generic Hessian approximation strategies that can be employed when calculating the exact Hessian, or even an approximation to it, is computationally too demanding.
Classes Summary
| Class | Description |
| --- | --- |
| `BB` | Broyden's "bad" method as introduced in [Broyden 1965](https://doi.org/10.1090%2FS0025-5718-1965-0198670-6). |
| `BFGS` | Broyden-Fletcher-Goldfarb-Shanno update strategy. |
| `BG` | Broyden's "good" method as introduced in [Broyden 1965](https://doi.org/10.1090%2FS0025-5718-1965-0198670-6). |
| `Broyden` | Broyden class update scheme as described in [Nocedal & Wright](http://dx.doi.org/10.1007/b98874) Chapter 6.3. |
| `DFP` | Davidon-Fletcher-Powell update strategy. |
| `FX` | Hybrid method HY2 as introduced by [Fletcher & Xu 1986](https://doi.org/10.1093/imanum/7.3.371). |
| `GNSBFGS` | Hybrid Gauss-Newton Structured BFGS method as introduced by [Zhou & Chen 2010](https://doi.org/10.1137/090748470). |
| `HessianApproximation` | Abstract class from which Hessian update strategies should subclass. |
| `HybridFixed` | Hybrid approximation that switches to the iterative scheme after a fixed number of iterations. |
| `HybridSwitchApproximation` | Hybrid strategy that switches between an iterative and a dynamic approximation. |
| `IterativeHessianApproximation` | Iterative update schemes that only use s and y values for the update. |
| `SR1` | Symmetric Rank 1 update strategy as described in [Nocedal & Wright](http://dx.doi.org/10.1007/b98874) Chapter 6.2. |
| `SSM` | Structured Secant Method as introduced by [Dennis et al. 1989](https://doi.org/10.1007/BF00962795), which is compatible with BFGS and DFP update schemes. |
| `StructuredApproximation` | Base class for structured secant methods (SSMs). |
| `TSSM` | Totally Structured Secant Method as introduced by [Huschens 1994](https://doi.org/10.1137/0804005), which uses a self-adjusting update method for the second-order term. |
Functions Summary
| Function | Description |
| --- | --- |
| `broyden_class_update` | Scale-free implementation of the Broyden class update scheme. |
Classes
- class fides.hessian_approximation.BB(init_with_hess=False)[source]
Broyden's "bad" method as introduced in [Broyden 1965](https://doi.org/10.1090%2FS0025-5718-1965-0198670-6). This is a rank 1 update strategy that does not preserve symmetry or positive definiteness.
This scheme only works with a function that returns (fval, grad)
- class fides.hessian_approximation.BFGS(init_with_hess=False)[source]
Broyden-Fletcher-Goldfarb-Shanno update strategy. This is a rank 2 update strategy that preserves symmetry and positive-semidefiniteness.
This scheme only works with a function that returns (fval, grad)
- __init__(init_with_hess=False)[source]
Create a Hessian update strategy instance
- Parameters
init_with_hess (typing.Optional[bool]) – Whether the hybrid update strategy should be initialized according to the user-provided objective function
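As an illustration of the rank-2 update this class performs, the textbook BFGS formula (Nocedal & Wright, Chapter 6.1) can be sketched in plain numpy. This is an illustrative reimplementation, not the fides internals, and `bfgs_update` is a hypothetical name:

```python
import numpy as np

def bfgs_update(B, s, y):
    # Textbook BFGS rank-2 update of a Hessian approximation B
    # (not the fides implementation). s: step in the parameters,
    # y: difference in gradients. Requires the curvature condition
    # y @ s > 0 to preserve positive definiteness.
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

B = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([2.0, 1.0])
B_new = bfgs_update(B, s, y)
```

The updated matrix satisfies the secant condition `B_new @ s == y` while remaining symmetric and positive definite, which is exactly the property the text above attributes to BFGS.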
- class fides.hessian_approximation.BG(init_with_hess=False)[source]
Broyden's "good" method as introduced in [Broyden 1965](https://doi.org/10.1090%2FS0025-5718-1965-0198670-6). This is a rank 1 update strategy that does not preserve symmetry or positive definiteness.
This scheme only works with a function that returns (fval, grad)
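The two rank-1 Broyden updates above can be sketched in numpy (formulas from Broyden 1965; the function names are illustrative, not the fides API). The "good" method updates the Hessian approximation directly, while the "bad" method updates its inverse:

```python
import numpy as np

def broyden_good(B, s, y):
    # "Good" Broyden rank-1 update of the Hessian approximation B
    return B + np.outer(y - B @ s, s) / (s @ s)

def broyden_bad(H, s, y):
    # "Bad" Broyden rank-1 update of the *inverse* approximation H
    return H + np.outer(s - H @ y, y) / (y @ y)

s = np.array([1.0, 0.0])
y = np.array([2.0, 1.0])
B_good = broyden_good(np.eye(2), s, y)  # secant condition: B_good @ s == y
H_bad = broyden_bad(np.eye(2), s, y)    # inverse secant:   H_bad @ y == s
```

Note that `B_good` is generally not symmetric, consistent with the statement above that these rank-1 schemes preserve neither symmetry nor positive definiteness.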
- class fides.hessian_approximation.Broyden(phi, init_with_hess=False)[source]
Broyden class update scheme as described in [Nocedal & Wright](http://dx.doi.org/10.1007/b98874) Chapter 6.3. This is a generalization of the BFGS/DFP methods in which the parameter \(\phi\) controls the convex combination between the two. This is a rank 2 update strategy that preserves positive-semidefiniteness and symmetry (if \(\phi \in [0,1]\)).
This scheme only works with a function that returns (fval, grad)
- Parameters
phi (float) – convex combination parameter interpolating between BFGS (phi==0) and DFP (phi==1).
- __init__(phi, init_with_hess=False)[source]
Create a Hessian update strategy instance
- Parameters
init_with_hess (typing.Optional[bool]) – Whether the hybrid update strategy should be initialized according to the user-provided objective function
- class fides.hessian_approximation.DFP(init_with_hess=False)[source]
Davidon-Fletcher-Powell update strategy. This is a rank 2 update strategy that preserves symmetry and positive-semidefiniteness.
This scheme only works with a function that returns (fval, grad)
- __init__(init_with_hess=False)[source]
Create a Hessian update strategy instance
- Parameters
init_with_hess (typing.Optional[bool]) – Whether the hybrid update strategy should be initialized according to the user-provided objective function
- class fides.hessian_approximation.FX(happ=BFGS(), hybrid_tol=0.2)[source]
- __init__(happ=BFGS(), hybrid_tol=0.2)[source]
Hybrid method HY2 as introduced by [Fletcher & Xu 1986](https://doi.org/10.1093/imanum/7.3.371). This approximation scheme employs a dynamic approximation as long as function values satisfy \(\frac{f_k - f_{k+1}}{f_k} < \epsilon\) and employs the iterative scheme applied to the last dynamic approximation if not.
This scheme only works with a function that returns (fval, grad, hess)
- Parameters
hybrid_tol (typing.Optional[float]) – switch tolerance \(\epsilon\)
- class fides.hessian_approximation.GNSBFGS(hybrid_tol=1e-06)[source]
- __init__(hybrid_tol=1e-06)[source]
Hybrid Gauss-Newton Structured BFGS method as introduced by [Zhou & Chen 2010](https://doi.org/10.1137/090748470), which combines ideas of hybrid switching methods and structured secant methods.
This scheme only works with a function that returns (res, sres)
- Parameters
hybrid_tol (float) – switching tolerance that controls switching between update methods
- class fides.hessian_approximation.HessianApproximation(init_with_hess=False)[source]
Abstract class from which Hessian update strategies should subclass
- __init__(init_with_hess=False)[source]
Create a Hessian update strategy instance
- Parameters
init_with_hess (typing.Optional[bool]) – Whether the hybrid update strategy should be initialized according to the user-provided objective function
- get_mat()[source]
Getter for the Hessian approximation.
- Return type
numpy.ndarray
- init_mat(dim, hess=None)[source]
Initializes this approximation instance and checks the dimensionality
- Parameters
dim (int) – dimension of optimization variables
hess (typing.Optional[numpy.ndarray]) – user-provided initialization
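The documented `init_mat`/`get_mat` interface can be mirrored by a small self-contained toy class (no fides import; `IdentityApproximation` is purely illustrative, and a real strategy would subclass `HessianApproximation` and add its update rule):

```python
import numpy as np

class IdentityApproximation:
    # Toy stand-in mirroring the documented init_mat/get_mat interface.

    def init_mat(self, dim, hess=None):
        # use the user-provided initialization if given, else the identity
        self._mat = np.eye(dim) if hess is None else np.asarray(hess, dtype=float)

    def get_mat(self):
        # return the current Hessian approximation
        return self._mat

happ = IdentityApproximation()
happ.init_mat(3)
```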
- class fides.hessian_approximation.HybridFixed(happ=BFGS(), switch_iteration=20)[source]
- __init__(happ=BFGS(), switch_iteration=20)[source]
Switch from a dynamic approximation to the user-provided iterative scheme after a fixed number of iterations. The iterative scheme is initialized and updated from the beginning, but is only employed after the specified number of iterations.
This scheme only works with a function that returns (fval, grad, hess)
- Parameters
switch_iteration (typing.Optional[int]) – iteration after which this approximation is used
- get_mat()[source]
Getter for the Hessian approximation.
- Return type
numpy.ndarray
- class fides.hessian_approximation.HybridSwitchApproximation(happ=BFGS())[source]
- __init__(happ=BFGS())[source]
Create a Hybrid Hessian update strategy that switches between an iterative approximation and a dynamic approximation
- Parameters
happ (fides.hessian_approximation.IterativeHessianApproximation) – iterative Hessian approximation
- get_mat()[source]
Getter for the Hessian approximation.
- Return type
numpy.ndarray
- init_mat(dim, hess=None)[source]
Initializes this approximation instance and checks the dimensionality
- Parameters
dim (int) – dimension of optimization variables
hess (typing.Optional[numpy.ndarray]) – user-provided initialization
- class fides.hessian_approximation.IterativeHessianApproximation(init_with_hess=False)[source]
Iterative update schemes that only use s and y values for update.
- class fides.hessian_approximation.SR1(init_with_hess=False)[source]
Symmetric Rank 1 update strategy as described in [Nocedal & Wright](http://dx.doi.org/10.1007/b98874) Chapter 6.2. This is a rank 1 update strategy that preserves symmetry but does not preserve positive-semidefiniteness.
This scheme only works with a function that returns (fval, grad)
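The SR1 update can be sketched in numpy (textbook formula from Nocedal & Wright, Chapter 6.2, with the standard skip safeguard; `sr1_update` is an illustrative name, not the fides API):

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    # Symmetric rank-1 update of a Hessian approximation B.
    # The update is skipped when the denominator is too small,
    # the standard SR1 safeguard against numerical blow-up.
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([0.5, 1.0])
B_new = sr1_update(B, s, y)
```

In this example the updated matrix is symmetric and satisfies the secant condition `B_new @ s == y`, yet has a negative eigenvalue, matching the statement above that SR1 preserves symmetry but not positive-semidefiniteness.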
- class fides.hessian_approximation.SSM(phi=0.0)[source]
Structured Secant Method as introduced by [Dennis et al 1989](https://doi.org/10.1007/BF00962795), which is compatible with BFGS, DFP update schemes.
This scheme only works with a function that returns (res, sres)
- class fides.hessian_approximation.StructuredApproximation(phi=0.0)[source]
- __init__(phi=0.0)[source]
This is the base class for structured secant methods (SSMs). SSMs approximate the Hessian by combining the Gauss-Newton component C(x) with an iteratively updated component that approximates the difference S between C(x) and the true Hessian.
- Parameters
phi (typing.Optional[float]) – convex combination parameter interpolating between BFGS (phi==0) and DFP (phi==1) update schemes.
- init_mat(dim, hess=None)[source]
Initializes this approximation instance and checks the dimensionality
- Parameters
dim (int) – dimension of optimization variables
hess (typing.Optional[numpy.ndarray]) – user-provided initialization
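For intuition, the Gauss-Newton component C(x) that structured methods combine with the iteratively updated second-order term can be formed from the residual sensitivities of a `(res, sres)` objective as C(x) = J(x)^T J(x). This is a generic sketch under that standard definition, not fides internals:

```python
import numpy as np

def gauss_newton_component(sres):
    # Gauss-Newton part C(x) = J^T J built from the residual
    # sensitivities J = d res / d x (shape: n_res x n_par).
    J = np.asarray(sres)
    return J.T @ J

# hypothetical 3 residuals over 2 parameters
J = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
C = gauss_newton_component(J)
```

By construction C is symmetric positive-semidefinite, which is why structured methods only need the iterative update for the (possibly indefinite) second-order remainder S.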
- class fides.hessian_approximation.TSSM(phi=0.0)[source]
Totally Structured Secant Method as introduced by [Huschens 1994](https://doi.org/10.1137/0804005), which uses a self-adjusting update method for the second order term.
This scheme only works with a function that returns (res, sres)
Functions
- fides.hessian_approximation.broyden_class_update(y, s, mat, phi=0.0)[source]
Scale-free implementation of the Broyden class update scheme.
- Parameters
y – difference in gradient
s – search direction in previous step
mat – current Hessian approximation
phi – convex combination parameter, interpolating between BFGS (phi==0) and DFP (phi==1).
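A Broyden-class step can be sketched as the convex combination of the textbook BFGS and DFP updates, matching the documented phi convention (phi=0 gives BFGS, phi=1 gives DFP). `broyden_step` is an illustrative reimplementation with the same argument order as the documented function, not the fides code itself:

```python
import numpy as np

def broyden_step(y, s, mat, phi=0.0):
    # Convex combination of the BFGS (phi=0) and DFP (phi=1)
    # updates of the Hessian approximation `mat`.
    Bs = mat @ s
    rho = 1.0 / (y @ s)
    bfgs = mat - np.outer(Bs, Bs) / (s @ Bs) + rho * np.outer(y, y)
    V = np.eye(len(s)) - rho * np.outer(y, s)
    dfp = V @ mat @ V.T + rho * np.outer(y, y)
    return (1 - phi) * bfgs + phi * dfp

B = np.eye(2)
s = np.array([1.0, 0.5])   # search direction in previous step
y = np.array([2.0, 1.0])   # difference in gradient
B_half = broyden_step(y, s, B, phi=0.5)
```

Since both endpoint updates satisfy the secant condition, every convex combination does as well, and symmetry is preserved for any phi in [0, 1].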