taco package¶
Submodules¶
taco.chance_optimizer module¶
class taco.chance_optimizer.Optimizer(problem, p=0.01, starting_point=array([0.0, 0.0, 0.0]), pen1=None, pen2=None, factor_pen2=None, bund_mu_start=None, performance_warnings=False, numba=None, params=None)[source]¶
Bases: object
Base class for the optimization of chance-constrained problems
For a problem instance providing a dataset and two first-order oracles \(f\) and \(g\), this class is an interface for solving the chance-constrained minimization problem \(\min_x f(x)\) subject to \(\mathbb{P}[g(x,\xi) \leq 0] \geq 1 - p\)
- Parameters
problem – An instance of Problem
p (np.float64) – Safety probability threshold for the problem
starting_point (np.ndarray) – (optional) Starting point for the algorithm
pen1 (np.float64) – (optional) First penalization parameter
pen2 (np.float64) – (optional) Second penalization parameter
factor_pen2 (np.float64) – (optional) Multiplicative factor by which the second penalization parameter pen2 is increased
bund_mu_start (np.float64) – Starting value for the proximal parameter \(\mu\) of the bundle method
numba (bool) – If True, instantiate an Oracle with numba in no-python mode
performance_warnings (bool) – If True, prints numba performance warnings
params (dict) – Dictionary of parameters for the optimization process
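The safety probability threshold p bounds the allowed probability of constraint violation. As a minimal numpy sketch (an illustration only, not part of the taco API), the chance constraint \(\mathbb{P}[g(x,\xi) \leq 0] \geq 1 - p\) can be checked empirically on a sampled dataset:

```python
import numpy as np

def empirical_safety(g_values, p):
    """Empirical check of the chance constraint P[g(x, xi) <= 0] >= 1 - p,
    where g_values holds g(x, xi_i) evaluated over the sampled dataset."""
    prob_safe = np.mean(g_values <= 0.0)
    return prob_safe, prob_safe >= 1.0 - p

# Toy example: g(x, xi) = xi - 1 with xi ~ N(0, 1), so P[g <= 0] is about 0.84
rng = np.random.default_rng(0)
g_values = rng.standard_normal(100_000) - 1.0
prob_safe, feasible = empirical_safety(g_values, p=0.2)
print(feasible)  # -> True (about 0.84 >= 0.8)
```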
taco.oracle module¶
class taco.oracle.Oracle(problem, p, pen1, pen2, rho)[source]¶
Bases: object
Base class that instantiates a first-order oracle for the DC objective of the penalized chance constraint
For two input oracles \(f\) and \(g\) given through their functions and gradients, and a dataset sampled from a random variable \(\xi\), this class is an interface to compute the value and the gradient of the function \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda \left(G(x,\eta) - \text{CVaR}_p(g(x,\xi))\right)\) where \(G(x, \eta) = \eta + \frac{1}{1-p} \mathbb{E}[\max(g(x, \xi) - \eta, 0)]\)
- Parameters
problem – Instance of Problem
p (np.float64) – Real number in [0,1]. Safety probability level
pen1 (np.float64) – Penalization parameter, must be positive.
pen2 (np.float64) – Penalization parameter, must be positive.
rho (np.float64) – Smoothing parameter for the superquantile, must be positive.
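The function \(G\) above is the Rockafellar–Uryasev representation of the superquantile: minimizing \(G(x, \cdot)\) over \(\eta\) yields \(\text{CVaR}_p(g(x,\xi))\), with the minimum attained at the \(p\)-quantile. A self-contained numpy sketch of this formula on a sampled dataset (an illustration, not taco's implementation):

```python
import numpy as np

def G(eta, losses, p):
    """Rockafellar-Uryasev objective: eta + E[max(losses - eta, 0)] / (1 - p)."""
    return eta + np.mean(np.maximum(losses - eta, 0.0)) / (1.0 - p)

def cvar(losses, p):
    """CVaR_p as the minimum of G over eta, attained at the p-quantile."""
    return G(np.quantile(losses, p), losses, p)

losses = np.arange(1.0, 101.0)  # sampled values of g(x, xi): 1, 2, ..., 100
p = 0.9
print(cvar(losses, p))          # close to 95.5, the mean of the worst 10% of losses
```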
f1(x)[source]¶
Value of the convex part of the DC objective \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda G(x,\eta)\)
f2(x)[source]¶
Value of the concave part of the DC objective \(x, \eta \mapsto -\text{CVaR}_p(g(x,\xi))\)
g1(x)[source]¶
Gradient of the convex part of the DC objective \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda G(x,\eta)\)
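The methods above expose the difference-of-convex split of the penalized objective, which is recovered as the sum of a convex part and a concave part. A toy numpy sketch of that split (illustrative only: the problem data, the choices \(f(x) = \|x\|^2\) and \(g(x,\xi) = \xi^\top x\), and the placement of the factor \(\lambda\) on both terms are assumptions, not taco's code):

```python
import numpy as np

def G(eta, z, p):
    return eta + np.mean(np.maximum(z - eta, 0.0)) / (1.0 - p)

def cvar(z, p):
    return G(np.quantile(z, p), z, p)

# Toy data and parameters (ad hoc choices for illustration).
rng = np.random.default_rng(1)
xi = rng.standard_normal((1000, 3))  # sampled dataset of xi
p, mu, lam = 0.9, 10.0, 5.0

def f1(x, eta):
    """Convex part: f(x) + mu * max(eta, 0) + lam * G(x, eta)."""
    return x @ x + mu * max(eta, 0.0) + lam * G(eta, xi @ x, p)

def f2(x):
    """Concave part: -lam * CVaR_p(g(x, xi))."""
    return -lam * cvar(xi @ x, p)

x, eta = np.array([1.0, -2.0, 0.5]), 0.3
total = f1(x, eta) + f2(x)  # reassembles the full penalized objective
```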
Module contents¶
Core module. Module author: Yassine LAGUEL
class taco.BundleAlgorithm(oracle, params)[source]¶
Bases: object
Base class that combines the penalization procedure with a bundle method to solve the chance-constrained problem. It is instantiated with a first-order oracle for the DC objective and a dictionary of parameters. From time to time, the penalization parameters are increased to escape critical points of the DC objective.
- Parameters
oracle – An oracle object.
params (dict) – Python dictionary of parameters.
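Intuitively, the procedure alternates an inner run of the bundle method on the current DC objective with an outer step that increases the penalization parameter while the iterate is still infeasible. A deliberately simplified one-dimensional sketch of this schedule, with plain subgradient steps standing in for the bundle method and all numbers chosen ad hoc (not taco's algorithm):

```python
def penalized_grad(x, pen):
    """Subgradient of f(x) = x**2 plus the penalty pen * max(1 - x, 0),
    i.e. a penalized form of: minimize x**2 subject to x >= 1."""
    g = 2.0 * x
    if x <= 1.0:
        g -= pen                 # subgradient of the penalty term
    return g

x, pen, factor = 0.0, 0.5, 2.0
for _ in range(30):              # outer penalization loop
    step = 1.0 / (2.0 + pen)     # conservative step size for this penalty level
    for _ in range(500):         # inner loop (stand-in for the bundle method)
        x -= step * penalized_grad(x, pen)
    if x >= 1.0 - 1e-6:          # (near-)feasible: stop increasing the penalty
        break
    pen *= factor                # escape the current critical point
print(round(x, 3))               # -> 1.0, the constrained minimizer
```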
class taco.FastOracle(*args, **kwargs)¶
Bases: taco.oracle.FastOracle
Numba version of the class Oracle.
FastOracle works with numba in full no-python mode. It must take as input a problem instance that is a numba jitclass.
- Parameters
problem – Instance of Problem. Must be a numba jitclass.
p (np.float64) – Real number in [0,1]. Safety probability level
pen1 (np.float64) – Penalization parameter, must be positive.
pen2 (np.float64) – Penalization parameter, must be positive.
rho (np.float64) – Smoothing parameter for the superquantile, must be positive.
class_type = jitclass.FastOracle<problem: instance.jitclass.ToyProblem<a: array(float64, 1d, A), data: array(float64, 2d, A), geo_a: array(float64, 2d, A)>, sample_size: int32, p: float64, pen1: float64, pen2: float64, rho: float64>¶
class taco.Optimizer(problem, p=0.01, starting_point=array([0.0, 0.0, 0.0]), pen1=None, pen2=None, factor_pen2=None, bund_mu_start=None, performance_warnings=False, numba=None, params=None)[source]¶
Bases: object
Base class for the optimization of chance-constrained problems
For a problem instance providing a dataset and two first-order oracles \(f\) and \(g\), this class is an interface for solving the chance-constrained minimization problem \(\min_x f(x)\) subject to \(\mathbb{P}[g(x,\xi) \leq 0] \geq 1 - p\)
- Parameters
problem – An instance of Problem
p (np.float64) – Safety probability threshold for the problem
starting_point (np.ndarray) – (optional) Starting point for the algorithm
pen1 (np.float64) – (optional) First penalization parameter
pen2 (np.float64) – (optional) Second penalization parameter
factor_pen2 (np.float64) – (optional) Multiplicative factor by which the second penalization parameter pen2 is increased
bund_mu_start (np.float64) – Starting value for the proximal parameter \(\mu\) of the bundle method
numba (bool) – If True, instantiate an Oracle with numba in no-python mode
performance_warnings (bool) – If True, prints numba performance warnings
params (dict) – Dictionary of parameters for the optimization process
class taco.Oracle(problem, p, pen1, pen2, rho)[source]¶
Bases: object
Base class that instantiates a first-order oracle for the DC objective of the penalized chance constraint
For two input oracles \(f\) and \(g\) given through their functions and gradients, and a dataset sampled from a random variable \(\xi\), this class is an interface to compute the value and the gradient of the function \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda \left(G(x,\eta) - \text{CVaR}_p(g(x,\xi))\right)\) where \(G(x, \eta) = \eta + \frac{1}{1-p} \mathbb{E}[\max(g(x, \xi) - \eta, 0)]\)
- Parameters
problem – Instance of Problem
p (np.float64) – Real number in [0,1]. Safety probability level
pen1 (np.float64) – Penalization parameter, must be positive.
pen2 (np.float64) – Penalization parameter, must be positive.
rho (np.float64) – Smoothing parameter for the superquantile, must be positive.
f1(x)[source]¶
Value of the convex part of the DC objective \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda G(x,\eta)\)
f2(x)[source]¶
Value of the concave part of the DC objective \(x, \eta \mapsto -\text{CVaR}_p(g(x,\xi))\)
g1(x)[source]¶
Gradient of the convex part of the DC objective \(x, \eta \mapsto f(x) + \mu \max(\eta,0) + \lambda G(x,\eta)\)