Risk Optimization

class spqr.RiskOptimizer(loss, loss_grad, algorithm='subgradient', mode='superquantile', w_start=None, p=None, alpha=None, mu=None, max_iter=None, dual_averaging_lmbda=None, beta_smoothness=None, params=None)[source]

Base class for optimization of superquantile-based losses.

For an input oracle \(L\) given through the two functions loss and loss_grad, this class is an interface to run optimization procedures aimed at minimizing \(w \mapsto \mathrm{CVaR} \circ L(w)\). The optimization algorithm should be chosen according to the regularity of the loss.
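As a point of reference, the empirical superquantile (CVaR) at level \(p\) of a sample of losses can be sketched as the average of the worst \(1 - p\) fraction of the values. This is a minimal tail-averaging sketch, not the library's own computation, which may interpolate at the \(p\)-quantile:

```python
import numpy as np

def superquantile(losses, p=0.8):
    """Empirical superquantile (CVaR) at level p: the average of the
    worst (1 - p) fraction of the losses (tail-averaging sketch)."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil((1.0 - p) * losses.size)))        # tail size
    return losses[:k].mean()
```

For instance, at level p=0.8 on ten losses, the superquantile averages the two largest values.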

Parameters
  • loss – function associated with the oracle \(L\)

  • loss_grad – gradient associated with the oracle \(L\)

  • p – probability level of the superquantile (by default 0.8)

  • algorithm – algorithm chosen for the optimization. Allowed inputs are 'subgradient', 'dual_averaging', 'gradient', 'nesterov' and 'bfgs'. Default is 'subgradient'

  • w_start – starting point of the algorithm

  • alpha – scale parameter for the descent direction (by default computed through a line search)

  • mu – smoothing parameter associated with the CVaR

  • beta_smoothness – estimate of the smoothness constant of \(L\) (used for the accelerated gradient method)
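A concrete oracle pair could look as follows. The exact signature expected for loss and loss_grad is an assumption here (per-sample losses and per-sample gradients evaluated at a point \(w\) on data \((x, y)\)); this sketch uses a least-squares loss:

```python
import numpy as np

# Hypothetical least-squares oracle; the signature (w, x, y) and the
# per-sample outputs are assumptions for illustration, not the
# library's documented contract.
def loss(w, x, y):
    """Per-sample squared losses 0.5 * (x_i @ w - y_i)**2."""
    r = x @ w - y
    return 0.5 * r ** 2

def loss_grad(w, x, y):
    """Per-sample gradients: row i equals (x_i @ w - y_i) * x_i."""
    r = x @ w - y
    return r[:, None] * x
```

Functions of this kind would then be passed as the loss and loss_grad arguments of RiskOptimizer.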

fit(x, y, verbose_mode=False)[source]

Runs the optimization of the model on the data \((x, y)\)

Parameters
  • x (numpy.ndarray) – matrix whose rows are realizations of the random variable \(X\)

  • y (numpy.array) – vector whose coefficients are realizations of the random variable \(y\)

  • verbose_mode (bool) – if True, saves the function values at each iteration of the selected algorithm, as well as the time elapsed since the start.
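To make the default 'subgradient' setting concrete, here is a minimal sketch of subgradient descent on \(w \mapsto \mathrm{CVaR}_p(L(w))\): at each step, average the per-sample gradients over the worst \(1 - p\) fraction of the current losses. This is an illustration of the technique, not the library's actual implementation (step-size rule and tail handling are assumptions):

```python
import numpy as np

def cvar_subgradient_descent(loss, loss_grad, x, y, w0, p=0.8,
                             step=0.2, max_iter=300):
    """Subgradient sketch for minimizing the CVaR at level p of the
    per-sample losses, with a step / sqrt(t) step-size schedule."""
    w = np.asarray(w0, dtype=float).copy()
    n = len(y)
    k = max(1, int(np.ceil((1.0 - p) * n)))  # size of the worst-case tail
    for t in range(1, max_iter + 1):
        values = loss(w, x, y)
        tail = np.argsort(values)[-k:]             # indices of the k worst losses
        g = loss_grad(w, x, y)[tail].mean(axis=0)  # tail-averaged subgradient
        w -= (step / np.sqrt(t)) * g
    return w
```

Averaging the gradients over the tail is a valid subgradient of the empirical superquantile because only the worst losses contribute to it.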

predict(x)[source]

Predicts the label associated with the input x

Parameters
  • x (numpy.array) – input whose label is to be predicted

Returns

value of the prediction

score(x, y)[source]

To be implemented in a future release