profit.al.aquisition_functions

  • simple exploration
    • variance

    • variance + penalty for proximity to the previous point

    • weighted exploration/exploitation: weight * mu + (1 - weight) * sigma

  • bayesian optimization
    • probability of improvement

    • expected improvement

  • mixed exploration and bayesian optimization
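
The weighted exploration/exploitation rule listed above can be sketched numerically. The arrays below are illustrative placeholders for surrogate predictions, not proFit's API:

```python
import numpy as np

# Hypothetical surrogate predictions at three candidate points.
mu = np.array([0.2, 0.8, 0.5])     # predicted means
sigma = np.array([0.3, 0.1, 0.6])  # predicted standard deviations

def weighted_score(mu, sigma, weight=0.5):
    # weight -> 1 favors exploitation (high mean),
    # weight -> 0 favors exploration (high uncertainty)
    return weight * mu + (1 - weight) * sigma

scores = weighted_score(mu, sigma, weight=0.5)
best = int(np.argmax(scores))  # candidate with the highest combined score
```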

Module Contents

Classes

AcquisitionFunction

Base class for acquisition functions.

SimpleExploration

Minimizes the local variance, i.e. the next points are generated where the predictive variance is highest.

ExplorationWithDistancePenalty

Enhanced variance minimization by adding an exponential penalty for neighboring candidates.

WeightedExploration

Combination of exploration and optimization.

ProbabilityOfImprovement

Maximizes the probability of improvement.

ExpectedImprovement

Maximizes the expected improvement.

ExpectedImprovement2

Simplified batch expected improvement where the first point is calculated using normal expected improvement, while the remaining points are found by minimizing the local variance.

AlternatingAF

Alternates between an exploration acquisition function and expected improvement.

class profit.al.aquisition_functions.AcquisitionFunction(Xpred, surrogate, variables, **parameters)[source]

Bases: profit.util.base_class.CustomABC

Base class for acquisition functions.

Parameters:
  • Xpred (np.ndarray) – Matrix of possible training points.

  • surrogate (profit.sur.Surrogate) – Surrogate.

  • variables (profit.util.variable.VariableGroup) – Variables.

  • parameters (dict) – Miscellaneous parameters for the specified function. E.g. ‘exploration_factor’.

labels
al_parameters
EPSILON = 1e-12
set_al_parameters(**kwargs)[source]
calculate_loss(*args)[source]

Calculate the loss of the acquisition function.

find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.

_find_next_candidates(batch_size, *loss_args)[source]
normalize(value, min=None)[source]
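
The role of `find_next_candidates` can be illustrated with a standalone greedy selection loop over a candidate grid. This is an assumed simplification for illustration, not proFit's actual implementation:

```python
import numpy as np

# Greedy batch selection: evaluate a loss over all candidates in Xpred
# and repeatedly pick the minimizer, excluding already-chosen points.
def find_next_candidates(Xpred, loss_fn, batch_size):
    chosen, remaining = [], list(range(len(Xpred)))
    for _ in range(batch_size):
        losses = [loss_fn(Xpred[i]) for i in remaining]
        best = remaining[int(np.argmin(losses))]
        chosen.append(best)
        remaining.remove(best)  # avoid picking the same candidate twice
    return Xpred[chosen]

Xpred = np.linspace(0.0, 1.0, 5).reshape(-1, 1)   # candidate grid
picked = find_next_candidates(Xpred, lambda x: (x[0] - 0.5) ** 2, batch_size=2)
```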
class profit.al.aquisition_functions.SimpleExploration(Xpred, surrogate, variables, use_marginal_variance=se_defaults['use_marginal_variance'], **parameters)[source]

Bases: AcquisitionFunction

Minimizes the local variance, i.e. the next points are generated where the predictive variance is highest.

calculate_loss()[source]

Calculate the loss of the acquisition function.
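
A minimal numeric sketch of this idea (assumed form, not proFit's exact code): using the negative predictive variance as the loss means that minimizing the loss selects the candidate where the surrogate is most uncertain.

```python
import numpy as np

variance = np.array([0.05, 0.40, 0.10])  # hypothetical surrogate variances
loss = -variance                          # exploration loss
next_index = int(np.argmin(loss))         # most uncertain candidate
```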

class profit.al.aquisition_functions.ExplorationWithDistancePenalty(Xpred, surrogate, variables, use_marginal_variance=edp_defaults['use_marginal_variance'], weight=edp_defaults['weight'])[source]

Bases: SimpleExploration

Enhanced variance minimization by adding an exponential penalty for neighboring candidates.

Variables:

weight (float): Exponential penalty factor $c_1$ in $penalty = 1 - \exp(c_1 \cdot |X_{pred} - X_{last}|)$.

calculate_loss()[source]

Calculate the loss of the acquisition function.
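
A sketch of the distance penalty on a 1D grid. All values are illustrative, and the exponent is assumed to be negative so that the penalty vanishes at the last evaluated point and approaches 1 far away from it:

```python
import numpy as np

c1 = 10.0                                # illustrative penalty factor
X_pred = np.array([0.1, 0.5, 0.9])       # candidate points
X_last = 0.5                             # last evaluated point
penalty = 1 - np.exp(-c1 * np.abs(X_pred - X_last))

variance = np.array([0.3, 0.3, 0.1])     # hypothetical surrogate variances
loss = -variance * penalty               # candidates near X_last are suppressed
next_index = int(np.argmin(loss))        # picks the distant high-variance point
```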

class profit.al.aquisition_functions.WeightedExploration(Xpred, surrogate, variables, weight=we_defaults['weight'], use_marginal_variance=we_defaults['use_marginal_variance'])[source]

Bases: AcquisitionFunction

Combination of exploration and optimization.

Variables:

weight (float): Factor to favor maximization of the target function over exploration.

calculate_loss(mu)[source]

Calculate the loss of the acquisition function.

find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.

class profit.al.aquisition_functions.ProbabilityOfImprovement(Xpred, surrogate, variables, **parameters)[source]

Bases: AcquisitionFunction

Maximizes the probability of improvement. See https://math.stackexchange.com/questions/4230985/probability-of-improvement-pi-acquisition-function-for-bayesian-optimization

calculate_loss(mu)[source]

Calculate the loss of the acquisition function.

find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.
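
The standard probability-of-improvement formula from the Bayesian-optimization literature (see the linked reference; proFit's exact parametrization may differ) is $PI(x) = \Phi\left(\frac{\mu(x) - f_{best} - \xi}{\sigma(x)}\right)$ with $\Phi$ the standard normal CDF:

```python
import math

def probability_of_improvement(mu, sigma, f_best, xi=0.0):
    # z-score of the (shifted) predicted improvement over the best value so far
    z = (mu - f_best - xi) / sigma
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

pi = probability_of_improvement(mu=1.2, sigma=0.5, f_best=1.0)
```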

class profit.al.aquisition_functions.ExpectedImprovement(Xpred, surrogate, variables, exploration_factor=ei_defaults['exploration_factor'], find_min=ei_defaults['find_min'])[source]

Bases: AcquisitionFunction

Maximizes the expected improvement. See https://krasserm.github.io/2018/03/21/bayesian-optimization/

To be able to execute this function with batches of data, some simplifications are made: the optimization part (the prediction mean) is calculated only once, for the first point. Thereafter, the data is assumed to coincide with the prediction. For the subsequent points in the batch, only the variance part is calculated, as this does not require an evaluation of the function.

SIGMA_EPSILON = 1e-10
calculate_loss(improvement)[source]

Calculate the loss of the acquisition function.

mu_part()[source]
sigma_part()[source]
find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.
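
For reference, the standard single-point expected-improvement formula from the linked reference, which the batch simplifications described above build on: $EI = (\mu - f_{best})\,\Phi(z) + \sigma\,\phi(z)$ with $z = (\mu - f_{best})/\sigma$:

```python
import math

def expected_improvement(mu, sigma, f_best):
    if sigma == 0.0:
        # no uncertainty: improvement is deterministic
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    return (mu - f_best) * cdf + sigma * pdf

ei = expected_improvement(mu=1.2, sigma=0.5, f_best=1.0)
```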

class profit.al.aquisition_functions.ExpectedImprovement2(Xpred, surrogate, variables, exploration_factor=ei2_defaults['exploration_factor'], find_min=ei2_defaults['find_min'])[source]

Bases: AcquisitionFunction

Simplified batch expected improvement where the first point is calculated using normal expected improvement, while the remaining points are found using the local-variance minimization (exploration) acquisition function.

calculate_loss()[source]

Calculate the loss of the acquisition function.

find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.
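
A sketch of the described batch strategy with hypothetical predictions: the first point is chosen by expected improvement, the remaining ones by highest predictive variance. The data and the selection details are illustrative assumptions:

```python
import math

def ei(mu, sigma, f_best):
    # standard expected improvement for a single point
    z = (mu - f_best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - f_best) * cdf + sigma * pdf

mu = [0.2, 1.1, 0.9]      # hypothetical predicted means
sigma = [0.4, 0.1, 0.6]   # hypothetical standard deviations
f_best = 1.0

first = max(range(3), key=lambda i: ei(mu[i], sigma[i], f_best))
# remaining batch points: fall back to pure exploration (highest variance)
rest = max((i for i in range(3) if i != first), key=lambda i: sigma[i])
batch = [first, rest]
```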

class profit.al.aquisition_functions.AlternatingAF(Xpred, surrogate, variables, use_marginal_variance=ae_defaults['use_marginal_variance'], exploration_factor=ae_defaults['exploration_factor'], find_min=ae_defaults['find_min'], alternating_freq=ae_defaults['alternating_freq'])[source]

Bases: AcquisitionFunction

Alternates between an exploration acquisition function and expected improvement, switching after a set number of iterations.

Parameters:
  • Xpred (np.ndarray) – Matrix of possible training points.

  • surrogate (profit.sur.Surrogate) – Surrogate.

  • variables (profit.util.variable.VariableGroup) – Variables.

  • parameters (dict) – Miscellaneous parameters for the specified function. E.g. ‘exploration_factor’.

al_parameters
find_next_candidates(batch_size)[source]

Find the next training input points which minimize the loss/maximize improvement.
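
A sketch of an alternation schedule. The function name and the exact switching rule (every `alternating_freq` active-learning iterations) are assumptions inferred from the constructor parameters, not confirmed proFit behavior:

```python
def active_af(iteration, alternating_freq=2):
    # phase 0 -> exploration, phase 1 -> expected improvement
    phase = (iteration // alternating_freq) % 2
    return "exploration" if phase == 0 else "expected_improvement"

schedule = [active_af(i) for i in range(6)]
```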