profit.sur.gp.sklearn_surrogate

Module Contents

Classes

SklearnGPSurrogate

Surrogate based on the Gaussian process regressor of scikit-learn (https://github.com/scikit-learn/scikit-learn).

LinearEmbedding

Custom stationary kernel, k(X, Y) = f(X - Y), whose length scale is a matrix defining a linear embedding of the inputs.

class profit.sur.gp.sklearn_surrogate.SklearnGPSurrogate[source]

Bases: profit.sur.gp.GaussianProcess

Surrogate based on the Gaussian process regressor of scikit-learn (https://github.com/scikit-learn/scikit-learn).

model

The underlying scikit-learn model object.

Type:

sklearn.gaussian_process.GaussianProcessRegressor

train(X, y, kernel=defaults['kernel'], hyperparameters=defaults['hyperparameters'], fixed_sigma_n=base_defaults['fixed_sigma_n'], **kwargs)[source]

Trains the model on the dataset.

After initializing the model with a kernel function and initial hyperparameters, it can be trained on input data X and observed output data y by optimizing the model’s hyperparameters. This is done by minimizing the negative log likelihood.

Parameters:
  • X (ndarray) – (n, d) array of input training data.

  • y (ndarray) – (n, D) array of training output.

  • kernel (str/object) – Kernel identifier such as ‘RBF’, or the kernel object of the surrogate itself.

  • hyperparameters (dict) – Hyperparameters such as length scale, variance and noise, taken either from the given parameter, the configuration file, or inferred from the training data. The admissible hyperparameters depend on the kernel: e.g. the length scale can be a scalar, a vector with one entry per input dimension, or, for the custom LinearEmbedding kernel, a matrix.

  • fixed_sigma_n (bool) – Indicates if the data noise should be optimized or not.

  • return_hess_inv (bool) – Whether to store the inverse Hessian as the attribute hess_inv after optimization. This is important for active learning.
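
A minimal training sketch, assuming the surrogate can be instantiated without arguments; the test data below are purely illustrative:

    import numpy as np
    from profit.sur.gp.sklearn_surrogate import SklearnGPSurrogate

    # Illustrative data: n = 20 samples, d = 1 input, D = 1 output.
    X = np.linspace(0, 1, 20).reshape(-1, 1)
    y = np.sin(2 * np.pi * X) + 0.05 * np.random.randn(20, 1)

    sur = SklearnGPSurrogate()     # assumption: no constructor arguments needed
    sur.train(X, y, kernel="RBF")  # optimizes the hyperparameters internally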

post_train()[source]
add_training_data(X, y)[source]

Add training points to existing data.

Parameters:
  • X (ndarray) – Input points to add.

  • y (ndarray) – Observed output to add.

set_ytrain(ydata)[source]

Set the observed training outputs. This is important for active learning.

Parameters:

ydata (np.array) – Full training output data.
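
A sketch of how add_training_data and set_ytrain might be used together in an active-learning loop, continuing the training example above (sur, X and y as defined there; the new points are hypothetical):

    new_X = np.array([[0.25], [0.75]])   # hypothetical new inputs
    new_y = np.sin(2 * np.pi * new_X)    # hypothetical new observations
    sur.add_training_data(new_X, new_y)  # extend the stored training set

    # Overwrite the full output array, e.g. after post-processing it externally.
    sur.set_ytrain(np.vstack([y, new_y]))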

predict(Xpred, add_data_variance=True)[source]

Predicts the output at test points Xpred.

Parameters:
  • Xpred (ndarray/list) – Input points for prediction.

  • add_data_variance (bool) – Adds the data noise \(\sigma_n^2\) to the prediction variance. This is especially useful for plotting.

Returns:

a tuple containing:
  • ymean (ndarray): Predicted output values at the test input points.

  • yvar (ndarray): Diagonal of the predicted covariance matrix.

Return type:

tuple
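
A prediction sketch continuing the training example above (sur as trained there):

    Xpred = np.linspace(0, 1, 100).reshape(-1, 1)
    ymean, yvar = sur.predict(Xpred, add_data_variance=True)

    # yvar is the diagonal of the predictive covariance; e.g. a 2-sigma band:
    lower = ymean - 2 * np.sqrt(yvar)
    upper = ymean + 2 * np.sqrt(yvar)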

save_model(path)[source]

Save the SklearnGPSurrogate model to a pickle file. On loading, all attributes of the surrogate are recovered directly from the model.

Parameters:

path (str) – Path including the file name, where the model should be saved.

classmethod load_model(path)[source]

Load a saved SklearnGPSurrogate model from a pickle file and update its attributes.

Parameters:

path (str) – Path including the file name, from where the model should be loaded.

Returns:

Instantiated surrogate model.

Return type:

profit.sur.gp.sklearn_surrogate.SklearnGPSurrogate
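
A save/load round trip, continuing the sketches above; the file name is only an example:

    sur.save_model("sklearn_gp_model.pkl")
    restored = SklearnGPSurrogate.load_model("sklearn_gp_model.pkl")
    ymean2, yvar2 = restored.predict(Xpred)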

optimize(**opt_kwargs)[source]

For hyperparameter optimization, the internal optimizer of scikit-learn is used.

Currently, the inverse Hessian cannot be retrieved, which limits the effectiveness of active learning.

Parameters:

opt_kwargs – Keyword arguments passed directly to the scikit-learn optimization.

select_kernel(kernel)[source]

Get the sklearn.gaussian_process.kernels kernel by matching the given kernel identifier.

Parameters:

kernel (str) – Kernel identifier such as ‘RBF’ or, depending on the surrogate, a sum or product kernel such as ‘RBF+Matern52’.

Returns:

Scikit-learn kernel object. Currently, for sum and product kernels, the initial hyperparameters are the same for all kernels.

Return type:

sklearn.gaussian_process.kernels
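
A kernel-selection sketch, continuing the example above; the identifiers follow the description of the kernel parameter:

    rbf = sur.select_kernel("RBF")                # single kernel
    combined = sur.select_kernel("RBF+Matern52")  # sum kernel, as described above
    print(type(rbf).__name__, type(combined).__name__)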

_set_hyperparameters_from_model()[source]

Helper function to set the hyperparameter dict from the model.

It depends on whether \(\sigma_n\) is fixed. Currently, this is only stable for single kernels and not for Sum and Product kernels.

class profit.sur.gp.sklearn_surrogate.LinearEmbedding(dims, length_scale=np.array([1.0]), length_scale_bounds=(1e-05, 100000.0))[source]

Bases: sklearn.gaussian_process.kernels.StationaryKernelMixin, sklearn.gaussian_process.kernels.NormalizedKernelMixin, sklearn.gaussian_process.kernels.Kernel

Custom stationary kernel, k(X, Y) = f(X - Y), whose length scale is a matrix defining a linear embedding of the inputs.

property hyperparameter_length_scale
__call__(X, Y=None, eval_gradient=False)[source]

Evaluate the kernel k(X, Y) and, if eval_gradient=True, also its gradient with respect to the kernel hyperparameters.

__repr__()[source]

Return repr(self).
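
A construction sketch for the custom kernel; the exact convention for dims and the expected shape of length_scale (a flattened embedding matrix, per the hyperparameter description of train above) are assumptions and should be checked against the source:

    import numpy as np
    from profit.sur.gp.sklearn_surrogate import LinearEmbedding

    # Assumption: dims gives the shape of the embedding matrix and
    # length_scale holds its entries in flattened form.
    kernel = LinearEmbedding(dims=(2, 2), length_scale=np.ones(4))

    X = np.random.rand(5, 2)
    K = kernel(X)                              # kernel matrix between rows of X
    K, K_grad = kernel(X, eval_gradient=True)  # optionally also the gradient

Such a kernel object could also be passed to SklearnGPSurrogate.train via its kernel parameter.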