profit.sur.gp.gaussian_process

This module contains the backends for various Gaussian Process surrogate models.

Gaussian Processes (GPs) are a generalization of Gaussian distributions and are described by mean and covariance functions. They can be used as a non-parametric, supervised machine-learning technique for regression and classification. The advantage of GPs over other machine-learning techniques such as Artificial Neural Networks is their consistent, analytic mathematical derivation within probability theory, and therefore the intrinsic uncertainty quantification of their predictions by means of the covariance function. This makes the results of GP fits intuitive to interpret.

GPs belong, like e.g. Support Vector Machines, to the kernel methods of machine learning. The mean function is often neglected and set to zero, because most functions can be modelled with a suitable covariance function or a combination of several covariance functions. The most important kernels are the Gaussian (RBF) kernel and its generalization, the Matérn kernels.

Isotropic RBF Kernel:

\[ \begin{align} k(x, x') &= \sigma_f^2 \exp\left(-\frac{1}{2} \frac{\lvert x-x' \rvert^2}{l^2}\right) \end{align} \]
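As a sketch of this formula (a hypothetical helper, not part of the module's API), the isotropic RBF kernel can be evaluated with plain NumPy:

```python
import numpy as np

def rbf_kernel(X, Xp, sigma_f=1.0, l=1.0):
    """Isotropic RBF kernel k(x, x') = sigma_f^2 exp(-|x - x'|^2 / (2 l^2)).

    X: (n, d) array, Xp: (m, d) array; returns the (n, m) covariance matrix.
    """
    # Squared Euclidean distances between all pairs of rows
    d2 = np.sum((X[:, None, :] - Xp[None, :, :]) ** 2, axis=-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / l**2)
```

Here \(\sigma_f\) scales the prior variance, while the length scale \(l\) controls how quickly the correlation between two points decays with distance.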

Literature:

Rasmussen & Williams 2006: General Introduction to Gaussian Processes
Garnett 2014: Active Learning and Linear Embeddings
Osborne 2012: Active Learning of Hyperparameters

Module Contents

Classes

GaussianProcess

This is the base class for all Gaussian Process models.

class profit.sur.gp.gaussian_process.GaussianProcess[source]

Bases: profit.sur.sur.Surrogate

This is the base class for all Gaussian Process models.

trained

Flag that indicates if the model is already trained and ready to make predictions.

Type:

bool

fixed_sigma_n

Indicates if the data noise should be optimized or not. If an ndarray is given, its length must match the training data.

Type:

bool/float/ndarray

Xtrain

Input training points.

Type:

ndarray

ytrain

Observed output data. Vector output is supported for independent variables only.

Type:

ndarray

ndim

Dimension of input data.

Type:

int

output_ndim

Dimension of output data.

Type:

int

kernel

Kernel identifier such as ‘RBF’ or directly the (surrogate specific) kernel object. Defaults to ‘RBF’.

Type:

str/object

hyperparameters

Parameters like length-scale, variance and noise which can be optimized during training. As default, they are inferred from the training data.

Type:

dict

Default parameters:

surrogate: GPy
kernel: RBF

Default hyperparameters:

\(l\) … length scale
\(\sigma_f\) … scaling
\(\sigma_n\) … data noise

\[\begin{split} \begin{align} l &= \frac{1}{2} \overline{\lvert x - x' \rvert} \\ \sigma_f &= \overline{std(y)} \\ \sigma_n &= 0.01 \cdot \overline{max(y) - min(y)} \end{align} \end{split}\]
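The inference rules above can be sketched as follows (illustrative function and key names, not the module's actual implementation):

```python
import numpy as np

def infer_default_hyperparameters(X, y):
    """Infer default GP hyperparameters from (n, d) inputs X and outputs y.

    l       = half the mean pairwise distance of the inputs
    sigma_f = mean standard deviation of the outputs
    sigma_n = 1% of the mean output range
    """
    # Pairwise Euclidean distances (the zero self-distances are kept here for simplicity)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return {
        "length_scale": 0.5 * np.mean(dists),
        "sigma_f": np.mean(np.std(y, axis=0)),
        "sigma_n": 0.01 * np.mean(np.max(y, axis=0) - np.min(y, axis=0)),
    }
```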

pre_train(X, y, kernel=defaults['kernel'], hyperparameters=defaults['hyperparameters'], fixed_sigma_n=base_defaults['fixed_sigma_n'])[source]

Check the training data, initialize the hyperparameters and set the kernel either from the given parameter, from config or from the default values.

Parameters:
  • X – (n, d) or (n,) array of input training data.

  • y – (n, D) or (n,) array of training output.

  • kernel (str/object) – Identifier of kernel like ‘RBF’ or directly the kernel object of the specific surrogate.

  • hyperparameters (dict) – Hyperparameters such as length scale, variance and noise. Taken either from given parameter, config file or inferred from the training data. The hyperparameters can be different depending on the kernel. E.g. The length scale can be a scalar, a vector of the size of the training data, or for the custom LinearEmbedding kernel a matrix.

  • fixed_sigma_n (bool/float/ndarray) – Indicates if the data noise should be optimized or not. If an ndarray is given, its length must match the training data.

set_attributes(**kwargs)[source]
infer_hyperparameters()[source]
abstract train(X, y, kernel=defaults['kernel'], hyperparameters=defaults['hyperparameters'], fixed_sigma_n=base_defaults['fixed_sigma_n'], return_hess_inv=False)[source]

Trains the model on the dataset.

After initializing the model with a kernel function and initial hyperparameters, it can be trained on input data X and observed output data y by optimizing the model’s hyperparameters. This is done by minimizing the negative log likelihood.

Parameters:
  • X (ndarray) – (n, d) array of input training data.

  • y (ndarray) – (n, D) array of training output.

  • kernel (str/object) – Identifier of kernel like ‘RBF’ or directly the kernel object of the surrogate.

  • hyperparameters (dict) – Hyperparameters such as length scale, variance and noise. Taken either from given parameter, config file or inferred from the training data. The hyperparameters can be different depending on the kernel. E.g. The length scale can be a scalar, a vector of the size of the training data, or for the custom LinearEmbedding kernel a matrix.

  • fixed_sigma_n (bool) – Indicates if the data noise should be optimized or not.

  • return_hess_inv (bool) – Whether to set the attribute hess_inv after optimization. This is important for active learning.
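Training minimizes the negative log marginal likelihood with respect to the hyperparameters. A minimal sketch of that objective for an RBF kernel, in plain NumPy and not tied to any particular surrogate backend:

```python
import numpy as np

def negative_log_likelihood(theta, X, y):
    """Negative log marginal likelihood of a GP with an RBF kernel.

    theta = (l, sigma_f, sigma_n); X is (n, d), y is (n,).
    """
    l, sigma_f, sigma_n = theta
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = sigma_f**2 * np.exp(-0.5 * d2 / l**2) + sigma_n**2 * np.eye(n)
    L = np.linalg.cholesky(K)          # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^-1 y
    # 0.5 y^T K^-1 y + 0.5 log|K| + (n/2) log(2 pi)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * n * np.log(2 * np.pi)
```

An objective of this shape can be handed to a generic optimizer such as scipy.optimize.minimize; the concrete backends delegate the optimization to their own routines.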

abstract predict(Xpred, add_data_variance=True)[source]

Predicts the output at test points Xpred.

Parameters:
  • Xpred (ndarray/list) – Input points for prediction.

  • add_data_variance (bool) – Adds the data noise \(\sigma_n^2\) to the prediction variance. This is especially useful for plotting.

Returns:

a tuple containing:
  • ymean (ndarray): Predicted output values at the test input points.

  • yvar (ndarray): Diagonal of the predicted covariance matrix.

Return type:

tuple
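The standard GP posterior behind such a predict call can be sketched in a few lines (a self-contained NumPy example with an RBF kernel; not the module's actual implementation):

```python
import numpy as np

def gp_predict(Xtrain, ytrain, Xpred, l=0.5, sigma_f=1.0, sigma_n=0.01,
               add_data_variance=True):
    """Posterior mean and variance of a GP with an RBF kernel."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / l**2)

    K = k(Xtrain, Xtrain) + sigma_n**2 * np.eye(len(Xtrain))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytrain))  # K^-1 y
    Ks = k(Xpred, Xtrain)
    ymean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    # Diagonal of the predictive covariance
    yvar = np.diag(k(Xpred, Xpred)) - np.sum(v**2, axis=0)
    if add_data_variance:
        yvar = yvar + sigma_n**2   # add the data noise, e.g. for plotting
    return ymean, yvar
```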

abstract optimize(**opt_kwargs)[source]

Find optimized hyperparameters of the model. Optional kwargs for tweaking optimization.

Parameters:

opt_kwargs – Keyword arguments for optimization.

classmethod from_config(config, base_config)[source]

Instantiate a GP model from the configuration file with kernel and hyperparameters.

Parameters:
  • config (dict) – Only the ‘fit’ part of the base_config.

  • base_config (dict) – The whole configuration parameters.

Returns:

Instantiated surrogate.

Return type:

profit.sur.gaussian_process.GaussianProcess

select_kernel(kernel)[source]

Convert the kernel name given as a string to the kernel class object of the surrogate.

Parameters:

kernel (str) – Kernel string such as ‘RBF’ or depending on the surrogate also product and sum kernels such as ‘RBF+Matern52’.

Returns:

Custom or imported kernel object. This is the function which builds the kernel and not the calculated covariance matrix.

Return type:

object
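A minimal sketch of how such a string could be resolved into a kernel function, including a sum kernel like ‘RBF+Matern52’ (hypothetical registry and function names; the real surrogates use their backend's kernel classes):

```python
import numpy as np

# Hypothetical registry mapping identifiers to correlation functions of the
# squared distance d2 and length scale l
def rbf(d2, l):
    return np.exp(-0.5 * d2 / l**2)

def matern52(d2, l):
    r = np.sqrt(np.maximum(d2, 0.0)) / l
    return (1.0 + np.sqrt(5.0) * r + 5.0 * r**2 / 3.0) * np.exp(-np.sqrt(5.0) * r)

KERNELS = {"RBF": rbf, "Matern52": matern52}

def select_kernel(kernel):
    """Resolve 'RBF' or a sum such as 'RBF+Matern52' into a callable."""
    parts = [KERNELS[name] for name in kernel.split("+")]
    return lambda d2, l: sum(k(d2, l) for k in parts)
```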

decode_hyperparameters()[source]

Decodes the hyperparameters, since encoded values are used internally by the surrogate model.

special_hyperparameter_decoding(key, value)[source]
print_hyperparameters(prefix)[source]

Helper function to print the hyperparameter dict.

Parameters:

prefix (str) – Usually ‘Initialized’, ‘Loaded’ or ‘Optimized’ to identify the state of the hyperparameters.