Gaussian Process Package

Holds all Gaussian Process classes, which hold all the information needed for a Gaussian Process to work properly.

class pygp.gp.gp_base.GP(covar_func=None, likelihood=None, x=None, y=None)

Bases: object

Gaussian Process regression class. Holds all information required for GP regression.

Parameters:

covar_func : pygp.covar
The covariance function, which calculates the covariance of the outputs.
x : [double]
Training inputs (may be high-dimensional, depending on which covariance function is chosen). Note: x must be of shape (-1, d).
y : [double]
Training targets of shape (-1, 1).
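
A minimal construction sketch, assuming a squared-exponential ARD covariance class SqexpCFARD in pygp.covar.se (the class name and its constructor argument are assumptions; check your pygp version):

import numpy as np
from pygp.gp.gp_base import GP
from pygp.covar.se import SqexpCFARD   # assumed class name

x = np.linspace(0, 10, 20).reshape(-1, 1)   # inputs must have shape (-1, d)
y = np.sin(x)                               # targets must have shape (-1, 1)

covar = SqexpCFARD(n_dimensions=1)          # assumed constructor argument
gp = GP(covar_func=covar, x=x, y=y)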

Detailed descriptions of the fields of this class:

Data       Type/Default  Explanation
x          array([])     inputs
t          array([])     targets
n          0             size of training data
mean       0             mean of the data

Settings:

Covariance:
covar      None          covariance function

Caching of covariance terms:
alpha      None          cached alpha
L          None          chol(K)
Nlogtheta  0             total number of hyperparameters for the set kernel etc., which, if available, will be used for predictions
LML(hyperparams, priors=None)

Calculate the log marginal likelihood for the given hyperparameters.

Parameters:

hyperparams : {'covar': CF_hyperparameters, ...}
The hyperparameters for the log marginal likelihood.
priors : [pygp.priors]
The prior beliefs for the hyperparameter values.
Ifilter : [bool]
Denotes which hyperparameters shall be optimized. For example, Ifilter = [0,1,0] means that only the second hyperparameter is optimized.

kw_args :
All other arguments, explicitly annotated when necessary.
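
A hedged usage sketch, continuing the construction example above; the number of covariance hyperparameters (here an amplitude and one length scale, both in log space) depends on the chosen covariance function:

import numpy as np
# 'covar' maps to the covariance hyperparameters in log space.
hyperparams = {'covar': np.log([1.0, 1.0])}
lml = gp.LML(hyperparams)   # depending on the pygp version, this may be the
                            # negative log marginal likelihood, for minimization
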
LMLgrad(hyperparams, priors=None, **kw_args)

Returns the gradient of the log marginal likelihood for the given hyperparameters.

Parameters:

hyperparams : {'covar': CF_hyperparameters, ...}
The hyperparameters with respect to which the gradient is computed.
priors : [pygp.priors]
The prior beliefs for the hyperparameter values; their gradient contributions are included.
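
Because LMLgrad returns analytic gradients, a central finite-difference comparison against LML is a quick sanity check. This sketch assumes the gp and hyperparams objects from the examples above, and that LMLgrad returns a dict keyed like hyperparams:

import numpy as np

def check_lml_grad(gp, hyperparams, key='covar', eps=1e-6):
    # Compare the analytic gradient with central finite differences of LML.
    analytic = gp.LMLgrad(hyperparams)[key]   # assumed to be a dict of arrays
    numeric = np.zeros_like(hyperparams[key])
    for i in range(len(hyperparams[key])):
        hp_plus = {k: np.array(v, copy=True) for k, v in hyperparams.items()}
        hp_minus = {k: np.array(v, copy=True) for k, v in hyperparams.items()}
        hp_plus[key][i] += eps
        hp_minus[key][i] -= eps
        numeric[i] = (gp.LML(hp_plus) - gp.LML(hp_minus)) / (2 * eps)
    return np.max(np.abs(analytic - numeric))   # should be close to zero
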
getData()

Returns the data [x, y] currently set for this GP.

get_covariances(hyperparams)

Return the Cholesky decomposition L and the weight vector alpha for the kernel matrix K:

L     = chol(K)
alpha = solve(L, t)

return [covar_struct] = get_covariances(hyperparams)

Parameters:

hyperparams : dict
The hyperparameters for the Cholesky decomposition.
_x, _y : [double]
Input _x and output _y for the Cholesky decomposition. If one or both are set, no caching is allowed.
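
In plain numpy/scipy terms, the cached quantities correspond to the standard identities below. This is a sketch of the math, not pygp's internal code, and the covariance call covar.K(theta, x) is an assumed API:

import numpy as np
from scipy.linalg import cholesky, cho_solve

K = covar.K(hyperparams['covar'], x)                      # kernel matrix (assumed API)
L = cholesky(K + 1e-8 * np.eye(K.shape[0]), lower=True)   # jitter for numerical stability
alpha = cho_solve((L, True), y)                           # alpha = K^{-1} y via chol(K)
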
plot(hyperparams)

Plot the current state of the model.

predict(hyperparams, xstar, output=0, var=True)

Predict mean and variance for the given Parameters:

hyperparams : {}
Hyperparameters in log space.
xstar : [double]
Prediction inputs.
var : boolean
Return the predicted variance.
interval_indices : [int || bool]
Either a numpy array-like of boolean indicators, or a numpy array-like of integer indices, denoting which x indices of the data to predict from.
output : int
Output dimension for prediction (default 0).
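
The prediction follows the standard GP regression equations. A hedged numpy sketch building on the L and alpha computed above (covar.K and covar.Kdiag are assumed APIs):

import numpy as np
from scipy.linalg import solve_triangular

Kstar = covar.K(hyperparams['covar'], x, xstar)   # cross-covariance (assumed API)
mean = Kstar.T.dot(alpha)                         # E[f*] = K*^T K^{-1} y
v = solve_triangular(L, Kstar, lower=True)
kss = covar.Kdiag(hyperparams['covar'], xstar)    # prior variances (assumed API)
var = kss - np.sum(v ** 2, axis=0)                # Var[f*] = k** - K*^T K^{-1} K*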

setData(x, y)

setData(x, y) with Parameters:

x : inputs [N x D]

y : targets/outputs [N x d]. Note: a d-dimensional target structure only makes sense for GPLVM.
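
For example, one-dimensional data must still be reshaped into column form before calling setData (a small sketch):

import numpy as np
x = np.linspace(0, 10, 50).reshape(-1, 1)       # (N, D) with D = 1
y = np.sin(x) + 0.1 * np.random.randn(50, 1)    # (N, 1) targets
gp.setData(x, y)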

Grouping GP regression classes

Module for composite Gaussian process models that combine multiple GPs into one model.

class pygp.gp.composite.GroupGP(GPs=None)

Bases: pygp.gp.gp_base.GP

Class to bundle one or more GPs for joint optimization of hyperparameters.

Parameters:

GPs : [gpr.GP]
Array holding all GP instances to be optimized together.
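
A hedged sketch of bundling two GPs that share a single covariance function, so one hyperparameter set is fit jointly across both data sets (x1, y1, x2, y2 are placeholder training sets, and covar and hyperparams are as in the earlier examples):

from pygp.gp.gp_base import GP
from pygp.gp.composite import GroupGP

# Two GPs over different data sets sharing one covariance function;
# GroupGP accumulates their log marginal likelihoods for joint optimization.
gp1 = GP(covar_func=covar, x=x1, y=y1)
gp2 = GP(covar_func=covar, x=x2, y=y2)
group = GroupGP(GPs=[gp1, gp2])
joint_lml = group.LML(hyperparams)
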
LML(hyperparams, **LML_kwargs)

Returns the log marginal likelihood for the given logtheta and the LML_kwargs:

logtheta : [double]
Array of hyperparameters which define the covariance function.
LML_kwargs : lml, dlml, clml, sdlml, priors, Ifilter
See gpr.GP.LML.
LMLgrad(hyperparams, **lml_kwargs)

Returns the gradient of the log marginal likelihood for the given hyperparameters.

Parameters:

hyperparams : {'covar': CF_hyperparameters, ...}
The hyperparameters with respect to which the gradient is computed.
priors : [pygp.priors]
The prior beliefs for the hyperparameter values.
predict(*args, **kwargs)

Predict mean and variance for each GP, given the following Parameters:

Parameters:

hyperparams : {}
Hyperparameters in log space.
xstar : [double]
Prediction inputs.
var : boolean
Return the predicted variance.
output : int
Output dimension for prediction (default 0).

Return: Array as follows:

[[1st_predictions_mean, 2nd, ..., nth_predictions_mean],
 [1st_predictions_var, 2nd, ..., nth_predictions_var]]

See pygp.gp.gp_base.GP for individual prediction outputs.
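
A short usage sketch of unpacking the grouped predictions according to the return format above, continuing the group, hyperparams, and xstar names from the earlier sketches:

means, variances = group.predict(hyperparams, xstar)
# means[i] and variances[i] belong to the i-th bundled GP
for m, v in zip(means, variances):
    print(m.shape, v.shape)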

setData(x, y)

Set inputs x and outputs y with Parameters:

x : [double]
Training inputs.
y : [double]
Training targets.
rescale_dim : int
Dimensions to be rescaled (default: all real dimensions).
process : boolean
Subtract the mean and rescale the inputs.

Class for Gaussian process classification using EP

Base class for Gaussian process latent variable models. This is not yet ready for release, but is used by the gpasso model.

class pygp.gp.gplvm.GPLVM(gplvm_dimensions=None, **kw_args)

Bases: pygp.gp.gp_base.GP

Derived class from GP offering GPLVM-specific functionality.

LML(hyperparams, priors=None, **kw_args)

Calculate the log marginal likelihood for the given hyperparameters.

Parameters:

hyperparams : {'covar': CF_hyperparameters, ...}
The hyperparameters for the log marginal likelihood.
priors : [lnpriors]
The prior beliefs for the hyperparameter values.
Ifilter : [bool]
Denotes which hyperparameters shall be optimized. For example, Ifilter = [0,1,0] means that only the second hyperparameter is optimized.

kw_args :
All other arguments, explicitly annotated when necessary.
LMLgrad(hyperparams, priors=None, **kw_args)

Returns the gradient of the log marginal likelihood for the given hyperparameters.

Parameters:

hyperparams : {'covar': CF_hyperparameters, ...}
The hyperparameters with respect to which the gradient is computed.
priors : [lnpriors]
The prior beliefs for the hyperparameter values.
pygp.gp.gplvm.PCA(Y, components)

Run PCA, retrieving the first (components) principal components. Returns [s0, eig, w0], where s0 are the factors and w0 the weights.
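
A minimal numpy sketch of the same computation, mirroring the [s0, eig, w0] return convention from the docstring (an illustration, not pygp's exact implementation):

import numpy as np

def pca(Y, components):
    # Center the data, then take the leading principal directions via SVD.
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    w0 = Vt[:components].T                      # weights (loadings)
    s0 = Yc.dot(w0)                             # factors (latent coordinates)
    eig = (S[:components] ** 2) / Y.shape[0]    # eigenvalues of the (biased) sample covariance
    return s0, eig, w0

The factors s0 are commonly used to initialize the latent positions of a GPLVM.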

Class for Gaussian Process regression with arbitrary likelihoods. Commonly, EP is used to obtain a Gaussian approximation to the likelihood function.

class pygp.gp.gprEP.GPEP(likelihood=None, Nep=3, *argin, **kwargin)

Bases: pygp.gp.gp_base.GP

Gaussian Process class with an arbitrary likelihood (likelihood), which will be approximated using an EP approximation.

epComputeParams(K, KI, g)

Calculate the EP parameters.

K : plain kernel matrix
g : [0,1]: natural parameter representation; [2]: 0th moment for the LML

getCovariances(logtheta)

[L, Alpha] = getCovariances(): a special overridden version of getCovariances (gpr.py); here, EP updates are employed.

updateEP(K, logthetaL=None)

Update a kernel matrix K using the EP approximation:

[K, t, C0] = updateEP(K, logthetaL)

logthetaL : likelihood hyperparameters
t : new means of training targets
K : new effective kernel matrix
C0 : 0th moments
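
EP keeps each site approximation in natural-parameter form. A small sketch of the moment/natural-parameter conversion that the g representation above refers to (an illustration, not pygp's exact code):

import numpy as np

def moments_to_natural(mu, sigma2):
    # Natural parameters of a Gaussian: precision tau and precision-mean nu.
    tau = 1.0 / sigma2
    nu = mu / sigma2
    return nu, tau

def natural_to_moments(nu, tau):
    # Inverse mapping back to mean and variance.
    sigma2 = 1.0 / tau
    mu = nu / tau
    return mu, sigma2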
