Holds all Gaussian Process classes, which contain all the information required for a Gaussian Process to work properly.
Bases: object
Gaussian Process regression class. Holds all information needed to perform GP regression.
Parameters:
Detailed descriptions of the fields of this class:
| Data | Type/Default | Explanation |
|---|---|---|
| x | array([]) | inputs |
| t | array([]) | targets |
| n | 0 | size of training data |
| mean | 0 | mean of the data |
| **Settings:** | | |
| **Covariance:** | | |
| covar | None | covariance function |
| **Caching of covariance quantities:** | | |
| alpha | None | cached alpha |
| L | None | chol(K) |
| Nlogtheta | 0 | total number of hyperparameters for the set kernel etc., which, if available, will be used for predictions |
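For orientation, a minimal NumPy sketch of what these fields correspond to (all names below are illustrative stand-ins, not pygp's internals):

```python
import numpy as np

# Toy data standing in for the x/t fields listed above
x = np.linspace(0, 5, 20)[:, None]   # inputs, [N x D]
t = np.sin(x).ravel()                # targets
n = x.shape[0]                       # size of training data
mean = t.mean()                      # mean of the data

# covar, alpha, and L start as None; they are filled in lazily once a
# covariance function is set and the covariances are (re)computed.
```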
Calculate the log Marginal likelihood for the given logtheta.
Parameters:
Denotes which hyperparameters shall be optimized. For example,
Ifilter = [0,1,0]
means that only the second hyperparameter is optimized.
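As a sketch of what this computes, the textbook log marginal likelihood together with how an Ifilter mask acts on a gradient (generic NumPy, not pygp's API; the gradient values are made up for illustration):

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve

def lml(K, t):
    # Standard GP log marginal likelihood for a zero-mean GP:
    # -0.5 * t' K^{-1} t - 0.5 * log|K| - n/2 * log(2*pi)
    n = t.shape[0]
    L = cholesky(K, lower=True)
    alpha = cho_solve((L, True), t)
    return (-0.5 * t @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2.0 * np.pi))

K = np.eye(3) + 0.5                  # a toy SPD covariance matrix
t = np.array([0.1, -0.2, 0.3])
print(lml(K, t))

# Ifilter masks which hyperparameters an optimizer may move:
Ifilter = np.array([0, 1, 0])
grad = np.array([0.3, -1.2, 0.7])    # hypothetical LML gradient
masked_grad = grad * Ifilter         # -> [0.0, -1.2, 0.0]
```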
Returns the gradient of the log Marginal likelihood for the given hyperparameters hyperparams.
Parameters:
Returns the data [x, y] currently set for this GP.
Return the Cholesky decomposition L of the covariance matrix K, and the corresponding alpha:

L = chol(K)
alpha = solve(L, t)

return [covar_struct] = get_covariances(hyperparam)
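A minimal sketch of this caching pattern (generic NumPy/SciPy, not pygp's actual implementation):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def get_covariances(K, t):
    # Factor K once and reuse L and alpha everywhere they are needed.
    L = cholesky(K, lower=True)                 # L = chol(K)
    alpha = solve_triangular(L, t, lower=True)  # alpha = solve(L, t)
    return L, alpha

# With these, t' K^{-1} t == alpha' alpha and log|K| == 2*sum(log(diag(L))),
# which is everything the log marginal likelihood needs.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5.0 * np.eye(5)                   # a random SPD matrix
t = rng.standard_normal(5)
L, alpha = get_covariances(K, t)
```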
Parameters:
Plot the current state of the model.
Predict mean and variance for the given inputs.
Parameters:
output : output dimension for prediction (default 0)
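The underlying computation follows the standard GP predictive equations; a sketch (generic formulas, not pygp's exact code path):

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve, solve_triangular

def predict(K, Ks, Kss_diag, t):
    # K:        [N x N] training covariance
    # Ks:       [N x M] train/test cross-covariance
    # Kss_diag: [M] prior variances at the test inputs
    # t:        [N] training targets
    L = cholesky(K, lower=True)
    alpha = cho_solve((L, True), t)             # K^{-1} t
    mean = Ks.T @ alpha                         # predictive mean
    v = solve_triangular(L, Ks, lower=True)     # L^{-1} Ks
    var = Kss_diag - np.sum(v * v, axis=0)      # predictive variance
    return mean, var
```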
setData(x, t) with Parameters:
x : inputs [N x D]
y : targets/outputs [N x d] (note: a d-dimensional data structure only makes sense for GPLVM)
Module for composite Gaussian process models that combine multiple GPs into one model.
Bases: pygp.gp.gp_base.GP
Class to bundle one or more GPs for joint optimization of hyperparameters.
Parameters:
Returns the log Marginal likelihood for the given logtheta and the LML_kwargs:
Returns the log Marginal likelihood for the given logtheta.
Parameters:
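Since the bundled GPs are independent given the shared hyperparameters, the joint log marginal likelihood reduces to a sum over the members; a sketch (the gp.lml(hyperparams) interface is an assumption for illustration):

```python
def group_lml(gps, hyperparams):
    # Joint LML of independent GPs sharing one hyperparameter set
    # is the sum of the members' individual LMLs.
    return sum(gp.lml(hyperparams) for gp in gps)
```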
Predict mean and variance for each bundled GP.
Parameters:
Return: Array as follows:
[[1st_predictions_mean, 2nd, ..., nth_predictions_mean],
[1st_predictions_var, 2nd, ..., nth_predictions_var]]
See pygp.gp.basic_gp.GP for individual prediction outputs.
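A sketch of how that layout is assembled (assuming each member GP's predict returns a (mean, var) pair for the same test inputs, which is an assumption about the interface):

```python
import numpy as np

def group_predict(gps, hyperparams, xstar):
    preds = [gp.predict(hyperparams, xstar) for gp in gps]
    means = [m for (m, v) in preds]
    variances = [v for (m, v) in preds]
    return np.array([means, variances])  # row 0: means, row 1: variances
```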
Set inputs x and outputs y with Parameters:
Base class for Gaussian process latent variable models. This is not ready for release yet, but is used by the gpasso model.
Bases: pygp.gp.gp_base.GP
Derived class from GP offering GPLVM-specific functionality.
Calculate the log Marginal likelihood for the given logtheta.
Parameters:
Denotes which hyperparameters shall be optimized. For example,
Ifilter = [0,1,0]
means that only the second hyperparameter is optimized.
Returns the log Marginal likelihood for the given logtheta.
Parameters:
Run PCA, retrieving the first (components) principal components. Returns [s0, eig, w0], where s0 are the factors, eig the eigenvalues, and w0 the weights.
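A sketch of such a PCA initialization via the SVD (a generic implementation; pygp's routine may differ in scaling conventions):

```python
import numpy as np

def pca(Y, components):
    # Y: [N x D] observed data; returns factors, eigenvalues, weights.
    Ym = Y - Y.mean(axis=0)                  # center the data
    U, S, Vt = np.linalg.svd(Ym, full_matrices=False)
    eig = (S ** 2) / Y.shape[0]              # eigenvalues of the covariance
    w0 = Vt[:components].T                   # weights (principal directions)
    s0 = Ym @ w0                             # factors (latent coordinates)
    return s0, eig[:components], w0
```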
Class for Gaussian Process regression with arbitrary likelihoods. Commonly, EP is used to obtain a Gaussian approximation to the likelihood function.
Bases: pygp.gp.gp_base.GP
Gaussian Process class with an arbitrary likelihood (likelihood), which will be approximated using an EP approximation.
Calculate the EP parameters. K: plain kernel matrix; g: entries [0,1] hold the natural-parameter representation, entry [2] the 0th moment for the LML.
[L, Alpha] = getCovariances(): a specialized, overridden version of getCovariance (gpr.py); here, EP updates are employed.
Update a kernel matrix K using the EP approximation: [K, t, C0] = updateEP(K, logthetaL). logthetaL: likelihood hyperparameters; K: new effective kernel matrix; t: new means of the training targets; C0: 0th moments.
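Once EP has converged, the site parameters can be folded into an equivalent regression problem, which matches the [K, t, C0] outputs described above; a sketch of that final conversion (the EP site loop itself is omitted, and the variable names are illustrative):

```python
import numpy as np

def ep_to_regression(K, tau_site, nu_site):
    # Standard identity: a converged EP approximation is equivalent to
    # GP regression with per-point noise 1/tau_site and pseudo-targets
    # nu_site / tau_site (a sketch, not pygp's code).
    K_eff = K + np.diag(1.0 / tau_site)   # new effective kernel matrix
    t_eff = nu_site / tau_site            # new means of training targets
    return K_eff, t_eff
```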