Bases: infpy.gp.kernel.Kernel
Changes the type of object a kernel can act on
For example, our data might be of type X, which aggregates several features:
X.vec: a numpy.array
X.is_true: a boolean
We may have a kernel, Kvec, that acts on arrays and a kernel, Kbool, that acts on boolean data. We could combine them as follows:
K = SumKernel(
AttributeExtractor( 'vec', Kvec ),
AttributeExtractor( 'is_true', Kbool )
)
The AttributeExtractor will find the vec and is_true attributes of each x and pass them to the kernels Kvec and Kbool respectively.
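The idea can be sketched in a few lines of self-contained Python. This is an illustrative stand-in, not infpy's implementation: the `Datum` class, `extract` helper and the two toy kernels are hypothetical names invented for the demo.

```python
import numpy as np

class Datum:
    """A toy data type aggregating two features, as in the example above."""
    def __init__(self, vec, is_true):
        self.vec = vec          # a numpy.array feature
        self.is_true = is_true  # a boolean feature

def extract(attr, kernel):
    """Return a kernel that acts on the named attribute of each argument."""
    return lambda x1, x2: kernel(getattr(x1, attr), getattr(x2, attr))

def k_vec(v1, v2):
    # a simple squared-exponential kernel on the array part
    return np.exp(-0.5 * np.sum((v1 - v2) ** 2))

def k_bool(b1, b2):
    # 1 if the boolean parts agree, 0 otherwise
    return 1.0 if b1 == b2 else 0.0

# combine the two attribute-extracting kernels by summing them
k_sum = lambda x1, x2: (extract('vec', k_vec)(x1, x2)
                        + extract('is_true', k_bool)(x1, x2))

a = Datum(np.array([0.0, 0.0]), True)
b = Datum(np.array([0.0, 0.0]), False)
print(k_sum(a, a))  # 2.0: identical arrays (1.0) plus matching booleans (1.0)
print(k_sum(a, b))  # 1.0: identical arrays but differing booleans
```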
Bases: object
\(\frac{d\textrm{LL}}{d\textrm{params}}\) as function of kernel parameters
Relies on the parameters already having been set, i.e. __call__ ignores its arguments.
Bases: object
GP LL as function of kernel parameters
Bases: object
A Gaussian process.
Following notation in section 2.2 of Gaussian Processes for Machine Learning by Rasmussen and Williams.
Calculate the covariance matrix of x1 against x2.
If x2 is None, calculate the symmetric covariance matrix of x1 against itself.
Calculate the derivative of the covariance matrix of x1 against x2 w.r.t. parameter i.
If x2 is None, calculate the symmetric matrix.
Predict the process’s values on the input values
@arg x_star: Prediction points
@return: ( mean, variance, LL ) where mean holds the predicted means, variance the predicted variances, and LL the log likelihood of the data for the given parameter values (i.e. not integrating over hyperparameters)
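The standard GP predictive equations from section 2.2 of Rasmussen and Williams, which a method like this implements, can be sketched in plain numpy. This is an illustrative stand-alone version with an assumed squared-exponential kernel, not infpy's code:

```python
import numpy as np

def se_kernel(X1, X2, length_scale=1.0):
    """Squared-exponential covariance between two (N, d) arrays of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X, y, X_star, noise=1e-6):
    """Return (mean, variance, LL) at the prediction points X_star."""
    K = se_kernel(X, X) + noise * np.eye(len(X))
    K_star = se_kernel(X_star, X)
    K_inv_y = np.linalg.solve(K, y)
    mean = K_star @ K_inv_y
    var = se_kernel(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
    # log marginal likelihood of the data under the current parameters
    _, logdet = np.linalg.slogdet(K)
    LL = -0.5 * (y @ K_inv_y + logdet + len(X) * np.log(2 * np.pi))
    return mean, var, LL

X = np.array([[0.0], [1.0]])
y = np.array([0.0, 1.0])
mean, var, LL = gp_predict(X, y, np.array([[0.0]]))
# at a training input the predicted mean is (almost) the training target
```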
@return: An array of points between xmin and xmax with the given step size. The shape of the array will be (N, 1) where N is the number of points as this is the format the GP code expects.
The implementation uses numpy.arange, so the last element of the array may be > xmax.
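A minimal sketch of such a range helper, assuming it extends arange past xmax to include the endpoint (hence the caveat above); `make_x_range` is a hypothetical name, not the library's:

```python
import numpy as np

def make_x_range(xmin, xmax, step=1.0):
    # reshape to (N, 1): the two-dimensional format the GP code expects;
    # because of arange's half-open behaviour, the last element may exceed xmax
    return np.arange(xmin, xmax + step, step).reshape(-1, 1)

xs = make_x_range(0.0, 2.0, 0.5)
print(xs.shape)    # (5, 1)
print(xs[-1, 0])   # 2.0
```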
Plot the GP’s predictions over the range provided
Run optimisation algorithm on GP’s kernel parameters
Uses conjugate gradient descent
Calls gp_loo_predict_i for all i and yields the return value
Creates a GP from all x in X except the i’th and predicts the held-out value...
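The leave-one-out loop described above can be sketched generically; the names and the trivial "predict the training mean" model are assumptions for the demo, not infpy's API:

```python
import numpy as np

def loo_predictions(X, y, fit_predict):
    """For each i, fit on all points but the i'th and yield the prediction
    for the held-out point."""
    n = len(X)
    for i in range(n):
        mask = np.arange(n) != i
        yield fit_predict(X[mask], y[mask], X[i:i + 1])

# toy "model": predict the mean of the training targets, ignoring the inputs
mean_model = lambda X_tr, y_tr, x_star: y_tr.mean()

X = np.arange(4.0).reshape(-1, 1)
y = np.array([0.0, 1.0, 2.0, 3.0])
preds = list(loo_predictions(X, y, mean_model))
print(preds[0])  # 2.0, the mean of the remaining targets 1, 2, 3
```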
Plot a GP’s prediction using pylab, including error bars if a variance is specified
Error bars are 2 * standard_deviation as in Gaussian Processes for Machine Learning by Rasmussen and Williams.
Plot samples from a Gaussian process.
Samples from a Gaussian process
x is the value to sample at
Similar to figure 2.2 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
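Drawing samples from a zero-mean GP amounts to drawing from a multivariate normal whose covariance matrix is built from the kernel, as in figure 2.2 of Rasmussen and Williams. A self-contained sketch (illustrative, not infpy's implementation):

```python
import numpy as np

def sample_gp(x, kernel, n_samples=3, jitter=1e-8, seed=0):
    """Draw n_samples functions from a zero-mean GP at the points x."""
    # jitter on the diagonal keeps the covariance numerically positive definite
    K = kernel(x[:, None], x[None, :]) + jitter * np.eye(len(x))
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(len(x)), K, size=n_samples)

# squared-exponential kernel on scalars, vectorised by broadcasting
se = lambda a, b: np.exp(-0.5 * (a - b) ** 2)

x = np.linspace(0.0, 5.0, 50)
samples = sample_gp(x, se)
print(samples.shape)  # (3, 50): three sampled functions over 50 points
```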
Bases: infpy.gp.kernel.Kernel
A constant kernel.
Bases: infpy.gp.kernel.Kernel
Bases: infpy.gp.kernel.Kernel
1 if x1 and x2 are equal, 0 otherwise
Bases: infpy.gp.kernel.Kernel
1 if x1 and x2 are identical, 0 otherwise
Bases: object
Base class for all Gaussian process kernels
Bases: infpy.gp.kernel.Kernel
A wrapper kernel that hides (i.e. fixes) the parameters of another kernel
To users of this class it appears as the kernel has no parameters to optimise. This can be useful when you have a mixture kernel and you only want to learn one child kernel’s parameters.
A helper function that allows the user to specify only some of the priors, parameters and dimensions arguments.
This function fills any unspecified arguments in with defaults.
It returns (params, priors, dimensions)
Bases: infpy.gp.real_kernel.RealKernel
Eq 4.17 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
Bases: infpy.gp.real_kernel.RealKernel
Eq 4.17 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
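Eq. 4.17 of Rasmussen and Williams gives the two Matern kernels most used in machine learning (nu = 3/2 and nu = 5/2), written in terms of the scaled distance r. An illustrative sketch, not infpy's implementation:

```python
import numpy as np

def matern32(r):
    # nu = 3/2: (1 + sqrt(3) r) exp(-sqrt(3) r)
    c = np.sqrt(3.0) * r
    return (1.0 + c) * np.exp(-c)

def matern52(r):
    # nu = 5/2: (1 + sqrt(5) r + 5 r^2 / 3) exp(-sqrt(5) r)
    c = np.sqrt(5.0) * r
    return (1.0 + c + c ** 2 / 3.0) * np.exp(-c)

print(matern32(0.0), matern52(0.0))  # both equal 1.0 at r = 0
```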
Bases: infpy.gp.real_kernel.RealKernel
Following 4.29 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
No trainable parameters
Bases: infpy.gp.real_kernel.RealKernel
See 4.31 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
Could be generalised to more than one dimension or for parameterisable periods
Bases: infpy.gp.piecewise_poly_kernel.PiecewisePolyKernel
Eq 4.21 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
q=0
Bases: infpy.gp.piecewise_poly_kernel.PiecewisePolyKernel
Eq 4.21 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
q=1
Bases: infpy.gp.real_kernel.RealKernel
Eq 4.21 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
Base class for specialisations for specific values of q
Will produce a sparse covariance matrix with 0’s where r > 1
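The two specialisations of eq. 4.21 used above (q = 0 and q = 1) can be sketched directly; note the compact support that makes the covariance matrix sparse, as zero is returned wherever r > 1. An illustrative sketch with j = floor(D/2) + q + 1, not infpy's code:

```python
import numpy as np

def pp_q0(r, D):
    # q = 0 case: (1 - r)_+^j, identically zero for r > 1
    j = D // 2 + 0 + 1
    return np.maximum(0.0, 1.0 - r) ** j

def pp_q1(r, D):
    # q = 1 case: (1 - r)_+^(j+1) * ((j + 1) r + 1)
    j = D // 2 + 1 + 1
    return np.maximum(0.0, 1.0 - r) ** (j + 1) * ((j + 1) * r + 1.0)

print(pp_q0(0.0, 1), pp_q0(2.0, 1))  # 1.0 at r = 0, exactly 0.0 beyond r = 1
```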
Bases: infpy.gp.kernel.Kernel
A kernel that has been pre-computed
Bases: infpy.gp.real_kernel.RealKernel
Following 4.19 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
Alpha is not a trainable parameter. The parameters are for the length scale in the r term.
Bases: infpy.gp.real_kernel.RealKernel
Following 4.19 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
The first parameter is alpha. The rest of the parameters are for the length scale in the r term.
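The rational quadratic form from eq. 4.19 of Rasmussen and Williams is simple to state in terms of the scaled distance r; as alpha grows it approaches the squared exponential. An illustrative sketch, not infpy's implementation:

```python
import numpy as np

def rational_quadratic(r, alpha):
    # eq. 4.19: (1 + r^2 / (2 alpha))^(-alpha)
    return (1.0 + r ** 2 / (2.0 * alpha)) ** (-alpha)

print(rational_quadratic(0.0, 1.0))  # 1.0 at r = 0
# for large alpha this tends to exp(-r^2 / 2), the squared exponential
print(rational_quadratic(1.0, 1e6))
```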
Bases: infpy.gp.kernel.Kernel
A kernel that is defined on \(\mathbb{R}^n\)
Calculates distance between two points in \(\mathbb{R}^n\)
Calculates distance squared between two points in \(\mathbb{R}^n\)
The distances are scaled by the length-scale parameters \(l\), i.e. \(r(x_1, x_2) = \left|\frac{x_1 - x_2}{l}\right|\).
Bases: infpy.gp.real_kernel.RealKernel
Eq. 4.30 in Gaussian Processes for Machine Learning by Rasmussen and Williams.
No trainable parameters
Bases: infpy.gp.real_kernel.RealKernel
A squared exponential kernel, \(k(x_1, x_2) = \exp\left(-\frac{r^2}{2}\right)\), where \(r(x_1, x_2) = \left|\frac{x_1 - x_2}{l}\right|\)
Bases: object
Mixes two indexable objects to provide one seamlessly indexable object
That is, len(IndexMixer(x1, x2)) == len(x1) + len(x2)
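A hypothetical minimal version of the idea: two indexable objects presented as one, with indices past the end of the first falling through to the second. Illustrative only, not infpy's implementation:

```python
class IndexMixer:
    """Present two indexable objects as one seamlessly indexable object."""

    def __init__(self, x1, x2):
        self.x1, self.x2 = x1, x2

    def __len__(self):
        return len(self.x1) + len(self.x2)

    def __getitem__(self, i):
        # indices < len(x1) come from x1, the rest from x2
        if i < len(self.x1):
            return self.x1[i]
        return self.x2[i - len(self.x1)]

mixed = IndexMixer([1, 2], ['a', 'b', 'c'])
print(len(mixed))  # 5
print(mixed[3])    # 'b', i.e. the second element of x2
```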
Bases: infpy.gp.kernel.Kernel
Kernel that mixes two other kernels, e.g. used as the base class for SumKernel
Bases: infpy.gp.sum_kernel.MixKernel
The product of two other kernels
Bases: infpy.gp.sum_kernel.MixKernel
The sum of two other kernels
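Summing or multiplying kernels means adding or multiplying their covariance values pointwise, and both operations preserve valid (positive semi-definite) kernels. A small functional sketch of the idea, not infpy's class-based API:

```python
import numpy as np

def sum_kernel(k1, k2):
    """Kernel whose value is the pointwise sum of two kernels."""
    return lambda x1, x2: k1(x1, x2) + k2(x1, x2)

def product_kernel(k1, k2):
    """Kernel whose value is the pointwise product of two kernels."""
    return lambda x1, x2: k1(x1, x2) * k2(x1, x2)

se = lambda x1, x2: np.exp(-0.5 * (x1 - x2) ** 2)  # squared exponential
const = lambda x1, x2: 2.0                          # constant kernel

k = sum_kernel(se, const)
kp = product_kernel(se, const)
print(k(0.0, 0.0))   # 3.0 = 1.0 + 2.0
print(kp(0.0, 0.0))  # 2.0 = 1.0 * 2.0
```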