This example shows how ALGOPY can be used for linear error propagation.
Consider the error model
\[
y = x + \epsilon,
\]
where \(x\) is a vector and \(\epsilon\) is a random vector that is normally distributed with zero mean and covariance matrix \(\Sigma^2\). Here \(y\) is the observed quantity and \(x\) is a real vector representing the “true” value.
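For concreteness, a draw from this error model could be simulated as follows. This is only a small sketch, not part of the example below; the true vector x_true, the noise level 0.3 and the dimension Nm are made-up values chosen to match the example further down.

import numpy

Nm = 40
x_true = numpy.linspace(0., 1., Nm)                              # hypothetical "true" vector x
Sigma2 = 0.3*numpy.eye(Nm)                                       # covariance matrix of the noise epsilon
epsilon = numpy.random.multivariate_normal(numpy.zeros(Nm), Sigma2)
y = x_true + epsilon                                             # observed quantity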
One defines some estimator \(\hat x\) for \(x\), e.g. the arithmetic mean \(\hat x = \frac{1}{N_m} \sum_{i=1}^{N_m} y_i\). We assume that the estimate \(\hat x\) is known and has an associated confidence region described by its covariance matrix \(\Sigma^2\).
The question is: What can we say about the confidence region of the function \(f(y)\) when the confidence region of \(y\) is described by the covariance matrix \(\Sigma^2\)?
For affine (linear) functions \(f(y) = Ay + b\), the approach is described in the Wikipedia article on propagation of uncertainty (http://en.wikipedia.org/wiki/Propagation_of_uncertainty). Nonlinear functions are simply linearized about the estimate \(\hat y\) of \(\mathbb E[y]\). In the vicinity of \(\hat y\), the linear model often approximates the nonlinear function quite well. To linearize the function, the Jacobian \(J(\hat y)\) of \(f\) at \(\hat y\) has to be computed, i.e.,
\[
f(y) \approx f(\hat y) + J(\hat y)\,(y - \hat y).
\]
The covariance matrix of \(z = f(y)\) is then, to first order,
\[
\operatorname{Cov}(z) \approx J(\hat y)\,\Sigma^2\,J(\hat y)^T.
\]
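This propagation rule follows directly from the linearization above: subtracting expectations and inserting into the definition of the covariance matrix gives
\[
z - \mathbb E[z] \approx J(\hat y)\,\bigl(y - \mathbb E[y]\bigr)
\quad\Longrightarrow\quad
\operatorname{Cov}(z) = \mathbb E\bigl[(z - \mathbb E[z])(z - \mathbb E[z])^T\bigr] \approx J(\hat y)\,\Sigma^2\,J(\hat y)^T.
\]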
That means if we know \(J(\hat y)\), we can approximately compute the confidence region of \(z\), provided \(f\) is sufficiently linear in the vicinity of \(\hat y\).
To compute the Jacobian one can use either the forward or the reverse mode of AD. The script below uses the forward mode; a reverse-mode sketch follows after it.
import numpy
from algopy import CGraph, Function, UTPM, dot, qr, eigh, inv, zeros
def f(y):
    # nonlinear vector-valued function mapping R^Nm to R^3
    retval = zeros((3,1), dtype=y)
    retval[0,0] = numpy.log(dot(y.T,y))
    retval[1,0] = numpy.exp(dot(y.T,y))
    retval[2,0] = numpy.exp(dot(y.T,y)) - numpy.log(dot(y.T,y))
    return retval
# D = number of Taylor coefficients (value + first derivative),
# Nm = dimension of y, P = number of directions (one per coordinate of y)
D,Nm = 2,40
P = Nm

# vectorized forward mode: y.data has shape (D, P, Nm)
y = UTPM(numpy.zeros((D,P,Nm)))
y.data[0,:] = numpy.random.rand(Nm)   # zeroth coefficients: the evaluation point \hat y
y.data[1,:] = numpy.eye(Nm)           # first coefficients: the Nm unit directions
# the first-order coefficients contain all directional derivatives;
# note that this array has shape (Nm, 3), i.e. it is the transpose of the
# mathematical Jacobian J(\hat y)
J = f(y).data[1,:,:,0]
print('Jacobian J(y) = \n', J)
# covariance matrix of the measurement noise epsilon
C_epsilon = 0.3*numpy.eye(Nm)

print(J.shape)

# propagate the covariance: C = J(\hat y) * Sigma^2 * J(\hat y)^T
# (J.T appears on the left because J is stored transposed, see above)
C = dot(J.T, dot(C_epsilon,J))

print('Covariance matrix of z: C = \n',C)
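The same Jacobian can also be obtained with the reverse mode by tracing the function on a computational graph. The following is a minimal sketch, not part of the original example: it assumes that the CGraph.jacobian convenience driver is available and expects a flat output vector, so a flattened variant f_vec of f is introduced here purely for illustration. It continues the script above and reuses Nm, y and J.

from algopy import exp, log

def f_vec(x):
    # same model as f above, but returning a flat 3-vector
    retval = zeros(3, dtype=x)
    retval[0] = log(dot(x,x))
    retval[1] = exp(dot(x,x))
    retval[2] = exp(dot(x,x)) - log(dot(x,x))
    return retval

# record the computational graph once
cg = CGraph()
x = Function(y.data[0,0].copy())      # start the trace at the point \hat y used above
z = f_vec(x)
cg.trace_off()
cg.independentFunctionList = [x]
cg.dependentFunctionList = [z]

# reverse-mode Jacobian at \hat y; expected shape (3, Nm)
J_rev = cg.jacobian(y.data[0,0])
print('forward and reverse mode agree:', numpy.allclose(J_rev, J.T))

The reverse mode needs one sweep per output component (here 3) instead of one sweep per input component (here Nm = 40), so for functions with many inputs and few outputs it is typically the cheaper way to obtain J.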