This example shows how ALGOPY can be used for linear error propagation.
Consider the error model

$$y = x + \epsilon,$$

where $x$ is a real vector representing the "true" value, $\epsilon$ is a random vector that is normally distributed with zero mean and covariance matrix $\Sigma^2$, and $y$ is the observed quantity.
One defines some estimator $\hat{x}$ for $x$, e.g. the arithmetic mean $\hat{x} = \frac{1}{N_m} \sum_{i=1}^{N_m} y_i$. We assume that the confidence region of the estimate $\hat{x}$ is known and is described by its covariance matrix

$$\mathbb{E}\big[(\hat{x} - \mathbb{E}[\hat{x}])(\hat{x} - \mathbb{E}[\hat{x}])^T\big].$$
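For instance (a hypothetical scalar setup, not part of the original example), for $N_m$ i.i.d. observations the arithmetic mean has covariance $\Sigma^2 / N_m$:

import numpy

# hypothetical setup: N_m noisy scalar observations of a "true" value x
rng = numpy.random.default_rng(0)
x_true, sigma, N_m = 3.0, 0.5, 1000
y = x_true + sigma * rng.standard_normal(N_m)

x_hat = y.mean()                 # arithmetic-mean estimator of x
var_x_hat = sigma**2 / N_m       # covariance (here: variance) of x_hat for i.i.d. noise
print(x_hat, var_x_hat)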
The question is: what can we say about the confidence region of the function value $f(y)$ when the confidence region of $y$ is described by the covariance matrix $\Sigma^2$?
For affine (linear) functions

$$z = f(y) = A y + b,$$

the exact propagation is described in the Wikipedia article http://en.wikipedia.org/wiki/Propagation_of_uncertainty . Nonlinear functions are simply linearized about the estimate $\hat{y}$ of $\mathbb{E}[y]$; in the vicinity of $\hat{y}$, the linear model often approximates the nonlinear function quite well. To linearize the function, the Jacobian $J(\hat{y})$ of $f$ at $\hat{y}$ has to be computed, i.e.:

$$z = f(y) \approx f(\hat{y}) + J(\hat{y})\,(y - \hat{y}).$$
The covariance matrix of $z$ is defined as

$$\Sigma_z^2 = \mathbb{E}\big[(z - \mathbb{E}[z])(z - \mathbb{E}[z])^T\big] \approx J(\hat{y})\,\Sigma^2\,J(\hat{y})^T,$$

where the approximation follows by substituting $z - \mathbb{E}[z] \approx J(\hat{y})(y - \hat{y})$ from the linearization above.
That means that if we know $J(\hat{y})$, we can approximately compute the confidence region of $z = f(y)$, provided $f$ is sufficiently linear in the vicinity of $\hat{y}$.
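As a quick sanity check (a minimal sketch, not part of the original example: the two-dimensional function, the point $\hat{y}$, and the noise level are arbitrary choices), one can compare $J(\hat{y})\,\Sigma^2\,J(\hat{y})^T$ with the empirical covariance of $f(y)$ over many noise samples; for small $\Sigma^2$ the two agree closely:

import numpy

rng = numpy.random.default_rng(42)
y_hat = numpy.array([1.0, 2.0])            # hypothetical estimate
Sigma2 = 0.01 * numpy.eye(2)               # small noise, so the linearization is accurate

def f(y):
    # simple nonlinear map R^2 -> R^2, chosen only for illustration
    return numpy.array([numpy.log(y @ y), numpy.exp(y[0] * y[1])])

# Jacobian of f at y_hat, worked out by hand for this particular f
r2 = y_hat @ y_hat
e = numpy.exp(y_hat[0] * y_hat[1])
J = numpy.array([[2 * y_hat[0] / r2, 2 * y_hat[1] / r2],
                 [y_hat[1] * e,      y_hat[0] * e]])

C_lin = J @ Sigma2 @ J.T                   # linear error propagation

# Monte Carlo estimate of Cov(f(y)) for comparison
samples = numpy.array([f(rng.multivariate_normal(y_hat, Sigma2))
                       for _ in range(50000)])
C_mc = numpy.cov(samples.T)

print(C_lin)
print(C_mc)    # close to C_lin; the gap grows as Sigma2 grows (linearization error)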
To compute the Jacobian $J(\hat{y})$, one can use the forward or the reverse mode of AD. The following code uses the forward mode, propagating all $N_m$ Cartesian directions through $f$ at once:
import numpy
from algopy import UTPM, dot, zeros

def f(y):
    retval = zeros((3,1), dtype=y)
    retval[0,0] = numpy.log(dot(y.T,y))
    retval[1,0] = numpy.exp(dot(y.T,y))
    retval[2,0] = numpy.exp(dot(y.T,y)) - numpy.log(dot(y.T,y))
    return retval

D, Nm = 2, 40    # D = 2 Taylor coefficients (zeroth + first order), Nm = number of observations
P = Nm           # propagate P = Nm directions at once

# initialize y as a first-order UTPM instance: the zeroth-order coefficients
# hold the point of evaluation, the first-order ones the Cartesian directions
y = UTPM(numpy.zeros((D, P, Nm)))
y.data[0, :] = numpy.random.rand(Nm)
y.data[1, :] = numpy.eye(Nm)

# the first-order coefficients of f(y) hold the transposed Jacobian:
# J has shape (Nm, 3) with J[p, i] = d f_i / d y_p
J = f(y).data[1, :, :, 0]
print('Jacobian J(y) = \n', J)
print(J.shape)

# covariance matrix of the observations y
C_epsilon = 0.3 * numpy.eye(Nm)

# linear error propagation: since J holds the transposed Jacobian,
# dot(J.T, dot(C_epsilon, J)) evaluates J(y) C_epsilon J(y)^T
C = dot(J.T, dot(C_epsilon, J))
print('Covariance matrix of z: C = \n', C)
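The same Jacobian can also be assembled with the reverse mode. Below is a minimal sketch using algopy's tracing API (CGraph, Function and the cg.gradient driver); tracing each scalar component of $f$ separately is my own arrangement, not the original example's. With one reverse sweep per output component, three sweeps suffice here, which is attractive when the number of outputs (3) is much smaller than the number of inputs (40):

import numpy
from algopy import CGraph, Function, dot, exp, log

Nm = 40
y0 = numpy.random.rand(Nm)

# the three scalar components of f, written with algopy's elementary functions
components = [lambda y: log(dot(y, y)),
              lambda y: exp(dot(y, y)),
              lambda y: exp(dot(y, y)) - log(dot(y, y))]

rows = []
for fi in components:
    cg = CGraph()                        # record the computation of one component
    y = Function(y0)
    z = fi(y)
    cg.trace_off()
    cg.independentFunctionList = [y]
    cg.dependentFunctionList = [z]
    rows.append(cg.gradient(y0))         # one reverse sweep per output component

J = numpy.array(rows)                    # shape (3, Nm): row i is the gradient of f_i
C_epsilon = 0.3 * numpy.eye(Nm)
C = numpy.dot(J, numpy.dot(C_epsilon, J.T))   # J C_epsilon J^T, as before
print('Covariance matrix of z: C = \n', C)

Each reverse sweep costs a small constant multiple of one function evaluation, so the full Jacobian is obtained with three sweeps instead of the forty directions the forward-mode version propagates.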