Linear Machines and Trainers
Machines are one of the core components of Bob. They represent statistical models or other functions defined by parameters that can be learnt or manually set. The simplest of Bob's machines is a bob.learn.linear.Machine. This package contains the definition of this class as well as trainers that can learn linear machine parameters from data.
Linear machines
Linear machines execute the simple operation $y = \mathbf{W}^T x$, where $y$ is the output vector, $x$ is the input vector and $\mathbf{W}$ is a matrix (2D array) stored in the machine. The input vector should be composed of double-precision floating-point elements. The output will also be in double-precision. Here is how to use a bob.learn.linear.Machine:
>>> W = numpy.array([[0.5, 0.5], [1.0, 1.0]], 'float64')
>>> W
array([[ 0.5,  0.5],
       [ 1. ,  1. ]])
>>> machine = bob.learn.linear.Machine(W)
>>> machine.shape
(2, 2)
>>> x = numpy.array([0.3, 0.4], 'float64')
>>> y = machine(x)
>>> y
array([ 0.55,  0.55])
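To make the connection with the operation $y = \mathbf{W}^T x$ explicit, the same output can be reproduced with plain numpy (a quick check, not part of the original example):
>>> numpy.allclose(numpy.dot(W.T, x), y) # the machine computes W^T x
True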
As was shown in the above example, the way to pass data through a machine is to call its bob.learn.linear.Machine.forward() method, for which the __call__ method is an alias.
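Both entry points produce exactly the same output; for instance (a quick check, not part of the original example):
>>> numpy.array_equal(machine.forward(x), machine(x)) # __call__ dispatches to forward()
True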
The first thing to notice about machines is that they can be stored to and retrieved from bob.io.base.HDF5File objects. To save the above-mentioned machine to a file, just use the machine's bob.learn.linear.Machine.save() command. Because several machines can be stored in the same bob.io.base.HDF5File, we let the user open the file and set it up before the machine can write to it:
>>> myh5_file = bob.io.base.HDF5File('linear.hdf5', 'w')
>>> #do other operations on myh5_file to set it up, optionally
>>> machine.save(myh5_file)
>>> del myh5_file #close
You can load the machine again in a similar way:
>>> myh5_file = bob.io.base.HDF5File('linear.hdf5')
>>> reloaded = bob.learn.linear.Machine(myh5_file)
>>> numpy.array_equal(machine.weights, reloaded.weights)
True
The shape of a bob.learn.linear.Machine (see bob.learn.linear.Machine.shape) indicates the size of the input vector that is expected by this machine and the size of the output vector it produces, in a tuple format like (input_size, output_size):
>>> machine.shape
(2, 2)
A bob.learn.linear.Machine also supports pre-setting normalization vectors that are applied to every input $x$. You can set a subtraction factor $s$ and a division factor $d$, so that the actual input fed to the matrix $\mathbf{W}$ is $x' = (x - s) \oslash d$. The variables $s$ and $d$ are vectors that must have the same size as the input vector $x$. The operator $\oslash$ indicates an element-wise division. By default, $s = 0$ and $d = 1$:
>>> machine.input_subtract
array([ 0.,  0.])
>>> machine.input_divide
array([ 1.,  1.])
To set a new value for $s$ or $d$, just assign the desired machine property:
>>> machine.input_subtract = numpy.array([0.5, 0.8])
>>> machine.input_divide = numpy.array([2.0, 4.0])
>>> y = machine(x)
>>> y
array([-0.15, -0.15])
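The output can again be checked against the formula $y = \mathbf{W}^T ((x - s) \oslash d)$ with plain numpy (a sketch for illustration, not part of the original example):
>>> x_n = (x - machine.input_subtract) / machine.input_divide # normalize the input first
>>> numpy.allclose(numpy.dot(x_n, machine.weights), y) # then apply the weight matrix
True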
Note
In the event you save a machine that has the subtraction and/or division factor set, these vectors are saved and restored automatically without user intervention.
Linear machine trainers
Next, we examine the available ways to train a bob.learn.linear.Machine so that it can do something useful for you.
Principal component analysis
PCA [1] is one way to train a bob.learn.linear.Machine. The associated Bob class is bob.learn.linear.PCATrainer, as the training procedure mainly relies on a singular value decomposition. PCA belongs to the category of unsupervised learning algorithms, which means that the training data is not labelled. Therefore, the training set can be represented by a set of features stored in a container. Using Bob, this container is a 2D numpy.ndarray.
>>> data = numpy.array([[3,-3,100], [4,-4,50], [3.5,-3.5,-50], [3.8,-3.7,-100]], dtype='float64')
>>> print(data)
[[   3.     -3.    100. ]
 [   4.     -4.     50. ]
 [   3.5    -3.5   -50. ]
 [   3.8    -3.7  -100. ]]
Once the training set has been defined, the overall procedure to train a bob.learn.linear.Machine with a bob.learn.linear.PCATrainer is simple and shown below. Please note that the concepts remain very similar for most of the other trainers and machines.
>>> trainer = bob.learn.linear.PCATrainer() # Creates a PCA trainer
>>> [machine, eig_vals] = trainer.train(data) # Trains the machine with the given data
>>> print(machine.weights) # The weights of the returned (linear) Machine after the training procedure
[[ 0.002 -0.706 -0.708]
 [-0.002  0.708 -0.706]
 [-1.    -0.003 -0.   ]]
Next, input data can be projected using this learned projection matrix $\mathbf{W}$:
>>> e = numpy.array([3.2,-3.3,-10], 'float64')
>>> print(machine(e))
[ 9.999 0.47 0.092]
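Since the training relies on a singular value decomposition of the mean-subtracted data, the same projection can be reproduced with plain numpy, up to the sign of each component. A sketch of this equivalence (assuming, consistent with the values above, that the trainer stores the training mean in the machine's input_subtract):
>>> mu = data.mean(axis=0) # training mean, removed before the SVD
>>> U, s, Vt = numpy.linalg.svd(data - mu, full_matrices=False)
>>> numpy.allclose(numpy.abs(Vt.T), numpy.abs(machine.weights)) # same basis, up to sign
True
>>> numpy.allclose(numpy.abs(numpy.dot(e - mu, Vt.T)), numpy.abs(machine(e))) # same projection, up to sign
True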
Linear discriminant analysis
LDA [2] is another way to train a bob.learn.linear.Machine. The associated Bob class is bob.learn.linear.FisherLDATrainer. In contrast to PCA [1], LDA [2] is a supervised technique. Furthermore, the training data should be organized differently: it is required to be a list of 2D numpy.ndarray's, one for each class.
>>> data1 = numpy.array([[3,-3,100], [4,-4,50], [40,-40,150]], dtype='float64')
>>> data2 = numpy.array([[3,6,-50], [4,8,-100], [40,79,-800]], dtype='float64')
>>> data = [data1,data2]
Once the training set has been defined, the procedure to train the bob.learn.linear.Machine with LDA is very similar to the one for PCA, as shown below. Note that with two classes LDA can produce at most one discriminative direction, which is why the second eigenvalue below is zero and the machine is resized to a one-dimensional output space.
>>> trainer = bob.learn.linear.FisherLDATrainer()
>>> [machine,eig_vals] = trainer.train(data) # Trains the machine with the given data
>>> print(eig_vals)
[ 13.10097786 0. ]
>>> machine.resize(3,1) # Make the output space of dimension 1
>>> print(machine.weights) # The new weights after the training procedure
[[ 0.609]
 [ 0.785]
 [ 0.111]]
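The learned direction should separate the two classes: on this particular data, samples from the first class project to larger values than samples from the second. A quick sanity check (illustrative only, not part of the original example):
>>> p1 = numpy.array([machine(u) for u in data1]) # project each sample of class 1
>>> p2 = numpy.array([machine(u) for u in data2]) # project each sample of class 2
>>> bool(p1.mean() > p2.mean()) # the projected class means are well apart
True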
Whitening
This is generally used for i-vector preprocessing.
Let's consider a 2D array of data used to train the whitening, and a sample to be whitened:
>>> data = numpy.array([[ 1.2622, -1.6443, 0.1889], [ 0.4286, -0.8922, 1.3020], [-0.6613, 0.0430, 0.6377], [-0.8718, -0.4788, 0.3988], [-0.0098, -0.3121,-0.1807], [ 0.4301, 0.4886, -0.1456]])
>>> sample = numpy.array([1, 2, 3.])
The trainer is initialised as follows:
>>> t = bob.learn.linear.WhiteningTrainer()
Then, the training and projection are done as follows:
>>> m = t.train(data)
>>> whitened_sample = m.forward(sample)
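A quick way to sanity-check the result (a sketch, assuming the trainer learns a covariance-based whitening of the mean-centered data): after applying the learned weights, the off-diagonal entries of the training set's scatter matrix should vanish.
>>> centered = data - data.mean(axis=0) # remove the training mean
>>> proj = numpy.dot(centered, m.weights) # apply the whitening transform
>>> scatter = numpy.dot(proj.T, proj) # scatter matrix of the projected data
>>> numpy.allclose(scatter, numpy.diag(numpy.diagonal(scatter)), atol=1e-6) # off-diagonals vanish
True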
Within-Class Covariance Normalisation
This can also be used for i-vector preprocessing. Let's first put the training data into a list of numpy arrays, one per class.
>>> data = [numpy.array([[ 1.2622, -1.6443, 0.1889], [ 0.4286, -0.8922, 1.3020]]), numpy.array([[-0.6613, 0.0430, 0.6377], [-0.8718, -0.4788, 0.3988]]), numpy.array([[-0.0098, -0.3121,-0.1807], [ 0.4301, 0.4886, -0.1456]])]
The initialisation of the trainer is done as follows:
>>> t = bob.learn.linear.WCCNTrainer()
Then, the training and projection are done as follows:
>>> m = t.train(data)
>>> wccn_sample = m.forward(sample)
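Similarly to whitening, WCCN is built so that the within-class scatter becomes proportional to the identity in the projected space. A hedged sanity check (illustrative only; it assumes equal class sizes, as in this example):
>>> resid = numpy.vstack([d - d.mean(axis=0) for d in data]) # per-class residuals
>>> proj = numpy.dot(resid, m.weights) # apply the WCCN transform
>>> scatter = numpy.dot(proj.T, proj) # pooled within-class scatter, projected
>>> numpy.allclose(scatter, numpy.diag(numpy.diagonal(scatter)), atol=1e-6) # off-diagonals vanish
True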
[1] http://en.wikipedia.org/wiki/Principal_component_analysis
[2] http://en.wikipedia.org/wiki/Linear_discriminant_analysis