The glab module provides a simplified interface for running Glimpse experiments on HMAX-like models. Using this interface, it is possible to:

- choose the model and its parameters,
- choose the set of images (the corpus),
- learn or generate S2 prototypes,
- extract feature vectors from the images,
- train and test an SVM classifier on those features, and
- store the experimental results to disk.
We will cover each of these points in turn, describing the usage of the relevant glab functions.
Note that the examples below assume that all functions of the glab module have been imported. This can be done using:
>>> from glimpse.glab import *
The model is chosen by specifying its type to the SetModelClass() function.
>>> from glimpse.models.ml import Model
>>> SetModelClass(Model)
Once the model is chosen, the collection of parameters can be accessed using GetParams().
>>> p = GetParams()
>>> p.num_scales = 8
If desired, the worker pool can also be specified using SetPool().
>>> from glimpse.pools import MulticorePool
>>> SetPool(MulticorePool)
To choose the set of images, call the SetCorpus() function with the path to the corpus directory.
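For example, assuming the corpus lives in a directory named "animals" (described below):

>>> SetCorpus("animals")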
The SetCorpus() function randomly selects half of the examples of each class to train the classifier, and reserves the other half for testing. Here, “animals” is the path to our corpus, which has the following structure.
animals/
  cats/
    persian.jpg
    siamese.jpg
    tabby.jpg
    burmese.jpg
  dogs/
    labrador.jpg
    poodle.jpg
    greyhound.jpg
    bulldog.jpg
Thus, our example corpus has two classes (“cats” and “dogs”), with four examples each.
At times, it may be undesirable to have the images for all classes under the same parent directory. In this case, we can specify the sub-directory for each class using the SetCorpusSubdirs() function.
>>> SetCorpusSubdirs(["animals/cats", "animals/dogs"])
Finally, it is possible to explicitly specify the training and testing splits, using the SetTrainTestSplitFromDirs() function.
>>> SetTrainTestSplitFromDirs("train-animals", "test-animals")
In this example, both directories would have the same structure as the “animals” corpus above, with one sub-directory for each object class.
The filters used at the S2 layer are defined by their kernel matrices, or prototypes. These matrices can be learned in a number of ways. A simple learning mechanism, used by Serre et al., is to imprint these prototypes from the training examples. In glab, this can be achieved using ImprintS2Prototypes().
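For instance, to imprint 10 prototypes from the training images:

>>> ImprintS2Prototypes(10)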
In this example, we have asked for 10 prototypes to be chosen from the training data.
A simple “null hypothesis” for testing the utility of prototype learning is the use of randomly generated values. Random prototypes can be generated by calling MakeUniformRandomS2Prototypes(), which generates each prototype by choosing its components independently from a uniform distribution.
As in imprinting, the integer argument specifies the number of desired prototypes. Alternatively, components can be chosen from the normal distribution using MakeNormalRandomS2Prototypes(), where the parameters of the distribution are chosen to fit examples drawn from the training images. Similarly, the distribution on prototype components can be modeled non-parametrically using MakeHistogramRandomS2Prototypes(). Finally, we can use MakeShuffledRandomS2Prototypes() to first imprint a set of prototypes, and then reorder the components of each prototype.
For debugging, note that the set of prototypes can be visualized as a set of 2D matrices (one matrix for each S1 orientation).
>>> MakeUniformRandomS2Prototypes(10)
>>> exp = GetExperiment()
>>> protos = exp.s2_prototypes
>>> from glimpse.util.gplot import Show3dArray
>>> Show3dArray(protos[0])
This example uses GetExperiment() to access the experimental data through an object-oriented interface. The list of prototype arrays is then accessible via the s2_prototypes attribute. The first prototype is then displayed as a set of matrices using the Show3dArray() function.
Using Show3dArray requires the matplotlib package.
In general, it is unnecessary to explicitly request that features are extracted, as this is automatically performed when the SVM classifier is trained.
However, it may be useful to change the layer from which features are extracted, using the SetLayer() function.
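For example, to select the C1 layer (the exact argument form may vary between versions of glab; here we assume the layer is given by name):

>>> SetLayer("C1")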
In this example, the feature vectors will be drawn from the model’s C1 layer. By default, features will be constructed from the activity of the model’s top-most layer.
To train an SVM classifier on the extracted features and then apply this classifier to the test data, use the RunSvm() function.
>>> train_acc, test_acc = RunSvm()
This function returns the classifier's accuracy on the training and testing sets.
Finally, all experimental data generated by the script (including SVM performance) can be saved to a file using StoreExperiment().
This data can be read back at a later time using LoadExperiment().
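For example, using a hypothetical file name:

>>> StoreExperiment("my-experiment.dat")
>>> exp = LoadExperiment("my-experiment.dat")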