(list of str) Name for each object class, with indices corresponding to the values in the labels attribute.
(1D ndarray of int) Class label for each input image.
(list of str) Path for each input image.
(1D ndarray of bool) Mask indicating if each image is in the training set, as specified by user.
(tuple of BaseState) Activation maps for each image, organized by (image, layer, scale, y, x).
(Model) The Glimpse model.
(Params) Parameters for the Glimpse model.
(1D ndarray of bool) Mask indicating if each image is in the training set, as used for prototype learning. This is None if corpus.training_set is given, or if prototypes are not derived from training data.
Parameters and results for classifier evaluation.
The evaluator creates feature vectors on the fly from activation maps in extractor.activation, based on the values in the evaluation.layers list. Thus, the set of feature vectors is not explicitly stored. The method for building these features is specified by the user via the feature_builder argument. See TrainAndTestClassifier() and CrossValidateClassifier().
Layers from which features are extracted.
Outcome of evaluation. This may contain one or more measurements of classifier accuracy, AUC, etc.
(1D ndarray of bool) Mask indicating if each image is in the training set, as used during evaluation. This is None if either corpus.training_set or extractor.training_set is given, or if a fixed training set is not used during evaluation (e.g., if cross-validation is used).
Results and settings for an experiment.
Input image data (exactly one).
(list of EvaluationData) Classification performance on feature data (zero or more).
Feature extraction (exactly one).
Set the verbosity of log output.
Parameters: flag (bool) – Whether to enable verbose logging.
Add a model to an existing experiment.
If model is passed, it is stored in the experiment’s extractor.model attribute. Otherwise, the parameters are created or updated and used to create a new Model object.
Parameters:
All remaining keyword arguments are treated as model parameters and overwrite values in the (created or passed) params argument. This allows the set of parameters to be specified in shorthand as
>>> SetModel(param1=1, param2=2)
without creating a Params object.
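The keyword-override pattern above can be sketched in plain Python. Here Params and set_model are hypothetical stand-ins for the real parameter object and SetModel(), used only to illustrate how keyword arguments overwrite parameter values:

```python
# Minimal sketch of the keyword-override pattern used by SetModel.
# "Params" is a hypothetical stand-in for the real parameter object.
class Params(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

def set_model(params=None, **kw):
    params = params if params is not None else Params()
    for key, value in kw.items():
        # Remaining keyword arguments overwrite values on the params object.
        setattr(params, key, value)
    return params

p = set_model(param1=1, param2=2)
```

This is why a Params object need not be constructed by the caller: any missing object is created, and keyword values always take precedence.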
Read images from the corpus directory.
This function assumes that each sub-directory contains images for exactly one object class, with a different object class for each sub-directory. Training and testing subsets are chosen automatically.
Parameters:
See also
Read images from per-class corpus sub-directories.
This function assumes that each sub-directory contains images for exactly one object class, with a different object class for each sub-directory. Training and testing subsets are chosen automatically.
Parameters:
See also
Read images and training information from the corpus directory.
This function assumes that train_dir and test_dir have the same set of sub-directories. Each sub-directory should contain images for exactly one object class, with a different object class for each sub-directory.
Parameters:
See also
Create a set of S2 prototypes, and use them for this experiment.
Parameters:
If algorithm is a function, it will be called as
>>> algorithm(num_prototypes, model, make_training_exp, pool, progress)
where model is the experiment’s hierarchical model. The argument make_training_exp is a function that takes no arguments and returns the training experiment (i.e., an ExperimentData object containing only training images and labels). Calling this function has the side effect that the original experiment’s extractor.training_set attribute will be set if corpus.training_set is empty.
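A custom algorithm matching the documented call signature could look like the following sketch. The kernel shape is an assumption for illustration; a real algorithm would use make_training_exp to sample prototype values from training images rather than drawing them at random:

```python
import numpy as np

def random_prototypes(num_prototypes, model, make_training_exp, pool, progress):
    """Hypothetical prototype algorithm with the documented signature.

    Draws prototype values at random instead of learning them; the kernel
    shape (orientations, height, width) is an assumption for illustration.
    """
    rng = np.random.RandomState(0)
    kshape = (4, 7, 7)
    return rng.uniform(size=(num_prototypes,) + kshape)

protos = random_prototypes(10, None, None, None, None)
```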
If algorithm has a locations attribute, it is assumed to contain a list of num_prototypes x 4 arrays, with the list indexed by kernel width and each array indexed by prototype. Each row of an array contains (image index, scale offset, y-offset, x-offset). The algorithm records image indices relative to the training set; this function rewrites those indices relative to the full corpus.
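The index rewriting can be illustrated with a small sketch: a boolean training mask maps training-relative image indices back to corpus-relative ones.

```python
import numpy as np

# Training mask over a five-image corpus; True marks training images.
training_set = np.array([True, False, True, True, False])
# Corpus index of each training image: [0, 2, 3].
train_to_full = np.flatnonzero(training_set)

# Two location rows of (image index, scale offset, y-offset, x-offset),
# with image indices given relative to the training set.
locs = np.array([[1, 0, 5, 5],
                 [2, 1, 3, 4]])
# Rewrite image indices relative to the full corpus.
locs[:, 0] = train_to_full[locs[:, 0]]
```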
Compute the model activity for all images in the experiment.
Parameters:
Evaluate extracted features using a fixed train/test split.
Parameters:
Creates a new entry in the experiment’s evaluation list, and sets the feature_builder, classifier, training_predictions, training_score, score_func, predictions, and score keys in its results dictionary.
Evaluate extracted features using cross-validation.
Parameters:
Creates a new entry in the experiment’s evaluation list, and sets the feature_builder, cross_validate, classifier, score_func, and score keys in its results dictionary. The number of folds can be deduced from the number of score values.
Create feature vectors from the activation maps for a set of images.
Parameters:
Return type: 2D ndarray of float
Returns: 1D feature vector for each image.
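A minimal sketch of this flattening, assuming each image's activation maps are given as a list of arrays (the shapes here are illustrative only):

```python
import numpy as np

def make_features(activations):
    """Build one 1D feature vector per image by flattening and
    concatenating that image's activation maps (sketch only)."""
    return np.array([np.concatenate([np.ravel(a) for a in maps])
                     for maps in activations])

# Four images, each with a 2x3x3 map and a length-5 map -> 23 features each.
acts = [[np.zeros((2, 3, 3)), np.ones(5)] for _ in range(4)]
features = make_features(acts)
```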
Compute image features as a histogram over location and scale.
Parameters:
Return type: 2D array of float
Returns: Feature vectors.
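One way to pool responses over location and scale into a histogram is sketched below; the bin count and value range are assumptions, not the library's actual defaults:

```python
import numpy as np

def histogram_features(maps, bins=10):
    """Sketch: pool each image's responses over location and scale into a
    normalized histogram (bin count and value range are assumptions)."""
    out = []
    for m in maps:
        hist, _ = np.histogram(m, bins=bins, range=(0.0, 1.0))
        out.append(hist.astype(float) / m.size)
    return np.array(out)

rng = np.random.RandomState(0)
feats = histogram_features([rng.rand(3, 8, 8) for _ in range(2)])
```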
Learn prototypes by imprinting from training images.
Image locations from which samples were drawn.
Learn prototypes by shuffling a set of imprinted prototypes.
Create prototypes by sampling components uniformly.
Upper limit of uniform distribution.
Lower limit of uniform distribution.
Learn prototypes as centroids of C1 patches using k-Means.
Learn patch models by meta-feature weighted k-Means clustering.
See also
Number of samples with which to train the regression model.
Learn patch models by object-mask weighted k-Means clustering.
See also
Weight added for all patches.
Directory containing object masks.
Look up the names of all available prototype algorithms.
Return type: list of str
Returns: Names of all known prototype algorithms, any of which can be passed to ResolveAlgorithm().
Returns the filename for each image in the corpus.
Returns the class name for each image in the corpus.
Returns the model parameters for the experiment.
Return the number of S2 prototypes in the model.
Parameters: kwidth (int) – Index of kernel shape.
Return the image location from which a prototype was imprinted.
This requires that the prototypes were learned by imprinting.
Parameters:
Returns: Location information in the format (image index, scale, y-offset, x-offset), where scale and the y- and x-offsets identify the S2 unit from which the prototype was “imprinted”.
Return type: 4-element array of int
Return an S2 prototype from the experiment.
Parameters:
Get the image patch used to create a given imprinted prototype.
Parameters:
Return type: 2D array of float
Returns: Image data used to construct the given imprinted prototype.
Find the S2 unit with maximum response for a given prototype and image.
Parameters:
Return type: 3-tuple of int
Returns: S2 unit, given as (scale, y, x).
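Finding the maximally responding unit amounts to an argmax over per-scale response maps, as in this sketch (the maps here are stand-ins for real S2 responses):

```python
import numpy as np

def best_match(maps):
    """Sketch: find the S2 unit with maximum response, where `maps` is a
    list of per-scale 2D response maps; returns (scale, y, x)."""
    best_val, best_loc = -np.inf, None
    for scale, m in enumerate(maps):
        y, x = np.unravel_index(np.argmax(m), m.shape)
        if m[y, x] > best_val:
            best_val, best_loc = m[y, x], (scale, y, x)
    return best_loc

maps = [np.zeros((4, 4)), np.zeros((2, 2))]
maps[1][1, 0] = 5.0
loc = best_match(maps)
```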
Returns image layer for a given image and scale.
Parameters:
Return type: 2D array of float
Returns: Image data.
Returns the experiment’s training set.
The extractor determines the training set by checking the experiment’s corpus.training_set attribute, and then randomly constructing a set if needed. The constructed set is stored in the experiment’s extractor.training_set attribute.
The evaluator determines the training set by checking the experiment’s extractor.training_set attribute, then its corpus.training_set, and then randomly constructing a set if needed. The constructed set is stored in the training_set attribute of the experiment’s new evaluation record created by the evaluator.
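The lookup order described above amounts to a simple fallback chain, sketched here with hypothetical names:

```python
def resolve_training_set(extractor_set, corpus_set, make_random_set):
    """Sketch of the evaluator's fallback chain for the training set."""
    if extractor_set is not None:
        return extractor_set
    if corpus_set is not None:
        return corpus_set
    # Neither attribute is set: construct (and record) a random split.
    return make_random_set()

chosen = resolve_training_set(None, [True, False], lambda: [False, True])
```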
Parameters: evaluation (int) – Index of the evaluation record to use.
Return type: 1D array of bool
Returns: Mask array indicating which images are in the training set.
Get information about classifier predictions.
Parameters:
Return type: list of 3-tuples of str
Returns: Filename, true label, and predicted label for each image in the set.
Returns the model layers from which features were extracted.
Parameters: evaluation (int) – Index of the evaluation record to use.
Return type: list of str
Returns: Names of layers used for evaluation.
Returns the results of a model evaluation.
Parameters: evaluation (int) – Index of the evaluation record to use.
Return type: glimpse.util.data.Data
Returns: Result data, with attributes that depend on the method of evaluation. In general, the feature_builder, score, and score_func attributes will be available.
Plot the S2 activity for a given image.
Parameters:
Plot the S2 activity and image data for a given image.
This shows the image in the background, with the S2 activity on top.
Parameters:
Plot the location that best matches a given S2 prototype.
This shows the image in the background, with a red box over the region eliciting maximal response.
Parameters:
Plot the image region used to construct a given imprinted prototype.
This shows the image in the background, with a red box over the imprinted region.
Parameters:
Plot the prototype activation.
There is one plot for each orientation band.
Parameters:
Plot the C1 activation for a given image.
There is one plot for each orientation band.
Parameters:
Plot the C1 activation for a given image.
This shows the image in the background, with the activation plotted on top. There is one plot for each orientation band.
Parameters:
Plot the S1 activation for a given image.
There is one plot for each orientation band.
Parameters:
Plot the S1 activation for a given image.
This shows the image in the background, with the activation plotted on top. There is one plot for each orientation band.
Parameters:
Plot the S1 Gabor kernels.
There is one plot for each orientation band.
Indicates that an error occurred while processing an Experiment.
Read directory contents.
Read list of sub-directories.
Parameters: dir_path (str) – Filesystem path of the directory to read.
Return type: list of str
Returns: List of sub-directories in the directory.
Read list of files.
Parameters: dir_path (str) – Filesystem path of the directory to read.
Return type: list of str
Returns: List of files in the directory.
Ignore hidden directory entries (i.e., those starting with ‘.’).
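The filter described above can be expressed in one line:

```python
def visible_entries(entries):
    """Drop hidden directory entries (names starting with '.')."""
    return [e for e in entries if not e.startswith('.')]

names = visible_entries(['.git', 'cat.png', '.DS_Store', 'dog.png'])
```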
Create list of paths and class labels for a set of image directories.
Parameters:
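A sketch of the assumed behavior, with one object class per directory and an injected directory listing so it runs without real files (make_paths_and_labels is a hypothetical name, not the library's):

```python
import os

def make_paths_and_labels(class_dirs, listdir=os.listdir):
    """Sketch (assumed behavior): one object class per directory; returns a
    flat list of image paths and a parallel list of integer class labels."""
    paths, labels = [], []
    for label, d in enumerate(class_dirs):
        for name in sorted(listdir(d)):
            paths.append(os.path.join(d, name))
            labels.append(label)
    return paths, labels

# Inject a fake directory listing so the sketch runs without real files.
paths, labels = make_paths_and_labels(
    ['cats', 'dogs'], listdir=lambda d: ['1.png', '2.png'])
```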
Resolve layer names to LayerSpec objects.
This is an internal function.
Parameters: layers (str or list of str) – One or more model layers to compute.
Return type: list of LayerSpec
Returns: Resolved layer specs.
Choose a subset of instances, such that the resulting corpus is balanced.
A balanced corpus has the same number of instances for each class.
Parameters:
Return type: 1D array of bool
Returns: Mask of chosen instances.
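A deterministic sketch of such a balancing mask (the real function may choose instances at random; here we simply keep the first k of each class, where k is the size of the smallest class):

```python
import numpy as np

def balance_mask(labels):
    """Sketch: keep the first k instances of each class, where k is the
    size of the smallest class, so every class has equal count."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    k = counts.min()
    mask = np.zeros(len(labels), dtype=bool)
    for c in classes:
        mask[np.flatnonzero(labels == c)[:k]] = True
    return mask

mask = balance_mask([0, 0, 0, 1, 1])
```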
Return the path to a sample corpus of images.
Parameters: name (str) – Corpus name. One of ‘easy’, ‘moderate’, or ‘hard’.
Return type: str
Returns: Filesystem path to the corpus root directory.