Note
If this section is empty, please patch the CSU code first and then rebuild this documentation by typing in the console:
$ ./bin/sphinx-build doc sphinx
Bases: facereclib.preprocessing.Preprocessor.Preprocessor
This class defines a wrapper around the facerec2010.baseline.lrpca.LRPCA class so that it can be used as an image preprocessor (facereclib.preprocessing.Preprocessor) in the FaceRecLib.
Constructor Documentation:
Reads the original images using functionality from pyvision.
Returns the quality of the last preprocessed image. This quality term is application dependent. By default, None is returned.
Reads the preprocessed data from file. In this base class implementation, it uses bob.io.base.load to do that. If you have a different format, please overwrite this function.
Saves the given preprocessed data to a file with the given name. In this base class implementation:
If you have a different format (e.g. not images), please overwrite this function.
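As noted above, data that is not a plain image requires overriding the read and save functions. A minimal sketch of such an override, using NumPy's .npy format in place of the default bob.io.base-based I/O (the class name NumpyPreprocessor is illustrative and not part of the CSU or FaceRecLib API):

```python
import numpy

class NumpyPreprocessor:
    """Illustrative preprocessor that stores preprocessed data as .npy files.

    Hypothetical stand-in for a facereclib.preprocessing.Preprocessor
    subclass; only the I/O overrides are sketched here.
    """

    def save_data(self, data, filename):
        # Write the preprocessed array in NumPy's native format
        # instead of the default image writer.
        numpy.save(filename, data)

    def read_data(self, filename):
        # Read the array back; must mirror save_data exactly.
        return numpy.load(filename)
```

The only constraint is that read_data must invert save_data, since the FaceRecLib pipeline calls them as a pair.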
Bases: facereclib.features.Extractor.Extractor
This class defines a wrapper around the facerec2010.baseline.lrpca.LRPCA class so that it can be used as a feature extractor (facereclib.features.Extractor) in the FaceRecLib.
Constructor Documentation:
Trains the LRPCA module with the given image list and saves the result into the given extractor file using the pickle module.
Loads the trained LRPCA feature extractor from the given extractor file using the pickle module.
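The train/load pair above boils down to serializing the trained LRPCA state with the pickle module. A hedged sketch of that pattern (the function names and the dictionary used as trained state are illustrative; the real wrapper pickles the facerec2010 LRPCA object itself, which is not constructed here):

```python
import pickle

def save_extractor(trained_state, extractor_file):
    # Serialize the trained extractor state (e.g. the LRPCA object)
    # to the given file using the pickle module, as the wrapper does.
    with open(extractor_file, "wb") as f:
        pickle.dump(trained_state, f)

def load_extractor(extractor_file):
    # Restore the trained state written by save_extractor.
    with open(extractor_file, "rb") as f:
        return pickle.load(f)
```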
Reads the extracted feature from file. In this base class implementation, it uses bob.io.base.load to do that. If you have a different format, please overwrite this function.
Saves the given extracted feature to a file with the given name. In this base class implementation:
If you have a different format, please overwrite this function.
Bases: facereclib.tools.Tool.Tool
This class defines a wrapper around the facerec2010.baseline.lrpca.LRPCA class so that it can be used as a face recognition tool (facereclib.tools.Tool) in the FaceRecLib.
Constructor Documentation:
Enrolls a model from the features of several images by simply storing all given features.
Computes the score for the given model (a list of FaceRecords) and a probe feature (a numpy.ndarray).
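Since enrollment simply stores all features, scoring a probe against a model amounts to comparing the probe with each stored feature and fusing the per-feature results. A minimal sketch under the assumption of a negative-Euclidean-distance similarity and maximum fusion (the actual LRPCA similarity function and fusion method differ):

```python
import numpy

def enroll(feature_list):
    # A model is simply the list of all enrollment features.
    return list(feature_list)

def score(model, probe):
    # Compare the probe against every stored feature and fuse the
    # per-feature similarities; here by taking the maximum.
    similarities = [-numpy.linalg.norm(f - probe) for f in model]
    return max(similarities)
```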
Loads the parameters required for model enrollment from file. This function is usually only useful in combination with the ‘train_enroller’ function (see above). This function is always called AFTER calling ‘load_projector’. In this base class implementation, it does nothing.
Loads the parameters required for feature projection from file. This function is usually only useful in combination with the ‘train_projector’ function (see above). In this base class implementation, it does nothing.
Please register ‘performs_projection = True’ in the constructor to enable this function.
Reads the projected feature from file. In this base class implementation, it uses bob.io.base.load to do that. If you have a different format, please overwrite this function.
Please register ‘performs_projection = True’ in the constructor to enable this function.
Reads the probe feature from file. By default, the probe feature is identical to the projected feature. Hence, this base class implementation simply calls self.read_feature(...).
If your tool requires different behavior, please overwrite this function.
Saves the given projected feature to a file with the given name. In this base class implementation:
If you have a different format, please overwrite this function.
Please register ‘performs_projection = True’ in the constructor to enable this function.
This function computes the score between the given model list and the given probe. In this base class implementation, it computes the scores for each model using the ‘score’ method, and fuses the scores using the fusion method specified in the constructor of this class. Usually this function is called from derived class ‘score’ functions.
This function computes the score between the given model and the given probe files. In this base class implementation, it computes the scores for each probe file using the ‘score’ method, and fuses the scores using the fusion method specified in the constructor of this class.
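The two fusion functions described above follow the same shape: evaluate the plain ‘score’ method once per model (or per probe) and combine the results with the fusion method given to the constructor. A sketch with the fusion operator passed as an argument, where numpy.average stands in for whatever fusion method the constructor was configured with (score_fn, models, and probes are illustrative parameters, not the real signatures):

```python
import numpy

def score_for_multiple_models(score_fn, models, probe, fusion=numpy.average):
    # Score the probe against each model, then fuse the results.
    return fusion([score_fn(m, probe) for m in models])

def score_for_multiple_probes(score_fn, model, probes, fusion=numpy.average):
    # Score each probe against the model, then fuse the results.
    return fusion([score_fn(model, p) for p in probes])
```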
This function can be overwritten to train the model enroller. If you do this, please also register the function by calling this base class constructor and enabling the training by ‘requires_enroller_training = True’.
The training function gets two parameters:
This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by ‘requires_projector_training = True’.
The training function gets two parameters:
Bases: facereclib.preprocessing.Preprocessor.Preprocessor
This class defines a wrapper around the facerec2010.baseline.lda.LRLDA class so that it can be used as an image preprocessor (facereclib.preprocessing.Preprocessor) in the FaceRecLib.
Constructor Documentation:
Reads the original images using functionality from pyvision.
Returns the quality of the last preprocessed image. This quality term is application dependent. By default, None is returned.
Reads the preprocessed data from file. In this base class implementation, it uses bob.io.base.load to do that. If you have a different format, please overwrite this function.
Saves the given preprocessed data to a file with the given name. In this base class implementation:
If you have a different format (e.g. not images), please overwrite this function.
Bases: facereclib.features.Extractor.Extractor
This class defines a wrapper around the facerec2010.baseline.lda.LRLDA class so that it can be used as a feature extractor (facereclib.features.Extractor) in the FaceRecLib.
Constructor Documentation:
Trains the LDA-IR module with the given image list and saves its result into the given extractor file using the pickle module.
Loads the trained LDA-IR feature extractor from the given extractor file using the pickle module.
Bases: facereclib.tools.Tool.Tool
This class defines a wrapper around the facerec2010.baseline.lda.LRLDA class so that it can be used as a face recognition tool (facereclib.tools.Tool) in the FaceRecLib.
Constructor Documentation:
This function loads the Projector from the given projector file. This is only required when the cohort adjustment is enabled.
Enrolls a model from the features of several images by simply storing all given features.
Loads an enrolled model from file using the pickle module.
Computes the score for the given model (a list of FaceRecords) and a probe (a FaceRecord).
Loads the parameters required for model enrollment from file. This function is usually only useful in combination with the ‘train_enroller’ function (see above). This function is always called AFTER calling ‘load_projector’. In this base class implementation, it does nothing.
Reads the projected feature from file. In this base class implementation, it uses bob.io.base.load to do that. If you have a different format, please overwrite this function.
Please register ‘performs_projection = True’ in the constructor to enable this function.
Saves the given projected feature to a file with the given name. In this base class implementation:
If you have a different format, please overwrite this function.
Please register ‘performs_projection = True’ in the constructor to enable this function.
This function computes the score between the given model list and the given probe. In this base class implementation, it computes the scores for each model using the ‘score’ method, and fuses the scores using the fusion method specified in the constructor of this class. Usually this function is called from derived class ‘score’ functions.
This function computes the score between the given model and the given probe files. In this base class implementation, it computes the scores for each probe file using the ‘score’ method, and fuses the scores using the fusion method specified in the constructor of this class.
This function can be overwritten to train the model enroller. If you do this, please also register the function by calling this base class constructor and enabling the training by ‘requires_enroller_training = True’.
The training function gets two parameters:
This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by ‘requires_projector_training = True’.
The training function gets two parameters: