Tools implemented in bob.bio.face

Summary

Databases

bob.bio.face.database.ARFaceBioDatabase([...]) ARFace database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.AtntBioDatabase([...]) ATNT database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.BancaBioDatabase([...]) BANCA database implementation of bob.bio.base.database.ZTBioDatabase interface.
bob.bio.face.database.MobioBioDatabase([...]) MOBIO database implementation of bob.bio.base.database.ZTBioDatabase interface.
bob.bio.face.database.CaspealBioDatabase([...]) Caspeal database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.ReplayBioDatabase(**kwargs) Replay attack database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.ReplayMobileBioDatabase([...]) ReplayMobile database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.MsuMfsdModBioDatabase([...]) MsuMfsdMod database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.GBUBioDatabase([...]) GBU database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.LFWBioDatabase([...]) LFW database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.MultipieBioDatabase([...]) Multipie database implementation of bob.bio.base.database.ZTBioDatabase interface.
bob.bio.face.database.IJBABioDatabase([...]) IJBA database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.XM2VTSBioDatabase([...]) XM2VTS database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.FRGCBioDatabase([...]) FRGC database implementation of bob.bio.base.database.BioDatabase interface.
bob.bio.face.database.SCFaceBioDatabase([...]) SCFace database implementation of bob.bio.base.database.ZTBioDatabase interface.

Image Preprocessors

bob.bio.face.preprocessor.Base([dtype, ...]) Performs color space adaptations and data type corrections for the given image.
bob.bio.face.preprocessor.FaceCrop(...[, ...]) Crops the face according to the given annotations.
bob.bio.face.preprocessor.FaceDetect(...[, ...]) Performs a face detection (and facial landmark localization) in the given image and crops the face.
bob.bio.face.preprocessor.TanTriggs(face_cropper) Crops the face (if desired) and applies Tan&Triggs algorithm [TT10] to photometrically enhance the image.
bob.bio.face.preprocessor.HistogramEqualization(...) Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.
bob.bio.face.preprocessor.SelfQuotientImage(...) Crops the face (if desired) and applies self quotient image algorithm [WLW04] to photometrically enhance the image.
bob.bio.face.preprocessor.INormLBP(face_cropper) Performs I-Norm LBP on the given image

Image Feature Extractors

bob.bio.face.extractor.Eigenface(...) Performs a principal component analysis (PCA) on the given data.
bob.bio.face.extractor.DCTBlocks([...]) Extracts Discrete Cosine Transform (DCT) features from (overlapping) image blocks.
bob.bio.face.extractor.GridGraph([...]) Extracts Gabor jets in a grid structure [GHW12] using functionalities from bob.ip.gabor.
bob.bio.face.extractor.LGBPHS(block_size[, ...]) Extracts Local Gabor Binary Pattern Histogram Sequences (LGBPHS) [ZSG+05] from the images, using functionality from bob.ip.base and bob.ip.gabor.

Face Recognition Algorithms

bob.bio.face.algorithm.GaborJet(...[, ...]) Computes a comparison of lists of Gabor jets using a similarity function of bob.ip.gabor.Similarity.
bob.bio.face.algorithm.Histogram([...]) Computes the distance between histogram sequences.

Databases

class bob.bio.face.database.ARFaceBioDatabase(original_directory=None, original_extension='.ppm', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ARFace database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the ARFace database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
class bob.bio.face.database.AtntBioDatabase(original_directory=None, original_extension='.pgm', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ATNT database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of the database interface that directly talks to the ATNT database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(file)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
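The database classes in this section share the same query interface inherited from bob.bio.base.database.BioDatabase. A minimal usage sketch, assuming the AT&T images are available locally; the directory path and the group/purpose values are only illustrative:

    from bob.bio.face.database import AtntBioDatabase

    # the directory is a placeholder; point it to a local copy of the images
    db = AtntBioDatabase(original_directory='/path/to/atnt')

    # files of the development group that serve as probes
    probe_files = db.objects(groups='dev', purposes='probe')

    # identifiers of the models that can be enrolled for the development group
    model_ids = db.model_ids_with_protocol(groups='dev')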
class bob.bio.face.database.BancaBioDatabase(original_directory=None, original_extension=None, **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

BANCA database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the BANCA database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]
tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]
zobjects(groups=None, protocol=None, **kwargs)[source]
class bob.bio.face.database.CaspealBioDatabase(original_directory=None, original_extension='.tif', **kwargs)

Bases: bob.bio.base.database.BioDatabase

Caspeal database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the Caspeal database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
class bob.bio.face.database.FRGCBioDatabase(original_directory=None, original_extension='.jpg', **kwargs)

Bases: bob.bio.base.database.BioDatabase

FRGC database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of the low-level database interface that directly talks to the FRGC database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
class bob.bio.face.database.FaceBioFile(client_id, path, file_id)

Bases: bob.bio.base.database.BioFile

class bob.bio.face.database.GBUBioDatabase(original_directory=None, original_extension='.jpg', **kwargs)

Bases: bob.bio.base.database.BioDatabase

GBU database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the GBU database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
class bob.bio.face.database.IJBABioDatabase(original_directory=None, annotations_directory=None, original_extension=None, **kwargs)

Bases: bob.bio.base.database.BioDatabase

IJBA database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the IJBA database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(biofile)[source]
client_id_from_model_id(model_id, group='dev')[source]
model_ids_with_protocol(groups=None, protocol='search_split1', **kwargs)[source]
object_sets(groups=None, protocol='search_split1', purposes=None, model_ids=None)[source]
objects(groups=None, protocol='search_split1', purposes=None, model_ids=None, **kwargs)[source]
uses_probe_file_sets()[source]
class bob.bio.face.database.LFWBioDatabase(original_directory=None, original_extension='.jpg', annotation_type=None, **kwargs)

Bases: bob.bio.base.database.BioDatabase

LFW database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the LFW database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
client_id_from_model_id(model_id, group='dev')[source]

Return the client id associated with the given model id. In this base class implementation, it is assumed that only one model is enrolled per client and, thus, client id and model id are identical. All keyword arguments are ignored. Please override this function in derived class implementations to change this behavior.

model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
class bob.bio.face.database.MobioBioDatabase(original_directory=None, original_extension=None, annotation_directory=None, annotation_extension='.pos', **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

MOBIO database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the MOBIO database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, gender=None)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]
tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]
zobjects(groups=None, protocol=None, **kwargs)[source]
class bob.bio.face.database.MsuMfsdModBioDatabase(max_number_of_frames=None, **kwargs)

Bases: bob.bio.base.database.BioDatabase

MsuMfsdMod database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the MsuMfsdMod database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]

Returns the bounding box annotation of the n-th frame of the video.

arrange_by_client(files)[source]
groups()[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
protocol_names()[source]
class bob.bio.face.database.MultipieBioDatabase(original_directory=None, original_extension='.png', annotation_directory=None, annotation_extension='.pos', **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

Multipie database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the Multipie database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]
tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]
zobjects(groups=None, protocol=None, **kwargs)[source]
class bob.bio.face.database.ReplayBioDatabase(**kwargs)

Bases: bob.bio.base.database.BioDatabase

Replay attack database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the Replay database for verification experiments (well suited for use in the bob.bio.base framework). It additionally doubles the available protocols (see protocol_names()) so that vulnerability analysis can be run with it.

annotations(file)[source]

Returns the bounding box annotation of the 10th frame of the video.

arrange_by_client(files)[source]
groups()[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
protocol_names()[source]

Returns all registered protocol names. Note that the number of protocols is doubled with -licit and -spoof variants; this is done to enable running vulnerability analysis.

class bob.bio.face.database.ReplayMobileBioDatabase(max_number_of_frames=None, **kwargs)

Bases: bob.bio.base.database.BioDatabase

ReplayMobile database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the ReplayMobile database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]

Returns the bounding box annotation of the n-th frame of the video.

arrange_by_client(files)[source]
groups()[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
protocol_names()[source]
class bob.bio.face.database.SCFaceBioDatabase(original_directory=None, original_extension='.jpg', **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

SCFace database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the SCFace database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]
tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]
zobjects(groups=None, protocol=None, **kwargs)[source]
class bob.bio.face.database.XM2VTSBioDatabase(original_directory=None, original_extension='.ppm', **kwargs)

Bases: bob.bio.base.database.BioDatabase

XM2VTS database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the XM2VTS database for verification experiments (well suited for use in the bob.bio.base framework).

annotations(myfile)[source]
model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

Preprocessors

class bob.bio.face.preprocessor.Base(dtype=None, color_channel='gray', **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor

Performs color space adaptations and data type corrections for the given image.

Parameters:

dtype
: numpy.dtype or convertible or None
The data type that the resulting image will have.
color_channel
: one of ('gray', 'red', 'green', 'blue', 'rgb')
The specific color channel, which should be extracted from the image.
color_channel(image) → channel[source]

Returns the channel of the given image, which was selected in the constructor. Currently, gray, red, green and blue channels are supported.

Parameters:

image
: 2D or 3D numpy.ndarray
The image to get the specified channel from.

Returns:

channel
: 2D or 3D numpy.ndarray
The extracted color channel.
data_type(image) → image[source]

Converts the given image into the data type specified in the constructor of this class. If no data type was specified, or the image is None, no conversion is performed.

Parameters:

image
: 2D or 3D numpy.ndarray
The image to convert.

Returns:

image
: 2D or 3D numpy.ndarray
The image converted to the desired data type, if any.
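As an illustration of the two methods above, a minimal sketch (not part of the original API documentation) that extracts the gray channel and fixes the data type; the input is random stand-in data in bob's planar (channels, height, width) layout:

    import numpy
    from bob.bio.face.preprocessor import Base

    base = Base(dtype=numpy.float64, color_channel='gray')

    # random stand-in for a color image, using bob's planar (3, height, width) layout
    color_image = numpy.random.randint(0, 256, size=(3, 80, 64)).astype('uint8')

    gray = base.color_channel(color_image)   # 2D gray channel
    gray = base.data_type(gray)              # converted to numpy.float64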
class bob.bio.face.preprocessor.FaceCrop(cropped_image_size, cropped_positions, fixed_positions=None, mask_sigma=None, mask_neighbors=5, mask_seed=None, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face according to the given annotations.

This class is designed to perform a geometric normalization of the face based on the eye locations, using bob.ip.base.FaceEyesNorm. Usually, when executing the crop_face() function, the image and the eye locations have to be specified. There, the given image will be transformed such that the eye locations will be placed at specific locations in the resulting image. These locations, as well as the size of the cropped image, need to be specified in the constructor of this class, as cropped_positions and cropped_image_size.

Some image databases do not provide eye locations, but rather bounding boxes. This is not a problem at all. Simply define the coordinates where you want your cropped_positions to be in the cropped image, using the same keys in the dictionary that will be given as annotations to the crop_face() function.

Sometimes, databases provide pre-cropped faces, where the eyes are located at (almost) the same position in all images. Often, this cropping does not match the crop that you need (e.g., the image resolution is wrong, or too much background is included), yet the database does not provide eye locations (since they are almost identical for all images). In that case, you can specify the fixed_positions in the constructor, which will be used instead of the annotations inside the crop_face() function (in which case the annotations are ignored).

Sometimes, the crop of the face is outside of the original image boundaries. Usually, these pixels will simply be left black, resulting in sharp edges in the image. However, some feature extractors do not like these sharp edges. In this case, you can set the mask_sigma to copy pixels from the valid border of the image and add random noise (see bob.ip.base.extrapolate_mask()).
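A minimal construction sketch (the crop geometry and eye coordinates are illustrative values, not defaults of this class):

    from bob.bio.face.preprocessor import FaceCrop

    cropper = FaceCrop(
        cropped_image_size=(80, 64),            # (height, width) of the resulting crop
        cropped_positions={'reye': (16, 15),    # where the right eye should end up (y, x)
                           'leye': (16, 48)}    # where the left eye should end up (y, x)
    )

    # annotations as they would come from a database interface (values are examples only):
    # face = cropper.crop_face(image, annotations={'reye': (130, 110), 'leye': (125, 185)})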

Parameters:

cropped_image_size
: (int, int)
The size of the resulting cropped images.
cropped_positions
: dict
The coordinates in the cropped image, where the annotated points should be put to. This parameter is a dictionary with usually two elements, e.g., {'reye': (RIGHT_EYE_Y, RIGHT_EYE_X), 'leye': (LEFT_EYE_Y, LEFT_EYE_X)}. However, other keys, such as {'topleft': ..., 'bottomright': ...}, are also supported, as long as the corresponding annotations are passed to the __call__ function.
fixed_positions
: dict or None
If specified, ignore the annotations from the database and use these fixed positions throughout.
mask_sigma
: float or None
Fill the area outside of image boundaries with random pixels from the border, by adding noise to the pixel values. To disable extrapolation, set this value to None. To disable adding random noise, set it to a negative value or 0.
mask_neighbors
: int
The number of neighbors used during mask extrapolation. See bob.ip.base.extrapolate_mask() for details.
mask_seed
: int or None

The random seed to apply for mask extrapolation.

Warning

When run in parallel, the same random seed will be applied to all parallel processes. Hence, results of parallel execution will differ from the results in serial execution.

kwargs
Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
crop_face(image, annotations = None) → face[source]

Executes the face cropping on the given image and returns the cropped version of it.

Parameters:

image
: 2D numpy.ndarray
The face image to be processed.
annotations
: dict or None
The annotations that fit to the given image. None is only accepted, when fixed_positions were specified in the constructor.

Returns:

face
: 2D numpy.ndarray (float)
The cropped face.
class bob.bio.face.preprocessor.FaceDetect(face_cropper, cascade=None, use_flandmark=False, detection_overlap=0.2, distance=2, scale_base=0.9576032806985737, lowest_scale=0.125, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Performs a face detection (and facial landmark localization) in the given image and crops the face.

This class is designed to perform a geometric normalization of the face based on the detected face. Face detection is performed using bob.ip.facedetect. Particularly, the function bob.ip.facedetect.detect_single_face() is executed, which will always return exactly one bounding box, even if the image contains more than one face, or no face at all. The speed of the face detector can be regulated using the cascade, distance, scale_base and lowest_scale parameters. The number of overlapping detected bounding boxes that should be joined can be selected by detection_overlap. Please see the documentation of bob.ip.facedetect for more details about these parameters.

Additionally, facial landmarks can be detected using the Python bindings to the Flandmark Keypoint Localizer for Frontal Faces. If enabled with use_flandmark = True in the constructor, the facial landmarks are searched for inside the detected facial area. If landmarks are found, they are used to geometrically normalize the face. Otherwise, the eye locations are estimated based on the bounding box; the same estimation is used when use_flandmark = False.

The face cropping itself is done by the given face_cropper. This cropper can either be an instance of FaceCrop (or any other class that provides a similar crop_face function), or it can be the resource name of a face cropper, such as 'face-crop-eyes'.
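A minimal sketch of combining this detector with a face cropper; the cropper geometry reuses the illustrative values from the FaceCrop example above:

    from bob.bio.face.preprocessor import FaceCrop, FaceDetect

    cropper = FaceCrop(cropped_image_size=(80, 64),
                       cropped_positions={'reye': (16, 15), 'leye': (16, 48)})

    # detect the face, try to localize landmarks with flandmark, then crop with `cropper`
    detector = FaceDetect(face_cropper=cropper, use_flandmark=True)

    # annotations are ignored by this preprocessor:
    # preprocessed = detector(image, annotations=None)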

Parameters:

face_cropper
: bob.bio.face.preprocessor.FaceCrop or str
The face cropper to be used to crop the detected face. Might be an instance of a FaceCrop or the name of a face cropper resource.
cascade
: str or None
The file name, where a face detector cascade can be found. If None, the default cascade for frontal faces bob.ip.facedetect.default_cascade() is used.
use_flandmark
: bool
If selected, bob.ip.flandmark.Flandmark is used to detect the eye locations. Otherwise, the eye locations are estimated based on the detected bounding box.
detection_overlap
: float
See bob.ip.facedetect.detect_single_face().
distance
: int
See the Sampling section in the Users Guide of bob.ip.facedetect.
scale_base
: float
See the Sampling section in the Users Guide of bob.ip.facedetect.
lowest_scale
: float
See the Sampling section in the Users Guide of bob.ip.facedetect.
kwargs
Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
crop_face(image, annotations = None) → face[source]

Detects the face (and facial landmarks), and uses the face_cropper given in the constructor to crop the face.

Parameters:

image
: 2D or 3D numpy.ndarray
The face image to be processed.
annotations
: any
Ignored.

Returns:

face
: 2D or 3D numpy.ndarray (float)
The detected and cropped face.
class bob.bio.face.preprocessor.HistogramEqualization(face_cropper, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.

Parameters:

face_cropper
: str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

kwargs
Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
equalize_histogram(image) → equalized[source]

Performs the histogram equalization on the given image.

Parameters:

image
: 2D numpy.ndarray
The image to perform histogram equalization on. The image will be transformed to type uint8 before computing the histogram.

Returns:

equalized
: 2D numpy.ndarray (float)
The photometrically enhanced image.
class bob.bio.face.preprocessor.INormLBP(face_cropper, radius=2, is_circular=True, compare_to_average=False, elbp_type='regular', **kwargs)

Bases: bob.bio.face.preprocessor.Base

Performs I-Norm LBP on the given image

class bob.bio.face.preprocessor.SelfQuotientImage(face_cropper, sigma=1.4142135623730951, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and applies self quotient image algorithm [WLW04] to photometrically enhance the image.

Parameters:

face_cropper
: str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

sigma
: float
Please refer to the [WLW04] original paper (see bob.ip.base.SelfQuotientImage documentation).
kwargs
Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
class bob.bio.face.preprocessor.TanTriggs(face_cropper, gamma=0.2, sigma0=1, sigma1=2, size=5, threshold=10.0, alpha=0.1, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and applies Tan&Triggs algorithm [TT10] to photometrically enhance the image.

Parameters:

face_cropper
: str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

gamma, sigma0, sigma1, size, threshold, alpha
Please refer to the [TT10] original paper (see bob.ip.base.TanTriggs documentation).
kwargs
Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
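The photometric preprocessors above (TanTriggs, HistogramEqualization, SelfQuotientImage and INormLBP) all follow the same construction pattern: they wrap a face cropper and enhance the cropped image. A minimal sketch, again reusing the illustrative cropper geometry from the FaceCrop example:

    from bob.bio.face.preprocessor import FaceCrop, TanTriggs

    cropper = FaceCrop(cropped_image_size=(80, 64),
                       cropped_positions={'reye': (16, 15), 'leye': (16, 48)})

    # Tan&Triggs normalization with the documented default parameters, applied after cropping
    preprocessor = TanTriggs(face_cropper=cropper)

    # passing None as face_cropper applies the photometric enhancement without any cropping
    enhancer_only = TanTriggs(face_cropper=None)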

Extractors

class bob.bio.face.extractor.DCTBlocks(block_size=12, block_overlap=11, number_of_dct_coefficients=45, normalize_blocks=True, normalize_dcts=True, auto_reduce_coefficients=False)

Bases: bob.bio.base.extractor.Extractor

Extracts Discrete Cosine Transform (DCT) features from (overlapping) image blocks. These features are based on the bob.ip.base.DCTFeatures class. The default parametrization is the one that performed best on the BANCA database in [WMM+11].

Usually, these features are used in combination with the algorithms defined in bob.bio.gmm. However, you can try to use them with other algorithms.
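A minimal usage sketch with the documented default block geometry; the input array is random stand-in data for a preprocessed face image:

    import numpy
    from bob.bio.face.extractor import DCTBlocks

    extractor = DCTBlocks(block_size=12, block_overlap=11, number_of_dct_coefficients=45)

    face = numpy.random.rand(80, 64)   # stand-in for the output of a preprocessor
    features = extractor(face)         # one row of DCT coefficients per extracted block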

Parameters:

block_size
: int or (int, int)
The size of the blocks that will be extracted. This parameter might be either a single integral value, or a pair (block_height, block_width) of integral values.
block_overlap
: int or (int, int)
The overlap of the blocks in vertical and horizontal direction. This parameter might be either a single integral value, or a pair (block_overlap_y, block_overlap_x) of integral values. It needs to be smaller than the block_size.
number_of_dct_coefficients
: int
The number of DCT coefficients to use. The actual number will be one less since the first DCT coefficient (which should be 0, if normalization is used) will be removed.
normalize_blocks
: bool
Normalize the values of the blocks to zero mean and unit standard deviation before extracting DCT coefficients.
normalize_dcts
: bool
Normalize the values of the DCT components to zero mean and unit standard deviation. Default is True.
load(*args, **kwargs)[source]
train(*args, **kwargs)[source]
class bob.bio.face.extractor.Eigenface(subspace_dimension)

Bases: bob.bio.base.extractor.Extractor

Performs a principal component analysis (PCA) on the given data.

This algorithm computes a PCA projection (bob.learn.linear.PCATrainer) on the given training images and projects the images into face space. In contrast to bob.bio.base.algorithm.PCA, here the eigenfaces are used as features, i.e., so that advanced face recognition algorithms can be applied on top of them.
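A minimal training sketch; the file name and the random training images are illustrative stand-ins for real preprocessed data:

    import numpy
    from bob.bio.face.extractor import Eigenface

    extractor = Eigenface(subspace_dimension=5)   # keep the 5 leading eigenfaces

    # a small stand-in training set of 2D face images
    training_images = [numpy.random.rand(80, 64) for _ in range(20)]
    extractor.train(training_images, 'eigenface_extractor.hdf5')

    extractor.load('eigenface_extractor.hdf5')
    feature = extractor(training_images[0])       # projection onto the eigenfaces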

Parameters:

subspace_dimension
: int or float
If specified as int, defines the number of eigenvectors used in the PCA projection matrix. If specified as float (between 0 and 1), the number of eigenvectors is calculated such that the given percentage of variance is kept.
kwargs
: key=value pairs
A list of keyword arguments directly passed to the bob.bio.base.extractor.Extractor base class constructor.
load(extractor_file)[source]

Reads the PCA projection matrix from file.

Parameters:

extractor_file
: str
An existing file, from which the PCA projection matrix are read.
train(training_images, extractor_file)[source]

Generates the PCA covariance matrix and writes it into the given extractor_file.

Beforehand, all images are turned into a 1D pixel vector.

Parameters:

training_images
: [2D numpy.ndarray]
A list of 2D training images to train the PCA projection matrix with.
extractor_file
: str
A writable file, into which the PCA projection matrix (as a bob.learn.linear.Machine) will be written.
class bob.bio.face.extractor.GridGraph(gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True, normalize_gabor_jets=True, eyes=None, nodes_between_eyes=4, nodes_along_eyes=2, nodes_above_eyes=3, nodes_below_eyes=7, node_distance=None, first_node=None)

Bases: bob.bio.base.extractor.Extractor

Extracts Gabor jets in a grid structure [GHW12] using functionalities from bob.ip.gabor.

The grid can be either aligned to the eye locations (in which case the grid might be rotated), or a fixed grid graph can be extracted.

In the first case, the eye locations in the aligned image need to be provided. Additionally, the number of nodes between, along, above and below the eyes needs to be specified.

In the second case, a regular grid graph is created, by specifying the distance between two nodes. Additionally, the coordinate of the first node can be provided, which otherwise is calculated to evenly fill the whole image with nodes.
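Minimal sketches of both variants; the node spacing, eye coordinates and node counts are illustrative values:

    from bob.bio.face.extractor import GridGraph

    # a regular grid graph with one node every 8 pixels in each direction
    regular = GridGraph(node_distance=(8, 8))

    # a graph aligned to fixed eye positions in the cropped image
    aligned = GridGraph(eyes={'reye': (16, 15), 'leye': (16, 48)},
                        nodes_between_eyes=4, nodes_along_eyes=2,
                        nodes_above_eyes=3, nodes_below_eyes=7)

    # jets = regular(preprocessed_face)   # -> list of bob.ip.gabor.Jet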

Parameters:

gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free
The parameters of the Gabor wavelet family, with its default values set as given in [WFK97]. Please refer to bob.ip.gabor.Transform for the documentation of these values.
normalize_gabor_jets
: bool
Perform Gabor jet normalization during extraction?
eyes
: dict or None
If specified, the grid setup will be aligned to the eye positions {'reye' : (re_y, re_x), 'leye' : (le_y, le_x)}. Otherwise a regular grid graph will be extracted.
nodes_between_eyes, nodes_along_eyes, nodes_above_eyes, nodes_below_eyes
: int
Only used when eyes is not None. The number of nodes to be placed between, along, above or below the eyes. The final number of nodes will be: (above + below + 1) times (between + 2*along + 2).
node_distance
: (int, int)
Only used when eyes is None. The distance between two nodes in the regular grid graph.
first_node
: (int, int) or None
Only used when eyes is None. If None, it is calculated automatically to equally cover the whole image.
load(*args, **kwargs)[source]
read_feature(feature_file) → feature[source]

Reads the feature written by the write_feature() function from the given file.

Parameters:

feature_file
: str or bob.io.base.HDF5File
The name of the file or the file opened for reading.

Returns:

feature
: [bob.ip.gabor.Jet]
The list of Gabor jets read from file.
train(*args, **kwargs)[source]
write_feature(feature, feature_file)[source]

Writes the feature extracted by the __call__ function to the given file.

Parameters:

feature
: [bob.ip.gabor.Jet]
The list of Gabor jets extracted from the image.
feature_file
: str or bob.io.base.HDF5File
The name of the file or the file opened for writing.
class bob.bio.face.extractor.LGBPHS(block_size, block_overlap=0, gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True, use_gabor_phases=False, lbp_radius=2, lbp_neighbor_count=8, lbp_uniform=True, lbp_circular=True, lbp_rotation_invariant=False, lbp_compare_to_average=False, lbp_add_average=False, sparse_histogram=False, split_histogram=None)

Bases: bob.bio.base.extractor.Extractor

Extracts Local Gabor Binary Pattern Histogram Sequences (LGBPHS) [ZSG+05] from the images, using functionality from bob.ip.base and bob.ip.gabor.

The block size and the overlap of the blocks can be varied, as well as the parameters of the Gabor wavelet (bob.ip.gabor.Transform) and the LBP extractor (bob.ip.base.LBP).
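A minimal sketch with an illustrative block geometry; all other parameters keep their documented defaults:

    from bob.bio.face.extractor import LGBPHS

    extractor = LGBPHS(block_size=10, block_overlap=4)

    # hist = extractor(preprocessed_face)   # concatenated histogram sequence (1D array)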

Parameters:

block_size
: int or (int, int)
The size of the blocks that will be extracted. This parameter might be either a single integral value, or a pair (block_height, block_width) of integral values.
block_overlap
: int or (int, int)
The overlap of the blocks in vertical and horizontal direction. This parameter might be either a single integral value, or a pair (block_overlap_y, block_overlap_x) of integral values. It needs to be smaller than the block_size.
gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free
The parameters of the Gabor wavelet family, with its default values set as given in [WFK97]. Please refer to bob.ip.gabor.Transform for the documentation of these values.
use_gabor_phases
: bool
Also extract the Gabor phases (inline), not only the absolute values. In this case, Extended LGBPHS features [ZSQ+09] will be extracted.
lbp_radius, lbp_neighbor_count, lbp_uniform, lbp_circular, lbp_rotation_invariant, lbp_compare_to_average, lbp_add_average

The parameters of the LBP. Please see bob.ip.base.LBP for the documentation of these values.

Note

The default values are as given in [ZSG+05] (the values of [ZSQ+09] might differ).

sparse_histogram
: bool

If specified, the histograms will be handled in a sparse way. This reduces the size of the extracted features, but the computation will take longer.

Note

Sparse histograms are only supported, when split_histogram = None.

split_histogram
: one of ('blocks', 'wavelets', 'both') or None
Defines how the histogram sequence is split. This can be useful if the histograms should be used in another way than simply being concatenated into a single histogram sequence (the default).
load(*args, **kwargs)[source]
train(*args, **kwargs)[source]

Algorithms

class bob.bio.face.algorithm.GaborJet(gabor_jet_similarity_type, multiple_feature_scoring='max_jet', gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True)

Bases: bob.bio.base.algorithm.Algorithm

Computes a comparison of lists of Gabor jets using a similarity function of bob.ip.gabor.Similarity.

The model enrollment simply stores all extracted Gabor jets for all enrollment features. By default (i.e., multiple_feature_scoring = 'max_jet'), the scoring uses an advanced local strategy. For each node, the similarity between the given probe jet and all model jets is computed, and only the highest value is kept. These values are finally averaged over all node positions. Other strategies can be obtained using a different multiple_feature_scoring.

Parameters:

gabor_jet_similarity_type
: str:
The type of Gabor jet similarity to compute. Please refer to the documentation of bob.ip.gabor.Similarity for a list of possible values.
multiple_feature_scoring
: str

How to fuse the local similarities into a single similarity value. Possible values are:

  • 'average_model' : During enrollment, an average model is computed using functionality of bob.ip.gabor.
  • 'average' : For each node, the average similarity is computed. Finally, the average of those similarities is returned.
  • 'min_jet', 'max_jet', 'med_jet' : For each node, the minimum, maximum or median similarity is computed. Finally, the average of those similarities is returned.
  • 'min_graph', 'max_graph', 'med_graph' : For each node, the average similarity is computed. Finally, the minimum, maximum or median of those similarities is returned.
gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free
These parameters are required by the disparity-based Gabor jet similarity functions, see bob.ip.gabor.Similarity. The default values are identical to the ones in bob.bio.face.extractor.GridGraph. Please ensure that this class and the bob.bio.face.extractor.GridGraph class use the same configuration; otherwise unexpected things might happen (a configuration sketch is given below).
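A minimal configuration sketch that keeps the Gabor wavelet family of the extractor and the algorithm in sync; the similarity type and the grid spacing are illustrative choices:

    from bob.bio.face.extractor import GridGraph
    from bob.bio.face.algorithm import GaborJet

    # identical Gabor wavelet family for feature extraction and scoring
    gabor_kwargs = dict(gabor_directions=8, gabor_scales=5)

    extractor = GridGraph(node_distance=(8, 8), **gabor_kwargs)
    algorithm = GaborJet(gabor_jet_similarity_type='PhaseDiffPlusCanberra',
                         multiple_feature_scoring='max_jet',
                         **gabor_kwargs)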
enroll(enroll_features) → model[source]

Enrolls the model using one of several strategies. Commonly, the bunch graph strategy [WFK97] is applied, by storing several Gabor jets for each node.

When multiple_feature_scoring = 'average_model', for each node the average bob.ip.gabor.Jet is computed. Otherwise, all enrollment jets are stored, grouped by node.

Parameters:

enroll_features
: [[bob.ip.gabor.Jet]]
The list of enrollment features. Each sub-list contains a full graph.

Returns:

model
: [[bob.ip.gabor.Jet]]
The enrolled model. Each sub-list contains a list of jets, which correspond to the same node. When multiple_feature_scoring = 'average_model' each sub-list contains a single bob.ip.gabor.Jet.
load_enroller(*args, **kwargs)[source]
load_projector(*args, **kwargs)[source]
project(*args, **kwargs)[source]
read_feature(*args, **kwargs)[source]
read_model(model_file) → model[source]

Reads the model written by the write_model() function from the given file.

Parameters:

model_file
: str or bob.io.base.HDF5File
The name of the file or the file opened for reading.

Returns:

model
: [[bob.ip.gabor.Jet]]
The list of Gabor jets read from file.
read_probe(probe_file) → probe[source]

Reads the probe file, e.g., as written by the bob.bio.face.extractor.GridGraph.write_feature() function from the given file.

Parameters:

probe_file
: str or bob.io.base.HDF5File
The name of the file or the file opened for reading.

Returns:

probe
: [bob.ip.gabor.Jet]
The list of Gabor jets read from file.
score(model, probe) → score[source]

Computes the score of the probe and the model using the desired Gabor jet similarity function and the desired score fusion strategy.

Parameters:

model
: [[bob.ip.gabor.Jet]]
The model enrolled by the enroll() function.
probe
: [bob.ip.gabor.Jet]
The probe read by the read_probe() function.

Returns:

score
: float
The fused similarity score.
score_for_multiple_models(*args, **kwargs)[source]
score_for_multiple_probes(model, probes)[source]

score_for_multiple_probes(model, probes) → score

This function computes the score between the given model graph(s) and several probe graphs. The same local scoring strategy as for several model jets is applied, but this time it is applied between all graphs of the model and all probe graphs.

Parameters:

model
: [[bob.ip.gabor.Jet]]
The model enrolled by the enroll() function. The sub-lists are grouped by node.
probes
: [[bob.ip.gabor.Jet]]
A list of probe graphs. The sub-lists are grouped by graph.

Returns:

score
: float
The fused similarity score.
train_enroller(*args, **kwargs)[source]
train_projector(*args, **kwargs)[source]
write_feature(*args, **kwargs)[source]
write_model(model, model_file)[source]

Writes the model enrolled by the enroll() function to the given file.

Parameters:

model
: [[bob.ip.gabor.Jet]]
The enrolled model.
model_file
: str or bob.io.base.HDF5File
The name of the file or the file opened for writing.
class bob.bio.face.algorithm.Histogram(distance_function=<built-in function chi_square>, is_distance_function=True, multiple_probe_scoring='average')

Bases: bob.bio.base.algorithm.Algorithm

Computes the distance between histogram sequences.

Both sparse and non-sparse representations of histograms are supported. For enrollment, only the averaging of histograms is currently implemented.
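A minimal sketch using the documented default chi-square distance from bob.math; the histograms are random stand-ins for LGBPHS features:

    import numpy
    import bob.math
    from bob.bio.face.algorithm import Histogram

    algorithm = Histogram(distance_function=bob.math.chi_square,
                          is_distance_function=True)

    # stand-in non-sparse (1D) histograms; in practice these come from the LGBPHS extractor
    hist1, hist2, probe = (numpy.random.rand(512) for _ in range(3))

    model = algorithm.enroll([hist1, hist2])   # average of the enrollment histograms
    score = algorithm.score(model, probe)      # negative chi-square distance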

Parameters:

distance_function
: function
The function to be used to compare two histograms. This function should accept sparse histograms.
is_distance_function
: bool
Is the given distance_function a distance function (lower values are better) or a similarity function (higher values are better)?
multiple_probe_scoring
: str or None
The way, scores are fused when multiple probes are available. See bob.bio.base.score_fusion_strategy() for possible values.
enroll(enroll_features) → model[source]

Enrolls a model by taking the average of all histograms.

enroll_features
: [1D or 2D numpy.ndarray]
The histograms that should be averaged. Histograms can be specified as sparse (2D) or non-sparse (1D).

Returns:

model
: 1D or 2D numpy.ndarray
The averaged histogram, sparse (2D) or non-sparse (1D).
load_enroller(*args, **kwargs)[source]
load_projector(*args, **kwargs)[source]
project(*args, **kwargs)[source]
read_feature(*args, **kwargs)[source]
read_probe(probe_file) → probe[source]

Reads the probe feature from the given file.

Parameters:

probe_file
: str or bob.io.base.HDF5File
The file (open for reading) or the name of an existing file to read from.

Returns:

probe
: 1D or 2D numpy.ndarray
The probe read by the read_probe() function.
score(model, probe) → score[source]

Computes the score of the probe and the model using the desired histogram distance function. The resulting score is the negative distance, if is_distance_function = True. Both sparse and non-sparse models and probes are accepted, but their sparseness must agree.

Parameters:

model
: 1D or 2D numpy.ndarray
The model enrolled by the enroll() function.
probe
: 1D or 2D numpy.ndarray
The probe read by the read_probe() function.

Returns:

score
: float
The resulting similarity score.
score_for_multiple_models(*args, **kwargs)[source]
train_enroller(*args, **kwargs)[source]
train_projector(*args, **kwargs)[source]
write_feature(*args, **kwargs)[source]