Python API¶
This section includes information for using the Python API of bob.learn.boosting.
Machines¶
The bob.learn.boosting module contains classifiers that can predict classes for given input values:
- bob.learn.boosting.BoostedMachine: the strong classifier, which is a weighted combination of several machines of type bob.learn.boosting.WeakMachine.
Weak machines might be:
- bob.learn.boosting.LUTMachine: a weak machine that performs classification by indexing a look-up table.
- bob.learn.boosting.StumpMachine: a weak machine that performs classification by simple thresholding.
Theoretically, the strong classifier can consist of different types of weak classifiers, but usually all weak classifiers have the same type.
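As a minimal sketch of how these classes fit together (the look-up table, feature index, weight and feature values below are illustrative assumptions, not prescribed values), a strong machine can be assembled from weak machines and evaluated on a uint16 feature vector:

    import numpy
    import bob.learn.boosting

    # a uni-variate look-up table over feature values 0..9, applied to feature index 2
    lut = numpy.ones(10, dtype=numpy.float64)
    weak = bob.learn.boosting.LUTMachine(lut, 2)

    strong = bob.learn.boosting.BoostedMachine()
    strong.add_weak_machine(weak, 1.0)   # the weight is chosen arbitrarily here

    features = numpy.array([3, 7, 5, 1], dtype=numpy.uint16)
    print(strong(features))              # uni-variate prediction, returned as a float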
Trainers¶
Available trainers in bob.learn.boosting are:
- bob.learn.boosting.Boosting: trains a strong machine of type bob.learn.boosting.BoostedMachine.
- bob.learn.boosting.LUTTrainer: trains a weak machine of type bob.learn.boosting.LUTMachine.
- bob.learn.boosting.StumpTrainer: trains a weak machine of type bob.learn.boosting.StumpMachine.
Loss functions¶
Loss functions are used to define new weights for the weak machines using the scipy.optimize.fmin_l_bfgs_b function.
A base class loss function bob.learn.boosting.LossFunction is called by that function, and derived classes implement the actual loss for a single sample.
Note
Loss functions are designed to be used in combination with a specific weak trainer in specific cases. Not all combinations of loss functions and weak trainers make sense. Here is a list of useful combinations:
- bob.learn.boosting.ExponentialLoss with bob.learn.boosting.StumpTrainer (uni-variate classification only).
- bob.learn.boosting.LogitLoss with bob.learn.boosting.StumpTrainer or bob.learn.boosting.LUTTrainer (uni-variate or multi-variate classification).
- bob.learn.boosting.TangentialLoss with bob.learn.boosting.StumpTrainer or bob.learn.boosting.LUTTrainer (uni-variate or multi-variate classification).
- bob.learn.boosting.JesorskyLoss with bob.learn.boosting.LUTTrainer (multi-variate regression only).
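For instance, following the combinations above, a uni-variate classification setup might pair the stump trainer with the exponential loss, while a facial feature localization task might pair the LUT trainer with the Jesorsky loss. A sketch (the LUT trainer arguments, i.e. 256 feature values and 10 outputs, are illustrative assumptions; StumpTrainer is assumed to take no constructor arguments, as none are documented below):

    import bob.learn.boosting

    # uni-variate classification: stump machines + exponential loss
    classifier_booster = bob.learn.boosting.Boosting(
        bob.learn.boosting.StumpTrainer(),
        bob.learn.boosting.ExponentialLoss())

    # multi-variate regression (e.g., facial feature localization): LUT machines + Jesorsky loss
    regression_booster = bob.learn.boosting.Boosting(
        bob.learn.boosting.LUTTrainer(256, 10),
        bob.learn.boosting.JesorskyLoss())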
Details¶
class bob.learn.boosting.BoostedMachine¶
Bases: object
A strong machine that holds a weighted combination of weak machines.
Constructor Documentation:
- BoostedMachine ()
- BoostedMachine (hdf5)
Initializes a BoostedMachine object.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file object to read the weak classifier from.
Class Members:
add_weak_machine()¶
- add_weak_machine(machine, weight) -> None
- add_weak_machine(machine, weights) -> None
Adds the given weak machine and its weight(s) to the list of weak machines.
Parameters:
  machine : a derivative of WeakMachine
    The weak machine to add.
  weight : float
    The weight for the machine (uni-variate).
  weights : float <#outputs>
    The weights for the machine (multi-variate).
alpha¶
float <#machines,#outputs> <– The weights for the weak machines.
feature_indices([start[, end]]) → indices¶
Returns the feature indices that are used by the selected weak machines.
Parameters:
  start : int
    The first machine index to get the indices for; defaults to 0.
  end : int
    The last machine index + 1 to get the indices for; defaults to -1, which corresponds to the last machine + 1.
Returns:
  indices : array_like <int32>
    The feature indices required by the selected machines.
forward()¶
- forward(features) -> prediction
- forward(features, predictions) -> None
- forward(features, predictions, labels) -> None
Returns the prediction for the given feature vector(s).
Note
The __call__ function is an alias for this function.
This function can be called in six different ways:
- (uint16 <#inputs>) will compute and return the uni-variate prediction for a single feature vector.
- (uint16 <#samples,#inputs>, float <#samples>) will compute the uni-variate prediction for several feature vectors.
- (uint16 <#samples,#inputs>, float <#samples>, float <#samples>) will compute the uni-variate prediction and the labels for several feature vectors.
- (uint16 <#inputs>, float <#outputs>) will compute the multi-variate prediction for a single feature vector.
- (uint16 <#samples,#inputs>, float <#samples,#outputs>) will compute the multi-variate prediction for several feature vectors.
- (uint16 <#samples,#inputs>, float <#samples,#outputs>, float <#samples,#outputs>) will compute the multi-variate prediction and the labels for several feature vectors.
Parameters:
  features : uint16 <#inputs> or uint16 <#samples, #inputs>
    The feature vector(s) the prediction should be computed for.
  predictions : float <#samples> or float <#outputs> or float <#samples, #outputs>
    The predicted values – see below.
  labels : float <#samples> or float <#samples, #outputs>
    The predicted labels:
    - for the uni-variate case, -1 or +1 is assigned according to threshold 0
    - for the multi-variate case, +1 is assigned for the highest value, and 0 for all others
Returns:
  prediction : float
    The predicted value – in case a single feature is provided and a single output is required.
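A small sketch of the batch call with preallocated output arrays (the weak machine and the feature values are illustrative; the shapes follow the call table above):

    import numpy
    import bob.learn.boosting

    strong = bob.learn.boosting.BoostedMachine()
    strong.add_weak_machine(bob.learn.boosting.LUTMachine(numpy.ones(16), 0), 0.5)

    samples = numpy.random.randint(0, 16, size=(5, 3)).astype(numpy.uint16)
    predictions = numpy.zeros(5)   # float <#samples>
    labels = numpy.zeros(5)        # float <#samples>

    # uni-variate batch call: fills predictions and labels in place
    strong.forward(samples, predictions, labels)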
indices¶
int <#machines,#outputs> <– The indices into the feature vector required by all of the weak machines.
load(hdf5) → None¶
Loads the strong machine from the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to load this machine from.
outputs¶
int <– The number of outputs; for uni-variate classifiers always 1.
save(hdf5) → None¶
Saves the content of this machine to the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to save this machine to.
weak_machines¶
[WeakMachine] <– The list of weak machines stored in this strong machine.
weights¶
float <#machines,#outputs> <– The weights for the weak machines.
class bob.learn.boosting.Boosting(weak_trainer, loss_function)[source]¶
The class to boost the features from a set of training samples.
It iteratively adds new weak models to assemble a strong classifier. In each round of iteration a weak machine is learned by optimizing a differentiable function.
Constructor Documentation
Keyword parameters
- weak_trainer : bob.learn.boosting.LUTTrainer or bob.learn.boosting.StumpTrainer
  The class to train weak machines.
- loss_function : a class derived from bob.learn.boosting.LossFunction
  The function to define the weights for the weak machines.
train(training_features, training_targets, number_of_rounds=20, boosted_machine=None)[source]¶
The function to train a boosting machine.
The function boosts the training features and returns a strong classifier as a weighted combination of weak classifiers.
Keyword parameters:
- training_features : uint16 <#samples, #features> or float <#samples, #features>
  Features extracted from the training samples.
- training_targets : float <#samples, #outputs>
  The values that the boosted classifier should reach for the given samples.
- number_of_rounds : int
  The number of rounds of boosting, i.e., the number of weak classifiers to select.
- boosted_machine : bob.learn.boosting.BoostedMachine or None
  The machine to add the weak machines to. If not given, a new machine is created.
- Returns : bob.learn.boosting.BoostedMachine
  The boosted machine that is a combination of the weak classifiers.
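A usage sketch (the feature range, the number of features and samples, and the +/-1 target encoding are illustrative assumptions for a uni-variate task):

    import numpy
    import bob.learn.boosting

    # toy uni-variate training data: 100 samples with 20 integer features in [0, 256)
    features = numpy.random.randint(0, 256, size=(100, 20)).astype(numpy.uint16)
    targets = numpy.random.choice([-1., 1.], size=(100, 1))

    booster = bob.learn.boosting.Boosting(
        bob.learn.boosting.LUTTrainer(256),
        bob.learn.boosting.LogitLoss())

    strong = booster.train(features, targets, number_of_rounds=10)
    print(strong(features[0]))   # prediction for the first training sample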
class bob.learn.boosting.ExponentialLoss[source]¶
Bases: bob.learn.boosting.LossFunction
The class implements the exponential loss function for the boosting framework.
loss(targets, scores)[source]¶
The function computes the exponential loss values using prediction scores and targets. It can be used in classification tasks, e.g., in combination with the StumpTrainer.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- (float <#samples, #outputs>): The loss values for the samples, always >= 0
loss_gradient(targets, scores)[source]¶
The function computes the gradient of the exponential loss function using prediction scores and targets.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- loss (float <#samples, #outputs>): The gradient of the loss based on the given scores and targets.
loss_gradient_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the gradient as the sum of the derivatives per sample, which is used to find the optimized value of alpha.
The function computes the sum of the loss gradients, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
- Returns
- (float <#outputs>) The sum of the loss gradient for the current value of the alpha.
loss_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the sum of the loss, which is used to find the optimized value of alpha.
The function computes the sum of the loss values, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
Returns
(float <#outputs>) The sum of the loss values for the current value of the alpha
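A small sketch of evaluating the loss and its gradient directly (the target and score values are arbitrary):

    import numpy
    import bob.learn.boosting

    loss = bob.learn.boosting.ExponentialLoss()
    targets = numpy.array([[+1.], [-1.]])
    scores = numpy.array([[0.8], [0.3]])

    print(loss.loss(targets, scores))            # per-sample loss values, always >= 0
    print(loss.loss_gradient(targets, scores))   # per-sample gradients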
class bob.learn.boosting.JesorskyLoss¶
Bases: LossFunction
Computes the Jesorsky loss and its derivative.
The Jesorsky loss defines an error function between the target vectors and the currently achieved scores. It is specifically designed to perform regression in a facial feature localization (FFL) task. It assumes that the feature vector consists of facial landmark positions, which are given in the following way:
  \vec{x} = [y_0, x_0, y_1, x_1, \ldots, y_{K+1}, x_{K+1}]
where (y_0, x_0) is the right eye landmark, (y_1, x_1) is the left eye landmark and all other landmarks (a total of K landmarks) are following.
The error between target vector \vec{a} and test vector \vec{b} is computed as the average landmark-wise Euclidean distance, normalized by the inter-eye-distance of the target vector:
  J(\vec{a}, \vec{b}) = \frac{1}{(K+2)\,d_{eye}(\vec{a})} \sum_{i=0}^{K+1} \sqrt{(b_{2i} - a_{2i})^2 + (b_{2i+1} - a_{2i+1})^2},
  \qquad d_{eye}(\vec{a}) = \sqrt{(a_0 - a_2)^2 + (a_1 - a_3)^2}
The derivative is a bit more complicated. First, the error is computed for each landmark i:
  d_i = \sqrt{(b_{2i} - a_{2i})^2 + (b_{2i+1} - a_{2i+1})^2}
and then the derivative is computed for each element of the test vector \vec{b}:
  \frac{\partial J}{\partial b_{2i}} = \frac{b_{2i} - a_{2i}}{(K+2)\,d_{eye}(\vec{a})\,d_i}, \qquad
  \frac{\partial J}{\partial b_{2i+1}} = \frac{b_{2i+1} - a_{2i+1}}{(K+2)\,d_{eye}(\vec{a})\,d_i}
Constructor Documentation:
JesorskyLoss ()
Initializes a JesorskyLoss object.
The constructor comes with no parameters.
Class Members:
loss(targets, scores) → errors¶
Computes the Jesorsky error between the targets and the scores.
This function computes the Jesorsky error between all given targets and samples, using the loss formula as explained above in JesorskyLoss.
Parameters:
  targets : float <#samples, #outputs>
    The target values that should be achieved during boosting.
  scores : float <#samples, #outputs>
    The score values that are currently achieved.
Returns:
  errors : float <#samples, 1>
    The resulting Jesorsky errors for each target.
loss_gradient(targets, scores) → gradient¶
Computes the derivative of the Jesorsky error between the targets and the scores.
This function computes the derivative of the Jesorsky error between all given targets and samples, using the loss formula as explained above in JesorskyLoss.
Parameters:
  targets : float <#samples, #outputs>
    The target values that should be achieved during boosting.
  scores : float <#samples, #outputs>
    The score values that are currently achieved.
Returns:
  gradient : float <#samples, #outputs>
    The derivative of the Jesorsky error for each sample.
loss_gradient_sum(alpha, targets, previous_scores, current_scores) → gradient_sum¶
Computes the sum of the loss gradients computed between the targets and the scores.
This function is designed to be used with the L-BFGS method. It computes the new derivative of the loss based on the loss from the current strong classifier, adding the new weak machine with the currently selected weight alpha.
Parameters:
  alpha : float <#outputs>
    The weight for the current_scores that will be optimized in L-BFGS.
  targets : float <#samples, #outputs>
    The target values that should be achieved during boosting.
  previous_scores : float <#samples, #outputs>
    The score values that are achieved by the boosted machine after the previous boosting iteration.
  current_scores : float <#samples, #outputs>
    The score values that are achieved with the weak machine added in this boosting round.
Returns:
  gradient_sum : float <#outputs>
    The sum over the loss gradients for the newly combined strong classifier.
loss_sum(alpha, targets, previous_scores, current_scores) → loss_sum¶
Computes the sum of the losses computed between the targets and the scores.
This function is designed to be used with the L-BFGS method. It computes the new loss based on the loss from the current strong classifier, adding the new weak machine with the currently selected weight alpha.
Parameters:
  alpha : float <#outputs>
    The weight for the current_scores that will be optimized in L-BFGS.
  targets : float <#samples, #outputs>
    The target values that should be achieved during boosting.
  previous_scores : float <#samples, #outputs>
    The score values that are achieved by the boosted machine after the previous boosting iteration.
  current_scores : float <#samples, #outputs>
    The score values that are achieved with the weak machine added in this boosting round.
Returns:
  loss_sum : float <1>
    The sum over the loss values for the newly combined strong classifier.
class bob.learn.boosting.LUTMachine¶
Bases: WeakMachine
A weak machine that bases its decision on a look-up table.
Constructor Documentation:
- LUTMachine (look_up_table, index)
- LUTMachine (look_up_tables, indices)
- LUTMachine (hdf5)
Initializes a LUTMachine object
Parameters:
  look_up_table : float <#entries>
    The look-up table (for the uni-variate case).
  index : int
    The index into the feature vector (for the uni-variate case).
  look_up_tables : float <#entries,#outputs>
    The look-up tables, one for each output dimension (for the multi-variate case).
  indices : int <#outputs>
    The indices into the feature vector, one for each output dimension (for the multi-variate case).
  hdf5 : bob.io.base.HDF5File
    The HDF5 file object to read the weak classifier from.
Class Members:
feature_indices() → indices¶
Returns the feature index that will be used in this weak machine.
Returns:
  indices : int32 <1>
    The feature index required by this machine.
forward()¶
- forward(features) -> prediction
- forward(features, predictions) -> None
Returns the prediction for the given feature vector(s).
Note
The __call__ function is an alias for this function.
This function can be called in four different ways:
- (uint16 <#inputs>) will compute and return the uni-variate prediction for a single feature vector.
- (uint16 <#samples,#inputs>, float <#samples>) will compute the uni-variate prediction for several feature vectors.
- (uint16 <#inputs>, float <#outputs>) will compute the multi-variate prediction for a single feature vector.
- (uint16 <#samples,#inputs>, float <#samples,#outputs>) will compute the multi-variate prediction for several feature vectors.
Parameters:
  features : uint16 <#inputs> or uint16 <#samples, #inputs>
    The feature vector(s) the prediction should be computed for.
  predictions : float <#samples> or float <#outputs> or float <#samples, #outputs>
    The predicted values – see below.
Returns:
  prediction : float
    The predicted value – in case a single feature is provided and a single output is required.
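A small sketch of constructing and evaluating a uni-variate LUT machine (the table values and the feature index are illustrative):

    import numpy
    import bob.learn.boosting

    # a uni-variate LUT over feature values 0..7, applied to feature index 1
    lut = numpy.array([0., 0., 1., 1., 1., 0., 0., 0.])
    machine = bob.learn.boosting.LUTMachine(lut, 1)

    single = numpy.array([4, 2, 7], dtype=numpy.uint16)
    print(machine(single))   # looks up lut[2], i.e. 1.0

    batch = numpy.array([[4, 2, 7], [0, 6, 3]], dtype=numpy.uint16)
    predictions = numpy.zeros(2)
    machine(batch, predictions)   # batch call fills the predictions in place
    print(predictions)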
load(hdf5) → None¶
Loads the LUT machine from the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to load this weak machine from.
lut¶
float <#entries,#outputs> <– The look-up table associated with this object. In the uni-variate case, #outputs will be 1.
save(hdf5) → None¶
Saves the content of this machine to the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to save this weak machine to.
class bob.learn.boosting.LUTTrainer¶
Bases: object
A trainer for weak machines of type LUTMachine, which base their decision on a look-up table.
Constructor Documentation:
LUTTrainer (maximum_feature_value, [number_of_outputs, selection_style])
Initializes a LUTTrainer object
Parameters:
  maximum_feature_value : int
    The number of entries in the look-up tables.
  number_of_outputs : int
    The dimensionality of the output vector; defaults to 1 for the uni-variate case.
  selection_style : str
    The way features are selected; possible values: 'shared', 'independent'; only useful for the multi-variate case; defaults to 'independent'.
Class Members:
number_of_labels¶
uint16 <– The highest feature value + 1, i.e., the number of entries in the LUT.
number_of_outputs¶
int <– The dimensionality of the output vector (1 for the uni-variate case).
selection_type¶
str <– The style for selecting features (valid for the multi-variate case only).
train(training_features, loss_gradient) → lut_machine¶
Trains and returns a weak LUT machine.
Parameters:
  training_features : uint16 <#samples, #inputs>
    The feature vectors to train the weak machine.
  loss_gradient : float <#samples, #outputs>
    The gradient of the loss function for the training features.
Returns:
  lut_machine : bob.learn.boosting.LUTMachine
    The weak machine that is obtained in the current round of boosting.
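A usage sketch (here the loss gradient is random for illustration; in a boosting round it would come from one of the loss functions above):

    import numpy
    import bob.learn.boosting

    trainer = bob.learn.boosting.LUTTrainer(16)   # feature values in [0, 16)

    features = numpy.random.randint(0, 16, size=(50, 8)).astype(numpy.uint16)
    loss_gradient = numpy.random.randn(50, 1)     # uni-variate: a single output column

    lut_machine = trainer.train(features, loss_gradient)
    print(lut_machine.feature_indices())          # the feature index that was selected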
class bob.learn.boosting.LogitLoss[source]¶
Bases: bob.learn.boosting.LossFunction
The class to implement the logit loss function for the boosting framework.
loss(targets, scores)[source]¶
The function computes the logit loss values using prediction scores and targets.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- (float <#samples, #outputs>): The loss values for the samples, which is always >= 0
loss_gradient(targets, scores)[source]¶
The function computes the gradient of the logit loss function using prediction scores and targets.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- loss (float <#samples, #outputs>): The gradient of the loss based on the given scores and targets.
loss_gradient_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the gradient as the sum of the derivatives per sample, which is used to find the optimized value of alpha.
The function computes the sum of the loss gradients, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
- Returns
- (float <#outputs>) The sum of the loss gradient for the current value of the alpha.
loss_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the sum of the loss, which is used to find the optimized value of alpha.
The function computes the sum of the loss values, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
Returns
(float <#outputs>) The sum of the loss values for the current value of the alpha
class bob.learn.boosting.LossFunction¶
Bases: object
This is a base class for all loss functions implemented in pure Python. It is simply a Python re-implementation of the bob.learn.boosting.LossFunction class.
This class provides the interface for the L-BFGS optimizer. Please overwrite the loss() and loss_gradient() functions (see below) in derived loss classes.
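As a minimal sketch of a derived loss, the hypothetical squared-error loss below overwrites only loss() and loss_gradient(); the loss_sum() and loss_gradient_sum() methods are inherited from this base class:

    import bob.learn.boosting

    class SquaredErrorLoss(bob.learn.boosting.LossFunction):
        """A hypothetical example loss: (target - score)^2 per sample and output."""

        def loss(self, targets, scores):
            return (targets - scores) ** 2

        def loss_gradient(self, targets, scores):
            # derivative of the loss with respect to the scores
            return 2. * (scores - targets)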
loss(targets, scores)[source]¶
Computes the loss for the given targets and scores.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- (float <#samples, #outputs>) or (float <#samples, 1>): The loss based on the given scores and targets. Depending on the intended task, one of the two output variants should be chosen. For classification tasks, please use the former way (#samples, #outputs), while for regression tasks, use the latter (#samples, 1).
loss_gradient(targets, scores)[source]¶
Computes the gradient of the loss for the given targets and scores.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- loss (float <#samples, #outputs>): The gradient of the loss based on the given scores and targets.
loss_gradient_sum(alpha, targets, previous_scores, current_scores)[source]¶
The function computes the gradient as the sum of the derivatives per sample, which is used to find the optimized value of alpha.
The function computes the sum of the loss gradients, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
- Returns
- (float <#outputs>) The sum of the loss gradient for the current value of the alpha.
loss_sum(alpha, targets, previous_scores, current_scores)[source]¶
The function computes the sum of the loss, which is used to find the optimized value of alpha.
The function computes the sum of the loss values, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
Returns
(float <#outputs>) The sum of the loss values for the current value of the alpha
class bob.learn.boosting.StumpMachine¶
Bases: WeakMachine
A weak machine that bases its decision on comparing the given value to a threshold.
Constructor Documentation:
- StumpMachine (threshold, polarity, index)
- StumpMachine (hdf5)
Initializes a StumpMachine object.
Parameters:
  threshold : float
    The decision threshold.
  polarity : float
    -1 if positive values are below the threshold, +1 if positive values are above the threshold.
  index : int
    The index into the feature vector that is thresholded.
  hdf5 : bob.io.base.HDF5File
    The HDF5 file object to read the weak classifier from.
Class Members:
feature_indices() → indices¶
Returns the feature index that will be used in this weak machine.
Returns:
  indices : int32 <1>
    The feature index required by this machine.
forward()¶
- forward(features) -> prediction
- forward(features, predictions) -> None
Returns the prediction for the given feature vector(s).
Note
The __call__ function is an alias for this function.
Parameters:
  features : float <#inputs> or float <#samples, #inputs>
    The feature vector(s) the prediction should be computed for. If only a single feature is given, the resulting prediction is returned as a float. Otherwise it is stored in the second predictions parameter.
  predictions : float <#samples> or float <#samples, 1>
    The predicted values – in case several features are provided.
Returns:
  prediction : float
    The predicted value – in case a single feature is provided.
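A small sketch of constructing and evaluating a stump machine (the threshold, polarity and feature values are illustrative):

    import numpy
    import bob.learn.boosting

    # accept values of feature index 0 that lie above 3.5 (polarity +1)
    stump = bob.learn.boosting.StumpMachine(3.5, +1., 0)

    print(stump(numpy.array([5.0, 1.0])))   # single feature vector -> float

    samples = numpy.array([[5.0, 1.0], [2.0, 0.5]])
    predictions = numpy.zeros(2)
    stump(samples, predictions)             # batch call fills the predictions in place
    print(predictions)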
load(hdf5) → None¶
Loads the stump machine from the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to load this weak machine from.
polarity¶
float <– The polarity of the comparison: -1 if values lower than the threshold should be accepted, +1 otherwise.
save(hdf5) → None¶
Saves the content of this machine to the given HDF5 file.
Parameters:
  hdf5 : bob.io.base.HDF5File
    The HDF5 file to save this weak machine to.
threshold¶
float <– The threshold that the feature value will be compared with.
class bob.learn.boosting.StumpTrainer[source]¶
The class for training weak stump classifiers. The weak stump is parameterized by the threshold and the polarity.
compute_threshold(features, gradient)[source]¶
Computes the stump classifier threshold for a single feature.
The threshold is computed for the given feature values using the weak learner algorithm of Viola-Jones.
- Keyword parameters
features (float <#samples>): The feature values for a single index
gradient (float <#samples>): The negative loss gradient values for the training samples
- Returns a triplet containing:
  - threshold (float): the threshold that minimizes the error
  - polarity (float): the polarity or the direction used for stump classification
  - gain (float): the gain of the classifier
train(training_features, loss_gradient)[source]¶
Computes a weak stump machine.
The best weak machine is chosen to maximize the dot product of the outputs and the weights (gain). The weights are the negative of the loss gradient for the exponential loss.
- Keyword parameters
training_features (float <#samples, #features>): The training feature samples
loss_gradient (float <#samples>): The loss gradient values for the training samples
- Returns
- A (weak) bob.learn.boosting.StumpMachine
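A usage sketch (the features and the loss gradient are random for illustration; the 1D gradient shape follows the keyword parameter description above):

    import numpy
    import bob.learn.boosting

    trainer = bob.learn.boosting.StumpTrainer()

    features = numpy.random.randn(50, 8)      # float-valued training features
    loss_gradient = numpy.random.randn(50)    # illustrative gradient values
    stump = trainer.train(features, loss_gradient)
    print(stump.threshold, stump.polarity)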
class bob.learn.boosting.TangentialLoss[source]¶
Bases: bob.learn.boosting.LossFunction
Tangent loss function, as described in http://www.svcl.ucsd.edu/projects/LossDesign/TangentBoost.html.
loss(targets, scores)[source]¶
The function computes the tangential loss values using prediction scores and targets.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- (float <#samples, #outputs>): The loss values for the samples, always >= 0
loss_gradient(targets, scores)[source]¶
The function computes the gradient of the tangential loss function using prediction scores and targets.
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
- Returns
- loss (float <#samples, #outputs>): The gradient of the loss based on the given scores and targets.
loss_gradient_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the gradient as the sum of the derivatives per sample, which is used to find the optimized value of alpha.
The function computes the sum of the loss gradients, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
- Returns
- (float <#outputs>) The sum of the loss gradient for the current value of the alpha.
loss_sum(alpha, targets, previous_scores, current_scores)¶
The function computes the sum of the loss, which is used to find the optimized value of alpha.
The function computes the sum of the loss values, which is required during the line search step for the optimization of alpha. This function is given as the input for the L-BFGS optimization function.
Keyword parameters:
alpha (float): The current value of the alpha.
targets (float <#samples, #outputs>): The targets for the samples
previous_scores (float <#samples, #outputs>): The cumulative prediction scores of the samples until the previous round of the boosting.
current_scores (float <#samples, #outputs>): The prediction scores of the samples for the current round of the boosting.
Returns
(float <#outputs>) The sum of the loss values for the current value of the alpha
class bob.learn.boosting.WeakMachine¶
Bases: object
Pure virtual base class for weak machines.
Class Members:
bob.learn.boosting.weighted_histogram(features, weights, histogram) → None¶
Computes a weighted histogram from the given features.
Parameters:
  features : array_like <1D, uint16>
    The vector of features to compute a histogram for.
  weights : array_like <1D, float>
    The vector of weights; must be of the same size as the features.
  histogram : array_like <1D, float>
    The histogram that will be filled.
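A small sketch (the feature and weight values are arbitrary; the histogram must be preallocated and large enough to cover the feature range):

    import numpy
    import bob.learn.boosting

    features = numpy.array([0, 2, 2, 5], dtype=numpy.uint16)
    weights = numpy.array([0.5, 1.0, 0.25, 2.0])
    histogram = numpy.zeros(8)

    bob.learn.boosting.weighted_histogram(features, weights, histogram)
    print(histogram)   # bin 0: 0.5, bin 2: 1.0 + 0.25, bin 5: 2.0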