5. Active Learning

(placeholder)

5.1. Main Active Learning Routine

(placeholder)

5.2. Heuristics

(placeholder)

5.2.1. Random Benchmark

(placeholder)

5.2.2. Uncertainty Sampling

(placeholder)

5.2.3. Query by Bagging

The Kullback-Leibler divergence of Q from P is defined as

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}$$

The KL divergence measures the amount of information lost when $Q$ is used to approximate $P$. In the query-by-bagging context, $Q$ is the committee's average prediction probability (the consensus), while $P$ is the prediction of an individual committee member; averaging the divergences over all members yields a disagreement score, and the unlabeled examples with the highest disagreement are queried for labeling.
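As a concrete illustration, the disagreement score can be sketched as follows. This is a minimal NumPy sketch, not the text's reference implementation; the function names (`kl_divergence`, `qbb_disagreement`) and the clipping constant `eps` are our own choices, the latter added to avoid `log(0)`.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) = sum_i P(i) * log(P(i) / Q(i))."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def qbb_disagreement(member_probs):
    """Mean KL divergence of each committee member's prediction (P)
    from the committee's average prediction (Q, the consensus).

    member_probs: array of shape (n_members, n_classes), each row a
    class-probability distribution predicted by one committee member.
    """
    consensus = member_probs.mean(axis=0)  # Q: average over the committee
    return float(np.mean([kl_divergence(p, consensus) for p in member_probs]))

# A committee of three members predicting over two classes.
committee = np.array([[0.9, 0.1],
                      [0.6, 0.4],
                      [0.2, 0.8]])
print(qbb_disagreement(committee))  # larger value => more disagreement
```

In an active-learning loop, `qbb_disagreement` would be evaluated for every unlabeled example, and the example maximizing it would be sent to the oracle.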