Active classification using belief functions and information gain maximization
by Thomas Reineking
Abstract:
Obtaining reliable estimates of the parameters of a probabilistic classification model is often a challenging problem because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem where features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task where we compare different evidential and Bayesian methods for obtaining likelihoods from training data, and we investigate the influence of different uncertainty measures on the feature selection process.
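To illustrate the sequential feature-selection principle the abstract describes, here is a minimal sketch of information-gain-driven feature selection. It is a simplified Bayesian stand-in, not the paper's method: it uses Shannon entropy over ordinary probability distributions rather than the belief-function uncertainty measures the paper considers, and all function names and example numbers are illustrative.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def posterior(prior, likelihoods):
    """Bayes update: prior over classes combined with observation likelihoods."""
    unnorm = [pr * lk for pr, lk in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def expected_information_gain(prior, feature_likelihoods):
    """Expected entropy reduction from observing one feature.

    feature_likelihoods[v][c] = P(feature takes value v | class c).
    """
    h_prior = entropy(prior)
    eig = 0.0
    for likelihoods in feature_likelihoods:
        # Predictive probability of seeing this feature value.
        p_v = sum(pr * lk for pr, lk in zip(prior, likelihoods))
        if p_v > 0:
            eig += p_v * (h_prior - entropy(posterior(prior, likelihoods)))
    return eig

def select_feature(prior, features):
    """Return the index of the feature with maximal expected information gain."""
    return max(range(len(features)),
               key=lambda i: expected_information_gain(prior, features[i]))
```

For example, with two equiprobable classes, a binary feature whose value distribution is identical across classes yields zero expected gain, while a feature with class-dependent likelihoods (say 0.9 vs. 0.1) is preferred; in the paper this selection loop runs over belief distributions instead of probabilities, so different uncertainty measures can drive the choice.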
Reference:
Active classification using belief functions and information gain maximization (Thomas Reineking), In International Journal of Approximate Reasoning, Elsevier, volume 72, 2016.
Bibtex Entry:
@Article{reineking2016active,
  author    = {Reineking, Thomas},
  title     = {Active classification using belief functions and information gain maximization},
  journal   = {International Journal of Approximate Reasoning},
  year      = {2016},
  volume    = {72},
  pages     = {43--54},
  month     = may,
  abstract  = {Obtaining reliable estimates of the parameters of a probabilistic classification model is often a challenging problem because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem where features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task where we compare different evidential and Bayesian methods for obtaining likelihoods from training data, and we investigate the influence of different uncertainty measures on the feature selection process.},
  doi       = {10.1016/j.ijar.2015.12.005},
  publisher = {Elsevier},
  url       = {http://dx.doi.org/10.1016/j.ijar.2015.12.005},
}