Abstract: Locally adaptive classifiers are usually superior to a single global classifier. However, designing locally adaptive classifiers raises two major problems: first, how to place the local classifiers, and, second, how to combine them. In this paper, instead of placing the classifiers based on the data distribution alone, we propose a {\em responsibility mixture model\/} that uses the uncertainty associated with the classification of each training sample. Under this model, the local classifiers are placed near the decision boundary, where they are most effective. The local classifiers are then combined into a global classifier by maximizing an estimate of the probability that the samples will be correctly classified by a nearest neighbor classifier. Experimental results on both artificial and real-world data sets demonstrate its superiority over traditional algorithms.
Proceedings of the Twenty-Third International Conference on Machine Learning (ICML-2006), pp. 225–232, Pittsburgh, USA, June 2006.
PDF: http://www.cs.ust.hk/~jamesk/papers/icml06b.pdf
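To make the abstract's high-level recipe concrete, here is a minimal Python sketch of the general idea: estimate a per-sample classification uncertainty, place local classifiers where that uncertainty is high (i.e., near the decision boundary), and combine their outputs by responsibility weights. This is not the paper's responsibility mixture model; the k-NN label-disagreement uncertainty, the Gaussian responsibilities, and the nearest-class-mean local classifiers (knn_uncertainty, place_centers, fit_local, predict) are all illustrative assumptions standing in for the method the paper actually learns.

```python
import numpy as np

def knn_uncertainty(X, y, k=10):
    """Uncertainty proxy: fraction of a sample's k nearest neighbors
    whose label disagrees with the sample's own label. Samples near
    the decision boundary get high values."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude the sample itself
    nn = np.argsort(d, axis=1)[:, :k]
    return (y[nn] != y[:, None]).mean(axis=1)

def place_centers(X, u, m=5, seed=0):
    """Draw m classifier centers with probability roughly proportional
    to uncertainty, so they concentrate near the decision boundary."""
    rng = np.random.default_rng(seed)
    p = u + 1e-3                         # smooth to keep all samples eligible
    p /= p.sum()
    return X[rng.choice(len(X), size=m, replace=False, p=p)]

def fit_local(X, y, centers, sigma=1.0):
    """For each center, fit a kernel-weighted nearest-class-mean
    classifier (an illustrative stand-in for the local classifiers)."""
    classes = np.unique(y)
    models = []
    for c in centers:
        w = np.exp(-((X - c) ** 2).sum(axis=1) / (2 * sigma ** 2))
        means = np.stack([
            (w[y == k, None] * X[y == k]).sum(0) / (w[y == k].sum() + 1e-12)
            for k in classes
        ])
        models.append(means)
    return classes, models

def predict(Xq, centers, classes, models, sigma=1.0):
    """Combine the local classifiers, weighting each one's vote by its
    normalized kernel responsibility for the query point."""
    resp = np.exp(-((Xq[:, None, :] - centers[None]) ** 2).sum(axis=2)
                  / (2 * sigma ** 2))
    resp /= resp.sum(axis=1, keepdims=True) + 1e-12
    votes = np.zeros((len(Xq), len(classes)))
    for j, means in enumerate(models):
        local = np.linalg.norm(Xq[:, None, :] - means[None], axis=2).argmin(1)
        votes[np.arange(len(Xq)), local] += resp[:, j]
    return classes[votes.argmax(axis=1)]

# Toy usage: two classes separated by the linear boundary x1 + x2 = 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
u = knn_uncertainty(X, y)
centers = place_centers(X, u, m=5)
classes, models = fit_local(X, y, centers)
print((predict(X, centers, classes, models) == y).mean())  # training accuracy
```

On this toy problem the uncertainty-weighted sampling pulls the centers toward the separating hyperplane, which is the placement behavior the abstract argues for; the paper itself learns both the placement and the combination jointly rather than with the fixed heuristics used here.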