Moderating the outputs of support vector machine classifiers
James T. Kwok
Abstract:
In this paper, we extend the use of moderated outputs to the support vector machine (SVM) by making use of a relationship between the SVM and the evidence framework. The moderated output is more in line with the Bayesian idea that the posterior weight distribution should be taken into account upon prediction, and it also alleviates the usual tendency of assigning overly high confidence to the estimated class memberships of the test patterns. Moreover, the moderated output derived here can be taken as an approximation to the posterior class probability. Hence, meaningful rejection thresholds can be assigned and outputs from several networks can be directly compared. Experimental results on both artificial and real-world data are also discussed.
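The moderation idea sketched in the abstract can be illustrated with MacKay's evidence-framework formula, in which the network activation is scaled by kappa(s^2) = (1 + pi s^2 / 8)^(-1/2) before the sigmoid, so that high predictive variance pulls the output probability toward 0.5. The following is a minimal sketch, not the paper's implementation: the function names are illustrative, and the scalar variance passed in stands for whatever predictive variance the evidence framework assigns to the SVM activation.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, mapping an activation to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def moderated_output(activation, variance):
    """MacKay-style moderated output (illustrative, not the paper's code).

    The activation is attenuated by kappa(s^2) = (1 + pi*s^2/8)^(-1/2),
    so a large predictive variance shrinks the resulting probability
    toward 0.5, i.e. toward "don't know".
    """
    kappa = 1.0 / np.sqrt(1.0 + np.pi * variance / 8.0)
    return sigmoid(kappa * activation)

# With zero variance the moderated output equals the plain sigmoid;
# with large variance the same activation yields a less confident
# probability, closer to 0.5.
a = 2.0
confident = moderated_output(a, 0.0)    # same as sigmoid(2.0)
moderated = moderated_output(a, 50.0)   # pulled toward 0.5
```

Because the moderated output approximates a posterior class probability, a rejection threshold (e.g. reject if the probability lies in [0.4, 0.6]) becomes meaningful, which plain unmoderated SVM outputs do not support.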
Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 943-948, Washington, DC, USA, July 1999.
Postscript:
http://www.cs.ust.hk/~jamesk/papers/ijcnn99.ps.gz