Abstract: This paper focuses on kernel methods for multi-instance (MI) learning. Existing methods require the prediction of a bag to be identical to the maximum of the predictions of its individual instances. However, this is too restrictive, as only the sign is important in classification. In this paper, we provide a more complete regularization framework for MI learning by allowing the use of different loss functions between the outputs of a bag and its associated instances. This is especially important as we generalize the framework to multi-instance regression. Moreover, both bag and instance information can now be directly used in the optimization. Instead of using heuristics to solve the resultant nonlinear optimization problem, we use the constrained concave-convex procedure, which has well-studied convergence properties. Experiments on both classification and regression data sets show that the proposed method leads to improved performance.
Proceedings of the Twenty-Third International Conference on Machine Learning (ICML-2006), Pittsburgh, USA, June 2006.
PDF: http://www.cs.ust.hk/~jamesk/papers/icml06a.pdf
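The abstract refers to the constrained concave-convex procedure (CCCP), which optimizes a difference-of-convex objective by repeatedly linearizing the concave part and minimizing the resulting convex surrogate. The sketch below is only a minimal illustration of that generic idea on a toy one-dimensional objective; the objective, function names, and use of SciPy are assumptions for illustration and do not reproduce the paper's MI formulation.

```python
# Illustrative sketch of the concave-convex procedure (CCCP) on a toy
# difference-of-convex objective f(x) = u(x) - v(x). This is NOT the paper's
# MI formulation; the objective and helper names here are hypothetical,
# chosen only to show how the concave part is linearized at each step.
import numpy as np
from scipy.optimize import minimize

def u(x):        # convex part: x^4
    return np.sum(x ** 4)

def v(x):        # convex function being subtracted (so -v is concave): 2 x^2
    return 2.0 * np.sum(x ** 2)

def grad_v(x):   # gradient of v, used to linearize -v at the current iterate
    return 4.0 * x

def cccp(x0, n_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_v(x)
        # Convex surrogate: keep u, replace v by its first-order expansion at x.
        surrogate = lambda z: u(z) - (v(x) + g @ (z - x))
        x = minimize(surrogate, x).x  # each subproblem is convex
    return x

print(cccp(np.array([0.3])))  # approaches a stationary point of x^4 - 2x^2 (x = 1)
```

Each iteration solves an upper-bounding convex problem, so the original objective is non-increasing along the iterates, which is the convergence property the abstract alludes to.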