Efficient Multi-label Classification with Many Labels

Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):405-413, 2013.

Abstract

Multi-label classification deals with the problem where each instance can be associated with a set of class labels. However, in many real-world applications, the number of class labels can be in the hundreds or even thousands, and existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are either based on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by randomized sampling where the sampling probability of each class label reflects its importance among all the labels. Theoretical analysis shows that this randomized sampling approach is highly efficient. Experiments on a number of real-world multi-label datasets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.
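The abstract's core idea is to pick a small subset of labels by randomized sampling, with each label's sampling probability reflecting its importance. As a minimal illustrative sketch (not the paper's exact method), one common importance score in randomized column-subset selection is the squared column norm of the label matrix, so frequent labels are more likely to be kept; the function name and scoring rule below are assumptions for illustration:

```python
import numpy as np

def sample_label_subset(Y, k, rng=None):
    """Sample k label (column) indices from an n x L binary label matrix Y.

    Columns are drawn without replacement with probability proportional to
    their squared Euclidean norm -- a standard importance score in randomized
    column-subset selection.  The paper's actual sampling distribution may
    differ; this is only a sketch of the general technique.
    """
    rng = np.random.default_rng(rng)
    scores = (Y ** 2).sum(axis=0).astype(float)  # squared column norms
    probs = scores / scores.sum()                # normalize to a distribution
    return rng.choice(Y.shape[1], size=k, replace=False, p=probs)

# Toy example: 6 instances, 5 labels; more frequent labels are favored.
Y = np.array([[1, 0, 0, 1, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 1, 0, 0, 0]])
idx = sample_label_subset(Y, k=2, rng=0)
```

The selected columns can then approximately span the original label space, and classifiers need only be trained on the sampled labels, which is where the computational savings come from.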

Related Material

@InProceedings{pmlr-v28-bi13,
title = {Efficient Multi-label Classification with Many Labels},
author = {Wei Bi and James Kwok},
booktitle = {Proceedings of the 30th International Conference on Machine Learning},
pages = {405--413},
year = {2013},
editor = {Sanjoy Dasgupta and David McAllester},
volume = {28},
number = {3},
series = {Proceedings of Machine Learning Research},
address = {Atlanta, Georgia, USA},
month = {17--19 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v28/bi13.pdf},
url = {http://proceedings.mlr.press/v28/bi13.html},
abstract = {Multi-label classification deals with the problem where each instance can be associated with a set of class labels. However, in many real-world applications, the number of class labels can be in the hundreds or even thousands, and existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are either based on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by randomized sampling where the sampling probability of each class label reflects its importance among all the labels. Theoretical analysis shows that this randomized sampling approach is highly efficient. Experiments on a number of real-world multi-label datasets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.}
}

%0 Conference Paper
%T Efficient Multi-label Classification with Many Labels
%A Wei Bi
%A James Kwok
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-bi13
%I PMLR
%J Proceedings of Machine Learning Research
%P 405--413
%U http://proceedings.mlr.press/v28/bi13.html
%V 28
%N 3
%W PMLR
%X Multi-label classification deals with the problem where each instance can be associated with a set of class labels. However, in many real-world applications, the number of class labels can be in the hundreds or even thousands, and existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are either based on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by randomized sampling where the sampling probability of each class label reflects its importance among all the labels. Theoretical analysis shows that this randomized sampling approach is highly efficient. Experiments on a number of real-world multi-label datasets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.

TY - CPAPER
TI - Efficient Multi-label Classification with Many Labels
AU - Wei Bi
AU - James Kwok
BT - Proceedings of the 30th International Conference on Machine Learning
PY - 2013/02/13
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-bi13
PB - PMLR
SP - 405
DP - PMLR
EP - 413
L1 - http://proceedings.mlr.press/v28/bi13.pdf
UR - http://proceedings.mlr.press/v28/bi13.html
AB - Multi-label classification deals with the problem where each instance can be associated with a set of class labels. However, in many real-world applications, the number of class labels can be in the hundreds or even thousands, and existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are either based on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by randomized sampling where the sampling probability of each class label reflects its importance among all the labels. Theoretical analysis shows that this randomized sampling approach is highly efficient. Experiments on a number of real-world multi-label datasets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.
ER -