
Abstract

Recently, the fields of machine learning, pattern recognition, and data mining have witnessed a new research stream: learning with partial supervision (LPS), also known as semi-supervised learning. This learning scheme is motivated by the fact that acquiring label information for data can be quite costly and is sometimes prone to mislabeling. The general spectrum of learning from data is depicted in Figure 1. As shown, in many situations the data is neither perfectly nor completely labeled.

LPS aims at using the available labeled samples to guide the process of building classification and clustering machinery and to help boost its accuracy. Basically, LPS combines two learning paradigms, supervised and unsupervised, where the former deals exclusively with labeled data and the latter is concerned with unlabeled data. Hence the following questions arise:

Can we improve supervised learning with unlabeled data?

Can we guide unsupervised learning by incorporating a few labeled samples?

Because LPS is still a young but active research field, it lacks a survey outlining the existing approaches and research trends. In this chapter, we take a step towards such an overview. We discuss (i) the background of LPS, (ii) the main focus of our LPS research and the underlying assumptions behind LPS, and (iii) future directions and challenges of LPS research.

Introduction

Recently, the fields of machine learning, pattern recognition, and data mining have witnessed a new research stream: learning with partial supervision (LPS), also known as semi-supervised learning. This learning scheme is motivated by the fact that acquiring label information for data can be quite costly and is sometimes prone to mislabeling. The general spectrum of learning from data is depicted in Figure 1. As shown, in many situations the data is neither perfectly nor completely labeled.

Figure 1.

Learning from data spectrum

LPS aims at using the available labeled samples to guide the process of building classification and clustering machinery and to help boost its accuracy. Basically, LPS combines two learning paradigms, supervised and unsupervised, where the former deals exclusively with labeled data and the latter is concerned with unlabeled data. Hence the following questions arise:

• Can we improve supervised learning with unlabeled data?

• Can we guide unsupervised learning by incorporating a few labeled samples?
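The second question can be made concrete with a small sketch (an illustration, not an algorithm from this chapter): a seeded variant of k-means in which each cluster centroid is initialized from the few labeled samples of one class, so that the resulting clusters inherit the known class identities. The function name `seeded_kmeans` and its parameters are assumptions made for this example.

```python
import numpy as np

def seeded_kmeans(X, seed_X, seed_y, n_iter=10):
    """K-means guided by a few labeled seeds: one centroid per known class,
    initialized from the mean of that class's labeled samples."""
    classes = np.unique(seed_y)
    centroids = np.array([seed_X[seed_y == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # assign every point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # recompute centroids from the current assignment
        for k in range(len(classes)):
            members = X[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return assign, centroids
```

Because the centroids start at class-specific positions, cluster `k` corresponds to class `classes[k]`, which is how the few labeled samples "guide" an otherwise unsupervised procedure.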

Because LPS is still a young but active research field, it lacks a survey outlining the existing approaches and research trends. In this chapter, we take a step towards such an overview. We discuss (i) the background of LPS, (ii) the main focus of our LPS research and the underlying assumptions behind LPS, and (iii) future directions and challenges of LPS research.

Background

LPS is about devising algorithms that combine labeled and unlabeled data in a symbiotic way in order to boost classification accuracy. The scenario is portrayed in Fig. 2, which shows that the combination can be done mainly in two ways: active/passive pre-labeling, or via ‘pure’ LPS (Fig. 4). We try to draw a clear picture of these schemes by means of an up-to-date taxonomy of methods.

Figure 2.

Combining labeled and unlabeled data

Figure 4.

Combining labeled and unlabeled data

Active and Passive Pre-Labeling

Pre-labeling aims at assigning a label to unlabeled samples (called queries). These samples are then used together with the originally labeled samples to train a fully supervised classifier (Fig. 3). “Passive” pre-labeling means that pre-labeling is done automatically; it is referred to as selective sampling or self-training. It has been extensively discussed and consists of first training a classifier and then using it to label the unlabeled data (for more details see Bouchachia, 2007). Various algorithms have been used to perform selective sampling, such as multilayer perceptrons (Verikas et al., 2001), self-organizing maps (Dara et al., 2002), and clustering techniques (Bouchachia, 2005a). In active learning, on the other hand, queries are sequentially submitted to an oracle for labeling. Different models have been applied, such as neural network inversion (Baum, 1991), decision trees (Wiratunga et al., 2003), and query by committee (Freund & Schapire, 1997).
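The self-training loop described above (train a classifier, use it to pre-label unlabeled data, retrain) can be sketched as follows. This is a minimal illustration with assumed choices: a logistic-regression base classifier, synthetic data, and a 0.9 confidence threshold for accepting pre-labels, none of which are prescribed by the chapter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data; only 10 labeled samples per class to start with.
X, y = make_classification(n_samples=200, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[np.flatnonzero(y == 0)[:10]] = True
labeled[np.flatnonzero(y == 1)[:10]] = True
labels = np.where(labeled, y, -1)  # -1 marks "unlabeled"

clf = LogisticRegression(max_iter=1000)
for _ in range(5):
    # 1) train on everything currently labeled
    clf.fit(X[labeled], labels[labeled])
    # 2) pre-label the queries the classifier is most confident about
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.9  # assumed threshold
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    labels[idx] = clf.predict(X[idx])
    labeled[idx] = True  # 3) absorb them and repeat
```

Each pass grows the labeled pool with the classifier's own high-confidence predictions, which is exactly what makes self-training "passive": no oracle is consulted, in contrast to the active-learning setting where queries go to a human labeler.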