14-1 Introduction

This chapter introduces methods for melody recognition. There are many different types of melody recognition, but this chapter focuses on melody recognition for query by singing/humming (QBSH), an intuitive interface for karaoke users who cannot recall the title or artist of the song they want to sing. Other applications include intelligent toys, on-the-fly scoring for karaoke, edutainment tools for singing and language learning, and so on.

A QBSH system consists of three essential components:

Input data: The system can take two types of inputs:

Acoustic input: This includes singing, humming, or whistling from the user. For such input, the system needs to compute the corresponding pitch contour and, optionally, convert it into a note vector for further comparison.

Symbolic input: This is a music-note representation entered directly by the user. Strictly speaking, this is not part of QBSH since no singing or humming is involved, but the computation procedure is much the same, as detailed later.
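The first step for acoustic input is converting the extracted pitch contour from Hz into a musically meaningful scale. A minimal sketch of this conversion, using the standard MIDI semitone formula (the function name and the treatment of unvoiced frames as zeros are illustrative choices, not prescribed by the text):

```python
import math

def hz_to_semitone(freq):
    """Convert a frequency in Hz to a MIDI-style semitone number.

    Uses the standard mapping semitone = 69 + 12*log2(freq/440),
    where semitone 69 is A4 (440 Hz). Unvoiced frames, commonly
    represented as freq <= 0, are mapped to 0 here.
    """
    if freq <= 0:
        return 0.0
    return 69 + 12 * math.log2(freq / 440.0)

# A pitch contour from singing, one value per frame:
contour_hz = [0, 220.0, 221.5, 261.63, 0, 329.63]
contour_semitone = [hz_to_semitone(f) for f in contour_hz]
```

On this scale, consecutive integers are one semitone apart, so pitch differences directly reflect musical intervals, which makes the later comparison steps key-independent after a simple shift.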

Song database: This hosts the collection of songs to be compared against. For simplicity, most song databases contain symbolic music whose music scores can be extracted reliably. Typical examples of symbolic music are MIDI files, which can be classified into two types:

Monophonic music: At any given time, only a single voice of an instrument can be heard.

Polyphonic music: Multiple voices can be heard at the same time. Most pop music is of this type.

Most melody recognition systems use monophonic symbolic music for their databases.
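For comparison against a frame-based query, each monophonic database song can be expanded from its note representation into a pitch vector. A minimal sketch, assuming notes are given as (semitone, duration-in-seconds) pairs and the frame rate is configurable (both assumptions are illustrative, not a fixed file format):

```python
def notes_to_pitch_vector(notes, frame_rate=8):
    """Expand (semitone, duration_in_seconds) note pairs into a
    frame-based pitch vector sampled at `frame_rate` points/second."""
    pitch_vector = []
    for semitone, duration in notes:
        n_frames = round(duration * frame_rate)
        pitch_vector.extend([semitone] * n_frames)
    return pitch_vector

# Opening of "Twinkle Twinkle Little Star": C4 C4 G4 G4, half a second each
song = [(60, 0.5), (60, 0.5), (67, 0.5), (67, 0.5)]
frames = notes_to_pitch_vector(song, frame_rate=8)  # 4 frames per note
```

This expansion is also why frame-based matching costs more: a half-second note becomes 4 frames at 8 points/second but 32 frames at 64 points/second.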

Theoretically it is possible to use polyphonic audio music (such as MP3 files) to construct the song database. However, reliably extracting the dominant pitch (for instance, the vocal melody of a pop song) from polyphonic audio remains a tough and challenging problem. (By analogy, it is like trying to identify the objects swimming in a pond from the waveforms observed at only two points on the surface.)

Methods for comparison: There are at least three types of methods for comparing the input pitch representation with the songs in the database:

Note-based methods: The input is converted into a note vector and compared with each song in the database. These methods are efficient since a note vector is much shorter than the corresponding pitch vector. However, note segmentation itself may introduce errors, degrading recognition performance. A typical method of this type is edit distance on music note vectors.

Frame-based methods: The input and the database songs are compared as frame-based pitch vectors, where the pitch rate (or frame rate) can vary from 8 to 64 points per second. The major advantage of these methods is effectiveness (higher recognition accuracy), at the cost of more computation. Typical methods include linear scaling (LS) and dynamic time warping (DTW, type-1 and type-2).

Hybrid methods: The input is frame-based while the database songs are note-based. Typical methods include type-3 DTW and hidden Markov models (HMMs).
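The simplest frame-based method, linear scaling, can be sketched as follows: stretch or compress the query pitch vector by several tempo factors, compare each scaled version against the head of a database pitch vector, and keep the smallest distance. The scaling factors, the mean-shift for key invariance, and the mean-absolute-difference distance below are common choices for illustration, not the only possibilities:

```python
def resample(vec, new_len):
    """Linearly interpolate vec to new_len points."""
    if new_len == 1:
        return [vec[0]]
    step = (len(vec) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(vec) - 1)
        frac = pos - lo
        out.append(vec[lo] * (1 - frac) + vec[hi] * frac)
    return out

def linear_scaling_distance(query, candidate,
                            factors=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Compare query against the head of candidate under several tempo
    scalings; return the smallest mean absolute pitch difference.
    Both vectors are shifted to zero mean so that singing in a
    different key does not inflate the distance."""
    best = float("inf")
    for f in factors:
        n = min(round(len(query) * f), len(candidate))
        if n < 2:
            continue
        q = resample(query, n)
        c = candidate[:n]
        qm = sum(q) / n
        cm = sum(c) / n
        dist = sum(abs((qi - qm) - (ci - cm)) for qi, ci in zip(q, c)) / n
        best = min(best, dist)
    return best
```

Ranking every database song by this distance and returning the smallest values yields the candidate list shown to the user; DTW-based methods refine this by allowing non-uniform (rather than purely linear) tempo variation.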