Traditionally, information retrieval in music has been based on surrogates of music, i.e., bibliographic descriptions of music documents. These do not provide access to the essence of music, whether it is defined as the musical idea represented in the score, the gestures of the performer playing an instrument, or the resulting auditory phenomenon: the sound itself. In this paper we develop a retrieval model for music content: representations for music content and music queries, and a matching method for those representations. We show that the model has desirable properties for the retrieval of music content: it captures representative and memorable features of music in a simple representation, supports inexact retrieval, and ranks the retrieved music documents. The MUSIR retrieval model is based on filtering the MIDI representation of music and on n-gram matching.
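To make the core idea concrete, the following is a minimal illustrative sketch (not the MUSIR implementation, whose filtering and matching details are not given here): melodies extracted from MIDI are reduced to pitch-interval sequences, and documents are ranked by how many of the query's interval n-grams they contain. Intervals make the match transposition-invariant, and counting shared n-grams supports inexact matching and ranking. All function names and the toy data are hypothetical.

```python
def intervals(pitches):
    """Reduce a list of MIDI note numbers to successive pitch intervals."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def ngrams(seq, n=3):
    """All contiguous n-grams of a sequence, as hashable tuples."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def score(query_pitches, doc_pitches, n=3):
    """Number of query interval n-grams that also occur in the document."""
    doc_grams = set(ngrams(intervals(doc_pitches), n))
    return sum(1 for g in ngrams(intervals(query_pitches), n) if g in doc_grams)

def rank(query_pitches, docs, n=3):
    """Rank documents (name -> pitch list) by descending n-gram overlap."""
    return sorted(docs, key=lambda name: score(query_pitches, docs[name], n),
                  reverse=True)

# Toy collection: melodies as MIDI note-number lists (hypothetical data).
docs = {
    "frere_jacques": [60, 62, 64, 60, 60, 62, 64, 60, 64, 65, 67],
    "ode_to_joy":    [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64],
}
query = [67, 69, 71, 67]  # same contour as the first melody's opening, transposed
print(rank(query, docs))  # → ['frere_jacques', 'ode_to_joy']
```

Because the query is sung or played from memory, exact pitch matches are unlikely; interval n-grams of modest length trade off robustness to local errors against discriminative power, which is why n is left as a tunable parameter.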