One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal.

The process of decomposition performed by the filter bank is called analysis (meaning analysis of the signal in terms of its components in each sub-band); the output of analysis is referred to as a subband signal, with as many subbands as there are filters in the filter bank. The reconstruction process is called synthesis, meaning reconstitution of a complete signal from the processed subbands.

In digital signal processing, the term filter bank is also commonly applied to a bank of receivers. The difference is that receivers also down-convert the subbands to a low center frequency that can be re-sampled at a reduced rate. The same result can sometimes be achieved by undersampling the bandpass subbands.
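To make the receiver idea concrete, here is a minimal sketch, assuming NumPy; the function names `lowpass_fir` and `receiver_channel` are ours for illustration, not from any standard library. One receiver channel mixes its subband down to 0 Hz, lowpass-filters it to keep only that band, and then re-samples at a reduced rate:

```python
import numpy as np

def lowpass_fir(num_taps, fc):
    # Windowed-sinc lowpass design; fc is the cutoff as a fraction of the sample rate.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(num_taps)
    return h / h.sum()  # normalize to unit gain at DC

def receiver_channel(x, fs, center_hz, bandwidth_hz, decim):
    # Down-convert the subband to baseband, isolate it, then re-sample.
    t = np.arange(len(x)) / fs
    baseband = x * np.exp(-2j * np.pi * center_hz * t)  # shift subband to 0 Hz
    h = lowpass_fir(101, bandwidth_hz / fs)             # keep only this subband
    return np.convolve(baseband, h, mode="same")[::decim]

# A 1 kHz tone sampled at 16 kHz: the 1 kHz channel captures it at 1/8 the rate.
fs = 16000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * 1000 * t)
sub = receiver_channel(x, fs, center_hz=1000, bandwidth_hz=200, decim=8)
print(len(sub))  # 2000 complex samples instead of 16000 real ones
```

For a suitably placed band, a similar result could be obtained by undersampling the bandpass subband directly, without the explicit mixing step.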

Another application of filter banks is signal compression when some frequencies are more important than others. After decomposition, the important frequencies can be coded with a fine resolution. Small differences at these frequencies are significant and a coding scheme that preserves these differences must be used. On the other hand, less important frequencies do not have to be exact. A coarser coding scheme can be used, even though some of the finer (but less important) details will be lost in the coding.
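The bit-allocation idea can be illustrated with a sketch (assuming NumPy; the `quantize` helper is hypothetical, not a library function): an important subband is coded with many quantization levels, a less important one with few, and the resulting error is correspondingly larger in the coarse band.

```python
import numpy as np

def quantize(x, n_bits, x_max=1.0):
    # Uniform quantizer on [-x_max, x_max] with 2**n_bits levels.
    step = 2 * x_max / 2 ** n_bits
    return np.clip(np.round(x / step) * step, -x_max, x_max)

rng = np.random.default_rng(0)
important = 0.3 * rng.standard_normal(1000)  # perceptually important subband
minor = 0.3 * rng.standard_normal(1000)      # less important subband

fine = quantize(important, n_bits=8)  # fine resolution preserves small differences
coarse = quantize(minor, n_bits=3)    # coarse resolution: fewer bits, more error

err_fine = np.mean((important - fine) ** 2)
err_coarse = np.mean((minor - coarse) ** 2)
print(err_fine < err_coarse)  # True: the coarsely coded band loses finer detail
```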

The vocoder uses a filter bank to determine the amplitude information of the subbands of a modulator signal (such as a voice) and uses them to control the amplitude of the subbands of a carrier signal (such as the output of a guitar or synthesizer), thus imposing the dynamic characteristics of the modulator on the carrier.
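A channel vocoder along these lines can be sketched as follows, assuming NumPy; the band edges, tap counts, and helper names (`bandpass_fir`, `envelope`, `channel_vocoder`) are illustrative choices, not from the text. Each band's amplitude envelope is measured on the modulator and imposed on the matching band of the carrier:

```python
import numpy as np

def bandpass_fir(num_taps, lo, hi, fs):
    # Windowed-sinc bandpass: difference of two lowpass prototypes.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    def lp(fc):
        return 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(num_taps)
    return lp(hi) - lp(lo)

def envelope(x, fs, smooth_hz=30.0):
    # Amplitude envelope: rectify, then smooth with a moving average.
    win = max(1, int(fs / smooth_hz))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def channel_vocoder(modulator, carrier, fs, edges):
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        h = bandpass_fir(257, lo, hi, fs)
        mod_band = np.convolve(modulator, h, mode="same")
        car_band = np.convolve(carrier, h, mode="same")
        out += car_band * envelope(mod_band, fs)  # impose modulator dynamics
    return out

fs = 8000
t = np.arange(fs) / fs
modulator = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 440 * t)  # AM tone as a stand-in for voice
carrier = np.sign(np.sin(2 * np.pi * 220 * t))                        # buzzy square wave
y = channel_vocoder(modulator, carrier, fs, edges=[100, 400, 800, 1600, 3200])
```

The output follows the 3 Hz amplitude contour of the modulator while keeping the harmonic content of the carrier, which is exactly the "dynamics of the modulator on the carrier" effect described above.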

A Tatum[1] is the “lowest regular pulse train that a listener intuitively infers from the timing of perceived musical events: a time quantum. It is roughly equivalent to the time division that most highly coincides with note onsets”. It can be computed by using a histogram of inter-onset intervals.
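The histogram approach can be sketched as follows (assuming NumPy; `estimate_tatum` and its parameters are our illustrative choices, and real systems typically refine this rough mode-of-histogram estimate further):

```python
import numpy as np

def estimate_tatum(onsets_sec, bin_ms=10, max_ioi_ms=1000):
    # Estimate the tatum as the mode of the inter-onset-interval histogram.
    iois = np.diff(np.sort(onsets_sec)) * 1000.0        # intervals in ms
    iois = iois[(iois > 0) & (iois <= max_ioi_ms)]
    bins = np.arange(0, max_ioi_ms + bin_ms, bin_ms)
    hist, edges = np.histogram(iois, bins=bins)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2          # center of most common bin

# Onsets on a 125 ms grid (16th notes at 120 BPM), with some positions skipped.
onsets = np.array([0, 0.125, 0.25, 0.5, 0.625, 0.75, 1.0, 1.125, 1.375, 1.5])
print(estimate_tatum(onsets))  # ≈ 125 ms
```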

5. Conclusion and Future Work

We have proposed a downbeat tracking back-end system that uses recurrent neural networks (RNNs) to analyze a beat-synchronous feature stream.

With estimated beats as input, the system performs comparably to the state of the art, yielding a mean downbeat F-measure of 77.3% on a set of 1771 excerpts of Western music. With manually annotated beats, the score rises to 90.4%.

For future work, a good modular comparison of downbeat tracking approaches needs to be undertaken, possibly with collaboration between several researchers.

In particular, standardized dataset train/test splits need to be defined.

Second, we would like to train and test the model on non-Western music and on ‘odd’ time signatures, as is done in ‘Tracking the “odd”: Meter inference in a culturally diverse music corpus’.