George Tzanetakis

I am an assistant professor in Computer Science (also cross-listed in Music and
Electrical and Computer Engineering) at the University of Victoria in Canada. I
received my PhD in Computer Science from Princeton University under the
supervision of Dr. Perry Cook. I also worked at Carnegie Mellon University as a
Postdoctoral Fellow with Dr. Roger Dannenberg on query-by-humming systems and
audio-score alignment and with the Informedia group on multimodal video
retrieval and microphone arrays. I have also consulted with several companies
using Marsyas. They include
Moodlogic Inc. (audio fingerprinting),
All Music Inc., The Netherlands (music-speech classification),
and Teligence Communications Inc. (gender classification of voice recordings).
My research deals with all stages of audio content analysis such as feature
extraction, segmentation, classification, retrieval and source separation with
specific focus on Music Information Retrieval (MIR). I am also an active
musician and have studied saxophone performance, music theory and
composition.

Jakob Leben

I am a researcher whose main interests are models of computation and
programming languages for digital signal processing.

My contributions to Marsyas include:

Marsyas Script (scripting language)

Marsyas Inspector (graphical network inspector)

improved real-time audio processing

improved Open Sound Control support and real-time control in general

improved interaction with Qt

a dataflow testing and debugging system

I have also contributed significantly to SuperCollider (programming language
and system for sound synthesis and algorithmic composition) and other
open-source projects.

I am currently a PhD student in Computer Science at the University of Victoria
under the supervision of Prof. George Tzanetakis.
My current research focuses on the design of a new programming language
for DSP and on compiler optimization techniques.

Luis Gustavo Martins

Windows and Qt Guru (Portuguese Invasion 1)

I've been using and contributing to Marsyas since its 0.1 version, back in 2003,
when working on a contract project that required a real-time audio classifier
(for music/speech/noise/silence...). When I started my PhD in the area of
computational auditory scene analysis (CASA), I based all my research software
implementation and evaluation on Marsyas 0.2. Marsyas was a key factor in my
PhD work, being efficient and flexible enough to develop advanced sound
segregation algorithms that run close to real time. Contributing to Marsyas also
allowed me to interact and work with the whole Marsyas developer team, which
became one of the most important learning experiences of my research and
academic life.

Steven Ness

Coder

Steven is a mad scientist and coder who works on various aspects of Marsyas. He
added unit tests to Marsyas, rebuilt the website on a Ruby on Rails backend,
has written some MarSystems, and uses Marsyas in the development of rich
internet-enabled applications. He is currently doing his Ph.D. in the lab of
George Tzanetakis in the field of Music Information Retrieval.

Mathieu Lagrange

Author of the most complex MarSystem network

I am interested in auditory scene analysis for understanding music streams. For
this purpose, Marsyas is a valuable tool, as it provides the user with an
efficient, easy-to-install platform for processing various types of audio data.
As the owner of the Most Complex MarSystem Network Award, I can say that even on
challenging research tasks, Marsyas proved to be flexible and fun to use.

Stefaan Lippens

Coder and Code Styler

My first encounter with Marsyas was in 2002, when it was used for my master's
thesis at Ghent University on the topic of automatic musical genre
classification. After completing my electrical engineering studies, I started
working in the field of image processing and obtained my PhD on halftoning and
printing. Since 2008 I have been back in the field of audio and music
processing, now as a post-doctoral researcher at the Digital Speech and Signal
Processing research group of Ghent University. The research revolves around
music information retrieval and is carried out in collaboration with an outside
company. Together we focus on large-scale automatic extraction of several music
characteristics such as musical genre and rhythm style. My main Marsyas
activities and contributions are in the core MarSystems and the Python bindings.

Tiago F. Tavares

I'm a PhD candidate at the University of Campinas (UNICAMP) and will be
visiting UVic, here in Victoria, BC, for one year. I have been working on
automatic transcription of audio for some years now, and I hope to contribute
to the development of Marsyas as much as I can. I have written some
documentation, and let's see what patches I will write in the future!

Adam Tindale

Open Sound Control, Chuck, Percussion and Live Electroacoustic Music

Adam Tindale is an electronic drummer, teacher, and researcher. His research
combines signal processing and machine learning tools from Marsyas to classify
drum events in real time, with the goal of developing a more expressive
electronic drum. Adam is currently completing his Interdisciplinary Ph.D. in
Music, Computer Science, and Electrical Engineering under the supervision of
George Tzanetakis.

Thijs Koerselman

Software Developer and Designer

I'm a software developer and designer working with interactive media and sound.
I hold an MA and BSc in Music Technology. After graduating in 2004 I got
increasingly involved with programming. I have developed software for creative
applications, live performance systems and art installations. Currently I work
for the Utrecht School of Arts in the Netherlands, faculty of Art, Media and
Technology, where we employ Marsyas in a project focusing on flexible and
intelligent media repository software. Currently Marsyas is used for tasks such
as music/speech classification and similarity matching. All content processing
is done via a modular distributed pipeline framework, so additional algorithms
can be easily plugged in. Other parts of the project include video analysis,
data modeling and adaptive user interfaces.

Luis Teixeira

Video, Python, upcoming Marsyas-0.x (Portuguese Invasion 2)

I'm a PhD student at FEUP and a researcher at the Telecommunications and
Multimedia Unit of INESC Porto. Currently most of my time is consumed by my PhD
and by the strange experiments I'm doing with Marsyas, like trying to get video
to work in it, and who knows what more! As for the PhD, the focus is on the
detection of events and the automatic description of multi-sensor systems.
Previously I worked with MPEG-4 and MPEG-7 for a video editing framework during
my MSc, back in 2004. From the start of my collaboration with INESC Porto in
2001 until 2004, I worked on several research projects, mainly on distributed
multimedia systems. Multimodal analysis, fusion of information from multiple
types of sources, and distributed multimedia systems are my main research
interests. C/C++ and Python are the tools of the trade.

Fabien Gouyon

Zhang Bingjun (Eddy)

Coder

I am currently a PhD candidate under the supervision of Dr. Wang Ye in the
Department of Computer Science, School of Computing, National University of
Singapore. My research interests include music information retrieval, multimodal
data fusion, and machine learning. In the multimodal music information retrieval
project, we employed Marsyas to build a music analysis module. In addition, we
also modified parts of the Marsyas framework to extend its functionality and
robustness.

Miguel Lopes

My name is Miguel Lopes. I'm a final-year student at FEUP (Porto), and I've just
finished my Master's degree thesis on musical genre classification, developed
at INESC Porto (Fabien Gouyon was my thesis advisor). I used Marsyas to extract
features from audio files (using bextract) and to run several classification
experiments using Weka. My thesis consists of classification experiments on the
Latin Music Database (presented by Silla, Koerich and Kaestner). It compares the
performance of various Weka classifiers against Gaussian Mixture Models;
assesses how the classification results are influenced by the use of an artist
filter, the size of the datasets, and the testing method (cross-validation vs.
different percentage splits); and compares song-level with frame-level
classification. A detailed analysis of the LMD genres, and of how well each of
them is defined in the context of the LMD, was also made.
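The artist filter mentioned above amounts to a group-aware cross-validation
split: songs by the same artist must never appear on both sides of a train/test
split, otherwise artist-specific recording cues inflate the accuracy estimate.
This is only an illustrative sketch with synthetic data (the features, labels,
and artist ids are made up, and scikit-learn's GroupKFold is assumed rather
than the thesis code):

```python
# Illustrative sketch of an "artist filter" for genre classification
# evaluation: group-aware cross-validation so that no artist appears in
# both the training and the test fold. All data below is synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 4))                 # e.g. timbral features per song
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])        # genre labels
artists = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5])  # hypothetical ids

leaks = 0
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=artists):
    # Count artists that appear on both sides of the split.
    leaks += len(set(artists[train_idx]) & set(artists[test_idx]))

print(leaks)  # 0: the grouped split guarantees no artist overlap
```

The same idea applies whether the classifier is a Weka model or a Gaussian
Mixture Model; only the split construction changes.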

Ajay Kapur

Sensors and Robots

I have been using Marsyas to do audio feature extraction and machine learning
experiments in my research in computational ethnomusicology. I have also used
Marsyas in live electronic music performance, integrating multimodal sensor
interfaces with custom-built robotic systems.

Director of Music Technology, California Institute of the Arts
Professor in Sonic Arts, New Zealand School of Music

Mark Brand

I am a lecturer in music technology at the Nelson Mandela Metropolitan University
(South Africa), and currently working toward an MScEng from Stellenbosch
University under the supervision of Prof. Thomas Niesler (DSP/engineering) and
Mr. Theo Herbst (new music). I am investigating, within the MIR domain,
alternative music theory approaches in respect of non-western musics,
particularly those found in southern Africa. I have a strong bias against the
use of common music notation-based theory in this regard, and I'm thus
leveraging Marsyas (with much guidance from my supervisors) in a bid to unmask
an alternate theoretical framework. Before that I was a rock musician.

Gabrielle Odowichuk

I am pursuing a MASc under the supervision of George Tzanetakis and Peter
Driessen at the University of Victoria. My work is in the field of audio signal
processing, and I used Marsyas to process real-time audio signals for sound
localization using a microphone array. I've written my very own MarSystem to
perform cross-correlation, and will use Marsyas for many more projects in years
to come. Yay, Marsyas!
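As a rough illustration of the technique described above (not Gabrielle's
actual MarSystem), cross-correlating the signals from two microphones and
locating the peak lag yields the time delay of arrival (TDOA), the core
quantity in microphone-array sound localization. The signal, sample rate, and
delay below are synthetic:

```python
# Minimal NumPy sketch of TDOA estimation by cross-correlation.
# A broadband source reaches mic 2 a few samples later than mic 1;
# the peak of the cross-correlation recovers that delay.
import numpy as np

fs = 8000                        # sample rate in Hz (assumed)
true_delay = 25                  # delay of mic 2 relative to mic 1, in samples
rng = np.random.default_rng(1)
src = rng.normal(size=2048)      # synthetic broadband source signal
mic1 = src
mic2 = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# Full cross-correlation; the index of its maximum gives the relative lag.
xcorr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(xcorr)) - (len(mic1) - 1)

print(lag)  # 25: the delay in samples is recovered
```

Dividing the lag by the sample rate gives the delay in seconds; combined with
the microphone spacing and the speed of sound, this yields the direction of
arrival.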

Giovanni Donati

I'm an Electronics and Telecommunications Engineering student at the University
of Bologna in Italy. I'm writing my thesis on Automatic Genre Recognition and
Tagging for Music Social Networks. At the moment I also hold a scholarship and
am collaborating with a software company called PuzzleDev (www.puzzledev.com) to
develop a system called MX-Ray. It will basically be a signal-processing-based
feature extractor conceived for automatic web tagging applications, but the
final goal is to integrate it into different systems for different purposes. I'm
using, and will keep using, Marsyas for all the audio processing operations in
the prototype because I find it very useful and powerful.

Fabiano Fidancio

I'm a Brazilian software developer/free software enthusiast that found

Aaron Rush

I am a grade 12 student at a high school in Canada. I am interested in the
process of transcribing polyphonic music. For this purpose, Marsyas is a
valuable tool, as it already has built-in features that can be extended to
further advance research in this area.