In order to convey the manifold possibilities of surround sound design in the new MUMUTH concert hall to a broad audience, we present the development of an easy-to-use application that allows users to run and interact with several audio demonstrations from a mobile computing device. An extensible number of such demonstrations, using the large-scale and variable loudspeaker hemisphere in the hall, are controlled with a user interface optimized for touch-sensitive input. Strategies for improving operating safety, as well as the introduction of a client/server model using the programming language Pure Data, are discussed in the context of use with a 3D Ambisonics rendering server, both implemented on the Linux operating system.

This paper introduces an extension of Medusa, a distributed music environment, that allows easy use of network music communication in common music applications. Medusa was first developed as a JACK application and is now being ported to other audio APIs in an effort to make network music more accessible. The APIs chosen were LADSPA, LV2 and the Pure Data external interface. This new approach required a complete review of the tool's architecture. As a result of this development, the possibilities of using network music as a plugin in well-known environments are presented and discussed.

Luppp is a live performance tool that allows the playback of audio while applying effects to it in real time. It aims to make creating and modifying audio as easy as possible for an artist in a live situation.
At its core lies the JACK Audio Connection Kit for access to audio and MIDI; usable effects include LADSPA, LV2 and custom Luppp effects. There is a GUI built using Gtkmm, but users are encouraged to use a hardware controller for faster and more intuitive interaction.

This article discusses the use of Csound for live electronics. Embedding Csound as an audio engine in Pd is shown, as well as working in CsoundQt for live performance. Typical problems and possible solutions are demonstrated with examples, and the pros and cons of both environments are compared.

The Csound computer music synthesis system has grown from its roots in 1986 on desktop Unix systems to today's many different desktop and embedded operating systems. With the growing popularity of the Linux-based Android operating system, Csound has been ported to this vibrant mobile platform. This paper will discuss using the Csound for Android platform, use cases, and possible future explorations.

This article describes video integration in the Ardour3 Digital Audio Workstation to facilitate soundtrack creation and film post-production.
It aims to lay a foundation for users and developers, towards establishing a maintainable tool-set for using free software in A/V soundtrack production.
To that end, a client-server interface for communication between non-linear editing systems is specified.
The paper describes the reference implementation and documents the current state of video integration in Ardour3 as well as planned future development. The final sections present a user manual and setup/install information.

INScore is an open source framework for the design of interactive, augmented, live music scores. Augmented music scores are graphic spaces providing representation, composition and manipulation of heterogeneous and arbitrary music objects (music scores but also images, text, signals...), both in the graphic and time domains.
INScore also includes a dynamic system for representing the music performance, considered as a specific sound or gesture instance of the score and viewed as signals.
It integrates an event-based interaction mechanism that opens the door to original uses and designs, turning a score into a user interface or allowing a score to modify itself based on temporal events.
This paper presents the system's features and underlying formalisms, and introduces the OSC-based scripting language.

In recent years we have experienced a proliferation of laptop orchestras and ensembles. The Linux Laptop Orchestra, or L2Ork, founded in the spring of 2009, introduces exclusive reliance on open-source software to this novel genre. Its hardware design also keeps cost overhead minimal. Most notably, L2Ork provides a homogeneous software and hardware environment with a focus on usability and transparency. In the following paper we present an overview of L2Ork's infrastructure and the lessons learned through its design and implementation.

Workshops & Events

Invisible Suns (Marco Donnarumma, 2010-2011) is an autonomous system that performs a permanent analysis of the historical stock prices of a variable selection of major corporations, compresses over 10 years of economic transactions into a few minutes, and produces a generative, self-organizing audiovisual datascape every 24 hours.
The work does not focus on traditional data visualization, but rather aims at exploring how these data -- and their implied meaning -- can be perceptually and emotionally experienced.
Every day since 1 August 2010 the system has retrieved up-to-date stock prices of the selected companies from the Internet and added new values to its set of databases. The oldest figures date back to January 2002, while the newest are being collected today. At the moment the system is analyzing historical stock prices of six companies that boast the highest market capitalization in the defense and oil industries: BAE Systems, Lockheed Martin Corporation, Exxon Mobil Corporation, Royal Dutch Shell, Chevron Corporation and General Dynamics Corporation.
The data are processed in real time to generate a panoramic, synaesthetic scape that conveys an auditory sensation of the expansion and fall of companies' shares, as well as the overall movement of the trading market.
The system also performs a cross-comparison of datasets in order to identify peaks and lows in the overall trading activity and to highlight them using sound spatialization and light movements in the 3D environment.
The duration of the work's audiovisual output is constantly growing: as the figures increase every day, the length of the piece increases too.

TraxPong is a tape-music (mediatic) composition based on J.C. Risset's rhythmic paradoxes, applied to speech signals so that their effect is implicit. The source signals come from previous works, Las Meninas (1991) and TxRx Pong (2007). "Trax" means transmitting a source signal, while "pong" is receiving feedback from it; both terms are used in radio transmission and appropriated in radio art. The effect of each rhythmic paradox is a continuous crescendo or decrescendo, used to achieve tension contrasts throughout the piece. Other sounds in this piece are further processed using spectral modeling, as well as known delay-line techniques. Spatial manipulation is achieved by distance-changing trajectories of sound sources over Lissajous graphic schemes. This piece was composed using Bill Schottstaedt's CLM and Snd, with Michael McNabb's reverb and Juan Pampin's ATS, on a PlanetCCRMA Linux workstation.

"FT001" is a stereo version of a multichannel recording made with SuperCollider 3 on Puredyne and edited in Audacity, with the aim of exploring a possible synergy between control and decontrol: merging generative strategies with improvised real-time decisions, and creating dense, abstract sound gestures influenced by the communication patterns of insects and birds. The resulting textures serve as background material for live-coded responses or further manipulations and edits.

Music - Oded Ben-Tal; Video - Rees Archibald; Performance - Caroline Wilkins;
The title is a play on the affinity, in Hebrew, between the word for sound (Tslil) and shadow (Tsel). If the word existed in the language it might mean 'sonorities of shadows'. The piece emerges out of Zaum: beyond mind – an ongoing collaboration between composer/performers Caroline Wilkins and Oded Ben-Tal. Zaum is a sound theatre piece particularly interested in the notions of embodied musical performance as it relates to the digital nature of much of the sonic material and the changing relationship between the different types of presence. A chance encounter with Rees in Caen led us to try and extend our collaboration.

At an unknown time,
in an undated year,
a no further defined species encounters an undiscovered planet

The piece Caladan is inspired by the science-fiction novel 'Dune' by Frank Herbert. The piece is divided into three movements that take the audience on a fictional journey to a foreign planet.
The first movement describes the landing in the new world. Different sounds from crickets, doves and water (in the novel 'Dune', Caladan is the water planet) are combined with synthetic sounds generated through frequency modulation.
The second movement deals with the exploration of the unknown planet. High tension is created, and the water sounds become very prominent in this part.
In the third movement, different kinds of beings encounter one another, which is described musically through question-and-answer gestures. The different sounds of the gestures seem to learn from each other and merge into a unified soundworld.

Composition in Loops #1 presents a performance interface with a tight integration between audio and visual elements created using Pd and GEM. This allows the performer to fluidly compose and perform within the audio-visual realm, and attempts to eliminate any disparity between the two elements. The included recording is a screencast of the live performance.
This is a piece of accidents, glitches, and mistakes. As in all art, this piece is a product of the feedback loop that occurs between the artist and their chosen medium. Sometimes, we are reduced to observing the medium's behavior and trying to intervene.

Day 2 - Friday the 13th

Main Track

The Natural Speech Technology (NST) project is the UK's flagship research programme for speech recognition research in natural environments. NST is a collaboration between Edinburgh, Cambridge and Sheffield Universities; public sector institutions the BBC, NHS and GCHQ; and companies including Nuance, EADS, Cisco and Toshiba. In contrast to the assumptions made by most current commercial speech recognisers, natural environments include situations such as multi-participant meetings, where participants may talk over one another, move around the meeting room, and make non-speech vocalisations, all in the presence of noise from office equipment and external sources such as traffic and people outside the room. To generate data for such cases, we have set up a meeting room / recording studio equipped to record 16 channels of audio from real-life meetings, as well as a large computing cluster for audio analysis. These systems run on free, Linux-based software, and this paper gives details of their implementation as a case study for other users considering Linux audio for similar large projects.

A software toolbox developed for a concert in which acoustic instruments are amplified and spatially processed in low-delay real-time is described in this article. The spatial image is created in Ambisonics format by a set of dynamic acoustic scene generators which can create periodic spatial trajectories. Parameterization of the trajectories can be selected by a body tracking interface optimized for seated musicians. Application of this toolbox in the field of hearing research is discussed.

Implementation of Ambisonic reproduction systems is limited by the number and placement of the loudspeakers. In practice, real-world systems tend to have insufficient loudspeaker coverage above and below the listening position. Because the localization experienced by the listener is a nonlinear function of the loudspeaker signals it is difficult to derive suitable decoders analytically. As an alternative it is possible to derive decoders via a search process in which analytic estimators of the localization quality are evaluated at each search position. We discuss the issues involved and describe a set of tools for generating optimized decoder solutions for irregular loudspeaker arrays and demonstrate those tools with practical examples.
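
One of the standard analytic estimators for such a search is Gerzon's velocity localisation vector rV: the gain-weighted mean of the unit vectors towards the loudspeakers. A minimal horizontal-only Python sketch of the idea (an illustration, not the toolset described in the paper):

```python
import math

def velocity_vector(gains, azimuths_deg):
    """Gerzon velocity vector rV for one test direction:
    the gain-weighted mean of the unit vectors towards each speaker."""
    pressure = sum(gains)
    vx = sum(g * math.cos(math.radians(a)) for g, a in zip(gains, azimuths_deg))
    vy = sum(g * math.sin(math.radians(a)) for g, a in zip(gains, azimuths_deg))
    return vx / pressure, vy / pressure

# A search-based optimizer would score candidate decoders by how close
# |rV| is to 1 and how well its direction matches the intended source.
```

For a single active loudspeaker the vector points exactly at it with magnitude 1; spreading a source across several speakers shortens the vector, which is one way such estimators penalise poor localisation.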

2pm JunctionBox for Android: An Interaction Toolkit for Android-based Mobile Devices

JunctionBox is an interaction toolkit specifically designed for building sound control interfaces. The toolkit allows developers to build interfaces for Android mobile devices, including phones and tablets. Those devices can then be used to remotely control any sound engine via OSC messaging. While the toolkit makes many aspects of interface development easy, it is designed to offer considerable power to developers looking to build novel interfaces.
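
OSC itself is a simple binary format: a null-padded address string, a type-tag string, then big-endian arguments. A hypothetical Python sketch of packing a float-only message (this shows the wire format, not JunctionBox's API):

```python
import struct

def osc_message(address, *args):
    """Pack an OSC message with float32 arguments.

    OSC strings are null-terminated and padded to a 4-byte boundary;
    float arguments are big-endian IEEE 754.
    """
    def padded(s):
        b = s.encode("ascii")
        return b + b"\x00" * (4 - len(b) % 4)

    data = padded(address) + padded("," + "f" * len(args))
    for a in args:
        data += struct.pack(">f", float(a))
    return data

# Such a packet would typically be sent over UDP, e.g. with
# socket.sendto(osc_message("/fader/1", 0.5), (host, port)).
```

The address and argument names here are made up for illustration; any OSC-aware sound engine would accept a packet built this way.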

2:45 An Introduction to the Synth-A-Modeler Compiler: For Modular and Open-Source Sound Synthesis using Physical Models

The tool is not a synthesizer - it is a Synth-A-Modeler! This paper introduces the Synth-A-Modeler compiler, which enables artists to synthesize binary DSP modules according to mechanical analog model specifications. This open-source tool promotes modular design and ease of use. Leveraging the Faust DSP programming environment, an output Pd, Max/MSP, SuperCollider, VST, LADSPA, or other external module is created, allowing the artist to "hear" the sound of the physical model in real time using an audio host application.

3:45 pd-faust: An integrated environment for running Faust objects in Pd

This paper introduces pd-faust, a library for running signal processing modules written in Grame's functional DSP programming language Faust in Miller Puckette's graphical computer music environment Pure Data, a.k.a. Pd. pd-faust is based on the author's faust2pd script, which generates Pd GUIs from Faust programs and provides the necessary infrastructure for running Faust DSPs in Pd. pd-faust combines this functionality with its own Faust plugin loader, which makes it possible to reload Faust DSPs while a patch is running. It also adds automatic configuration of MIDI and OSC controller assignments, as well as OSC-based automation features.

The Faust Online Compiler is a PHP/JavaScript-based web application that provides a cross-platform and cross-processor programming environment for the Faust language. It allows most Faust features to be used directly in a web browser, and it integrates an editable catalog of examples, making it a platform for easily sharing and using Faust objects.

In this paper we introduce a novel interface for mobile devices enabling multi-touch interaction with sound modules generated with FAUST (Functional Audio Stream) and run in SuperCollider. The interface allows a streamlined experience for experimentation and exploration of sound design.

Workshops & Events

AVSynthesis is an environment for creating complex audio/visual works. The software is based on the combined powers of OpenGL and Csound. The audio/visual lobes can be decoupled, and the program can be used as a limited but powerful front-end for composing with Csound.
This workshop will explore some of the Csound-specific resources of AVSynthesis.

In 2010, the author had the privilege to capture a performance of Rebecca Saunders' intricately spatial composition Chroma XII in fully periphonic third-order Ambisonics. The production grew to considerable complexity and provides an excellent showcase for a large-scale Ambisonic production using free software. This workshop discusses the artistic motivation (or even necessity) of using a with-height recording method for the work at hand.
After a short description of the composition, its instrumentation and the performance space, the microphone and mixing techniques are discussed in detail, including hardware and software toolchains, the postproduction workflow, and lessons learned from subsequent replays on various systems.
This is a follow-up to a workshop presented at LAC 2010 in Utrecht.

G.R.E. (Graduate Rhythmic Examination) is a piece that involves a computer-administered "test" in the form of a series of short rhythmic exercises that the percussionist plays. After each exercise, the computer "grades" the performer on her ability to play the example correctly and then creates a new piece of music for her to play. As the test progresses, the level of difficulty of new questions is gauged by the performance ability of the percussionist.

Warscape Sonata is a sound installation that streams sonic information related to the current drug war in Mexico. RSS news channels, microblogging hashtags, and viral videos are used as sources for an electronic registry of the historic moment of militarized Mexico. The information obtained from these sources is manipulated by software to extract sound archives, which are then used to create a noise musical structure that places aesthetic emphasis on the media aspect of the war. Warscape Sonata also highlights the way in which Mexico's civilian population experiments with information technologies to confront propaganda, social control and fear. This is an on-site project for CCRMA's Listening Room: it will randomly stream 3 different playlists from around, above and below to create a 22-channel sonic experience.

"Densité" was written in the audio software languages SuperCollider and Paul Koonce's PVC. "Densité" documents the interactions between the density of the samples being selected and the dimensions of the space in which they are realized. Depending on particular sets of heuristics, different exponential models and soundscape audio files determine percussion sample playback parameters, which are, in turn, recorded. These audio segments are then convolved with varying types of impulse responses, resulting in different sonic spaces. "Densité" focuses on subverting the inherent sonic qualities of percussion instruments through temporal sequencing and their individual placement within particular spaces.

I made this piece in 2001 using Csound and 2nd-order Ambisonics. It has now been re-rendered for 3rd order, and if performed, it will be the very first time the piece is diffused as intended: in full 3D (periphonic), with high spatial resolution thanks to 3rd-order Ambisonics and a marvellous 22 speakers.

Terra Incognita is a journey to an "otherworldly" sonic landscape. It provides a setting that may evoke images of something real or fantastic, bordering on the narrative. The work is composed using the Ambisonics technique in order to create a three-dimensional sound sphere in which sounds can move about in any direction, and where full surround-sound environments can be set up. The wide dynamics and the variations in range and speed of movement of the different sound materials in this piece indicate ever-changing situations, sometimes expected, other times not.

Sol Aur is an exploration of FM synthesis. It also serves as a vehicle for the Orrerator control interface.
The Orrerator is a software controller built for Android tablets. Sound for Sol Aur is generated by four FM oscillators in Pd, with each oscillator detuned from a base frequency. By changing the tuning, the index of modulation, and the modulation frequency, many combinations of FM sounds can be created and shifted over the course of the piece.
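
The basic two-operator FM recipe described above can be written in one line: a carrier whose phase is modulated by a second sinusoid, with the index of modulation scaling the sideband energy. A hedged Python sketch (parameter names are illustrative; the actual Pd patch is not shown in the abstract):

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    """One sample of simple two-operator FM:
    sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    With index = 0 this degenerates to a pure sine at the carrier."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# Detuning each of four such oscillators from a shared base frequency,
# as the piece does, produces slow beating between their spectra.
```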

If the trombone were a city, how would it sound?
A piece made of trombone sounds transformed into concrete and atmospheric city-like sounds: a soundwalk through an imaginary city that also tells of the inaudible and emotional events that have taken place, or will take place, within it.
The trombone sounds were taken from, and worked out with, the trombonist Rick Peperkamp (NL).
All sounds come from an experimental session with Rick Peperkamp. Afterwards Strothmann transformed the recorded sounds (including some snippets of speech) in the following way:
she wrote a personal algorithm in *Scheme* with which she could determine the harmonic and rhythmic microstructure of the sounds that were to be processed with *Csound*. The Scheme code generated the Csound scores for the two Csound instruments used for the sound processing within "Rick's Trombone".

With-height reproduction is a hot marketing item in surround sound. This paper examines the (sometimes non-obvious) motivations behind it and discusses the abilities and shortcomings of different methods with respect to the perceptual mechanisms of height localisation.

The Mamba Digital Snakes are commercial products created by Network Sound that are used in pairs to replace costly analog cable snakes with a single Ethernet cable. A pair of boxes can send and receive up to 64 channels at a 48 kHz sampling rate, packed as 24-bit samples. This paper describes the evolution of jack-mamba, a small JACK client that can send and receive UDP packets to/from the box through a network interface and essentially turns it into a high-channel-count soundcard.
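
A quick back-of-the-envelope check (audio payload only, ignoring UDP/IP framing overhead) shows why the stated channel count fits on a single 100 Mbit Ethernet link:

```python
channels = 64
sample_rate = 48_000      # Hz
bytes_per_sample = 3      # 24-bit samples, packed, per the abstract

payload_bytes_per_s = channels * sample_rate * bytes_per_sample   # 9,216,000 B/s
payload_mbit_per_s = payload_bytes_per_s * 8 / 1e6                # 73.728 Mbit/s
```

At roughly 74 Mbit/s per direction of raw audio, a full-duplex 100 Mbit link has headroom for the protocol overhead the real device adds.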

Combining audio components that use incoherent sample clocks requires adaptive resampling - the exact ratio of the sample frequencies is not known a priori and may also drift slowly over time. This situation arises when using two audio cards that don't have a common word clock, or when exchanging audio signals over a network. Controlling the resampling algorithm in software can be difficult as the available information (e.g. timestamps on blocks of audio samples) is usually inexact and very noisy. This paper analyses the problem and presents a possible solution.
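
The usual remedy for such noisy measurements is a low-bandwidth feedback loop that smooths the observed ratio while still tracking slow drift. A simplified Python sketch of a proportional-integral loop filter (an illustration of the general idea, not the algorithm presented in the paper):

```python
def track_ratio(measurements, kp=0.05, ki=0.001):
    """Smooth noisy per-block resample-ratio measurements with a
    proportional-integral loop filter.  The integral term lets the
    estimate follow slow clock drift with zero steady-state error."""
    estimate, integrator = 1.0, 0.0
    history = []
    for m in measurements:
        error = m - estimate
        integrator += ki * error          # remembers accumulated drift
        estimate += kp * error + integrator
        history.append(estimate)
    return history

# Feeding the smoothed estimate (instead of the raw measurements) to the
# resampler avoids audible jitter in the output sample clock.
```

The gains kp and ki are illustrative; in practice they set the loop bandwidth, trading noise rejection against how quickly drift is followed.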

Signal-processing tools written in the FAUST language are described. Developments in FAUST libraries, oscillator.lib, filter.lib, and effect.lib since LAC-2008 are summarized. A good collection of sinusoidal oscillators is included, as well as a large variety of digital filter structures, including means for specifying digital filters using analog coefficients (on the other side of a bilinear transform). Facilities for filter-bank design are described, including optional delay equalization for phase alignment in the filter-bank sum.
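
As an example of what "analog coefficients on the other side of a bilinear transform" means, here is a hypothetical Python sketch converting the analog one-pole lowpass H(s) = wc/(s + wc) into digital coefficients (frequency pre-warping omitted for brevity; this is not code from the Faust libraries):

```python
import math

def one_pole_lowpass(fc_hz, fs_hz):
    """Bilinear transform s -> 2*fs*(1 - z^-1)/(1 + z^-1) applied to
    the analog prototype H(s) = wc/(s + wc).

    Returns ((b0, b1), a1) for y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1].
    """
    wc = 2 * math.pi * fc_hz
    k = 2 * fs_hz
    b0 = wc / (k + wc)
    b1 = b0
    a1 = (wc - k) / (k + wc)
    return (b0, b1), a1
```

The DC gain (b0 + b1)/(1 + a1) comes out exactly 1 and the response at Nyquist is exactly 0, as expected for this prototype.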

4:30 The Integration of the PCSlib PD library in a Touch-Sensitive Interface with Musical Application

This paper describes the study and use of the PCSlib library for Pure Data and its implementation in the project "Interface Design for the development of a touch screen with musical application". The project consists of a touch-sensitive interface that allows the drawing of musical gestures, which are then mapped to a harmonic structure generated by the PCSlib library. Pure Data is also responsible for translating and reproducing the musical gestures via MIDI.

5:15 Rite of the Earth -- composition with frequency-based harmony and ambisonic space projection

A multi-part composition called Rite of the Earth will be presented, utilizing sounds of ceramic instruments built during the Academy of the Sounds of the Earth, a multidisciplinary artistic project held in the Artistic Department of the Silesian University in Katowice (Poland). Most of the compositional process was done in SuperCollider, with mixing and spatialization in Ardour, using tools by Fons Adriaensen.

Digital RoundO #1 is a homage to music pioneers of the past: a reflection that tries not to be nostalgic but hopefully provides an insight into the present and future of music. I have been looking at the remote past of the Baroque masters in the piece's title, form and eloquent gestures, and at the more recent past of the electronic music pioneers in its sound synthesis techniques, the craft of assembling them, and the creative use of effects. The use of minimal raw audio materials went in the same direction. Finally, Digital RoundO #1 is a 'dance' that aims to provide listeners with an intriguing sonic experience.
Digital RoundO #1 was entirely composed on Linux. The raw materials are essentially additive synthesis produced in two ways: 1. an additive synthesiser I created in Pure Data, driven by Markov chains; 2. analysis and resynthesis of bitmap images through the ARSS software written by Michel Rouzic. After creating an initial 'palette' of interesting raw sounds I solely used Ardour as my workbench, with chisel-work montage and layering, working on panning and enveloping to shape the overall form of the piece. A stereo reverb effect (by Fons Adriaensen, LADSPA version) was used in a very creative, unorthodox way through continuous variation of the reverb tail parameter.

Birches was composed as a response to the poem of the same title by the great American poet, Robert Frost. My intent was not to "set" the poem, but rather to explore its inner workings -- to re-imagine its parentheticals, present in Frost's vicarious vision of a boy, a scene of birches, and the truth of the matter versus the `truth' as revealed in the confession of an old man looking back.
Birches is dedicated to my father and was composed for violist John Graham.
Open Source tools:
The electronic part for Birches is derived entirely from Mr. Graham's own instrument, an Amati viola with a rather historic lineage. Software used to construct the electronic part includes Csound, Score11 (a Csound event preprocessor), Xavier Serra's SMS (Spectral Modeling Synthesis, on an SGI O2 workstation), Paul Koonce's PVC, the Ardour DAW, and several open source sound file editors including Wavesurfer, Snd, and DAP.
Performance:
Live performance is enabled by a Pd patch. Playback is from stereo master files, which can be diffused as multichannel.

Rite of the Earth is a series of pieces utilizing sounds of ceramic instruments built during the Academy of the Sounds of the Earth, a multidisciplinary artistic project held in the Artistic Department of Silesian University in Katowice (Poland). Most of the compositional process and sound synthesis was carried out in SuperCollider, mixing and spatializing in Ardour, with the use of tools by Fons Adriaensen. In this sixth part, there's an orchestra of bowed bowls, flutes, ocarinas, shakers and drums, all brought to life by various computer music techniques.

Description of "Vocal Etude"
"Vocal Etude", composed by Nicola Monopoli, is an etude on the voice, which is probably the best instrument in the world. The voice could be the voice of a child, the voice of a girl, the voice of the people we hear every day, or even the inner voice. This etude is a "Ricercare" on the voice.

Morales composed the computer sequences and processing effects after his experiences recording music of the Huave natives of Oaxaca, Mexico. His work at The Banff Centre in March 2009 included this performance with Chris Chafe.

10pm Debb and Duff play the music you ate your first crawdad by - Concert

A normal-looking country music duo, dressed a la Nashville and brandishing a guitar, an autoharp, and a wooden cooking spoon, perform new, sometimes ironic versions of old-timey tunes. What comes out of the speakers is only partly identifiable as the instruments and voices of the musicians. Their 20-minute set combines standards (Dolly Parton; Johnny Cash) with more obscure fare, all transformed using experimental signal processing algorithms implemented in Pd on Linux.

A live performance of Luppp & Harry on stage, with some loops loaded from disk, some live parameter twiddling, and some live noises being made and looped.
No particular "routine" will be practiced; it will be a live improvisation on the day, with some pre-made loops & melodies.

CIA-X is a new band from the APO33 collective. This time, the two representatives of the crew for the Linux Sound Night are Romain Papion aka Cambia and Julien Ottavi aka Jokiller.
This project is an evolution of underground hip hop with influences from experimental music. All the original instrumentals are created with libre software such as Ardour, Pure Data and LMMS, and are led by the voices of the MCs, mixing French and English lyrics, stories and abstract poetry.

Compositions in Loops nos. 1 and 2 present a performance interface with a tight integration between audio and visual elements created using the open-source software Pure Data and GEM. This allows the performer to fluidly compose and perform within the audio-visual realm, and attempts to eliminate any disparity between the two elements.
These are pieces born of accidents, glitches, and mistakes. As in all art, these works are a product of the feedback loop that occurs between the artist and their chosen medium. Sometimes, we are reduced to observing the medium's behavior and trying to intervene.

This was a last-minute set put together for the Linux Sound Night, as another performer dropped out. The first set of music is all from a video game that was never made. The tunes were composed with Nitrotracker on the Nintendo DS, a nice example of free software opening up closed hardware and making it do something that was never intended. The last two pieces are granular explorations of the same Beach Boys song, one with lyrics written specifically for this performance.

Day 4 - Sunday, April 15

Main Track

Understanding the construction and implementation of sound cards (as examples of digital audio hardware) can be a demanding task, requiring insight into both hardware and software issues. An important step towards this goal is understanding audio drivers and how they fit into the flow of execution of software instructions in the entire operating system.
The contribution of this project is in providing sample open-source code, and an online tutorial [1], for a mono, capture-only audio driver which is completely virtual and, as such, does not require any soundcard hardware. It may thus represent the simplest form of an audio driver under ALSA available for introductory study, which can hopefully assist with a gradual, systematic understanding of the architecture of ALSA drivers and of audio drivers in general.
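
At the heart of such a virtual driver is plain pointer bookkeeping: a timer callback stands in for the missing hardware interrupt, advancing the hardware pointer by one period per tick and wrapping it at the ring-buffer boundary. A toy Python model of that arithmetic (names are illustrative; the real driver is C code against the ALSA kernel API):

```python
class VirtualCaptureModel:
    """Toy model of a timer-driven ALSA capture substream.

    Real drivers do this in a kernel timer callback and then call
    snd_pcm_period_elapsed(); here we only model the pointer arithmetic.
    """

    def __init__(self, buffer_size, period_size):
        assert buffer_size % period_size == 0
        self.buffer_size = buffer_size    # ring buffer length in frames
        self.period_size = period_size    # frames per simulated interrupt
        self.hw_ptr = 0                   # hardware position in the ring
        self.periods_elapsed = 0

    def timer_tick(self):
        # A virtual device "captures" a period of silence per tick;
        # only the pointer advance matters to the audio core.
        self.hw_ptr = (self.hw_ptr + self.period_size) % self.buffer_size
        self.periods_elapsed += 1
```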

An open source C++ framework is introduced which facilitates the implementation of multithreaded realtime audio applications, especially ones with many input and output channels. Block-based audio processing is used.
The framework is platform-independent and different low-level audio backends can be used for both realtime and non-realtime operation. Support for further backends can be added easily.
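
Block-based processing means the engine hands each node a fixed-size buffer per callback rather than single samples. A language-agnostic Python sketch of the pattern (a model of the style, not the framework's actual C++ API):

```python
def run_blocks(samples, block_size, node):
    """Drive a processing `node` (a callable on one block) across an
    input signal in fixed-size blocks, as a realtime callback would."""
    output = []
    for start in range(0, len(samples) - block_size + 1, block_size):
        block = samples[start:start + block_size]
        output.extend(node(block))
    return output

# Example node: a gain stage.  Per-block work amortises per-call
# overhead, which is why realtime engines prefer blocks over
# per-sample dispatch.
gain = lambda block: [0.5 * x for x in block]
```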

This paper describes the implementation of diverse layers of code to enable an open-source embedded hardware device to run audio-processing Pure Data patches. The evolution of this implementation is reviewed, describing the different approaches taken by the authors in order to find optimal software and hardware settings. Although some problems are detected when running specific patches, the system has a combination of features that are relevant to the FLOSS audio community.

Workshops & Events

This workshop will demonstrate the use of Csound for live situations in Pd and CsoundQt. It will show the features from my paper (Thursday) in more detail, as well as discussing problems and helping the participants get things running.

Daily Events / Exhibitions

Installations & Listening sessions

The following pieces are presented as installations or as part of a loop playlist in the listening room. They are accessible on each day of the conference during opening hours, except for the "Listening Room", which will be closed Friday afternoon from 3:45pm to 4:30pm for a workshop.

This electroacoustic work, entirely generated by sound synthesis with Csound*, is based on an original system that emphasizes the harmonics of sound. A sound wave with a deep metallic sheen slowly unfolds its plot and gradually undergoes variations, subtle or radical. Sometimes volcanic or stormy, the sonic material forged by Vulcan here evokes the heat, power and stability of the Earth's bowels.
* V. 5.14 compiled for Fedora 14, and played with the PlanetCCRMA RT kernel.

"Structures" is the result of a compositional research project exploring the creation of complex structures in sound, which are developed into a large number of variations. The goal is to improve the process of sound creation for acousmatic works. This research project is a major part of a personal study in electroacoustic composition at the Utrecht School of the Arts, the Netherlands.

A series of sound compositions exploring, in the field itself, the relationship between recorded music and concert halls. Inspired by the way visual artists generally create their works inside their studios and then exhibit them, I compose one work with these features per year. I then present it at various festivals and competitions, where the work completes itself without my physical presence. This series of compositions could thus be understood as the continuous evolution of a simple idea: to listen.
"Hacia la Expansión Auditiva Constante" (6' 29")
Inspired by the micro-universe of sounds with which we live all the time, "hacia la expansión auditiva constante" offers an approach to sound generated by electricity itself.
Based on the "error" of the console itself, connecting cables to input and output without adding any external instrument, all the variables are born that are then processed by several different effects and compositing techniques to shape the final work.
A profound listening away from anxiety, a search for total dispossession, a path towards the essence of sound itself... can we then enjoy the music that surrounds us all the time?

Subvocalization, or silent speech, is defined as the internal speech made when reading a word, thus allowing the reader to imagine the sound of the word as it is read. It is possible to associate subvocalization with moving one's lips, but most subvocalization is undetectable, even by the person doing the subvocalizing.
Subvocalization usually occurs during the stream of consciousness.
According to Chomsky's theories about a `universal grammar', it is possible to say that there are things that have the same meaning for every human. These things come before the word, just like subvocalization.
When I have shown this work at various concerts, people usually received the same message, the same impression, the same meaning from this quite `abstract' multimedia piece; perhaps it is connected to a universal idea that comes before the words.
The piece is divided into two movements.
Andante Sostenuto is `the road to the stream of consciousness'.
Con Fuoco -- Presto Agitato is the second movement. Con Fuoco is the inner monologue during the stream of consciousness and the final part, Presto Agitato, is the Epiphany, a moment of spiritual revelation.
This piece deals with a very primitive, pre-speech, level.

Synesthesia is an ability of the brain to melt the senses together. Kandinsky, who could hear music in colours, and Nabokov, for whom letters would conjure up colours, were synesthetes. Synesthesia does not apply to the human brain only: as a symbol system, the computer can produce synæsthetic experiences. The synæsizer is an artificial (multimodal and bidirectional) synesthete, its senses having been melted together by the use of data-bending. Its video system is directly plugged into the audio and vice versa. If we compare it to a human being, it can hear with its eyes and see with its ears. But this quality of being a synesthete is only the consequence of its primary function, which is to generate a synesthetic experience in its user.

This composition is a non-real-time, entirely electronic piece (ca. 17.5 minutes) to be played without interruption between its four parts.
It was entirely created using the GNU Csound program (by Barry Vercoe et al., MIT). The electronic sounds are produced by means of several synthesis and processing techniques. The only exceptions are the voices which recite the poems and several minor audio files used as sources to be processed by Csound. All the Csound code used (in its unified format, namely csd) is provided together with the mentioned audio files, plus a Linux script (or a batch file, for Windows users) which executes them (using the Csound command line) to build the work entirely from scratch. Needless to say, the user must have an up-to-date Csound version properly installed on his system. The last version of the work was successfully rendered using Csound version 5.13 (double samples), Feb 11 2011.
This work, originally for 3D surround sound and lights, is the first part of a larger one that was commissioned by the National Secretary of Culture of Argentina and conceived to be performed in the reading room of the old National Library.
On the one hand, the work presents Argentine poetry in its environment. On the other, all the selected poems deal with the night from different perspectives, such as sunset, insomnia, wakefulness, dream, nightmares, and dawn.
The title (clair-obscures) sets a double reference: on the one hand, the zones between the light and its absence which make the form arise (the chiaroscuro technique of the Italian painters led by Caravaggio); on the other, the multiple instances arising between the reference, the form and the sonority, which are the essence of poetry and which electro-acoustic music may recreate with an intensity and precision without precedent in sonic art.
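As a rough illustration of the render-from-scratch workflow described above, a driver script might collect the provided .csd files and invoke the Csound command line on each. The directory layout and helper function below are hypothetical, not taken from the piece's actual distribution; only the `csound -o <output> <file.csd>` invocation itself is standard:

```python
import subprocess
from pathlib import Path

def render_all(csd_dir, out_dir):
    """Build one csound command per .csd file found in csd_dir."""
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    cmds = []
    for csd in sorted(Path(csd_dir).glob("*.csd")):
        wav = out_dir / (csd.stem + ".wav")
        # csound reads the unified .csd and writes the rendered file via -o
        cmds.append(["csound", "-o", str(wav), str(csd)])
    return cmds

# To actually render (requires an installed Csound, e.g. 5.13 double samples):
# for cmd in render_all("csd", "rendered"):
#     subprocess.run(cmd, check=True)
```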

"Diptiq"
for computer playback
"Diptiq" is a documented solo improvisation with instruments written in the computer music language SuperCollider. The audio output signal of a variety of synthesizers is directed to specific audio busses to be read and shared by selected synthesizers. While these selections are unique to the human performer's musical aesthetic, the fluctuating frequency and amplitude values influence synthesis parameters, creating sonic instability. "Diptiq" explores how the unpredictability of common audio signals combined with human intention can yield moments of chaos and cohesion.

Aphelion is the combination of the first two tracks from the Sonnamble album 'Blindlight' which was released on Forwind in July 2011.
Aphelion (a word meaning the point in a planet's orbit where it is furthest from the sun) showcases Sonnamble's simultaneously dissonant and consonant approach to improvised minimalist soundscapes.

I made this piece in 2001 using Csound and 2nd-order Ambisonics. It has now been re-rendered for 3rd order, and if performed, it will be the very first time the piece is diffused as it was intended: in full 3D (periphonic), with a great deal of spatial resolution thanks to 3rd-order Ambisonics and the marvellous 22 speakers.

This piece is part of an exploration of time using granular synthesis. A moment of inspiration and emotion led to the creation of the original source, and granular techniques were used to revisit that moment and expand it, thus exploring the many facets of its texture and sonic palette. I have been fascinated with the possibilities of exploring time by changing the scale of perception. Small moments of musical or sonic material can be used as a palette for painting a new experience composed of shifting micro-sonic windows of time. There are five sections to this piece, each with a different arrangement of sonic grains coming from one source. Three of them closely reflect the original source, and the other two are more focused on the nature of microsounds.
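The time-expansion idea described above can be sketched in a few lines of code: overlap-add short windowed grains while stepping through the source more slowly than grains are emitted. This is a minimal illustrative sketch, not the piece's actual process; the function names and parameter values are invented for the example:

```python
import math

def hann(n):
    # Hann window of length n, smooths each grain's edges
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def granulate(source, grain_len, hop, stretch):
    """Overlap-add grains taken from `source`.

    stretch > 1 reads the source more slowly than grains are emitted,
    expanding a short moment into a longer texture.
    """
    window = hann(grain_len)
    out_hop = hop                         # spacing of grains in the output
    in_hop = max(1, int(hop / stretch))   # spacing of read positions in the source
    n_grains = (len(source) - grain_len) // in_hop
    out = [0.0] * (n_grains * out_hop + grain_len)
    for g in range(n_grains):
        src = g * in_hop
        dst = g * out_hop
        for i in range(grain_len):
            out[dst + i] += source[src + i] * window[i]
    return out

# A short 440 Hz "moment" at 8 kHz, stretched to roughly 4x its length
sr = 8000
source = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 4)]
stretched = granulate(source, grain_len=256, hop=128, stretch=4.0)
```

Changing `stretch` alters the scale of perception without changing the grains themselves, which is the core of the time-exploration the note describes.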

"17C Foam Rubber and Fur" uses several compositional techniques "popularized" by Elliott Carter, such as:
* Metric Modulation -- the gradual introduction of a new tempo that is a division of the existing tempo
* All-interval 12-tone rows -- a static octave placement of all 12 notes in such a way that all 11 possible intervals exist between them
* Differentiation of voices by assigning each voice a subset of the 11 possible intervals
I've been exploring the advantages and limitations of these techniques, and have found that audio/midi apps are a bit better able to suffer some of the performance difficulties involved (though I've never seen a musician experience a segmentation fault). In this particular piece, I've attempted to make the above techniques somewhat obvious, and the music more accessible than Mr. Carter's tend to be. But maybe I'm fooling myself.
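As an aside on the second technique above, an all-interval twelve-tone row can be found with a small backtracking search over pitch classes mod 12 (ignoring the fixed octave placement the note mentions). This sketch is illustrative and not drawn from the piece itself:

```python
def find_all_interval_row(start=0):
    """Backtracking search for a twelve-tone row whose 11 successive
    intervals (mod 12) are all different -- an 'all-interval' row."""
    row = [start]
    used_pc = {start}   # pitch classes already placed
    used_iv = set()     # interval classes already consumed

    def extend():
        if len(row) == 12:
            return True
        for pc in range(12):
            if pc in used_pc:
                continue
            iv = (pc - row[-1]) % 12
            if iv in used_iv:
                continue
            row.append(pc); used_pc.add(pc); used_iv.add(iv)
            if extend():
                return True
            row.pop(); used_pc.remove(pc); used_iv.remove(iv)
        return False

    return row if extend() else None

row = find_all_interval_row()
intervals = [(b - a) % 12 for a, b in zip(row, row[1:])]
```

The resulting row uses each of the twelve pitch classes once, and its eleven successive intervals are exactly the eleven non-zero interval values mod 12.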

Soñando Satelites [Dreaming Satellites] is a real-time satellite sound installation that invites us to dream and proposes looking above our heads at a territory to explore and hack.
It is a ritual of re-appropriation, a celebration of the fact that we are involved in the same data-space that can control bodies or change relations between entities.
Soñando Satelites is a generative soundtrack connected to Gpredict, a real-time satellite multi-tracking system specially patched for the installation. The installation aims to create an immersive audio/visual space for the audience: the public enters a dark room where a projection of a real-time satellite tracking system is displayed.

The schedule is a general guideline. There is no guarantee that events will take place at the announced timeslot.