Generally speaking, LS is concerned with the innovative and transdisciplinary nature of audio art. We explore these areas in a group context, within which we aim to evaluate the group's own projects and realizations; we also tend to provide openings for other artists and researchers. An important aspect of our research centers on the exploration of multi-user systems, which by definition requires collective collaboration. Our research is framed by two main themes: audio in its relation to space, and networked audio.

Locus Sonus's first project is called the "Locus Stream" project. It consists of an international network of open microphones that stream soundscapes in real time over the Internet. The project was conceived as a resource: a source of sound material permanently available to be exploited in different art forms. Details of these projects are available on the LS website; for present purposes it is only necessary to develop the aspect of the Locus Stream project that has led to our current interest in virtual space.

Second Life is a multi-user online world which has recently gained a certain importance, in the sense that it is used by a large number of people. Unlike the majority of virtual worlds, Second Life is not organized as a game, but rather as a social space, and the architecture of the world is built and maintained by its users. LS started to take an interest in SL as an online communal space in 2007, when a discussion took place as to whether it could be used as an annex for our experiments.

The first action LS accomplished in Second Life was to build an interface, in the form of an old "Marconi" radio set, which offers the possibility of accessing the LS streams from within SL.

The question then became how to get an audio stream out of Second Life. It rapidly became apparent that, since the possibilities for creating audio in Second Life are limited to basic sound effects and sample playback, we needed to add an external audio engine in order to generate spatial effects and other complex audio operations.

It was decided to use Pure Data (PD) for this purpose. A system was devised to allow PD to retrieve the x, y, z coordinates of objects in Second Life via the web, generate the corresponding sound, and stream it back into Second Life.
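In outline, the coordinate-to-sound link works by mapping an object's in-world position, relative to a listener, onto synthesis parameters. The following is a minimal sketch in Python rather than PD; the function name and the inverse-distance mapping are illustrative assumptions, not the actual patch:

```python
import math

def coords_to_params(obj, listener, ref_dist=1.0, rolloff=1.0):
    """Map an in-world object position to simple synthesis parameters.

    Returns (amplitude, azimuth): a distance-attenuated gain and the
    horizontal angle of the source relative to the listener, the two
    values a spatialization engine would typically consume.
    """
    dx = obj[0] - listener[0]
    dy = obj[1] - listener[1]
    dz = obj[2] - listener[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Inverse-distance attenuation, clamped so the gain never exceeds 1.0.
    amp = min(1.0, ref_dist / max(dist, ref_dist) ** rolloff)
    azimuth = math.atan2(dy, dx)  # horizontal angle, -pi..pi
    return amp, azimuth
```

In the actual system these values would be recomputed each time fresh coordinates arrive from SL and fed to the audio engine.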

This was done in collaboration with SAIC (the School of the Art Institute of Chicago).

The result of this experimentation was presented publicly in June 2008 during the Aix-en-Provence new-media festival "Seconde Nature". The installation mixed the virtual online sound space with the local physical space:

LS/SL 'Locus Sonus in Second Life'

During the Seconde Nature festival, the Locus Sonus Lab sets up an extension of the Cité du Livre venue in Aix-en-Provence inside Second Life.

The idea is to experiment with the possible permutations between the physical and the virtual world, using audio as the main vector.

The aim is to examine the way virtual resonant spaces influence and mix with the local acoustic space, leading to a paradoxical hybridization that potentially places the user in both places simultaneously.

Avatars visiting the "Cultures Digital island" in Second Life are invited to manipulate sound objects.

Their actions are spatialized in the physical space in Aix, and the resulting audio signal in the physical space is recorded and streamed back into Second Life.
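The spatialization step can be sketched as simple amplitude panning over a ring of loudspeakers. This is an illustrative Python sketch only; the installation's actual spatialization was implemented in PD, and the equally spaced circular speaker layout here is an assumption:

```python
import math

def ring_gains(azimuth, n_speakers=4):
    """Amplitude gains for a circular ring of equally spaced loudspeakers.

    Cosine-weighted panning: each speaker's gain falls off with the
    angular distance between it and the virtual source direction, and
    the gain vector is normalized for constant power.
    """
    gains = []
    for i in range(n_speakers):
        spk_angle = 2 * math.pi * i / n_speakers
        # Wrap the angular difference into -pi..pi before weighting.
        diff = math.atan2(math.sin(azimuth - spk_angle),
                          math.cos(azimuth - spk_angle))
        gains.append(max(0.0, math.cos(diff)))  # only facing speakers radiate
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]
```

A source directly in front of speaker 0 yields all its energy there; a source midway between two speakers splits the signal equally between them.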

The experience during the Second Nature festival led to the following conclusions:

The relationship between visual space and audio synthesis, in particular virtual resonant spaces, appears to be fruitful terrain to explore and develop. We are also very interested in the artistic possibilities offered by linking the physical world and virtual spaces via streaming in one direction and multi-loudspeaker spatialization in the other.
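As an illustration of what such a virtual resonant space involves at the signal level, a feedback comb filter, the building block of classic Schroeder reverberators, produces a crude room-like resonance. This is a sketch only; the name and parameters are hypothetical and this is not the system's actual DSP:

```python
def comb_filter(signal, delay_samples, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay].

    A minimal model of a reflective room mode: repeated, decaying
    echoes spaced delay_samples apart color the input resonantly.
    """
    out = []
    for n, x in enumerate(signal):
        y = x
        if n >= delay_samples:
            y += feedback * out[n - delay_samples]  # recirculating echo
        out.append(y)
    return out
```

Feeding an impulse through the filter yields an exponentially decaying echo train; several such filters in parallel, with mutually prime delays, approximate a diffuse resonant space.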

Some notes proposed by Scott Fitzgerald following the LSSL presentation:

Regarding our experience with Second Life at Seconde Nature, I think it would be best for Locus Sonus to consider moving to a different platform, or tool, for creating aural networked virtual spaces.

Second Life, in my opinion, has one thing going for it: a built-in user base. However, we did not capitalize on that base, for a number of reasons. With an estimated population density of 87 people per sq km, one has to wonder what Second Life really offers us in terms of an audience.

A last-minute email sent to the Locus Sonus mailing list, and no "in world" publicity, accounted for (from what I saw) only two visitors in Second Life during the Seconde Nature festival who were not physically present at the event.

If the population of Second Life is 1) not informed about LS and the work and 2) nowhere near the event at the time, then it defeats the purpose of using such a space, as the pre-existing community offers us nothing.

The participants at Seconde Nature certainly enjoyed themselves, but it's hard to say how many of them thought of it as a "game" or a "Second Life thing" rather than as a sound work. Wrestling with cumbersome controls and using machines not designed to run the simulator did not help people experience what the work should have offered.

Also, as we witnessed, the Second Life scripting language has a large number of flaws that seriously inhibit accurate position tracking, making the environment far from ideal for what we wished to achieve.

Or, in bullet points:

1) The controls are awful, and people in the space who had never used it before did not know what they were doing.

2) The Second Life scripting language is horrible, sends bogus data, and is generally prohibitive.

3) The one asset Second Life does have, a large user base, is offset by the fact that a) there was no "in world" advertising of the event and b) population density in the simulator is so low that a "walk-in" is unlikely.

4) People perceived the whole experience as a "game"; their take-away was more about that aspect, "the game Second Life," than about a sound piece, which detracts from the intent.

Having said all that, I think the ideas we worked with (virtualized sound spatialization, linking the virtual and the real) are valid points of investigation. Perhaps LS could explore various other virtual spaces to work with. Panda3D (http://panda3d.org/), as mentioned by Alejo and used by SAIC, can run over the network and allow many people to log in remotely. Ogre (http://www.ogre3d.org/) is another open-source 3D engine, though I am not sure of its ability to be networked (an interesting project, though: http://jitogre.org exposes Ogre to Max/MSP/Jitter).

Of course, there is also the possibility of creating a 3D environment in GEM and streaming it to clients, as another approach. Obviously it wouldn't have all the functionality of an Ogre or a Panda3D, but it could serve as a simple sandbox (much as Second Life already does).