I am trying to get separate audio inputs into Processing... I have been trying with Minim with little success. I only need 2 inputs, but unfortunately Minim (or Java?) does not recognize my audio interface as having a stereo line-in (it simply mixes the 2 inputs), even though almost all other audio programs do (e.g. Digital Performer, SuperCollider, etc.), and it doesn't seem to allow me to add multiple audio inputs. (If it recognized the input as stereo, I could just use the L & R channels as separate tracks, but alas.)

I have tried setting the input mixer to one that is capable of handling multiple inputs (e.g. CoreAudio, my audio interface) via the call to setInputMixer(), but it still seems to receive only the one master input. Setting it to CoreAudio makes it read only input 1/L rather than the master (the mixed version of 1 & 2), but it won't read input 2/R... Is there anything else I can do?
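For reference, Minim sits on top of JavaSound, so it can only capture what JavaSound itself exposes. A quick way to check whether Java even sees the interface as a stereo capture device is to list the mixers directly. This is a minimal plain-Java sketch (no Minim needed; the 44.1 kHz / 16-bit stereo format is just an assumed format to test against):

```java
import javax.sound.sampled.*;

public class ListInputs {
    // Return one human-readable line per mixer, flagging stereo capture support.
    static String[] report() {
        // A stereo, 16-bit, 44.1 kHz input format to test each mixer against.
        AudioFormat stereo = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info stereoIn = new DataLine.Info(TargetDataLine.class, stereo);

        Mixer.Info[] infos = AudioSystem.getMixerInfo();
        String[] lines = new String[infos.length];
        for (int i = 0; i < infos.length; i++) {
            Mixer mixer = AudioSystem.getMixer(infos[i]);
            // Does this mixer expose a capture line that supports stereo?
            lines[i] = infos[i].getName() + " | stereo line-in: "
                    + mixer.isLineSupported(stereoIn);
        }
        return lines;
    }

    public static void main(String[] args) {
        for (String line : report()) System.out.println(line);
    }
}
```

If the FA-66 never reports a stereo capture line here, the limitation is presumably in JavaSound on this OS rather than in Minim itself.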

I have also installed Soundflower & JACK, but these seem to have the *same* problems as mentioned above.

At this point, the only solution I can think of is to get the OSC library working in Processing (I have not done this yet), use SuperCollider to read the audio inputs, and send the data to Processing via OSC. This seems extremely kludgy, but it's faster than writing my own audio class to handle multiple inputs (and time is of the essence)... I could also possibly run Csound via the new csoundo library, but my Csound is several years rusty at this point... Is that a better solution?

I am still running OS X 10.4, I have an FA-66 audio interface, and I'm on the latest version of Processing.

If anyone has any thoughts or suggestions, that would be great and I would be ever so grateful. Thanks.

Minim does not support multiple audio inputs, and neither does the other library I tried, Ess. Yes, that is rather critical functionality.

As I only needed the amplitude envelopes from the audio for video processing, I was able to implement a solution via OSC & SuperCollider. I read the multiple audio inputs and took the amplitude envelopes in SuperCollider, then sent the results to Processing via OSC. You could probably do something similar with Max/MSP or Pd as well. It turned out to be a fortuitous solution, since using OSC allowed me to process the audio on a separate computer from the video, which solved another problem that came up later.
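For a sense of how little data this involves: each envelope value can travel as a single small OSC message (a NUL-padded address string, the type tag ",f", then one big-endian float). Here is a minimal plain-Java sketch of that wire format, with no OSC library and a made-up /amp address:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OscAmp {
    // Pad a string to a multiple of 4 bytes with NULs, per the OSC spec.
    static byte[] pad(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int len = (raw.length / 4 + 1) * 4;   // always at least one NUL terminator
        byte[] out = new byte[len];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Encode a single-float OSC message, e.g. "/amp 0.25".
    static byte[] encode(String address, float value) {
        byte[] a = pad(address);
        byte[] t = pad(",f");
        ByteBuffer buf = ByteBuffer.allocate(a.length + t.length + 4);
        buf.put(a).put(t).putFloat(value);    // OSC floats are big-endian
        return buf.array();
    }

    // Decode the float argument back out of such a packet.
    static float decode(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet);
        // Skip the padded address string, then the padded type-tag string.
        for (int i = 0; i < 2; i++) {
            while (buf.get() != 0) {}                  // read up to the first NUL
            while (buf.position() % 4 != 0) buf.get(); // then to the pad boundary
        }
        return buf.getFloat();
    }

    public static void main(String[] args) {
        byte[] packet = encode("/amp", 0.25f);
        System.out.println(decode(packet)); // prints 0.25
    }
}
```

In practice you would use the oscP5 library on the Processing side rather than parsing packets by hand; this is only meant to show roughly what ends up on the wire for each amplitude value.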

I don't know if that is helpful to you or not. If you are interested, I can send you both the Processing & SuperCollider code.

Oops! I just saw your reply now... sorry.
Yes, I did exactly the same: I am sending the waveform via OSC and it works great.
Thank you for the answer.
The main goal was to see whether I could use Processing alone with, say, Ableton Live or whatever. With Pd and Max/MSP there is no problem, because I can send OSC messages to P5.

Just like you two, I am using external mics with Processing on Mac OS X.

I'm using a pair of mono USB mics that work fine in other applications such as GarageBand.

I have not managed to get Processing to recognise them on their own or as an aggregate device (created through the system utility Audio MIDI Setup). When I choose one of the mics in System Preferences, Processing ignores this and uses the built-in mic. If I use the aggregate device, it doesn't see any sound at all.
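One thing worth checking: Processing ignores the System Preferences choice because JavaSound picks its own default input. Minim's setInputMixer() can override that, but only if the mic's mixer is visible to JavaSound in the first place. A small plain-Java probe (the "USB" name fragment is hypothetical; substitute whatever Audio MIDI Setup shows for your mics):

```java
import javax.sound.sampled.*;

public class PickMic {
    // Find a mixer whose name contains the given substring and which can
    // capture mono 44.1 kHz / 16-bit audio; returns null if none is visible.
    static Mixer.Info find(String namePart) {
        AudioFormat mono = new AudioFormat(44100f, 16, 1, true, false);
        DataLine.Info monoIn = new DataLine.Info(TargetDataLine.class, mono);
        for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
            if (mi.getName().contains(namePart)
                    && AudioSystem.getMixer(mi).isLineSupported(monoIn)) {
                return mi;
            }
        }
        return null; // not visible to JavaSound at all
    }

    public static void main(String[] args) {
        Mixer.Info usb = find("USB"); // hypothetical device name fragment
        System.out.println(usb == null
                ? "no matching capture mixer visible to JavaSound"
                : usb.getName());
    }
}
```

If the mics show up here, the Mixer found this way is the kind of object you would hand to Minim's setInputMixer(); if they don't, no amount of Processing-side configuration will reach them.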

I'd love to use SuperCollider to pass the amplitude through, just as you did, to change the behaviour of Processing. I have used OSC in the past, sending sensor and video data back and forth between Max/MSP and Processing, and from iPhone and iPad to Max/MSP, etc. That was when I was working at a university with a Max/MSP licence.

So, as I'm not an avid SuperCollider user (nor an audio person, really!), I'd love it if you had some pointers on streaming those data points from SuperCollider to Processing, especially on the SuperCollider side.