I have read the "personal interactive relaxation system" idea and would like to do something very close to that, but using a microphone and a DAW. The idea is: noatikl generates music on N voices, and one voice "listens" to a mike and responds with some generative melody for a few seconds. I don't know if that is very clear... This would be used for an underground happening. All the computer gear would be hidden; only a mike in a room, and people could toy with it. If anybody could help, it would be great.

The only way I can imagine this happening is if you had a pitch-to-MIDI VST plugin running in a host. A standalone host (not a sequencer) would probably be optimal, i.e. something not tied to a strict timeline that can just be enabled and left running indefinitely. The listening part could be covered easily with scripting: intercept the MIDI notes in a script (it doesn't even matter whether the pitch-to-MIDI conversion is accurate, because you can do anything you like in the script part). You could mute/unmute, or send out a certain nonspecific pitch (maybe at zero velocity) that other voices are set to watch in "following" mode.
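To make the mute/unmute idea concrete, here is a minimal sketch of the gating logic in plain Python (not noatikl script; `ListenGate`, `on_midi_note`, and `voice_enabled` are made-up names, and the 4-second hold time is an arbitrary choice). The pitch-to-MIDI plugin's note events feed `on_midi_note`, and the host script would unmute the responding voice while `voice_enabled` returns True:

```python
import time


class ListenGate:
    """Tracks when the last note arrived from the pitch-to-MIDI
    converter and decides whether the responding voice should play."""

    def __init__(self, hold_seconds=4.0):
        self.hold_seconds = hold_seconds  # how long the voice answers
        self.last_heard = None

    def on_midi_note(self, pitch, velocity, now=None):
        # Called for every note the pitch-to-MIDI plugin produces.
        # The actual pitch can be ignored; all that matters is that
        # *something* was heard on the mike.
        self.last_heard = now if now is not None else time.monotonic()

    def voice_enabled(self, now=None):
        # The responding voice stays unmuted for a few seconds after
        # the last detected note, then falls silent again.
        if self.last_heard is None:
            return False
        now = now if now is not None else time.monotonic()
        return (now - self.last_heard) < self.hold_seconds
```

The same shape works for the zero-velocity trigger variant: instead of unmuting, emit the fixed "watch me" pitch whenever a note arrives and let the following voices do the rest.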

These guys make a neat pitch-to-MIDI transcription helper program, but they also release their product in VST format for pitch-to-MIDI conversion. The best part is that they do an Audio Units version of their plugin and Windows/Mac versions of the VST. Something like that might be worth investigating, because it could send the notes via IAC and trigger noatikl in a few different ways.

I've been playing with listening voices a bit, and there is some odd behavior even when using just monophonic notes; it isn't working the way I would expect.

It seems that no matter what I do, the note that comes in to be processed is automatically sent back out by the voice. So I am unable to modify the original note to "fix" one that is out of scale, because it always comes out regardless of what my script does or how the voice is set. Is there any way to destroy this note that I don't want to hear?

I can't filter it out because I can't know in advance what to filter...

Edit: the note goes through whether or not noatikl_Trigger_EmitListeningNote() is in the voice's note trigger script. "Listen?" is checked as well. I've tried different channel settings and routing, but I still end up with those original notes scrambled in like eggs.

I'll do a bit more experimenting with it. I think it might have to do with the semitone shift settings.

EDIT: yeah, it does, and it is an odd beast. If the shift is at 0, it doesn't work (I'm using an interval-within-a-scale rule). If I set it to 1, it does work, but everything is shifted by 1. So to get it to really work, you have to shift by 1 or more, and then subtract whatever the shift is from the pitch sent to the emit() call.

(Actually, I think you then have to subtract whatever interval it is shifting by? I need to play with this some more and come up with a workaround, because it is not consistent...)

I think I can work around this bug, but it is odd that I have to counteract a shift that could be unknown without continually inspecting the scale rules, especially if the scale is changing dynamically.