I did some work with MAX at CNMAT and on my own a very long time ago, but since then I've led a MAX-free existence - don't know how that happened. Anyway, I finally have the opportunity to get back into it, so I just started to catch up and see how things have developed since I left. Holy cats - they certainly have changed. The fact that I'm on this forum shows you a bit about where my click trail has taken me.
Ok, so I have the opportunity to retool and almost start from scratch, and I'm faced with general questions about the ideal OS / software to commit to. I'm aware that questions like these can be inflammatory, so I'll give a bit more info and justification for being so general.
I'm primarily a Mac OS X dude, although in my main profession as web developer, I use all OpenBSD servers. For one particular project, I may be moving to Linux because of its clustering possibilities. But at home, I do all my work on my G5.
My training, such as it is, is in composition - with pen & paper even. But I'm eager to expand my toolset with various computer-assisted techniques, and I intend to develop my own, not just use what's out there, so programming is going to be a big part of my plans.
My immediate inclination was just to lay down the $495 for MAX/MSP (my copy is so old I don't think I can even upgrade). Maybe add jitter too - looks insanely cool. But then I clicked on a link to a company that does cycling74-based work, and followed a link to Pure Data. And things really opened up from there.
So it looks to me as if there's a bit of a conflict here...
- If I go with what's semi-familiar and stick with MAX on my Mac, I immediately also get an IRCAM Forum Pass. Quite a bit of $, but a lot of bang. Good support. Established community.
- But then there's Pd, which is open-source - that's a HUGE plus in my book, not just because it's free, and it'll run on my Mac...
- ...but from what I can tell, a lot of the most interesting development in this field would appear to be taking place on the Linux platform, AGNULA, dyne:bolic, etc.
So that's my question. MAX vs. Pd, OS X vs. Linux, and the natural combinations. Will Pd be a practical alternative to MAX/MSP, with its established user base and support, or even an improvement? Am I shortchanging myself with Pd on OS X? Might it be worth taking the plunge into Linux? I'll be interested to hear all advice and experiences.
Thanks...
Alex

i've been wanting to write my own softsynths for years, and i think PD is the tool i've been looking for, so i'm very excited about this. however, i can't get an IF statement to work to save my life; i've tried everything i can think of, and it always says:
"error: if: no method for 'float'"
(i've gotten IF statements to work before in Max, and when i change the number of input variables ($i1) the number of inlets changes, so i think i'm on the right track)
i'm sorry if this is a dumb question.
if it matters, i'm running PD 0.37.0 on a pc w/ windows 2000.
also - i'm probably not going to be doing this for quite a while, but is it possible to compile PD programs into DX or VST plugins?

greetings
i find it difficult to build synth sounds using the pd oscillators and amp them to sufficient levels without hearing distortion when playing 3+ note chords. this is especially the case with lower notes in close proximity to each other.
if i turn the overall gain down, it obviously goes away. however, this is a relatively low volume compared to external sound through pd or even playing samples. i've used lop~, hip~, tanh~, clip~ and limiter~ to no avail.
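For what it's worth, the distortion here is plain arithmetic: N unit-amplitude oscillators can momentarily sum to ±N, and anything past the [-1, 1] range clips at the output. A sketch of the numbers in Python (illustrative only; in a patch the fix is the same idea, scaling the mix by 1/N with something like [*~]):

```python
import math

def mix(phases, gain):
    """Sum several unit-amplitude sine oscillators, then apply a master gain."""
    return gain * sum(math.sin(p) for p in phases)

# Three oscillators peaking together: the raw sum leaves the [-1, 1] range,
# which is exactly what clips at the output and sounds like distortion.
raw = mix([math.pi / 2] * 3, gain=1.0)
assert raw > 1.0

# Scaling the mix by 1/N keeps the worst case inside [-1, 1];
# the trade-off is the lower overall level mentioned above.
safe = mix([math.pi / 2] * 3, gain=1.0 / 3)
assert -1.0 <= safe <= 1.0
```

Low notes close in frequency also beat against each other, so their peaks line up periodically, which would explain why the problem is worst there.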

Hi,
I've been an OS X user all my life, have never once used Terminal etc... Anyways I REALLY want to try out pd-l2ork because it seems to have so many awesome features. I partitioned my hard drive and installed the latest version of Linux Mint. I went to the Virginia Tech site and followed the instructions exactly. I downloaded the .deb file, opened it with the package installer, installed the missing dependencies with sudo apt-get install -f and... nothing. No icons, no evidence that pd-l2ork has been installed anywhere. I type "pd-l2ork" into Terminal and I get "pd-l2ork: command not found". I try searching for the program itself on the hard disk and... nothing.
I tried installing pd-vanilla through the software manager, worked like a charm. I have no idea how to troubleshoot in Linux so if anyone could give me some pointers that would be awesome!!
Thanks

Hi,
i am completely new to PD but before i really get into it i need to do some basic research...
i am looking into the possibility of performing a live score for my diploma movie (~60min). we did some experiments hooking up the (SMPTE) timecode out of an analog betaSP VTR to a "motu midi timepiece", converting it to mtc and then going into max/msp on my friend's powerbook. this approach is easy, failsafe and it works, but as i am not a student anymore it is practically impossible to access a betacam player for this purpose - so i'd rather use a software-only solution, which could be distributed on more than one computer, if necessary.
when i tried to load my 60 min movie with pix_film (or pix_movie, whatever it is called) PD crashed immediately. i bet it is trying to load the whole movie into RAM, which is not possible, even with the "small" DV compressed version. we would even prefer a DVCPro HD compressed movie, which would be much bigger than the DV version. i was looking for a movie player which can be controlled by OSC but didn't find anything. in a "perfect" projection environment i would like to project the movie using an HD video projector.
the other question is that, unlike most video projections for media arts, i need to have the whole setup perfectly in sync, as i would play at least 2 tracks (more likely 4) of direct location sound along with the movie, which has to stay in lip-sync for 60 minutes.
furthermore, i was looking into a scoring interface like OTL to have a graphical score for the movie, controlling certain dramatical parameters automatically.
can anyone give me any advice on whether a setup like this is theoretically possible to realize, or should i quickly forget about it?
thanks in advance,
till

Hello all,
I'm new to PD. Spent my weekend holed up learning as much as I could cram into my head in 48 hours. YouTube tutorials are a real blessing.
I have an idea for an installation for an art show coming up where audience members could send a text to a number, and PD translates the text into midi that triggers a synth to play back at them. Gimmicky, I know, and probably been done 1000 times, but it seems like a reasonably simple path to interactive sound (and maybe visual, eventually) art, something I'd like to explore.
I messed around with entry box this weekend and mangled it into sending a midi trigger out for each uppercase letter that I manually entered, based on my limited understanding of this post:
[http://puredata.hurleur.com/sujet-3601-ascii-binary][0]
I have some questions. Is it possible to route an sms message into PD? Can I turn it into a list? Can I separate the list into spaced characters?
Also, I handpatched each letter to a specific number corresponding to the midi trigger. Is there a more elegant way to assign a number to a symbol, perhaps some sort of array or formula? I don't mind drawing the lines, it just looks unwieldy and messy.
Thanks for your help.
NB
[0]: http://puredata.hurleur.com/sujet-3601-ascii-binary

I've been working on a simple step sequencer design that uses a Korg Nanokontrol. For each of 8 channels on the Nano, the knob controls the note duration, the fader controls the note value, and I'd like the two momentary toggles to control velocity.
So the two buttons are set up as toggle switches, sending out a MIDI CC with 0 or 127 depending on whether they're on or off. With two on-off toggles, I'd like to use the four states to feed four velocity values into a makenote: 0, 42, 84, 127.
I started by feeding the ctlin output to some == logic objects, so I can get four 0s or 1s, based on whether the two toggles are on or off. But from there, I'm stuck. I've tried some conditional logic from there, but the cold inlet concept has me stymied. I suspect there's some order of operation stuff going on, too? There's also the issue of differentiating between the two one-on, one-off states. Any advice would be appreciated!
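The four states reduce to a 2-bit index into a table, which sidesteps the conditional logic entirely. Here is the idea in Python (the 0/42/84/127 values are from the post above; everything else is just a sketch of the logic, not Pd objects):

```python
VELOCITIES = [0, 42, 84, 127]  # off/off, off/on, on/off, on/on

def velocity(cc_a, cc_b):
    """Map two toggle CCs (0 or 127 each) to one of four velocities.

    Treat the pair as a 2-bit number: bit 1 = toggle A, bit 0 = toggle B.
    This also distinguishes the two mixed states (A on / B off vs. A off / B on).
    """
    a = 1 if cc_a == 127 else 0
    b = 1 if cc_b == 127 else 0
    return VELOCITIES[(a << 1) | b]

assert velocity(0, 0) == 0
assert velocity(0, 127) == 42
assert velocity(127, 0) == 84
assert velocity(127, 127) == 127
```

In a patch this is roughly one toggle's 0/1 through [* 2], added to the other's 0/1, with the result indexing a four-element table; no if/else chains, and the two mixed states come out distinct for free.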

Question -
I need to basically mix 6 different audio streams. Simplest way possible, no need to set levels, they're already set.
I have seen some example code where an object has multiple audio streams feeding the **same** inlet. I've tried that. It seems to work just fine.
Is it safe to feed multiple audio streams to one audio inlet on an object? Will it always work? Or should you explicitly add the streams (with +~)?
My hunch is that you should add the streams, and not assume what the object will do if given multiple input streams, since objects are written by all kinds of folks...
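Either way the arithmetic is the same: mixing is sample-by-sample addition, which is what a signal inlet computes when several cords land on it, and what an explicit [+~] chain spells out visibly. Illustrated in Python:

```python
def mix_streams(*streams):
    """Mix equal-length audio streams by summing them sample-by-sample."""
    return [sum(samples) for samples in zip(*streams)]

# Six streams mix exactly the same way three do: each output sample is
# the sum of the corresponding input samples from every stream.
mixed = mix_streams([0.1, 0.2], [0.0, -0.1], [0.2, 0.2])
```

The explicit [+~] version has the advantage that the summing is stated in the patch itself rather than relying on the inlet's behavior, which matches the hunch about not assuming what a third-party object does.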
I couldn't find this explicitly stated in the online books, I may have missed it...
Shawn

Hello,
I currently have a patch with
[adc~]->[fiddle~ 4096]->[34.04]->
Which works great for detecting the pitch. I know I have to use threshold~ and spigot to define that I only want the pitch when the input is above a certain volume. Can anyone point me in the right direction? Thanks!

Is it possible to add externals to the "Application" version of Pd-0.38 for OS X?
Further to this, has anyone managed to get comparser~ to work in any version of Pd for OS X?
comparser~
[http://kmt.hku.nl/~pieter/SOFT/CMP/doc/bin.html][0]
[0]: http://kmt.hku.nl/~pieter/SOFT/CMP/doc/bin.html

hi there
I want to use pd to analyze realtime-audio and maybe send the information via OSC to the blender game engine. In essence I want to create something for visualizing real-time-music.
I tried out processing for a short while, but because I have no programming background and therefore was completely overwhelmed by java, I thought maybe I should give node-based programming a try. What I found was max/msp, vvvv, and pd. Since pd is like an open source version of max/msp, and I feel that there are more possibilities than in vvvv, I want to get into pure data.
Now, I realize this is not something I can just ask "tell me specifically how to do that, because that's the only thing I want". I know this will take some time, but I kinda don't know where to start, and it would be nice if you could look over my rough roadmap to my final goal, and maybe "push" me in the right direction, or add any hints you might have for an honest newcomer.
While I stumbled upon some patches that do audio analysis, I really want to get into it, so I'll be able to do this on my own some day.
My rough roadmap:
- Do as many tutorials as possible, to learn what is possible, and how things are done, and what all the "nodes" do
- review bonk~ and fiddle~, which I stumbled upon earlier but do not know what to do with yet; read up on fft
- (be able to) analyse existing patches that do audio-analyzing
-???
- be able to detect beats, amplitudes, frequencies or even different parts of a frequency graph, and write my own uber-analyzing patch
-use real-time values to make fancy visualizations either in pd, or via osc in the blender game engine
-eternal happiness
As resources for tutorials and documentation I found the following:
- [http://puredata.info/docs/ResourcesToStartLearning][0]
- [http://crca.ucsd.edu/~msp/Pd\_documentation/][1]
- the links in the "suggestions for newbs"-sticky ([http://puredata.hurleur.com/sujet-248-suggestions-noobs][2])
I would be really thankful if you had anything to add to my "roadmap" or any resources I have overlooked, especially given my interest in audio-analysis.
My background is more of a hobby 3d-animation thing. I once tried doing visualization to music with cinema4d ([http://vimeo.com/album/1530753/video/29506617][3]), but I think real-time would be much cooler.
thanks in advance
alex, off to the tutorials
Edit: I just realized that maybe I should have posted this in "technical issues"? sorry about that.
[0]: http://puredata.info/docs/ResourcesToStartLearning
[1]: http://crca.ucsd.edu/~msp/Pd_documentation/
[2]: http://puredata.hurleur.com/sujet-248-suggestions-noobs
[3]: http://vimeo.com/album/1530753/video/29506617

Hi, I have an arduino already communicating with pd through processing but I'm thinking of cutting out the middle man and have a couple of questions if anyone could help me out a bit
1. Can pduino read and use string data sent out on the arduino serial?
2. Are there limitations on the sensor values pduino can handle? I thought I read somewhere that it can only handle values up to 255, which worries me because there are no limits on the sensor I am currently using (a rotary encoder).
3. Do you have to load a certain firmata example onto the board? Or can you use #include to get it working? I'm asking because I have some custom code to handle a sensor that can't be handled by plain firmata, unfortunately.

Hi. I would like to write a series of numbers from [urn] into an array with [tabwrite]; I understand I have an issue with the index input of tabwrite, but I can't figure out how to fix it.
Please, help me ! Thanks !
[http://www.pdpatchrepo.info/hurleur/random\_list.pd][0]
[0]: http://www.pdpatchrepo.info/hurleur/random_list.pd

Hi,
I need to play a basic sound like a piano key, but I need to pitch-shift it so it can be played at all possible frequencies.
Is it possible?
Will I have a limit?
Is it better to find pre recorded samples of all the keys from a bank?
Thanks,
Regards,
Nuno

I am brand new to PD. I have been messing around a bit and watching some tutorials. I was watching a tutorial where midi notes output from PD were played in FM8. Depending on the channel, a different instrument was used in FM8. I was wondering how something like this works on android. I found a tutorial for building a guitar tuner that received input from the microphone, but not much about how sound is played. All I could find were some discussions about how real-time MIDI is not supported. Presumably this means that you need to output to a file and then ask the media player to play it back? Is there any way to select different instruments? Or is there some altogether different approach, like including patches that produce guitar sounds? I guess my general question is whether or not you can build a stand-alone application that simulates multiple instruments without having to involve other 3rd-party applications to actually play the sound.

I'm sure there must be an easy way to do this, but I'm at a loss.
I want to pack a sequence of floats into a list (or message list?), and to hold that data there without sending anything, then later be able to bang out the nth element of that list on demand.
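What's described is just "store now, index later". A Python sketch of that logic (not Pd objects; in a patch this role is usually played by an array with [tabread], or by [list store] in recent vanilla versions):

```python
class FloatStore:
    """Hold a sequence of floats; output nothing until asked."""

    def __init__(self):
        self.values = []

    def pack(self, *floats):
        self.values = list(floats)  # store silently, no output

    def bang_nth(self, n):
        return self.values[n]       # emit the nth element on demand

store = FloatStore()
store.pack(1.5, 2.5, 3.5, 4.5)
assert store.bang_nth(2) == 3.5
```

The key point is that storing and outputting are two separate operations triggered by two separate messages, which is why a cold-inlet store plus an on-demand index works.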
It would be a huge help if someone could point me in the right direction.
Cheers

hello everybody,
First thank you for puredata,
i have a question about my microcontroller,
I need to control a lamp with a single MIDI output from multiple sensor values.
So I receive my data and send them all to number boxes, but only one sensor displays values. (see my patch, it's the "arduino-test-5.pd")
Maybe I should select only the highest values.
thank you
good evening
and sorry for my english
[http://www.pdpatchrepo.info/hurleur/screencapture.png][0]
[0]: http://www.pdpatchrepo.info/hurleur/screencapture.png

Hi all, my name is Ian. I used to do a fair amount of work in Max/MSP and, after a few years' hiatus, I have decided to get back into it. This time, though, I am planning on sticking with open-source. So, I've been digging into PD, and I have a few questions right off that I hope you can help me with.
1. The number box seems to accept and display floats, but doesn't seem to act as a slider input for them. How does one typically input a range 0. to 1. in PD?
2. Before I go re-creating a bunch of oscillators, is there a standard distribution of oscillator objects or abstractions? Specifically, I am looking for something analogous to MSP's tri~ and trapezoid~ and band-limited / anti-aliased versions of the standard saw, square, sin waves.
3. What does it mean when I get the error "can not create object"? I've received this a few times on what I thought were objects included in PD extended.
4. What do you use for an oscilloscope?
Thank you for your time. I'm really having fun getting back into this sort of thing.
Ian

i saw this:
[http://cycling74.com/project/drawing-music-an-interactive-project-by-yuval-gerstein/][0]
can this be done with a web cam in pd on a macintosh?
and how do you type the tilde character on a mac keyboard?
is there any chance to autocomplete text in the objectboxes like in max?
cheers
andreas
[0]: http://cycling74.com/project/drawing-music-an-interactive-project-by-yuval-gerstein/

Hi all,
I have been messing around with Python. I wanted to learn a programming language for a while and I found thenewboston tutorials on youtube and it just seemed to be very easy and clicked. I tried a few tutorials in various languages over the years but none of them seemed as interactive and generally simple as Python.
I have seen packages that link PD and Python, and I have a very, very general question: what kind of thing would you do with them? It's kind of a conceptual problem, if you see what I mean. Do you use one or the other as a front end, and if so, what is the advantage? What can be achieved in combination that cannot be achieved separately?
It's not really a technical issue but I wasn't sure where exactly to post it.

Hi,
I've been looking for introductory info on moses objects. (basically a guide to what they are and how to use them) but have become a bit frustrated by the lack of info (unless I'm not looking in the right places...also very possible). I'd really appreciate any help on my moses quest ("moses quest"..wow..)

Hi everybody,
this is my first post and an embarrassing newbie question.
I checked out all the examples and tutorials and I still can't figure out how to do this in PD.
I want to trigger samples according to the time on the computer clock
i.e. sample x played every 3 seconds from 12:10 to 12:12 pm
sample y play once at 12:11pm
can anyone direct me to a similar patch or example? I will be very grateful.
thank you
Gottlieb

HI,
I have some questions, and maybe someone has just posted this, sorry, but i need only some ideas.
1- Is it possible (in linux) to connect several mice and use them like midi knobs, and exclude these mice from the desktop system? mice are very cheap and have rotary encoders and buttons; if i can hook up two or more mice and use them only for PD... it's a way to build a cheap midi knob box.
2- Can someone explain another thing: i have an arduino, but i'm very much a noob... i want to use it with PD and i read something about pduino... i'm confused: using pduino, what do i get? i want to try, but first i need to know what i can do with an arduino/pduino.
thanks.

Hi guys!
First of all congratulations on a vibrant and well moderated community. Its great to have found this place, and finally to have a hobby I'm really into.
I'm quite new to Puredata. The video tutorials are of limited use to me at the moment because I am yet to get pure data sharing the sound card with firefox (I'm on Ubuntu).
Generally I much prefer solving my own problems, but this time I thought I'd throw it out there and see what happens. I'm sure that apart from the solution, people will probably be highly amused by my inefficient "coding".
Anyway here is my Aim:
To make a clock which pauses, IF no audio has been received for a while AND the mouse hasn't moved for a while. When one of these things happens the clock starts again from where it left off.
The mouse detection works fine, and the mock audio pulse I've set up works fine, but I can't get the two conditions to meet properly and switch the clock on and off.
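The condition being described reduces to two watchdog timestamps combined with an AND: the clock keeps running while either source has been active recently, and pauses only when both have gone quiet. A sketch of that logic in Python (the 5-second timeout is an invented placeholder):

```python
class ActivityClock:
    """Run only while there has been recent audio OR mouse activity."""

    TIMEOUT = 5.0  # seconds of silence/stillness before pausing (assumed value)

    def __init__(self, now):
        self.last_audio = now  # updated whenever an audio pulse arrives
        self.last_mouse = now  # updated whenever the mouse moves

    def running(self, now):
        # The clock pauses only when BOTH watchdogs have expired;
        # any fresh activity on either side resumes it.
        audio_idle = (now - self.last_audio) > self.TIMEOUT
        mouse_idle = (now - self.last_mouse) > self.TIMEOUT
        return not (audio_idle and mouse_idle)

clock = ActivityClock(now=0.0)
clock.last_mouse = 4.0               # mouse moved at t=4
assert clock.running(now=6.0)        # audio idle, mouse recent: keep running
assert not clock.running(now=20.0)   # both idle: pause
```

The common pitfall is testing the two conditions at different moments; comparing both timestamps against the same "now" (one clock tick driving both tests) keeps the AND well-defined.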
I know the solution is probably embarrassingly simple, but I just thought I'd throw it out there anyway.
Just to give you an idea of the ridiculously challenging project I've set myself- I'm going to write a VST plugin that counts your active time working in an audio session. When I'm done that I'm going to wrap it to RTAS!! I'm sure many a nightmare waits to ensnare me along the way but hey I'm having fun!
K back to it :) (patch attached)
[http://www.pdpatchrepo.info/hurleur/clock\_in\_progress.pd][0]
[0]: http://www.pdpatchrepo.info/hurleur/clock_in_progress.pd

Hi, first time with pd.
I don't know how to make an osc go through all the frequencies between one value (x) and another value (y) in a certain time. For instance, from 960 to 60, going through 959 - 958 - ... - 61 - 60, in 1 second, and from 60 to 960 in 1 second.
Is it about pitch-shifting? Or is it about linking an object to the osc that changes its frequency by a certain amount every x milliseconds, so as to reach 60 Hz after 1 second? Can you help me?
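It's the second idea: not pitch-shifting, just a ramp driving the oscillator's frequency inlet (the job [line~] or [line] does in a patch, given a target value and a time). The arithmetic, sketched in Python:

```python
def ramp(f_start, f_end, duration_ms, t_ms):
    """Linear frequency ramp: interpolate between two endpoints over a duration,
    like sending a target value and a ramp time to a line generator."""
    if t_ms >= duration_ms:
        return f_end
    return f_start + (f_end - f_start) * (t_ms / duration_ms)

assert ramp(960, 60, 1000, 0) == 960       # start of the sweep
assert ramp(960, 60, 1000, 500) == 510     # halfway down after 500 ms
assert ramp(960, 60, 1000, 1000) == 60     # arrives after one second
```

Because the ramp is continuous, it passes through every intermediate frequency (959, 958, ...) on the way down; sweeping back up is the same call with the endpoints swapped.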

Hello, I'm reading about arrays and how they can read from .txt files...
I'm wondering where those txt files are saved when you send a message to an array, and what I have to type in a .txt file to make an array read it. Do I need to create the .txt file with some syntax, or is it just numbers?
I know it's a reaaaaaaaaaaaaaaaaaaaaaaaaally simple, borderline stupid question; I'd just like to know.

Hi All,
Im trying to figure out how to use [scale]
im looking to map a range of input numbers to a range of output numbers (similar to zmap in max/msp)
when i use scale in this way i get constant errors in the pd window (error: scale: no method for 'float')
[num]
|
[scale 0 480 0 127]
|
[num]
if i click on the help for [scale] it tells me it is a GEM object.
Apologies for the newbie question. Moving from Max to pd is taking some time :)
Thanks

I just started pd and I have a simple question.
Can anyone teach me how to make a pendulum-like sequence, for example (1,2,3,4,5,4,3,2,1,2,3,4,5,4,3,2,1,2,3... and so on)?
I'm actually trying to build a pendulum-style note sequencer.
So if I decide the maximum step is 7, it will be like 1,2,3,4,5,6,7,6,5,4,3,2,1,2,3... and repeat forever.
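The sequence described is a counter whose step flips sign at the endpoints. A Python sketch of that logic (in a patch you would build the same thing from a counter plus comparisons that flip the increment):

```python
from itertools import islice

def pendulum(maximum):
    """Endlessly yield 1..maximum..1.. with each endpoint played once per pass."""
    step, value = 1, 1
    while True:
        yield value
        if value == maximum:
            step = -1              # hit the top: start counting down
        elif value == 1 and step == -1:
            step = 1               # back at the bottom: count up again
        value += step

seq = list(islice(pendulum(5), 10))
assert seq == [1, 2, 3, 4, 5, 4, 3, 2, 1, 2]
```

Note the endpoints appear once per pass rather than twice, matching the 1,2,3,4,5,4,3,2,1,2... pattern in the post; repeating them would need the flip to happen one step later.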
So I would be glad to know if anyone can teach me how to do it.

Hi,
I could really do a bit of simple help here! I've done a few searches on this forum but can find nothing so far...
I'm building a patch and am struggling with the logic in a small section which I've attached .
I'm using a radio selector in the GUI to select values, and as you can see in the patch they aren't coming up correctly when you select at different points in the radio.
The thing is I'm using the same principle elsewhere in my patch and it works fine. I'm guessing it's to do with sending the radio msg to all the bangs??
Any help would be greatly appreciated!!
Thanks
J
[http://www.pdpatchrepo.info/hurleur/ordering.pd][0]
[0]: http://www.pdpatchrepo.info/hurleur/ordering.pd

Hi Everyone :)
I've just done the YouTube tutorial video on creating a basic sequencer. But I would now like to randomize the sequence from the table. I aim to do this using the randomF object; however, the numbers it outputs are not whole integers.
How can I get it to output only whole numbers from 1-16?
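The standard trick is scale, truncate, offset: take the 0-to-1 float, multiply by 16, drop the fraction, add 1. Sketched in Python:

```python
import random

def whole_random(lo, hi):
    """Uniform whole number from lo to hi inclusive: scale, truncate, offset."""
    span = hi - lo + 1
    return lo + int(random.random() * span)   # int() drops the fraction

values = [whole_random(1, 16) for _ in range(1000)]
assert all(1 <= v <= 16 for v in values)      # always in range
assert all(v == int(v) for v in values)       # always whole
```

Truncating (rather than rounding) is what keeps the 16 outcomes equally likely; rounding would give the two endpoints only half the probability of the middle values.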
Many thanks,
Dom

Hi all,
Great forum.
Coming to PD from a background in recording studios, so excuse the ignorance.
I don't know if this is the correct place to post this query but here goes.
I'm looking for some specific resources and each one I try on the tutorials is giving me 404 from the link.
These are links to tutorials about graphing on parent windows and canvases etc. And the ones about FFT's.
I'm getting through Miller's chapters on Fourier Analysis but I'm looking for patches and papers on Wavelets and FFTs for analysis and resynthesis. And somewhere I can get a handle on getting my arrays on the main window of my patch.
Thanks.
sr.
ps I've found [sigmund~] and [fiddle~], so regarding those things I'm really looking for papers and applications etc.

apologies for being such an idiot and insulting your forum with my lowliness :)
I've been having lots of fun with puredata and want to start incorporating some images and possibly video into my stuff.
I have Pd extended - which apparently has GEM included, although I can find no trace of it.
I've downloaded GEM for mac, but the instructions for installing it don't match what actually appears once it's downloaded. The instructions:
"GEM under Mac OSX
* Download GEM for MacOSX
* Unpack"
All this is fine
"* copy folder content to your pd installation, under /gem
* copy or move Gem.PD_Darwin to PDs extra folder
* start with additional parameter -lib gem"
These are not fine for me..I don't appear to have a pd installation folder...just the app.
I also don't seem to have anything called Gem.PD_Darwin
I've tried about a million variations on the correct link in the startup menu of pd to load up GEM but it never works.
I'm a bit exasperated now...and desperate enough to show myself up as a muppet in front of you guys....pleeease can anyone help me out here?
btw I've been lurking for months and picked up many many wonderful tips and bits of advice on here, for which I thank you all.

Is there a simple way to repeatedly add an integer to another integer? I want to output something like 2, 4, 6, 8, etc. Specifically, each time a sub receives a bang I want it to output the next number in the sequence.
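The pattern being asked for is an accumulator: keep a running total, add the increment on every bang, output the new total. In Python terms:

```python
class Counter:
    """Add a fixed increment on every bang and output the running total."""

    def __init__(self, increment):
        self.increment = increment
        self.total = 0

    def bang(self):
        self.total += self.increment
        return self.total

c = Counter(2)
assert [c.bang() for _ in range(4)] == [2, 4, 6, 8]
```

The classic patch equivalent is a float store and a [+ 2] with the adder's outlet fed back into the store's right (cold) inlet, so each bang outputs the current value and silently stores the next one.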
Thanks in advance

Hi folks, first post here. I am new to pd, coming from the Max/MSP camp.
Setup: Pd-0.38-3 on Mac OS X 10.4.1
Question 1: I want to channel output to multiple MIDI devices (I have two ports specified in my IAC Driver in Audio MIDI Setup). In Pd > Preferences > MIDI settings... I can see these as two devices and select "use multiple devices" to enable them both, although selecting "Apply" seems to last only for the current session (upon next launch Pd forgets these settings). THE QUESTION: Can I use a parameter in midiout to target just one of these output devices? I seem to only be able to access the first device.
Question 2: The help patch for MIDI objects displays a couple of warnings in the output when I open it, should I worry about this?
error: midiin: works under Linux only
error: sysexin: works under Linux only
Thanks a heap.

Hi,
I'm just getting started with Pd on Fedora Core 3; installation was pretty painless thanks to the CCRMA repositories, but I've run into an issue when I first tried to make some sounds with Pd. I recreated Miller Puckette's example of a [constant amplitude scaler][0]. When I start the audio, the signal seems to take a few seconds to 'settle' into a constant frequency; I get a few seconds of static that eventually turns into the sine wave I was expecting. When I click the DIO errors button, I get a bunch of A/D/A sync errors, as follows:
audio I/O error history:
seconds ago error type
0.73 A/D/A sync
0.73 A/D/A sync
0.75 A/D/A sync
0.94 A/D/A sync
0.94 A/D/A sync
0.95 A/D/A sync
1.01 A/D/A sync
1.01 A/D/A sync
1.10 A/D/A sync
1.10 A/D/A sync
1.11 A/D/A sync
1.11 A/D/A sync
1.17 A/D/A sync
1.17 A/D/A sync
1.17 A/D/A sync
1.17 A/D/A sync
1.27 A/D/A sync
1.31 A/D/A sync
1.31 A/D/A sync
1.36 A/D/A sync
If I let the steady sine wave tone continue, eventually there's a bit of a 'hiccup' in the sound, and then I get the following errors:
audio I/O error history:
seconds ago error type
1.25 DAC blocked
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
1.25 data late
Can anyone help me out in diagnosing this problem and fixing it? Where should I start looking to fix this?
Thanks!
[0]: http://www.crca.ucsd.edu/~msp/techniques/latest/book-html/node16.html

was just wondering what the minimum PC specs are for pd.
i have two old ibm thinkpads. one is a p100 with 40 megs of ram and the other is a p150 with 45 megs. i tried to run it once on the p100 under windows 95, but it had problems at start-up, was unreasonably slow, and said it couldn't find the sound card. would linux be any better? should i give up? i also have grandiose ideas about networking them somehow, but that's probably pretty far down the line...

hi, i have no experience with PD and need a help how to:
generate sound using a specific equation in realtime. how would you do that in pd?
i want to be able to change several parameters of the equation by mouse - in realtime. For example: the equation for calculating the amplitude of a sample might be something like this:
a = sin(x) / sin (y)
where x and y are the mouse coordinates.
I know there are objects like sin~ or cos~ in pd for creating oscillators, but i need to calculate the amplitude using more complicated expressions.
i suppose i need to calculate a number of samples into a buffer and then play the buffer while calculating new samples, but i don't know how to do that in pd.
if you have an idea how to patch something like i've described, please give me a hint.
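The buffer-at-a-time scheme described is, incidentally, how Pd's DSP already works internally (audio is computed in small blocks, 64 samples by default), and [expr~] lets you type a per-sample expression directly. The arithmetic itself, sketched in Python with hypothetical mouse coordinates:

```python
import math

def fill_block(x, y, block_size=64):
    """Compute one block of samples from a = sin(x) / sin(y).

    x and y stand in for the mouse coordinates; they are read once per
    block, so moving the mouse changes the next block, not the current one.
    A tiny guard avoids dividing by zero when sin(y) crosses zero.
    """
    denom = math.sin(y)
    if abs(denom) < 1e-9:
        denom = 1e-9
    a = math.sin(x) / denom
    return [a] * block_size  # constant within one block in this toy version

block = fill_block(x=1.0, y=2.0)
assert len(block) == 64
```

One caveat worth noting in any real version: sin(x)/sin(y) is unbounded near sin(y) = 0, so the result needs clamping into [-1, 1] before it reaches the output.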
thanks a lot.

I downloaded the PD version pd-0.37.4.dmg from the URL [http://puredata.info/Members/hans/downloads/installers/][0] but after dragging the Pd.app icon of the image file into my applications folder, it just opens the terminal when clicking the icon. I'm running Mac OSX 10.3.5. Has anybody out there an idea how to solve this problem, or knows another version that works well under OSX?
greetz PidAid
[0]: http://puredata.info/Members/hans/downloads/installers/


Hello,
is there a vst~ external available for pd + linux?
This plugin is the opposite (it converts pd patches to vst, or am I wrong?)
[http://crca.ucsd.edu/~jsarlo/pdvst/][0]
I found this page
[http://puredata.info/community/pdwiki/Vst][1]
That redirects to:
in CVS @ /externals/grill/vst
I downloaded grill/vst but it says that I have to compile flext.
Sooooo, I compiled flext successfully (not so easy, I am not a programmer)
[http://grrrr.org/ext/flext/][2]
Now - almost done - I have to compile somehow grill/vst but I don't understand how.
Anyone?
[0]: http://crca.ucsd.edu/~jsarlo/pdvst/
[1]: http://puredata.info/community/pdwiki/Vst
[2]: http://grrrr.org/ext/flext/

Please bear with me as i try to explain...
I want to send bangs to different places if two conditions are true... how would i implement this?
e.g 2 number boxes next to each other
if the first number is 20 and the second number is between 20 and 30,
it will send a bang or let these numbers through.
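Reduced to plain logic, the test is an equality check ANDed with a range check; sketched in Python (boundaries assumed inclusive, since the post doesn't say):

```python
def gate(first, second):
    """True (send the bang) only when first == 20 and second is in [20, 30]."""
    return first == 20 and 20 <= second <= 30

assert gate(20, 25) is True    # both conditions met: bang
assert gate(20, 35) is False   # second out of range: no bang
assert gate(19, 25) is False   # first wrong: no bang
```

In a patch, one common construction is [== 20] (or [select 20]) for the first number, two range comparisons for the second, and the results combined with [&&] driving the bang.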
i will appreciate your help dearly

Hey guys, I got a question about unpacking serial data from a 6-degree-of-freedom sensor using the 'comport' object.
The data received from the sensor should be in a simple text format ( float float float float float float \n ), which needs further parsing. But it seems that I cannot get any string from 'comport', only ascii values (it is not converted into ascii when sent out from the arduino). Is it possible to get the data in the way it is sent (the text format)? Or how can I turn the ascii values back into that format? Would be very thankful for all kinds of help and advice :smile:
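Since [comport] delivers the raw bytes one at a time, the usual move is to accumulate bytes until the newline terminator, then decode the ASCII text and split it into six floats. The reassembly step, sketched in Python (the sample line is invented):

```python
def parse_lines(byte_stream):
    """Collect serial bytes into newline-terminated lines, then parse floats."""
    buffer = []
    for b in byte_stream:
        if b == 10:  # ASCII '\n' terminates one complete reading
            line = bytes(buffer).decode("ascii")
            yield [float(tok) for tok in line.split()]
            buffer = []
        else:
            buffer.append(b)

data = list(b"1.0 2.5 -0.5 3.0 0.0 9.9\n")   # bytes as they arrive, one by one
readings = list(parse_lines(data))
assert readings == [[1.0, 2.5, -0.5, 3.0, 0.0, 9.9]]
```

In patch terms, that is the same idea as gating the byte stream on value 10 ([select 10]) while accumulating everything else into a list, then converting the accumulated ASCII codes back into numbers.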
cheers!

Hi,
I have never used Reaktor so I can't say anything about it except for what I heard, but as far as I know it is not in the same category as Pd or Max. My understanding is that Pd and Max are programming languages that allow you to create and manipulate sounds and video in a much more fundamental way (and using some interesting objects and libraries, you can do much more than music and video) and that Reaktor has more pre-made stuff on it (which may be great for beginners) but it is not as flexible. But I repeat, I have never used it before, so what I am writing may be complete rubbish ;)
As for Max, I am one of the few people who made the path from Max to Pd (typically people start with Pd since it's free and get seduced by Max later on). In my view, Max/MSP beats Pure Data in the following categories:
- Max has a larger community, which means more externals, more libraries, etc. Some of the libraries for Max are simply fantastic, and it is a pity we can't use them in Pd (see for instance this library here, how I wish we had something that powerful in Pd: http://www.bachproject.net/examples )
- Max has better documentation, although there are nowadays great sources on how to learn Pd.
- More active development than Pd.
- overall, Max is more polished than Pd. The GUI objects have more options and look prettier (there are also more GUI objects in Max, as far as I know). The patch cords in Max are not simple straight lines but smooth curves (although, if you use Linux, you can use Pd-l2Ork - a fantastic distribution of Pd - to get prettier patch cords). But in Max you can route the patch cords around, while in Pd you can't (whether or not we should implement this in Pd is subject to an eternal argument in our community, and I don't want to step in there... except to say that yes, we should do it! :) )
- some other features that I miss: infinite undos and less buggy typing inside an object box (try hitting the Home key after typing the name of an object in Pd: the cursor moves to the beginning, but as soon as you start typing you are back at the end of the word). By the way, Pd-l2Ork solves these two problems as well.
Now why did I migrate to Pure Data? Here are all the reasons behind it:
- Pure Data runs natively on Linux, and I can't be bothered to boot Windows any more (actually, the only version of Max you can run on Linux via WINE is Max Runtime, which lets you load and play patches but not edit them. To edit, you'd need an iLok - a USB device that unlocks the software - and iLok does not support Linux at all; their drivers are made only for Mac and Windows).
- Pure Data is free as in freedom, and I firmly believe that open source is the way to go!
- Pure Data is free as in beer! You can install 10 copies of it on every computer you, your friends, your school and your neighbourhood have. Max costs an absurd amount of money, and if you ever lose that iLok you've lost $399. I can't really come to terms with carrying around a little piece of plastic and cheap electronics that's worth that much...
- did I mention iLok already? Using it is part of an outrageous 90s mentality of locking down your software. Look at REAPER, for instance: a fantastic DAW that charges a fair $60 and has no locking system (that is, the demo is the same as the paid version and lasts forever; it's up to you to pay them after the 2-month free trial).
- The community of Pd is nicer ;) Seriously, it seems to me that people dealing with open source software (the other one I use everyday is LilyPond) are really passionate about these programs and subjects, and will go through hell just to help you (you don't even have to answer a "thank you" back!).
- Pure Data is a better learning tool, in my opinion. In Max you have so many pre-made objects that you can get overwhelmed quickly, and spoiled as well. It is very nice to create your own basic abstractions. Here is an example: my professor was very impressed that I created a 2D virtual space in Pd myself, when actually the logic and math behind it were pretty simple. But in Max he uses an external library, so he never really had to think about how to pan sound manually (i.e., how the volume of one channel changes in relation to the other), or how to calculate distances, angles, etc.
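As an aside, the "manual" math mentioned here - constant-power panning plus listener geometry - is simple enough to sketch. This is an illustrative Python sketch under assumed names; the specific pan law is an assumption, and in Pd it would live in an [expr] or a small abstraction:

```python
import math

# Hedged sketch of DIY spatialisation math: a constant-power stereo pan law
# plus listener distance/angle. Names and the pan law are illustrative.
def pan_gains(pos):
    """pos in [0, 1]: 0 = hard left, 1 = hard right (constant-power law)."""
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

def distance_angle(listener, source):
    """Euclidean distance and bearing from listener to source in 2D."""
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)
```

With a constant-power law the two gains always satisfy left² + right² = 1, so perceived loudness stays steady as the source moves across the stereo field.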
These are all my personal opinions, but I hope you'll find something interesting here.
All the best and good luck with Pd,
Gilberto

I'm not really sure what you mean by "less professional." If you're saying it might be less suited for professional use, I would have to disagree. I have actually found Pd-extended to be more stable than Max 5\. The differences in sound quality are generally negligible, if at all noticeable. And there are few things that one can do that the other can't (and they \*both\* have their advantages over the other). They are both great pieces of software, and you can probably get them both to do what you need them to do. Max is a commercial product, but that doesn't necessarily make it better suited for professional use. It just makes it commercial software.
The only thing that I've found that Max hands-down does better than Pd is the gui. But, really, it's not that big of a deal, because the gui offerings in Pd give you what you need to function. Most of the stuff in Max is eye candy that you end up wasting a lot of time working on. There are very few instances where the gui is actually very important to the functionality of the patch.
Max also has ReWire capabilities (one of the advantages of being commercial), but they are so buggy and the implementation is so crappy that it is hardly usable. David Zicarelli himself even said, in so many words, that ReWire sucks.
As far as famous artists go, keep in mind that Max has a longer history than Pd and so has a bit of a foothold in this area. And, more importantly, it is a commercial product. You're likely to find a list of famous people using just about any commercially successful product, because the advertising department knows it will get people to buy it. They find out if an artist has used it, even if only for a small part of one track, and then they go around saying, "Hey, Aphex Twin uses Max. You should buy it if you want to be like him."
As someone who has both programs, I can say this: Max/MSP/Jitter is not $800 better than Pd-extended. It's not even $250 better (which is the student discount price). And if you really must go for Max, the transition from Pd is not hard at all. I would at least say play with Pd for a while and decide if it's a paradigm that you really enjoy. If you find that you need the extra goodies that Max has and Pd doesn't, then download the demo and see if it's worth it.

i don't believe it,
I found it, it wasn't as far away as last night(!):
[http://puredata.hurleur.com/sujet-487-newbie-general-question-max-x-linux][0]
[0]: http://puredata.hurleur.com/sujet-487-newbie-general-question-max-x-linux

@anechoic said:
> I'm not famous but my new CD will contain many uses of Pd
>
> also check out the article I wrote on my switch from OS X -\> Linux
>
> [http://createdigitalmusic.com/2009/08/04/linux-music-workflow-switching-from-mac-os-x-to-ubuntu-with-kim-cascone/][0]
Great! I have Linux/Windows/OSX installed. I discovered GNU/Linux 8 years ago. If your favorite software is on Linux, use Linux!
Kim, the other day I discovered your work while searching for MAX/MSP music. Great to see you here. I'm new to MAX/Pd; as a MAX user, what do you think of Pd?
[0]: http://createdigitalmusic.com/2009/08/04/linux-music-workflow-switching-from-mac-os-x-to-ubuntu-with-kim-cascone/

Puredata is free open source software, Max costs money.
Puredata is more flexible and extensible than Max because it is maintained and
supported by a community who use it.
There are more freely available examples, patches, tutorials and documentation for Puredata than Max.
Max data can be imported into Puredata with Cyclone, but afaik the converse is not possible.
In many cases Max MSP has prettier GUI components, but depending on your point of view these can also be seen as fluff and cruft.
I consider Pd the "grown-ups' version of Max".
Since I don't use Max because I can't afford it (Cycling74 declined to offer me a complimentary copy in return for writing them tutorials), there may be advantages to Max I have no idea about.
The downside is typical of most free software: installation and configuration are more difficult.
So basically, if you are prepared to do a little thinking and don't need shrink-wrapped spoon-feeding, it does more than Max, better, for free. :)

Hi folks!
I was creating a metronome GUI today and gave myself a bit of a conceptual problem that I am struggling with. In my patch I have a knob that controls tempo, and a number box that displays BPM. Both the [knob] and the [nbx] send the tempo data to my metro patch (in case I want to select tempo with the knob or just type it in).
**My problem:** what if I want the changes made in one to be reflected in the other? To make my changes from the [knob] reflect in the [nbx], I just connect the outlet of the [knob] to the [nbx] inlet (DUH!). But what if I also want any changes made in the [nbx] to be reflected in the [knob] as well? Connecting the outlet of [nbx] back to [knob] just produces a stack overflow.
I suspect I need some kind of mechanism that outputs a bang only when the [nbx] changes, connected to a [spigot], so that when the [nbx] changes it updates the [knob] while the knob's output is momentarily silenced, and vice versa. So far I'm stumped!
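A common way out is to mirror the value into the other widget through a "set"-style update that changes its display without making it re-send its value - in Pd, sending a [set $1( message into the GUI object's inlet does exactly this and breaks the feedback loop. Sketched below in Python purely as an illustration of the idea (names are invented):

```python
# Illustrative Python sketch (not Pd syntax): when one control changes, it
# updates its partner's *display* without making the partner re-send.
# In Pd, a [set $1( message into the GUI object is the display-only update.
class Control:
    def __init__(self):
        self.value = 0
        self.partner = None

    def user_input(self, v):          # user moved this control
        self.value = v
        if self.partner:
            self.partner.display(v)   # mirror, but do not echo back

    def display(self, v):             # "set"-style update: store the value,
        self.value = v                # send nothing downstream

knob, nbx = Control(), Control()
knob.partner, nbx.partner = nbx, knob
nbx.user_input(120)   # typing 120 in the number box also moves the knob
```

Because display() never propagates further, the update chain always terminates after one hop, so there is no stack overflow in either direction.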

[http://eyebeam.org/events/rjdj-skillshare][0]
December 5, 2009
12:00 -- 1:30 PM : Introductory workshop on Pd with Hans-Christoph Steiner
2:00 -- 6:00 PM : SkillShare w/Steiner and members of RjDj programming team
Free, capacity for up to 30 participants
RSVP HERE: [http://tinyurl.com/ykaq3l3][1]
Hans-Christoph Steiner returns to Eyebeam with members of the RjDj programming team from Europe to help turn your iPhone or iPod-Touch into a programmable, generative, and interactive sound-processor! Create a variable echo, whose timing varies according to the phone's tilt-sensor or an audio synthesizer that responds to your gestures, accelerations and touches. Abuse the extensive sound capabilities of the Pure Data programming language to blend generative music, audio analysis, and synthy goodness. If you're familiar with the awesome RjDj, then you already know the possibilities of Pure Data on the iPhone or iPod Touch (2nd and 3rd generation Touch only).
Creating and uploading your own sound-processing and sound-generating patches can be as easy as copying a text file to your device! In this 4-hour hands-on SkillShare, interactive sound whiz and Pure Data developer Hans-Christoph Steiner and several of the original RjDj programers will lead you through all the steps necessary to turn your phone into a pocket synth.
How Eyebeam SkillShares work
Eyebeam's SkillShares are Peer-to-Peer working/learning sessions that provide an informal context to develop new skills alongside leading developers and artists. They are for all levels and start with an introduction and overview of the topic, after which participants with similar projects or skill levels break off into small groups to work on their project while getting feedback and additional instruction and ideas from their group. It's a great way to level-up your skills and meet like-minded people. This SkillShare is especially well-suited for electronic musicians and other people who have experience programming sound. Some knowledge of sound analysis and synthesis techniques will go a long way.
We'll also take a lunch break in the afternoon including a special informal meeting about how to jailbreak your iPhone!
Your Skill Level
All levels of skill are OK as long as you have done something with Pd or Max/MSP before. If you consider yourself a beginner, it would help a lot to run through the Pd audio tutorials before attending.
NOTE: On the day of the SkillShare we will hold an introductory workshop from 12:00 until 1:30 PM, led by Steiner, for those who want to make sure they're up to speed before the actual SkillShare starts at 2:00\. The introductory workshop is for people who have done something in Pd or Max/MSP but are still relative beginners in the area of electronic sound programming.
What You Should Bring
You'll need to bring your iPhone or iPod Touch (2nd or 3rd generation Touch only), your own laptop, a headset with a built-in mic (especially if using an iPod Touch) and the data cable you use to connect your device to your laptop. Owing to a terrific hack, you won't even need an Apple Developer License for your device!
More Information
RjDj is an augmented reality app that uses the power of the new generation personal music players like iPhone and iPod Touch to create mind blowing hearing sensations. The RjDj app makes a number of downloadable scenes from different artists available as well as the opportunity to make your own and share them with other users. RjDj.me
Pd (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing. Pd is free software, and works on multiple platforms, and therefore is quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and MacOS X running on anything from a PocketPC to an old Mac to a brand new PC. Recent developments include a system of abstractions for building performance environments, and a library of objects for physical modeling for sound synthesis.
----------------------------------------------------------------------------
kill your television
[0]: http://eyebeam.org/events/rjdj-skillshare
[1]: http://tinyurl.com/ykaq3l3

I switched to PD 3 years ago from MaxMSP and never looked back. The two programs seem to go through periods of convergence and divergence: an innovation in one will often appear ported to its neighbor in a relatively short time. Generally I suppose Max is a bit more juicy graphically, which can be attractive to the Mac user. However, PD really comes into its own when combined with Linux, as I have recently discovered:
As a recent convert to Linux I first tried the live-CD distros like dyne:bolic and DeMuDi from the AGNULA project, which I found a little clunky. After further searching and testing I've settled on a hard-drive installation of Puppy Linux, compiling PD for this OS. At only 70MB or so, Puppy is tiny, and this frees the computer up for all those complicated patches; I even have PD running relatively well on a Windows-95-spec computer. The other awesome Linux experience I've had is PdPod for iPodLinux: portable PD-programmed instruments in my pocket!
Long story short, I teetered on the brink of the same decision and PD+Linux has served me very well.

_I see there is a pop-up menu in the file open dialog that lets you select Max files instead of Pd files. I try this and loading the patch produces no errors, but neither does it open a window (it displays nothing at all). _
I've never used cyclone, but...
Quoting from the Pd list:
_\*File-\>Open is obsolete\* for loading max patches. Instead, create
a \[cyclone\] object and click on it. It should load max-text,
max-binary, and even the max-old format (which max itself does not
load anymore...) _
Another message suggests that you can switch cyclone between normal mode and Max compatibility mode using messages, although I didn't find out how.

I've recently taken "the tour" to Mathematics Stack Exchange. Initially, I thought it was going to be some sort of light-hearted pics plus some words about the site. Fortunately, it was much better than that.
"The tour" is an interesting way of inviting you to read the rules. I think these rules also act as a reminder, a reminder of the important stuff.
You can, of course, take the "tour" by yourself, but these are some of the things you can read there.
- Ask questions, get answers, no distractions.
- This site is all about getting answers. It's not a discussion forum. There's no chit-chat.
- Get answers to practical, detailed questions.
- Focus on questions about an actual problem you have faced.
- Not all questions work well in our format. Avoid questions that are primarily opinion-based, or that are likely to generate discussion rather than answers. Questions that need improvement may be closed until someone fixes them.
http://math.stackexchange.com/tour
As @dangrondang (F.) said, hope this helps.
Cordially, Landon

Hi there. My little problem is that I can't choose between buying Max/MSP or diving into Pd for free. So I started to look for comparisons between the two.
As I can see on the Wikipedia page for Max [http://en.wikipedia.org/wiki/Max\_(software)][0], there is a long list of well-known artists (like Aphex Twin, Tim Hecker) who use it. As for Pd, there is absolutely no information on the web about famous musicians who work with this software..
So my question is: do you know any electronic/electroacoustic artists who use Pd?
Or is Pd generally much less professional than Max?
From what I have tried I can say that the sound quality is identical; it's just that Max's interface is far more advanced..
Maybe Pd is better suited for game music and for creating things like the Reactable?
Thanks for any answers.
[0]: http://en.wikipedia.org/wiki/Max_(software)

hi guys! so i've had an oxygen8 for a few years now and i see them everywhere, so i'm sharing this handy little patch i made.
basically you're stuck with 8 knobs, and two sliders (modwheel and data entry). all this patch really does is takes the keyboard's input, numbers those 8 knobs and 2 sliders in sets of 10, and lets you switch between those sets with the hradio for up to 120 different controller values. the key input is passed straight through so even when you switch between various controls you can play the keyboard consistently.
even if you don't have an oxygen8, this patch will give you a little self-contained set of sliders that you can use as a midi controller... so it's still useful for when you're not at home with your keyboard, or if you don't even have one.
basically all this patch does is take those 10 controls and lets you switch between 12 sets of them. it's useful for me in ableton so when i need to map more parameters than i have knobs for, i can assign more, and the numbering system is much easier to stay on top of than the default control values for those knobs (it's like 17, 80, 74, no consistency it seems).
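The numbering scheme described boils down to simple bank arithmetic; a hypothetical sketch (function and parameter names are invented for illustration):

```python
# Sketch of the scheme described: 10 physical controls (8 knobs + 2 sliders),
# and the hradio selects one of 12 banks, giving controller numbers 0-119.
def controller_number(bank, control):
    """bank 0-11 from the hradio, control 0-9 within the bank."""
    return bank * 10 + control
```

So bank 3, knob 7 maps to controller 37, which is much easier to keep track of than the keyboard's factory-assigned numbers.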
on linux you should be able to jack the keyboard to pd's midi in, then jack the output to wherever you want. i'm currently on windows and i select usb keyboard in for input, and loopbe for output.
the numbers do nothing but change when you switch the hradio - the sliders are the corresponding controls (with the mod wheel as slider 9 and the data entry knob as slider 10).
come to think of it i don't think i tested the pitch bend wheel, i've been using this patch almost entirely for parameter controlling and not playing the oxygen8 notes at all. \[notein\] is patched directly into \[noteout\]
any questions/comments/ideas please, post them. this is a real quick patch i put together that worked almost better than i wanted it to, but it could be expanded upon a lot. i was going to add symbols so you could tag/name all 120 controls but i was having trouble figuring out a way to store them and recall them, and send/receive to the symbols... so i just scrapped that.
basically all i do is make a tiny pd window and make \[SCET\], and just have that sitting at the bottom of the screen under my DAW (in this case ableton).
i haven't run into any conflicts yet for the most part but it's possible the controller numbering system might conflict with certain apps/synths/etc.
cheers guys!
[http://www.pdpatchrepo.info/hurleur/SCET.pd][0]
[0]: http://www.pdpatchrepo.info/hurleur/SCET.pd

I currently use a Macbook (OS 10.5) for all my PD patching. However, as I am primarily focused on using GEM for live video performance alongside musical groups, I am thinking about getting a second laptop for performances (to keep my Macbook safe).
I'm thinking of getting an Acer Aspire One netbook running linux.
I'd like to know the pros and cons of this.
-Will swapping patches between Mac OS and Linux be a problem (I'm guessing no, but I figured I'd ask)?
-I've heard of some problems with VGA out on Linux laptops, is this going to be an issue?
-Does the netbook have enough processing power for general GEM applications? I'm generally not dealing with video files, but rather particle generation, shape manipulation, GIF texturing, and audio-response.
-Are there any other issues that you think of given this scenario? (and if so, what other affordable/really-cheap laptops are there out there that I can run linux on)

If your intention is to use Windows to develop and Linux to perform, your patches should be portable between Windows and Linux. In fact, going from Windows to Linux should be easier, since the latter has certain Pd features the former doesn't.
But watch out for case-insensitive filesystems. On Windows and Mac the filesystem is case-preserving but not case-sensitive, while Linux mostly has case-sensitive filesystems. For Pd this means that under Linux you can have two distinct patches, e.g. Not.pd and not.pd, while on Windows this would not be allowed. Porting from Windows to Linux, this example shouldn't be a problem, but you might have an abstraction saved as not.pd and used as "NOT" in another patch. Under Windows this will work, since "NOT" will be matched to not.pd, but under Linux it won't, since Pd will be looking for NOT.pd.
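If you want to check a folder of patches for such clashes before porting, a small script along these lines would do (illustrative Python, independent of Pd):

```python
from collections import defaultdict

# Group .pd filenames case-insensitively and report any sets of names that
# would collide on a case-insensitive filesystem (e.g. Windows or macOS).
def case_collisions(filenames):
    groups = defaultdict(list)
    for name in filenames:
        groups[name.lower()].append(name)
    return [names for names in groups.values() if len(names) > 1]

print(case_collisions(["Not.pd", "not.pd", "osc.pd"]))  # [['Not.pd', 'not.pd']]
```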

Hmm, the project sounds interesting, a melody for each personality.
You need to take your time over this one. If your tutor thinks this is something that can be done in a few days, I'd say that's a terrible underestimate. Did you leave it to the deadline... tsk! :)
What you need is a melody generator with a number of parameters. You need fewer parameters than there are variables in your test results (to do it simply) and some way of reducing the test results to that smaller set of parameters.
First convert the test results to numeric values.
Then apply these to a mapping that converts the scores to parameters making music you think suits each personality.
Create a few formulas, one for each generator parameter that maps the test results onto a factor for each generation control.
For example;
Do you wear black and think the Cure are the best band eva (Y/N)? Y
We will
a) Rock you \[0\]
b) Overcome \[0\]
c) Stay in tonight, because I haven't got a stitch to wear \[X\]
The Police are
a) Doing their best to balance law and order with the liberal values of a post-industrial society \[0\]
b) Fascist tools of an oppressive regime \[0\]
c) The best rock/pop trio of the 1980s \[X\]
maps onto
Introvert-extrovert : Shoegazing emo depressiveness 6
Motivation: Rocking Godhead attitude 1
Individuality- conformity : Sheep factor 5
Humour/lightness : Slack factor 6
which maps onto
Tempo 110
Scale - 70% minor 30% major
Change factor 8
Liveliness factor 4
Because the problem is complex and involves data collection, scaling, mapping and generation, my advice would be to keep everything VERY simple: melodies with a choice of four or five note parameters, in simple patterns, and keep the number of questions to 8 or fewer.
Start by building a melody generator that has something like the following properties
1) tempo
2) scale division
3) liveliness (change magnitude)
4) Density (rests vs notes)
Then come back with that to show us and a list of your questions and explanation of how you interpret the scores.
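The scores-to-parameters mapping suggested above might be sketched like this. Every name, range and weighting here is invented purely for illustration; the example scores are the ones from the mock questionnaire:

```python
# Rough sketch of the mapping step: questionnaire trait scores (0-10) are
# scaled into melody-generator parameters. All names/ranges are invented.
def scores_to_params(scores):
    return {
        "tempo": 100 + 10 * scores["rocking"],         # more attitude -> faster
        "minor_fraction": scores["depressiveness"] / 10.0,
        "change_factor": scores["slack"],
        "density": 10 - scores["sheep"],               # conformity -> sparser
    }

# The mock scores from the questionnaire example above:
params = scores_to_params(
    {"depressiveness": 6, "rocking": 1, "sheep": 5, "slack": 6})
```

The point is only that each generator control is a small formula over a few scores; tune the formulas by ear until each personality type sounds right.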

1) To cross-synthesise two voices you must ensure that the two speakers make exactly the same utterances, phonetically aligned. This is hard, as I can tell you from experience of recording many voice artists. Even the same person will not speak a phrase the same way twice.
<< This is not possible in my experiment, as I am supposed to morph the actual conversation, so it is up to the subjects whatever they want to speak. There is some work by Oytun Türk and Levent M. Arslan, who conducted an experiment in a passive environment (not in real time). \>\>
2) The result is not a "timbral morph" between the two speakers. The human voice is very complex. Most likely the experiment will be invalidated by distracting artifacts.
Here's some suggestions.
1) Don't "morph" the voices, simply crossfade/mix them.
<< yes, I also want to do this, crossfading and mixing, as I just want to create an illusion so that the listener starts wondering whether it is B's voice or A's voice \>\>
2) For repeatable results (essential to an experiment) a real-time solution is probably no good. Real time processing is very sensitive to initial conditions. I would prepare all the material beforehand and carefully screen it to make sure each set of subjects hears exactly the same signals.
Question :
Is a vocoder alone sufficient to morph/mix/crossfade between two voices?
Or should I also add a pitch-shifting module to the vocoder to get more qualitative results?
Question 2:
I already tried this vocoder example, but could not change it according to my requirements. In my requirements, I have a target voice (the target voice is phonetically rich); the source speaker speaks (whatever he wants to speak) and the source voice is changed into the target voice (illusion/crossfade/mix).
The (changetimbre1.pd) file that I attached first gives you an idea of what kind of operational interface I am looking for.
Question 3:
what should be the ideal length of the target wave file?
Before the start of experiment, I would collect the voice sample of all participants.
I am highly obliged for your earlier help and am looking for more (greedy). Meanwhile I will once again study this vocoder example to change it according to my requirements (though I doubt I can change it).
Thanks.

I like that track, kept me interested and made me smile, and when it finished I wanted to listen to it again - I've listened to it about 5 times this evening. Good work! :-)
I've been trying to make similar sounds. I'm working on doing more live stuff with Pd, it's mighty fun, I can spend hours twiddling with midi knobs making freaky noises. I think I spend too much time fiddling with the knobs and not enough time enhancing my patches - I've not got much to show for the last six months of Pd-ing.
At first I was mapping each midi knob to a single control of the patch, but it was poorly thought out - in one patch I have 4 breakbeats, and I had 4 knobs mapped to the bpm control of each beat - "bpm 1", "bpm 2", "bpm 3", "bpm 4" - and it was a nightmare trying to get anything to sound good with it. Now I am trying to have more useful controls - "master bpm", "second pair/first pair bpm ratio", "pair 1 bpm spread", "pair 2 bpm spread". Still having 4 knobs to control the 4 bpms, but in a more musically useful way. Like a mathematical change of basis or change of coordinate system.
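That "change of basis" can be sketched concretely (illustrative Python; the parameter names and the exact spread formula are assumptions):

```python
# Four bpm values derived from more musical controls: a master bpm, a ratio
# between the two pairs, and a spread within each pair. Names illustrative.
def four_bpms(master, pair_ratio, spread1, spread2):
    pair1, pair2 = master, master * pair_ratio
    return [pair1 - spread1 / 2, pair1 + spread1 / 2,
            pair2 - spread2 / 2, pair2 + spread2 / 2]

print(four_bpms(120, 1.5, 4, 8))  # [118.0, 122.0, 176.0, 184.0]
```

Turning one knob (master) now moves all four tempos coherently, instead of having to chase four independent bpm knobs.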
Another way I am trying to make live performance easier is using algorithmic processes - instead of controlling every beat I control aspects of a process that generates the beats - instead of being the drummer and the bassist and whatever else I am more of a conductor or director, controlling "jitteryness" or "density" or whatever. These processes can have a random part, so the live performance takes on a new element of reacting to the unpredictable output.

Hi all,
Just got this via the pd mailing list, and it looks very interesting.
Best,
Gilberto
* * *
__Linux Audio Conference 2015 - Call for Participation__
(Due to exceptional circumstances, this announcement comes a bit late,
so please note the early deadline of Feb 1st for submissions. We
apologize.)
We are happy to announce the next issue of the Linux Audio Conference
(LAC), April 9-12, 2015 @ JGU | Johannes Gutenberg University, in
Mainz, Germany.
http://lac.linuxaudio.org/2015/
The Linux Audio Conference is an international conference that brings
together musicians, sound artists, software developers and researchers,
working with Linux as an open, stable, professional platform for audio
and media research and music production. LAC includes paper sessions,
workshops, and a diverse program of electronic music.
*Call for Papers, Workshops, Music and Installations*
We invite submissions of papers addressing all areas of audio processing
and media creation based on Linux and other open source software. Papers
can focus on technical, artistic and scientific issues and should target
developers or users. In our call for music, we are looking for works
that have been produced or composed entirely/mostly using Linux and
other open source music software.
The online submission of papers, workshops, music and installations is
now open at http://lac.linuxaudio.org/2015/participation
The deadline for all submissions is Feb 1st, 2015 (23:59 HAST).
You are invited to register for participation on our conference website.
There you will find up-to-date instructions, as well as important
information about dates, travel, lodging, and so on.
This year's conference is hosted by the Computer Music Research Group
(Bereich Musikinformatik) at the IKM (Institut für Kunstgeschichte und
Musikwissenschaft) of the Johannes Gutenberg University (JGU) at
Mainz. Founded in 1991, our research group has been among the
first German academic institutions in this interdisciplinary field at
the intersection of music, mathematics, computer science and media
technology. In our media lab students are working almost exclusively
with Linux, and in our research we are also devoted to contributing to
the growing body of open source audio and computer music software.
http://www.musikwissenschaft.uni-mainz.de/Musikinformatik/
We look forward to your submissions and hope to meet you in Mainz in
April!
Sincerely,
The LAC 2015 Organizing Team

Hi
Here is my offering of a drum machine. There are various abstractions that use the oddities of PD's mouse-pointer focus, and I assume they work on other computers. Please read the manual below before running, particularly if you have issues with automated directory creation on your system.
Any questions welcome
Have fun
Dave Adams (Balwyn)
a-drum-kit comprises a clock with 16 start and stop points, four 64-column velocity patterns and four programmable drum modules.
The clock controls, from the top left, are: bang for reset, indicator for an external OSC clock pulse on port 9000, on/off button, repeat on/off button (default on), bang to save clock settings, bang to load clock settings (*.clk), tempo knob, start point knob, current point knob (read only), end point knob, and repeat start and end sliders with a return-to-start LED.
The four pattern modules have 64 x 16 level velocity settings driven in sync from the clock, each module outputs to the adjacent drum module.
The four drum modules are identical. They have ADSR, level, pan, vcf & Q settings for both osc and noise, plus frequency tuning for the osc. There is an effect and duration for both, which ramps the freq of osc and the vcf of the noise up or down from the original point over time.
THE YELLOW SAVE BOX -- THIS IS TACKY BUT IT WORKS: IF YOU PLACE THE CURSOR AT THE END OF THE TEXT AND PRESS ENTER, A DIRECTORY OF THAT NAME WILL BE CREATED IN YOUR HOME DIRECTORY.
The yellow box to the right of the save label is a text entry with a default text of ~/PureData/Drums/. There is a space at the end of the default text that needs to be backspaced over before adding the new name. Using this box creates a new subdirectory when the Enter key is pressed and then opens a save-as dialog box within the new subdirectory. Just enter the name of the subdirectory again in the filename field, and the following files will be saved there: filename.clk, filename.drm, filename.drm-2, filename.drm-3, filename.drm-4 and filename.set.
The GUI bang next to the load label opens the open-file dialog for loading all the parts and defaults to ~/PureData/Drums/. Open a directory and select any file, as only the filename part before the extension is used for loading.
There are separate [S] and [L] bangs for saving and loading the clock, each drum pattern, and the whole drumset settings.
Along with the clock settings, the volume, low-pass filter and high-pass filter settings are saved and loaded.
For this to work out of the box you will need to create a directory ~/PureData/Drums/ (the tilde (~) refers to your home folder in Linux and your user folder in Windows) and then copy the files in the attached /Drums directory to your newly created /Drums directory.
[a-drum-kit.zip](/uploads/files/upload-224a17a6-aff5-468d-8681-4188d57907c3.zip)

I'd like to dedicate a netbook to Linux (Ubuntu, I think) and I like the eee-1101ha (cheap, with long battery life). But I wonder whether Pure Data could run well enough on it. My doubt mainly concerns the GMA 500 graphics chipset integrated with the Intel Atom Z520 CPU; in fact (if I understand correctly) this GMA 500 may not support OpenGL on Linux (or the Linux drivers may not provide OpenGL support), and that is a bad thing for the Gem library.
Another possibility is the eee-1201N (NVIDIA ION-LE + Intel Atom 330 Dual Core). It isn't cheap and its battery life is short, but I suppose the NVIDIA ION-LE supports OpenGL on Linux as well.
Right in the middle there's the eee-1201ha (Intel Poulsbo US15W + Intel Atom Z520 CPU), which is quite cheap; perhaps the Poulsbo could provide OpenGL support on Linux?
I don't know, guys; please tell me your opinion on running Pure Data + Gem on Linux.
Thanks a lot

Hello all (my first post here)
Just started to learn Pd. I'm building a surface for my Akai LPD8 at the moment, see attached.
Some questions.
I was looking for a knob but couldn't find one in Pd. When I looked around here in the forums I found the Korg Nano abstraction, with knobs in it, and I copied those over to my patch. My question is: where did this knob come from? Was it hidden all the time somewhere in Pd? Or was it a special build, or something of the patch I opened? Are there more of these hidden things in Pd, and where can I find them?
Question 2:
If you look at my patch, to the far right is a little thing to light up the pads on my controller when a note is sent. I am not sure about the \[== notenumber\] part; is that the way to go?
/Jon
[http://www.pdpatchrepo.info/hurleur/LPD8\_surface.pd][0]
[0]: http://www.pdpatchrepo.info/hurleur/LPD8_surface.pd

Hey there. I'm beginning to get acquainted with music production under Linux and there's a question that bugs me about Pure Data.
Some argue that one strength of music production under Linux is the modular approach given by the JACK audio server; it allows completely independent apps to route signals to each other, and so on. The major problem with this is a complete lack of cohesion: it is the classical FOSS paradigm. You have umpteen apps, each providing approximately the same functions as the others; they are coded in perhaps different languages, have their trade-offs here and there, and then A is missing some must-have feature of B and conversely; a lot of effort is dissipated, the UIs are chaotic, the software architectures are redundant, with bugs, duplicated semi-polished features, etc.
Then you have Pure Data, which at first glance looks like a high-level, integrated audio development environment, just the unified framework one would need to build and enhance a perfect music creation environment. Yet as it looks (forgive the short-sighted vision of a newbie), Pd has remained a development-only platform that has yet to yield its fruit to the other end of the spectrum of the music community, whose interests are more focused on the architecture of rhythms and melodies than on the software generating them.
So my question is this: is there such environment and if not, why not?

Here is my experience...
I'm learning Pd before I buy Max/MSP, since Pd is a lot like Max but free, there is a whole lot more documentation on Pd than on Max, and the people who developed Pd are the same people developing Max (from what I've read), so I figured I'd learn Pd before buying Max.
This has a lot to do with me buying Reaktor 5 before I learned programming; I hated myself for years for that and ended up using the library and almost never opening Reaktor after that.
After about 3 or 4 months of annoying reading on Pd I'm on my way to building my first FM-synthesis drum machine, so yay! I don't feel so stupid now, and studying Pd is making me understand Reaktor 5, so I'm guessing it'll help me with Max/MSP as well.
So take it from someone who made an idiotic choice on purchasing software about 2 years ago based on who was using it, being a fan of Richard Devine and Datachi and electroacoustic music.
Software won't make you sound great; only studying hard will.
Pd is a great place to start, and I'm guessing it'll be a great place to stay...
Hope this helps.

@snowball: I should probably clarify what I mean about gui and functionality. When I said there are few instances when the gui is very important to the functionality, I was referring to the fancier gui elements. I actually do believe in many cases the gui is important to the functionality. It is important when it gives the user the proper visual feedback to help them understand what's going on. In most cases, sliders, knobs, number boxes, etc. present you with all you need. You don't always need interactive waveform displays or filter curves; and you aren't likely to need adjustable rounded corners, elaborate color schemes, gradients, and segmented patch cords. Now, to be honest, I am kind of a sucker for cool guis, and have spent time doing "hacks" with what Pd provides to make things do what they weren't intended to do. But I have always gotten what I needed.
To give you an example of something that I felt needed a nice gui, here is a video presentation of my final project in college:
[http://www.youtube.com/watch?v=CBKeylzQHOc][0]
The idea was to use a Photoshop-based interface that most of us are in some way familiar with and do sound design with it. I did this in Max because I didn't think GEM had what I needed to do it (I've just recently looked at GridFlow, however, and I think it may have the missing pieces I was after). pd123's comment about Jitter similarly reflects my experience with this project. While it has the nice objects I wanted to use, it also had some irritating bugs that were hard to pin down. As a result, this project was, and still is, unfinished.
When it comes to "more advanced" things like \[poly~\] and \[pfft~\], this is one of the big differences between Max and Pd-vanilla. Max offers quite a few more higher-level objects, which make things easier for new users, as they don't have to try to patch those things themselves (\[freqshift~\], \[gizmo~\], and \[stutter~\] are a few others that come to mind). BUT, Pd offers what you need to patch most, if not all, of those yourself using the objects it does have. And Pd-extended includes many externals and abstractions that clone Max objects. \[nqpoly4\], for example, is a \[poly~\] clone made with Pd objects. I haven't done much FFT in Pd, but as far as I can tell, \[pfft~\] is essentially like putting Pd's FFT objects in a subpatch and adjusting the blocksize with \[block~\]. I don't believe Max lets you adjust the blocksize per subpatch, so \[pfft~\] is a workaround for that.
Don't get me wrong, I don't think these high level objects are a bad idea. I use Pd-extended, after all ;-).
[0]: http://www.youtube.com/watch?v=CBKeylzQHOc

I agree that the Max/MSP tutorials are great, especially for newbies, as they're very thorough, though not as comprehensive as Pd's. However, it should be noted that the Max/MSP documentation is a little better suited to those using Pd-extended than vanilla Pd. Many of the objects used in Max/MSP go by different names in vanilla Pd (e.g. MSP's \[cycle~\] is like Pd's \[osc~\] or \[tabosc~\]), if they are there at all (generally, the ones that don't exist can be built as abstractions). Pd-extended, on the other hand, includes clones of most Max/MSP objects, most of which are in the cyclone library. \[cycle~\] and \[play~\], for example, are not vanilla Pd objects.
[www.pd-tutorial.com][0] and the FLOSS manuals at en.flossmanuals.net/puredata are both very good resources for those getting started with Pd as well.
[0]: http://www.pd-tutorial.com

I'll take another stab at a knob.
check this one out. You can send it \[label $1( and \[background $1( to change the color, and the knobs work much better. Still eats way too much cpu.
[http://www.dafe.lukifer.net/pdpatches/crazymachineredux/knob.pd][0]
Still needs to:
autoresize eg. \[knob 100\] for big knob
eat less cpu.
etc... what else?
[0]: http://www.dafe.lukifer.net/pdpatches/crazymachineredux/knob.pd

Hi Daisy,
I followed your reference to Türk and Arslan; I found many interesting claims but no published material. Critically, I cannot find a published algorithm nor any experimental data. Can you give a reference to the algorithm as a complete DSP block diagram, pseudocode or source code, please? Further, I found a reference to a patent on this claimed approach, which as a scientist I have moral objections to. If this work is not in the open and free to build upon, it has no utility. One abstract mentions training. This implies a neural, expert-system or Markov approach. This aligns with my proposal, given in the other thread, to use a dictionary of transformations on the intermediate data.
As you said, you are not a DSP programmer, so I believe this is probably beyond the scope of your research. Have you tried contacting the authors to ask for their code?
I must admit I do not entirely understand your proposed experiment. It seems flawed in two respects: it has arbitrary constraints and too many variables for what I understand the experimental objectives to be.
To answer your questions as best I can:
\> Is VoCoder alone is sufficient for morph/mix/crossfade among two voices?
\> or I should also add the pitch shifting module with VoCoder to get some more qualitative results.
A vocoder, even if combined with pitch shifting is insufficient to create a morphology between two people speaking unpredictable sentences in real time.
\> Question 2:
\> I already tried this VoCoder example, but could not change it according to my requirements. In
\> my requirements, I have a target voice (the target voice is phonetically rich) and now source
\> speaker is speaking (what ever he want to speak) and source voice is changing into target
\> voice. (illusion/crossfade/mix)
I'm sorry, I can't find the question.
\> Question 3:
\> what should be the ideal length of target wave file?
5 seconds
Time permitting I may be able to help you and your supervisor design the appropriate framework for this experiment. Please contact me by private email (look in the profile), as we are now off-topic for this forum.
best regards,
Andy

@obiwannabe said:
> So the data is a set of distances between objects
> on a plane (and their orientation?)
Yep. So you can chain a generator and an effect to the output just by putting them on the plane, like this:
osc ------------ delay -------------- out (center of plane, represented by 0,0)
Each module is represented by a marker on the table, and each one has one control (rotation, like a knob) plus volume (determined by distance).
I've got this much done:
osc -------- out
with the rotation of the osc determining frequency, and the distance from the center determining volume. The subsystem for dynamically connecting things is in place (just send a float to fid-in, like 5-in, to connect to something other than 16: mainout).
@obiwannabe said:
> With more than one occurrence of the same symbol possible (or must they be unique?)
In the Tuio system, you can use fid (the number of the actual marker) or the individual id (sid or session\_id) of the marker on the table. For performance, I didn't want to dynamically load/unload modules (it made it choppy) so I just went by fid, but you could use the sid, if you wanted to. I have no problem mapping hundreds of them at start time (in the main\_snd and main\_gfx parts) so going by fid works really well for me, and allows you to just map all the numbers that you have fiducials printed out for. Later, I could even read these mappings from a \[textfile\] but right now, I just want to get the logic worked out.
@obiwannabe said:
> So you have the vertices of a fully connected graph of points
> plus a rotation for each point as control data. Additionally each object
> has a shape and the distance is between edges not center points
> so an object's rotation affects its distance to all other objects
> (or use perfect circles). Am I getting this?
Yep, I'm going by the center point of the marker, so it's a circle. I'm using the coordinates to determine gain, so the further from center, the quieter. What I want is the same thing, but between individual markers. Right now, all the marker snd units have 1 ain and 1 aout, and if they are not effects, the 2 are just connected. (I think it'd be cool to set up osc and phasor units that are normal, or ringmods, depending on whether they have audio input, but I need to work out the distance logic first.)
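The distance-to-gain mapping described above can be sketched in a few lines. This is just an illustration, assuming a playable area of radius 0.5 around the table centre (the real patch may scale things differently):

```python
import math

def gain_from_distance(x, y, max_dist=0.5):
    """Gain is 1.0 at the table centre (0,0) and fades linearly
    to 0.0 once a marker is max_dist away or further."""
    d = math.hypot(x, y)          # straight-line distance from the centre
    return max(0.0, 1.0 - d / max_dist)

print(gain_from_distance(0.0, 0.0))   # centre: full volume -> 1.0
print(gain_from_distance(0.5, 0.0))   # at the edge: silent -> 0.0
```

The same function works marker-to-marker: feed it the difference of two marker positions instead of a position relative to the centre.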
This might make a whole lot more sense if you download the thing, then download TuioClient [here][0] and the reactable simulator [here][1]
Put TuioClient in your pd path (they have versions for mac, linux, and windows in the same file, I am using linux and mac)
Run the simulator (java -jar TuioSimulator.jar)
Open my patches, and press the "0" to start the graphics (in a window). I do it this way so I can script it going into fullscreen (have a look at my shell script). You can also leave it off, and just hear the sound stuff (without any graphic output), by not pushing the 0.
Move 16 (volume knob) onto the table area in the simulator and turn it. It will change the brightness of the center dot (to show gain). If you remove it from the table area, the gain will freeze at that level.
0, 4, 8, 12 are \[phasor~\]s
36, 37, 38, 39 are \[osc~\]s
Put some of these on the table to hear them. You can get cool beating effects by moving a phasor close to the center, turning it so it slows to a click-beat, then putting some \[osc~\]s around on the table.
This is an example of what I've got so far. The reason all this is necessary is that the reactable people haven't released any of the code they use to make graphics or sound, just the code that interprets the actual camera data. I want to make a free, easy-to-use version that does pretty much the same stuff, but on low-end hardware, all in Pd, so non-OpenGL/DSP programmers can play with it.
I attached a screenshot of the gem window in Linux.
[http://www.pdpatchrepo.info/hurleur/dktable.png][2]
[0]: http://prdownloads.sourceforge.net/reactivision/TUIO_PureData-1.3.zip?download
[1]: http://prdownloads.sourceforge.net/reactivision/TUIO_Simulator-1.3.zip?download
[2]: http://www.pdpatchrepo.info/hurleur/dktable.png

I think it would be fairer to say that Pd is more 'efficient' than Csound. But to explain why, it's necessary to explain why that's a bit like comparing screwdrivers to spanners.
Csound was never designed as a real-time language. It came from an old Music V style design. As a very brief summary, it's Barry Vercoe's take on Max Mathews' work, which came out of a batch-job paradigm. It sort of got real-time by dint of computers getting faster, while Miller took a real-time approach from the start because Pd came out of early versions of IRCAM's Max patcher, which was more MIDI-oriented. Same under the hood, but with a subtle difference. In Csound you have i-time as well as two computation rates, krate and arate. Pd has the same duality, but these are the message domain and the signal domain. Pd doesn't have i-time (initialisation time), but you can make it yourself by precomputing tables and then raising DSP=1 to start the program going. In Csound these can be part of a "note" definition that lives in another file called the "score", while the patch is an "instrument" which is part of a collection called an "orchestra" file. There is no score in Pd; Pd just sits there and listens for input or produces sounds by virtue of the message-domain program it's running.
Because Csound doesn't have the same time constraints it can take longer and compute in more detail.
Csound has a lot of special functions, it makes use of library things that would be "external" in Pd. What it doesn't have that Pd has is the ability to make "abstractions", so that complex things can be built in terms of lots of simple things and shared, reused and deconstructed easily.
Where Csound triumphs is accuracy and detail of sound quality. You can ask Csound to produce "impossible" scores that would never run in Pd/Max and set it rendering overnight. It's a different approach to composition than the interactive one on which Puredata wins. Think of it as a games engine vs 3D Max. The sense of "Max", whatever that may be, is inverted.:)
How Csound achieves this is that, by design, it can take as long as it wants to work something out. How Pd achieves its speed is that it uses practical approximations that will run in a guaranteed time. Pd is less accurate than Csound in most uses because it cuts some corners, but for the most part you don't hear these much.
This difference suggests two appropriate kinds of use. For design, interactive and real-time audio you would use Puredata. For final rendering of highly detailed soundscenes for a broadcast media like film you would render your ideas using Csound.
Hmm, so spanners and screwdrivers. But to answer the question: no, Pd is the more efficient, because it has to be. Or, to use a car coordinate system, Csound is possibly classier, like a Merc, Bentley or Jaguar, while Pd is a Lamborghini or Aston Martin, without being chavvy like a Lexus (Reaktor).:)

The most interesting thing I read this morning was this interview posted on Digg,
[http://sztywny.titaniumhosting.com/2006/07/23/stiff-asks-great-programmers-answers/][0]
in which David Heinemeier Hansson says in his opinion "a strong sense of value" is essential to creativity.
That's at the nub of many a flame war - really differing sets of personal value judgements within a landscape of options that aren't really that different when you look at it critically.
Learning Linux will teach you more than just Linux; it's a great way forward to understanding more about computing in general. Do you want to do that? Do you want to understand the machines at a lower level? I always recommend Mac to "artists" even though I am a pure-blooded Linux zealot and grand master of the Templars of the Church of the Holy Penguin, sworn to slit the throats of non-believers on our crusade against the darkness. Sometimes it's just better to point people at what is right for them.
$400 to Cycling74 will help sustain a fine commercial software enterprise and it will buy you support. You need to value that, do you need the support and do you think the bargain is worthwhile (consider your own comment that you "cannot upgrade").
What are your past experiences of commercial software support? My own decisions were based on wasting $500 on Reaktor to find a years worth of my patches were scrap because I couldn't upgrade. That was a deliberate decision by Native Instruments to enforce a dongle based copy protection that they knew was screwing over their users, so I abandoned NI products forever.
On the other hand, Puredata is in a constant state of development. Do you need stability? Do you value that over cutting edge development? If so paying for a tightly version managed product may save you a lot of time and headaches. It may also create as many when they won't do what you want or need.
As a classical (writing) composer, you are certainly going to want integration with Logic or Sibelius or some kind of scoring package - so what is it about Pd that is attractive? There are no score interpretation marks, other than MIDI I/O.
Anyway good luck figuring it all out.
[0]: http://sztywny.titaniumhosting.com/2006/07/23/stiff-asks-great-programmers-answers/

\> yes, probably. but now I'll try to run it on my Atari ST ;-)
Seriously, try it if you get time one day. Oddly, the Atari ST is still \*THE\* choice for some serious techno musicians. Why? The simplicity of how the UART is addressed and clocked gives it rock-solid MIDI timing. It's something that seems to elude complex architectures even with the best preemptive scheduling, buffering etc. I've watched top producers take a MIDI file on floppy disk from their $5000 super Mac/PC systems to have it play back on an Atari for final mixdown. It's one of those analog-vs-digital type debacles where real experience and good ears trump what "technically shouldn't be so". The ST lacks enough grunt for useful audio DSP, but as a MIDI processing hub or sequencer it could be an astonishingly powerful tool with Pd if you can compile it.
\>mhh... this is just a anthropomorphic vision of reality...
You got me.
\>what I need to ask now is where I can find reference for all objects:
\>I know that there's no menu of them and i have to type their name in those little boxes, but
\>I need to know, at least, what objects I can create, typing their names, is it true?
Yeah, that bothers me too. Even after using it for some time I forget the name of an atom and have to go looking for it. I often do something like "ls /usr/lib/pd/externs/ | grep pd\_linux$ | less" to see if I actually have something. On Windows, likewise, search the externals directory for .dll files.
\>so, I would like to have a list with the object identifier (for oscs, filters etc.), their
\>details (kind of filters, slopes, ripples etc. for filters, as example ), their parameters (cutoff, Q, etc.)
\>is there a documentation like this?
The help files are detailed, well written and easy to use - once you know that such an object exists. Just right-click any atom and select "help". Usually there's an example case.
Check these to find common atoms
[http://puredata.hurleur.com/sujet-248-suggestions-noobs][0]
[http://ccrma.stanford.edu/courses/154/pd-reference.html][1]
[http://pure-data.sourceforge.net/documentation.php][2]
\>I know... but I still feel more confortable with a traditional language (C++, pascal), also
\>for writing my personal VSTs (you know, for those weirdest things...) I think it's still easier to write "algorithms" with a textual language,
\>without a graphical metaphore.
Raw code is not an expedient or practical way to make music. Having used Music(N), Csound, Nyquist (LISP/Scheme) and all that stuff, I can say this from the bottom of my heart after 15 years' experience. Pd gives you two really important things from a software engineering point of view. Its modularity and clarity of interface when abstracting things just beats any C++ classes hands down for its intended purpose - digital signal dataflows. Consequently you get better decoupling and better reuse. One of the few pitfalls for a trad programmer, imho, is that Pd is very dirty on types; in a way it's one of the most badly typed languages I've ever experienced. Ironic for a tool called "pure data", but you get used to its lovable idiosyncrasies vis lists, messages, numbers, arrays, symbols and generic "anys". Also, its scoping rules leave a lot to be desired; everything's global within one instance of the server unless you put $0- at the head of a name.
\>But now I need to teach a course on "languages for electronic music" in a classical, academic school.
\>they don't know DSP mathematics or anything like that,
\>so I urgently need to find a more "abstract" instrument for doing the lessons...
You couldn't wish for a more appropriate tool. For non-maths/physics students you can use the power of abstraction to build "black boxes" like synths, analysis tools and sequencers, and then open them up later in the course. As Claude says, it takes about 9 months or more before you really take to PureData. Electronic music is BIG, really big - not as big as space, but it's a discipline that just explodes in scope once you get into it. You can waste weeks writing externals in C, or designing a synth, or creating a composition method... you can get really lost on a random walk in d>2. The best way forward is to have a context and a goal. Teaching this course sounds like an excellent vehicle to focus your scope.
\>Tried also Jmax but on Windows (required OS, because \> 95% students use billgatesware ) is quite unstable
I would make it "unrequired". Put your foot down as course leader/tutor that Windows is unsuitable. In order of preference I would go with Mac, then Linux, then Win. If the students only have Windows, then try Dyne:bolic ( [http://dynebolic.org/][3] ), a minimal GNU/Linux distro that runs from a CD in RAM and comes preconfigured with PureData and a smorgasbord of other digital media tools. That said, I've seen it work really well on Windows. Once. I've no hard evidence to back this up, but I feel a disturbance in the force when Pd runs on Windows, as if a million threads cried out at once and were suddenly silenced. I don't think it likes heavily loaded machines, and I'm 99% certain the reason it's unstable on Win is down to \*other\* things running. Hint: a music machine shouldn't double as an email server and GCHQ spyware centre. Start with a clean install and nothing else running and you may have better luck, but that will probably remain stable about as long as a schizophrenic Z-boson particle if you network it.
[0]: http://puredata.hurleur.com/sujet-248-suggestions-noobs
[1]: http://ccrma.stanford.edu/courses/154/pd-reference.html
[2]: http://pure-data.sourceforge.net/documentation.php
[3]: http://dynebolic.org/

Hi List,
I'm new to Pd, GEM and computer video, but am familiar with MIDI in Max. I have a few questions. Pd and GEM seem like truly wonderful and amazing tools, but I need to get my bearings.
Is there any way to start up Pd so it knows my audio and MIDI prefs - so I don't have to reset these if it crashes (I'm using the wish & Tcl/Tk shell installation)? Maybe something in the terminal when I run it? Is there some preference file for the Mac components I can tweak?
So what's the best strategy for dealing with presets in patches, as there is no preset object? I noticed that there is no preset module like in Max. I read the "differences from Max/MSP" section but am still unclear. Do I need to make lots of message boxes and click all of them?
Is there any Show/hide for cables?
Is this the best place on the web to hang out and ask questions to learn about PD and GEM?
Also, is there any place I can see where inlets and outlets go to, or what signal they want?
Thanks a lot in advance.
-b

could you tell me which OS, card, platform and distro you use?
i use pd 0.37 test10 under win2000 sp4 (but it performs perfectly on xp sp1 too on my computer)
i use zexylib, maxlib and iemlib.
i use mmio mode for best latency (i know it sounds really strange, but it performs much better than asio at my gigs.)
my card is an rme hdsp9632 with a buffer size of 1.5 (1.415...) ms (a 64-sample block).
whatever value i set for audiobuf or block, it stays at 1.5...
i've made a patch using pd as an effect (decoding a LtRt movie mix live to LCRS (surround))
it performed quite well, in sync with the screen. (indeed, one video image is 40 ms long, so with a 1.5 ms latency that's ok)
but i've heard about the jack low-latency driver. i'd like to hear opinions from linux users....
anyway, i should already be on linux, but since i bought that new card i can't manage to get sound with it; before, it was ok with my motherboard's AC97 chipset (working, huh), and as far as i remember it never crashed under linux, against two or three times a day under windows, and the cpu consumption was really lower than under windows - something like 25 to 40% less (using windows' performance monitor, and top on linux). i understand that this difference comes from the graphical management of tcl/tk.
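The buffer figure above is easy to sanity-check: one block of audio lasts its size in samples divided by the sample rate. A quick sketch (the 44.1 kHz sample rate is an assumption, since the post doesn't state it):

```python
def block_latency_ms(block_size, sample_rate=44100):
    # Duration of one audio block in milliseconds.
    return 1000.0 * block_size / sample_rate

# A 64-sample block at 44.1 kHz is roughly 1.45 ms.
print(round(block_latency_ms(64), 3))
```
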

I've been getting into writing patches that generate music all by themselves, using mathematical
rules that apply quite nicely to music theory. I've made a few rhythm patches that make nice cross
rhythms using metronome division and delays (with values derived from multiples of the master
metronome), and i'll post these too if anyone is interested.
In this thread I'm showing off my "Maurits Escher like Chord progressions" patch.
Screenshot: ![](http://responsible7.googlepages.com/zenpho_escher.gif)
Mp3: [http://responsible7.googlepages.com/zenpho\_escher\_pd.mp3][0]
Patch: [http://responsible7.googlepages.com/zenpho\_escher.pd][1]
**First some basic music theory:**
(skip this if you're comfortable with chords, 7ths, and inversions)
A major scale is constructed of 8 notes, with the "root" note doubled at the 8th note.
For the key of C major (all the "white" notes on a piano), the names and numbers of the notes in the scale of C major are:
Name, Number:
C, 1st (root)
D, 2nd
E, 3rd
F, 4th
G, 5th
A, 6th
B, 7th
C, 8th (remember the root is doubled at the octave)
A triad is constructed of the 1st, the 3rd, and the 5th notes in the scale.
A SEVENTH chord is constructed of a triad (notes 1,3 and 5) PLUS the 7th note in the
scale. So a C major 7th is note 1,3,5,7 or C,E,G,B.
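The construction above is easy to state as code: pick the 1st, 3rd and 5th (and 7th) notes of the scale. A small sketch:

```python
# The C major scale; degrees are 1-based, as in the text above.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def chord(scale, degrees):
    """Select scale notes by their 1-based degree numbers."""
    return [scale[d - 1] for d in degrees]

print(chord(C_MAJOR, [1, 3, 5]))      # triad: ['C', 'E', 'G']
print(chord(C_MAJOR, [1, 3, 5, 7]))   # major 7th: ['C', 'E', 'G', 'B']
```
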
Up until now we've been describing "standard" voicings of the chords, in other words, the notes
are played so that the root is the lowest pitched note, the 3rd is higher, the 5th is higher
still, and the 7th is the note just below the octave of the root.
At the risk of sounding redundant, "octave numbers" after the note name help clarify which octave
the note is to be played in. To play a C major 7th on the third octave, we would write:
C3,E3,G3,B3\. To play it an octave higher we would write: C4,E4,G4,B4\.
"Inversions" of chords re-order the pitches of the notes, but still play notes with the same
"name" as the 3rd, 5th, 7th etc. For example:
C3,E3,G3,B3 is a standard C major 7th...
...and G2,C3,E3,B3 is an inversion. All the notes are there (C,E,G,B) but they are in a different
order to the normal "Root, Third, Fifth, Seventh" arrangement. In this case, we say that "the
fifth is in the root".
----
Okay, so now we know what a major 7th chord is. Let's deal with chord progressions.
Now imagine playing C3,E3,G3,B3 and removing the "root" (the C3) from the notes played:
we have a chord that reads "E3,G3,B3" - we were playing C major 7th and now we're playing E minor.
\*THIS IS A VERY IMPORTANT STEP\* Moving from C major 7 to E minor sounds "natural" because the
notes that occur in C major 7 ALSO occur in E minor.
Now lets make this E minor chord a 7th...
We've said before that a 7th chord can be constructed by playing the 1st, 3rd, and 5th notes, PLUS
the 7th note in the scale.
The scale of E minor (a flavour of minor) is:
Name, Number
E, 1st (root)
F\#, 2nd
G, 3rd
A, 4th
B, 5th
C, 6th
D, 7th
E, 8th (octave)
The 7th note is "D" so we add the D note to our E minor triad to make E minor 7th.
E minor 7th is therefore: "E3,G3,B3,D4".
We can extend this E minor again, removing the root, working out the new scale for G major, adding
the 7th to make G major 7th, and again, and again, and again... but if we do - we keep moving
\*UP IN PITCH\* and spiral off the end of the keyboard.
----
**HOW THE PATCH WORKS**
Okay, so what my patch does is to take the idea of generating new 7th chords over and over,
but to play inversions of these chords so that the notes stay inside a single octave. If the
"root" note is in the 3rd octave, C3 for example. Then when I move to E minor, the D4 is
transposed to be a D3, to keep within this octave range.
There are 12 semitones in an octave, so notes that fall outside the octave range wrap around to be an octave lower. The maths for generating the new chords basically involves taking each note in the current major 7th chord and adding two semitones to each note in turn.
Now our terminology could cause confusion here, because there are "notes in a scale" and "notes in
a chord"... so I'm going to define some notation to show when I'm talking about the notes in a
chord.
For example:
A C major 7th has the notes C3,E3,G3,B3\.
Note-1-in-the-chord is to be defined as chord\_note\_1\.
Note-2-in-the-chord is defined as chord\_note\_2\.
Note-3-in-the-chord is defined as chord\_note\_3\.
Note-4-in-the-chord is defined as chord\_note\_4\.
chord\_note\_1 has the pitch C3\.
chord\_note\_2 has the pitch E3\.
chord\_note\_3 has the pitch G3\.
chord\_note\_4 has the pitch B3\.
It is important to be clear about the ideas of "pitch", "chord\_notes" and "scale\_notes",
because chord\_note\_3 has the pitch "G3", whereas scale\_note\_3 of C major has the pitch "E3".
----
Back to the procedure for generating new seventh chords.
We generate a major 7th to begin with.
C3,E3,G3,B3\.
We add 2 semitones to chord\_note\_1 to get "D3", and we leave the other notes alone.
Our chord now reads: D3,E3,G3,B3\.
Which is an "inversion" of E minor 7th.
This time we add 2 semitones to chord\_note\_2 to get "F\#3", and we leave the other notes alone as
before.
Our chord now reads: D3,F\#3,G3,B3
This is an inversion of G major 7th.
This time we add 2 semitones to chord\_note\_3 to get "A3", we leave the other notes.
Our chord now reads: D3,F\#3,A3,B3
This is an inversion of B minor 7th.
This time we add 2 semitones to chord\_note\_4 to get C\#4...
\*BUT C\#4 IS OUTSIDE THE OCTAVE 3! So we TRANSPOSE it down to C\#3\*
Our chord now reads: D3,F\#3,A3,C\#3
This is an inversion of D major 7th.
After my patch modifies all 4 chord\_notes, it moves back to chord\_note\_1, and adds another
2 semitones... over and over.
Eventually we get back to C major 7th again, but on the way we move through a variety of different
chords that evoke very interesting changes of mood.
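The whole procedure can be sketched in a few lines of Python (a sketch of the rule as described above, not the patch itself; I'm assuming MIDI-style note numbers with octave 3 spanning C3 = 48 to B3 = 59):

```python
# Start from C major 7th and repeatedly raise one chord_note by 2 semitones,
# wrapping any note that leaves octave 3 back down by 12 semitones.
HIGH = 59                             # top of octave 3 (B3), with C3 = 48 assumed

def step(chord, i):
    """Raise chord note i by a whole tone, transposing into the octave."""
    note = chord[i] + 2
    if note > HIGH:
        note -= 12                    # wrap: e.g. C#4 becomes C#3
    new = list(chord)
    new[i] = note
    return new

chord = [48, 52, 55, 59]              # C3, E3, G3, B3 = C major 7th
for n in range(8):                    # two passes over the four chord_notes
    chord = step(chord, n % 4)
    print(chord)                      # inversions of Em7, GM7, Bm7, DM7, F#m7, AM7, C#m7, EM7
```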
**Want to try playing with it?**
Mp3: [http://responsible7.googlepages.com/zenpho\_escher\_pd.mp3][0]
Patch: [http://responsible7.googlepages.com/zenpho\_escher.pd][1]
[0]: http://responsible7.googlepages.com/zenpho_escher_pd.mp3
[1]: http://responsible7.googlepages.com/zenpho_escher.pd

Hi Liam,
If you asked me whether I have had the same experience as yours, I'd answer both yes and no. Yes, because indeed my compositional output with Pd has been ridiculously small since my switch from Max; basically I have only been rewriting older compositions of mine from Max into Pd. But on the other hand, I dedicated these last months to really and properly learning Pd, without any worries or pressure about producing music (I feel much more confident about Pd nowadays than I ever felt with Max in the past). So right now I am just starting to work on a major instrumental piece with live electronics, and I consider what I have been doing so far to be important research for this and all my future pieces.
As for your points, here is what I think:
1) I have always felt this difficulty in creating _actual good sounds_ as well. Certainly this is a huge topic, but on the most basic level I see that it is simply very tough to construct sounds from scratch using simple oscillators and achieve some kind of rich or new sonority. A lot of my pure electronic pieces use simple sinusoidal oscillators (and this is not a problem if these pieces are conceptual, or if the structure is of main importance, etc.). But if this is troubling you, here is a possible solution: use Pd for composing musical structures, and use sounds from somewhere else (via other synths, such as some VST). This way you can skip (at least for now) the tough area of sound design. Or use samples: record them yourself, manipulate them in some DAW (I really recommend REAPER http://www.reaper.fm/, I am utterly in love with it right now) and then use them algorithmically in Pd. Then you can simply organize them with Pd, or also manipulate them using reverb, delays, effects in general, etc.
2) That's the tough part of being a composer in general, I'd say. I come from an academic background, which means that I am a full-time student of composition and I have been dedicating almost a decade of my life exclusively to that, and this is still something that hits me from time to time (and everyone around me as well). On the bright side, the older (and hopefully wiser!) I get, the less these types of insecurities and blank periods and crises affect me.
Take care and good luck with your music!
Gilberto

Hi everyone!
Here is a new patch of mine:
ALFATAPE is an emulation of a simple 8-track recorder.
It is built to be used with a KORG nanoKONTROL2 controller.
You can, of course, re-assign it for use with a different controller
(see README.pdf).
You *have* to use *some* MIDI controller - only menu-type things are
accessible via mouse.
***EDIT: As of the new version (0.1.1) this is no longer true! You can now also
use your computer's keyboard to control ALFATAPE. New link below!***
The goal is simple:
- no mouse-use
- no visualized audio
- 100% destructive
- good transport-behavior
- easy to use
some ALFATAPE-features:
- 4 markers (set or go to)
- choice of two loops (range or punch)
- punch-in/out
- recording while looping
- slow rew/forward (while listening)
- fast rew/forward (while listening)
- bounce tracks together (in real-time)
- write out audio quickly as wav (rough-mix)
- latency-test
- "open", "save as" or "eject" tape
- 44.1k or 48k
- change input of individual tracks easily
- import audio to track (placing at zero)
In the main folder you will find a .png of the nanoKONTROL2 with the names of the
individual buttons/knobs/sliders (yellow) that I refer to for shortcuts and
MIDI control.
You'll find a KORG nktrl2_data file too, which you can use for flashing your
nanoKONTROL for use with ALFATAPE. In order to do so, you have to use
an application from KORG called KORG Kontrol Editor. I used KORG Kontrol
Editor v1.3.0 under Linux with Wine without any problems. This is not about Pd,
but it's a nice little piece of software anyway: you can alter how the buttons/
knobs/sliders of your nanoKONTROL behave (momentary or toggle) and a lot
of other useful things...
![printscreen_ALFATAPE_0.0.3_2.png](/uploads/upload-45da3b96-8f41-48c7-a38e-bbc7992e7f8f.png)
![printscreen_ALFATAPE_0.0.3.png](/uploads/upload-158e08c9-1e60-47da-98c4-e2bc13b9d19d.png)
Here's the zip:
[ALFATAPE_0.0.3.zip](/uploads/upload-96e7a5e4-0423-485f-acc1-3d84e3ffb4ec.zip)
You can alternatively download ALFATAPE from my site:
http://www.marcobaumgartner.com/puredata/ALFATAPE/
Hope you like it!
It's still alpha - I'd be glad to hear what questions/problems you encounter.
Have fun!
Marco

Hi all,
I have been using Pd for quite some time already, but only recently have I migrated from Windows to Linux (currently I am using Linux Mint 17 64-bit). Dealing with MIDI objects was very simple on Windows, but I am having a hard time configuring it on Linux. On Windows, I could simply create a simple patch using some MIDI objects and I would hear output straight away.
My current Pd installation on Linux does produce sound when the DSP is on (such as with [osc~]), but no MIDI sounds are heard when I try the "Test Audio and MIDI..." patch, nor with any of my patches.
Here is what I am doing:
- first, I open QjackCtl (GUI for JACK) and click START (so I am assuming I am starting JACK by doing so, am I right?)
- then I open pd-extended, which has "jack" selected as its audio output (my choices are OSS, ALSA, portaudio and jack). Pd always gives me the following error message when I open it:
couldn't open MIDI input device 0
couldn't open MIDI output device 0
opened 0 MIDI input device(s) and 0 MIDI output device(s).
- as for MIDI, Pd always loads with default-MIDI selected, but I go to that menu and choose ALSA-MIDI. This makes Pd output:
Opened Alsa Client 129 in:1 out:1
- when I do so, I can see that Pure Data appears in QjackCtl's Connection under the tab ALSA
- Now if I open the Test Audio and MIDI patch, I can hear the oscillators, but when I click on the [tgl] MIDI OUT, nothing can be heard (but I can see a [bng] blinking).
- I have also tried to install a program named TiMidity (together with some sound samples for it) in order to play MIDI files on Linux (as far as I understood, this is necessary in order to hear any sound from a MIDI file, unlike Windows, which can do it out of the box). Now, when I play MIDI files via the terminal ("timidity filename.mid"), I can hear them playing. Also, after reading about it, I learned that the command "timidity -iA -B8,2 -Os" makes TiMidity become the ALSA sequencer device (after this command, timidity appears in the list of "Writable Clients/Input Ports" in QjackCtl's Connections, tab ALSA). But I still don't hear any MIDI sounds...
So does anyone know what I am doing wrong? Any help or ideas would be highly appreciated! Thanks a lot.
Regards,
Gilberto

Maybe you have both versions of Tk installed and Pd is using the old one. On Linux I would look for those libraries like this:
locate libtk | grep .so
...
/usr/lib/x86_64-linux-gnu/libtk8.4.so
/usr/lib/x86_64-linux-gnu/libtk8.4.so.0
/usr/lib/x86_64-linux-gnu/libtk8.5.so
/usr/lib/x86_64-linux-gnu/libtk8.5.so.0
then check the links with "ls -l" to see if there is a libtk.so that points to libtk8.4, for example.

ExpoChirpToolbox.
This will be the name of an IR measurement toolchain, based on the \[expochirp~\] object. The first two patches are built now:
- \[expochirp-generator\], generates an exponential chirp of excellent quality, together with the inverse chirp. Deconvolution test and inspection of the chirp's spectral characteristics within the patch. Chirp & inverse chirp can be stored with metadata in a text file.
- \[IRrecorder\], loads a chirpfile as created with \[expochirp-generator\], applies the chirp to the system under test. Deconvolution, and inspection of spectral characteristics of the impulse response. The raw IR can be stored as 32 bit floating point .wav file.
The next component in my program will be \[IReditor\], where you can load a raw IR as produced with \[IRrecorder\] and edit it: trim the useful portion, and eventually window the tail if necessary. The editor should apply the IR as a filter in a soundfile player, so you can check the resulting sound. When these patches work satisfactorily, we can think of additional tools: \[IRinverter\] to produce correction filters, \[IRanalyser\] for extracting acoustics parameters, \[IRambisonic\] for multichannel IR...
Attached is ExpoChirpToolbox.zip containing \[expochirp~\] binaries for OSX and Windows (I'll do Linux later) plus patches \[expochirp-generator\] and \[IRrecorder\]. Please post comments or suggestions.
edit: can't attach a file at the moment, download from here;
[http://www.katjaas.nl/temp/ExpoChirpToolbox.zip][0]
Katja
[0]: http://www.katjaas.nl/temp/ExpoChirpToolbox.zip

Hello Katja,
Fortunately I had a conversation with a colleague today about IR normalisation for another reason, and probably we got our answer.
In simulation software (3D acoustics model simulations), the auralisation module, when deriving an IR and then convolving the anechoic file, applies a gain at the two stages and stores the info in the file.
In the case of the IR, it applies a gain so that the maximum peak reaches full digital value (0.9999999999) and then applies the same gain to all the rest of the IR (24-bit integer).
When doing the auralisation it applies another normalisation gain to prevent clipping of the auralised file, taking into account that auralisation is done at 16-bit integer.
A simple program for the IR normalisation (converting the IR to PCM) can be:
Variable definitions
FP(N) = floating point impulse response
PCM(N) = integer (PCM) impulse response
M = sample length
Algorithm
MAX = 0
For N = 1 to M
If abs(FP(N)) \> MAX then MAX = abs(FP(N))
Next N
GAIN = (2^15 - 1)/MAX (full scale for signed 16-bit PCM; 2^16 would clip)
For N = 1 to M
PCM(N) = FP(N) \* GAIN
Next N
Store GAIN
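For reference, here is the same peak normalisation as a minimal Python sketch (my own naming; assuming signed 16-bit PCM, whose full scale is 2^15 - 1 = 32767):

```python
# Sketch of the normalisation above: scale a floating-point IR so its peak
# hits full scale for signed 16-bit PCM, and return the gain so the original
# level can be recovered (e.g. stored as metadata alongside the file).
def normalise_to_pcm16(fp_ir):
    """fp_ir: non-empty list of floats. Returns (pcm_samples, gain)."""
    peak = max(abs(x) for x in fp_ir)
    gain = (2**15 - 1) / peak            # full scale for signed 16-bit
    pcm = [int(round(x * gain)) for x in fp_ir]
    return pcm, gain

pcm, gain = normalise_to_pcm16([0.02, -0.5, 0.25])
print(pcm, gain)                         # peak sample -0.5 maps to -32767
```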
As you can see, the gain is arbitrary and depends on the IR (or the recorded sweep response), and there isn't a general rule as far as I know.
In our case we should do this normalisation of the file and store the gain applied somewhere (as metadata in the wav file), which would be useful in future applications.
In the case of multichannel IRs this can be a bit more complex, but one thing at a time is better.
Hope this helps
Bassik

I prefer Pd to Max.
I don't like the Max 5 GUI; it is not simple and direct.
And I have the same feeling that Pd is more stable than Max.
But the usability design of Max is better than Pd's.
And the Max 5 help center is better too.