In case it was my remark that made you think this: that one was in reply to what bachus said in the message before mine, and not directed at your code. I don't know enough about SuperCollider to comment on it. All I can say is that it looks complex, but then again I program in Forth, which I'm sure looks strange to many people too._________________Jan

I think you would have to put some randomization into it, or else it would either play the same thing every time or do exactly what you tell it to. Maybe for every event there are several things that could be triggered (or none), and it makes a random selection.
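A minimal sketch of that idea in Python (all names here are hypothetical, just to illustrate the "several candidates or nothing" selection):

```python
import random

def respond(event, candidates, none_chance=0.25):
    """For an incoming event, randomly do nothing, or pick one
    of several candidate responses."""
    if random.random() < none_chance:
        return None                      # sometimes trigger nothing at all
    return random.choice(candidates)     # otherwise pick one candidate

random.seed(1)                           # seeded only so runs are repeatable
pool = ["arpeggio", "drone", "percussion hit"]
choices = [respond("note-on", pool) for _ in range(10)]
```

Each run (without the seed) gives a different mix of responses and silences, which is exactly what keeps the output from being identical every time.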

Well, you *could* drive it from the words in a Scrabble game. The tiles are selected pseudo-randomly, but then the players impose structure on that pseudo-randomness._________________When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
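As a toy illustration of that two-stage idea (not the actual system described later in the thread), here is a sketch: tiles are drawn pseudo-randomly from a bag, a player imposes structure by forming a word, and the word's letters map to MIDI note numbers. The tile bag and letter-to-pitch mapping are invented for the example.

```python
import random

# A crude letter bag, loosely Scrabble-like in its letter frequencies.
TILE_BAG = list("AAABBCCDDEEEEFFGGHIIJKLLMNNOOPQRRSSTTUUVWXYZ")

def draw_rack(bag, n=7, seed=None):
    """Pseudo-random tile draw, like pulling seven tiles from the bag."""
    rng = random.Random(seed)
    return rng.sample(bag, n)

def word_to_midi(word, base=60):
    """The player-imposed structure: a word becomes a note sequence.
    Letter position in the alphabet, folded into one octave above base."""
    return [base + (ord(ch.upper()) - ord('A')) % 12 for ch in word]

rack = draw_rack(TILE_BAG, seed=42)
notes = word_to_midi("BAG")   # → [61, 60, 66]
```

The randomness lives in the draw; the musical structure comes entirely from which word the player chooses to lay down.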

Then I needed to work on a couple of pieces that would be performable without my presence. That is, the things I would do in performance (loading processes and other resources, starting and stopping musical activities, adjusting controllers) needed to be automated.
James

Nice project, and commendable direction, James!

The most recent bit of automation in Scrabble-to-MIDI is a set of AI Scrabble players written by my students and me. My ulterior motive is so that I don't have to take my fingers off of the banjo or guitar strings. A foot controller nudges the current AI. Humans can still play if they like, of course.

The next step is to semi-automate the tile-to-MIDI translator this way. It's quite feasible, because I always plan out a loose "score", set out as a sequence of scales, accent patterns, tempos, harmonies, etc. It's a "fuzzy sequence." I could give the tile-to-MIDI mapper's AI this translation score, and nudge *that* AI with a foot controller from time to time. Really keep those hands on the banjo.
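One way such a "fuzzy sequence" could be represented, this is only a guess at the idea, not the author's actual design: an ordered list of loose section specs (a scale, a tempo range), where realizing a section fixes the fuzzy parts at random and maps incoming letters through the section's scale.

```python
import random

# Hypothetical fuzzy score: each section pins down a scale but leaves
# the tempo as a range, to be settled at performance time.
SCORE = [
    {"scale": [0, 2, 4, 7, 9], "tempo": (90, 110)},            # pentatonic
    {"scale": [0, 2, 3, 5, 7, 8, 10], "tempo": (120, 140)},    # natural minor
]

def realize(section, letters, root=60, seed=None):
    """Resolve one fuzzy section: pick a concrete tempo, then map
    letters onto the section's scale degrees."""
    rng = random.Random(seed)
    scale = section["scale"]
    tempo = rng.randint(*section["tempo"])
    notes = [root + scale[(ord(c.upper()) - ord('A')) % len(scale)]
             for c in letters]
    return tempo, notes

tempo, notes = realize(SCORE[0], "CAB", seed=7)   # notes → [64, 60, 62]
```

A foot controller nudging the AI would then amount to advancing (or revisiting) sections in `SCORE`.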

That's my goal: keep those hands on the guitar or banjo._________________When the stream is deep
my wild little dog frolics,
when shallow, she drinks.

I was seeing more song description back in the 90s, before everybody started using sequencers/DAWs to manage real-time plugins.

I am guessing that we are distinguishing between simple notation (such as MIDI) and dealing with something more programmatic. I have made generative patches in Max/MSP which definitely are a complete description of the music heard. Also I have dabbled in CSound and SuperCollider with decent results. HMSL seems capable of this also, but I haven't really worked with it.

In the 80s, I wrote a very crude MCL (Music Composition Language) in Forth. It was so crude that the entire code was less than a page long! A few lines were for the driver that controlled my Digisound 80 via a parallel cable and the printer port. A few more lines defined the 12 notes, and the few remaining lines were for the melody from Kraftwerk's Computer World. Sadly, I lost the code years ago. It would've been trivial to rewrite it, but I never bothered!

However, I've been using Csound for the last decade or so, with the Haskore frontend for the last few years. I extended Haskore a little bit to support the features I wanted - more detailed note annotations and support for more Csound opcodes.

More recently, I wrote some code to translate MIDI files into Csound's score language, splitting polyphonic parts into multiple monophonic scores. I then use Csound to play the scores, converting back into MIDI via my Kenton Pro Solo, and recording the result via my soundcard.

This is a lot more complicated than the crude system I created in the 80s, but it's also a lot more powerful - and I'm still adding features. Yesterday I began using the Csound mixer opcodes so I can make dynamic mixing changes. I also began using a feature of my midi2cscore tool that lets me select a time range. Now I can split a piece into sections, record and mix each section independently, and then join them up later in a final mixing pass. So, yesterday I started work on the first 20 seconds, represented in the score as 0-60 because of the tempo changes.
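A back-of-envelope check of that 20-seconds-to-0-60 mapping: assuming the score units are beats, 60 beats in 20 seconds works out to an average of 180 BPM (the piece has tempo changes, so this is only the average rate over the section).

```python
def seconds_for_beats(beats, bpm):
    """Duration in seconds of `beats` beats at a steady `bpm`."""
    return beats * 60.0 / bpm

duration = seconds_for_beats(60, 180)   # → 20.0 seconds
```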

So I'm making heavy use of an MCL, though not all of it directly. I'm mainly writing the mixing score and rarely looking at the score files that define the notes._________________http://soundcloud.com/nerdware/
"render unto digital what is due to digital, render unto analogue what is due to analogue"

Quote:

In the 80s, I wrote a very crude MCL (Music Composition Language) in Forth. It was so crude that the entire code was less than a page long! A few lines were for the driver that controlled my Digisound 80 via a parallel cable and the printer port.

Sounds like brilliant fun! Some Forth-driven Digisound is just the sort of thing that sounds like good listening to me. I have no experience with Forth, but it does seem mind-bogglingly elegant. This kind of stuff makes me question what little I think I know about computers:
http://www.ece.cmu.edu/~koopman/stack.html

And by coincidence, my searches two weeks ago on compilers and MIDI sequencers on the Atari ST platform linked me to the work of Frank Rothkamm, who it seems has been developing a Forth composition environment for the Atari ST called IFORMM - (intuitive|improvised)(future|forth)oriented(retrograde|realtime)(motion|music)(music|machine)

Forth is pretty hardcore programming by today's standards. I don't recommend it as a first experience of programming! However, it's great for getting dirty with the hardware. When I used it, there was no other way. Thankfully we've come a long way since then.

If you want to mess around like that now, I recommend an Arduino starter kit. You could use Forth on it, but everyone I know using an Arduino codes for it in C. I don't recommend C as a first programming experience either, but... the Arduino makes my first computer, a TRS-80, look very powerful and friendly. Unfortunately, there's not much support for beginner-friendly programming tools in this area.

So the Arduino reminds me of those early days with micros. The thing I really like about it is the electronics angle - I played with a breadboard before I got into software.

Quote:

And by coincidence, my searches two weeks ago on compilers and MIDI sequencers on the Atari ST platform linked me to the work of Frank Rothkamm, who it seems has been developing a Forth composition environment for the Atari ST called IFORMM - (intuitive|improvised)(future|forth)oriented(retrograde|realtime)(motion|music)(music|machine)

