As I already mentioned in my post Bleu 2 - electronic music - transcription, I composed the electronic music for Bleu 2 using only Bleu 1, a set of three clarinet improvisations, as source material (clarinet in its extended sense: bass clarinet, basset horn, and contrabass clarinet). No other samples and no synthesizers of any kind: only sound processing of the original recordings. My additional constraint was to make Bleu 2 the same duration as Bleu 1.

Feel free to remix

Now, you can listen to Bleu 1, download the files, and make your own remix. You can also listen to each of the parts while looking at Miró's corresponding painting.

