A small application made with Processing to produce MIDI control messages from a live audio feed. Auto-levelling frequency band meters are used to obtain “peak” output values across the entire analysed audio spectrum (whereas it is usual to have a strong bass response and a weaker treble response), and easing is applied to make the levels less erratic. Ten separate monitor outputs are supplied; each monitor is attached to one of the frequency band meters, and that meter’s output is scaled to the monitor’s particular output range, allowing for a reduced final output range as well as inverted ranges. Controls are provided to select the MIDI output device and channel, as well as which monitors are connected to which meters and the monitor output ranges.
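The auto-levelling and easing could be sketched roughly like this. To be clear, the class, field names, and constants below are my own guesses for illustration, not the actual implementation:

```java
// Hedged sketch of an auto-levelling band meter with easing.
// Everything here (names, decay/easing constants) is assumed.
class AutoLevelMeter {
    float peak = 1e-6f;          // running per-band peak, decays slowly so quiet bands still reach full scale
    float eased = 0f;            // eased output level
    final float decay = 0.999f;  // peak decay per update (assumed)
    final float easing = 0.15f;  // easing factor (assumed)

    // Feed one raw band amplitude; returns the eased, auto-levelled value in 0..1.
    float update(float raw) {
        peak = Math.max(raw, peak * decay);  // track a slowly decaying peak
        float level = raw / peak;            // normalise against that peak
        eased += (level - eased) * easing;   // ease toward the new level
        return eased;
    }

    // Scale a 0..1 meter value into a monitor's output range;
    // passing lo > hi gives an inverted range.
    static float toMonitorRange(float level, float lo, float hi) {
        return lo + (hi - lo) * level;
    }
}
```

The per-band peak tracking is what evens out the usual bass-heavy spectrum: each band is normalised against its own recent peak rather than a global maximum.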

I haven’t changed (at least not that I’m aware of, and not intentionally!) the actual image process, just the implementation and how the paper folds are determined – well, the last fold, anyhow, which I’ve animated.

The modifications to the implementation resulted in a massive speed gain (about 3x the original), enough to have “smooth” animations instead of one still image every two seconds. The “animation” I have done is pretty basic, and might be extended in future to animate more than the last fold, or to animate that fold in some other manner.

Oh, and I added another source texture – a photo of Funckarma at Plastic People in Shoreditch that I took last year. Neon glow FISK. Awwww yeahhhh. :o)

I’ve been thinking about adding pointy bits to some 3D models, instead of making everything from boxes and/or spheres. Maybe still a bit boxy (eg a square pyramid), but maybe also some nice cones. I saw a while back that one of the geometry-creating modes was a triangle fan, which looked perfect for making cones and pyramids.
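A triangle fan is exactly the right vertex layout for this: apex first, then the base ring. Here is a small helper that generates those vertices (this is my own toy code, not from the sketch):

```java
// Hypothetical helper: generate the vertex list for a cone drawn as a
// TRIANGLE_FAN -- the apex first, then points around the base circle.
class ConeFan {
    // Returns {x, y, z} triples: vertex 0 is the apex, the rest trace the
    // base ring, repeating the first base point to close the fan.
    static float[][] vertices(float radius, float height, int sides) {
        float[][] v = new float[sides + 2][3];
        v[0] = new float[] {0, 0, height};  // apex
        for (int i = 0; i <= sides; i++) {
            double a = 2 * Math.PI * i / sides;
            v[i + 1] = new float[] {(float) (radius * Math.cos(a)),
                                    (float) (radius * Math.sin(a)), 0};
        }
        return v;
    }
}
```

In a Processing sketch you would emit these between beginShape(TRIANGLE_FAN) and endShape() via vertex() calls; with sides = 4 you get a square pyramid, and with sides = 2 the “cone” collapses flat.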

Clearly, having just one cone isn’t enough, so I made a spiky ring which went through various changes (with varying animations) until I arrived at this. I quite like it when the “cones” drop down to having only 2 sides, so they are then just flat. Maybe a bit boring if just a triangle, but the arrangement of base-to-base cones here instead leaves a rather pleasant kite shape, reminding me of a blade, perhaps. When there are heaps of them – more than would at first seem sensible – a nice rippling/fluttering effect occurs. Overall it reminds me of an assignment I had in a class on colour to make a colour wheel – the name ConeWheel being a reference to that.

I’m surprised at how fast this runs in the P3D mode (faster than OpenGL in this case?), even with a seemingly large number of “cones”. It is more bright and cheery in P3D mode at any rate, and was chosen to allow posting to OpenProcessing.org.

I think I might find myself delving into (more) non-standard OpenGL calls within Processing in the near future. This is an example using a small library (SuperPoint) that allows you to build an array of points (that don’t move relative to each other) once and draw them many times. This allows you to do all your setup calculations just once, and allows you to plot multiple copies of the set of points as well. Here I’ve chosen to plot the same set of points at slightly differing scales and rotations (for the base geometry) and differing “point sizes” to give the spikes the appearance of little soft-pointy cones. I built in some wandering movement that is enabled by default, so this sketch is pleasing enough to watch without having to fiddle with it. Of course fiddling is more fun… turn on the auto [c]entre mode and click the mouse somewhere (or press an arrow key) on (or off) the beat to some music, for example. Hit [s] to change the spin or [j]ump to a random position. Okay, so they’re not advanced spline-based keyframe activated audio-reactive magic – that’s to be added (by you or me?) later!
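The build-once/draw-many idea boils down to something like this (a minimal version of my own, not the SuperPoint API): the point array is computed once, and each “copy” is just a scale and rotation applied at draw time.

```java
// Sketch of build-once/draw-many: the fixed point set is computed once,
// then cheap transformed instances are produced per draw call.
// Names and the 2D simplification are mine, not SuperPoint's.
class PointCloud {
    final float[][] pts;  // fixed set of {x, y} points, built once

    PointCloud(float[][] pts) { this.pts = pts; }

    // Return a transformed copy: rotate about the origin by angle, then scale.
    float[][] instance(float scale, float angle) {
        float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
        float[][] out = new float[pts.length][2];
        for (int i = 0; i < pts.length; i++) {
            out[i][0] = scale * (pts[i][0] * c - pts[i][1] * s);
            out[i][1] = scale * (pts[i][0] * s + pts[i][1] * c);
        }
        return out;
    }
}
```

The expensive part (generating the point set) happens once in the constructor; drawing several slightly scaled/rotated instances per frame is then just a few multiplies per point.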

View sketch page for source code. Please note that the sketch itself appears to freeze after only a second or two of runtime. Please try building the program yourself in the Processing IDE – you cannot save files from the applet in any case (this is a video frame generator).

Features

Virtual Playhead

When the target framerate is adjusted, the frame number is recalculated
to maintain the play position as closely as possible

A “running time” is kept independently from the strict frame time, but
various operations cause the frame time or running time to be
determined by each other, eg if stepping frames, the running time is
set to match the frame time.
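The playhead logic above could look something like this (all names here are hypothetical, not the actual code): the running time is the authoritative position, and the frame number is re-derived from it whenever the target framerate changes.

```java
// Hedged sketch of the virtual playhead: running time and frame number
// are kept in sync in whichever direction the operation demands.
class Playhead {
    float runningTime = 0f;  // seconds, independent of strict frame time
    float fps;
    int frame = 0;

    Playhead(float fps) { this.fps = fps; }

    // Changing the target framerate recalculates the frame number so the
    // play position is maintained as closely as possible.
    void setFps(float newFps) {
        fps = newFps;
        frame = Math.round(runningTime * fps);
    }

    // Stepping frames goes the other way: the running time is set to
    // match the new frame time.
    void stepFrame(int delta) {
        frame += delta;
        runningTime = frame / fps;
    }
}
```

So stepping 60 frames at 30 fps puts the running time at 2 seconds, and switching to 25 fps lands on frame 50 – the same 2-second position.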

Temporal Anti-aliasing (motion blur)

Frames containing movement (translation, rotation) are multisampled then recombined into a single image

The number of sub-frames rendered is governed by the fastest estimated motion for that frame, as well as a maximum limit determined by the user

Low-quality (or no) temporal anti-aliasing is put into effect for “realtime” playback to maintain a decent frame rate, with full-quality implemented whenever the movie is paused
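A plausible shape for the sub-frame decision (the names and the one-sample-per-pixel-of-travel heuristic are mine, not necessarily what the program does):

```java
// Hedged sketch: choose how many sub-frames to render for motion blur.
// More sub-frames for faster motion, clamped to the user's maximum, and
// forced down to a single frame during realtime playback.
class MotionBlur {
    // maxMotionPx: fastest estimated on-screen motion this frame, in pixels.
    static int subFrames(float maxMotionPx, int userMax, boolean realtime) {
        if (realtime) return 1;                // cheap path while playing
        int n = (int) Math.ceil(maxMotionPx);  // ~1 sample per pixel of travel (assumed heuristic)
        return Math.max(1, Math.min(n, userMax));
    }
}
```

The sub-frames themselves would then be rendered at evenly spaced points in time across the frame interval and averaged into the final image.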

I saw Johan’s video generator program and was interested to find out what made it tick. I really should have written something like this last year when I had a need for supplying lyrics for music at parties… I’d had a bit of an experiment in another direction, using custom HLSL pixel shaders in the Neon v2 VJ software I use, but that proved to be painful and not especially useful!

Anyhow, when I looked inside the hamstergyro, aside from the world of pain of setting up those hundred or so timed lyrics entries, I saw that the method used to calculate the animation was a bit confusing and awkward. My first thought was to post a comment recommending Johan check out the lerp() and lerpColor() functions for future projects, and I started typing some example code in the comments box to show how nice it might look in comparison.
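For reference, lerp() and lerpColor() reduce to very little code. Here they are re-implemented in plain Java so the example stands alone (in a sketch you’d just call the built-ins; lerpColor’s behaviour also depends on the current colour mode, this is the RGB case):

```java
// Plain-Java re-implementations of Processing's lerp() and lerpColor()
// (RGB mode), for illustration only.
class Lerps {
    static float lerp(float a, float b, float t) {
        return a + (b - a) * t;
    }

    // Colours packed as 0xAARRGGBB ints, like Processing's colours.
    static int lerpColor(int c1, int c2, float t) {
        int a = (int) lerp((c1 >>> 24) & 0xFF, (c2 >>> 24) & 0xFF, t);
        int r = (int) lerp((c1 >>> 16) & 0xFF, (c2 >>> 16) & 0xFF, t);
        int g = (int) lerp((c1 >>> 8) & 0xFF, (c2 >>> 8) & 0xFF, t);
        int b = (int) lerp(c1 & 0xFF, c2 & 0xFF, t);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
}
```

With these, an animation step is just lerp(startX, endX, progress) instead of hand-rolled increment bookkeeping.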

I then realised the code probably would not be formatted properly in the comment, and I also didn’t want to leave a suggestion to use code that I’d not actually tried out, so I downloaded the zip file and started hacking away to make sure it _would_ work the way I was suggesting. I may just have a use for this myself, so it seemed a worthwhile project to keep things tidy and make it easier to expand on in future! :o)

Of course, just testing out a couple of lerp() calls wasn’t going far enough, and what I’ve ended up with is something I reckon is pretty cool. A kind of video transport system with a virtual playhead that can be controlled independently of Processing’s current frame number and/or frame rate. I also had this brilliant idea to try out rendering in-between frames and blending them together for a nice motion blur effect. That part in particular I’m sure can be improved upon, but it is a nice start. :o)

In all, I’m pretty pleased with the result so far, and I’d like to thank Johan for the inspiration – seeing something done is sometimes all the motivation you need to go out and try it yourself.

Splitting this sketch into separate files proved a little troublesome. At some point the compiler told me that there was a duplicate of something (a static final int), when I’m certain there was only one of it… Remove the “offending” lines and the next thing is then flagged as a duplicate. I couldn’t figure out what the problem was, so tried cut’n’pasting the files back into a single file again. Seems to be okay (well, there were other problems, like a null pointer exception, but it didn’t fail on the compilation at least!). Closing the sketch (I’m not sure if exiting Processing altogether) and opening it again later made the problem mysteriously disappear… A gremlin in the system somewhere no doubt.

Well… I managed to pull the monolith into smaller many-liths, and added a bunch of new features. I think I spent most of the time staring at the screen and twiddling the knobs to see how it looked and worked. There was some great music playing on some net radio station found in an iTunes playlist – maybe getting a bit carried away with that, but a good sign for playing out later.

I now have a system (of sorts) where I can add a new “effect” and have it fired by the random effect button… I made a couple of samples and they don’t necessarily look that nice. One of them started as something basic, led to another step and another step, until it was helicopter blades spinning above the central box. Looks kind of okay at first, but when you have a lot of objects on screen (especially if there are a lot of ‘copters!), it just looks bad. Too vicious. Too much fast movement. Well, the framework is there, now time to do something more interesting with it.

There is another (major?) part to be started yet, and that is the beat synchronisation stuff. I haven’t written/drawn anything about it, and should probably do that before putting finger to button to code it. I might have a look inside the MIDI library for ideas on how to build a sequencer, and also have a look at the Mother library for combining multiple Processing sketches using OSC (Open Sound Control).

That reminds me, I wrote a sketch last week or something that allowed me to control Neon V2 via MIDI. I used an X-Y rectangular region to send two signals to one of my custom pixel shader effects, and by that was able to control the positioning of a graphic effect on-screen using my Wacom tablet. Neat! Well, there was one problem, and that was that the reaction time was somewhat delayed. I don’t know if my PC is up to the task of running a Processing sketch whilst generating visuals with Neon. Maybe if I reduce the screen updates from Processing the load will be lighter, but it seems at the moment that it isn’t really viable. Setting up the MIDI listening in Neon is especially problematic, since when you select MIDI control it latches on to any and all MIDI messages coming in on a certain channel (not sure if that is the correct terminology), so if there are two signals you have to chance it and hope it latched onto the right one in that split second.

In case you’re wondering, the screenshot for SpaceBoxes v1.2 above isn’t doctored, but it does include a nice graphic from another adventure in the background! Instead of just the partly coloured (and colour-inverted) colour wheel image, there are now a small group to choose from (well, cycle through by pressing SHIFT-B).

I’ve been trying to work out how to make transparent images, and have now seen an answer but am yet to try it – PImage.mask() – which allows you to apply a greyscale mask as an alpha channel on another image. It kind of sucks that ARGB PNGs don’t work just like you think they should.
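From what the docs say, mask() takes the grey level of the mask image as the alpha channel of the target. Here is a standalone version of that idea over raw pixel arrays (my own toy code to see what it should do, not Processing’s implementation):

```java
// Toy version of the mask-as-alpha idea: the mask's grey level replaces
// the target image's alpha channel. Pixels are 0xAARRGGBB ints.
class MaskDemo {
    // img and mask must be the same size.
    static int[] applyMask(int[] img, int[] mask) {
        int[] out = new int[img.length];
        for (int i = 0; i < img.length; i++) {
            int alpha = mask[i] & 0xFF;  // grey level of the mask (blue channel)
            out[i] = (alpha << 24) | (img[i] & 0x00FFFFFF);
        }
        return out;
    }
}
```

So a mid-grey mask pixel over an opaque red pixel should give you half-transparent red – which is exactly the semi-transparency I’m after.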

Meanwhile, the out-of-disk-space problem is getting to be a real hassle now. I can’t even open my image editor to create some test images! I have some blank DVDs here, so might find some “space” with their help. I don’t tend to trust optical media much. It is slow, and appears to be unreliable. Maybe I’m letting the debacle with the unfinalisable DVD video I made at Christmas for a gig at Matter (but since I couldn’t finalise the disc, it couldn’t be played) poison my attitude towards it. One thing I do know is that the burner in my laptop is slooowwwwww…

Another Processing sketch, this time with some boxes flying out of the screen at you… a little like an asteroid field, only super boxy. I’ve included a 2D image in the background as well, and it all swirls around almost dizzyingly… I don’t know if there is a better way to just have the 2D image as a backing layer with the 3D drawn over the top. I had to move the “camera” and objects so they weren’t hidden by the 2D image plane.

It shall also go into my box of tricks for an upcoming gig: Immersion at The Flea-Pit on Columbia Road in February.

Edit: versioning has bitten me again… I wanted to update the program on OpenProcessing.org, had it working nicely, then went too far! Started on more extensive changes that will take a bit more time and thought, and could possibly break everything. Perhaps I should fork every time I have it in a working state, or at the very least when I publish online.

Another Processing sketch, this time it is experiments in graphics using PGraphics layers to construct an image.

I like the idea of having artificial “construction lines” optionally appear in an image, similar to showing a “wireframe” on a 3D model. This example uses two layers of construction lines: one behind the objects, and one in front of the objects. The level of detail can also be adjusted; in this case, the crosshairs to the edge of the window are at a higher level (not to be confused with higher layer!) than the small “+” to mark a circle’s centre and a circle’s outline.

The base Shape class includes details for presentation (fill colour, edge colour, edge weight, etc) as well as position and animation (in this case simple velocity with acceleration), including drawing a “trace” of the shape’s location. The motion calculations aren’t “reality-accurate”, but they are time based, not frame based. To demonstrate this, you can increase and decrease the sketch’s frame rate and see that the objects seem to move in similar trajectories whether the updates are regular (eg at 32 fps) or infrequent (eg at 1 fps). Time is “stopped” (or more like skipped) when the sketch is paused.
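The time-based update described above amounts to something like this (a minimal sketch with hypothetical names, not the actual Shape class):

```java
// Time-based (not frame-based) motion: positions advance by elapsed
// seconds, so the trajectory is similar at 32 fps or 1 fps.
class Mover {
    float pos, vel, acc;

    Mover(float pos, float vel, float acc) {
        this.pos = pos; this.vel = vel; this.acc = acc;
    }

    // dt is elapsed seconds since the last update.
    void update(float dt) {
        vel += acc * dt;   // simple velocity-with-acceleration integration
        pos += vel * dt;   // not "reality-accurate", but time based
    }
}
```

With constant velocity, thirty-two small steps and one big one-second step land in exactly the same place; with acceleration they differ slightly (the integration isn’t exact), which is the “not reality-accurate” caveat.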

Another neat trick implemented is seamless wrapping of objects at the edges of the window. This is achieved by drawing an object multiple times, if required, offset by the window width and/or height. A similar trick is used to wrap the trace line generated for the shape.
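The wrapping trick reduces to working out which offset copies are needed. A one-axis sketch of the idea, with made-up names:

```java
// Seamless edge wrapping: when an object overlaps a window edge, it is
// drawn again offset by the window size so it reappears on the far side.
class Wrap {
    // Returns the x offsets at which a circle of the given radius must be
    // drawn so it appears seamlessly across the left/right edges of a
    // window of width w.
    static float[] xOffsets(float x, float r, float w) {
        boolean left = x - r < 0, right = x + r > w;
        if (left)  return new float[] {0, w};   // extra copy shifted right
        if (right) return new float[] {0, -w};  // extra copy shifted left
        return new float[] {0};
    }
}
```

The same test is applied on the y axis (and an object in a corner gets drawn up to four times); the trace line is wrapped with the same offsets.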

The overlaying of the (5!) layers hasn’t turned out quite as I expected. If you look closely (press space to pause the animation), you can see that the supposedly semi-transparent red “background” of the circles is actually fully opaque with respect to the layers beneath it (the yellow motion trace is the bottom layer, next layer is the target crosshairs to the edges of the window). In this case the result looks nice, but it isn’t what I was expecting, which was to be able to (partially) see the lower layers through the red circles, a-la Photoshop and its ilk. Perhaps this can be overcome using blend() instead of image(), or maybe it is a flaw/limitation in the way that the drawing operations modify the PGraphics bitmap. What I’d really like to do is have the layers merged by my graphics card so there’s no slowdown by Java having to do so much hard work going over the window several times (which I’m assuming happens now with the image() calls).

Overall, I like the idea of being able to render each object as it comes, instead of having to do everything in stages (eg draw the contents of each layer – all objects for each layer – before the contents of any other layer, necessitating multiple traversals of the object list and either needing to store intermediate results or recalculate, possibly leading to incoherence or other errors). I also like the idea that the objects know how to render themselves in different styles. One thing it isn’t, though, is particularly efficient. There’s a fair amount of overhead from drawing things multiple times, for instance, but maybe to get that particular look (or features), that’s the price you have to pay. I might try implementing something similar in DirectX (using C#.Net), though I don’t expect it will be quite as straightforward as using Processing!