It looks like the zooming issue is related to mouse scrolling freeware. From seeing the comments below about “KatMouse”, I took a look at a tool I run called “WizMouse.” The purpose of these tools is to allow scrolling windows without focusing on them. Settings can be changed in these to fix the problem.

Last night I was exporting a video I’ve been working on. Since I’ve lately become very interested in color correction in video, I was not pleased to see that the black level in the video was shifted brighter when I played it back in VLC. It looked like the black was clipped at broadcast levels: in short, set to (R, G, B) = (16, 16, 16) instead of the (0, 0, 0) I had been working towards.

Searching the web, I found lots of people having problems with gamma in mp4/h264 files exported from Premiere with QuickTime, but I didn’t use QuickTime, and a gamma shift wouldn’t move the black level. I also tried a different codec, so my suspicions started to shift towards VLC.

It turns out that VLC was partly the problem. In my search I found this blog entry by Ben Krasnow which pointed me in the right direction.

In the NVIDIA Control Panel, go to “Adjust video color settings”. Under “How do you make color adjustments”, I chose “With the NVIDIA settings”, clicked the “Advanced” tab, and set the dynamic range to “Full”.
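What was happening is that the video was being treated as limited range (broadcast levels, where black is 16 and white is 235) instead of full range (0–255). The mapping between the two is simple arithmetic; a quick sketch in Python:

```python
def limited_to_full(y):
    """Map a limited-range (16-235) video level to full range (0-255)."""
    full = (y - 16) * 255 / 219
    # Clamp, since limited-range signals can legally carry values
    # below 16 or above 235.
    return max(0, min(255, round(full)))

print(limited_to_full(16))   # broadcast black -> 0
print(limited_to_full(235))  # broadcast white -> 255
```

If the player or driver skips this expansion step, broadcast black (16) is displayed as-is, which is exactly the lifted, washed-out black I was seeing.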

I was trying to set up mingw/msys with freeglut and GLEW, to be able to do some programming in OpenGL. None of the instructions I found seemed to be just right, so I’ve tried to document the steps I needed to get it to work.

I have used Processing with an external editor for years now, and I relied heavily on the “use external editor” option in the IDE. That option is now gone, and has been replaced by a command line option instead.

At first the removal of that option annoyed me, but it has grown on me after I managed to set it up properly.

To set it up in Notepad++ you have to install a plug-in called NppExec. That can be done with the Plugin Manager.

When that is done, open up the source file for a Processing sketch, and press F6.
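F6 opens the NppExec execute window, where you enter the command to run. Something along these lines should work; the exact flags depend on your Processing version, and this assumes processing-java is on your PATH:

```
cd "$(CURRENT_DIRECTORY)"
processing-java --sketch="$(CURRENT_DIRECTORY)" --output="$(CURRENT_DIRECTORY)\build" --force --run
```

You can save this as a named script in NppExec so you don’t have to type it again next time.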

I presume you have been able to download and install the VSTi yourself, and that it shows up in the VST list in Ableton Live.

Start up a new project, and drag the SaneStation VSTi onto a MIDI track. Rename the track to SaneStation. The name is not important, but I will assume you used it, as that makes it easier to follow along here. This will be the main track for controlling the synth and editing patches. You can only have one instance of SaneStation (more than one will probably crash), so keep just the one in there.

Now, add another MIDI-track. (Ctrl-shift-T)

Set “MIDI To” to point to “SaneStation”

In the dropdown box underneath, choose “1-sanestation”.

Add a new MIDI clip in the track, and set “Bank”, “Sub-Bank”, and “Program” all to 1.
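As an aside, the Bank/Sub-Bank/Program boxes in a Live clip correspond to standard MIDI messages: Bank Select MSB (CC 0), Bank Select LSB (CC 32), and a Program Change. A rough sketch of the raw bytes involved (note that Live displays these numbers 1-based, while MIDI itself is 0-based):

```python
def bank_program_bytes(channel, bank, sub_bank, program):
    """Build the raw MIDI messages for a bank/program selection.

    Arguments are the 1-based numbers shown in Live; the messages
    themselves use 0-based values.
    """
    ch = channel - 1
    return [
        (0xB0 | ch, 0, bank - 1),       # CC 0: Bank Select MSB
        (0xB0 | ch, 32, sub_bank - 1),  # CC 32: Bank Select LSB
        (0xC0 | ch, program - 1),       # Program Change
    ]

# Bank 1, Sub-Bank 1, Program 1 on channel 1:
print(bank_program_bytes(1, 1, 1, 1))
```

This is just to show what the clip settings actually send to the plug-in; Live takes care of all of it for you.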

Then add some notes and hit play.

To tweak the sound, select the SaneStation track, and the VSTi interface should pop up.

Make a cool sound.

Add another MIDI track.

Set “MIDI To” to “SaneStation”, and choose “2-sanestation” in the dropdown.

Add a MIDI-clip, and set “Bank” and “Sub-Bank” to 1, “Program” to 2. Add some notes, hit play.

To edit the sound for this channel, select the SaneStation track again, and in the Track View pane select “Instrument 1” in the SaneStation controller.

You can now tweak the sound in the VSTi GUI.

Repeat this for additional tracks.

Do all your composing and arranging and stuff like that, as usual.

When you are done, and ready to export, you have to do the following.

In arrangement view

Make sure that every track starts at the same time; fill in with blank clips if necessary.

For each track, select all the clips in the track.

Right click, and choose “Consolidate”.

Right click each track again, and choose to export the MIDI clip.

Name them wisely.

Open the VSTi-GUI for SaneStation, and export the soundbank to the same directory as all the midi-clips.

You should now be able to put all the files together with the compile-utility that came with SaneStation. Refer to that manual/readme for how it is done.

There might be easier and/or better ways to do this in Live, but this was the thing I figured out could work, and it did in testing, so…

Let me know if something is hard to understand, or if there are any problems.

There is also some information about using VSTs with multiple channels in an article from Sound On Sound.

I constantly come up with new ways I think I will use to keep track of the charge status of my camera batteries, but I tend to forget between each time I am on a shoot, so it is kinda silly. The other day I came up with this:

I used a marker, and added a plus sign to one side of each battery cover. When the battery is fresh, the cover goes on so that the plus faces the contacts. When it is flat, the other way around. Now it is easy to see if I have just charged, or used up, whatever battery I fish out of my bag.
I also added a piece of gaffers tape on the “negative” side, in case the plus gets rubbed off.

I am working on making a bass synth. It will be controlled by some old organ pedals, but currently it just works by adjusting pots.

In the schematic you can see the voltage regulator in the top left. I use 12 V DC in, from an old PC power supply, which I regulate down to 9 V.

The 4093 contains four NAND gates with Schmitt trigger inputs, and I use two of them. One controls the pitch of the sound, and the other turns the first one on and off, so you get kind of an arpeggio. You can also switch that on and off with SW1.

The 4040 is a frequency divider. It is fed the output of the tone oscillator, and its outputs are in turn fed into two rotary switches. That way you can mix two octaves together at the same time, getting a richer sound.
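Why dividing frequency gives octaves: each output of the 4040 halves the frequency of the previous one, and halving a frequency drops the pitch exactly one octave. A quick illustration in Python, using a made-up 440 Hz oscillator pitch:

```python
base = 440.0  # example oscillator pitch in Hz (just for illustration)

# Each 4040 stage divides the input by another factor of two,
# i.e. one octave down per stage.
for stage in range(1, 5):
    print(f"Q{stage}: {base / 2**stage} Hz")  # 220.0, 110.0, 55.0, 27.5
```

So whichever two outputs the rotary switches pick, the mixed tones are always whole octaves apart, which is why the result still sounds like one (fatter) note.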

Most of this project was inspired by the book “Handmade electronic music”, by Nicolas Collins.

If you have any questions, or suggestions for improvements, please drop me a line.

They wanted some sort of permanent installation that would make their hallways a bit more interesting, and they had previously seen one of my prints from the “Wasted time” project, and so I used that as a basis when I started thinking.

"Wasted time, 2009-03-27 11:49:01"

I also wanted to do something that was tightly connected with the department, and what they do, so sound would have to be, in some way, an element in the work.

As the work was to be permanent, and will be there for a long time, I wanted to make something that would need little to no maintenance, with no risk of it stopping working in some way. It should also not be too obtrusive, since people will need to walk past it every day, and I don’t want it to end up being an annoyance to the people who use the premises. With that in mind, I decided early on that I wanted to make some sort of generative prints, and started checking out possibilities at a print shop. The choice I made was to make prints on acrylic plates.

I went to the location, and after deciding where I wanted the finished plates to hang, I recorded the ambient sound in the hallways, with microphones placed at the spots where the pictures would be. The sound was then cut down to some interesting segments and normalized. I then used the sound as data for drawing curves, circles and lines, using the Processing language and the Minim library.
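The actual sketch was written in Processing with Minim, but the general idea can be shown in a few lines of Python. The sample values and the amplitude-to-radius mapping here are made up for illustration, not taken from the real piece:

```python
# Hypothetical illustration: each audio sample (normalized to
# -1.0..1.0) drives one circle in the drawing.
samples = [0.0, 0.5, -0.25, 1.0, -1.0]  # made-up sample data

width, height = 800, 200
circles = []
for i, s in enumerate(samples):
    x = i * width / len(samples)   # position in time maps to x
    y = height / 2                 # centered vertically
    r = abs(s) * 50                # amplitude maps to radius
    circles.append((x, y, r))

print(circles)
```

The point is just that once the sound is reduced to a list of numbers, any visual parameter can be driven from it deterministically, so the same recording always produces the same print.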