I've been working on REAPER's video rendering/processing pipeline, allowing video processors to access and share memory with JSFX and ReaScripts, and exposing some new APIs for things like FFTs. It's pretty low-level -- the JSFX/video processors are responsible for a lot of tedious work like synchronizing with each other -- but the key thing is that we (REAPER users) can now effectively build things like AVS into the rendering pipeline, combining video and rendered audio and rendering the result back out to video. I find this really awesome and exciting in the way AVS was, and in a way that has been missing for a long time. And now we can render AVS-like stuff directly to YouTube videos.

So why is this like AVS? AVS had two key things that made it really cool:

You could build complex things by putting simple things together

You could also write complex things in code (making those "simple things" a little less simple).

(I'm bad at counting: there was a third thing -- AVS was cool because awesome people made tons of awesome presets for it -- but REAPER doesn't really address that. Then again, REAPER is made for creators, so I guess we don't have to fulfill that one. So, two things.)

REAPER does both of these too -- you can use video material directly, or use one of many video processor presets to do basic things. You can also do the second thing by coding video processors (or hacking existing ones) to do new things. As an example, the effect in the video below is a combination of an existing preset (the spectrum analyzer) and a little bit of code (15 lines or so) which transforms that into polar coordinates (and I could save that second bit of code as a preset for re-use later, including on a video file, etc).
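The polar trick itself is the standard image remap: for each destination pixel, compute its angle and radius relative to the center, and sample the source image there (angle along one axis, radius along the other). This isn't the actual JSFX video processor code -- just a rough Python sketch of the idea, with names of my own choosing:

```python
import math

def polar_remap(src, w, h):
    # src: h rows x w cols of pixel values. The horizontal axis of src
    # becomes the angle and the vertical axis becomes the radius, so a
    # spectrum drawn left-to-right wraps into a circle.
    dst = [[0] * w for _ in range(h)]
    cx, cy = w / 2.0, h / 2.0
    max_r = min(cx, cy)
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)
            if r > max_r:
                continue  # outside the circle: leave black
            a = (math.atan2(dy, dx) + math.pi) / (2 * math.pi)  # 0..1
            sx = min(int(a * w), w - 1)            # angle -> source x
            sy = min(int((r / max_r) * h), h - 1)  # radius -> source y
            dst[y][x] = src[sy][sx]
    return dst
```

A real video processor would do the same per-pixel sampling (ideally with interpolation) against the input frame's pixel buffer instead of a Python list.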

The future is awesome. Now if only the present could catch up.

Addendum:
I'm irrationally pleased by this bit of video processor code I spent far too long on, for converting HSV (360/1/1) to RGB:
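(For reference, the standard hue-sector form of that conversion -- hue in degrees, saturation and value in 0..1 -- looks roughly like this in Python; this is the textbook algorithm, not the author's video processor code:)

```python
def hsv_to_rgb(h, s, v):
    # h in [0, 360), s and v in [0, 1]; returns (r, g, b) each in [0, 1]
    c = v * s                      # chroma
    hp = (h % 360.0) / 60.0        # which 60-degree sector the hue is in
    x = c * (1.0 - abs(hp % 2.0 - 1.0))
    r, g, b = [(c, x, 0), (x, c, 0), (0, c, x),
               (0, x, c), (x, 0, c), (c, 0, x)][int(hp)]
    m = v - c                      # lift everything to match the value
    return (r + m, g + m, b + m)
```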