In my last entry about a shader compiler, a commenter asked why I don't use 3D acceleration. One of the problems with 3D acceleration is availability; there are a number of situations in which it simply isn't viable, such as:

No viable 3D hardware available. This is particularly the case for video setups -- a Matrox Rainbow Runner makes a decent MJPEG capture device, but not a very good 3D accelerator.

Non-local session. If you're logged in via Terminal Services or Remote Desktop, you get no access to 3D hardware.

Locked workstation.

Headless execution environment. If you're running as a service for job scheduling reasons, you have no access to the 3D hardware.

The one that I forgot, however, is virtualization. Until recently, running in a virtualized CPU environment meant you got no 3D acceleration, because only a 2D display adapter was emulated. This was true for the two majors at the time, VMWare and Virtual PC. This is perhaps most annoying for testing -- I don't know about you, but I don't have any real systems left running Windows 98. Thus, your only option for running under these systems was to fall back to using no 3D acceleration at all.

That's changed, now that both VMWare and Parallels have 3D acceleration support.

I've been curious how effective the 3D support is, so I installed VMWare Player and dumped the Direct3D caps bits from it and from my friend's MacBook running Parallels. Just for kicks, I'll throw in the caps from the system I ran VMWare Player on as well:

(Getting DXCapsViewer running on VMWare Player was a bit interesting, as for some reason it wouldn't start -- probably because I tried running it under Windows XP RTM. If you get an error about a problem with the application configuration, open DXCapsViewer.exe in Visual Studio in resource mode and delete the manifest resource.)

For a while now I've been wishing that I had a nicer way to write video filters. I've been working on VirtualDub long enough that the actions necessary to create a new video filter -- writing the callbacks, writing the pixel processing routines, etc. -- are mostly second nature. However, it's still a lot of work to toss out a quick video filter. For example, a person emailed me a while back asking for a YCbCr-to-RGB video filter, because he had the interesting situation of trying to record YPbPr component outputs connected to the RGB component inputs of a video capture card. In the end, it took me a couple of hours to bang out a test filter that ideally should have taken only a few minutes. It would be nice to have a language for writing quick filters, even if the result were slower than straight C++ code.
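The conversion that filter performs is just the standard Rec.601 studio-range matrix applied per pixel. As a rough sketch of the core math (my own code, not what actually shipped in that test filter):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Clamp a floating-point result into the 0-255 range of an 8-bit channel.
static uint8_t clamp8(double v) {
    return (uint8_t)std::clamp((int)std::lround(v), 0, 255);
}

// Rec.601 studio-range YCbCr (Y in [16,235], Cb/Cr centered on 128) to RGB.
RGB ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr) {
    double yf  = 1.164 * (y - 16);
    double cbf = cb - 128.0;
    double crf = cr - 128.0;
    return RGB{
        clamp8(yf + 1.596 * crf),
        clamp8(yf - 0.392 * cbf - 0.813 * crf),
        clamp8(yf + 2.017 * cbf),
    };
}
```

A real filter would run this over every pixel of the frame, which is exactly the kind of boilerplate that makes even a trivial filter take hours instead of minutes.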

Well, a couple of weeks ago I sat down and started banging out such a system. Presenting VDShader:

You need VirtualDub to process video with it (I tested it with 1.6.19 and 1.7.2), but if you just want to play with the editor and compiler, you can run the shader editor as follows:

rundll32 vdshader.vdf,OpenEditor

The VDShader video filter is probably the most complex video filter for VirtualDub that I've ever written, because it consists of:

A shader compiler.

A shader bytecode interpreter.

A shader bytecode x86 JIT engine.

A text editor.

There are still a lot of things missing from the filter, particularly primitives in the language, but to give you some idea of what a shader looks like, here's the convert-to-grayscale example shader:
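The original listing isn't reproduced here, but a grayscale conversion in this HLSL-like style might look something like the following sketch. This is my own reconstruction using the standard Rec.601 luma weights; the actual entry point name and syntax in VDShader may differ:

```
// Hypothetical sketch, not the shipped example shader.
float4 main(float4 pixel) {
    float y = dot(pixel.rgb, float3(0.299, 0.587, 0.114));
    return float4(y, y, y, pixel.a);
}
```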

The shader language parsed by the filter is similar to a subset of the vector processing languages handled by NVIDIA's Cg compiler and Microsoft's High Level Shader Language (HLSL). I chose a similar syntax because I'm familiar with HLSL, and this style is the most fluid I've seen so far for working with colors and vectors. The idea is that if someone wants to try a really quick image processing operation, like scaling green by 80%, it can be expressed in this filter with shader code as simple as (pixel.g * 0.8).