I ran the test application and can confirm that this bug occurs in Windows Vista RTM. Essentially, every program that uses the Microsoft AVIFile APIs (avifil32.dll) to generate AVI files will produce them with a malformed RIFF structure!

There are two of them, and one is generated in a very odd place. Worse yet, LIST chunks are supposed to be containers for other chunks, but the first 'movi' LIST starts with garbage.

What's going on here: In most cases, it isn't possible to generate an AVI file in a single pass. The reason is that AVI, like any other RIFF file, uses chunks that are prefixed with their data length, and it generally isn't possible to predict the size of the 'movi' chunk when generating the AVI incrementally, nor is the data small enough to buffer everything in memory. To get around this, the AVI file is generated with dummy size fields, and when the file is finished, those size fields are backpatched with the correct sizes, which are known once all of the data has been written. It looks like someone either broke the backpatching code in Vista so that it rewrites the 'movi' header in the wrong place (at a position aligned to 2K minus 12 bytes, effectively extending the chunk backwards), or somehow managed to open two 'movi' chunks.
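The backpatching scheme can be illustrated with a minimal sketch. This is my own reduced illustration of the technique, not the actual avifil32 code; a real writer emits many chunks and patches several size fields, not just one:

```cpp
#include <cstdint>
#include <cstdio>

// Write a 'movi' LIST with a dummy size field, stream the data, then
// seek back and backpatch the size once the real total is known.
// Returns the end-of-list file position.
long WriteMoviList(FILE* f, const void* data, uint32_t dataLen) {
    fwrite("LIST", 1, 4, f);
    long sizePos = ftell(f);            // remember where the size field lives
    uint32_t dummy = 0;
    fwrite(&dummy, 4, 1, f);            // placeholder, patched below
    fwrite("movi", 1, 4, f);
    fwrite(data, 1, dataLen, f);        // in real life: many incremental writes

    long end = ftell(f);
    uint32_t listSize = (uint32_t)(end - sizePos - 4);  // everything after the size field
    fseek(f, sizePos, SEEK_SET);
    fwrite(&listSize, 4, 1, f);         // backpatch (RIFF is little-endian; raw write is fine on x86)
    fseek(f, end, SEEK_SET);            // resume appending
    return end;
}
```

The Vista bug described above would correspond to the backpatch step seeking to the wrong position, or to a second LIST/'movi' header being emitted.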

I should note that VirtualDub does write correct AVI files when running under Windows Vista, because it has its own internal routines for doing so. It's still affected by this bug, though — more on that in a bit.

So, what's the correct fix? Well, it's not so simple. Here is a hex dump of the index at the end of the file:

The field highlighted in red is the position value for the first video frame in the file, and is supposed to be a relative offset within the 'movi' LIST chunk to the start of the sample chunk header. Well, the first 'movi' LIST is at 0xDC, and 0xDC + 0x08 + 0x71C = 0x800, which is where the first sample in video stream 0 ('00db') lies. So the first 'movi' chunk is the one that is consistent with the index. We can fix the RIFF structure of the file by turning the garbage at the start of the 'movi' chunk into another padding (JUNK) chunk:
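A repair along those lines could be sketched like this. This is my own sketch using the offsets from the example above, not code from any shipping tool; the function names and parameters are mine:

```cpp
#include <cstdint>
#include <cstdio>

// Overwrite the start of the garbage region with a JUNK chunk header so
// the bytes up to the first real sample chunk parse as padding.
// garbageStart: first garbage byte (here, the 'movi' LIST's payload at
// 0xDC + 12). firstChunkPos: where the first real chunk begins (0x800).
bool PatchGarbageAsJunk(FILE* f, long garbageStart, long firstChunkPos) {
    uint32_t junkSize = (uint32_t)(firstChunkPos - garbageStart - 8);
    if (fseek(f, garbageStart, SEEK_SET) != 0) return false;
    if (fwrite("JUNK", 1, 4, f) != 4) return false;
    // RIFF size fields are little-endian; a raw write is fine on x86.
    return fwrite(&junkSize, 4, 1, f) == 1;
}
```

With the example file, `PatchGarbageAsJunk(f, 0xDC + 12, 0x800)` produces a JUNK chunk with a 0x710-byte payload, which carries a parser cleanly forward to the '00db' chunk at 0x800.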

Now the file is consistent... but it still isn't compatible. It turns out that VirtualDub still won't read the file because the oddball positioning conflicts with some code I have that tries to detect relative vs. absolute indexing. (There are some other applications that have similar problems.) To make matters worse, the AVIFile library in Windows XP SP2 also has problems with the file, so the "AVIFile input driver (compat.)" mode in VirtualDub, which tells it to use AVIFile, won't work either. In fact, when I tried to use the old Media Player (mplay32.exe) to play the file, since it can be forced to use AVIFile through the Video for Windows MCI driver, the Windows XP AVIFile library actually crashed in memmove(). Great.
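To see why the oddball positioning confuses readers, here is a hedged guess at the shape of such a detection heuristic. This is not VirtualDub's actual code; it's a sketch of the general problem, using the convention from the example above (relative offsets counted from the 'movi' FOURCC, i.e. LIST start + 8):

```cpp
#include <cstdint>

// Some AVI writers store absolute file offsets in the idx1 index instead
// of offsets relative to the 'movi' LIST, so a reader has to compare the
// first index entry against known chunk positions to guess which
// convention is in use. Ambiguous layouts break this kind of guess.
bool IndexLooksAbsolute(uint32_t firstEntryOffset,
                        long moviListPos, long firstChunkPos) {
    if ((long)firstEntryOffset == firstChunkPos)
        return true;                                   // absolute file offset
    if (moviListPos + 8 + (long)firstEntryOffset == firstChunkPos)
        return false;                                  // movi-relative
    return false;  // ambiguous; assume relative (the common case)
}
```

With the example file, an entry of 0x71C against a 'movi' LIST at 0xDC resolves as relative; a file whose 'movi' LIST sits in an unexpected place can make both interpretations look plausible, or neither.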

So, basically, I don't have a good solution at this time, other than to... chastise... the nearest Microsoft representative. I can tell you that VirtualDub 1.7.2 will be able to read such files, because I was tipped off to this problem and checked a fix into my dev tree before I became aware that Windows Vista itself was the culprit. For those of you who are shipping or have shipped applications that write AVI files through AVIFile, I don't know what to tell you. I haven't dug into the Vista avifil32.dll and couldn't tell you a fix, and DirectShow is a very rough way to write an AVI file from scratch. AVIFile writes relatively simple AVI files compared to what's flying around, though — it doesn't handle indexing beyond 2GB — so I'd recommend looking into writing a replacement. It's not that hard to generate a basic AVI file.

If you see a crash when selecting the "screen capture" driver in capture mode, you don't have it.

Some people have unfortunately discovered that this prevents entering capture mode again, since the crash happens after VirtualDub remembers the last capture driver. Workaround: Hold down SHIFT when selecting File > Capture AVI from the menu.

One of the lessons you eventually learn the hard way as a programmer is never to "fix" working code. If you have a confirmed bug, then fix it; if you have a feature you need to add, then by all means, change it.

"Fixing" code that just looks buggy, without actually knowing how it is bad, is a mistake.

Let's say you have an 80% chance of getting code right in the first place, and thus a 20% chance of getting it wrong, and that a second review of that code has the same 80% accuracy. That means if you just look at it again, you have a 64% chance of verifying correct code as correct, a 16% chance of having written it incorrectly but catching the bug, and a 4% chance of blowing it both times. That leaves a 16% chance of writing it correctly and then thinking it's wrong. There's then a chance that you'll act on that instinct and break working code.

Here's a real example that I just discovered:

A certain vendor's C runtime library has code to display an error dialog under certain circumstances, such as a failed assert, including the program path. If the path is too long, it truncates it and displays only the end of the path, with an ellipsis at the start. This was done via strncpy().
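A hypothetical reconstruction of the pattern follows. This is my sketch, not the vendor's actual code; the function name, buffer size, and paths are made up:

```cpp
#include <cstring>

// If the path doesn't fit, keep only its tail and overwrite the first
// three characters with an ellipsis. The strncpy() deliberately does NOT
// null-terminate: the rest of the tail must survive after the "...".
void FormatPathForDialog(char* dst, size_t dstSize, const char* path) {
    size_t len = strlen(path);
    if (len < dstSize) {
        strcpy(dst, path);
        return;
    }
    memcpy(dst, path + (len - (dstSize - 1)), dstSize);  // tail + null
    strncpy(dst, "...", 3);  // relies on the missing terminator
}
```

Swapping that strncpy() for a "secure" variant that always terminates, such as `strncpy_s(dst, dstSize, "...", 3)`, writes a null at `dst[3]` and chops the displayed path down to just the ellipsis.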

Now, I'm not really a fan of the secure string library -- I'd rather just use a string class instead -- but strncpy() is admittedly a bit risky to use given its tendency to create non-null-terminated strings. You can use strncpy() safely, but it's easy to forget or goof the boundary check, which is why always-terminating alternatives like strlcpy() are often recommended instead. That's apparently what happened here: the strncpy() call was replaced with a bounded copy that always null-terminates. At first glance, the change looks like a perfectly good improvement to eliminate any potential buffer overrun mistakes... except for two problems. One, the source is a string literal and the destination is a fixed buffer that is far larger than four characters, so there was never any issue with non-termination. Two, the original code actually depended on strncpy()'s non-terminating behavior, because it puts the ellipsis at the beginning of the string; a terminating replacement cuts the string off right after the ellipsis. The attempt to fix a non-existent bug actually broke the code.

So, basically, don't guess at bugs. If you think there is a bug, reproduce it first so you can actually confirm it and verify that you actually fixed something instead of breaking it, and then keep that case around for regression testing.

Animated GIF support is indeed in this version. I experimented with the animated palette delta trick that a reader suggested, and it does create some really nice animated GIFs. (For the three of you who remember my really old DOS paint program, I pulled the adaptive palette generation code from it.) Animated GIF output wasn't really meant to be a first-class feature in VirtualDub, particularly since I don't want to implement two-pass encoding just for an optimized global palette, so the output path doesn't have options and tends to write enormous files. Still, if you want to experiment with it, you can do so via the Tools > Create Animated GIF menu option.

There are a few other miscellaneous features in this version as well. I threw in a CPU version of the "warp resize" display pixel shader I posted a while back, and the "display decompressed output" option can now be toggled on the fly, so you can preview part of a save operation without slowing down the whole thing.

The real features in this version, though, are all due to me screwing around a bit with 3D acceleration.

First, the Direct3D and D3DFX display minidrivers now have hardware YCbCr acceleration support for most formats, including UYVY, YUY2, YV24, YV16, YV12, I420, and YVU9. I used pixel shaders for this, so it's not vulnerable to the imprecise or incorrectly ranged hardware conversions that the DirectShow Video Mixing Renderer 9 sometimes hits with StretchBlt(). Also, the D3D9 and D3DFX minidrivers now have full support for field display modes. The plan is to eventually switch the default display path from DirectDraw to Direct3D for Windows Vista, but I'm not ready to make the switch yet, as I'm still seeing annoying issues with Vista and NVIDIA display drivers... and I haven't gotten around to reinstalling Vista RTM on my ATI-based machine to figure out who to blame.
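For reference, here is the limited-range Rec. 601 math that any correct converter, shader or otherwise, has to match. This is a generic CPU sketch of the standard equations, not VirtualDub's shader code; the "incorrectly ranged" failure mode mentioned above typically amounts to treating studio-range luma (16-235) as if it were full-range 0-255:

```cpp
#include <algorithm>
#include <cstdint>

// Round and clamp a result into the 0..255 byte range.
static uint8_t Clamp8(double v) {
    return (uint8_t)std::min(255.0, std::max(0.0, v + 0.5));
}

// Rec. 601 limited-range YCbCr -> RGB: Y spans 16..235, Cb/Cr 16..240.
void YCbCrToRGB(uint8_t y, uint8_t cb, uint8_t cr,
                uint8_t& r, uint8_t& g, uint8_t& b) {
    double yf = 1.164 * (y - 16);
    r = Clamp8(yf + 1.596 * (cr - 128));
    g = Clamp8(yf - 0.392 * (cb - 128) - 0.813 * (cr - 128));
    b = Clamp8(yf + 2.017 * (cb - 128));
}
```

A conversion that skips the 1.164 scale and the 16 offset leaves peak white at 235 instead of 255, which is exactly the kind of washed-out output a miswired hardware path produces.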

Second, preview acceleration in capture mode now has options for field-based display. This isn't particularly useful during normal capture, but it comes in handy if you're viewing non-interlaced output... like if a certain programmer decided to use his new capture device to play an old PlayStation 1 game, like Lunar 2....

The distinctive feature in this version, though, is the screen capture support in capture mode. Now, you might think... why the heck did he add screen capture? Because he's too cheap to buy a real screen capture program? Well, yes, but the real reason is that I found an interesting way to do it. The screen capture minidriver supports OpenGL acceleration, which means that if you have a 3D card with pixel shaders, you can get hardware-accelerated scaling, RGB-to-YCbCr conversion, and duplicate frame detection (probably the strangest use ever for an occlusion query). I have successfully captured a 1920x1200 desktop, shrunk to 960x600, converted to YUY2, and compressed with Huffyuv at 30 fps. Combine this with "What U Hear" or "Stereo Mix" or whatever your sound card calls the loopback, and you can capture motion video with sound with surprisingly good sync. It even works if another program is using the 3D hardware -- I successfully captured Final Fantasy VII in an emulator at full rate. The downside is that it doesn't seem to work with the NVIDIA drivers for Vista, where it just gives a black screen; it fails even if the DWM is disabled and only Aero Basic is displayed. Again, I haven't tried this with ATI's Vista drivers yet, although I did test it successfully on a RADEON X850XT in Windows XP. If anyone has an idea how to get desktop readback into a texture working on Vista, or manages to get it working with WDDM drivers, please let me know.