I'd basically just like to know if it's normal for VirtualDub to take so long to save a file as an AVI. When I aborted my previous attempt, the estimated time to finish was 12 hours and counting, and the file size was already 2 GB for 5 minutes of video, which definitely doesn't seem right. Is this normal, or has something gone horribly wrong here?

Just to note, the laptop I'm using has a 2.3 GHz processor (turbo to 2.9 GHz), so I'm aware the laptop may just not be up to the task. Also, Fast recompress is selected in VirtualDub, if that makes a difference.

Looking at it again, I may have gone overboard on the sharpening. However, I couldn't seem to see any difference at lower settings. I'm sure I've probably just made some horrible and embarrassing mistake somewhere. I seem to do that quite regularly ...

Considering the age of the Gunslinger Girl DVD, it shouldn't look bad enough to require most of those filters (like UnDot), but I could be wrong. Seeing that list, I can tell why it's so slow: that is not a "basic" set of filters, and it will run slowly with all of them.

Crop before resizing, or make sure to resize to a standard size after doing a resize->crop. Whether you want to compensate for the slight aspect-ratio distortion is up to you, but part of the problem may be that you're running some of those filters on a frame size that isn't the mod-16 (or at least mod-4) dimensions they're optimized for. You were trying to work with an 840x462 frame. Ick. For that matter, in the second script, if the video has been resized to 1920x1080, the Crop values taken from the 848x480 script will be wrong when applied to that larger frame.
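As a rough sketch, a crop-then-resize chain that lands on a mod-16 frame might look like this (the source line and crop values here are placeholders, not taken from your actual script):

```
# Crop first, on the native DVD frame, then resize to a mod-16 target.
MPEG2Source("episode.d2v")   # hypothetical source - substitute your own
Crop(8, 0, -8, 0)            # placeholder crop values - use your own
Spline36Resize(848, 480)     # 848x480 is mod-16 in both dimensions
```

Doing the crop on the native frame and resizing last means every later filter sees a clean, filter-friendly frame size.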

The gradfun filters have long been superseded by flash3kyuu_deband (aka f3kdb) and Dither tools. My personal preference is f3kdb, but either of them should work fine. They also have the fringe benefit of being able to dither up to 16 bits, so you can do more 'native'* high-bit-depth encoding in x264 than just handing x264-10bit an 8-bit source (provided you're using a patched build of x264 that lets you set --input-depth).

*even if artificially-produced
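A minimal sketch of that 'native' high-bit-depth route, with the caveat that the parameter names are from memory, so double-check them against the f3kdb documentation:

```
# Deband and dither up to 16 bits inside the AviSynth script
f3kdb(output_depth=16, output_mode=2)   # 16-bit interleaved output, if memory serves
```

The script's output then gets handed to the patched x264 build with --input-depth 16 so it is read as real 16-bit data rather than 8-bit.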

Finally, this is an aside, but I generally go for clip editing rather than converting an entire block of episodes (also because I'm rather strapped for hard drive space). This can be done in an efficient way if you plan things out ahead of time:

In addition to your regular HQ script with all of the filtering and slow options, prepare an 'LQ' script that's reduced down to only those filters that affect frame position - IVTC and Deint, basically. You don't want to do any cropping or other filtering, since this is going to be more or less throwaway. If you prefer, you can resize to a smaller resolution using BilinearResize() - use Bilinear, because it's fast and produces a soft, acceptable image that's easy to compress.
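An LQ script along those lines could be as short as this (filenames and the TFM/TDecimate pair are examples; keep whatever frame-position filters your HQ script actually uses):

```
# LQ reference script - frame-position filters only, no crop/denoise/sharpen
MPEG2Source("episode.d2v")   # example source - match your HQ script's source line
TFM()                        # field matching
TDecimate()                  # decimation - together these perform the IVTC
BilinearResize(432, 240)     # optional: small, soft, and fast to encode
```

The only requirement is that frame N here corresponds to frame N in the HQ script, so the ranges you note down stay valid.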

Convert the LQ script's output to MJPEG using a bitrate where it still looks okay. For 432x240, 900-1500 kbps is acceptable. For 848x480, you'll probably want to raise it to 2500-3000 or so.
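One way to do that conversion from the command line, assuming an ffmpeg build that can open AviSynth scripts directly (the filenames and bitrate are just examples):

```
ffmpeg -i lq.avs -c:v mjpeg -b:v 1200k lq_reference.avi
```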

Track through the MJPEG copy in VirtualDub and write down frame ranges you want to use.

When you're finished, create a bunch of small scripts that consist of only two things: an Import() line which references your HQ script(s), and a line that uses Trim() to isolate the clip itself.
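Each of those trim scripts ends up looking something like this (filenames and frame numbers are hypothetical):

```
# clip01.avs - all filtering comes from the HQ script; only the range lives here
Import("episode01_hq.avs")   # example filename for your HQ script
Trim(1000, 1500)             # frame range noted down from the MJPEG copy
```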

You can actually just edit with these trim scripts if you want (if your NLE accepts AviSynth scripts as input, anyway), but it may cause issues with running out of memory or general instability. So you may want to use VirtualDub to convert these small scripts to clips using your lossless codec of choice and edit with those instead.

Command-line-savvy users can automate two big parts of the above process easily: generating the Import/Trim scripts can be very easily batched, and converting the trim scripts to lossless can be done with a one-line for-loop that invokes ffmpeg to do the conversion.
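As a sketch of that for-loop on the Windows command prompt (double the % signs if you put it in a .bat file), here using Ut Video as the lossless codec:

```
for %f in (clip*.avs) do ffmpeg -i "%f" -c:v utvideo "%~nf.avi"
```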

You can of course use a variant of this if you still want to edit with full episodes instead of clips: all you have to do is edit with the MJPEG copies, and then swap them out for the HQ scripts at the end (this is the traditional interpretation of the 'Bait-and-Switch' method described in the AVTech guides).

Thanks very much for all that advice! I'll try it out as soon as I can, but it sounds a lot better than what I was previously doing!

Just out of interest, would it also work just to remove everything but the frame-affecting filters, use trim() to isolate the part I want, and then add the other filters before saving as an avi, without the MJPEG process? Your method seems more efficient, but I'm just interested to know, as it's the method I first considered when you suggested clipping in VirtualDub. Thanks once again for the advice.

I would use the Ut Video codec instead of MJPEG. Using Trim() to isolate the parts you want is what a lot of us do to avoid conversion times getting super long. You should also experiment on every source to find out what combination works: a lot of users find what works for the worst sources and then use that for EVERYTHING, which is a bad idea.

The basic point is that the MJPEG copy is used as a quick reference (compare the speed at which VDub will seek with Ctrl+Left/Right Arrow in a slow script - or even one with just the frame stuff - and the MJPEG copy), while the HQ scripts are there in order to generate the proper clips you'd be working with.

I guess there's a chance that, since AviSynth applies filters linearly (unless you load into a variable), applying the processing filters to a small frame range that's already been trimmed out might shave some time off. The real problem is that, if there are a lot of clips to be taken from the source file, you'll have to change the frame range for Trim() after every clip gets converted (which is a lot of messing around in Notepad or whatever editor you use), rather than only having to do it once while keeping a clear separation between the script that opens the video and the individual clips. Alternatively, you could concatenate all the clips into a single clip, but that's almost as unwieldy as working with entire episodes.
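For what it's worth, the concatenation alternative looks like this in script form (filenames and frame ranges are examples):

```
# All clip ranges collected into one output clip
v = MPEG2Source("episode.d2v")   # or load a variable from your HQ chain
Trim(v, 1000, 1500) ++ Trim(v, 3200, 3650)   # ++ splices the ranges together
```

Every new clip means editing this one script, which is exactly the messing-around problem described above.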

This is a different issue, but whilst making the changes to the script that you suggested, I noticed that TFM doesn't actually seem to have done anything in terms of IVTC. There are still blended frames (if that's even the right name) at nearly all scenes with movement. I've changed the settings within TFM based on the AviSynth guide (the settings match the example you posted previously), but it still doesn't seem to have had much of the desired effect.