Ok, here we go; I hope these don't have any leftover mistakes. If they do, please point out what needs correcting.

The first function is yet another function that works like ApplyRange. Now, I know various ones have been made in the past, including Phantasmagoriat's Range, which I had been using up until now. However, I decided to write my own for three reasons:

1) Phan's Range does all the fade etc. things, which are cool advanced features if you need them, but they end up using unnecessary memory if you don't.
2) The fixes it did for the checks weren't completely correct. It would compare values against framecount, instead of framecount-1 (which is the actual number of the last frame in the stream, since the first frame is frame 0).
3) It didn't allow me to use -num_frames like trim() does, so it was completely impossible to filter only frame 0 by just specifying startframe and endframe, which is how I can most easily automate it within YATTA's Custom Lists.

If it were only for reason 2, I could have easily updated his Range function, but reasons 1 and 3 led me to redo it from the beginning. The former because I'd have to trash some neat things it does, which are helpful from time to time, but not in my daily usage; the latter because I couldn't be bothered to dig into the code enough to see what I had to change without screwing anything up. So, here's the code:

#strange v1.3 by mirkosp
#Yet Another function similar to ApplyRange in purpose that works somewhat differently.
#Start and end work exactly like first_frame and last_frame work with trim(), for the
#sake of consistency, which means that you can use end as if it was -num_frames too.
#ofps parameter tells whether to keep the original fps (true) or not (false).
#For reference: http://avisynth.org/mediawiki/Trim

Which was something that couldn't be done with Range; plus it doesn't require the triple quotes for strings, as opposed to Corran's sceneadjust. I tried to check all the odd cases that could throw trim() off, and they worked fine. I might still have missed something, so do report bugs. The code should be legible and easy enough to understand, too. Still, while you can use it on single filters, and that's fine if you only have a few instances in the script, if you need to use, say, nnedi3 on a couple hundred frames in different points, instead of having 200 strange(x,y,nnedi3) calls, I'd suggest saving a clip at the beginning and then calling that, i.e.:
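A sketch of what I mean; the argument order of strange() here (start, end, replacement clip) is inferred from the examples in this thread, and the source/filter names are made up:

```
v = FFVideoSource("episode.mkv")   # hypothetical source
fix = v.nnedi3()                   # run the heavy filter once on the whole clip
v = v.strange(1000, 1099, fix)     # splice the filtered clip back in per range
v = v.strange(5000, 5010, fix)     # ...instead of calling nnedi3 200 times
return v
```

This way nnedi3 is instantiated once instead of once per range, which is where the memory savings come from.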

With strange out of the way, the second filter I needed to make is one I called fadefix, since I'm bad with names. I'm positive other people have made something like this too, but I haven't been able to find it anywhere, so it might be more useful than strange:

The scriptclip is meant to be on a single line; if it splits, remember to paste it as a single line in the .avsi. Also, I apologize: this one doesn't really have much checking, but I didn't feel it was worth the effort.

Ok, now, let's explain why this filter has a reason to exist.

Since MPEG-2 is very bad with fades and high luma changes in general, it can often show fucked up frames during fades. While in some cases one could just keep a single field, in others, if one does not feel like photoshopping, nuking the frame would be the only option. I made this filter because, on certain occasions, one can restore such bad frames without heavy photoshopping and without nuking the frame, or at least not in its entirety, which ends up being a better solution for quality and detail.

This filter is meant to be used in tandem with an ApplyRange-like filter (such as strange). To further explain how it works, I'll show an example. Let's say that your MPEG-2 animu has a fade, and a frame during that fade looks really bad. Now, let's say that this frame is frame 9001, and that by overlaying frames 8999 and 9003 with the same weight you could restore it. Here's how it would look in avisynth with the former filters:
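Roughly, going by the parameter names discussed below (prev, next, fad), the call would be something like this sketch (the exact argument order of fadefix() is an assumption):

```
src = last
fixed = src.fadefix(8999, 9003, 0.5)   # blend frame 8999 with 9003 at equal weight
src.strange(9001, 9001, fixed)         # swap the blend in only on frame 9001
```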

Now, fad, which is 0.5 here, is quite easy to understand: the higher it is, the higher the opacity of the "next" frame (the 9003 one here) when overlaid on "prev". 0 means that only prev is shown, 1 shows only next, and you have float values in between. As for the values of prev and next, though, it might not be entirely intuitive. Since I can't come up with an explanation in words, I'll show it visually:

Hopefully this is clear. Now, only one parameter is left: omode. It works the same way as Overlay's mode parameter, so I suggest checking Overlay's page on the avisynth wiki if you don't know the available modes. This is hardly used in fades, but it does have a point with flashes and such. An advanced use of fadefix combined with masks, in lieu of just nuking a frame, can give unexpectedly pleasant results without too much effort. Using the previous numbers, let's say that we have frames 9001, 8999, and 9003. As you can see, the object at the top rotates, so we can't just overlay; but aside from that the image is static, so with a simple mask to nuke only where we can't overlay, we can do:
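A sketch of what that could look like; the mask file, the fadefix() argument order, and the choice of nuke are all hypothetical here:

```
src = last
blend = src.fadefix(8999, 9003, 0.5)            # restores the static background
nuked = src.fadefix(8999, 9003, 0.0)            # fad=0: just duplicates frame 8999
mask  = ImageSource("mask.png").ConvertToY8()   # white over the rotating object only
fixed = Overlay(blend, nuked, mask=mask)        # nuke only where we can't overlay
src.strange(9001, 9001, fixed)
```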

Which isn't too bad compared to what we had before, at least in motion.

Since they don't seem to be buggy enough to be troublesome, while being useful enough to be worth sharing, I decided I should post them here. Try them and see if you can find a use for them or something.

Oh, I see: you use replacement clips as your main modifier. That makes sense. Range() can do that too, but I wasn't aware that some of its extra features use unnecessary memory. So I might consider [st]range() if I'm encoding something really long. Otherwise, for encoding an AMV, I still like the extra features of Range() if memory usage isn't an issue. (Plus I wrote it, so I'm really familiar with it.)

I can't believe I missed one of those pesky special cases when error-checking! So, I might have to look into that bug when I have time, and maybe address some of the other limitations you pointed out. Although, as you probably noticed, I threw in so many features that it might be hard to modify without breaking anything. So I may have to rewrite it from scratch one day.

Yup, I was quite aware that Range can work with clips; in fact, I had been using it like that. However, I'd have lossless encodes crash two or three times while encoding an episode, and I could confirm Range was the cause, because the same scripts with Range replaced by strange ran smoothly from start to end. So yes, the extra things Range can do are indeed useful if that's what you want to do; they just make everything heavier when not needed.

#strange v1.2 by mirkosp
#Yet Another function similar to ApplyRange in purpose that works somewhat differently.
#Start and end work exactly like first_frame and last_frame work with trim(), for the
#sake of consistency, which means that you can use end as if it was -num_frames too.
#For reference: http://avisynth.org/mediawiki/Trim

Further usage showed me that with the old version, when using start != 0 and a negative end, the video would become 1 frame shorter, because I accidentally left in an extra +1 that wasn't meant to be there due to a couple of wrong copy-pastes. This new version fixes the issue.

#strange v1.3 by mirkosp
#Yet Another function similar to ApplyRange in purpose that works somewhat differently.
#Start and end work exactly like first_frame and last_frame work with trim(), for the
#sake of consistency, which means that you can use end as if it was -num_frames too.
#ofps parameter tells whether to keep the original fps (true) or not (false).
#For reference: http://avisynth.org/mediawiki/Trim

Updated strange to v1.3. This introduces a new parameter, ofps, to tell whether to keep the original framerate or the new one in case the user's filtering would alter it. By default, the new framerate is kept. This fixes issues with framerate-altering scripts and filters, which used to break in previous versions.
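For instance, with a replacement clip that doubles the framerate, something like this sketch (argument order and the choice of Bob() are assumptions):

```
src = last
dbl = src.Bob()                           # a framerate-doubling filter, as an example
src.strange(1000, 1099, dbl, ofps=true)   # ofps=true: keep the source framerate
```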

#quadratura v1.0 by mirkosp
#Substitutes the outer clip's selected area with the same area in the inner clip.
#Make sure outer and inner clips are of the same resolution and colourspace.
#If sources are, for example, both YUV, but with different subsampling, the filter
#will still work with the avisynth's default behaviour to determine which subsampling will be used.
#x, y, w, and h work as with avisynth's crop to determine the area.
#In order to allow odd values to be used with YUV sources, I am using Overlay.
#However, since overlay would internally convert RGB sources to YUV,
#I am using it only if the sources are YUV to avoid colourspace conversions,
#which means that the opacity parameter is ignored with RGB sources.
#If you want to overlay them so badly, convert to YUV yourself before using this script.
#I don't want to be responsible for incorrect RGB->YUV conversions,
#even more so because they're lossy and should be avoided if not necessary.
#For fine tuning of the area, turn on show.

I made this one, once again, out of need. I often have to stack clips because of filters I only want to apply to a certain portion of the image, and while a binary mask is a more elegant solution (and a better one when high precision is needed), stacking clips is faster and sometimes works fine. So, for those times when drawing a binary mask would take more time than the result is worth, I made this function. Since this script uses Overlay for YUV, but otherwise works in RGB, you're not bound to the limits crop has with subsampled YUV, namely odd values and very low output resolutions. Hope someone else will find it useful.
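A minimal sketch of how I'd expect it to be called, going by the header comments above (the parameter order outer, inner, x, y, w, h is an assumption, and the coordinates are made up):

```
outer = last
inner = outer.somedenoiser()                  # hypothetical filter wanted on one area only
quadratura(outer, inner, 100, 50, 320, 240)   # paste inner's 320x240 area at (100,50)
# for fine tuning the area, turn on show:
# quadratura(outer, inner, 100, 50, 320, 240, show=true)
```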

#edgefix v1.1 by mirkosp
#A function that supposedly fixes the edges of the image in upscaled stuff.
#Ideally similar results can be achieved through inpainting, but as that process is quite slow and as we can just try to restore the ideal pixel
#value through some process reminiscent of delogoing of partially opaque logos, I made this function, which is rather fast for what it does, as I'm having it work on the edges only.
#Either way, here's the gist of what I'm doing and why.
#Sometimes, with upscaled sources, it happens that the pixels at the borders of the picture are slightly darker or brighter than what they should be.
#Since it might be better to not just crop these pixels away (particularly if one wants to employ an algorithm like debilinear), I made this.
#As I've mostly seen this happen with upscaled anime, and particularly with edges darker than they should be, there is a slight bias towards increasing the brightness of a pixel when in doubt.
#This is due to the fact that in areas with slant lines and such it would recognize the pixel as lighter than it should be, whereas it's still
#supposed to be doing a brightening. I haven't had the occasion to properly test the darkening effect of the filter on brighter edges as I don't have any handy, so feedback for those
#would be helpful.
#Either way, keep in mind that some sources might require different settings based on the scene. Mostly if a certain scene was zoomed and thus doesn't need fixing or if
#an edge is darker/brighter than the rest in a single scene for whatever reason.
#
#Usage:
#xedge, yedge, wedge, and hedge are the amount of pixels that need fixing on each side.
#thrdark is the threshold for how much darker a pixel is than it should be. If a pixel is detected as such, it will be brightened by a threshold amount proportional to how bright
#the pixel currently is to begin with.
#thrbright works in a similar fashion, except it darkens pixels that are recognized as brighter than they should be.
#radius is used to decide the area on which to calculate the median, which is used as reference on what the pixel's value is supposed to be more or less.
#matrix is, well, avisynth's matrix value. Pick your poison between "Rec601", "PC.601", "Rec709", and "PC.709".
#I was too lazy to implement checks, so here's a few tips.
#Negative thresholds will do the opposite of what they're supposed to do, so just set them to 0 if you don't need either. Actually, negative values could be helpful if you ONLY need to do
#either brightening or darkening and the automatic detection is failing (it is a bit rough in how it works, I didn't have better ideas).
#Radius should always be bigger than the biggest edge value. Perhaps an equal value could work anyway, but I'd advise against it.
#The way I'm doing the speedup could mean that the areas around the corners might not be perfectly filtered. I'm not sure. Worst case, run multiple instances of the filter based
#on your needs and put stuff together with quadratura on your own. That might do. IDK.
#
#Oh, right. Requires avisynth 2.6, masktools 2.0a48+, fillmargins, and quadratura.
#

Seems like I have a bad habit of placing unnecessary WoTs in my functions. Oh well. Goes without saying that you're supposed to use it BEFORE you do any resizing. If there are some pure black pixels too, crop them away before using this filter (eventually add them back and fillmargins over them, if you prefer). Btw, it internally converts to Y8 to work around YV12 limits (and for very slight speedups too, I guess), but the input has to be YV12. Yes, it means I'm doing a colourspace conversion internally; I really had to. I'm letting the user specify the matrix if they want to, but otherwise I'm doing a rough check which should be fine most of the time.
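Going by the parameter names in the header above, a call would look something like this (the values themselves are made up, tune per source):

```
ConvertToYV12()   # input has to be YV12
edgefix(xedge=1, yedge=1, wedge=1, hedge=1, thrdark=8, thrbright=8, radius=3, matrix="Rec601")
# resize/debilinear only AFTER this point, never before
```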

#refadelinear v1.1 by mirkosp
#re-make a linear fade of two static shots
#if you need to use it in a static scene with some small things in motion or
#which do not have to fade, simply save a clip, use refadelinear, and then
#add whatever it was back with quadratura or similar approach
#
#the first and last frames must have 100% opacity and the frames inbetween must be
#linearly fading.
#has to be applied after decimation. if you need to apply it before,
#then you'll have to decimate, apply this, and dup back.
#
#if it's a fade from/to black/white and the supposedly pure black/white frame is fucked
#up, use strange on a blankclip to fix the frame and refadelinear afterwards.
#likewise, if the clean frame is a bit after the end of the fade, try to freezeframe
#or whatever approach works so that the end frame is clean before using this filter
function refadelinear(clip c, int firstf, int lastf) {
c = blankclip(c,length=2)+c+blankclip(c,length=2)
firstf = firstf+2
lastf = lastf+2
c.trim(firstf,lastf)
scriptclip("""overlay(freezeframe(0,framecount-1,0),freezeframe(0,framecount-1,framecount-1),opacity=(((framecount-1-(framecount-1-current_frame))/float((framecount-1)))))""")
c.trim(1,firstf-1)+last+c.trim(lastf+1,c.framecount-2)
return trim(1,framecount-2)
}

An "alternative" to fadefix. Sometimes, instead of a single frame during a fade, you have a long fade with many fucked up frames, courtesy of MPEG-2 interlacism at TV bitrates, but you're lucky enough that it's a static shot. In that case, this function will help even more than fadefix, as it's quicker to use. (It works especially well for YATTA post-decimate custom lists, assuming the frame count isn't off by 1 frame, which sometimes happens, particularly when doing VFR; you can still work around that by applying it to the range of frames that corresponds to the decimated ones you want, and help yourself with the preview to tell whether the count is off.) Of course, fadefix is still relevant for fades with limited animation going on (so common with anime).
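Usage is just the frame range, per the signature above; for example, for a linear fade spanning frames 9000-9024 (frame numbers made up):

```
# first and last frames of the range must be clean, 100% opacity frames
refadelinear(9000, 9024)
```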

#edgefix v2.0 by mirkosp
#A function that supposedly fixes the edges of the image in upscaled stuff.
#Ideally similar results can be achieved through inpainting, but as that process is quite slow and as we can just try to restore the ideal pixel
#value through some process reminiscent of delogoing of partially opaque logos, I made this function, which is rather fast for what it does, as I'm having it work on the edges only.
#Either way, here's the gist of what I'm doing and why.
#Sometimes, with upscaled sources, it happens that the pixels at the borders of the picture are slightly darker or brighter than what they should be.
#Since it might be better to not just crop these pixels away (particularly if one wants to employ an algorithm like debilinear), I made this.
#Keep in mind that some sources might require different settings based on the scene. Mostly if a certain scene was zoomed and thus doesn't need fixing or if
#an edge is darker/brighter than the rest in a single scene for whatever reason.
#
#Usage:
#xedge, yedge, wedge, and hedge are the amount of pixels that need fixing on each side.
#thr lightens the pixels with positive values and darkens them with negative ones. Major difference with the older implementation.
#radius is used to decide the area on which to calculate the median, which is used as reference on what the pixel's value is supposed to be more or less.
#matrix is, well, avisynth's matrix value. Pick your poison between "Rec601", "PC.601", "Rec709", and "PC.709".
#Radius should always be bigger than the biggest edge value. Perhaps an equal value could work anyway, but I'd advise against it.
#The way I'm doing the speedup could mean that the areas around the corners might not be perfectly filtered. I'm not sure. Worst case, run multiple instances of the filter based
#on your needs and put stuff together with quadratura on your own. That might do. IDK.
#
#Oh, right. Requires avisynth 2.6, masktools 2.0a48+, fillmargins, and quadratura.
#

Edgefix v2 (now called as edgefix2(), since it differs a bit) changes a bit from the first one. It now uses a single thr value: it brightens with positive values and darkens with negative ones. Radius is smaller by default, too. The parameter order has also been switched up, since more often than not one wants to just touch up thr rather than the edges (I found 1 pixel on all sides to be the most common situation). It should give some small speedups compared to the older version, though I can't stress enough how much this would benefit from a proper implementation as a .dll, if only somebody would take the time to give it a try.
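So, with the defaults covering the common 1px-on-all-sides case, a typical call reduces to just the threshold (the value here is made up):

```
edgefix2(thr=6)    # positive thr brightens the edges; a negative value would darken them
```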

#edgefix v2.1 by mirkosp
#A function that supposedly fixes the edges of the image in upscaled stuff.
#Ideally similar results can be achieved through inpainting, but as that process is quite slow and as we can just try to restore the ideal pixel
#value through some process reminiscent of delogoing of partially opaque logos, I made this function, which is rather fast for what it does, as I'm having it work on the edges only.
#Either way, here's the gist of what I'm doing and why.
#Sometimes, with upscaled sources, it happens that the pixels at the borders of the picture are slightly darker or brighter than what they should be.
#Since it might be better to not just crop these pixels away (particularly if one wants to employ an algorithm like debilinear), I made this.
#Keep in mind that some sources might require different settings based on the scene. Mostly if a certain scene was zoomed and thus doesn't need fixing or if
#an edge is darker/brighter than the rest in a single scene for whatever reason.
#
#Usage:
#xedge, yedge, wedge, and hedge are the amount of pixels that need fixing on each side.
#thr lightens the pixels with positive values and darkens them with negative ones. Major difference with the older implementation.
#radius is used to decide the area on which to calculate the median, which is used as reference on what the pixel's value is supposed to be more or less.
#matrix is, well, avisynth's matrix value. Pick your poison between "Rec601", "PC.601", "Rec709", and "PC.709".
#Radius should always be bigger than the biggest edge value. Perhaps an equal value could work anyway, but I'd advise against it.
#The way I'm doing the speedup could mean that the areas around the corners might not be perfectly filtered. I'm not sure. Worst case, run multiple instances of the filter based
#on your needs and put stuff together with quadratura on your own. That might do. IDK.
#
#Oh, right. Requires avisynth 2.6, masktools 2.0a48+, fillmargins, and quadratura.
#

#edgepp v1.0 by mirkosp
#Quadratura lazymod to postprocess edges after edgefix.
#Technically could be used in lieu of it too, but I don't think
#that would be a good idea. Depends on the case, perhaps.
#rad is the strength of the median blurring. Big values, big blur.
#x, y, w, and h work like in crop and quadratura.
#Colorspaces with subsampled chroma need appropriate mod
#values (for example, mod2 for YV12).
#Required filters:
#Fillmargins ( http://avisynth.org/warpenterprises/ )
#Masktools v2.0 (a48 or later, possibly) ( http://manao4.free.fr/ )
#TEdgeMask ( http://web.missouri.edu/~kes25c/ )

While edgefix still has a few issues I should really get around to fixing, it's generally in a usable enough state. However, sometimes the output is still not perfect, or perhaps the edge issues aren't exactly consistent, and so on. For this kind of small refinement, I wrote this filter. It basically mods quadratura's stacking approach, but is much faster than using quadratura with the filter, since it only works on the edges. Additionally, since I have the extra pixels from fillmargins, I can draw a better mask on those borders. Either way, if your source doesn't need this, avoid it, since it's a bit of an ugly, bruteforce approach. But if there are some leftovers you aren't quite able to nail otherwise, this might very well help out.
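Per the header above, a call could look like this sketch (the values are made up; x/y/w/h follow crop-style coordinates, so this targets a strip at the top edge):

```
edgepp(rad=2, x=0, y=0, w=0, h=2)   # median-blur postprocess on a 2px strip at the top
```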

#edgefix v2.21 by mirkosp
#A function that supposedly fixes the edges of the image in upscaled stuff.
#Ideally similar results can be achieved through inpainting, but as that process is quite slow and as we can just try to restore the ideal pixel
#value through some process reminiscent of delogoing of partially opaque logos, I made this function, which is rather fast for what it does, as I'm having it work on the edges only.
#Either way, here's the gist of what I'm doing and why.
#Sometimes, with upscaled sources, it happens that the pixels at the borders of the picture are slightly darker or brighter than what they should be.
#Since it might be better to not just crop these pixels away (particularly if one wants to employ an algorithm like debilinear), I made this.
#Keep in mind that some sources might require different settings based on the scene. Mostly if a certain scene was zoomed and thus doesn't need fixing or if
#an edge is darker/brighter than the rest in a single scene for whatever reason.
#
#Usage:
#xedge, yedge, wedge, and hedge are the amount of pixels that need fixing on each side.
#thr lightens the pixels with positive values and darkens them with negative ones. Major difference with the older implementation.
#radius is used to decide the area on which to calculate the median, which is used as reference on what the pixel's value is supposed to be more or less.
#matrix is, well, avisynth's matrix value. Pick your poison between "Rec601", "PC.601", "Rec709", and "PC.709".
#Radius should always be bigger than the biggest edge value. Perhaps an equal value could work anyway, but I'd advise against it.
#The way I'm doing the speedup could mean that the areas around the corners might not be perfectly filtered. I'm not sure. Worst case, run multiple instances of the filter based
#on your needs and put stuff together with quadratura on your own. That might do. IDK.
#
#Oh, right. Requires avisynth 2.6, masktools 2.0a48+, fillmargins, and quadratura.
#

With an LSmashWorks version that now supports high bitdepth and 4:2:2 and 4:4:4 subsampling, and considering how much of a speed boost I've enjoyed compared to ffms2, I'd like to transition to it. It also turns out that LWLibavVideoSource gives out actual 10bit video when called in avisynth, which means that reading it as 8bit looks funny. Fortunately, it's quite possible to bring this to a much more usable stacked 16bit format (which also means we get to keep the extra bits, instead of getting a dithered 8bit input!), so with a little help from firesledge I wrote a simple function that does just that. Obviously it requires dithertools in order to work.