I have made what may be a crucial breakthrough in the automatic correction of glitches -- particularly, the "noisy glitches" that have previously proved difficult to detect.

What I did was start from the hypothesis that glitches occur when two nearby delta-orbits (perturbation calculations for adjacent pixels, say) get too close together compared to their overall magnitude, so that they either "snap together" (solid-color glitches) or, worse, get close enough to lose important bits of precision in the difference between them without quite snapping all the way together (noisy glitches).

I formerly had Nanoscope check many local iteration maxima in the image for noisy glitches by computing a non-perturbative point and comparing, using the computed point as a replacement main reference point if its iteration count differed substantially (usually higher) from the perturbation calculation of the same point with the original reference point. But this was unsatisfactory for three reasons. One, it required changing the main reference point, so there was no way to cope with two different noisy glitches needing different references, should such an eventuality occur. Two, the "solid" glitch correction misbehaved with large solid glitches, because of a narrow "fringe" of noisy glitch where the precision loss is slightly less fatal; this fringe could be distorted noticeably even with the solid glitch within it corrected. Finally, small noisy glitches occasionally snuck past Nanoscope. So I have been looking for a way to turn all glitches solid. And I found one.

I applied perturbation theory to the perturbation theory iteration!

The perturbation iteration is:

    δ_(n+1) = 2 Z_n δ_n + δ_n² + δ_0

where Z_n is the high-precision reference orbit and δ_n is the current pixel's (low-precision) offset from it.

Perturbing that, δ_n → δ_n + ε_n, yields:

    ε_(n+1) = 2 Z_n ε_n + 2 δ_n ε_n + ε_n²

So, to first order in ε_n,

    ε_(n+1) ≈ 2 (Z_n + δ_n) ε_n

which gets small in comparison to ε_n when Z_n + δ_n is small.

But Z_n + δ_n is just z_n, the actual (unperturbed) orbit of the current pixel! Whose size we need to check anyway, to detect bailout. It's when this gets very small that glitches can occur. This fits the observation that glitches hit at a) deep minibrots and b) deep "peanuts", embedded Julias, etc. (peanuts of course are just especially sparse embedded Julias) that are associated with deep minibrots. So, this suggests checking for the current orbit point being sufficiently closer to 0 than the corresponding iterate of the reference orbit.

The breakthrough: checking for

    |Z_n + δ_n| < 10^-3 |Z_n|

The implementation actually precomputes 10^-3 |Z_n| for all points of the reference orbit and keeps this data in an array, which means we only have to check whether the current orbit point's magnitude is less than the value looked up for the current iteration number.
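As a rough sketch of what that check amounts to (Python, with ordinary doubles standing in for the arbitrary-precision reference calculation; the function names and the toy bailout handling are mine, not Nanoscope's):

```python
def reference_orbit(C, max_iter):
    """Reference orbit Z_n for Z_{n+1} = Z_n^2 + C.
    Plain doubles here; the real thing uses arbitrary precision."""
    Z = [0j]
    z = 0j
    for _ in range(max_iter):
        z = z * z + C
        Z.append(z)
        if abs(z) > 2:
            break
    return Z

def perturbed_iterate(Z, d0, max_iter, tol=1e-3):
    """Iterate delta_n for a pixel offset d0, flagging a glitch when
    |Z_n + delta_n| < tol * |Z_n| (the criterion above).
    Returns (iterations, glitched)."""
    thresholds = [tol * abs(z) for z in Z]   # precomputed 10^-3 |Z_n| array
    d = 0j
    for n in range(min(max_iter, len(Z) - 1)):
        d = 2 * Z[n] * d + d * d + d0        # perturbation step
        zn = Z[n + 1] + d                    # actual orbit of this pixel
        if abs(zn) > 2:
            return n + 1, False              # normal bailout
        if abs(zn) < thresholds[n + 1]:
            return n + 1, True               # glitch detected: bail promptly
    return max_iter, False
```

Bailing on the threshold comparison rather than continuing is what turns would-be noisy glitches into flat ones.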

Bailing promptly when this happens turns a noisy glitch into a flat glitch that's somewhat larger, and also expands flat glitches while de-noising their borders. Detecting flat glitches (by simply finding two identical pixels in a row), then finding the blob's bounding box (its extent left, right, up, and down before a different pixel value appears), then applying the "contracting net" algorithm I've previously described to locate the center, sort of works. It was necessary to "fingerprint" blobs with a hash of the specific reference-point calculation in use when each was encountered, so that blobs hit at the same iteration using different reference points were treated as different; and to chain the whole system, so that it might calculate with reference point A, find a blob, look up that blob's fingerprint in a hash and get reference point B stored earlier, recalculate the point with B, and repeat as necessary. If the blob is not in the hash, it creates a reference point at the blob's center, uses it, and adds it to the hash.
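The bounding-box step can be sketched like this (Python, over a 2-D list of iteration counts; the contracting-net and fingerprint-hash parts are omitted, and all names are mine):

```python
def blob_extent(image, x, y):
    """Given a pixel (x, y) inside a flat blob (a run of identical
    iteration values), find the blob's bounding box by scanning
    left, right, up, and down until a different value appears."""
    h, w = len(image), len(image[0])
    v = image[y][x]
    left = x
    while left > 0 and image[y][left - 1] == v:
        left -= 1
    right = x
    while right < w - 1 and image[y][right + 1] == v:
        right += 1
    top = y
    while top > 0 and image[top - 1][x] == v:
        top -= 1
    bottom = y
    while bottom < h - 1 and image[bottom + 1][x] == v:
        bottom += 1
    return left, right, top, bottom
```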

That worked, but it generated upwards of 70 reference points for smallish test images, even in non-glitchy areas (the sensitivity of one-thousandth can't be turned down any further without missing real glitches in some of my test cases). And it actually caused a glitch or two in some cases, even in images that had no glitches before.

So I hybridized the two approaches! Since many of the "blobs" caught with this method calculate fine otherwise, I decided to test which need what. I iterate with glitch-catching on, and maybe land in a "proto-blob". If it's already in the hash with a reference point to shrink/fix it, switch to that reference and redo the point. If it's already in the hash with a special value ":ignore", recalculate with the last reference point and glitch-catching turned off. If it's not in the hash, discover the blob's extent, apply the contracting net to that region with a temporary copy of the hash that adds ":ignore" for this proto-blob, and thus zero in on a local iteration maximum or a smaller, *solid* glitch. Then calculate a reference orbit at this high-iteration point. If the glitch is solid, use the new orbit for this proto-blob; otherwise compare the non-perturbative iteration count with the perturbation value obtained at the same spot under the temporary ":ignore" directive, looking for a discrepancy. If the difference is less than 10 iterations, make the temporary ":ignore" permanent and discard the new reference orbit; otherwise treat it as in the solid-glitch case and use the new reference orbit for this proto-blob.
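A skeleton of that decision flow, under my own naming (the hash is a plain dict; `contract_net` stands in for the expensive center-finding work and is assumed to report a candidate reference point, whether the glitch is solid, and the two iteration counts to compare):

```python
IGNORE = ":ignore"   # the sentinel value described above

def resolve_proto_blob(fp, blob_db, contract_net, iter_gap=10):
    """Decide what to do after landing in a proto-blob with
    fingerprint fp. Returns an (action, reference) pair; the action
    strings are illustrative placeholders for Nanoscope's internals."""
    if fp in blob_db:
        entry = blob_db[fp]
        if entry == IGNORE:
            return ("recalc-no-catching", None)   # known false alarm
        return ("recalc-with-ref", entry)         # known glitch: reuse its reference
    # Unknown blob: locate its center and test it.
    new_ref, is_solid, pert_iters, direct_iters = contract_net(fp)
    if is_solid or abs(direct_iters - pert_iters) >= iter_gap:
        blob_db[fp] = new_ref                     # real glitch: keep the new reference
        return ("recalc-with-ref", new_ref)
    blob_db[fp] = IGNORE                          # difference < 10 iters: false alarm
    return ("recalc-no-catching", None)
```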

This method computes a 1280x960 unantialiased image of Dinkydau's "Flake" image in 12 minutes ... correctly. It ends up with about 24 secondary reference points, though I suspect only one or two of them are doing most of the work.

I think I'm close to having something open-sourceable soon. When that point is reached I'll announce it here, so that the authors of Kalles Fraktaler and other perturbation engines (I think most of them are watching this thread) can benefit from looking over Nanoscope's implementation of the algorithm sketched above for detecting possible noisy glitches on the fly and testing whether they're really noisy.

Update: I've got a couple of ideas for further improvements, which I might test in the future (though not now). Meanwhile, I'd appreciate anyone posting noisy glitch locations to serve as additional test cases (anywhere that Kalles Fraktaler, Superfractalthing, or Mandel Machine fouls up in a way that's not just a unicolor blob is potentially useful).

To better visualize what it is doing, here is Dinkydau's "Flake" location with the same color gradient, three times.

First, calculated with only the main reference point. Nanoscope produces the same glitch as KF, probably because it uses the same FP calculations under the hood.

Next, this is what happens if the "glitch warning system" is engaged, but only the primary reference point is used (no autocorrection):

Note that the "noisy" glitches have been replaced by somewhat expanded, uniform blobs of color, with small satellite blobs. It's not just the obvious areas near the center, either; the upper left corner showed whitish stripes in curls and became small solid blobs. It turns out that these were noisy glitches too. Here is the version with auto-correction on:

Note that the corner curls now have orange spirals which were missing before. The center region is most spectacularly corrected, showing normal Mandelbrot spirals.

This is what Nanoscope now produces with zero human intervention, given only the "Flake" center coordinates and magnification. No manual placement of added reference points is necessary. It's completely automated. It reported calculating about three dozen auxiliary reference points for this image. Just three dozen points iterated at 163 decimals of precision, instead of the one-and-one-quarter-million points at that precision needed to render this same image in conventional software.

How many iterations does your program skip with Series Approximation? It looks very promising, but unfortunately I found that the time when the glitch is detectable can fall on iterations that are skipped with SA.

KF uses 5 terms and skips 23653 iterations. The first image is a close-up of the area where the structured glitch occurs; detected glitches are coded in yellow. The second image is the same area without SA, where the glitch is detected. The glitch is only partly detected when using 3 terms and skipping 15769 iterations, too...

This is the weirdest one I've come across, at 2.25E15 on the zoom-out from the attached location. Note the difference in the lower right area (Kalles Fraktaler did sort it out, but only with the 'no approximation' option.) The erroneous version looks almost plausible out of context.

Nanoscope is only skipping 6230 iterations in the Flake image with series approximation. The smallest iteration where errors are occurring seems to be in the seven thousands. Looks like going too far with series approximation can cause problems subtler than previously noted. The odd thing is that the series approximation shouldn't be able to "make the error" when it "hits" those iterations!

I have investigated this a little bit more. Oh yes, this is awesome!!! Even though the glitch detection can fall within the iteration span skipped by Series Approximation, this is, as far as I can see, the bullet-proof glitch detection we have all been waiting for for so long. Thanks a lot Pauldelbrot, you have done a really good job on this!!! For almost all locations the glitch detection is not within the SA span; so far I have found none except Flake.

I don't fully understand your glitch-solving method, but I think you make it a little bit too complicated. KF uses a simple flood-fill algorithm to detect one-colored blobs, examining both the iteration count value and the smoothing coefficient, then adds a reference in the center of the biggest one and recalculates all pixels with that same iteration count value. With your new glitch detection, all the detected pixels are now set to the same iteration count and smoothing coefficient. A new reference is put in the center of the largest area, and all those pixels are recalculated. This is repeated until no more blobs are found.
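That flood-fill pass might look roughly like this (Python; grouping only on the iteration value for brevity, where KF also examines the smoothing coefficient; this is a sketch of the idea, not KF's actual code):

```python
from collections import deque

def largest_flat_blob(iters):
    """Flood-fill a 2-D grid of iteration counts into one-colored blobs.
    Return (size, (cx, cy)) for the largest blob, with (cx, cy) the
    center of its bounding box, where a new reference would be placed."""
    h, w = len(iters), len(iters[0])
    seen = [[False] * w for _ in range(h)]
    best = (0, (0, 0))
    for y in range(h):
        for x in range(w):
            if seen[y][x]:
                continue
            v = iters[y][x]
            q = deque([(x, y)])
            seen[y][x] = True
            blob = []
            while q:                      # BFS over 4-connected equal pixels
                px, py = q.popleft()
                blob.append((px, py))
                for nx, ny in ((px+1, py), (px-1, py), (px, py+1), (px, py-1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and not seen[ny][nx] and iters[ny][nx] == v:
                        seen[ny][nx] = True
                        q.append((nx, ny))
            xs = [p[0] for p in blob]
            ys = [p[1] for p in blob]
            bx = (min(xs) + max(xs)) // 2
            by = (min(ys) + max(ys)) // 2
            if len(blob) > best[0]:
                best = (len(blob), (bx, by))
    return best
```

Repeating this until no blob exceeds some minimum size would correspond to the "repeat until no more blobs are found" loop.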

By doing so, KF with Series Approximation turned off automatically creates a perfect 1280x720 Flake image, including all the tiny spirals in the corners, with 7 additional references in just under 1.5 minutes:

This method computes a 1280x960 unantialiased image of Dinkydau's "Flake" image in 12 minutes ... correctly. It ends up with about 24 secondary reference points, though I suspect only one or two of them are doing most of the work.

This image needs no more than 3 reference points. Automatic detection of the issue would be progress, though. I will look into your approach. Would be cool if it's generally working. A region for testing:
-1.41036459426074570658817618676297211879321324385433824208227598E+00
1.36711010515164632751932900773139846402453359380892643209313503E-01
2.865303424E-53

Thanks hapf. Yep, your location breaks this method. If the main reference is calculated in the center of the big Julia, the outer ring of Julia glitches is not detected with this method. The center is at the parameters slightly changed to:
-1.410364594260745706588176186762972118793213243854338242161867741
-0.136711010515164632751932900773139846402453359380892643209313503
1.39601271072E53

A more refined method, but with more per-iteration overhead, would be to compute epsilon alongside delta and watch for epsilon/delta to get too small. The first post in this thread already shows how to compute epsilon for each iteration from its previous iterate and delta. This could be combined with series approximation by using the series approximation to generate the deltas for the current pixel and for an adjacent pixel, then using the difference between those deltas as the starting epsilon when the first "real" iteration begins. That method would be slower, but might detect even more glitches (maybe all of them) reliably, and perhaps with fewer false positives as well.
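A sketch of what that might look like (Python doubles; the epsilon recurrence comes from subtracting the perturbation iterations of two adjacent pixels, so it picks up the pixel-spacing term eps0 each step; the ratio threshold is an illustrative placeholder, and all names are mine):

```python
def iterate_with_epsilon(Z, d0, eps0, max_iter, ratio_tol=1e-6):
    """Iterate delta_n for this pixel and epsilon_n (the difference
    between this pixel's delta-orbit and an adjacent pixel's) side by
    side. Flag a glitch when |epsilon| collapses relative to |delta|,
    i.e. when the difference between the two nearby delta-orbits is
    losing its significant bits. Returns (iterations, glitched)."""
    d = 0j               # delta_n for this pixel
    e = 0j               # epsilon_n; both deltas start at 0, so epsilon_0 = 0
    for n in range(min(max_iter, len(Z) - 1)):
        zn = Z[n] + d                    # unperturbed orbit of this pixel
        e = 2 * zn * e + e * e + eps0    # epsilon update, from delta_n and eps_n
        d = 2 * Z[n] * d + d * d + d0    # ordinary perturbation step
        if abs(Z[n + 1] + d) > 2:
            return n + 1, False          # normal bailout
        if d != 0 and abs(e) < ratio_tol * abs(d):
            return n + 1, True           # epsilon/delta too small: glitch
    return max_iter, False
```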

As far as I can see at the moment, the basic idea (using the absolute value of the 'perturbated' iterate versus the reference iterate) is very good. The idea of a fixed threshold (0.001 etc.) less so. Better results are possible by not aborting early but looking at the statistics when all pixels finish their run.

Corruption happens due to rounding errors. Rounding errors happen when bits are lost by adding numbers not as close together as one would wish them to be. As z_n * delta_n gets bigger, rounding errors get bigger. The delta_n get bigger on average as the reference orbit and the orbit computed via the difference computation go out of sync more or less quickly. One way to judge being out of sync is to look at the absolute value of the 'perturbated' iterate versus the reference iterate, as suggested.

One could use a threshold as suggested, or find each pixel's minimum value and compare it with all other pixels' minima. One could use the sum and compare, or instead use the sum of the delta_n, or the max. The results seem to be comparable. What these methods don't provide is a clear yes or no to the question: is a pixel corrupted? Only when hard clipping occurs does one know for sure.
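For instance, a post-pass over per-pixel minima of |z_n|/|Z_n| (collected while iterating) could flag outliers against the image-wide distribution instead of using a fixed in-loop cutoff. Everything here, including the factor k and the comparison against the median, is an illustrative assumption, not something from the posts above:

```python
def glitch_stats(min_ratios, k=1e-4):
    """min_ratios: per-pixel minimum of |z_n| / |Z_n| over the run.
    Flag pixels whose minimum is extreme relative to the image-wide
    median, rather than against a fixed absolute threshold."""
    vals = sorted(min_ratios)
    median = vals[len(vals) // 2]
    return [r < k * median for r in min_ratios]
```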