(Other) Buddhabrot-style Burning Ship [65536x24576]

I call it the Ghost Ship. Still a work in progress, but getting there. This particular version uses the following parameters:

Bailout: 64
Window: (-1.9)+(-0.1)i to (-1.5)+(0.05)i
Sample distribution: μ=(-1.7)+(-0.025)i, σ=(0.2)+(0.075)i
Iteration counts:

I experimented a bit with this Ghost Ship (good name, hope it sticks) stuff recently. Unfortunately, I don't think it exists (as in, it doesn't converge) - as the iteration count increases, more and more points are plotted, seemingly without limit. Compare with the Buddhabrot of the regular Mandelbrot set, where each doubling of iteration counts seems to yield a smaller total contribution (I have an open M.SE question about whether it really converges or not, I suspect yes but I have no proof).
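For readers unfamiliar with the technique being discussed: a Buddhabrot-style render bins the orbit points of escaping samples into a hit-count grid, here with the Burning Ship iteration \( z \mapsto (|\Re z| + i|\Im z|)^2 + c \). A minimal sketch (function names, the grid representation, and the use of `abs(z) > bailout` as the escape test are my assumptions, not the original program):

```python
def burning_ship_orbit(c, max_iter=200, bailout=64.0):
    """Return the orbit of c under the Burning Ship iteration,
    or None if it does not escape within max_iter steps."""
    z = 0j
    orbit = []
    for _ in range(max_iter):
        z = complex(abs(z.real), abs(z.imag)) ** 2 + c
        orbit.append(z)
        if abs(z) > bailout:
            return orbit
    return None  # treated as non-escaping; its orbit is not plotted

def accumulate(samples, width, height, window, max_iter=200):
    """Bin orbit points of escaping samples into a width x height hit grid.
    window = ((x0, y0), (x1, y1)) in the complex plane."""
    (x0, y0), (x1, y1) = window
    hits = [[0] * width for _ in range(height)]
    for c in samples:
        orbit = burning_ship_orbit(c, max_iter)
        if orbit is None:
            continue
        for z in orbit:
            px = int((z.real - x0) / (x1 - x0) * width)
            py = int((z.imag - y0) / (y1 - y0) * height)
            if 0 <= px < width and 0 <= py < height:
                hits[py][px] += 1
    return hits
```

The convergence question in the thread is about what happens to this grid as `max_iter` grows.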

Wow, that's REALLY cool. I think Ghost Ship is a perfect name for it. I'm not very familiar with the Burning Ship, let alone its Buddhabrot counterpart, so I don't even know if this is possible, but if so I would try adjusting the parameters so that the far right of the screen captures just a little bit more of that aurora-like haze in the sky seen in most of the image. Just enough to blend in that hard edge that forms between the black and blue regions of the image.

@claude, I'm not sure I understand your reasoning behind why it would not converge. The more points I sample, the more hits individual pixels will have, but the way my program calculates the brightness of each pixel compensates for that. It works as follows:

Start with a 2D array, where each cell \( x_p \) represents the number of hits of the corresponding pixel.

Find the pixel with maximum number of hits \( m=\max_p\{x_p\} \)

Apply the following formula to each pixel: \( y_p=\sqrt[2+t]{\frac{x_p}{m}(2-\frac{x_p}{m})} \) where \( 0\leq t \) is a brightness scaling parameter

Multiply each \( y_p \) by 65536, store as uint16_t, and save the array of \( y_p \) as one of the RGB channels

So, if the number of samples doubles, so will \( m \), and the overall distribution will stay roughly the same, no?

@Fraktalist, thanks for the link, I'll certainly have a look at that program. The region on my picture is that small tail on the left, the exact coordinates are in my first post. I did rotate the image 180 degrees, though.

@AlexH, I will have a chance to adjust the parameters as soon as I implement tiling in my program (so that I can split huge images into smaller chunks, conserving RAM; as it is, I'm using all 256 GB of RAM with 16 workers running in parallel). In the meantime, I'm going to generate more samples for the current parameter set.

@Fraktalist I tried rendering this with Buddhabrot Mag. I could get similar looking results only with low orbit lengths (~200).

I've attached two renders of a similar region. The first one uses orbit length 200. For the second one I tried to match the settings in the description, but it looks quite different. I assume this is due to a different sample distribution: Rainbrot uses a Gaussian distribution with the mean in the image center, whereas Mag mutates orbits that gave good results.

Really cool image, that must have taken quite some time to create. Nice to see more fractals uploaded to gigapan. It is unfortunate that their server is so unstable right now; I have been trying to upload a new project for a few days, but it always ends up failing.

I am away from home, should be able to provide some images re convergence tomorrow evening. Trying to explain it in words is a bit awkward, but I'll try:

Divide the image into multiple layers. Layer N plots the orbits of points that escape at iteration count m where 2^N <= m < 2^(N+1). Convert each layer into a grayscale image in the same way, i.e., any normalization works uniformly over all the layers, so you can compare the results between layers meaningfully. For example, step 1 finds the average pixel density of each layer, step 2 finds the maximum M over the step 1 averages, and the final step 3 converts each pixel total n to (255 * 0.125 * n / M) for 8-bit RGB (clamping to the range). Step 3 doesn't use per-layer information, only global information.
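A minimal sketch of this layered comparison (function names are mine; `layers` is assumed to be a list of per-layer hit grids built by whatever accumulation the renderer uses):

```python
def layer_index(escape_iter):
    """Layer N such that 2**N <= escape_iter < 2**(N+1)."""
    n = 0
    while 2 ** (n + 1) <= escape_iter:
        n += 1
    return n

def normalize_layers(layers):
    """Convert each layer's hit grid to an 8-bit grayscale image using only
    global information, so brightness is comparable across layers.
    Step 1: average pixel density of each layer.
    Step 2: global maximum M over those averages.
    Step 3: map each pixel total n to 255 * 0.125 * n / M, clamped."""
    averages = [sum(map(sum, g)) / (len(g) * len(g[0])) for g in layers]
    M = max(averages)
    return [[[min(255, int(255 * 0.125 * n / M)) for n in row]
             for row in g] for g in layers]
```

Because step 3 is identical for every layer, a layer that comes out brighter really did contribute more hits per pixel, which is the comparison the convergence argument rests on.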

Now, for the Buddhabrot, each successive grayscale image is less bright, so the total new contribution as you double the iteration limit goes down, so it might converge (but note: \( \sum_n 1/n \) doesn't converge, so the contributions must decrease quickly enough to be sure).

However, for the Ghost Ship, each successive grayscale image seems to get brighter, so the total new contribution as you double the iteration count goes up, so it doesn't converge (you can't generate successive approximations, as they will always be dominated by the higher iterations you didn't include).

Of course, these conclusions are only for limited data (small finite iteration count), so maybe the Ghost Ship reaches a peak and then starts to converge later.

@Sharkigator I use a Gaussian distribution to avoid the biases of the mutation algorithm, at the cost of slower progress. Although, to be completely fair, the mean should be at \( 0+0i \), but that would slow me down too much.
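For concreteness, sampling from the distribution given in the first post might look like this (the function name is mine, and interpreting the complex σ as independent standard deviations for the real and imaginary axes is my assumption):

```python
import random

def sample_c(mu=complex(-1.7, -0.025), sigma=(0.2, 0.075), rng=random):
    """Draw one sample c with independent Gaussian real and imaginary
    parts: mean mu = (-1.7)+(-0.025)i, std dev 0.2 (real), 0.075 (imag)."""
    return complex(rng.gauss(mu.real, sigma[0]),
                   rng.gauss(mu.imag, sigma[1]))
```

Centering the Gaussian on the interesting window instead of at \( 0+0i \) concentrates samples where they are most likely to produce plotted orbits, which is the speed/bias trade-off described above.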

@RedshiftRider It took about a week. An image with more samples is coming soon.

@claude Ah, I get you now. You claim that it doesn't converge as you increase the iteration cutoff, not the number of samples. That may very well be the case, I haven't tried very large iteration counts yet.