Recursion 2018

Hmm… I’m sure I must have seen that at some point, but if I have I’ve forgotten. I’ll try to find some time to run it tomorrow to see it running for myself.

However, I think even reducing PROCpoly would still result in something unwieldy, given that I want to put it on a four page A5 pamphlet, with accompanying text! :)

Maybe you can have a challenge to see who can beat me to adding trees and cars (so far, two years and counting).

It might be too big for my little pamphlet, but it might be worth having it on one of the MUG machines as part of the code club idea of Doug’s, with some printouts of the program, and giving people a little challenge along those lines.

People like visual stuff. Remember the days when you’d pick up an AU at WHSmith and the first thing you’d do is flick to the *info column, or just the yellow pages, to see if there was a creation of Jan Vibe’s to type in? Or was that just me?

No, that wasn’t just you. :)

But that does remind me – just after the last Recursion, I discussed with Richard Ashbury that it would have been a nice idea to have some of his modified Jan Vibe programs running as a rolling demo on a spare machine if there was room to set one up. Something else to consider for this time around. (Though my one table didn’t really leave much room for that.)

EDIT: I should be clear that I just hacked one of the many implementations of this; I’m no mathematician! Specifically this one: Let’s draw the Mandelbrot set

EDIT: For my own education, I’d like to know: a) how to plot the points faster in BASIC; b) how to centralise the image without the clunky offset (VDU command?). Any pointers or observations would be appreciated.

Simplify some of the maths. E.g. pre-calculate 4.0/width% and store it in a variable so that it won’t have to be recalculated (twice!) for each pixel.
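As a rough sketch of the hoisting (the variable names here are my own stand-ins, not from the original listing):

```bbcbasic
REM Hoist the per-pixel constant out of the loops:
scale = 4.0 / width%
FOR row% = 0 TO height% - 1
  FOR col% = 0 TO width% - 1
    x = xmin + col% * scale : REM was: xmin + col% * 4.0 / width%
    y = ymin + row% * scale
    REM ... iterate and plot as before ...
  NEXT
NEXT
```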

An easy way to make it 4x faster is to realise that the coordinates used by VDU commands are at a different scale to the pixel resolution of the screen. There’s a bit of information about it on this wiki page (“Eigen factors”), and I think the BASIC manual will have an explanation as well, but to summarise, for most screen modes there will be 2 OS units per screen pixel. So by incrementing col% and row% by one for each loop iteration, you’ll be drawing to each screen pixel four times. Increasing the step to two will eliminate that and immediately make things faster.
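A minimal sketch of the doubled step, assuming a mode that is 1280 x 1024 OS units with 2 OS units per pixel (FNiter stands in for whatever iteration-count function the program uses):

```bbcbasic
REM With 2 OS units per pixel, STEP 2 visits each
REM physical pixel exactly once instead of four times.
FOR row% = 0 TO 1023 STEP 2
  FOR col% = 0 TO 1279 STEP 2
    GCOL 0, FNiter(col%, row%) MOD 8
    POINT col%, row%
  NEXT
NEXT
```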

Slightly more complex optimisations would be to avoid calling GCOL or POINT for every pixel. E.g. only use GCOL when the number of iterations changes, and use LINE to render horizontal strips instead of individual pixels. Or you can have a go at rewriting it to use array arithmetic (I have an array-based Mandelbrot renderer which I wrote as a performance test for BASICVFP)
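One way the strip idea might look for a single row (again, FNiter is a stand-in for the program’s own iteration count, and the 1280-unit-wide screen is assumed):

```bbcbasic
REM Draw each row as runs of constant colour: one GCOL and
REM one LINE per run instead of one GCOL and POINT per pixel.
start% = 0
last% = FNiter(0, row%)
FOR col% = 2 TO 1279 STEP 2
  i% = FNiter(col%, row%)
  IF i% <> last% THEN
    GCOL 0, last% MOD 8
    LINE start%, row%, col% - 2, row%
    start% = col% : last% = i%
  ENDIF
NEXT
GCOL 0, last% MOD 8
LINE start%, row%, 1278, row%
```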

b) How to centralise the image without the clunky offset. (VDU command?)

If you are doing the whole M set, which is symmetrical, you can double up by plotting the top and bottom at the same time. There are also other calculations to move outside the loops. However, the main user of time is the floating point complex number calculation.
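The mirroring might look something like this (a sketch assuming the real axis sits at y = 512 in OS units, with FNiter standing in for the iteration count):

```bbcbasic
mid% = 512 : REM assumed screen y of the real axis
FOR dy% = 0 TO mid% STEP 2
  FOR col% = 0 TO 1279 STEP 2
    GCOL 0, FNiter(col%, mid% + dy%) MOD 8
    POINT col%, mid% + dy% : REM top half
    POINT col%, mid% - dy% : REM mirrored bottom half
  NEXT
NEXT
```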

Back in the early days of the Archimedes there was some competition to write the fastest M set plotter, both in BASIC and machine code. This generated many ‘tricks’ such as those Jeffrey is suggesting.

I have an array-based Mandelbrot renderer which I wrote as a performance test for BASICVFP

Perhaps you are following in the footsteps of Sophie Wilson. ;-) During the development of ARMBASIC there was a keyword MANDEL which did the complex number calculation. There were vestiges of it in the source, but I do not have a recent version on hand to check.

In 2010, after the death of Benoit Mandelbrot, I wrote a MANDEL keyword into Basalt. However, that uses fractional integers, not floats.

Yes, you can change the graphics window origin using VDU 29.

Or, indeed, ORIGIN. ;-) One graphic optimisation is to confine the plotting to only the significant parts of the set and its surrounds.
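For example, to put the origin in the middle of a 1280 x 1024 (OS units) screen:

```bbcbasic
REM VDU 29 takes the new origin as two 16-bit values
REM (the semicolons send each value as a two-byte pair):
VDU 29, 640; 512;
REM Or, in BASIC V, the equivalent keyword:
ORIGIN 640, 512
REM Plotting can now be centred on (0, 0) with no manual offset.
```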

If this is to be a demo for young/novice programmers, then wouldn’t it be a good idea to show the difference between non-optimal code and a set of optimising changes?
That would demonstrate that simply doing something that works isn’t enough, and that something properly designed to work efficiently is better.
The lesson is that properly designed code can be much swifter, easier to follow, and use less memory / disc space.

As a programmer of just about the “Hello World” variety, I’m tempted to come to Recursion with a couple of Pi setups and some worksheets (like the above) for people to have a go on. Somewhere I have copies of the old Input magazine in digital format and could make worksheets from some of those listings for people to have a go at? A sort of “guess what it does” kind of thing.