I got to thinking that this might make a nice integer benchmark if, instead of printing the Mandelbrot, we just accumulate a value (the sum of the generated characters) and loop for a while.

So I modified the program, and after testing found that a C version of the above, compiled with gcc -O3, took 1 second to perform 1545 iterations on my machine. The code is very similar to the above ASCII integer Mandelbrot, except that it performs multiple iterations, accumulates the character to be printed, and the 'exit loop' is removed: not all simple languages support early loop exits, and I wanted the code to be as common between languages as possible.

I figured I could use the C version as a standard: finding a count that took 1 second would give me an easy estimate of how much slower other language processors are compared to C, and of course only in regards to this benchmark.

And (ta dah), the FreeBasic version compares very favorably with the C version, speed-wise. In fact, it was the second fastest processor that I tested.

In order to get the same output from each language (to verify that each language was essentially computing the same thing and doing similar work), I had to figure out how to keep the division operands from being negative, since languages disagree on how negative integer division rounds.

The code checks whether x or y is < 0 (but not both); if so, it multiplies by minus one to force a positive division, and then multiplies by minus one again at the end to restore the sign.

What's the purpose of this?

I enjoy fiddling with compilers/interpreters, especially those that are simple enough for me to understand. I also enjoy writing my own. I wanted to find out what makes some interpreters so much faster than others, and why others are so slow. Examining the source of some of these interpreters has helped me learn some of the reasons, and has helped improve the speed of those I'm working on.

FreeBasic: Screamingly fast! Impressive!
TinyC: Very fast compiling compiler
euc -gcc -con: Euphoria rocks!
Oxygen basic: Oxygen Basic rocks!
NodeJs: Server/desktop Javascript
pe: Micro Euphoria. Fastest interpreter tested.
toy.bas: Updated version of a simple interpreter I wrote several years ago.
Pike: Neat C-like scripting language
toy4.c: Updated version of a simple interpreter I wrote several years ago.
Tinypas.c: I converted Pascal-S (the famous Pascal compiler/interpreter) to C.
Ruby: Ruby rocks!
tiny-c.c: I updated this, adding what was needed to run the benchmark.
calc3a: I updated this, adding what was needed to run the benchmark.
vspl: B-like programming language
Lua: Lua rocks!
php: php rocks!
c4: Neat C subset interpreter
UnderC: Almost full C and lots of C++ interpreter
jwillia basic: Neat tiny Basic interpreter.
NaaLaa: NaaLaa rocks!
CInt: Full C interpreter
Python: Python rocks!
CH: Full C interpreter
SI: I updated this, adding what was needed to run the benchmark.
LBB: Liberty Basic Booster
LittleC: I updated this, adding what was needed to run the benchmark.
PicoC: Almost full C interpreter
DDS: I updated this, adding what was needed to run the benchmark.

Some take-aways:

FreeBasic is really fast.

If you are going to write an interpreter, and speed matters, compile to byte code or an AST.

Several well-known interpreters are surprisingly slow, in this case at least. In their defense, they may offer other features that make up for the slowness in this instance.

If using the 64bit version of FB, that means it's using the -gen gcc backend, and you can do fbc -O 2 to tell it to use gcc -O2, which should make it similar to the gcc benchmark with the same options.

Because I was surprised by the score of 1.09 for FreeBASIC compared to 1 for C/gcc. I think that for C/gcc, an INT is 32 bits long. Thus, 1.37 (for FreeBASIC with LONG) must rather be compared to 1 (for C/gcc with INT).

dkl wrote:If using the 64bit version of FB, that means it's using the -gen gcc backend, and you can do fbc -O 2 to tell it to use gcc -O2, which should make it similar to the gcc benchmark with the same options.

Yes, I would think the C code would be a nearly 1:1 translation by fbc, so the result should be statistically indistinguishable from the C-only version.

I'm pretty impressed by Java's results. I bet if you ran the test in a big loop, say 1000 times, the Java speed would come pretty near the C code speed, after the JIT has had a chance to analyze the execution paths.

EDIT: On my 64-bit Linux machine here, with the -O2 optimization for both FB and C versions, the results are so close as to be identical: 1.3 s each, run several times.

I also, for kicks, ran the Python version through the PyPy JIT and was pretty impressed. The execution time was 14 seconds versus at least 60 seconds for the CPython interpreter. Not too shabby for a dynamic language.

caseih wrote:I'm pretty impressed by Java's results. I bet if you ran the test in a big loop, say 1000 times, the Java speed would come pretty near the C code speed, after the JIT has had a chance to analyze the execution paths.

I wonder why that is surprising? It's all value types, so the basic problem for the (JIT) compiler is no different, except that the JIT compiler knows the exact CPU it runs on. Java could be FASTER than gcc, though as a JIT compiler it is probably more reluctant to spend time compiling/optimizing.

Well I never said I was surprised by Java's results (and in fact I was not at all surprised). Rather I was impressed. On my machine, when I turn the benchmark into a method and run it many times, the execution time comes within .1 s of the C version. Anyway, sort of interesting.

Wait. Why would ANDALSO be cheating, and how would it be faster than the C code? I realize that in normal FB code, AND is bitwise, not logical, so it's not short-circuited; hence it will be slower. But in C, a logical AND is short-circuited, so the C version is already as fast as it can be. ANDALSO would simply bring it to C equivalence, no?

I often learn something new about FB on this forum. I hadn't known about these operators, and I had always found BASIC's lack of true logical operators to be a weak spot, which ANDALSO and ORELSE handily address. This brings up a question: why would I not want to use ANDALSO or ORELSE in a logical expression? I can't think of any situation, unless I specifically need bitwise operations. Thoughts?

It takes a certain overhead to implement the short-circuiting behaviour, so if that isn't needed and the right-hand operand is fairly simple, then the bitwise operations may be faster. Although with -gen gcc -O 2 it may not matter anymore; gcc can probably optimize away the overhead if it's unnecessary.