An artfully inefficient Hello World program

This is the tale of a really, really inefficient way of printing out the text Hello World.

It was not a simple programming project. It kind of turned into a really slow-burning one.

NOTE: In February 2015, the project underwent some major new developments. This page is being reconstructed and rewritten! Meanwhile, here’s a link to the Git repository.

Some minor background

I hated math in school. There was one big reason for this: there was no real motivation to learn any of the stuff. This, combined with my depression, didn’t really make it much fun to learn.

But this little project was one of those things that slooooowly woke me up to brush up at least some basic practical math skills. It got much more interesting once I started tinkering around with Processing – for example, at one point I thought “man, juggling all these numbers with x and y components is really fucking tricky”, then I read tutorials that used vector classes, and lo and behold, I had motivation to dig out my math books and re-read the stuff on vector calculations.

As of writing (in early 2015), the Finnish school system is planning on introducing programming as a mandatory subject. GOOD. Especially if the math classes have something to contribute to the programming classes and vice versa. Give the kids practical ways of using math for good and awesome, I say. I only learnt the practical side of programming on my own, and not much of the math, because there was no motivation to study it.

The original code

This is how it all began in February 2001, written for the mathematical programming language GNU Octave:

1. We have a string: “Hello world”
2. We turn it into an array of ASCII values: [72, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]
3. We use Octave’s polyfit() function to fit a polynomial which, for x = {1..11}, evaluates to the ASCII values in the above array.
4. polyfit() returns the coefficients in array “p” and the evaluated values in “evald”.
5. We convert the values from “evald” back to characters and print them out. Tadah!
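The steps above can be sketched in Python, with NumPy’s polyfit standing in for Octave’s (one assumption on my part: I fit degree 10, the exact interpolating degree for 11 points – the original Octave run apparently produced a higher-order fit):

```python
# A sketch of the recipe with NumPy standing in for Octave.
# Assumption: degree-10 fit (the interpolating degree for 11 points).
import numpy as np

text = "Hello world"
y = np.array([ord(c) for c in text], dtype=float)  # ASCII values
x = np.arange(1, len(y) + 1, dtype=float)          # x = 1..11

# Fit a polynomial through the 11 points ("p" in the Octave version).
p = np.polyfit(x, y, deg=len(y) - 1)

# Evaluate ("evald"), round to the nearest integer, map back to characters.
evald = np.polyval(p, x)
decoded = "".join(chr(c) for c in np.rint(evald).astype(int))
print(decoded)  # should print "Hello world" again
```

The rounding step is what makes the whole stunt survive floating point: the fitted values only need to land within 0.5 of the true ASCII codes.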

GNUPLOT produced this delightful graph that shows what the polynomial looks like when it’s plotted out:

…I’ve been told that Octave these days has plotting functionality of its own. I don’t know. For practical purposes, I’ve since moved to other software packages. My current choices for number-crunching are R and Maxima.

A mysterious block of numbers

So Octave creates a polynomial – so far, however, we haven’t seen the coefficients. Here’s what the original dump of the coefficients looked like:

Yep. Now that the smoke clears, we can see that Octave has produced a nice 13th order polynomial.

Here’s a few immediate observations:

1. Precision is going to be a total headache here. We’re operating in the realm of binary computers, and here we have gigantic floating point numbers. Shit is going to get rather evil when precision becomes an issue. Rounding errors are going to make us cry.
2. Mathematicians are probably screaming in agony right now. This is computer mathematics, not real mathematics! You’re supposed to prove that the house is on fire, and go back to bed! I’m a computer person, so after proving this is feasible, I’d rather design a networked smart-home application that automatically forwards fire alarms to the fire department.
3. Computer science folks will probably say “oh gosh, I’m so glad that IEEE floats are a thing, because otherwise this would not work.” Such are the mysterious modern ways of the floating point numbers.
4. Most people will look at the bunch of numbers and say “where the FUCK is that ‘Hello World’ hidden, anyway?” MISSION ACCOMPLISHED.

The message is recovered by evaluating the polynomial for \( x = 1 \ldots 11 \) where \( x \in \mathbb{Z} \), rounding each value of \( y \) to the nearest integer, and converting the resulting integer to a character based on its ASCII value.

Implementation

The above numbers mean that we’re totally at the mercy of IEEE double precision floats.
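Just how much mercy do IEEE doubles extend to us? A quick sanity check (Python floats are IEEE doubles on any sane platform, and sys.float_info exposes the format’s parameters):

```python
# IEEE 754 double precision: 53 significant bits, roughly 15-16
# significant decimal digits.
import sys

print(sys.float_info.mant_dig)  # 53 mantissa bits
print(sys.float_info.epsilon)   # 2**-52, about 2.22e-16
# With coefficients as gigantic as ours, anything beyond ~16
# significant digits of the dumped values is pure noise.
```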

You can store the above numbers in doubles, write an evaluator, and you get proper results… as long as you’re using a real, honest-to-goodness, modern programming language. I tried this in Ruby – no problems, works as advertised.
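“Write an evaluator” amounts to only a few lines. A minimal sketch (in Python rather than Ruby, using Horner’s scheme; the coefficient list in the example is made up, ordered highest power first as polyfit emits them):

```python
def horner(coeffs, x):
    """Evaluate a polynomial given its coefficients from the highest
    power down to the constant term, using Horner's scheme: one
    multiply and one add per coefficient, numerically well-behaved."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Made-up example: 2x^2 + 3x + 5 at x = 4 -> 2*16 + 12 + 5 = 49
print(horner([2.0, 3.0, 5.0], 4.0))  # 49.0
```

Feed it the dumped coefficients and x = 1…11, round each result, and the characters fall out.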

My original plan was to write this program in FORTRAN 77, because I wanted to pay a fitting tribute to the grandfather of number-crunching.

My original implementation didn’t work at all, though. One of the reasons was that Fortran is a little bit picky about implicit number conversions. I had to be damn explicit at every turn that calculations have to be performed as double precision floats. This confused the hell out of me – modern languages usually have some sort of notion of “the programmer has no fucking clue how to implement this and mixes integers and floating point numbers willy-nilly, let’s fall back to full-on floating point math to be sure”.

Secondly? …uh, I’m not a Fortran expert, but I think Fortran has some really weird concepts of what the hell a double precision float is. Based on a little bit of digging, I think FORTRAN 77’s notion of double precision numbers isn’t the same as IEEE doubles, and that explicit support for IEEE floats is a newer notion in Fortran land. *ahem* Which would make sense, because the IEEE floating point standard only came to be in 1985…

So here’s the big damn caveat of the FORTRAN 77 version: Trying to compile this with gfortran (which is primarily a Fortran 9x compiler) will probably screw the end results up because it’s doing whatever passes as doubles in Fortran-double-land.

However, if you compile this using f2c, it will work, because it essentially stores the values in C doubles as-is, and everything is smooth sailing in IEEE-double-land.