SICP Exercises 1.22-24 are Somewhat Out of Date

(This is the point where I broke down and finally started using Scheme. Going without my Emacs shortcuts in DrScheme is painful, but the IDE-like environment is probably a lot friendlier to the raw recruits. I’m as yet too lazy to rustle up SLIME48/Scheme48 for now and just want to finish some problems….)

Starting with exercise 1.22 in SICP, we are given code with a reference to “runtime.” This does not exist in the MzScheme language I was using under DrScheme…. Thanks to a quick tip from Sridhar Ratnakumar, I didn’t have to bang my head against the wall figuring this out. By defining runtime thusly, (define (runtime) (current-milliseconds)), the code from SICP will run as-is.
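
For reference, here is the book’s timing harness from Exercise 1.22 with that shim in place (it assumes the `prime?` procedure from section 1.2.6 of SICP is already defined):

```scheme
; MzScheme/DrScheme shim for SICP's "runtime"
(define (runtime) (current-milliseconds))

; Timing harness as given in SICP Exercise 1.22
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
```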

One of the first things we have to do here is to write a procedure search-for-primes that “checks the primality of consecutive odd integers in a specified range.” If you think in BASIC, you’ll probably want a FOR-NEXT loop for this. Or if you’re really hip, you might want a DO-UNTIL loop. I looked at some options for doing that, but I have to admit that a recursively defined iterative function is much simpler– much simpler than trying to keep track of yet-another-language’s control construct syntax….
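
A minimal sketch of that recursively defined iterative search (assuming the book’s timed-prime-test harness is already loaded; the exact shape here is my own, not necessarily the code file’s):

```scheme
; Check the primality of consecutive odd integers from a through b.
; Iterative process: each recursive call is in tail position.
(define (search-for-primes a b)
  (cond ((> a b) (newline) (display "done"))
        ((odd? a) (timed-prime-test a)
                  (search-for-primes (+ a 2) b))
        (else (search-for-primes (+ a 1) b))))
```

No loop-construct syntax to remember: the “loop variable” is just the argument `a`, bumped by 2 on each tail call.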

Exercise 1.22 asks us to compare the runtimes of our prime checker for the ranges of 1,000 to 1,000,000. On my machine, these came back as taking 0 milliseconds– apparently my box is a little faster than the ones running back in the eighties. I started my comparison at 100,000,000… and did a quick munge to make the code only give output when we actually hit a prime. We can see the output of my code below. Each time my range goes up by a factor of 10, we expect the time it takes to find the prime to increase by a factor of about SQRT(10), or 3.1622. Is that what’s happening here? (Here’s my code file if you want to try this at home.)
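
One way to do that munge (a sketch– the book prints every candidate, so here the printing just moves into report-prime so only actual primes produce output):

```scheme
; Quieter variant of the book's harness: print nothing for composites.
(define (timed-prime-test n)
  (start-prime-test n (runtime)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime n (- (runtime) start-time))))

(define (report-prime n elapsed-time)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))
```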

And it looks like we’re not quite halving the time it takes to solve the problems now…. Nice.

Moving on to Exercise 1.24… we end up with problems. The random number generator only accepts numbers within a certain range– and because our machine is running so fast, we’ve bumped things up to some *really* large numbers….
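
Concretely, the book’s fermat-test calls (random (- n 1)), and MzScheme’s random errors out when its argument gets too large. One crude workaround (an assumption on my part, and not a real fix– it only samples the test base a from a capped range, which weakens the test for huge n) is to clamp the argument:

```scheme
; SICP's Fermat test, with the argument to random clamped so it
; stays within what MzScheme's random will accept.
; 4294967087 is the largest argument MzScheme's random takes
; (check your implementation's docs-- this limit is an assumption here).
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (min (- n 1) 4294967087)))))
```

This uses the book’s expmod procedure unchanged; only the choice of random base is restricted.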

I don’t think I have enough information to give a really good answer to Exercise 1.24, but it’s clear that log(n) growth is the way to go if you can manage it. The fact that you have to choose the number of Fermat tests to run adds some additional complexity to the question… and the combination of a fast computer with a limited random number generator makes it difficult to determine what’s going on here experimentally.

By the way, instead of
(define (runtime) (current-milliseconds))
you can use
(define runtime current-milliseconds)

The subtle difference is that this creates an alias for current-milliseconds (taking advantage of the fact that functions are first class objects in Scheme), instead of creating a new function which calls current-milliseconds and returns its value unchanged.

Which doesn’t change anything really (except possibly execution speed) but you can’t do it the second way in C++ 😉

Nice work! I was just running into the same problems myself, and it was great to find this well-reasoned response to a slightly flawed question-set. By the way, I’ve heard that MIT has updated their course to use Python instead of Lisp now…