2013.08.24 11:46

I haven’t had any reason to write assembly by hand for quite a while, but the other day a deep hardware geek friend of mine asked for an opinion on an instruction set for an architecture he is working on — and of course that means using his assembly instructions directly.

I had forgotten what that is like. It is common today to hear people who have never written anything in assembly put C in the same category as assembly, having themselves heard that C is “low level”. C is certainly lower level than, say, Python or Erlang, but it’s a far cry from assembly. Saying “low level” isn’t the same thing as saying “hardware level” or “lower than…”. Abstractions are always relative to the level of your problem of the moment.

Perhaps C is metaphorically comparable to assembly if you are programming in userland and your major concerns are stretching background images or including on-click sound effects or whatever. But that still doesn’t make C anything like assembly. What is most interesting about C is that compilers can be written that abstract away the quirks of different hardware instruction sets and make programs magically portable.

Of course, the phrase “portable” is in the same boat as the phrase “low level” these days. If it’s not interpreted or compiled to bytecode (or in the case of the ultra ignorant, if it’s not Java) then folks who have no experience in compiled languages will think you must be mistaken for using the phrase “portable”. Just more Java-hype-driven misconceptions that haven’t yet died out*.

After a long stint in the world of garbage collection and either extremely strong typing or completely dynamic typing it is funny to think of everything as an integer again. Assembly is not a bad way to deal with hardware-level problems (provided the instruction set doesn’t suck). A good set of machine instructions is precisely the sort of thing needed to solve the sort of problems you encounter when dealing with specific device behaviors or, say, bootstrapping a compiler for a simple language like C. An assembly with a decent syntax provides mnemonic devices in a way that makes dealing with those machine instructions enormously easier. (But I wouldn’t try writing a graphical MMORPG in pure assembly.)

C is excellent for providing abstractions at the level required for portable systems or limited-feature/limited-aspect general programming (and I still wouldn’t try writing a graphical MMORPG in pure C). Building abstractions up from there is a sensible thing to do, of course, and this permits us to stack abstractions up high enough that we can almost talk in human(ish) terms about human problems with computers and do neat things. Deciding whether a particular tool or form of expression is appropriate for a particular set of problems is all about the nature and level of the problems and is rarely a judgment on the tools themselves.

This is why I don’t like monolingual or even single-paradigm programmers. They have been tricked into thinking that mental overhead is a fixed quantity, a universal constant from which all other difficulty and complexity is derived. This is a handicap which can only be surmounted by forcing oneself to explore outside the comfort zone provided by FavLangX or FavParadigmY.

[* What is funny about that is C is far more portable than Java. “Compile once, run anywhere” doesn’t actually solve a problem that anybody has in the real world. It is trivial to compile a program once per architecture it will run on, or even once per platform. After all, that is how the runtime environments for interpreted/bytecode languages get published in the first place.]

2013.08.16 22:07

I’ve got an annoying issue on my mind: keyboard layouts. This is brought on by the recurring need for me to travel to places where two different keyboard layouts are common and other programmers look at you funny if you have to glance down to make sure whether you’re hitting ‘@’ or ‘`’ or ‘"’ or whatever is there just then. Neither of these regions, of course, use the keyboard layout of my region (Japan).

I was born in the US and grew up with that layout, but it is difficult to find US-layout keyboards here. Though I usually write only a few Japanese-language emails per day it’s just not practical to use anything but the local flavor — especially since I use かな (kana) input for Japanese, not the insane (to me) Romaji system so many people are accustomed to. Even if I did have a bunch of US-layout keyboards it would be crazy to switch among JP-layout laptops, US-layout crash carts, JP-layout customer systems and US-layout workstations at my offices. So the “when in Rome” rule is in play and I’ve grown accustomed to this layout. It now works well for me.

“But what’s the big deal?” you may ask. Well, the main keys that do Latin letters and numbers are all generally in the same place, so it seems like this wouldn’t be a big deal. The problem is the crazy keys that do “top row” and wildcard stuff like bangs, hashes, quotes, backticks, at-marks, brackets, colons, semicolons, parens, etc. (Not to mention the Japanese-specific stuff like mode changes, 全角・半角 (full-width/half-width) switching, and other nonsense the majority of the world is spared.) All the parts of a keyboard that are pristine and rarely used on a typical email/pr0nz/games user’s keyboard are usually worn smooth on a programmer’s keyboard. Those frequent-use weird keys are the ones that seem to be completely, inexplicably jumbled around when you compare US, JP and various European keyboards (so why even have different layouts…?).

But that brings up a point about how personal familiarity and user expectations play into the perception of a given tool as being “good” or “bad”. I grew up on US keyboards; I am intimately aware that they are not “bad” tools; they are bad tools for me right now. The idea that a concept being “easy” is not the same as it actually being “simple” intersects with the idea that a tool being perceived as “good” is not the same as it being objectively “better” than some given alternative.

(It is interesting to note that even the descriptive language must change between the two cases above. Though the basic pattern of thought is similar, “concepts” and “physical tools” are different in ways that magically prevent certain forms of direct comparison.)

I could easily take the position that US-layout is poo and that JP-layout is superior. I could go uber nerd and pretend that some statistical study on finger reach, frequency of use, angle of finger motion, key tension and travel variance, etc. matters when it comes to programming. (Programmers who use Dvorak do this all the time. I always wonder if they take themselves seriously, are consciously crossing the satire-reality divide as a joke, or if they have a getting-punched-in-the-face fetish.) In the case of programming layout doesn’t really matter — consistency does. To imagine that the character-per-second count of input matters in programming is to imagine that input is the hard part of programming. It’s not*. The problem is figuring out what to write in the first place.

[* Unless you are a Java or COBOL programmer. I didn’t realize how intensely verbose either of those were until I dealt with both again recently after years of absence.]

More to the point, what matters is which layout prevents the wetware halting problem in a specific person. When the human doing the work has to stop what he is doing to figure out something unrelated to the essential task at hand then he’s distracted and that sucks, and you have a wetware halting problem.

But it is still almost certainly true that some layouts are probably objectively worse or better than others for intense input jobs that are not intensely creative, like transcription. If that wasn’t true then the stenograph would never have taken off and court reporters would have been just happily flying through rapid-fire legal discourse in an uncompressed, fully textual medium on their typewriters — which would have been quite a thing to witness, actually. But they didn’t because the stenograph was objectively better than the typewriter for this. It follows that other sorts of tools can often be judged against one another in the same way.

Languages can be like that. Not the ones we speak, I mean real languages, the ones you speak math to your computer with. The problem with judging them is that there are several dimensions to a programming language. They usually center around three themes: some concept of “safety features”, some concept of “operational features”, and some concept of syntax-as-a-feature.

Safety is a bit of a mixed bag, because few folks seem to declare what they want from a language in terms of safety up-front, and rarely define whether it should be a feature of the language, the runtime or the compiler (until after the fact, of course, which is when everyone bitches about whatever the design decision turned out to be). There is the “safety from others” aspect, which The Management loves because they believe it permits them to demand that different elements of a program become invisible to other elements (the whole `public static blahblah` bit in Java). There is the type safety angle, which people are either really loud about or have nothing to say about (probably because once type systems start making sense to you you’re prone to freaking out about it for about a month). There is the runtime safety angle (Three competing approaches: “It will never run if it’s unsafe.” VS “It’s OK if it crashes.” VS “Nothing is safe and it will always run but gradually grow more magical and mysterious over time and you will never really know bwahahahahaha!”).

Operational features are sort of hard to judge, though it seems like this would be easier somehow than the safety tradeoffs. I think this is because language design is still in its infancy and we’re still totally turned around backwards about the idea that “computer science” has anything to do with computers. So the language feature problem turns into the problem of finding the Goldilocks point of “just enough of the right hot features mixed in with the cool familiar ones to get work done” as opposed to having way too few or way too many features to make much sense to use in production.

Judging syntax is totally arbitrary — until you have to come back and read your own code after two years of not having seen it. Sometimes there are languages X and Y that have the same major features but the syntax of one makes it look more like line noise than code. Meh. This is why language arguments are never going to end (so I tend to have runtime arguments instead).

That’s a lot of dimensions. With all that in mind, I personally dislike Java. I dislike some of the things the JVM assumes should be true. I dislike that it’s taught first to people who really should learn more about the nature of programming first. I… ugh, I just really dislike it as a platform. It solved a certain sort of problem we almost sorta had in the 90’s, but now it’s just causing problems — and yet as an industry we are too brainwashed to even spot them.

And, to make this post meander even further, that reminds me of guns. No, seriously. There are several excellent machine gun, rifle and pistol designs employed in militaries across the world. Many of them are decent enough that, while some have a slight edge over others in some areas, I’d go to work with pretty much any of them.

For example: the M4 vs. the SCAR. Meh. The SCAR is indeed objectively better, but the M4 is familiar enough to me that I just don’t really care which one I wind up getting stuck with. On the other hand, I don’t have nearly as much faith in the AK-47, especially in an environment where precision, reaction time, quick on/off safety and partial reloading are critical. While they are famously resistant to neglect (which is often mistaken for durability) that’s really a key attribute for a rifle intended for the Mindless Commie Horde or the Untrained And Unwashed Mass Formation of insurgent/freedom-fighter/terrorist whose backers need cheap, trashy guns with which to arm their cheap, trashy goons. Indeed, the AK-47 is in real terms dramatically less good than the SCAR or M4 and there is a whole list of rifles and carbines I would consider before going to work with one. (That said it is not absolutely awful, just so much less good than the alternatives that I’d avoid it if possible — sort of like Java. I mean, at least it’s not the C++ of guns.)

Where this is really striking is with machine guns and pistols. On the pistol side there are a rather large number of designs that actually break down frequently during heavy use (granted, “heavy use” meant something very different for me than most people who don’t shoot for a living). This never happens in a James Bond movie, of course, but in real life it happens at the most inconvenient times. Come to think of it, there is never a convenient time for a pistol to break because usually if you’re using your pistol it means your real weapons already broke. Once again, despite the absolute superiority in design of the semi-automatic over the revolver, familiarity can overcome the technical deficiencies between the two (with lots of practice) and I would actually prefer to go to work with certain revolvers over certain semi-autos. (This is to say nothing, of course, of the issue of caliber…)

With machine guns, however, the differences in good vs. bad designs are vast. In nearly any modern military you’re absolutely spoilt. A “bad” gun is one that doesn’t have a TV built into the stock to ease the passage of long turns on security and automatically arrange for pizza to be delivered to an 8-digit grid. They are mindlessly easy to load, sight, barrel change, fire, strip, clean, carry, etc. The links disintegrate and can be re-used on unlinked ammo, all sorts of cool toys fit around the thing (which can, sometimes, make them start to suck just from the “too much Star Wars” problem), runaways can have their belt broken, they will eat through just about any garbage that gets caught in the links or even fire bent ammo, and they probably prevent cancer. They aren’t even unreasonably heavy (and it’s patently unfair to compare them to the uber lightness of an M4). It’s amazing how well these things work. But when great machine guns are all you know you start complaining about them, wishing you had a 240 when you’ve been handed an M60 (because it’s possible to jam it up if you accidentally load it bolt-forward, it usually lacks a rail system, or you’re an insufferable weakling and won’t stop bitching because you didn’t get the lightweight bulldog version).

I’ve had the misfortune of having to go to work with old Soviet machine guns, though, and can attest that they are indeed objectively horrible.

When we say “crew served weapon” in modern armies we mean “the weapon is the centerpiece of the crew” not “this weapon is absolutely unreasonable to assign to any less than three people”. The term “crew served” may have had more semantic purpose back when operating the machinery actually took a crew — back in the days when tripods included full-sized chairs, ammo came on a horse-drawn cart, and vast amounts of oil and water were consumed in operation. But that was the early 1900’s. We still employ machine guns as “crew served weapons” because it is generally advantageous to have an AG with you and usually a good idea to actually set up a tripod if you find yourself facing off against a for-real infantry force. That is completely different than calling it “crew served” because wielding one is equivalent in complexity to running a mortar section.

Today a single person can easily maintain and operate an M240, M60, MAG58, 249, MG42, MG3, MG11, or whatever. Not so with, say, the PKM (or heaven forbid the SG-43). An RP-46 is actually better if you come to the field with American-style assumptions that a single person is adequate to handle a machine gun (the “zomg! PKM!” not-a-Soviet fanboi hype may have infected your brain already though and make this sound like a crazy statement — until you actually try both).

Let’s clear one point of nomenclature up straight away. The PKM is not really belt fed, it is chain fed, and the chain doesn’t disintegrate. It’s also extremely strong. “Strong” as in you can support more than a single person’s weight from an empty belt. The longer the belt the more bullets, and this seems good at first, until you realize that holy shit it feeds from the wrong side (the right). This prevents a right-handed shooter from feeding the pig himself with his left hand and leaves the indestructible spent chain right in front of the shooter, or rather tangled around his feet the moment he has to get up and move — which you have to do pretty much all the time because gun fights involve a lot more running around than shooting.

This little design whoopsie! has made me bust my face in the dirt or on the top of the gun more than once — not so convenient at interesting moments, and absolutely detrimental to my Cool Point count. Being at a tactical disadvantage and almost getting killed is one thing, but people actually saw that and it was embarrassing. Every. Goddamn. Time. It also hurts pretty bad (like, my feelings!).

But the failure of design doesn’t stop there. That stupid belt is nearly impossible to reload by hand without wearing gloves and using a lever to force the rounds into the thing. You have to at least find yourself some gloves and a stiff metal boxtop, wrench, ancient steel desk or something else firm that the butt of the round casings won’t slip too easily on to force those stupid rounds into their fully-enclosing holes. (And to you, the guy reading this who is thinking “I went to the range once and loaded a brand new, totally clean, uncorroded, never-been-fired-or-stepped-on belt — didn’t seem too hard to me”: sure, you might load 50 rounds into a pristine, lubricated belt by hand, but how about 5000 into a hundred chewed up belts?).

They also rust instantly, in accordance with the PKM Belt Rust Time Law: the time it takes a belt to rust to the point that it will malfunction if not serviced immediately — but not enough for management to replace it — is exactly the amount of time between when you last placed it in a sealed container and now. (Go check right now — you’ll see that this rule somehow always holds.) If you try oiling them to prevent that they gum up or actually start “growing hair” instantly. It’s a never-ending cycle of trying to keep the belts from making your life suck without giving up, throwing them all away and resolving to fight by using your bad breath and attitude alone.

This brings me to why the Soviets conveniently invented a reloading machine. Which also conveniently sucks. (Compare.) I can’t even begin to explain the inadequacy of this stupid machine, but it actually is the only way to maintain even a marginally reasonable reload rate for belts. There is, however, no way you could do this under fire. Or on Tuesday. (The machine jams spectacularly at random, Tuesday tending to be the worst day.)

I haven’t even begun to mention the inadequacy of the ammo crates. The standard ammo crates are insanely stupid. Actually, this isn’t a gripe reserved just for 7.62 ammo, it’s true for all commie ammo I’ve ever seen. The ammo cans aren’t like the infinitely reusable, universally useful, hermetically sealed, flip-top boxes found in non-backward armies. They are actually cans. Like giant spam cans, but without a pull-tab — not even a sardine-key. They come with a can opener. A huge one (but only one per crate, not one per can). You read that right, a can opener — and only one for the whoooole crate, so you better hope Jeeter doesn’t drop it in the canal.

You may be thinking “Oh, a can opener, I’ve got one of those that plugs into the wall, how could Jeeter drop such a thing in the canal?!” Glad you asked. When I say “can opener” I don’t mean the first-world type. I mean the part of your boyscout era Swiss Army Knife you never realized had an actual use. You know, the lever-kind where you hook the grabby part onto the crimp at the top edge of the can and pull to lever the pointy part down until it makes a tiny puncture, then slide over a touch and repeat until you’ve prized and ripped a gash large enough to do your business. Let that sink in.

Now remember, we’re talking about an ammo can. Like with bullets that people need to do their job, hopefully sometime this year, but perhaps much sooner under the sort of conditions that make calmly remembering details like where the fscking can opener is very difficult.

Once you’re inside the fun just doesn’t stop — no way. The thousand or so rounds inside are in boxes of 5 or 6 or so. Well, “boxes”, what I really mean is squarishly shaped topographic nets made of some material that tenuously holds together well enough to almost seem like a paper packing material — so it loosely resembles a little paper box. So this means that, unlike what you would have come to expect from armies in nations where maximizing busywork employment wasn’t the main and only goal, the can that you worked so hard to open isn’t full of pre-loaded belts. That would deprive someone of a government job somewhere and that’s just not Progressive. So inside there are dozens and dozens of those tiny, crappy, flimsy little cardboard boxes, each containing a few rounds. And before you start worrying that those stupid boxes are the end of the show, let me assure you that the party just keeps rolling along — each round is individually wrapped in tissue paper.

You just can’t make this shit up. Its amazing. How on earth could such a horrible, stupid, backward constellation of designs emerge from one of the two nations to achieve serious, manned spaceflight before the end of the 20th century? GHAH! Had WWIII ever occurred I wouldn’t have known whether to snicker and lay scunion, secure in my knowledge of how thoroughly the enemy had hamstrung himself, or feel pity and offer a face-saving peace deal to spare the poor saps who are forced to try to actually get anything done under these conditions.

But maybe it’s brilliant

A guy I worked with a few years ago called Mule had a theory that this was, in fact, an excellent design for a machine gun system in a Marxist paradise. His reasoning went something like this:

Nobody can use it alone, so you can’t get a wild hair up your ass and get all revolutionary — you need to convince at least a platoon to get crazy with you.

You employ a gazillion people not only in the production of a billion lovingly gift-wrapped-by-hand rounds of machine gun ammunition throughout the nation, you employ another gazillion or so to open and load the belts.

It’s the ultimate low employment figure fixer — at least until the state digests enough of itself that this becomes suddenly unsustainable, of course (but that never stopped a socialist, despite the ever-lengthening list of failed socialist states).

You never really intended to go to actual war with the Americans in the first place (wtf, are you crazy?!?), so what you really needed was a police weapon of deliberately limited utility with which to suppress political dissent, not an actual infantry weapon of maximal utility under all conditions.

Mule’s theory was that this machine gun design — from the actual shittiness of the gun itself to the complete circus of activity which necessarily surrounds its production, maintenance and use — is a brilliant design from the perspective of the State, not the soldier. Furthermore, that the aims of the two are at odds is simply the natural outcome of being produced in a socialist/Marxist system. Mule was one of the most insightful people I’ve ever met (and I’m not being rhetorical — he really was a hidden genius).

Thinking about what he said has made me re-evaluate some of my assumptions of bad design. Perhaps the “bad” designs are excellent — not for the end user, but for whoever is in charge of the end user. And that brings me back to thinking about just why the Java programming language is so bad, yet so prolific, and how perhaps the design of Java is every bit as brilliant as the design of a good keyboard, differing in that each is brilliant from diametrically opposed points of view.

Java is the PKM of the programming world. It’s everywhere, it sucks, it is good for some (Stalin-like) bosses, and the whole circus surrounding its existence just won’t ever go away. And sometimes those of us who know in painstaking detail why a 240 (or nearly anything else in common use) is better are still stuck using it to get real work done.

I should, perhaps, write another edition of this interdisciplinary mental expedition in text focused around the failings of JavaScript (oh, excuuuuuse me, princess, “ECMA Script”). Or Ruby. Or MySQL. Or SQL itself. Or HTTP. Come to think of it, subjects of my angry dissatisfaction abound.

2013.08.15 13:44

I’ve come across a multitude of technically inaccurate explanations of monads on the web, all intending to explain them in simple terms to FP newbies, most revolving around their use in Haskell. They all suck. This is mine. It also sucks, but I hope it sucks less.

[NOTE: I have no intent here to explain the mathematical concept of monads, the semantic gap between the way the term is used in set theory and in some flavors of functional programming, or any of that high-minded magic. My purpose is to explain what a monad is in terms of a computing utility to a functional programmer. This is a discussion about the sort of tool it is, not the theory behind that tool.]

Imagine using a purely functional language where side effects are verboten and we can’t do IO or anything else useful because of this. The host environment can clearly do IO just fine, otherwise our environment couldn’t do things like read memory locations (or our code) and our program would be useless. So it is obvious the host environment can do things externally that our program itself cannot do directly. This does not prevent us from describing what we wish it would do on behalf of our program, however.

In this way the host environment is sort of like a pantheon of gods and our programs are mere mortal blips of disembodied consciousness — capable of thought, desire and expression but powerless to touch the physical world in any way. All we can do is express our desires as regard the results of our thoughts and appeal to the gods to pity us and take action in accordance with our desires. Whether they take actions on our behalf is entirely up to them.*

With this in mind…
Imagine a function written in SomeLanguage:

int f(x): return x

It takes an integer and returns an integer. So what? The system doesn’t do anything with it so it is practically useless. What we need is a way to bundle that value with a request to the host. So imagine a wrapper function:

tuple w(int x):
return (f(x), "Please put this in STDOUT")

So w() has taken the output of f() and wrapped it in a tuple that contains the value returned by f() and a prayer to the system gods to “Please put this in STDOUT”. Pretty straightforward. Now the system gods can do whatever they want with it, but our language itself can’t — because talking to STDOUT would be a side effect and that’s strictly forbidden on Planet Functon. What we can do, however, is pray that the gods take mercy on us and act according to our wishes as regard the value that accompanies the prayer.
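
In runnable terms — this is Python with names of my own invention, not any real API — the w() being described is nothing more than:

```python
def f(x):
    return x

def w(x):
    # Bundle the pure result of f() with a "prayer" — a request
    # the host environment may or may not choose to act on.
    return (f(x), "Please put this in STDOUT")
```

Nothing here performs IO; the tuple merely describes the IO we wish would happen.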

If this is the best model of a monad you can muster at the moment, then just stop here. This model is sufficient to make the concept of monads useful to you as you write programs that do useful work. (Which is the whole point. Unless you’re an academic, in which case your point is to write a lengthy article in a peer reviewed journal in which your explanation of monads is prefaced with a demonstration of your deep knowledge of set theory and type theory and esoteric discussion of things like monoids and thus make it appear that unfamiliarity with these subjects precludes the ability to use monads in a useful way in functional programs.)

But there are a few unresolved issues above, most of which I won’t address below, but the glaring issue of typing is relevant… (The other glaring issue of the undefined and imaginary syntax of SomeLanguage won’t be addressed at all.)

The type tuple is pretty ambiguous for this use (we could be talking about coordinates or relational values or immutable strings or whatever). How are the gods to know that we have a prayer for them? Should they inspect every tuple, or even every value we dream up, just checking for a “prayer” type? Of course not, they are busy doing immaculate things. We need a more clear signal.

One way to clear this up is to make a function that takes a function, its arguments and a request specification and returns a type prayer whose existence is itself a magical signal to get the attention of the gods. So ultimately a prayer is a type whose definition is a combined value composed of a thingy plus a prayer request regarding that thingy:

prayer p(function z, list l, request r):
return (z(*l), r)

And then define an action of type request and rewrite w() in a way that returns a prayer:

request q = "Print to STDOUT"
prayer w(int x):
return p(f, [x], q)

Here we have defined a request for the system to perform, passed a reference/pointer/label to the function f() along with its arguments to the prayer maker function p(), which has executed f() along with its argument and returned a bundle of type prayer that contains the original integer return value along with a request to the system gods to do something regarding the bundled value.
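
A minimal Python sketch of the same machinery (the prayer type is just a tuple here, and all the names are hypothetical):

```python
def p(z, l, r):
    # Apply the pure function z to its argument list l and
    # bundle the result with the request r.
    return (z(*l), r)

q = "Print to STDOUT"

def f(x):
    return x

def w(x):
    # The monadic version of f(): returns a value-plus-request pair.
    return p(f, [x], q)
```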

Note that in Haskell it is common to define the underlying function (represented here by f()) and its monadic version (represented here as w()) separately. I suppose this isn’t strictly necessary — one could define monads in terms of lambdas — but this is the way things are done.

So we have specified a side effect, but we are only praying that the system takes an action based on our specification. Nothing more. Obviously I’m glossing over the details of how a request object itself is actually defined, but this is the general idea behind how monads are used to write programs in purely functional languages that have side effects.

An interesting consequence of this idea of a program specifying actions instead of actually taking them is that it is now trivial to write a host environment that deliberately ignores all prayers. In this way writing a host environment purely for the purpose of examining the behavior is also trivial. It’s the ultimate sandboxing tool. It is equally easy to write a host environment that selectively observes requests, taking only some and ignoring all others, or ignoring some and taking all others.
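
As a sketch (Python again, hypothetical names), the same prayer value can be handed to either kind of host:

```python
def real_host(prayer):
    # A host that honors "Print to STDOUT" prayers.
    value, request = prayer
    if request == "Print to STDOUT":
        print(value)
    return value

def audit_host(prayer, log):
    # A sandbox host: records every request but acts on none.
    value, request = prayer
    log.append(request)
    return value
```

The program under test is identical in both cases; only the gods change.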

I believe the abundance of confusing monad explanations to be a side effect of the Haskell community being disproportionately populated with mathematicians, type theorists and set theorists (“for real” or self-educated) and their explanations assume a knowledge base and vocabulary that is foreign and intimidating to most newcomers. The Clojure community doesn’t seem to have this problem, at least not as deeply.

[*It is easy to reverse this metaphor, of course, and imagine that we are incorporeal gods, capable of thought and expression of our command to the slavish host to do whatever we specify. But I like the “program as mortals, host as gods” way better, because it extends to include the very real case of writing a test host that takes no action and merely observes (or even checks for the correctness of) the program’s behavior.]

2013.08.14 20:03

Yesterday I came across a blog post from 2010 that said less than 10% of programmers can write a binary search. At first I thought “ah, what nonsense” and then I realized I probably haven’t written one myself, at least not since BASIC and Pascal were what the cool kids were up to in the 80’s.

So, of course, I had a crack at it. There was an odd stipulation that made the challenge interesting — you couldn’t test the algorithm until you were confident it was correct. In other words, it had to work the first time.

I was wary of fencepost errors (perhaps being self-aware that spending more time in Python and Lisp(s) than C may have made me lazy with array indexes lately) so, on a whim, I decided to use a random window index to guarantee that I was in bounds each iteration. I also wrote it in a recursive style, because it just makes more sense to me that way.
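Reconstructed from the description above (not the code I actually wrote that day), the recursive version with a random in-window index looks something like this. Picking any index inside [lo, hi] guarantees you stay in bounds on every call:

```python
import random

def rsearch(xs, target, lo=0, hi=None):
    """Return an index of target in sorted list xs, or None if absent."""
    if hi is None:
        hi = len(xs) - 1
    if lo > hi:
        return None                      # window closed: not present
    mid = random.randint(lo, hi)         # any in-window index is in bounds
    if xs[mid] == target:
        return mid
    if xs[mid] < target:
        return rsearch(xs, target, mid + 1, hi)
    return rsearch(xs, target, lo, mid - 1)
```

Replacing `random.randint(lo, hi)` with `(lo + hi) // 2` turns it back into the textbook halving version.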

Two things stuck out to me.

Though I was sure what I had written was an accurate representation of what I thought binary search was all about, I couldn’t actually recall ever seeing an implementation, having never taken a programming or algorithm class before (and still owning zero books on the subject, despite a personal resolution to remedy this last year…). So while I was confident that my algorithm would return the index of the target value, I wasn’t at all sure that I was implementing a “binary search” to the snob-standard.

The other thing that made me think twice was whether I would ever breach Python’s recursion depth limit on really huge sets. It’s obviously possible, but was it likely enough to occur in the span of a few thousand runs over large sets? Sometimes what seems statistically unlikely pops up as a hamstring slicer in practical application. In particular, were the odds good that a random guess would lead the algorithm down a series of really bad guesses and occasionally blow up? Or, on the other hand, were the odds better that random guesses would occasionally be so good that on average a random index beats a halved one (of course, the target itself is always random, so how does this balance out)?

I didn’t do any paperwork on this to figure out the probabilities, I just ran the code several thousand times and averaged the results — which were remarkably uniform.

I split the window-narrowing step into two procedures: one that narrows the window to be searched randomly, and another that narrows it by halving. Then I made it iterate over ever larger random sets (converted to sorted lists) until I ran out of memory — it turns out a list sort needs more than 6GB at around 80,000,000 members or so.
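The shape of that experiment can be sketched like this, with the narrowing strategy passed in as a function and iteration counts averaged instead of timed. The sizes here are tiny compared to the 80,000,000-member lists in the text, purely so it runs quickly:

```python
import random

def halve(lo, hi):
    return (lo + hi) // 2

def rand_pick(lo, hi):
    return random.randint(lo, hi)

def count_steps(xs, target, pick):
    """Binary-search sorted xs for target; return the iteration count."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = pick(lo, hi)
        if xs[mid] == target:
            return steps
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

def average_steps(pick, size=100_000, runs=1_000):
    """Average iterations to find a random present target."""
    xs = list(range(size))
    total = sum(count_steps(xs, random.randrange(size), pick)
                for _ in range(runs))
    return total / runs

# average_steps(halve)      # hovers near log2(size)
# average_steps(rand_pick)  # a bit worse, but not by much
```

Halving should come out consistently ahead (roughly log2(n) steps versus roughly 2·ln(n) for random narrowing, if the quickselect-style analysis carries over), which matches the 24-versus-34 gap described below.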

I didn’t spend any time rewriting to clean things up to pursue larger lists (appending guaranteed larger members instead of sorting would probably permit astronomically huge lists to be searched within 6GB of memory), but the results were pretty interesting when comparing binary search by window halving against binary search by random window narrowing. It turns out that halving is quite consistently better, but not by much, and the gap may narrow at larger values (though I’m not going to write a super huge list generator to test this idea just now).

It seems like something about these results is exploitable. But even if it were, the difference between iterating 24 instead of 34 times over a list of over 60,000,000 members isn’t much difference in the grand scheme of things. That said, it’s mind boggling how far from Python’s recursion depth limit one stays, even when searching such a large list.

Something just smells exploitable about these results, but I can’t put my finger on why just yet. And I don’t have time to think about it further. Anyway, it seems that the damage done by using a random index to make certain you stay within bounds doesn’t actually hurt performance as much as I thought it would. A perhaps useless discovery, but personally interesting nonetheless.

2013.08.13 15:58

If you find yourself writing a class and you can’t explain the difference between the class and an instance of that class, just stop. You should be writing a function.

This antipattern — needless classes everywhere, for everything — is driving me crazy. Lately I see it in Python and Ruby a good bit, where it really shouldn’t occur. I feel like it’s a mental contagion that has jumped species from Java to other languages.

In particular I see classes used as inter-module namespaces, which is odd since it’s not like there is a tendency to run out of meaningful names within a single source file (of reasonable length). Code importing the module can use “from foo import *”, or “import foo”, or “from foo import name”, or even “import foo as bar” (ah ha!) and make flexible use of namespacing where it is needed most: in the immediate vicinity of use.

So don’t write useless, meaningless, fluffy, non-state&behavior classes. Any time you can avoid writing classes, just don’t do it. Write a function. It’s good discipline to see how far you can get writing zero classes in an implementation, and then write it with some classes and see which is actually less complex/more readable. In any case writing two solutions gives you a chance to examine the problem from multiple angles, and that’s nearly always an effort that results in better code, since it forces you to value grokking the problem over hacking together a few lines that sort of cover the need of the moment (and never coming back to do it right).
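A toy illustration of the point, with made-up names: a “class” that is really just a namespace around one function, next to the plain function it should have been.

```python
class SlugMaker:                        # the antipattern: no state, one behavior
    def make_slug(self, title):
        return title.lower().strip().replace(" ", "-")

def make_slug(title):                   # the same thing, honestly
    """Turn a title into a URL-ish slug."""
    return title.lower().strip().replace(" ", "-")

# What is the difference between SlugMaker and an instance of SlugMaker?
# If you can't answer that, the class is pure ceremony:
# SlugMaker().make_slug(" Hello World ") == make_slug(" Hello World ")
```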

Or maybe I should be admonishing people to seek flatness in namespaces, since this implies not writing classes that are stateless containers of a bunch of function definitions which incidentally get referred to as “methods” even though they don’t belong to a meaningful object in the first place.

Or maybe I should be railing against the oxymoronic concept of “stateless objects/classes”.

Or maybe I should be screaming about keeping our programming vocabulary straight so that we can finally say what we mean and mean what we say. (Sound advice in all walks of life.)

This last idea is perhaps the most indirect yet powerful of all. But it implies “standards” and so far in my experience “standards” and “computing” don’t tend to turn out very well once we get beyond, say, TCP/IP or ANSI and Unicode.

2013.08.12 22:27

One of my daughters just surprised me by showing up dressed for bed — underbits, bound hair, PJ bottoms and top buttoned up. It surprised me because I was headed in to dress her. At first I figured her mother must have dressed her for me. But nope, she had done it alone and wanted to show me.

If she was 17 the only odd part would be that I felt the need to dress her in the first place. But she’s 2. Two. And all her buttons? And her hair? Kids these days…

It reminds me not to underestimate what can be accomplished when motivation is intrinsic to the doer. The skill might be marksmanship, programming, aesthetic design, buttons and hair or whatever; the capacity for a person to demonstrate extraordinary ability in an area they can bring themselves to focus intently on should never be underestimated, regardless of their assumed level of competence. Consider the proto-Israelis in 1947, Japanese chip makers in the 1970’s, me writing rainbow box emulators in 1989, Chinese component makers in the 2000’s, and my daughter in 2013. None of these things seem odd when referenced lightly, but all are quite extraordinary when remembered in the context of their time.

The number of daily, unheralded, extraordinary private achievements must be mind boggling.