Some say the information revolution happened in the 1970s upon the advent of the personal computer; some say it happened in the ’90s when the internet reached critical mass. These were incredibly important events in the history of humanity, but I claim the information revolution has seen only its beginnings.

Consider the case of the automobile. The transportation revolution did not happen overnight in 1897 when Rudolf Diesel ran his first working engine. Instead, it happened gradually as the world changed around people as a result of this new technology. Oldsmobile and Ford refined the process of creating cars, and with that transformation came a new kind of economics based on the assembly line. The transportation revolution reached its apex with the construction of the interstate highway system, upon which the choice of where a person lived and where he worked became decoupled.

We have seen the analog of the advent of the combustion engine and the beginning of Ford’s innovations. “The Information Superhighway” is superficially related to the creation of the highway system, but I think of that as an echo of the revolution of the highway system, not the revolution of the information age. We are at the creation of the assembly line, before it gained wide adoption. Google is Ford.

The reason is that the information age is about information, which is a totally different kind of beast from the traditional commodities around which our economy is based. As recently as five years ago, businessmen tried to charge $50 for ready-made software sitting on a shelf, as if it were a television or a bag of rice. But that is an ancient conception, one that completely fails to reflect the economics of what a software package is.

To see this issue clearly, we have to step back from our personal conceptions of money as a thing which allows us to live and operate in society, and think about it in terms of what it was when it was created: an abstraction for trade, which served to make society as a whole more efficient. Money is about allocating scarce resources to where they will most benefit society. It isn’t perfect at doing that job, but it is pretty damn good all things considered. It works way better than communism, which puts that allocation in the government’s hands.

Back to the television and the software package. A single television requires resources to produce. When you buy a television from the shelf, you are communicating “a television has value to me, please continue to allocate resources to produce televisions”. As society moves beyond the need for new televisions, people stop buying them, money (an abstraction for resources) stops flowing to the manufacturer of the televisions, and the company shrinks or dissolves so that those resources can be allocated somewhere where they can more benefit society.

Try to apply this story to software. Software costs resources to produce initially, but after it is on the shelf, all the resources it will ever consume have already been spent during its development, modulo its useless and wasteful packaging. There is a component of continuing to improve the software, but the cost of improving it is not proportional to the number of users the way the cost of producing televisions is proportional to the number of people who want televisions. While treating software as a commodity does serve to compensate the creator for producing it, seen from the perspective of the economy as a whole rather than a single business, it makes no sense. The idea of commodity software is a misallocation of resources.

Fortunately, the idea of commodity software is gradually fading away. We have mostly done away with the wasteful practice of putting software — essentially free to reproduce — into boxes, which have a real cost to reproduce and are only advertisements until the purchaser throws them away. The model persists in the App Store, among other places, but note how the great giants of the age are no longer using it. Apple profits off of others using this model, but does not use it directly. Google and Facebook will have nothing to do with it. Microsoft is dying a slow, painful death.

While there is a social realization that the old commodity model isn’t working anymore, it is not clear to me that anyone sees where it is headed. Google has hit a sweet spot where they can provide value to everyone without charging consumers any money — by collecting data about people, they make it easier for producers and consumers to connect when they stand to benefit from each other, and they found a nice place to skim compensation off of that arrangement. Google essentially has one very valuable product. Apple’s business model is basically that of a hardware company. But how does a typical software company work in the new age of information?

To explore this idea, I will take the vantage point of looking at society as a whole and follow the scent of efficient resource allocation. Resources are required to produce software in the first place: we need ideas, programmers, testers, and marketers. After the software has been conceived of, written, and tested — that is, at the point when the consumer uses the software — all the resources required for producing the software have already been expended. It is nonsense to charge people at this point; society would benefit more if you simply gave your software away, because the cost of doing so is (almost) zero. We need a way to pay for ideas, programmers, testers, and marketers. The resources required for providing a product are proportional to the difficulty of its creation, not the scale of its distribution.

I picture a combination of Kickstarter and an economic extension of UserVoice due to John De Goes. Allow people to pledge money for the improvement (or creation) of a product or feature, to be paid when that feature is usable. The features that are most valuable to people will have the most money pledged to them, providing an incentive for the company to develop those features. We are now allocating resources where they need to be: to improving the product, rather than to paying for the vacation of somebody who created valuable software in the past, somebody whose mind and expertise would benefit society more if they were developing or improving their product. This is just one idea; I’m certain there are other models that will accurately reflect information economics as well. In particular, this model compensates those who implement an idea, but not those who came up with the idea in the first place, which leaves room for improvement.
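To make the mechanism concrete, here is a toy sketch of such a pledge ledger. The class and its methods are entirely hypothetical; this only illustrates the flow of money, and is not a reference to any real service:

```python
from collections import defaultdict

class FeaturePledges:
    """A toy ledger: money pledged toward features, paid out when usable."""

    def __init__(self):
        self.pledges = defaultdict(list)  # feature name -> pledged amounts

    def pledge(self, feature, amount):
        # A user says: "this feature is worth this much to me."
        self.pledges[feature].append(amount)

    def most_valuable(self):
        # The feature the market is asking the developers to build next.
        return max(self.pledges, key=lambda f: sum(self.pledges[f]))

    def payout(self, feature):
        # Paid to the developers only once the feature is usable.
        return sum(self.pledges.pop(feature, []))
```

Resources flow toward improvement: whoever ships the most-pledged feature collects its pledges, and nothing is charged per copy.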

Observe how this new type of model has shifted the economic emphasis to one derivative higher. People are compensated for continuously improving their product and creating new products, rather than having one great idea and banking on it. This may frighten innovators: their great innovations now stand to make them less money; we now need to constantly work to create value instead of sitting atop a great idea allocating resources. But look at it from society’s perspective: we are coming up on an age of immense growth, in which every worker in the economy is seeking not just to continue the status quo, but to improve it! Everyone is an innovator or an enabler of an innovator. And this all comes from software being free to copy. When something is free to copy, everyone should have equal access to it. Any other way is short-changing society.

It’s time to stop clinging to software as if it is consumed when it is used. There is an economic boom waiting to happen, if we just let information resources flow the way they want to.

I’d like to share what I’ve been working on instead of homework for the past month: codecatalog.net. From the FAQ:

CodeCatalog is a lot like a wiki for source code. We aim to socially build a database of high-quality, well-documented, instantly-usable code in a variety of languages.

It is the fruit of my thoughts’ recent focus on reusability. I noticed that, because of purity, Haskell was very close to being able to have a “database of all reusable code ever”, and I kept focusing on which missing language features were preventing that. After sharing the idea with Max (cofounder of Hubris Arts), we determined that the main missing feature was, er, a database of all reusable code ever. CodeCatalog is the first budding of an attempt to create one.

Check out the FAQ for a glimpse of our philosophy. Haskell sophisticates might be able to see beyond my simple wording into the underlying vision — that was my intent at least. I’ll write a post soon describing our deeper plans in more detail. We’ll be working on this all summer, and if we meet our goals, by the end of the summer it should be pretty freaking cool. :-)

Oh, the code that we have on the site so far is mostly Python and JavaScript, because that’s what we wrote the site in and we were eating our own dogfood while developing it. But Haskell is also supported.

Anyway, fellow Haskell community, I encourage you to check out the site. We would appreciate any feedback you have about it. There’s a UserVoice tab on the right if you’re into that kind of thing. Thanks. :-)

In this post I will share a personal discovery about object-oriented design. It is so simple and obvious that it may not be news to experienced OO programmers. However, it should be noted that before I became a Haskell nut, I was a dedicated OO programmer for 7 years. I occasionally employed the pattern in my designs without realizing its full generality. So it may yet be news to you vets.

I came across it when exploring the consequences of my recent post, Encapsulation Considered Harmful. I was trying to design a concept language around the idea, and realized that the language design I kept coming back to could be mostly encoded in existing languages.

Here’s the rule: never reference a class directly — always go through an interface. This includes creating new objects — go through a factory interface or take it as a parameter instead.

Obviously this rule cannot be followed 100% because you’ve got to give a concrete instantiation at some point. But push that as far “up” in your program as possible. Also, doing this 100% in contemporary languages would probably involve too much boilerplate. That’s okay, it can be done piecewise, and many of its advantages can still be reaped if it is done partially.

We have recently done this refactor on our game, and I have to say, it is gorgeous! I really like the clean, uniform way it separates concerns. More importantly, it is an abstraction instead of an encapsulation mechanism. That means that when code doesn’t depend on a particular detail, you can always substitute something else for that detail. Keep that property in mind, it will guide you in applying this pattern correctly.

Here is a simplified example from our game. We used to have a class FileSystem to access the filesystem. ProgressResult<T> was a class that monitored the progress of loading a file from disk and yielded a T when it was finished. Unit is the trivial type: struct Unit {}.
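The code for this example did not survive the formatting here. Reconstructed, it looked something like this; only the type names come from the text, and the members are my guesses:

```csharp
public struct Unit { }  // the trivial type

// Formerly the concrete classes ProgressResult<T> and FileSystem,
// now hidden behind interfaces.
public interface IProgressResult<T>
{
    bool IsDone { get; }
    T Result { get; }  // valid once IsDone is true
}

public interface IFileSystem
{
    IProgressResult<byte[]> ReadFile(string path);
    IProgressResult<Unit> WriteFile(string path, byte[] contents);
}

public class World
{
    private readonly IFileSystem fileSystem;

    // World never says "new FileSystem()"; it uses
    // whichever filesystem it is handed.
    public World(IFileSystem fileSystem)
    {
        this.fileSystem = fileSystem;
    }
}
```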

See how we avoided creating a filesystem in World, and instead took it as a parameter? World is now more flexible than it used to be: we can (and did!) instantiate it with a disk filesystem and an Amazon S3 filesystem. They have different sorts of ProgressResults too, thus that is also an interface.

Now, if you go to apply this pattern in your project, you might find that it breaks down for more complex designs. It’ll do that if you aren’t precise about your phrasing. For example, in another part of our project we had the following combinator library for our UI:
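The snippet is missing here; the flawed design had roughly this shape (the member names are illustrative, not the original API):

```csharp
public interface IUI
{
    // an abstract UI element
}

public interface IUIModule
{
    IUI Button(string label);
    IUI Label(string text);
    IUI VerticalGroup(params IUI[] children);
}

public class UnityUIModule : IUIModule { /* ... */ }
public class SilverlightUIModule : IUIModule { /* ... */ }
```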

But this is not the way. Imagine if you were to construct some IUIs with a SilverlightUIModule and then try to combine them (with, say, VerticalGroup) with a UnityUIModule. What is the unity module supposed to do with silverlight UIs? This design flaw will rear its head much earlier than this predicament, however: you will find in UnityUIModule that you need to downcast the IUIs you get into a specific type. Downcasting is an indication that you are doing it wrong. The correct way is to make IUIModule parametric in the UI type:
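Reconstructed, the parametric version looks something like this (member names are illustrative):

```csharp
public interface IUIModule<TUI>
{
    TUI Button(string label);
    TUI Label(string text);
    TUI VerticalGroup(params TUI[] children);
}
```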

Now UnityUIModule implements IUIModule<UnityUI> and SilverlightUIModule implements IUIModule<SilverlightUI>, so it is a compile-time type error to pass a silverlight UI to a unity combinator. In addition, you do not need to downcast: the UnityUIModule now knows statically that it will only be passed UnityUIs.

One thing remains… how are you supposed to use that damn module? Surely you don’t make every class that uses an IUIModule parametric in the TUI parameter like this:
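That is, something like this (the class here is a hypothetical example, not from the original):

```csharp
public class PauseMenu<TUI>
{
    private readonly IUIModule<TUI> ui;

    public PauseMenu(IUIModule<TUI> ui)
    {
        this.ui = ui;
    }

    public TUI Render()
    {
        // Every use of the module drags the TUI parameter along.
        return ui.VerticalGroup(ui.Button("Resume"), ui.Button("Quit"));
    }
}
```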

Type parameters would accumulate in classes, and every time you used that class you would have to write all those fucking type parameters. That can’t be the way to go!

Well… actually… remember the rule? You never reference a class concretely, you only go through interfaces. And the interfaces don’t accumulate type parameters from usages in the implementation like classes do, they only accumulate type parameters from the cleaner and scarcer usage in interfaces. So the worry of unsightly verbose code is unfounded. It’s fine, let the type parameters accumulate, they are a way of stating your assumptions. They tell you exactly how a class may be reused. So to defeat this notational burden in this example, we must continue the pattern: create an IWorld!

As you do this, all your assumptions will bubble up to the top of your program. Each more complex thing is parametric in all the simpler things it is made of. When you go to write code that can actually be executed, you will find an ocean of flexibility: you basically have created a rich combinator library for your domain, allowing it to target multiple underlying frameworks easily, getting multiple different (reasonable, even desirable!) behaviors just by passing a different parameter to some object you are building at the top level. Mmmm, parametricity.

(For dynamically typed languages, duck typing will mostly take care of this pattern for you. But the design motivation remains: make your code parametric, taking parameters that say where it should get the objects it creates and the functions it calls rather than referencing them directly!)

When I say encapsulation, I mean having a region of your program that knows or has access to some information about the implementation of something, and hiding that information from the rest of the program. If you have another definition of encapsulation, I’m not arguing against that.

Why do software engineers encapsulate? I claim it is for two major reasons: (1) to reserve the right to change the encapsulated code later without breaking anything, and (2) to minimize the propagation of assumptions through the program. To illustrate this, consider the following C code which uses a (supposedly) abstract List type:
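The snippet did not survive the formatting; it was along these lines (the struct layout is my reconstruction of “List is represented as an array”):

```c
/* The "supposedly abstract" List -- in reality a plain array,
 * and the client code below knows it. */
typedef struct {
    int length;
    int elems[100];
} List;

/* Breaks encapsulation: sum reaches directly into the array
 * representation. Swap List for a linked list and this still
 * compiles, but returns nonsense. */
int sum(List *xs)
{
    int acc = 0;
    for (int i = 0; i < xs->length; i++)
        acc += xs->elems[i];
    return acc;
}
```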

This code breaks encapsulation. It assumes that List is represented as an array. So if we wanted to go back and change List to a linked list because it is more efficient in the majority of cases for which we are using it, sum would break. (In this example, it would break very badly — i.e. it might not even segfault but instead return some nonsense number)

If you’re about to object that we should have exposed an iterator interface instead of an indexed get interface because we knew we wanted to change it to a linked list, I preemptively respond that we didn’t know that at the time. As in most code, when you are encapsulating you don’t usually know what you are going to come back to change — if you did, why didn’t you just write it like that in the first place?

The two reasons above are noble. They promote flexibility and simplicity, so that the minor decisions we make do not ripple their way through our architecture, making it impossible to change or understand. It takes the pressure off.

Although I agree with the goals, I believe this is the wrong solution. I think we should turn the barriers inside-out, and use only encapsulation’s dual: abstraction. When I say abstraction, I mean code that is defined polymorphically with respect to its assumptions. The above C code is abstract with respect to the list argument, since we can pass it any list we like. It is not, however, abstract with respect to the List type and its operations — it fixes a single choice.

Encapsulation and abstraction are duals in the following sense: let’s say your program is defined in two parts A and B. A knows some information C, and B does not. There are two ways to look at this. A encapsulates C, or B is abstract with respect to C. That is, if you change C, you have only affected the specification of A, and B reacts polymorphically. So just divide your program at an information boundary — one side is encapsulating, the other side is abstracting.

While mathematically they are two sides of the same coin, there is a very real software engineering difference between them. Let’s say you have three functions which use your abstract interface: sort, reverse, and frobnicate (some complicated business logic that can’t be written in 5 lines). With encapsulation, you might have this sort of usage graph:

And then, in one fell swoop, you can change it to:

That’s power, that’s flexibility. But… it’s lacking something. Now we have another part of our program which uses arrays, and I sure wish we could use that sort code we’ve already written. But we need that sort code for lists. Hmm, well we could copy and paste. But that sucks, maybe we could go back and make sort polymorphic in the list type. Yeah, that’s the right way to do it.

But, then why didn’t we just do that in the first place? Look what happens when we do:

Look at all those combinations. Those are the possibilities for code reuse without changing anything. Imagine if every encapsulation you made was an abstraction instead. Your usage graph would be basically black with arrows. But these aren’t bad dependency arrows, these are good reuse arrows. These are your possibilities for correctly using code that has already been written.

So I suggest: instead of advertising your guarantees and hiding details, advertise your assumptions and abstract over details. This has a pretty profound effect on the way your code is structured — like I said before, it turns all the encapsulation barriers inside-out.

My inspiration for this idea came from studying the form of mathematical theorems. They advertise their assumptions prolifically: “given a ring R with (+) and (*) as operations, …”, as opposed to “a real number has (+) and (*) and some implementation details”. This allows theorems to be maximally reusable, since although the mathematician was thinking about real numbers when he proved the theorem, he realized the same logic could work on any ring — including the integers, modular integers, matrices, and new things that were discovered later.

Object oriented programming seems to get close. One can consider a class (or an interface) as the specification of an assumption, and an instance the implementation of an assumption. Then you just take parameters that correspond to your assumptions, which are nicely bundled into classes so you don’t have to say “I use (+) and (*) and (-) and absolute value and …”. But this essential idea is muddied up by all sorts of crap, both in the language design and in popular usage.

I started to list the features we need to add/remove from OO languages to make them support this style, but it just got long and nitpicky. So I’ll just say this: I think we should be using objects differently. Objects as implementations of assumptions, not as “agents”. A natural number is not an object; The Natural Numbers is an object. As generics gurus are noticing, this:

interface Number {
    Number operator + (Number y);
}

has got to go. The assumptions are not specified properly — you are requiring that + take any kind of number whatsoever, when we probably mean the same kind of number as on the left side. What does it mean to add 2 (mod 4) to 7 (mod 13)? Instead we are learning to write:

interface Numeric<N> {
    N operator + (N x, N y);
}

That’s the right way. Notice that, although it is specified with different notation, N is just another abstract member of Numeric. We specify the set and its operations together, but the set is distinct from the interface. The interface is the name for the assumption of such a set. In some cases the interface and the set can be bundled into one, and it is these cases that provide most of the study material for OO design. But inside the soul of pop OO lies a form of modeling that is so much more powerful.

To reiterate, this is not breaking the guarantees of encapsulation — anybody who uses Numeric gets encapsulation for free. They are not allowed to see the details of N, just as if you had encapsulated it. It’s just that now you can swap out different N’s in the same code, where you couldn’t before.

I am pleased to see software engineering practice slowly converging on this already. I just thought I would point out the uniform rule behind it. Next time you write an encapsulation, ask yourself, what would it look like to make the users of this code abstract instead? Is it a small enough notational burden to do it that way? If not, what kind of language would allow it to be?

Did that give you something to ponder?

A few years ago, Martín Escardó wrote an article about a seemingly-impossible program that can exhaustively search the uncountably infinite "Cantor space" (infinite streams of bits). He then showed that spaces that can be thus searched form a monad (which I threw onto hackage), and wrote a paper about the mathematical foundations which is seemingly impossible to understand.

Anyway, I thought I would give a different perspective on what is going on here. This is a more operational account, however some of the underlying topology might peek out for a few seconds. I’m no topologist, so it will be brief and possibly incorrect.
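The code blocks from the original post are missing here. The setup, reconstructed, is the lazy natural numbers and the signature of the function we are after:

```haskell
-- Lazy natural numbers, including the infinite one.
data Nat = Zero | Succ Nat

infinity :: Nat
infinity = Succ infinity

-- The function we want:
search :: (Nat -> Bool) -> Maybe Nat
```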

Which returns a Nat satisfying the criterion if there is one, otherwise Nothing. We assume that the given criterion is total, that is, it always returns, even if it is given infinity. We’re not trying to solve the halting problem. :-)
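The first, naive attempt (again a reconstruction) goes like this:

```haskell
search f
    | f Zero    = Just Zero
    | otherwise = Succ <$> search (f . Succ)
```

On a criterion like \x -> True this returns Just Zero immediately. But give it a criterion that nothing satisfies, say \x -> False, and it recurses forever.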

We want it to return Nothing in this last case. It’s no surprise that it didn’t — there is no condition under which search is capable of returning Nothing; that definition would still typecheck if Maybe were defined data Maybe a = Just a.

It is not at all clear that it is even possible to get what we want. But one of Escardó’s insights showed that it is: make a variant of search that is allowed to lie.
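Here is the reconstruction. The Eq and Num instances are the obvious structural ones; I include them because the example below uses * and ==:

```haskell
-- Allowed to lie: if some Nat satisfies f, lyingSearch f returns
-- one; if none does, it may return anything at all.
lyingSearch :: (Nat -> Bool) -> Nat
lyingSearch f
    | f Zero    = Zero
    | otherwise = Succ (lyingSearch (f . Succ))

-- The honest search just checks the candidate afterward.
search' :: (Nat -> Bool) -> Maybe Nat
search' f
    | f possibleMatch = Just possibleMatch
    | otherwise       = Nothing
  where
    possibleMatch = lyingSearch f

instance Eq Nat where
    Zero   == Zero   = True
    Succ m == Succ n = m == n
    _      == _      = False

instance Num Nat where
    Zero   + n = n
    Succ m + n = Succ (m + n)
    Zero   * _ = Zero
    Succ m * n = n + m * n
    fromInteger 0 = Zero
    fromInteger n = Succ (fromInteger (n - 1))
```

Now consider lyingSearch (\x -> x * x == 15).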

That condition is never going to pass, so lyingSearch is going to keep taking the Succ branch forever, thus returning infinity. Inspection of the definition of * reveals that infinity * infinity = infinity, and infinity differs from 15 once you peel off 15 Succs, thus f infinity = False.

With this example in mind, the correctness of search' is fairly apparent. Exercise for the readers who are smarter than me: prove it formally.

Since a proper Maybe-returning search is trivial to construct given one of these lying functions, the question becomes: for which data types can we implement a lying search function? It is a challenging but approachable exercise to show that it can be done for every recursive polynomial type, and I recommend it to anyone interested in the subject.

Hint: begin with a Search data type:

newtype Search a = S ((a -> Bool) -> a)

Implement its Functor instance, and then implement the following combinators:
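The list of combinators was lost in formatting; these signatures are my best guess at what was intended, one combinator per ingredient of a recursive polynomial type:

```haskell
-- one-point spaces
searchUnit :: Search ()

-- products
searchPair :: Search a -> Search b -> Search (a, b)

-- sums
searchEither :: Search a -> Search b -> Search (Either a b)
```

Recursion then comes from lazily tying the knot, e.g. defining a Search Nat in terms of itself.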

We can’t expect to construct a reasonable Search Integer. We could encode in the bits of an Integer the execution trace of a Turing machine, as in the proof of the undecidability of the Post correspondence problem. We could write a total function validTrace :: Integer -> Bool that returns True if and only if the given integer represents a valid trace that ends in a halting state. And we could also write a function initialState :: Integer -> MachineState that extracts the first state of the machine. Then the function \machine -> searchInteger (\x -> initialState x == machine && validTrace x) would solve the halting problem.

The reason this argument doesn’t work for Nat is because the validTrace function would loop on infinity, thus violating the precondition that our predicates must be total.

I hear the following question begging to be explored next: are there any searchable types that are not recursive polynomials?

Photoshop, Premiere, Maya, AutoCAD, ProTools, Finale, Reason, InDesign. These are the state-of-the art tools for creators. They are all rich, powerful, versatile programs that strive to work at the artist’s level of thought.

And the people who write this amazing software get to use… Visual Studio? Eclipse? Emacs? At least I’ve heard IntelliJ IDEA is great; I’ve never used it. But when contrasting these with the tools above, something seems missing: a library of features that bring programs up to the coder’s level of abstraction. Languages are attempting this now, but languages are written in text, and IDEs are basically text editors with a few extra features. Photographers have clone (a tool that lets them erase sections of an image, replacing them with something that convincingly resembles the background in that area); we have refactor-rename, which doesn’t even respect alpha conversion (avoiding name conflicts with your new name).

Why do we even have to worry about alpha conversion? That’s like a composer worrying about MIDI patch numbers! We still denote identity with a string? Premiere wouldn’t have that — it would link directly to the clip in question. Who cares if you have named something else the same thing? My friends have no trouble distinguishing me from another Luke who walked in the room.

And files? We have libraries, namespaces, modules, classes, and functions to organize our code. Files are almost entirely orthogonal and not-almost entirely meaningless. Kudos to Code Bubbles for noticing and removing that tumor. But that’s just a guy at a university, so naturally we won’t get to use it for real for quite some time.

What’s up with import statements? That’s just some junk that comes with representing programs as text. Eclipse has surpassed those… sortof… but we’re not all Java programmers. Why can’t I just type the class name and then pick the one I want from a list?

All the state-of-the-art creative programs have multiple views: more than one way to see your creation to get a better handle on it. Maya has isometric, wireframe, flat-shaded, full light…. We have the class hierarchy view. Oh boy. Why can’t I look at
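(The snippet that belonged here was lost; any code with a call in it will do, say:)

```csharp
foreach (var p in candidates)
{
    if (predict(p))
    {
        results.Add(p);
    }
}
```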

click a little [+] next to predict(p) and see it right there inline, with its argument replaced by p? Oh, that’s how that works, cool, [-]. Instead we go to its definition, where we see it next to the functions we happened to define near it, about which we care nothing. Then we substitute the arguments in our heads, fathom loop invariants and fail to see how they are violated, and spend the next 5 minutes wondering if p is mutated in this call chain.

How come I can still have syntax errors? How come it is ever possible to have a syntax error in a program? Shouldn’t the IDE at least be helping me to have a valid program all the time? Finale doesn’t let you write a measure with the wrong number of beats and then complain when you push play. It just fixes it for you — “oh look, you need a rest there.”

“My indentation’s wrong. Oops, rename missed that one. Oh right, need to import Data.List. Ugh, namespace pollution. Fine, looks like I need to copy and paste again because abstracting will be a pain. I hate how you can only have one class per file and how it discourages small classes. Shit, that mFoo/foo accessor pattern again… weren’t get/set supposed to do away with the need for accessors? Fuck, looks like this virtual method needs another parameter — give me fifteen minutes.”

Do we not hear ourselves?! Software developers, the masters who create the masters’ tools, are touching up Avatar with mspaint. Shouldn’t we be effortlessly sculpting a masterpiece with a beautiful dynamic interface while robots bring us platters of Mountain Dew? We’re wasting our time on spelling errors while the 3D artist in the back is putting finishing touches on his city.

W. T. F.

This post is a follow-up to my recent post programs are made of ideas. I received a lot of interesting comments and criticism on this philosophical exposition. But there is one in particular that caused me to deeply consider whether my argument made sense. It starts with this excerpt from the post:

Why constrain ourselves by forcing a fundamental idea? Must there really be a One True Way which forms a basis for everything else to be defined?

Surely the set of all the ideas we are capable of having forms a fundamental basis in which everything else can be defined. Or more concretely, it is presumptuous to think that our brains are so great as to not be constrained by some underlying system. We may not see the definitions of our ideas, but if our ideas can be said to exist at all, they have to have some expression in terms of something, right?

It is another thing entirely to wish to communicate that basis to a computer. Logicians will be quick to intuit that we would not be able to fully understand any system which forms the basis for our own thoughts (however, this intuition will be difficult to formalize unless our thoughts obey first order logic). And I’m pretty confident in saying that it isn’t first order logic, it isn’t lambda calculus, and it (still) isn’t actions on a digital state machine1.

A perhaps clearer way of saying what I was trying to say is this: let the system in which we communicate our ideas be our slave, not our master. Think first, then decide which words will lead to the clearest explanation. We as software engineers, mathematicians, and even speakers of natural language have a tendency to give in to Sapir and Whorf. While we may not be capable of freeing our thoughts from our language2, we may be caving too easily. I felt a breath of fresh air when I noticed and cut the ropes tying me to lambda calculus and functional thinking.

It is a beautiful thing that most of these systems are Turing complete. I have never read the Church-Turing thesis as “C is Turing complete, so you might as well use C for all your programming tasks”. I read it almost the opposite way: “choose whatever language you want, because I can always write an interpreter for your language in my language”. Isn’t it great that we are able to choose between the uneasy handwaving of Perl and the frustrating precision of Agda?

I have always seen a programming language as a single core system for which ideas must be tailored. In other words, as a master. A current project of mine is to devise software development software that embraces the plurality of these systems: a way to tie together all these different methods of expression so that they can benefit each other. Let the language camps remain polarized, but let the computer do the arguing.

1 You might ask, “But what if we consider the brain to be a big digital state machine?” That is beside the point. By the time our consciousness gets hold of our thoughts, the mechanism used to implement them is long abstracted away. (Or perhaps not — maybe these thoughts are the result of a highly abstracted digital state machine, and with a different sort of brain we would have vastly different sorts of thoughts. That even seems likely. Hmm, food for… digital state machine operations.)

2 Nor may it be desirable — Bertrand Russell admired Giuseppe Peano for the clarity with which he could express thoughts using the then-novel idea of formal logic.


quietfanatic – The fundamental unit of a computer program is not a function; it’s an action. CPUs don’t do lambda calculus; they follow procedures.

I saw this on twitter a couple days ago and responded with a jerk of my knee, only to commence a discussion1 in which I failed to accurately express my objection. So here is a full treatment.

First, a disclaimer: I am not attacking quietfanatic in any way. His quote merely provides a convenient means to discuss a widespread pattern of thinking.

Before we start talking about fundamental units of X, it’s probably a good idea to get clear about what we mean by X (where X = computer program). Here are some of the possible definitions I can think of:

1. A computer program is a pattern of memory intended to be read by a CPU — machine code.

2. A computer program is an abstract specification for the behavior of the CPU when fed (to-be-determined) machine code.

3. A computer program is the idea of its behavior. To illuminate what I mean by this, think of Firefox. What idea did you have just then? Was it of an abstract specification? A series of bits? I doubt it. You probably pictured a window browsing the internet or something like that. I’m talking about that kind of idea.

4. A computer program is a state of a physical system. If it is not in a block of hard disk or RAM somewhere, it cannot be considered to exist.

The difference between these is the level of abstraction at which they are defined. But importantly, they are all defined at some level of abstraction — which means that they are all just ideas. Perhaps (4) is not, depending on how concretely you see the world.

Here are some of the ways to think of how a program is run:

1. The specification of the program is interpreted by some rules, resulting in some changes to physical things.

2. The CPU deconstructs memory into a series of discrete instructions, which it then “performs” one after another.

3. The computer is a system in which the states of some physical devices are coupled to a system of interacting voltages, some of which encode a program.

Again, they differ at the level of abstraction at which they are defined.

The way I read quietfanatic’s quote is talking about level (2) of the program and level (2) of the CPU. To me it says “CPUs perform instructions, so our abstract specifications of computer programs should be (ultimately) in terms of instructions”. With that phrasing there seems to be a wide logical gap between the two clauses. Why does the language we use for abstract specification have to correspond to one particular abstract conception of how a CPU works?

By permuting my list of interpretations, I can construct many plausible quotes that follow the same logic. “The fundamental unit of a computer program is a rule, since the program is ultimately interpreted as rules” or “the fundamental unit of a computer program is a voltage, since voltages are ultimately what drive the devices.”

But my objection is larger than this fallacy. To me, and I presume to most software developers, programs are made of ideas. When thinking about programming languages, we are looking for a way to express our ideas in a way that the computer can approximate them. Our ideas form the central theme, not the mechanisms that approximate them.

What is the fundamental unit of a number? How about of a web page? A shape? A set? A cursor?

All these things are ideas, each of which has many different lights under which it can be viewed, each of which may give a different answer about a “fundamental unit”. But ultimately ideas are not defined in terms of anything — we use definitions to grasp how our ideas connect to each other. Some might say that the fundamental unit of mathematics is the set, but when I think about a triangle, I’m sure not thinking about sets.

I am unable to see how computer programs are any different. Why constrain ourselves by forcing a fundamental idea? Must there really be a one true way which forms a basis for everything else to be defined? In the historical development of mathematics, our ideas have had a shifting fluid foundation, each generation bringing a new way to think about the same things. Currently the evolution of programming languages is serving as our shifting foundation. Let’s use our full vocabulary of ideas to express programs, and try not to limit our concepts’ true breadth and beauty by telescoping in on what they mean in terms of a fixed basis.

This article is reflecting a change in thought that I am going through right now. After a lot of time being committed to the lambda calculus, finding ways to construct the naturals out of some pretty-looking set of primitives, devising ways to abolish imperative thinking and embrace functional thinking, I realized that I was lost in a world of details. Everything I did and thought was coupled to the fundamental system I was creating.

It is dawning on me that the fundamental system is merely a means to express my ideas, and the true beauty is in the ideas themselves. My denotationalism fades as I perceive that most of my mental structures are intrinsic. They are not defined in terms of anything and they have no denotation other than themselves. There are ways to model those ideas, but I am suddenly being careful not to confuse the model with the idea itself.

1 To the extent that a series of tweets can be considered a discussion.


In 2001, Ray Kurzweil published his inspiring, controversial essay The Law of Accelerating Returns. I acknowledge that the essay sometimes seems far-fetched, and perhaps some details are overly (or insufficiently) idealistic, but in essence I find it very convincing. I will take it as an assumption for this article, and I also assume my readers are familiar with the gist of the essay.

I read Kurzweil’s essay about six weeks ago, and it has set off a chain reaction in my consciousness, causing me to reconsider my perceptions of technology. At first, it seemed to murder my denotational idealism and destroy my hopes for Dana (my purely functional operating system). It filled me with an excited fear, as if I were being carried down the rapids of a great river, trying not to hit the next rock.

Now the ideas have settled and my idealism is born again with a new perspective. The spirit of Dana still has a place, and I will describe what I think that is.

We are at a unique inflection point in the process of technological acceleration. In the last 50-100 years, orders of magnitude of progress began to occur in a single human lifetime. Technology is folding upon itself, each new invention making use of more and more recent knowledge and tools.

Dana was originally envisioned to facilitate this growth, born of the observation that the vast majority of software out there is not compositional, so we are re-inventing many wheels. Dana’s goal was to provide a framework in which everything must be compositional. But my early execution of the idea was misguided. It may have had a noble goal, but I was pretending that I was at the beginning of an acceleration of software, not the middle. Dana’s implementation did not make use of modern technology, and thus its development was an order of magnitude or two slower than the rate of modern software’s development. Further, its despotic purism would never catch on: it was trying to change the direction of an avalanche by throwing a snowball.

A truly modern compositional environment must embrace the flow of knowledge. Those who try to reverse the paradigm will end up reinventing all the technology that the mainstream is already using, and by the time they are finished the mainstream will be lightyears ahead. The state of the art may be ugly, but it exists, which is more than the purists can say. One person — one team — cannot develop software anymore; to do something new, we must employ the labor of the entire world.

Currently, the most constraining property of software is that for it to be reusable, it must be designed for reuse. It is hard to reuse libraries in a way for which they were not intended. And in fact most of the coding that goes on is not in libraries at all — people aren’t intending their code to be reused by anyone. They just want to get their product out the door and get some cash flowing in. Below is a reified mental picture of this.

The bulb on the bottom is the software designed for reuse. It grows slowly and uniformly, mostly occurring in the open-source world, created by people who like to write good software. The rays at the top are individual, result-oriented projects. This code is not typically designed to be reused. These projects grow quickly in a single direction, but if someone else wants to go in a similar direction, they have to start at the bottom. This is an economic phenomenon, and the economic goals of developers are not likely to change.

To accelerate progress, we must find a way to re-use those chutes, the goal being to transform the picture into something more like this:

In this picture, each results-oriented chute carries with it a net of all the support code that was written for it, upon which anyone can build. Of course only with the consent of the creator (people will find a way to protect their information), but if it is useful, then people will buy it.

I have thus become a supporter of Donald Knuth’s mantra:

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit.

Forking a project in order to use one of its features in another project is no simple task. Depending on the complexity of the feature, you might gain some time by doing things this way, but many competent software engineers prefer to simply rewrite from scratch. One reason is in order to understand the code, but another is simply economics. Saying that I rewrite features because it is fun is missing half of the equation. I rewrite features because it is more fun than spending a week fighting integration issues (very much not fun). I claim that to support this new picture, our primary goal should be to vastly reduce integration issues.

Here is where my purist ideals kick back in, having found a new purpose. Pure functions have no integration issues, at least semantically speaking (performance is always a bit more subtle). A pure function makes explicit all of its inputs and outputs, so to use it, you can just rip it out of its context, provide the inputs, and transform the outputs. While depending on the situation that may be a lot of work, compare it to finding all the places a singleton was modified in order to recreate its state at some point in the execution of the program. What if the singleton returned a reference to an object that was stored away and modified by an obscure corner of the program, upon which your feature later depended? This sounds like a terribly inflexible design, but most software is terribly designed, and we still want to reuse it.
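A small invented example of what “rip it out of its context” means in practice (all names here are placeholders, not from any real codebase):

```haskell
-- A pure function declares every input it needs, so reusing it elsewhere
-- means supplying a Config and a price: nothing else to reconstruct.
data Config = Config { taxRate :: Double }

priceWithTax :: Config -> Double -> Double
priceWithTax cfg p = p * (1 + taxRate cfg)

-- The impure analogue would read taxRate out of a mutable singleton;
-- reusing *that* would first require recreating the singleton's state
-- at the right moment in the program's execution.
```

The signature is the whole integration contract; there is no hidden state left behind in the donor program.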

However, just saying “use pure functions, voila!” is no solution. The trend of the world is gradually shifting toward more functional idioms, but asking everyday programmers (who we care about, because they are doing our work for us) to switch to purity is asking a lot. Retraining brains is too hard.

So that is my open research question. How do we introduce the immense advantage of purity from this perspective into programming pop culture? Perhaps it is a new virtual machine that does dynamic dataflow analysis. Perhaps it is a static analysis tool. Perhaps it is a new language which can interface with many popular existing languages. This would be slow to take off because it requires migration (thus risking never), but because of interop and the good reuse properties (with tool support) it might be faster to achieve critical mass. Clojure and Scala are doing kind of okay.

Once you have a way to track assumptions, the rest follows easily. It would be possible to assemble a database of every function ever written, with the ability to just drag and drop it into your program (ahem, Haskellers, why don’t we have this?). The natural version control system, another step in the direction of the DVCSes, tracks changes between individual objects, and how those changes induce changes in the objects that reference them (this is the part I have thought through the most, but I will forgo describing it, for it is a small detail and this post is already very long).

A singularity-aware culture should keep its eye on a way to bring the next inevitable order of magnitude of progress. I believe the elimination of integration issues is one such way, and Haskell, for the most part, has already eliminated them. The first step is to bring this property to the popular languages in any way we can (this includes making Haskell a popular language… LOL).


As my regular readers know, I am a zealous Haskell advocate. After many years of programming in many different languages, Haskell has secured itself as my #1 language for almost every problem: using any other is like painting with a mop — like playing tetris without being able to rotate your pieces. Nonetheless, Haskell has some unsightly areas, especially when considering programming in the large. The more I use Haskell to tackle big problems, the more obnoxious they become. This post is my account of Haskell’s Big Three annoyances. Contrary to my usual shtick about cutting out features because fewer features means more properties, these are missing features. The order in which I list them is meaningless.

1. No Module Abstraction

Some of the code I am writing for Evo (purely functional RTS game), if stated in full generality, would look like this:
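(A hypothetical sketch of the shape such fully general code takes; every class and type name below is an invented placeholder, not the real Evo code:)

```haskell
-- Hypothetical placeholders standing in for the real constraints:
class Draw image
class Input input
class Schedule sched

-- Five type parameters, the first three of which never vary in Evo:
data Widget image input sched state action =
    Widget (state -> image) (input -> state -> action)

-- Every signature in the module repeats the same three constraints:
update :: (Draw image, Input input, Schedule sched)
       => Widget image input sched state action
       -> input -> state -> action
update (Widget _ react) = react
```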

The first three parameters are constant throughout my entire program. I can’t in my right mind make a data type with 5 type parameters, and I certainly refuse to write those same three constraints on every function involving UI. So, sacrificing generality, I have fixed those three parameters to the particular choices I need for Evo. But suppose this module were to be reused — by fixing my choice of, e.g., image to Graphics.DrawingCombinators.Image, I have disallowed the reuse of this module by any program that uses another library for drawing. But I stand by my aesthetic choice, so that I don’t obscure my code’s inner beauty behind repulsive, indecipherable type signatures.

Haskell needs some form of abstraction over modules. ML functors, Agda modules (aka. awesome records), Coq sections — any way to abstract over more than one definition. This would allow module authors to make their code clean and reusable at the same time.

2. Module Naming Policy

Hackage is a mess. I cringe looking at vector-space, taking up 9 valuable top-level names (“Data.” doesn’t count, everybody uses that), preventing anyone else from naming a module or module suite Data.Cross or Data.Derivative (the latter I actually have a candidate for). data-object for JSON objects, as if nothing else could conceivably be called an object. transformers and mtl both have a module named Control.Monad.Trans, preventing both versions from being used from the same package (suppose this package depends on two other packages, each of which depending on a different monad library).

The quick fix, what every other language has done, is to institute a policy or culture of naming conventions that makes the probability of collision low. I feel like Hackage is nearing its limit — the ad-hoc Data.Blah naming conventions won’t last through another order of magnitude. If we encouraged people to name packages more conservatively, we may last through one or two more.

But a naming convention is just delaying the inevitable, giving us a false sense of security. What happens when a package is forked, two packages come to depend on two different branches of this package which forgot to rename itself, and a third package wants to depend on both of those? We need something innovative, and in Haskell’s spirit! Let’s allow the same flexibility in package names that we currently allow for imported symbol names — if there is a name collision, just rename one of them for the scope of your package. No big deal. Let module authors name their modules whatever they think is clearest — go ahead, name your module Monad, we don’t care. Right now, a name like that would be a death sentence for that module due to impoliteness.
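For comparison, Haskell already permits exactly this kind of local renaming one level down, for modules; the proposal simply lifts it to package names. The existing, current syntax:

```haskell
-- Two modules whose short names would clash; each is renamed
-- for the scope of this module only, and no one else is affected.
import qualified Data.Map as M
import qualified Data.IntMap as IM

sizes :: (Int, Int)
sizes = (M.size (M.fromList [(1 :: Int, "a")]), IM.size IM.empty)
```

The proposal is the same dodge applied to whole packages: a collision becomes a one-line rename in the downstream package instead of a global naming-policy problem.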

3. Unrefactorable Typeclasses

The Haskell prelude is a very nice place, in general (what, you haven’t read it? Go, it’s a nice read!). However there is one cesspool of which every experienced Haskeller is ashamed: the Num class. Let’s look:

class (Eq a, Show a) => Num a where
    (+), (-), (*) :: a -> a -> a
    negate :: a -> a
    abs, signum :: a -> a
    fromInteger :: Integer -> a

(+) and (-) almost surely belong, as does fromInteger (supposing you don’t allow more flexibility with numeric literals a la IsString). (*) might be considered reasonable, though we have just disallowed vectors as instances of Num. abs and signum… well I guess as long as you’re putting in (*) those make sense. But the superclasses Eq and Show, especially Show, what? What about being a number involves being convertible to a string? Eq and Show both disallow computable reals from being a bona fide instance of Num, they have to cheat. Same goes for notational extensions like f Int where f is any applicative functor.

The Num class is pretty bad, but I excuse the Haskell designers for making that mistake. Library design is hard, especially when you have never used the language you are designing the library for. The weakness in Haskell is much deeper: we are stuck with this cruft because any attempt to change it would break all sorts of code everywhere. Num isn’t the only class affected, it is just the most public example. Any module which defines a typeclass and allows users to instantiate it suffers from this problem. As soon as the typeclass is defined, it may no longer change. You are stuck with that decision forever, or else you break backwards compatibility.

Ultimately what we would like to do is split Num up into algebraic structures like Group (with (+), (-)), Ring (with (*)), etc. There is a simple, conservative proposal that could solve this problem, which essentially would allow us to say:

class alias Num a = (Eq a, Show a, Field a, IsInteger a)

Which would allow all four of those typeclasses to be instantiated in one go, thus treating the collection of four as one, and allowing us to introduce finer granularity in the Num typeclass without breaking everything. GHC has implemented all kinds of crazy extensions, and this seems tame in comparison. I wonder what is preventing it. (Maybe I should get into some GHC hacking?)
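Concretely, the refactoring might look like the following sketch. The Group/Ring split and the class names are illustrative only (not a finished design), and the class alias line is the proposed syntax, which GHC does not implement:

```haskell
-- Hide the Prelude's operators so the split classes can own them.
import Prelude hiding ((+), (-), (*), negate)
import qualified Prelude as P

class Group a where
    (+), (-) :: a -> a -> a
    negate   :: a -> a

class Group a => Ring a where
    (*) :: a -> a -> a

instance Group Integer where
    (+)    = (P.+)
    (-)    = (P.-)
    negate = P.negate

instance Ring Integer where
    (*) = (P.*)

-- Proposed syntax, not accepted by GHC today:
--
--   class alias Num a = (Eq a, Show a, Ring a, IsInteger a)
--
-- An `instance Num Integer` would then instantiate all four classes in
-- one go, so existing instances keep compiling, while new code can
-- demand only the structure it needs — e.g. Group for vectors, which
-- have no sensible (*).
```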

I have identified further typeclass modularity problems that could be solved by a more radical suggestion: outlaw orphan instances. But that fight is for another day.

Conclusion

I consider Haskell to be by a wide margin the prettiest practical language in existence. Haskell code in the small is often perfect to my eyes, “from the book” as Erdős would say. Each of these suggestions is about improving Haskell code in the large. They allow the incremental creation of a perfect module, a perfect package, a perfect typeclass. We are closer than we think to a language in which an entire program could be considered perfect.

I wrote this article to emphasize these features, open them to active discussion and critique, and entice their implementation. What do you think, are they worth the effort?