I want to fix programming

(The SB series, part 1 of 7)

Programming is broken. Completely broken. The very way we write code today is so broken that it makes you want to puke. I’ve been saying for years that I hate programming. I am and have been a full-time developer for over 20 years, I don’t regret it, and I still love what you can do with programming. Still, I hate programming.

The way code is written is just braindead. At each step of the process, you can crash the program, be it by filling up all available memory, accessing a stray pointer or reference, or entering an infinite loop. Programming feels like walking barefoot on a floor covered in broken glass. One tiny inch out of place, and bang, you nearly lose a toe.

Every step of the way, in every statement, line of code, function call or sequence-point, if you want to write good code, you need to think about all the different possible states of the whole program. This is invisible and impossible to define. Provably. Always. In all existing languages. That’s the reason 100% code-coverage unit-testing doesn’t guarantee bug-free code, and it never will. This is also the reason bad programmers don’t become good: they just can’t think about all those possible cases in a structured way.
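To make the coverage point concrete, here is a small, hypothetical Python example (the function and tests are mine, not from any real codebase): a test suite that executes every line, yet misses a bug.

```python
def day_name(n):
    """Return the name of weekday n, where n is 0 (Monday) to 6 (Sunday)."""
    names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    return names[n]

# These tests execute 100% of the lines above...
assert day_name(0) == "Mon"
assert day_name(6) == "Sun"

# ...yet a bug survives: a negative index silently wraps around
# instead of raising an error.
assert day_name(-1) == "Sun"   # wrong answer, no crash, no failing test
```

Full line coverage only proves that each line ran at least once, not that the code is correct in every reachable state.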

(By the way, this only gets about a thousand times worse with multithreaded code – so it’s not getting better, it’s getting worse.)

The problem is that the fundamental way that code is written is wrong. It’s just wrong. You write a bunch of instructions, step by step, which supposedly result in a given wanted state. But each step is separate from all others, it is only understood by the compiler/interpreter separately, and it’s almost easier to screw things up than to get them right.

I thought for a long time that functional programming could be the solution, and studied it in detail. Lisp. Haskell. The Lambda calculus. The functional approach brings indeed several improvements over regular imperative or object-oriented programming. But still it’s not the solution. It is still composed of many discrete simple steps that excruciatingly calculate the output.

What is the key thing that is wrong with code? The fact that you don’t express what you want. You express what steps have to be taken. Imagine telling your pal to grab you a beer from the fridge, step by step, with each step being robotically-rigid and with very little regard for context. It would be excruciating. And prone to catastrophic failure. Exactly like code is.

Libraries help with this phenomenon, but they are only shortcuts for higher-level, goal-specific needs. They don’t really fix it.

“The more I push code through static analysis, the more I’m amazed that computers boot at all.”

A vision

So, how should code look? Let’s take a simple example: sorting a list. Say you have an input list, let’s call it, ahem, input. Assume it has several elements. And now we want to compute a new list, called output, containing the same elements that the input has, but sorted in ascending order. How should we do it?

Traditional methods include bubble sort, quicksort, shell sort, insertion sort, etc. These are all algorithms that allow us to sort a list in a way that provably works. For example, bubble sort:

def bubble_sort(input, output):
    output = input  # start with the unsorted list
    swapped = True
    while swapped:
        swapped = False
        for i = 1 to length(output) - 1:
            if output[i] > output[i+1]:
                swap(output[i], output[i+1])
                swapped = True
Straightforward enough. But if you write the code to do the sort, you may still make a mistake! What if you forget to set the “swapped” flag to true when you exchange two elements? Or, more typically, what if you make an off-by-one mistake with the loop bounds?
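As a sketch of how easy that slip is, here is the bubble sort in runnable Python with exactly that mistake, the forgotten `swapped = True`; note that some inputs still come out sorted, which is what makes the bug so treacherous:

```python
def bubble_sort_buggy(items):
    out = list(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(out) - 1):
            if out[i] > out[i + 1]:
                out[i], out[i + 1] = out[i + 1], out[i]
                # BUG: forgot `swapped = True` here, so only one pass runs.
    return out

print(bubble_sort_buggy([3, 1, 2]))  # [1, 2, 3] -- happens to look correct
print(bubble_sort_buggy([3, 2, 1]))  # [2, 1, 3] -- silently wrong
```

With 100% line coverage on the first input alone, this bug would pass the test suite.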

This is why I say that programming is broken: sorting is a simple concept to grasp and describe, and still, the code to do this is convoluted, and full of traps where you can crash the program or generate wrong output. It’s nuts!

Someone could come up with a functional version of one of the algorithms above, but it would still be very similar: no side effects, but many steps to do all the tasks needed. Recursion may be more elegant than iteration (ahem), but it isn’t fundamentally better.

So, what should the code to sort a list really look like? After many years, I’ve come to the conclusion that it must be something like this (pardon the pseudo-code for a style of programming that doesn’t really exist yet):

def sort(input[]) = output[]:
    one_to_one_equal(input, output)
    foreach i, j in 1..len(output):
        where i < j:
            output[i] <= output[j]

Let me try to explain it: the first line of the sort definition says that there exists a one-to-one set of “relations” between elements in the input and output sequences. We will see later how the one_to_one_equal definition achieves this. But this ensures that the input and output list contain the same elements. It defines the space of possible answers.

Second, and this is the key part: the next line specifies that for each pair of elements in the output sequence, when the first index is lower than the second index in the pair, the first element’s value is less than or equal to the second’s. Which basically states that the output list is sorted. It defines which of the possible answers is the solution.

And it’s that simple. The sort function only specifies WHAT sorting is. Not HOW it is done. It states the properties that relate the output to the input, and leaves it as an exercise for the compiler to figure out how to get there.
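To see that such a specification is at least executable in principle, here is a deliberately naive Python “solver” (my illustration, not a proposed compiler strategy): it enumerates the answer space that one_to_one_equal defines, namely the permutations of the input, and keeps the candidate satisfying the ordering constraint.

```python
from itertools import permutations

def declarative_sort(values):
    # Search the answer space: every permutation contains exactly the
    # same elements as the input (the one_to_one_equal property).
    for candidate in permutations(values):
        # The solution condition: output[i] <= output[j] whenever i < j.
        if all(candidate[i] <= candidate[j]
               for i in range(len(candidate))
               for j in range(i + 1, len(candidate))):
            return list(candidate)

print(declarative_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]
```

This runs in O(n! · n²) time, of course; the open question the series addresses is whether a compiler can derive an efficient strategy from the same specification.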

Undoubtedly, there are two key questions here:

First, how can the compiler process this? Is it actually possible? In future articles, I will try to show that yes, it is indeed possible, and that the compiler could even be able to figure out the algorithms to get this result.

And the second key question is, how applicable is this to more complex problems? I will also try to show that it is indeed applicable to any fully-general programming or computation task, and that such a way of programming can be much simpler, more efficient, and much closer to being failproof!

I had been meaning to save this knowledge to set up a company in the future, but several circumstances have made me rethink this plan. I’d rather share this knowledge now, and see where things go. Watch out for the next articles in the series.

Afternote

We’ll get deeper into this later in the series, but here is a small bite. This one_to_one_equal function would be a “standard library function” in this idealized language, but here is more or less how it would look, so that you can see the underlying logic:

def one_to_one_equal(output[], input[]) = c:
    c = relations(input[i], output[j])
    foreach x = input[i]: len(c(x,)) = 1
    foreach x = output[i]: len(c(,x)) = 1
    foreach (a, b) = c[i]: a == b

Let me try to explain it: the first line of the definition says that there exists a one-to-one set of “relations” between elements in the input and output sequences.

The second and third lines state that there is a single relation in “c” that pertains to each element in the input and output lists, thus ensuring the relation is one-to-one. Finally, the last line states that the two elements in each relation are of equal value, ensuring the two lists are the same, just reordered.
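Checking (as opposed to constructing) this property is straightforward; a minimal Python sketch, relying on the fact that a one-to-one relation pairing only equal values exists exactly when the two lists are equal as multisets:

```python
from collections import Counter

def one_to_one_equal_holds(xs, ys):
    # A bijection pairing equal elements exists if and only if
    # the two lists contain the same multiset of values.
    return Counter(xs) == Counter(ys)

assert one_to_one_equal_holds([3, 1, 2, 1], [1, 1, 2, 3])   # reordered: ok
assert not one_to_one_equal_holds([1, 2], [1, 1])           # counts differ
```

Of course, verifying the property is the easy direction; the idealized compiler would have to *satisfy* it, i.e. construct an output for which the check passes.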

My current day job involves Mouinpress (a system to turn WordPress sites into native iPhone, Android, Blackberry and J2ME mobile apps), and ViEmu (a vi/vim emulator for Visual Studio, Word & Outlook, and SQL Server). If you want to help me with my project to fix programming, I’ll be grateful if you have a look, evaluate, and maybe buy or promote either product. Or, since I’m looking for investors, maybe you can help with that!

No. First of all, Prolog is not an algorithm and it doesn’t do anything, since it is a language, not a solver.

Most Prolog engines use combinations of forward and backward inference, search space reductions, heuristics, tail-call optimization and much, much more. So, even though the basic way of thinking of a solver is backtracking, that doesn’t do justice to all the advanced implementations currently available.

Indeed, I don’t know everything. I do know many things. One reason I am posting this publicly is to get all the help I can. If something exists which already works, it will have been worth it! If, as I suppose, there are things pointing in the right directions from which to “steal” ideas and build a usable tool, then it will be great to learn about them!

I’ve heard of Prolog, sorry, forgot to mention it in the article. Indeed I’m going for a declarative approach. There are things I’ll describe which go deeper than Prolog (at least in their intention!).

Imperative programming is just a natural evolution of machine language. That’s what makes it “broken”. The fundamentals remain unchanged; the model has only been extended/rethought by A) libraries and engines and B) functional programming, which make it less painful and make it possible to create working (though unreliable) software that is commercially successful.
And yes, declarative is the way to go.
Something I missed in this article is more focus on functional languages and what exactly their flaws are.

Interesting idea. Essentially, you want to do design-by-contract, but instead of invariants being specifications & tests for the code, you want them to BE the code.

Off the top of my head, I suspect that what you’re proposing is, in full generality, provably impossible. Which is not to say that it would not have lots of doable special cases. Indeed, it seems likely that there are enough doable special cases that, yes, you could do all programming this way. Certainly the sorting example you gave would be feasible.

I can imagine that writing a program would be something like this. I start with A being true. I tell the computer that I then want B to be true, then C, then D. The computer chugs away, and tells me that all is well, except that it can’t figure out how to get from B to C. So I insert new conditions between them: make B true, then X, then C. The computer says okay, and I have a complete program.

Even if this did not turn out to be a good way to do ALL programming, one could certainly mix it with more traditional techniques.

The big question is whether this would really lead to fewer bugs. I can imagine subtle problems with stated invariants being just as tricky to figure out as more traditional bugs. Something like design-by-contract works well because the invariants and the (traditional) code function as checks on each other. But if you get rid of the (traditional) code … then I don’t know.

Regardless, a thought-provoking idea. I’d be interested in seeing some kind of implementation, however rudimentary.

Glenn, I’ve been toying with some implementation, of just a “sum” function, but indeed it’s tough. I planned to do it in the future. But since you never know if there will be a future, I’ve decided to share my thoughts and hopefully find some help, advance in the open, etc…

It might be worth spending an afternoon looking at the “provably impossible” part before spending any significant time on implementation. In particular, it seems that it would be trivial to define in this new language the “WHAT” part of an NP-hard problem, and it would be possible to define provably undecidable problems such as the halting problem.

In any case, performance is the killer for such applications. Most people can accept an order of magnitude slower performance from a language that has other benefits; but if you don’t specify the “HOW” part, then you can’t ensure that a potentially O(n) problem doesn’t compile to an O(n!) solution.

Peteris, you are right that it’s worth making sure all the effort doesn’t go down the drain. NP/NP-hard/NP-complete problems all share the property that they are easy to verify but difficult to solve. Still, I *believe* that many problems, even if they are NP-grade in their full generality, are tractable in many particular cases, which do cover a lot of everyday programming. That is my goal.

You’re right about most NP/NP-C/NP-H problems having ‘good enough’ solutions (same for some intractable problems, too, like the halting problem).

The hard part of this is going to be writing a compiler or interpreter for it, because (as far as I can tell) you are basically asking the computer to solve a satisfiability problem and a computability problem: “Does (set of boolean conditions for output) (a) represent something satisfiable at all, (b) represent something computable, and (c) can I compute it from the (set of boolean inputs)?”

That’s an NP-complete problem (n-SAT for n>2), plus two generally intractable problems: computability reduces to the halting problem, which is provably impossible to decide in all cases. I know computability has reasonable heuristic solutions that you could probably take advantage of; for general SAT problems, no polynomial solutions are known.
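The scale of that difficulty is easy to demonstrate with a brute-force SAT solver (a sketch of mine; the clause format, DIMACS-style signed integers, is my choice): it has to try up to 2^n assignments.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Each clause is a list of ints: k means variable k is true,
    -k means variable k is false. Tries all 2**n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))

# x1 and not x1: unsatisfiable
print(brute_force_sat([[1], [-1]], 1))  # None
```

A compiler that accepts arbitrary boolean specifications inherits at least this search problem, which is why heuristics and special cases matter so much.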

I think with this approach you could even solve problems that were hardly approachable before. I am thinking in particular of problems where you know there is a solution (or many) but you cannot write it down on paper. This could come in handy for mathematical work. I’m definitely interested.

Besides, one could create a fuzzy compiler that only solves the code it is smart enough to solve.

This is similar to the Relational Model for DBMSs. Codd’s original theory was simply predicate logic and set theory applied to data. SQL has a lot of these constructs (it’s declarative), although it is a pretty poor implementation of a true relational language. Read up on Rel/Tutorial D for some inspiration/implementation help: http://dbappbuilder.sourceforge.net/Rel.php

Yes! SQL will come up further down in the series. It’s a clear case of how ALL programming will work. You specify the data model, and then making it run is the task of the runtime; making it run efficiently is even the job of someone else, who can tweak things until they work faster, but they can never break the code! In a DBMS, you can add or remove indices, but that doesn’t change the logic. You can’t introduce bugs through optimization. Isn’t that great? All programming will be like that some day.
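The “optimization can’t break correctness” property is easy to demonstrate with SQLite from Python (a sketch; the table and index names are mine): adding an index may change the query plan, but never the result set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(1000)])

query = "SELECT val FROM t WHERE id = 500"
before = conn.execute(query).fetchall()

# Pure optimization: the declarative query above is untouched.
conn.execute("CREATE INDEX idx_t_id ON t (id)")
after = conn.execute(query).fetchall()

assert before == after   # same logical answer, different access path
print(after)             # [('row500',)]
```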

You can add indices, but that won’t prevent your query optimizer from suddenly deciding that another ordering of joins will be ‘faster’ (only for the query, once performed, to turn out multiple orders of magnitude slower).

Your data is mutable, so your optimizers will be mutable as well. Think about this! You only want this if your optimizers are 100% correct; however, there are provably no algorithms which can guarantee that. The simplest way to see this is by looking at the halting problem: there is no algorithm which can decide whether a generic proposition on natural numbers holds true or not. The same is true for complexity analysis.

So, in the end, you need heuristics or human-machine interaction to solve all these problems, making declarative programming a frail business that works in certain domains, but fails miserably in others.

Yes, you are right that a new tool in this direction will introduce other “gotchas”! It is my belief that, as with SQL, writing a “higher-level correct description” and then babysitting the strategies separately when performance suffers can still be better than manually specifying the whole sequence. Especially since the “optimization” part would not be able to “screw up” correctness, whereas programming nowadays often involves introducing bugs in the optimization phase!

Isn’t that the same argument that was levied against languages higher-level than assembly (or C/C++)? Sure, all abstractions leak. But if you’ve studied DBMS join algorithms (I have, as a DBA and for a thesis), they are complicated, generally use statistical approaches, and are generally better than what your “average” programmer can create.

Just like optimizing compilers have made the general-case programmer able to get better performance at the expense of “optimal, by-hand” performance, I suspect general-case programming optimizers, in the way the OP suggests, will beat out “by hand” optimization. Eventually.

Interesting, thanks! Great to find out others are attempting to climb this tall, steep mountain. I’ll have a look at this and the other projects I’ve been referred to as soon as I can have a bit of time!

A bit of a non-sequitur there. Yes, programming is fundamentally broken in all the ways you mention. But your solution leaves me cold. First, it’s not dissimilar to Haskell: http://c2.com/cgi/wiki?QuickSortInHaskell
Haskell is a fine language, perhaps the finest, but Haskell as well as your proposed solution are still leaving the huge burden of having to conceptualize an enormous mountain of details and express the solutions through the strict requirements of the programming language.

I think Jonathan Edwards (the creator of the Subtext programming language) said it best: you are sitting in front of a _computer_, for heaven’s sake, why the heck are you doing boolean logic in your _head_? Isn’t that what the computer is for?

This applies to all of programming, really. We are sitting in front of machines that are fully capable of keeping track of all the details that we programmers presently juggle in our heads. Yet while I program, my computer sits idle, just making my cursor blink. Seems like it could be put to something more useful, eh?

“”” but Haskell as well as your proposed solution are still leaving the huge burden of having to conceptualize an enormous mountain of details and express the solutions through the strict requirements of the programming language.”””

Yes. So, an ideal language would free us from that too, and it will thus be a single statement:

Haskell, like all functional programming languages, is a cleaner, more elegant version of imperative programming. But it doesn’t get to the core. Still, *at the bottom of Haskell*, if you want to find the largest number in a list, you have to recurse. It’s the same as iterating. In either case, a library function can solve that, and Haskell has great abstractions for it, but that’s not what I mean. I mean, you *say* that you want the largest element, and the runtime does the job of deciding how to find it!

Chris, you are right, my incomplete proposal/approach doesn’t solve all problems. But I think it can solve several very important ones. We’ll see, and I hope the feedback from everyone will be helpful advancing in this area.

Well, Haskell, Prolog, C, C++, Java, Z, Eiffel, Smalltalk: they are all Turing-complete, and AFAIK, Turing-complete is as good as we can get (for now).

In the end, our Turing-complete definitions of algorithms need to be serialized into instructions performed by a more-or-less sequential processing machine. One way or another, you can call any language imperative.

Edwin, thanks for the pointer. I’m familiar with that book. IIRC, I think it even described how to implement a functional language using the SKI combinators? I’ve always loved the SKI combinators.

I definitely don’t mean that Haskell is imperative, in all its purely functional glory! But I have researched the applicability of functional approaches to this problem, and even if I’m sure I’ve missed a few things, I think I grasped the major part of why the functional approach has a fundamental limitation that requires a declarative approach. I will share my thoughts on this in a future article, and I’ll be happy to hear your thoughts on it.

The last paragraph of your comment is golden! The image of the idle computer blinking the cursor is spot on. Even with the latest IDEs, what do we get? Syntax highlighting, code completion, perhaps some static analysis and code refactoring, if we are lucky.

I’ve seen a bit of LINQ (I am the developer of the ViEmu for Visual Studio plug-in…). LINQ is based on SQL somehow, see above for my reflection that SQL *is* a great model in some ways of how all of programming ought to work, and hopefully will work some day.

LINQ is fine, because it’s expressive and more readable. It resembles SQL, but it is nothing other than classical functional programming (map, filter, lazy evaluation, monads, …). The same is true for reactive programming. [Again, Microsoft didn’t invent it.]

Pattern matching, aspect-oriented programming and especially meta-programming are a great help for readability too (and are fun, given a dynamic scripting language). Remember: you only write code once, but you or someone else will read it many, many times (this is not only my experience). And that is what sucks most for me.

Declarative programming or CLP, although invented many decades ago, haven’t found their way into mainstream programming yet. But as someone else already mentioned here, debugging declarative programs can be obscenely difficult and runtime and memory consumption are difficult to predict too. Improving a declarative program is as hard as improving a program written in any other language.

Of course it is possible to write it this way in Python, Ruby, C#, etc. too. But how many programmers would code it this way? The problem is that almost all languages allow you to do the same thing in many different ways, and you need a lot of education and experience to write it the functional way.

Thanks for posting this! I’m interested in using your language for a project of mine… is there any way that I can get in touch with you? I haven’t been able to find any contact information for you on the two pages on the internet that have the tutorial.

A thought-provoking article. I suspect, given your example of “Go get me a beer from the fridge”, that your future language will either stem from an evolution in AI or will cause a revolution in AI. I think that what you are proposing requires the computer to be smarter than it is now. You want it to behave like a human, in the sense of how easily we categorize and how many different parts of our brains cooperate to perform the details of what your instructions imply. We still get stumped, however. For example, when I get to the fridge, what if there is more than one type of beer, or no beer? We’ll still have the possibility of failing, because we may bring you a type of beer you didn’t want, or bring you a coke instead of a beer. No system is perfect (at least one that humans invent), but we strive to improve. Good work on thinking outside the box!

Thanks for the encouragement. I do think that AI will come via the evolution of programming. But I think we are still far from that. Even my “overly-ambitious” approach is (hopefully) just a tiny step in that evolution!

You might be interested in Coq (http://coq.inria.fr/), even if it is quite hard to grasp (I tried a long time ago, got some small things working, very exciting BUT difficult).
The idea is not exactly the same as yours: in Coq, you write a proof of a theorem, and then you have a program that implements it. For example, you write the proof of the four-color theorem, and then you have an algorithm for how to color a map. The cool part is that the algorithm is produced by the compiler from your proof, and of course you have the proof that your algorithm is correct too!
That’s called constructive logic.

I will agree with your premise that declarative programming looks cleaner, but I must point out that debugging is much harder!

Consider the very simple domain of CSS, which is clearly much simpler than programming in general. Even with CSS, “width: 200px” is highly contextual and laborious to debug if it renders wrong. It does not help that Firebug will lie to you.

And from what I have heard, that is why Prolog did not take off: Debugging the logic errors was a nightmare. At least with procedural programming you can step though the code to find the error.

Yes, I’m sure debugging is a problem (and performance, as many others have mentioned). Hopefully, if the tool is practical for many everyday things, the tooling will advance. Just look at HTML and CSS: they are a pain in many aspects, and still they are unstoppable.

This is not an answer to this problem (not that I really believe there is one) but you might find this talk by Gerald Sussman at Strange Loop to be a fascinating jump forward (or maybe just sideways) from what you’re talking about:

There is indeed a common part! That part is required (specifying the space of answers and determining the solution), and I think there is a part missing in Prolog, about the strategies with which to tackle this.

This is a very interesting article.
I believe that what you are talking about here is the compiler’s problem domain vs. the programming language’s problem domain. As in: “who is responsible for knowing and solving this problem, the compiler or the language itself?”

You can provide declarative statements like one_to_one_equal() (reminds me of a Rails DSL) pretty easily, and they can indeed be implemented as part of the compiler. But what about real-world problems and business logic?
For instance, if you are implementing Facebook, and you decide that a status update should reach all your friends, how will you declare that? sent_to_all_friends_of(user)? This is definitely a language-domain problem.
So you implement a declarative statement for your application using the more abstract declarative syntax of the compiler (or through intermediate application layers, until you reach the library or language syntax).

If you’ve been in the industry for 20 years, you should know that the Japanese burned several generations of compsci grad students trying to build “inference machines”. They wrote a fucking operating system in Prolog before the whole thing was shit-canned.

I know, I know. I also know about the “4GL” fiasco in the 80s. I think Prolog is not enough. I think more things are needed. I’m not sure I will be able to provide the solution! But I do think it’s worth exploring. I don’t want to be debugging code like today 20 years from now. And I think the right way to go after this is in the direction I’m pointing. Hopefully I’m right, maybe not. But it’s worth a shot!

Love Scheme and the thinking behind it. I still think that adding “exclamations” to the functions that are not pure doesn’t solve the problem. A consistent approach is needed and Scheme falls short in that regard. Haskell fixes that part, but there are other issues.

You need to look at the work and lectures of Alan Kay (the STEPS project, about meaning) and David Ungar (the Renaissance project) and you’ll see that programming does not have to be broken. You need to get out of your pop-culture knowledge.

I’m familiar with stuff by Alan Kay, though definitely not with everything. He has been a visionary. Still, I don’t think Smalltalk is it (I don’t know whether STEPS is Smalltalk or not). I love his quote that C++ was not what he had in mind when he invented object-oriented programming.

I will try to look up the two references you provide, as soon as I have a bit of time!

I am not a programmer (at least not yet, as I am trying to learn). For someone who doesn’t know how to program, programming is indeed ugly. I don’t mean it in the sense that it is hard to grasp. It is simply ugly! So many variations of things that do the same thing, and you need to write so much to do it.
Hey, I have a list, an array, whatever: [1, 2, 3, 4, 5]. Can you please reverse its order and output it, or put it in a var x so I can access it in that shape or form and output that? How simple is that? 5, 4, 3, 2, 1. It should be as simple as me typing it.
Plus, all those tutorials start with “How to reverse a string”, which, by the way, is something so stupid! Why would I want to reverse my name for no reason? I actually asked that on Stack Overflow, in beginners for programming, and what did I get? A bunch of laughs, I think (and I really do not understand why no one can answer me why I need to reverse my name; and if I do get an answer, it will be 100 different ones. Why? Because every programmer programs differently and has a different view, and so on). Maybe I am following something different here, a different way of understanding everything: looking for simplicity. I studied electrical engineering. My mom was a math teacher. I am good at it. I am a designer. I like empty rooms and very simple, minimalistic things. Life is simple. Programming should be as well. People tend to complicate their lives, and as a non-programmer who is trying to learn it, I really think this is happening. It is ugly.
And yes, I think programmers are very brilliant people and I love all of you. =D

PS. This is just my opinion, really. I do not know every language out there, but in the last couple of months I have touched on Python, C, Ruby, Objective-C, JS, etc.

To answer your question ‘why would I want to reverse my name?’, I’d say ‘you probably don’t’. However, when you’re solving a problem you can easily discover places where you have a list of characters that you *do* want to reverse. Because situations like that are extremely context-driven based on the rest of the (current) implementation it’s easier to just teach how to reverse a string so that you’re aware of the process. That way, when you encounter a situation where you do want to, you can spot the solution much more easily.

Programming is ultimately entirely context-driven, which is part of what the original article is decrying. When you have thousands or millions of lines of code executing to achieve some aim and you’re trying to modify it, you quickly hit the limits of human awareness, so we abstract things away, ‘chunk’ the concepts and define structures to make some sense out of it all. However, our personality influences what we believe to be sensible, so different programmers have approached the same problem in different ways, leading to a multitude of ways to reverse a string.
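For what it’s worth, here is one place where reversing a string genuinely earns its keep, checking for palindromes; a small Python example:

```python
def is_palindrome(s):
    # Keep only letters and digits, lowercased, then compare
    # the string with its own reversal.
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))                           # False
```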

Mathematica operates in a way very much like this. It’s functional (and also imperative, and declarative) but it’s also ‘intentional’. Here is a simple example. Given a list a:

a := {1,5,2,4,3,1};

We can find the maximum value with a single call:

Max[a]

We can sort it similarly:

Sort[a]

Or supply our own sorting function (descending, here)

Sort[a, #1 > #2 &]

We can map functions over lists:

Total[Map[1/#&,a]]

Gives a sum of the inverses of each element.

So far, so functional, but those functions (Max, Sort etc.) are fundamental parts of the language. There are thousands and thousands of such functions, all of which can be combined in exactly the way you advocate in your post. The runtime does *all* the work.

At its heart it’s a term rewriting system – quite unique and extremely flexible. Many people assume it’s ‘simply’ an algebraic manipulation tool for mathematicians, but it’s a complete, general purpose programming language in its own right: one that’s richer by far than any other language. It suffers from none of the problems you identified in your post (though it has others.) It’s worth a look for inspiration.
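For readers without Mathematica, rough Python equivalents of the calls above (the mapping is mine and only approximate; Mathematica’s term-rewriting model has no direct Python counterpart):

```python
a = [1, 5, 2, 4, 3, 1]

print(max(a))                    # Max[a]             -> 5
print(sorted(a))                 # Sort[a]            -> [1, 1, 2, 3, 4, 5]
print(sorted(a, reverse=True))   # Sort[a, #1 > #2 &] -> [5, 4, 3, 2, 1, 1]
print(sum(1 / x for x in a))     # Total[Map[1/# &, a]], sum of inverses
```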

Developing software is fundamentally a learning process; as developers, we explore the problem domain and the space of possible solutions. The programming language helps us to specify either a single possible solution or a set of possible solutions in a formal manner, but to guide us in our search, we need to understand the implications of the statements that we have made.

This is really about the software development process as a whole, and how the specifics of the language support that process. The output of the software that we create, test results, debugger output, static analysis etc. all help us to understand the problem domain and the current state of our search for a solution. The languages that we use do need to support these tools better than they currently do (especially static analysis, IMHO), but the focus of the (many) discussions on the language itself and its syntax is somewhat misleading, I think.

I would rather the effort be spent on extending and improving the toolchains surrounding languages that already exist and are popular. I want continuous testing, style checking, static analysis and fuzz testing to be ready “out of the box” and enabled by default; from the get-go, I want my IDE/compiler/interpreter (with default settings) to refuse to build/run my code unless the style is spot-on, the test coverage 100% and the static analysis gives the thumbs-up. If that means I must constrain the way that I develop software, so be it.

I agree with the original post: it would be nice for a program to consist only of invariants and test-cases, with the algorithm generated automagically. I suspect that, feasibly, the developer will still need to provide some guidance in the choice of algorithm, but I see no reason why we could not have our IDEs/text-editors provide continuous feedback when the statements that we type violate some constraints that we have previously specified, or cause some test-case to fail.

This would have been unthinkable a few short years ago, but the computational power at our disposal right now is immense: enough to make us shift our concept of what is possible.

I totally agree. Current tools have to get better; that will be a great business, and it will improve the status quo sooner.

I also think that it’s worth it to look “further away” and try to find better approaches to the fundamental problems. They may give us usable tools quickly, or they may not, but they may inspire the evolution of the technologies, etc…

I see static code analysis as very valuable and interesting, but I think it is a “bandage”.

Indeed, the developer will, or may, have to provide guidance in many cases, like tweaking a DBMS that uses SQL. But it’s good to have those parts separated: being able to ensure the correctness and security of the code, having code that can be adapted more easily and quickly to evolving specs, and, hopefully, especially, making programming a more pleasant experience than it is today!

Don’t get me wrong, I would totally like a language like the one you are describing. Sometimes I ask myself how many kids would like computers, in the programming sense, if languages were kinder to the common man.

But computation isn’t symbolic the way our minds are; programming is about our capability to arithmetize our thoughts (which is the basis of all mathematical analysis of formal systems), so there is a dichotomy. It’s a difficult process to “downgrade” our way of thinking like this.

I really wish you the best. (BTW, once you launch your company as you said, I’d totally pay for your product.)

Yes, to us integers are symbols, but not to the computer! When you use a formal system capable of arithmetic you can prove Gödel’s incompleteness theorems, and you yourself will say “oh, this is odd” and recognize the pattern. What will the computer do instead? Could it capture that idea (the idea of incompleteness) within itself (the computer being a formal system)? No, because that system, the one capturing the idea, is also susceptible to Gödel’s theorems. That’s why I say humans operate on symbols and computers on pure arithmetic: we have the capability of stepping outside and looking in from the outside. That the arithmetic can be understood as symbols is irrelevant.

You are the only one in the comments who figured out why programming is the way it is.

Programming IS the materialization of a sequence of steps, which later gets translated into pure arithmetic. The “paradigm” doesn’t matter; programming languages just “fake” what will later turn into a sequence of steps (e.g., a functional or OO language lets you manipulate functions or objects, but will simply produce bytecode, which is, again, just a bunch of procedural steps).
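Python’s standard library makes this point easy to see: the `dis` module shows that even a one-line, functional-looking expression compiles down to a plain sequence of stack-machine steps (a sketch; the exact opcodes vary by Python version).

```python
# Disassemble a "declarative-looking" one-liner to reveal its procedural steps.
import dis

square_evens = lambda xs: [x * x for x in xs if x % 2 == 0]

ops = [ins.opname for ins in dis.get_instructions(square_evens)]
print(ops)  # a flat list of opcode names: loads, jumps, calls, step by step
```

Whatever the source-level paradigm, the machine below is executing one small instruction after another.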

Any successful “declarative” programming, like the author proposes in this post, can only result from one of these options:

1. Have a big enough library of common patterns, so that programs can be specified at highly abstract levels. This is the closest you can get to defining a program by intention without knowing the intermediate steps. Worth noting here is that OO was created with this very intent, but inheritance, compatibility problems and other technical issues limit it.

2. Have computers able to manipulate symbols, so you just specify the desired output and provide an input, which is basically a neural network. That means programming would be closer to telling a friend “Get the beer” instead of writing down all the steps a robotic arm has to perform to actually get a beer. It also means, though, that this computer (neural network) has to learn (train) so it can best approximate your desired output. And it means it won’t be deterministic, and thus pretty horrible at pure algorithmic tasks (e.g. sorting a list).

Any way you look at it, though, I would say it’s impossible to have anything too dissimilar from current languages without fundamentally changing the underlying machine, since the means we have to interact with computers nowadays are limited by the very nature of how computation takes place (the Turing machine).

The incompleteness theorem applies to our own reasoning processes as well. We are nothing more than symbolic processors ourselves, with our own (extremely complex) formal system driving our ‘consciousness’.

Our own cognitive processes are just as rigid as the ones underlying a computer, we just have many more (and more complex) layers on top of it insulating us from the rigid, mechanical deterministic processes of our neurons firing.

Yes, but we also can step outside the system (in a contrived manner), which lets us pull out of paradoxes and incongruities.

We say, for example, “we aren’t truly free: we can’t decide our own desires.” But when you face a computer with such a problem, it cannot step outside. It will die with that contradiction, not recognizing it as a systemic problem.

That’s a completely open question, with an enormous number of scientists who disagree.
No one has ever created a complete, fault-proof theory of our cognitive processes “as rigid as the ones underlying a computer”.

I think your underlying assumption is wrong: programming languages are broken, therefore we need a better programming language. People have been trying this since the seventies, and nothing much has changed. Try a different solution.

I’ve spent a bunch of time searching for “the answer” too, including a bunch of time with OCaml, Scheme and Erlang (= Prolog with other good stuff for real-world apps). Like you I think part of the answer is relational-based programming so I wrote r17 (www.rseventeen.com). R17 is a language that combines SQL-like relational operations with pipeline concurrency. Currently it’s set up mostly for data mining but maybe the idea has wider application.

I know, I know, someone sent me that on the programming reddit (before they totally buried the article!). Hopefully what I’m thinking and sharing has a bit more substance than the cases Perlis refers to.

I’ve been working on exactly this goal for quite a while now… (I am working on it in another window right now, in fact…) Email me if you like. (I’ve also heard all the skepticisms above. There are a lot of assumptions people take for granted that need to be unlearned…)

Will try to look at it! I’m very pressed for time (launching a start-up at the same time…). Will be good to share the thoughts, ideas, and approaches. I will try to do it publicly, as I think it will be the most beneficial for all, and give the best chances to the effort. It’s a long-term effort!

First, as several have mentioned, Prolog, yada yada, been done before. None of that is interesting though. What is interesting is why you’re barking up the wrong tree, and it can be summarized in a single word.

Determinism

Or rather the lack thereof. Computers strive for determinism, but the reality of our world is a non-deterministic one, and all programs must account for that. There is not one single truly deterministic program in existence, and there never will be, for the simple fact that the world we inhabit is non-deterministic, and so computers themselves are also non-deterministic.

Take for example the humble hello-world program. Surely, if anything can be deterministic, such a simple program can; after all, it simply prints a single string and exits, yes? Let’s assume our language of choice is C, and therefore uses null-terminated strings (important in this example, but it could be generalized to other settings). Now, let’s assume that in the fraction of a second between the program being loaded into RAM and execution starting, a cosmic ray passes through the computer’s RAM and flips that null terminator to a value of 4. Why 4? Who cares, it’s random (non-deterministic, remember). Suddenly our nice simple deterministic program just jumped the rails and is spewing all manner of garbage from RAM.
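The scenario can be simulated in a few lines of Python (a toy model of C-style string reading over a flat buffer, not real hardware, obviously):

```python
# Read a null-terminated string from a flat "RAM" buffer, the C way:
# scan until a zero byte, with no bounds awareness at all.

def read_c_string(ram, start):
    out = bytearray()
    i = start
    while ram[i] != 0:
        out.append(ram[i])
        i += 1
    return bytes(out)

ram = bytearray(b"hello, world\x00\x7fELF junk after the string\x00")
print(read_c_string(ram, 0))   # b'hello, world'

ram[12] = 4                    # the "cosmic ray" flips the null terminator
print(read_c_string(ram, 0))   # the read now runs past into neighboring bytes
```

One flipped byte, and the same deterministic code happily reads memory that was never part of the string.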

Now, that is a contrived and extreme example, but it can be generalized into all kinds of other situations. Any time you must perform IO in a program (and any program that doesn’t perform some kind of IO is rather pointless no?) you bump up against non-determinism.

So, why does determinism matter in your intent-driven language? Simple: in a deterministic world, all possible states could be enumerated (even if it takes a lot of memory to do so) and accounted for, and therefore a provably correct solution could always be derived given all inputs and expected outputs. However, in a non-deterministic world there exists an infinite number of states, and only a subset of those states may be enumerated and accounted for. Given this constraint, only a limited subset of inputs and possible outputs can be derived, and unfortunately any non-trivial program is nearly guaranteed to include some piece of functionality not included in that limited subset.

In my personal opinion, the best we can hope for is to corral as much determinism as we can and provide tools for the computer to prove all deterministic code correct, and leave the non-deterministic code for the programmer to deal with.

Determinism is the reason why problems like “fetching a beer from the fridge” would force the computer to “know everything” if we removed the “how it must be done” part and just shouted “what must be done”.

In the end – somehow it must know “where fridge is”, “how to get to fridge?”, “what my favorite beer is?”, “why my favorite beer is beer #3?”, “what if there is beer #4 that could become my new favorite beer?”, “what would be qualities of beer #4?”, “why quality #1 is important?”, “what happened when I was 10 years old, on saturday, may 16 at 3PM, that might make quality #1 important?”, “what is the answer to universe, life and everything and does it make beer #4 quality #1 important?”.

Cool. It may sound strange, but I came to that conclusion roughly 20 years ago. But I was not sharp and experienced enough to solve the problem. I am glad someone with more brains is attacking the problem. This IS BIG!

Hello fellow frustrated programmer. You seem to have attracted quite a few negative people, but that is the internet for you. I am on a similar “absurd” journey with so much to learn. Fortunately there are many willing to educate instead of scorn. Keep up the good thinking.

A very interesting article. If you’re researching Prolog-like solutions, it would be worth checking out Constraint Programming and Constraint Logic Programming and the various implementations that exist. I’m surprised no-one has mentioned these paradigms.

Of course you can introduce “bugs” in SQL with optimisation:
– Changing the isolation level changes how concurrent transactions behave.
– Reduced performance for some requests may saturate the hardware and produce a denial of service, or simply be unacceptable in terms of performance. So adding or removing an index can introduce bugs.
– Backup strategy. What happens if you lose some data? How will the backup strategy react depending on the failure? How will the software react if some data is corrupted or missing?
– Many alterations to a database, like adding or removing an index, degrade its performance while they are performed, or may even prevent any access to the data while being done.
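The index point is easy to check with Python’s built-in `sqlite3` module: adding an index leaves the query’s results identical but changes the query plan, which is exactly where the performance “bugs” live (a sketch; the plan wording varies by SQLite version).

```python
# Same question, same answer, different plan before and after indexing.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (id INTEGER, customer TEXT)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [(i, f"cust{i % 10}") for i in range(1000)])

query = "SELECT COUNT(*) FROM invoices WHERE customer = 'cust3'"
before = db.execute(query).fetchone()
plan_before = db.execute("EXPLAIN QUERY PLAN " + query).fetchall()

db.execute("CREATE INDEX idx_customer ON invoices (customer)")
after = db.execute(query).fetchone()
plan_after = db.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before, after)   # identical results: (100,) (100,)
print(plan_before)     # a full table scan
print(plan_after)      # a search using idx_customer
```

The semantics are untouched by the index; only whether the hardware survives the load changes.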

Yes, yes, you are right, but at least you limit the kinds of bugs. SQL is still used rather than hand-coding a DBMS in most cases, so that is proof it can be useful. Although I’m sure there are custom DBMS systems out there that avoid the problems of SQL! (And I don’t mean NoSQL; that could be another, modern case of the same thing.)

The same way that there is code that still has to be written in assembly language, newer types of tools won’t fully substitute pre-existing ones.

You should take a look at the Oz programming language and at APL (and its descendants J and K). For years APL programmers have been whining about how all programming has been a step back from the things they are able to do with their crazy one-line programs.

Fred Brooks wrote a book on computer architecture where he presents about 25 different architectures and includes working APL programs for each of them. The idea being that having a common declarative specification language allowed for better comparison of the architectures. Needless to say, if a working simulator for a computer can be written in a declarative language, then there is no lack of expressive power in the kinds of language you envision.

Coq and Agda were already mentioned but if you fancy LISP then you should take a look at ACL2 as well. Concerning quality of software I don’t see your declarative approach as a silver bullet for there can be errors in specifications just as in implementations. Interesting (and familiar) ideas nonetheless…

Have you looked into planning and domain description languages (such as PDDL)? I think you’ll find that there is a whole branch of Artificial Intelligence which has been working on this problem for a very long time…

It feels like what we are looking for here is ‘a better tool for punching nails than a hammer’. We want a hammer that won’t let us land hard on our fingers… or on someone else’s head.

While working with any tool we have to handle it responsibly and learn the craft of the job if we don’t want to get hurt in the process… A programming language is a tool we have to learn to deal with… until we find ‘something/someone’ to do the work for us.

I’m afraid that by the time we find a problem-free solution we people won’t be needed for programming any longer.

Maybe you’re right! But hopefully, we can find a tool that helps us do those tasks better, faster and more easily. I prefer to try to work on it, rather than wait for the “magic bullet” and have to live with tools as poor as today’s in the meantime.

Absolutely!… It has been a long and inspiring journey since the days of wire-programming, and I’m confident things will improve thanks to the evolving, ground-breaking ideas of all the people who love this crazy thing called programming.

The idea of ‘Natural Selection’ applied to this ecosystem, where hardware, software and ideas live and battle every day, suggests that at some point we will reach that stage, and the people who push the limits of our current tools are the ones who keep this world evolving… If we conform to the current “status quo” we will die of boredom!

In the general case, yes, why not try to make a (better) declarative language?

Now would it solve the problems you described? It could improve, I agree. But would it really solve the problem in general?

The most problematic issues in software development are wrong specifications, poor understanding of user needs or processes. This would not help there.

Even if you have the good specification and understand users needs, you have to translate it to your declarative language… Essentially if you make an error in the translation, your declarative language will not help you here. Wrong request, wrong result.

As an example, if we focus on your first declarative version of sort, it doesn’t require sort stability. It doesn’t specify memory constraints or complexity constraints. We don’t know if it will be applied to a small or a big list; in fact we don’t even know the meaning of the ‘<=’ operator, whose meaning would change depending on the concrete type.
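The stability point can be made concrete with a short Python sketch: both outputs below satisfy a spec that only demands “ordered by key, and a permutation of the input”, yet they differ on equal keys.

```python
# Records are (key, tag) pairs; the spec says nothing about equal-key order.
records = [(1, "a"), (0, "b"), (1, "c")]

stable   = sorted(records, key=lambda r: r[0])   # Python's sort is stable
unstable = [(0, "b"), (1, "c"), (1, "a")]        # another spec-satisfying answer

def satisfies_spec(inp, out):
    ordered = all(out[i][0] <= out[i + 1][0] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

print(satisfies_spec(records, stable), stable)      # True [(0,'b'),(1,'a'),(1,'c')]
print(satisfies_spec(records, unstable), unstable)  # True, but a different result
```

An underspecified declaration admits multiple correct programs with observably different behavior.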

I will also try to make an educationally negative post. I see in your example that you have essentially specified a test for measuring if a list is sorted, and as part of that you have tacked on a constraint about the relations between the variables.

I will not belabor the prolog references made before – this looks to me like something that might be readily addressed using some kind of evolutionary search. A previous poster invoked the lack of determinism in computing as a limiting factor in your approach, but that poster failed to make the right connection.

The trade-off I believe you are facing is similar to the one you face under Prolog: if you want the computer to find a solution while you only specify an objective function, you will save time sitting at your computer, but then you will need to find something else to do while the computer runs a giant cluster job in an attempt to find a program that satisfies the many inherent constraints you (not you personally) probably did not think you were imposing: cannot grow unbounded in memory; cannot take a year to run; cannot change datatypes from double to string, etc.

This kind of meta-programming is *possible* now. But it is not feasible. All you need is a PhD in statistics, another one in computer science, and then to take up where Estimation of Distribution Algorithms leave off.

A grammar-based approach that combines stochastic search with a logic system like Prolog would be awesome. And, to be very Web 2.0 about things, I should also note that if you have a database of all code ever written, your algorithm can be that much simpler because it will have much better statistics.

Sounds like you’re talking about meta-programming. The problems you’re describing are all related to lower level languages. What you’re wanting is a way to describe your problems at a higher level, another tier of abstraction. There are ways to do this of course but it remains very hard and not yet a completely solved problem in my opinion.

Have you heard of OMeta? Check that out. Basically the idea is that using the principles of pattern matching you can easily create grammars that allow you to transform higher level code into lower level code. It’s not yet perfected but I think this is the solution to the problems you’re describing. Because you’re right, it’s nuts. These general purpose languages we write in are great and have only gotten better over time but they don’t scale to massive levels of complexity (or in the wrong hands, minimal levels of complexity). Abstraction is needed. Abstraction of general purpose programming itself.

It is Martin Fowler’s classic state machine DSL. It allows a user to write in the language of a state machine. Or, more importantly, it allows you to think in terms of a state machine, not classes or functions or data structures. I think this is the solution to the problem you describe. You need to be able to express your abstract concepts in equally abstract semantics. Then you author code that transforms those abstractions into something more concrete, all using the same techniques proven since the advent of programming, which is to say compilation. A standard compiler merely transforms general-purpose code into machine code. Taking this to its next logical step is to create a way to, just as easily, transform more abstract code into general-purpose code. Rinse and repeat.
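A minimal sketch of the idea, embedded in Python (the transition table and event names here are made up for illustration; this is not Fowler’s actual DSL syntax). The author thinks in states and events; the boilerplate interpreter is written once.

```python
# A table-driven state machine: the "program" is the transitions dict.
transitions = {
    ("idle",    "door_opened"): "active",
    ("active",  "light_on"):    "waiting",
    ("waiting", "door_closed"): "idle",
}

def run(start, events):
    state = start
    for event in events:
        # unknown (state, event) pairs leave the state unchanged
        state = transitions.get((state, event), state)
    return state

print(run("idle", ["door_opened", "light_on", "door_closed"]))  # -> idle
```

Changing the machine’s behavior means editing the table, not rewriting control flow.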

And more generally, search for formal methods. It sounds like the language you’re proposing is exactly the specifications that the proof-checking languages (like Verifast or vcc) try to help you check, and you can probably get a lot in your language design from studying those, or starting with parsing them.

It would be pretty simple to make it dump a stupid implementation (almost just like Haskell, in most cases). Making it dump an efficient implementation would probably take a datacenter to grind it, but is also probably possible in many cases.
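The “stupid implementation” can literally be generate-and-test: enumerate candidates and return the first one that meets the spec. A Python sketch (hopelessly slow, O(n!), but derived directly from the declarative statement):

```python
# A sort defined only by its specification: the output must be ordered
# and be a permutation of the input. Search for it by brute force.
from itertools import permutations

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def declarative_sort(xs):
    return next(list(p) for p in permutations(xs) if is_sorted(p))

print(declarative_sort([3, 1, 2]))  # -> [1, 2, 3]
```

Correct by construction with respect to the spec; making it *efficient* is the part that takes the datacenter.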

I think one day all programming will be done this way and I’d love to see more eyeballs on it to make it happen faster, so I’m glad to see you’re looking into it, and I wish you success. It would be really nice to have a usable tool that lets me re-code a performance bottleneck while being sure I haven’t introduced a bug.

Of course there’s the halting problem and undecidable specifications, which I imagine will always be a problem. But that doesn’t mean nothing useful can be done.

PS: don’t actually USE Verifast (or vcc) for anything without reading the crazy Microsoft Research license, if you don’t want to sign your soul away. But check out the syntax, and think about the utility of being able (optionally) to combine specs with existing source code. If you really want to fix programming, you have to keep in mind you can’t get everybody to switch without providing a path to upgrade their legacy code bases incrementally.

Just a general question here, doesn’t your bubble_sort python/pseudocode example end up sorting the final output list into descending order rather than ascending? (minor detail but just thought I would ask as it caught my attention when reading the post).
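The post’s snippet isn’t reproduced here, but the direction question usually comes down to a single comparison: swap when the left element is greater and you get ascending order; flip the test and you get descending. A generic bubble sort for reference (not the post’s code):

```python
# Bubble sort where one comparison decides the output direction.
def bubble_sort(xs, descending=False):
    xs = list(xs)
    for end in range(len(xs) - 1, 0, -1):
        for i in range(end):
            out_of_order = xs[i] < xs[i + 1] if descending else xs[i] > xs[i + 1]
            if out_of_order:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

print(bubble_sort([3, 1, 2]))                   # [1, 2, 3]
print(bubble_sort([3, 1, 2], descending=True))  # [3, 2, 1]
```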

“You express what steps have to be taken. Imagine telling your pal to grab you a beer from the fridge, step by step, with each step being robotically-rigid and with very little regard for context. It would be excruciating. And prone to catastrophic failure.”

Except it’s not. Something *does* describe each and every step to be taken; it’s just that biology abstracts so much of it that you don’t think there is that much to it. You work with an API to talk to your friend (language). His conscious mind works with an API to move his body (the central nervous system). His nervous system works with an API to effect movement in muscle (neurotransmitters). This goes on and on. The enormous amount of excruciating detail is abstracted at every level.

You can never make the steps go away. At some point they all have to be broken down into the most excruciating details. The key is, and has always been, reasonable abstraction, be it by a library, a language, anything.

No programming language is magic. No library is magic. Fundamentally, at a low level, they both do the *exact* same thing. All of the steps you so abhor have to be accounted for, and handled by, someone; They do not just disappear. Whether it is through the creation of a new language, or a new library, a *programmer* will have to take care of these steps for you. Programming is anything but broken.

Maybe your problem will be solved if you create a thinking head (or a thinking creature) !? Why do you expect from a programming language to be able to solve intellectual problems? You should first write an artificial programmer and then just give him programming tasks in your every-day language.

These kinds of articles pop up every few years, and what I always think, though I’ve never said it for fear of sounding stupid, is that what you’re really dissatisfied with is not the programming paradigms but the random-access-machine computing model, i.e. the von Neumann machine. In fact you pretty much state that in the third paragraph. What you’d really like is to change the face of computing and work in an environment that uses something like associative memory instead. Which is basically what your version of bubble sort does: the idea of “relations” in one_to_one_equal is essentially an alternative implementation of associative memory with an added twist. Of course this means throwing out everything we know about computing and building from the ground up again. Oh well, at least the Perlis page is here. Derp goes me.

I find it amazing you claim to have programmed for 20 years, and yet you fundamentally don’t understand what programming actually is.

At its core, programming is a creative endeavor by (fallible) humans. You might as well say “sculpture is broken”, or “architecture is broken”, or “writing a novel is broken”. Get my point?

In each case, as with programming, the fault lies not in the tools, it lies in the artists. I think what you mean to say is “programmers are broken”, and to be more specific “my programming skills are broken”.

I would even take exception to your fundamental argument. *IS* programming broken? Really? Because I look around the world and see TRILLIONS of dollars in economic value being created by “broken programming”. Broken compared to what? I feel confident saying the founding fathers of computer science would be ASTOUNDED that one or two people can create a fully functional software solution and build a company around in about two weeks.

Sure it could be more efficient but the reality is that in the broader effort of creating software, programming is a very very small part. The real challenges involve eliciting user requirements, delivering software, managing deployed software, training users, interoperability.

I’m not sure exactly what you would like programming to become, but I have the strong intuitive feeling, judging from the last pieces of code, that you are trying to re-invent λ-calculus or functional programming. Why don’t you just try LISP, F#, or even better, J?

Your example for sorting is not applicable to many scenarios. Many problems offer algorithms with different memory/runtime tradeoffs. Even if you could find the “optimal” solution with your hypothetical compiler, you don’t account for different needs of the environment: maybe I want to sort in a low-memory environment and don’t care too much about runtime, or I might want a guarantee that the sorting finishes in the minimum required O(n log(n)).

I’m sorry but you’re an idiot. I don’t care if you’ve been a ‘developer’, a real -programmer- is not bothered at all by your concerns, they’re actually why we LIKE, even LOVE, to program.

The difference can be shown by the analogy of driving a car. You don’t want to know how any of the components work, such as why the engine actually runs, how the steering wheel makes the wheels change direction, or even why music comes out of the radio – you just want to go somewhere.

We, on the other hand, NEED to know these things. By understanding the relationship of all the interconnected systems we are able to drive faster and safer at the same time -and- foresee possible problems down the road.

So what you want is Eiffel with extra bits? Because it looks like a lot of your example would be fixed with “Design by Contract”. Which is actually trademarked by the Eiffel people… But it’s also available in other languages. .Net languages (C#, VB.Net, etc) have Code Contracts, Java has about 20 different implementations, Python has Contracts for Python, Ruby has ruby-contract, etc. Then there are a number of languages with it built in: D, Eiffel, Cobra, etc. I mean it doesn’t fix everything that you mention but it would be a start.
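A bare-bones flavor of the idea in plain Python, using assertions (the libraries above are much richer; this decorator is a hypothetical sketch, not any of their APIs). Pre- and postconditions travel with the function instead of living in a separate test file.

```python
# Assertion-based contracts: check the precondition on entry and the
# postcondition on the result before returning it.
import functools

def contract(pre, post):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: all(isinstance(x, int) for x in xs),
          post=lambda out, xs: all(a <= b for a, b in zip(out, out[1:])))
def sort_ints(xs):
    return sorted(xs)

print(sort_ints([3, 1, 2]))  # [1, 2, 3]; a bad input or bad result would raise
```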

programmers = mucous creatures that ejaculate when they can lick the warts every current programming language has
software developers = shiny persons who want to build tools to make the world a better place (and the customers happy)

Programming languages have to enable developers to bridge the gap between the unfortunate complex state machines that microprocessors are and the problem domains. I think a consistent set of good DSLs (for GUI, concurrency (FSM), persistence, data structures, math, deployment, …, and for making new DSLs) would be my favorite next step in software development.
DSLs help the customers (= domain specialists) to formulate their requirements (= first and most expensive source of errors) in a way they and the developer understand.
They make the code more readable (a major flaw in nearly all “next gen” languages mentioned here, imho).
And they may be declarative, like qml, or a mixture of whatever, like R, depending on the problem domain. Religious wars are pathetic.

The target is definitely not ‘get me a beer’, but ‘get me a Bitburger out of the fridge’.

It seems to me that you’re saying: step-by-step development with attention to every detail and the encompassing environment is a broken paradigm.

Meaning, of course, that every sort of craftsmanship of every sort of thing humans do is also a broken paradigm. Name me a creation or craft discipline that doesn’t involve painstaking incremental steps and repetitive approaches.

Functional and declarative programming break the way humans think. We do not function inside-out in everyday life. I don’t get up in the morning and pour milk out into the air and expect a bowlful of dry cereal to slide into place, from somewhere that my smart kitchen has predefined for me, to catch it. This is way oversimplifying it but that seems to be what you are asking for.

What is wrong with this model? You approach every activity this way all day every day. I see nothing broken about this, unless life itself is a broken methodology. If I forget to close the door in my steps above, something horrible will eventually happen.

Like it or not, doing things requires an ordered-step approach with knowledge of the working environment. One can argue, as you do, that programming is broken, but these alternative approaches just put the steps in a different order and wrap the whole idea in a confusing, backwards thinking process that is unnatural. The initial learning curve is gigantic, and as others have noted, the debugging can be obscenely difficult. I just don’t see how it’s worth it.

I agree with some commenters that this is really a problem with the computer and not the language. I recall tutoring math students in college. When trying to help them find the number and types (real/complex) of solutions to ax^2+bx+c = 0, to some students you could say “See if the discriminant is negative, positive, or 0” and they’d get the answer (analogous to you telling the computer to sort the numbers without explaining how). Others would just stare at you because they didn’t know what the discriminant was. Then, you’d have to slowly explain “if b^2 – 4ac is negative, there are 2 complex roots; if it is 0, there is 1 real root; and if it is positive, there are 2 real roots”. In many cases, programming is like explaining things to a 5-year-old with a very good memory. If you’ve told them how to sort some time in the past (you already have a library for sorting), then you don’t need to teach them. But if they’ve never sorted, you have to tell them exactly how.
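The tutor’s rule, spelled out as code (classifying roots of a*x^2 + b*x + c = 0 by the sign of the discriminant b^2 − 4ac), is exactly the step-by-step “recipe” the student who didn’t already know the concept had to be given:

```python
# Classify quadratic roots by the sign of the discriminant.
def root_kinds(a, b, c):
    d = b * b - 4 * a * c
    if d < 0:
        return "two complex roots"
    if d == 0:
        return "one real root"
    return "two real roots"

print(root_kinds(1, 2, 1))   # (x+1)^2 = 0:  one real root
print(root_kinds(1, 0, 1))   # x^2 + 1 = 0:  two complex roots
print(root_kinds(1, 0, -1))  # x^2 - 1 = 0:  two real roots
```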

And imagine writing a program to apply payments to unpaid invoices… unless the computer understands accounting, you have to explain it in several small steps.

I’m also reminded of those flow chart things for 1st level phone tech support. If they “crash” (encounter a problem the flow chart designer didn’t think of), they have to escalate it up to level 2.

It sounds kind of like you would like to tell the computer what you want it do to in the same way that users tell us what they want their computer to do. Programmers are the liaison between the users and the computer. If it were simple enough that anyone could do it, then I’d have to help out around the house… 🙂

Having been in programming since 1989, and as one that LOVES programming (yeah, the twiddling part too) while at the same time being drawn towards something FAR better that is STILL elusive at best, I SO get what you’re saying. However, I think you’re not going FAR ENOUGH; you are stopping short of the real issue: code with the human error factor in it!

In order to move into this next arena, we MUST change our way of thinking. As you put it so well, we need to think BIG, but we also need to keep the concept and lose the lines of code, imo. (Lines of code ARE where we get into trouble.)

So I disagree that we should even have code like def sort(input[]) = output[]:. Our way of putting together applications needs to be “code-less”. It NEEDS to be a larger-scale (for lack of a better term, forgive me) “drag and drop” built on known code repositories that are NOT futzed with and are KNOWN, TESTED and goof-proof, and that we don’t, or can’t, mess with.

Having lived thru the 4GL period and seen the damage and destruction first hand, there WAS some cool stuff there, but we were not ready to push it out yet, we were not ready to use it all properly.

I believe, the concept you (and I , along with many of us) are struggling for is the ability to put our ideas down and have it happen – So large scale Drag and Drop programming begins to work here, then we need the ability to connect the dots and make it all work.

While SSIS drives me crazy to use, it is also extremely similar to this concept; while it has many issues, and also requires “connect the dots via code”, it can create some very cool things using almost only its GUI. (That said, there is usually a lot of code-level work that needs to happen here as well, so it’s only a starter.)

But until we can let go of the safety and the need to write it ourselves and twiddle the code because we “have to”, we won’t really buy into the mind change – and that IS where this all needs to happen.

I have programmed in approximately 32 different languages, and while C and C++ are favorites, my favorite language is a visual drag-and-drop one where pieces are created for you and you then extend them. Frowned upon? Sure. Discussed heavily as “beginner”, “concept” or “inadequate”? Sure. (In fact, I hesitate to even mention its name, since it is frowned upon by “real programmers”.) However, the fact remains that good programming design, style and implementation can take even that simple old tool and create a better mousetrap faster, and with less error-prone code!

Keep looking forward, but don’t settle for where you’re heading – look further down the road and ask for more. Think differently about this issue and keep a vision of what you want out in front of you.

To me this seems more like the problem that one programming language can’t solve everything. Some of the features you mention would be nice to have, but when you’re not happy with the default behaviour it could quickly turn into a mess, or just something impractical if it becomes necessary to take all possibilities into account.

Also, few tools (which is essentially what a programming language is) work like one’s friends, who only need to know that you want a beer and then know what to do – well, if they’re good friends, anyway. I see programming more like driving a car: you want to set the route in your GPS and have the car drive you there. The way I see it, the destination is the vision, and you have to make the effort of getting there.

Declarative languages are one thing and I’ve also heard of tools that translate uml into code, but so far the efforts do not seem to make programming easier.

UML is an (IMHO failed) attempt by the “jack-of-all-trades” fanatics to implement their god. As I mentioned before, a set of DSLs is the way to go: they can be as fine-grained as needed, can follow whatever paradigm is needed, can be readable, and can bridge the gap between the problem domain (customer) and the program(mer), …
As DSLs cannot (or should not?) cover 100% of the tasks, a GPL is still needed. It should be readable (like Python; unlike Perl and all the functional PLs I’ve seen so far) and fast in all respects (unlike Python :-(, Java, …). The paradigm it follows is less important, as its role is just a minor one.
The DSLs can be translated to the GPL, or better, together with the GPL directly to a platform-independent IL, like the one of LLVM.
So it is possible to run an app in an interpreter, or to compile an OS-specific driver to machine code, with one toolset.
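To make the DSL-to-GPL step concrete, here is a deliberately tiny sketch: a one-line declarative rule “compiled” into a host-language function. The rule syntax and all names are invented purely for illustration, not taken from any real tool:

```python
def compile_rule(rule):
    """Translate a tiny DSL statement like 'discount 10% over 100'
    into a host-language function -- the DSL-to-GPL step in miniature."""
    words = rule.split()                       # ['discount', '10%', 'over', '100']
    pct = float(words[1].rstrip("%")) / 100.0  # 0.10
    threshold = float(words[3])                # 100.0
    # The generated GPL code: apply the discount only above the threshold.
    return lambda amount: amount * (1 - pct) if amount > threshold else amount

price = compile_rule("discount 10% over 100")
```

A real pipeline would of course emit IL rather than a closure, but the shape is the same: the DSL stays readable for the domain, and the translation supplies the executable detail.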

Btw, even some years ago it was already possible to let a car drive autonomously from Munich to Hamburg (~800 km).

The problem with this idea is that as your program gets bigger, correctly stating WHAT you want to compute will have roughly the same complexity as stating the HOW. Already for such a simple algorithm as sorting you need a few lines, and a rather complex mapping test. Sure, finding the maximum value in an array can be formalized, but how do you formalize the minimax search in board games?

Now consider an algorithm which computes a random permutation of 1…N. This is very easy to program, but harder to specify in a formal way.
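Indeed, the random-permutation example is a handful of lines to implement but awkward to specify: the real spec (“every permutation is equally likely”) is a statement about a distribution, not about any single output. A minimal Fisher–Yates sketch in Python:

```python
import random

def random_permutation(n):
    """Fisher-Yates shuffle of 1..n: trivial to write imperatively,
    but the property that matters (uniformity over all n! orderings)
    cannot be checked by inspecting one result."""
    a = list(range(1, n + 1))
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)   # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]
    return a

perm = random_permutation(10)
```

The only cheap check is the weak one – that the output is *some* permutation – which illustrates the commenter’s point about the gap between WHAT and HOW.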

If I need an image of size NxN with K random pixels red and the rest a random shade of gray, then the problem is not the algorithm, but all the code around it (allocating an image object, setting up the right color space). How to ever formalize that?

In fact, I think no more than 2% of my code can be formalized at all. The majority of the lines have to do with creating a user interface, setting the right values, and hooking up the right callback functions. Then there is responding to notifications concerning networking or user input. How do you specify “emit a beep when the user swipes right”? It’s only one sentence in English, and 2 lines of code in iOS. How is specifying this easier than just programming it?
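The “two lines in iOS” boil down to registering a callback on an event. A minimal language-neutral sketch of that glue, using a made-up event bus (no real framework API is implied):

```python
class EventBus:
    """Tiny publish/subscribe hub standing in for a UI framework."""
    def __init__(self):
        self.handlers = {}

    def on(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def emit(self, event):
        for fn in self.handlers.get(event, []):
            fn()

beeps = []
ui = EventBus()
ui.on("swipe_right", lambda: beeps.append("beep"))  # the whole 'program'
ui.emit("swipe_right")                              # the user swipes
```

The registration itself really is one line; any formal specification of the same behaviour would hardly be shorter, which is the commenter’s point.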

The problem is: programming IS hard. You correctly wrote that in large programs, you lose oversight, but the same is true if you wouldn’t write code but rather specifications. How will you know that everywhere where you call a certain function, the right preconditions are being met? The same amount of checking is needed, and you will need to understand all specifications at the same time, in order to be sure that the program as a whole is correct. So, even if you would manage to create a compiler that would create code from the specification, that would be of interest, but it wouldn’t make programming easy.

I forgot to mention the next big challenge: language support for parallel programming and STM (software transactional memory, perhaps with on-chip support one day). This isn’t any fun with imperative languages.

I think the real deal here is: why are we dealing with crap like pointers and references at this stage in the game?

Really, think about it. We have AI that, according to recent articles, can guess numbers in groups – behaviour it evolved on its own – but we don’t have something that can handle all these little tasks that eat and chip away at all the time we have.

I think you stopped short: we should not be writing code, we should be “directing” code – which, by any measure, should all be machine-written by this point, or very soon.

Liked the article very much.
I agree, it all does seem a little complicated, doesn’t it?
Maybe it’s us IT bods that make it so difficult, so the barriers to entry of the industry are high and we can keep our jobs and ask for a lot more money?
You’d have thought that people as smart as developers could make a programming language which would accept:

The Ninja must be red
The ninja must be able to jump
The ninja has a sword
(thanks Louis)
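For what it’s worth, that wish list can at least be written down as checkable declarations. A toy Python sketch (all names invented); note that a “compiler” for this style would still have to invent the HOW by itself:

```python
# Each requirement becomes a predicate over the game object;
# the runtime can verify WHAT was asked, but not produce the HOW.
requirements = [
    ("must be red",          lambda ninja: ninja["color"] == "red"),
    ("must be able to jump", lambda ninja: "jump" in ninja["abilities"]),
    ("has a sword",          lambda ninja: "sword" in ninja["items"]),
]

ninja = {"color": "red", "abilities": {"jump", "run"}, "items": {"sword"}}
unmet = [name for name, ok in requirements if not ok(ninja)]
```

Declarations like these make good executable acceptance tests even today; the open question in the post is how to go from them to a running ninja.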

Master Chief, Super Mario, Niko Bellic, and N+ all jump, but one does wonder why the act of jumping itself (within a physically-constrained world with collision, friction, flotation, gravity, magnetism, wind, waves and particulate-spray) cannot be abstracted out as a common factor, leaving only the need to tweak the parameters that gave each their own characteristic behaviour in a runtime simulation, like:

Yet, great though this is, note that the (jump) function knows “how high” and is linked to the game’s control interface. Is this intrinsic to the act of jumping, common to all platform games? No.

Then this (jump) acts in concert with the (gravity) and (move) functions, each of which mixes similar collision code and characteristic parameters in hard-to-edit places. It isn’t as if the program has a set of adjustable sliders for gravity, movement speed and jump strength. The collision code ought to act as a constraint on a dynamical system of tangible, weighted entities. In a sense, “The World” should be a given: a stage onto which your actors may (attempt to) perform.
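The “adjustable sliders” idea can be sketched as a single jump routine whose feel lives entirely in its parameters. A toy ballistic integrator in Python – the numbers are made up, not taken from any actual game:

```python
def simulate_jump(jump_strength, gravity, dt=0.01):
    """Integrate a vertical jump with explicit Euler steps; the
    character's 'feel' lives entirely in the two slider parameters,
    not in per-game collision code."""
    y, vy, peak = 0.0, jump_strength, 0.0
    while True:
        y += vy * dt
        vy -= gravity * dt
        peak = max(peak, y)
        if y <= 0.0:          # the floor acts as the world's constraint
            return peak

# Two hypothetical characters: only the sliders differ.
floaty_peak = simulate_jump(jump_strength=10.0, gravity=25.0)
heavy_peak = simulate_jump(jump_strength=6.0, gravity=9.8)
```

One jump function, many characters: exactly the factoring the comment asks for, with the world (gravity, floor) supplied as a given.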

I liked your article. It reminded me of studying VDM at Uni. Although not a programming language (VDM is a formal method), it had the same concept of defining WHAT a function’s effect should be, without saying HOW it should achieve that. In VDM, this was called an implicit function definition. See for example wikipedia: http://en.wikipedia.org/wiki/Vienna_Development_Method#Functional_Modelling
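An implicit function definition in the VDM spirit states only the postcondition. As a rough executable analogue (plain Python, not actual VDM syntax), the WHAT of sorting is just two predicates, with no HOW anywhere:

```python
from collections import Counter

def is_sorted(xs):
    """Every element is <= its successor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def post_sort(inp, out):
    """Implicit 'definition' of sort: the output must be an ordered
    permutation of the input. Nothing here says how to compute it."""
    return is_sorted(out) and Counter(out) == Counter(inp)
```

Any candidate implementation, bubble sort or quicksort alike, satisfies the same postcondition – which is the appeal of specifying WHAT instead of HOW.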

I think programming languages are fine the way they are, and that you’d enjoy LINQ.

I had a similar idea a while back, with the goal of an AI that could “computer_do_what_i_want_NOW”, but I don’t have the time it would take to complete that. It’s very much like teaching an infant. You basically have to start with a database divided up like the human brain. Then you create a simple input program and start feeding it information. For visual objects, input would be “cat “, and after you input several hundred images of cats it would think that all images are cats, so you have to send it millions of all sorts of pictures with “image ” so that it eventually understands that images are pictures and some images are pictures of cats. I think it’s obvious that it would review the database with a probabilistic algorithm to figure that out. Painfully, you’d also have to teach it about contrast and colors; I don’t even want to think about dealing with video.

Eventually though, you’d say “Get me a beer” and it would think “I understand that that means I should locate a beer and retrieve it. Beer is frequently kept in refrigerators in residential settings, I should look there. Oh, Bud Light as well as Heineken? I have seen the requester drinking Heineken on 33 separate occasions, and Bud Light on 4 occasions, therefore I will choose Heineken.”
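That final inference step is essentially an argmax over observed frequencies. A small sketch – the counts are from the comment, everything else is invented:

```python
def pick_preferred(observations, available):
    """Choose the available option the requester has been observed
    with most often -- the toy decision rule behind 'get me a beer'."""
    counts = {b: observations.get(b, 0) for b in available}
    return max(counts, key=counts.get)

seen = {"Heineken": 33, "Bud Light": 4}
choice = pick_preferred(seen, available=["Bud Light", "Heineken"])
```

The hard part of the scenario is, of course, everything before this line: perception, object recognition, and knowing where beer lives.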

I’d rather create a human, it only takes 9 months to get a prototype up and running, it’s self learning rather than input-based, and takes about 20 years to get it productive.

Ha! I’ve already spent 20 years on researching the design for my programming language Zeitgeist. Probably, got about 5 years of implementation work ahead of me for a basic prototype and it isn’t remotely as advanced as what you propose.

Interesting blog – some very intelligent comments, even if some are not what you wanted to hear. I’m not a programmer, but I would like to offer my 2 cents from a more philosophical pov. It seems to me that you have two key objectives: A) to develop a programming language that is more intuitive and efficient; and B) to reduce the number of bugs routinely generated by conventional programming methods. You have assumed that A will result in B, but as others have noted, this ain’t necessarily so. Bugs are a fact of life in all industries, and even more so in complex industries like computer programming. Even if you were successful in reducing the number of bugs generated at the programming stage, you might simply end up transferring the problem to the runtime side of things. Also, there is a risk that while you might reduce the number of familiar bugs, you end up creating a whole new set of as-yet-undiscovered bugs.

However, if A is your principal goal, then go for it – you don’t need to invoke bugs as an excuse; just do it for its own sake. Mankind has advanced precisely because of this type of attitude in individuals. But if B is your true goal, then there may be better ways to achieve it using existing technology. For example, perhaps you could develop a program that helps programmers write less buggy code. It seems to me that the industry is heading this way naturally – hence all the tools, libraries and IDEs. This approach still fits with your philosophy of expressing what you want to do and leaving it up to the compiler to work out how to do it; only the players are different.

For example, suppose I want to do a calculation in Excel. I know in my mind what I want to do; I then set out my input data in some suitable form, specify my input variables, and set about selecting the various predefined functions to produce the output I want, in the format I want. If I get the wrong answer, I go back and check my work, including checking that I’ve selected the correct functions. Sorry if I’ve missed the point. Best of luck.

The reality of programming in 2012: you would NEVER write a sort from scratch. The author therefore fails out of the gate with a strawman. To illustrate:

list.Sort();

Voila. Done. In fact, the problems we tackle in computer science are getting more interesting, more complex, and less related to a particular implementation language. Strange that a “developer of 20 years” hasn’t noticed this.

Take, for instance, cloud computing – how to create an infrastructure that allows you to immediately and “transparently” scale computing resources on an as-needed basis. A better programming language isn’t going to help you solve this problem. Only the tough job of engineering, design, and even configuration and deployment will.

If this post survives the coming robot apocalypse, I have a feeling some will wander across this in 2022 and have a good laugh.

The issue isn’t that there isn’t a standard way of writing a sort; the main crisis of “software engineering” / programming is that after all these years we still don’t have a standard, “good enough” sort routine.

The reason I can walk into a hardware store and build myself a deck or renovate my bathroom or build myself a certain object is because of standardization. The toolsets and the parts have been standardized.

I don’t have to have the metallurgical knowledge of how to create high-carbon steel, nor do I have to have the knowledge and the equipment to create a certain nut and bolt. All that has been taken care of for me. All I have to do is use standard nuts and bolts, and standard tools, and I can work and create a reasonable object with little “end user” knowledge.

In computing, however, we still expect “the programmer” to do the equivalent of digging the iron ore, smelting it properly, building a lathe, and then turning that piece of metal to make themselves a bolt – and the process goes on.

Let’s say a programmer was given the equivalent of putting together a simple fence. All you need is wood planks, some nails, and a hammer.

Programmers don’t use hammers. They start with the ridiculous equivalent of digging up iron ore, growing trees for the handles, and building smelters from scratch to melt the ore and make the hammer heads; then they melt the rest of the stuff down to make the nails; and THEN they actually start the process of nailing together pieces of wood – which they wouldn’t have standard pieces of either, having grown the right tree, cut it down, and made the 2×4s themselves…

Now imagine every handyman walking around with his own bloody smelter, ore extraction machines, lathes, and you get the idea.

What you’re arguing is that this process (ore digging, smelting, lathe turning) needs to be standardized. (Yes – but for a very small minority!)

What you NEED to argue is that NOT EVERY PROGRAMMER NEEDS TO DO IT! Just like not every engineer or handyman concerns him/herself these days with the type of bolts, nuts, handles, weights and measures.

And therein lies the problem. Programmers LOVE to jerk off to complexity, and that’s their main issue. Instead of solving problems, they love to WALLOW in the problem-solving process itself. They love to re-invent the same wheels over and over again! They love to imagine themselves as McCarthys or Chuck Moores or Dijkstras, and leave a shitstorm of unusable garbage toy languages in their wake.

The problem isn’t that we don’t have “the right language”; the problem is that we have way too fucking many, and it’s too easy to create more. That is a way of avoiding the real problem, which is essentially the standardization of basic computing parts/idioms and the interfaces between them. For most computing problems, Tcl – or even Bash with the basic set of UNIX commands all piped together – should be enough. But is it? …

“Out of the Tar Pit” (web.mac.com/ben_moseley/frp/paper-v1_01.pdf) addresses precisely the same issue and comes up with a similar proposal: a new paradigm they call “Functional Relational Programming” (not to be confused with functional reactive programming).

The bandicoot project (that I have discovered thanks to a comment above) seems to be an implementation of this programming paradigm, but with a better syntax than the one presented in the paper.

In regards to the Bandicoot project: a new release (v5) was pushed out just recently. The changes introduced were primarily around the syntax of the language, as preparation work for upcoming features such as attribute sets, function calls, and modules.

I’d be happy to hear your comments on the new language if you’re interested.

Programming is a necessarily specific monologue when it should be an exploratory dialogue. You may not even properly know what you want when you embark on a given project; the act of refining the computer’s “erroneous” / “misunderstood” model would force you to think about the ramifications of your naive base assumptions.
