Posted
by
Soulskill
on Friday August 09, 2013 @01:27PM
from the coding-at-88-mph dept.

theodp writes "Bret Victor's The Future of Programming (YouTube video; Vimeo version) should probably be required viewing this fall for all CS majors — and their professors. For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"

On the one hand, it is a good thing to prevent yourself from constrained thinking. I work with someone who thinks exclusively in design patterns; it leads to some solid code, in many cases, but it's also sometimes a detriment to his work (overcomplicated designs, patterns used for the sake of patterns).

Unlearning all we have figured out in computer science is silly, though. Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches. I think as long as you look at something and accurately determine that a known pattern/language/approach is a near-optimal way to solve it, that's a good application of that pattern/language/approach. If you're cramming a solution into a pattern, though, or only using a language because it's your hammer and everything looks like a nail to you, that's bad.

Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches.

Having just watched this video a few hours ago (it sat in my queue for a few days; providence seemingly was on my side to watch it right before this story popped), I can say he argues against this very idea.

Late in the talk he mentions how a generation of programmers learned very specific methods for programming and in turn taught those methods to the next generation. Because the teaching covered only known working methods and disregarded any outlying ideas, the next generation believes that all programming problems have been solved, and therefore they never challenge the status quo.

Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances. For example, it is difficult to break out of the single processing pipeline mold when one company is dominating the CPU market by pushing out faster and faster units that excel at exactly that kind of processing.

While TFS hits on the point at hand (don't rest on your laurels), it is worth noting that the talk is trying to emphasize open-mindedness towards approaches to programming. While that kind of philosophical take is certainly a bit broad (most employers would rather you produce work than redesign every input system in the office), it is important that innovation still be emphasized. I would direct folks to look at the Etsy "Code as Craft" blog as an example of folks who are taking varying approaches to solving problems by being creative and innovating instead of simply applying all the known "best practices" on the market.

I suppose that final comment better elaborates this talk in my mind: Don't rely on "best practices" as if they are the best solution to all programming problems.

Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances. For example, it is difficult to break out of the single processing pipeline mold when one company is dominating the CPU market by pushing out faster and faster units that excel at exactly that kind of processing.

I can attest to this. The phrase "Everything old is new again." (Or "All of this has happened before, and all of this will happen again." for you BSG fans) is uttered so frequently in our office that we might as well emblazon it on the door. It's almost eerie how well some of the ideas from the mainframe era fit into the cloud computing ecosystem.

Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances.

So what's the point? They want a cookie? They want people not to use these concepts even now that they are viable because…

As with so many things, it is a matter of balance. We now have what, 60 years or so of computer science under our collective belts, and there are a lot of good lessons learned in that time... but on the down side, most people only know (or choose to see) a subset of that knowledge and over-apply some particular way of doing things. Then they get promoted, and whatever subculture within CS they like becomes the dogma for where they work.

Designs are only complicated when they are unique. If I write my own LinkedHashMap to store 2 values, it is overcomplicated. If I just invoke a standard Java LinkedHashMap to store 2 values, then it's the same design, but since everyone knows what a Java LinkedHashMap does, it is simple. Also, it can be swapped out for a simple array with relative ease if the code is designed in a maintainable way.
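The same point can be sketched in Ruby rather than Java (illustrative class name, not the poster's code): Ruby's built-in Hash preserves insertion order, much like Java's LinkedHashMap, so using it reads as "simple" to any Ruby programmer, while hiding it behind a tiny accessor keeps the storage swappable for an array later.

```ruby
# A familiar standard structure behind a small interface: simple to read,
# and the Hash could be swapped for an Array without touching callers.
class Settings
  def initialize
    @store = {}          # insertion-ordered, like Java's LinkedHashMap
  end

  def []=(key, value)
    @store[key] = value
  end

  def [](key)
    @store[key]
  end
end

s = Settings.new
s[:mode] = "fast"
s[:mode]  # => "fast"
```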

Even if you are using design patterns, you should be leveraging not just the knowledge that other people…

The future of programming, from the seventies, it's all hippie talk...

"We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"

Next thing we can throw our chairs out and sit on the carpet with long hair, smoke weed and drink beer....

The future of programming, from the seventies, it's all hippie talk...

What you don't understand is, in ~1980 with the minicomputer, Computer Engineering got set back decades. Programmers were programming with toggle switches, then stepped up to assembly, then started programming with higher-level languages (like C). By the 90s objects started being used, which brought the programming world back to 1967 (Simula). Now mainstream languages are starting to get first-class functions. What a concept; where has that been heard before?

Pretty near every programming idea that you use daily was invented by the 80s. And there are plenty of good ideas that were invented back then that still don't get used much.

My two favorite underused (old) programming ideas:

Design by contract.
Literate programming.

If those two concepts caught on, the programming world would be 10 times better.
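Neither idea needs language support to try. As a minimal, hand-rolled sketch of design by contract in Ruby (no DbC library assumed; `withdraw` is a made-up example), the contract becomes executable pre- and postcondition checks at the method boundary instead of comments:

```ruby
# Hand-rolled design-by-contract sketch: the method's obligations are
# checked at runtime, so a violated contract fails loudly and early.
def withdraw(balance, amount)
  # Preconditions: what the caller must guarantee.
  raise ArgumentError, "amount must be positive" unless amount > 0
  raise ArgumentError, "insufficient funds" if amount > balance

  new_balance = balance - amount

  # Postcondition: what the method guarantees in return.
  raise "postcondition violated" unless new_balance >= 0 && new_balance == balance - amount
  new_balance
end

withdraw(100, 30)  # => 70
```

Languages like Eiffel build this in; elsewhere the discipline is the point, not the syntax.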

I did an entire thesis with Tangle and Weave, and I'm glad that I did, but I'm not convinced that a narrative exposition is any better than the more random-access style that a hierarchical directory layout, some decent (embedded and out-of-line) documentation, and a viewer IDE provide.

I consider the important elements of literate programming to be: the idea that you are writing for a human, rather than for a computer; and making the structure of the program clear to other humans, rather than doing what's best for the compiler. If you do these, then I would say that you are doing literate programming.

If you have any other ideas on the topic I'd be interested in hearing them.

Naturally I *have* to ultimately present the program text in a form that the computer will be happy with, but I am very hot on appropriate human-centric documentation in-line and off-line, and phrases that make me spit include:

"It's self documenting."

"Oh it's hard to keep the comments in sync with the code."

Farg me!

I'm suffering on a new project from a slight lack of consideration as to what the coder following in your footsteps would need in order to understand what was intended vs. what actually happened.

In college in the 1970s, I had to read the Multics documents and von Neumann's publications. We're still reinventing things that some very clever people spent a lot of time thinking about - and solving - in the 1960s. It's great that we have the computer power and memory and graphics to just throw resources at things and make them work, but imagine how much we could make those resources achieve if we used them with the attitude those people had towards their *limited* resources. And we have exactly the same sets of bottlenecks and tradeoffs; we just move the balance around as the hardware changes. Old ideas often aren't *wrong*, they're just no longer appropriate - until the balance of tradeoffs comes around again, at which point those same ideas are right again, or at least useful as the basis for new improved ideas.

imagine how much we could make those resources achieve if we used them with the attitude those people had towards their *limited* resources.

We would gain nothing; hell, we would still have god-damn teletype machines if everyone was worried about wasting nanoseconds of compute time. We would get some multiple of compute increase, but we would lose out on the exponential increases in human productivity that come from dealing with all of those abstractions automatically.

But I think we agree in that we need to focus on fixing the problems that need fixing. It's more important to figure out what you need done and then figure out whether you will have…

Sorry, I think you missed my intent. Lots of people have pointed out how much of their hot new computer power winds up being wasted on fancy-frosted-translucent-glass GUI effects which don't actually achieve anything. Not only is that a waste of my CPU time, it's a waste of so much computing resource around the world - and equally a waste of the time and effort of presumably clever and artistic developers.

Speaking of setting programming back, the current push in languages to get rid of declaring types of variables and parameters has set us back a few decades. In languages like Ruby, you can't say ANYTHING about your code without executing it. You have no idea what type of objects methods receive or return, whether formal and actual parameters match, or whether types are compatible in expressions, etc. I actually like a lot of aspects of Ruby, but it seems like it's thrown away about 50 years of improvement in p…

Yes and no. It's true that objects have classes, but that's entirely malleable, and there's no way to look at a particular piece of Ruby code and have any idea what class an object has, unless you actually see it being created (yes, yes, even then you don't know because classes can be modified on the fly, but let's ignore that for the moment). Basically, I can't look at a method and do anything except guess what parameters it takes. Personally, I think that's a bad thing.

That I agree with. Ruby is strongly typed, with very aggressive and implicit type conversions. Anyway, statically typed languages are dominant (C++, C#, Java), so it makes sense that the alternatives are dynamic. Moreover, scripting has always been dynamic.

I can understand the virtues of strong typing. IMHO dynamic typing works best in programs under 20 lines of code, works OK from 20 to 1,000 lines, and starts to fall apart after 1,000. Most Ruby programs are under 1,000 lines.

Doing prototyping, or UI-intensive work? Most UI frameworks suck, but the ones designed for static languages generally suck more, because some stuff just can't be done (well), so they have to rely on data-binding expressions, strings, etc., that are outside the control of the language. At least dynamic languages deal with those like they deal with everything else, and have them as first-class concepts.

Case in point: an arbitrary JSON string, in a dynamic language, can be converted to an ordinary object without needing to know what it will look like ahead of time. In a static language, you either need a predesigned contract, or you need a mess of a data structure full of strings that won't be statically checked, so you're back at square one. These types of use cases are horribly common in UI.
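For instance, a quick sketch in Ruby using the standard `json` library (the payload is made up for illustration): nothing about the document's shape is declared anywhere in the program, yet the fields are immediately usable.

```ruby
require "json"

# The shape of this payload is not declared anywhere in the program.
payload = '{"user": {"name": "Ada", "roles": ["admin", "dev"]}}'

data = JSON.parse(payload)   # yields plain nested Hashes and Arrays
data["user"]["name"]         # => "Ada"
data["user"]["roles"].size   # => 2
```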

I dabble in image processing algorithms. A lot of the things I write for my own use end up being a C program to do the serious number crunching, with a Perl script for the interface. Perl does all the pretty work, then just calls upon the compiled executable when serious performance is required.

You still need to know what your JSON string will look like at some point in order to use it. It's always (for at least as long as I've been programming, a bit over 2 decades) been a problem that programmers don't fully know or understand their requirements, so they try to keep their code as generic as possible. The problem with that is that at some point you're going to have to do actual work with that code, so you end up going through a labyrinth of libraries, none of which wants to take the responsibility to…

Maybe you do need to know, maybe you don't. Maybe the object is being introspected, maybe it's used to feed a template engine, maybe it's just converted from one format to another. All these things can be done in any language. Some languages make it easier than others.

You seem to be saying that using strings (and thus bypassing the static type checking of a static language) is worse than using a language with no static type checking in the first place. I don't see that at all.

Either you're fine with not knowing what the object looks like ahead of time (in which case you can't directly reference member names anyway, and strings are far better than reflection), or you have a specific subset of the object that you understand and that needs to be how you expect it to be…

Dynamic languages have better support for introspection, handling what happens when a property is missing, dealing with objects that aren't QUITE the same but "close enough", and deep nested dynamic data structures.

If you want to represent the same things in a static language, you need things like arbitrarily nested dictionaries or very complex data structures with metadata. That's why, to handle a JSON string in Java or .NET, you'll need a JSON object framework. The parsing is the trivial part.
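The "close enough" style mentioned above can be sketched in plain Ruby (hypothetical classes, purely for illustration): the caller probes for the one capability it needs with `respond_to?` instead of demanding a declared interface, which also covers the missing-property case gracefully.

```ruby
# Duck typing: any object with a #speak method is "close enough";
# respond_to? handles the case where the property is missing.
class Dog
  def speak
    "woof"
  end
end

class Robot
  def speak
    "beep"
  end
end

def greet(thing)
  thing.respond_to?(:speak) ? thing.speak : "(silent)"
end

greet(Dog.new)    # => "woof"
greet(Robot.new)  # => "beep"
greet(42)         # => "(silent)"
```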

Well, it's a problem if I'm trying to actually read the code and understand what it does and what arguments it takes. Not to mention the wasted time when I pass the wrong argument type to a method and the problem doesn't show up until runtime. And god forbid that it's Ruby code using "method_missing"; then I'm really screwed in so many ways it's hard to imagine. For example, in a Rails app you want to see if you are in the development environment:
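The poster's Rails example appears to have been cut off. In Rails the usual check is `Rails.env.development?`, which is itself answered dynamically; the following plain-Ruby sketch (a hypothetical `Env` class, not Rails source) shows why such method_missing-backed calls are hard to read: the method being called never appears anywhere in the source, so grepping for it finds nothing.

```ruby
# A method_missing-backed query object: development?, production?, etc.
# are never defined, so a reader cannot find them by searching the code.
class Env
  def initialize(name)
    @name = name
  end

  def method_missing(sym, *args)
    query = sym.to_s
    return @name == query.chomp("?") if query.end_with?("?")
    super
  end

  def respond_to_missing?(sym, include_private = false)
    sym.to_s.end_with?("?") || super
  end
end

env = Env.new("development")
env.development?  # => true
env.production?   # => false
```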

Well, it's a problem if I'm trying to actually read the code and understand what it does and what arguments it takes.

Don't write code so complicated that you can't easily tell what type is needed from inspection. Seriously, these are solved problems. You can write code with strong type-safety, or you can write code with runtime binding. Both are workable.

Design by contract is my favorite way of handling interfaces. It really is a good idea.

Literate programming, though, I'm not sure I see much point to. There are cool examples like Mathematica notebooks, but in general even very good implementations like Perl's POD and Haskell's literate mode just don't seem to offer all that much over ordinary source code. API documentation just doesn't need to be that closely tied to the underlying source, and the source documentation just doesn't need to be literate.

Literate programming has two benefits: your program is written more for another human to read than for a computer, and the structure of your program is made obvious to a human. If you fulfill these, you are doing literate programming.

As for your 1990s and objects, I also disagree. Objects were used for implicit parallelism and complex flow of control. No one had flows of control like a typical GUI to deal with in 1967. Event programming was a hard problem solved well.

I don't understand what the 1990s, objects, and implicit parallelism have to do with each other; you'll have to explain it more clearly. But the complex flow required by an OS managing multiple resources is significantly more difficult than…

What GUI system are you using that has thousands and thousands of threads passing messages? I don't think you've really thought this through… all modern systems use only one thread. At a minimum, the performance hit is often serious for thousands of threads. What you are describing seems to be the actor model, which was developed by the mid-70s.

I understand the ideas behind it. But I'm not sure why understanding the structure of the program matters much. If it does, throw a few paragraphs in about the structure or include a doc to the side.

Because structure is the key to understanding, in programming and literature.

Thousands of potential threads, and all of them in OSX, Windows, KDE, Gnome. They all utilize tremendous numbers of objects able to operate with implicit parallelism. Generally, in terms of execution threads, some sort of thread pool is used to match actual CPUs to potential threads. As for modern systems using only one thread: look at any of the design-of-systems books; that just ain't true. As for this being the actor model of concurrency: yes, it is. The event-driven model's concurrency system was…

As for literate programming, you aren't answering the question of what the point is of the human understanding that limited amount about the program. You are just sort of asserting that limited understanding by a human is useful.

The opposite. Literate programming makes it easier to understand programs.

As for modern systems using only one thread: look at any of the design-of-systems books; that just ain't true.

Which GUI system doesn't? Swing? OpenStep? .NET? Android? They all use one thread.

"'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,'... 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' "

Unless of course you know you know what you are doing, because you also know to never stop looking for new ways of doing things.

But if you know what you are doing, you still have a majority with no clue around you, in the worst case micro-managing you and destroying your productivity. I think the major difference between today and the early years of computing is that most people back then were smart, dedicated, and wanted real understanding. Nowadays programmers are >90% morons or at best semi-competent.

Well, I did include the semi-competent: those that eventually do get there, with horrible code that is slow, unreliable, a resource hog, and a maintenance nightmare. Plain incompetent may indeed be just 80%. Or some current negative experiences may be coloring my view.

Unless of course you know you know what you are doing, because you also know to never stop looking for new ways of doing things.

If you have to look for a new way to do something, then you don't know the answer, so how can you know you know what you are doing when you know you don't know the answer? When you are 100% confident in the wrong answer, you know you know what you are doing (and are wrong). If *ever* you know you know what you are doing, you don't.

The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor.

Yeah. I bet Vincent Van Gogh thought he was total shit at painting, didn't know anything about paint mixing, brushes, or any of that. Look, I know what you're trying to say, Victor, but what you actually said made my brain hurt.

However, exploring new things and remembering old things are two different things. You can be good at what you do and yet still have a spark of curiosity and want to expand what you know. These aren't mutually exclusive. To suggest people murder their own egos in order to call themselves creative is really, really, fucking stupid.

You can, in fact, take pride in what you do and yet be humble enough to want to learn more. It happens all the time... at least until you're promoted to management.

Um... yes actually. Van Gogh actually only sold one painting in his entire life, and he considered himself somewhat of a failure as a painter. He did not become famous until after his death.

He considered himself a failure commercially... Because he was. He never stopped painting. That's fairly compelling evidence he knew he didn't suck... and that it was the world that was wrong, not him.

Just because you're bad at business doesn't mean you're bad at what you do. I know, I know... it's hard for people these days to understand that, but 'tis true.

Um... yes actually. Van Gogh actually only sold one painting in his entire life, and he considered himself somewhat of a failure as a painter. He did not become famous until after his death.

He considered himself a failure commercially... Because he was. He never stopped painting. That's fairly compelling evidence he knew he didn't suck... and that it was the world that was wrong, not him.

Just because you're bad at business doesn't mean you're bad at what you do. I know, I know... it's hard for people these days to understand that, but 'tis true.

Van Gogh was notoriously depressed. His entire career as an artist was little more than five years, ending with his suicide in 1890. The nature of his work changed dramatically at a rapid pace; pieces from a year before could almost be from another artist entirely. This all suggests that he was never truly satisfied with his works. It has nothing to do with the lack of financial success, but rather the lack of acceptance from his peers, who often derided him. He continued painting, not because he thought he…

When I was doing design work, my mentor taught me the rules and told me to stay within them. After you've mastered the rules, learning the successes and mistakes of everybody before, then you can start breaking them as you explore new possibilities.

I am afraid this will convince people who know nothing yet to just go off in whatever direction they please, wasting massive amounts of time on things others already learned not to do, subjecting others to more horrible code.

That's guaranteed to happen. The only question is the extent. There's bound to be a few who say, "Hey! I don't have to know what I'm doing. That one guy said so!" In reality, we know different. Progress is made by learning from the mistakes of others. :)

Massively out of context. The quote is about how people have been taught to assume procedural programming is the only way of programming. The point is that creative people are being limited by these mistaken assumptions.

We creative people are the good ones...those others...gosh, they're capable of violence.

I don't see how the two are mutually exclusive. Oh, the creative ways I murder people in my fantasies!

After all, what kind of creative would call himself fearful of people who can't create so much as a scrapbook unless they're following an example from YouTube posted by... a creative.

Depends. Are they armed with just a scrapbook and a laptop, or something more substantial?

The fact is, there are way too many non-creatives and they are screwing up the planet. Just imagine how much better the world would be if every member of the Tea Party suddenly disappeared overnight. Oh, we can dream....

A true creative doesn't want people dropping dead or disappearing... they want them doing something useful and productive so they don't have time to "screw up the planet."

No. Time for real theory coupled with real experience. Apprenticeships only work when the profession is ruled by real craftsmen. The programmers today rarely qualify, hence real apprenticeships would only make things worse.

Won't change much. Even the "real theory" is half-assed except in a select few colleges, usually (but not always) the high-end ones. Then the professors that are good at the theory are usually impossibly terrible at the engineering aspect, but still pass on their words as laws.

I had the luck to have a really good theoretician teach the introductory CS year at university, one who invested a lot of effort in finding out how to do these things well in practice. I only later found out that the years before and after (they did rotate the introductory year) got a far, far worse education, either from bad practitioners or from theoreticians with exactly the problem you describe.

The bottom line, however, is that to be really good you have to understand both theory and practice, and you have…

One reason I had so many patents relatively early in my career is I wound up doing hardware design in a much different area than I had planned on in school. I did not know the normal way to do things, so I figured out ways to do things. Sometimes I wound up doing stuff normally but it took longer; this was OK, as a bit of a learning curve was expected (they hired me knowing I didn't know the area yet). Sometimes I did things a bit less efficiently than ideal, though this was usually fixed in design reviews. But sometimes I came up with something novel and, after checking with more experienced folks to make sure it was novel, patented it.

A decade later, I know a way to do pretty much everything I need to do, and get far fewer patents. But I finish my designs a lot faster. :)

You need people who don't know that something isn't possible to advance the state of the art, but you also need people who know the lessons of the past to get things done quickly.

'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'

I agree having an open mind is a good thing. There is, of course, taking things too far. Just throw away everything we've spent the last 40-50 years developing? Is there some magical aura we should tap into and rock back and forth absorbing instead? Should we…

Think of it like this: if you believe you already know what a computer is, then you are not likely to look for alternatives. If you're looking for alternatives, then you might come up with something interesting like this [hackaday.com]. If you just accept that superscalar pipelines, the way Intel does them, are the best way, then you're not going to find a different, potentially better way of doing it.

Far from it. I seem to recall a researcher I read about over a decade ago who was designing a chip that worked more like a human neuron. Superscalar pipelines are just how Intel does instructions, and even they're trying to get away from them, since cache misses become more expensive as pipeline lengths increase. Having a talk on not being constrained by accepted dogma and outright throwing away all known concepts are completely different things.

It's possible to misinterpret what I'm saying here. When I talk about not knowing what you're doing, I'm arguing against "expertise", a feeling of mastery that traps you in a particular way of thinking.

But I want to be clear -- I am not advocating ignorance. Instead, I'm suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison.

Knowledge is essential. Past ideas are essential. Knowledge and ideas that have coalesced into theory is one of the most beautiful creations of the human race. Without Maxwell's equations, you can spend a lifetime fiddling with radio waves and never invent radar. Without dynamic programming, you can code for days and not even build a sudoku solver.

It's good to learn how to do something. It's better to learn many ways of doing something. But it's best to learn all these ways as suggestions or hints. Not truth.

Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you "know what you're doing".

We do know a number of things about programming. One is that it is hard and that it requires real understanding of what you intend to do. Another is that languages help only to a very, very limited degree. There is a reason much programming work is still done in C: once people know what they are doing, language becomes secondary, and often it is preferable if it does not help you much but does not stand in your way either. We also know that all the 4G and 5G hype was BS and that no language will ever make th…

The big disappointment when I read about the first "design patterns" was that, much as I liked the idea of encouraging generic and somewhat uniform descriptions of solutions, I found many of them were solutions to problems created by Java and other restricted thinking.

See Fast Food Nation; the parts about industrialization and how it took expertise and eliminated it in favor of simple mechanistic low wage zero talent jobs.

Indeed. Some things were nice to see a bit more formalized, but the patterns are typically far too lean to fit a real problem. I also think that many things were formalized in the pattern community early on just to get some mass. Most of these would have been better left informal, as good programmers had no issues with them anyway and bad programmers did not get what they truly meant.

The analogy with Java is a good one, as my experience with Java is that once you have a certain skill, it stands constantly in your way…

I went to work at a C project a few years back. It was a monolithic app with a bunch of custom file parsers built in for something vaguely resembling a home-grown ETL system. It was transitioning from a very small (1-2 developers) to a small (5-6 developers) project. A lot of the code was... bad. I spent a week bounding all their string copies, which eliminated a number of their crashes. Then I ran some tests with libefence, which isolated most of the remaining SIGSEGVs pretty quickly. We still had the occasional…

Well said. One permanent problem with Java is that it is inherently slow and takes extraordinary skill to get to run fast. Most Java folks do not have that skill. I have seen really large, critical, and expensive Java projects fail because things were just too damn slow. The same thing could never have been slow in C, as it would have meant intentionally wasting inordinate amounts of CPU and memory. True, the team that wrote that Java would never have finished the code in C, but a much smaller team of people…

It's not even that the language is inherently slow. Its programmers just don't put much thought into storage or optimization. Just shove everything into a map and call it good. Or install a framework that shoves everything into a map for them. I've run across several cases where the programmer seemed to be trying to implement the least-optimal solution to his problem, and the company will just throw gigabytes of RAM at the VM without question because nobody seems to know any better. In C, you HAD to roll your own data structures, so you had to think about them.

Try writing something like GIMP in Java; it will fail horribly. GIMP is OO C (yes, not C++), something that requires actual skill. This does not make people who have that skill slower; it makes them faster. Yes, I admit that I nowadays mix Python and C modules, because doing glue code with low performance needs is easier in a modern scripting language, while the C modules give me the control and performance needed for many things.

But Java? That thing is an abomination. Neither fast nor simple. Neither well structured.

Most people I worked with in the 80s (and learned from in the 70s) had a good feel for concepts like "stable systems", "structural integrity", "load bearing weight", and other physical engineering concepts. Many came from engineering degrees (most of them weren't CS grads like me), and a lot from playing with Legos, Erector sets, chemistry sets, and building treehouses (and real houses). These concepts are just as important in software systems, but I can only think of a handful of people I've worked with over the years since who had that kind of feel.

I find it interesting that people in software think they are the first ones to ever design complicated things. It seems there are so many arguments over design styles and paths. All they need to do is look at what other engineering fields have done for the past 100+ years. It's pretty simple. When you are working on a small project where the cost of failure and rework is low, you can do it however you want. Try out new styles and push technology and techniques forward. When it comes to critical infrastructure, and to projects where people will die or lose massive amounts of money, you have to stick with what works. This is where you need all of the management overhead of requirements, schedules, budgets, testing, verification, operation criteria, and the dozens of other products besides the "design".

I'm a mechanical and a software engineer. When I'm working on small projects with direct contact with the customers it's easy and very minimal documentation is needed. But as more people are involved the documentation required increases exponentially.

We think in terms of programming languages. The language abstracts away the complexity of manually generating the instructions. Then we build APIs to abstract away even more. So we can program a ball bouncing across a screen in just a few lines rather than generating tens of thousands of instructions manually, because of abstraction built upon abstraction.

A major problem we have in computing is the Mess at the Bottom. Some of the basic components of computing aren't very good, but are too deeply embedded to change.

C/C++. This is the big one. There are three basic issues in memory safety: "how big is it", "who can delete it", and "who has it locked". C helps with none of these. C++ tries to paper over the problem with templates, but the mold always comes through the wallpaper, in the form of raw pointers. This is why buffer overflows, and the security holes that come with them, are still a problem.

The Pascal/Modula/Ada family of languages tried to address this. All the original Macintosh applications were in Pascal. Pascal was difficult to use as a systems programming language, and Modula didn't get it right until Modula 3, by which time it was too late.

UNIX and Linux. UNIX was designed for little machines. MULTICS was the big-machine OS, with hardware-supported security that actually worked. But it couldn't be crammed into a PDP-11. Worse, UNIX did not originally have much in the way of interprocess communication (pipes were originally files, not in-memory objects). Anything which needed multiple intercommunicating processes worked badly. (Sendmail is a legacy of that era.) The UNIX crowd didn't get locking right, and the Berkeley crowd was worse. (Did you know that lock files are not atomic on an NFS file system?) Threads came later, as an afterthought. Signals never worked very well. As a result, putting together a system of multiple programs still sucks.

DMA devices. Mainframes had "channels". The end at the CPU talked to memory in a standard way, and devices at the other end talked to the channel. In the IBM world, channels worked with hardware memory protection, so devices couldn't blither all over memory. In the minicomputer and microcomputer world, there were "buses", with memory and devices on the same bus. Devices could write anywhere in memory. Devices and their drivers had to be trusted. So device drivers were usually put in the operating system kernel, where they could break the whole OS, blither all over memory, and open security holes. Most OS crashes stem from this problem. Amusingly, it's been a long time since memory and devices were on the same bus on anything bigger than an ARM CPU. But we still have a hardware architecture that allows devices to write anywhere in memory. This is a legacy from the PDP-11 and the original IBM PC.

Academic microkernel failure. Microkernels appeared to be the right approach for security. But the big microkernel project of the 1980s, Mach, at CMU, started with BSD. Their approach was too slow, took too much code, and tried to get cute about avoiding copying by messing with the MMU. This gave microkernels a bad reputation. So now we have kernels with 15,000,000 lines of code. That's never going to stabilize. QNX gets this right, with a modest microkernel that does only message passing, CPU dispatching, and memory management. There's a modest performance penalty for extra copying. You usually get that back because the system overall is simpler. Linux still doesn't have a first-class interprocess communication system. (Attempts include System V IPC, CORBA, and D-Bus. Plus various JSON hacks.)

Too much trusted software. Application programs often run with all the privileges of the user running them, and more if they can get it. Most applications need far fewer privileges than they have. (But then they wouldn't be able to phone home to get new ads.) This results in a huge attackable surface. The phone people are trying to deal with this, but it's an uphill battle against "apps" which want too much power.

Lack of liability. Software has become a huge industry without taking on the liability obligations of one. If software companies were held to the standards of auto companies, software would work a lot better. There are a few areas where software companies do take on liability. Avionics, of course. But those are the exceptions.

The whole x86/64 architecture is a mess once you get deep enough. It suffers severely from a commitment to backwards compatibility: your shiny new i7 is still code-compatible with an 80386; you could install DOS on it quite happily. But the only way to fix this by now is a complete start-over redesign that reflects modern hardware abilities rather than trying to pretend you are still in the era of the Z80. That just isn't commercially viable: it doesn't matter how super-awesome-fast your new computer is when no one can run their software on it. Only a few companies have the engineering ability to pull it off, and they aren't going to invest tens of millions of dollars in something doomed to fail. The history of computing is littered with products that were technologically superior but commercially non-viable - just look at how we ended up with Windows 3.11 taking over the world when OS/2 was being promoted as the alternative.

The best bet might be if China decides they need to be fully independent from the 'Capitalist West' and designs their own architecture. But more likely they'll just shamelessly rip off one of ARM's or AMD's designs (easy enough to steal the masks for those - half their chips are made in China anyway) and slap a new logo on it.

Some of the problems have already been pointed out:

- The device access model is still stuck in the ISA age, when peripherals were just wired up to the address and data buses. That isn't how things are done now - even the PCI-e 'bus' is actually a series of high-speed serial links. This means that all device drivers have to run in kernel memory space. Stability and security problems result.

- The 16-bit 'real' addressing mode. Another relic of the past, but still can't be abandoned without breaking the boot process. Lose that, and you could lose some complexity in silicon.

- Even the 32-bit mode could arguably go. The only upside it has over 64-bit is slightly lower memory usage when there are a lot of pointers being used, and it's a real headache at the OS level maintaining two variations on every library to support both 32-bit and 64-bit programs. Lose 32-bit, and you lose a load more complexity. Also means you could lose PAE as a bonus.

- There are opcodes for handling BCD. These are just completely pointless.

Regarding C/C++: those languages are optimized to be close to the hardware; that's their forte - they are semi-assembly languages. If you optimize the language for software-engineering improvements (code design & reliability), then you likely de-optimize it for the hardware.

This is a common, and dangerous, misconception. It's quite possible to have efficient languages that are close to the hardware without having buffer overflows all over the place. Pascal did it. The various Modulas did it. Ada does it. Go is getting close. Subscript checking is really cheap, and often free, if the compiler understands how to optimize it. Hoisting subscript checks out of loops is important. The current Go compiler gets the easy cases (FOR loops), which is enough to keep the overhead down for typical code.

It's an entertaining presentation, but I don't think it's anything nearly as insightful as the summary made it out to be.

The one thing I take away from his presentation is that old ideas are often more valuable in modern times now that we have the compute power to implement those ideas.

As a for-example, back in my university days (early-mid 1980s), there were some fascinating concepts explored for computer vision and recognition of objects against a static background. Back then it would take over 8 hours on a VAX-11/780 to identify a human by extrapolating a stick figure and painting a cross-hair on the torso. Yet nowadays we have those same concepts implemented in automatic recognition and targeting systems that do the analysis in real time, and with additional capabilities such as friend/foe identification.

No one who read about Alan Kay's work can fail to recognize where the design of the modern tablet computer really came from, despite the bleatings of patent holders that they "invented" anything of note in modern times.

So if there is one thing that I'd say students of programming should learn from this talk, it is this:

Learn from the history of computing

Whatever you think of as a novel or "new" idea has probably been conceptualized in the past, researched, and shelved because it was too expensive/complex to compute back then. Rather than spending your days coding your "new" idea and learning how not to do it through trial and error, spend a few of those days reading old research papers and theories relevant to the topic. Don't assume you're a creative genius; rather assume that some creative genius in the annals of computing history had similar ideas, but could never take them beyond the proof-of-concept phase due to limitations of the era.

In short: learn how to conceptualize and abstract your ideas instead of learning how to code them. "Teach" the machine to do the heavy lifting for you.

It's not because we didn't or don't know. It's because software was free back then. Hardware was so bizarrely expensive and rare that no one gave a damn about giving away software and software ideas for free. It's only when software was commercialized that innovation in the field started to slow rapidly. The interweb is where it was 18 years ago simply because, ever since, people have been busy round the clock trying to monetize it rather than ditching bad things and trying new stuff.

Then again, x86 winning as an architecture and Unix winning as a software model probably have a little to do with it as well. We're basically stuck with early-'80s technology.

The simple truth is: CPU and system development needs its iPhone/iPad moment - where a bold move is made to ditch decades-old concepts to make way for entirely new ones!

Look what has happened since Steve Jobs and his crew redid commodity computing with their touch-toys. Imagine that happening with system architecture - that would be awesome. The world would be a totally different place five years from now.

Case in point: we're still using SQL (Apollo-era software technology for secretaries to manually access data - SQL is a fricking END-USER INTERFACE from the '70s!!!) as a manually built and rebuilt access layer to persistence from the app level. That's even more braindead than keeping binary in favour of ASM, as given as an example in the OP's video talk.

Even ORM to hide SQL is nothing but a silly crutch from 1985. Java is a crutch to bridge across platforms, because since the mid-'70s people in the industry have been fighting turf wars over their patented platforms and have basically halted innovation (MS, anyone?). The skeuomorphic desktop metaphor is a joke - and always has been. Stacked windowing UIs are a joke and always have been. Our keyboard layout is a stopgap from the steam age, from before the zipper was invented (!!). E-mail - one of the most bizarre things still in widespread use - is from a time when computers weren't even connected yet, with a different protocol for every little shit it does, and bizarre, pointless, braindead and arcane concepts like the separation of MUA and editor, and separate protocols for sending and receiving - a human async communication system and protocol so bad it's outclassed by a shoddy commercial social networking site running on web scripts and browser-driven widgets - I mean, WTF??? etc... I could go on and on...

The only thing that isn't a total heap of shit is *nix as a system, and that's only because everything worthwhile being called Unix today is based on FOSS, where we can still tinker and move forward with baby steps like fast no-bullshit non-tiling window managers, complete OpenGL-accelerated avant-garde UIs (I'm thinking Blender here), workable userland/OS separation, and a matured way to handle text-driven UI, interaction and computer control (zsh & modern bash).

That said, I do believe that if we came up with a new, entirely FOSS hardware architecture in 2013 - a complete redo with a focus on massively parallel concurrency - and built a logic-and-constraint-driven, touch-based direct-manipulation interface system - think Squeak.org completely redone today for a modern retina touch display *without* the crappy desktop - one that does away with the separation of filesystem and persistence and other ancient dead ends, we'd be able to top and drop *nix in no time.

We wouldn't even miss it....

But building the bazillionth web framework and the next half-assed X.org window manager and/or accompanying Windows clone, or redoing the same audio-player app / file manager / UI desktop toolkit every odd year from bottom to top again, appears to be more fun, I guess.

You must be new here. That "pretentious philosophical BS" is like the spark in a fuel-and-oxygen filled chamber. It ignites into a heap of comments, and those comments are the actual story. Who needs an article when you can browse +5 funny / informative / interesting and -1 trolls?

As for the linked articles, that's just a cleverly disguised DDoS botnet setup. Some figured it out, but few seem to care that the /. botnet is still operating. Heck, I'm even contributing people-time to it (on top of CPU cycles).

The context here surrounds abstraction, and not allowing users (programmers) to play with pointers directly (as in C and, later, C++), which is a setback for optimization because of the assumptions and connections you can make about the underlying machine.

If you want to learn more about the ideas of the 1960s and 1970s, I highly recommend looking up talks by Alan C. Kay ("machine OOP" which is Smalltalk in a nutshell), Carl Hewitt (actor model), Dan Ingalls, Frances E. Allen (programming language abstractions and optimization), Barbara Liskov ("data OOP" which is C++ in a nutshell), and don't stop there.

Sure, if you're doing high-level programming (and plenty were in the '70s, just as today), C is a bad tool. If you're writing an I/O driver and the hardware works through updates to specific memory addresses, well, you need to be aware of pointers.

I see the biggest failing of C itself as this notion of "int", where you don't know how many bits that is. If you're writing the kind of code that belongs in C, you have to know that, and endless 16-32 and 32-64 bit porting nightmares were the result. It wasn't until C99 that int32_t became standard.

If you're writing the kind of code that belongs in C, you have to know that, and endless 16-32 and 32-64 bit porting nightmares were the result. It wasn't until C99 that int32_t became standard.

I've always suspected (and I could certainly be wrong) that the main cause of 32/64-bit pain is not actually that the programmer can't (or rather, shouldn't) depend on the limits of the fixed-width primitive types, but instead that programmers stupidly assume things like "an int will always be wide enough to hold the value of a pointer". C being over-lenient with implicit casts is largely to blame, of course.

The mistake was mistaking "you can write a program that will compile for all platforms" for "your program will do what you expect on all platforms for which it compiles". The latter being rather more useful.

You're ignoring the reason C went the way it did: performance. 'int' can translate to whatever is fastest on the target machine.

In C99 this is called int_fast32_t: give me the fastest size that holds at least 32 bits. That's what was needed all along - well, with a shorter, less obnoxious name. If I'm counting higher than 2^16, I just don't care how fast that 16-bit int is; my program will fail mightily. But if you can do 64 bits faster than 32 bits, maybe that's OK.