Posted by Soulskill on Tuesday July 30, 2013 @09:59PM
from the never-get-involved-in-a-land-war-in-COBOL dept.

theodp writes "In the movie Groundhog Day, a weatherman finds himself living the same day over and over again. It's a tale to which software-designers-of-a-certain-age can relate. Like Philip Greenspun, who wrote in 1999, 'One of the most painful things in our culture is to watch other people repeat earlier mistakes. We're not fond of Bill Gates, but it still hurts to see Microsoft struggle with problems that IBM solved in the 1960s.' Or Dave Winer, who recently observed, 'We marvel that the runtime environment of the web browser can do things that we had working 25 years ago on the Mac.' And then there's Scott Locklin, who argues in a new essay that one of the problems with modern computer technology is that programmers don't learn from the great masters. 'There is such a thing as a Beethoven or Mozart of software design,' Locklin writes. 'Modern programmers seem more familiar with Lady Gaga. It's not just a matter of taste and an appreciation for genius. It's a matter of forgetting important things.' Hey, maybe it's hard to learn from computer history when people don't acknowledge the existence of someone old enough to have lived it, as panelists reportedly did at an event held by Mark Zuckerberg's FWD.us last Friday!"

Here's some worthwhile reading on why Lisp has trouble staying put—possibly a little flamebait-y: Lisp is not an acceptable Lisp [blogspot.ca], The Lisp Curse [winestockwebdesign.com], and Revenge of the Nerds [paulgraham.com]. The core arguments seem to be (a) it's really easy to invent things in Lisp so no one can agree on how to do it, and (b) the lack of a coherent standard platform means there is no easy target for university courses or job descriptions.

Lisp is easy to get good programs from. You just have to stop thinking in non-Lisp ways. What confuses people is the functional orientation, but if you don't understand the functional style of programming, a lot of modern stuff will pass you by. Procedural code is very easy in Lisp too.

Then name some "good programs" written in Lisp. I have worked with thousands of programs written in C. Plenty in C++, Java, Perl, Python, and even a few in Ruby. But other than Emacs scripts, I have never come across Lisp in a non-academic program.

Macsyma? Emacs itself is more Lisp than C. Zork was written in a Lisp dialect. Mirai is Lisp, and was used to do animation in Lord of the Rings. Lots of expert systems. Several CAD systems and other modelling programs. Data analysis stuff. Whoever uses Clojure is using Lisp, and it seems to have some traction. And Orbitz is apparently using a lot of Lisp internally (just to throw out a web site, since some people think it's not real if it's not a web site or PC app).

There are also thousands of failed projects in C, C++, Java, Perl, Python, and Ruby. There are two separate questions to debate - whether Lisp is a good language to use, period, and what obstacles there are to widespread adoption. Obviously if Lisp sucks, then that can be a big obstacle to widespread adoption.

Going from C to C++ is easy, I did that. Going from C++ to Java is easy. I did that too. Going from Java to Lisp is damn difficult, at least for me. But the fact that teaching mainstream C, C++, and Java developers Lisp is difficult merely makes it unlikely Lisp will be popular. It does not prove Lisp is a poor language.

Whether or not a language is functional is a matter of degree (just as whether or not a language is dynamic is a matter of degree).

Out-of-the-box Lisp is not nearly as much a functional language as *ML or Haskell. There is no pattern matching for example. Of course you can turn Lisp into a functional language, which is what Qi [wikipedia.org] is. In true Lisp fashion it's incredibly clever - 10k lines of CL and you've got a functional language that is arguably even more of a functional language than Haskell (not sure about functional specific optimizations though). And also in true Lisp fashion, there's already a (not fully compatible) fork/successor called Shen [wikipedia.org] (by the same guy who created Qi!).

In other words it's an ongoing experiment, rather than something you can rely on. It grew out of the L21 project, which was supposed to be about Lisp for the 21st century. The lesson is that Lisp for the 21st century is just like Lisp for the 20th century - incredibly clever and powerful but not stable or standardized enough to rely on. Of course the great exception to that is Common Lisp - a byzantine composite of many pre-1985 dialects, warts and all, that hasn't really been updated in 27 years.

It's absurd to talk about which is better except with respect to a certain type of application. For example, deeply embedded processors running on an RTOS (not embedded Linux or something), or even bare metal, with memory limitations and hard real-time constraints measured in tens of microseconds is not the best environment for Lisp (or any GC language for that matter).


It's the best environment for Forth, and conversely, Forth is the best language for that environment.

It works fine if you're judicious about what C++ features to use, and when and where to use them. As much of a pig as C++ is, one thing Stroustrup got right was making it a true multi-paradigm language, and he stuck to the principle of not dragging in any baggage or overhead that you don't explicitly ask for.

What extra effort is that? Calling your files *.cpp instead of *.c? Ok, that is an extra two letters per file name.

The why is so that you can take advantage of C++ features. Templates for example are a great way to write very fast code, and if you know what you're doing you don't get the dreaded bloat. Object based programming is a nice way to encapsulate things and adds zero overhead. True OO can be used judiciously in the non-speed critical parts (often a clean way to have a single image handle several minor hardware variants). A combination of object based and operator overloading can be a clean way to handle the semantics of fixed point DSP, which don't map nicely to most languages.
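The fixed-point point is easier to see with a concrete sketch. This is not the poster's code - just a minimal Q15-style illustration of using operator overloading to carry the scaling rules, written in Python for brevity (the class name and format choice are mine; in C++ the same shape applies with zero runtime overhead):

```python
class Q15:
    """Minimal Q15 fixed-point number: integer value with 15 fractional bits.
    Overloading + and * keeps the scaling rules in one place instead of
    scattering shifts through the signal-processing code."""
    SCALE = 1 << 15

    def __init__(self, value, raw=None):
        # Store the underlying integer; construct from a float by scaling.
        self.raw = raw if raw is not None else int(round(value * self.SCALE))

    def __add__(self, other):
        return Q15(0, raw=self.raw + other.raw)

    def __mul__(self, other):
        # The product of two Q15 values has 30 fractional bits; shift back to 15.
        return Q15(0, raw=(self.raw * other.raw) >> 15)

    def __float__(self):
        return self.raw / self.SCALE

a, b = Q15(0.5), Q15(0.25)
print(float(a + b))  # 0.75
print(float(a * b))  # 0.125
```

The caller writes ordinary arithmetic while the rescaling after multiplication happens in exactly one place, which is the "clean way to handle the semantics of fixed point DSP" being described.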

C++ is not designed for efficiency.

Read Stroustrup. As I already said, one of the key design principles was not to add overhead unless you explicitly ask for it. It was designed to be as efficient as you need it to be.

If you don't care quite so much about efficiency, use LISP

1. GC and HRT? Don't go.

2. Even where you can write Lisp to be as efficient as C/C++ for low level operations, all you're doing is writing C/C++ in Lisp. What's the point?

3. How many Lisps, Schemes, or whatevers have you seen for cross-development to DSPs and other architectures that are usually only embedded, that have good optimizers (a la SBCL, for example), and that can run without an OS?

it gives me somewhat of a sigh of relief to see C, C++ and Java/C# be stable in the face of this recurring tide of fad languages

Sheesh, kids today. I remember when C++, Java and C# were the fad languages. I even remember when C outside of the Unix world was a "fad" replacing Fortran, Basic and Pascal. The "classic" languages you grew up with are not the end of programming language evolution.

OTOH I admit that the so-called Cambrian explosion of languages really needs to be followed by a mass extinction. Perl, Python, Ruby, Lua, etc., etc., etc. You could spend the rest of eternity debating their pros and cons, but do we really need all of them? It's great if you want to spend the rest of your life learning yet another genius's "best of" mix of existing language ideas, but it sucks if you just want to get work done. Then there's Clojure, because what the Lisp world really needs is yet another dialect, and F# because, uh, well OCaml has been around a while and we really want yet another variant, and...

There's arguably a list of things that make a language into "a" Lisp [paulgraham.com], and not all of the languages that meet those criteria are actually forks or directly inspired by McCarthy et al.'s LISP programming language. GP was referring to this concept, but probably has a much looser understanding of what it means to be a Lisp. Tragically, TFA is mostly about APL.

It's managers and executives who make the decisions, and to them, whether it's a browser or mobile app or SaaS or whatever the latest trend is, who cares if you're reinventing the wheel as long as profits are up?

Also in line with this: I can't imagine that the way patents work actually helps with the problem of reinventing the wheel. You almost have to reinvent the wheel to create a working solution that won't get you sued.

Actually, from the examples cited, it seems to me to be painfully obvious why in those cases information was not shared.

One of the most painful things in our culture is to watch other people repeat earlier mistakes. We're not fond of Bill Gates, but it still hurts to see Microsoft struggle with problems that IBM solved in the 1960s.

For quite a long period of time, IBM and MS were stiff competitors (remember OS/2 warp?). I doubt MS would inform IBM what they were working on, much less seek help. In fact, it seems to be the exception rather than the rule for software companies to share code with each other. Selling code, after all, is usually how they make money.

'We marvel that the runtime environment of the web browser can do things that we had working 25 years ago on the Mac.'

I'm fairly confident that Apple would sue any company that copied its software written for the Mac. Let us also not forget how many problems Oracle caused for Google when it sued over the Java API in Android [wikipedia.org]. Yes, it is efficient to reuse old, tried and tested code, but it also opens you up to a lawsuit. So it is not so much reinventing the wheel as trying to find a different way of doing things so you won't get sued. For that, you have current IP laws to thank.

One of the problems with modern computer technology is that programmers don't learn from the great masters. 'There is such a thing as a Beethoven or Mozart of software design,' Locklin writes. 'Modern programmers seem more familiar with Lady Gaga. It's not just a matter of taste and an appreciation for genius. It's a matter of forgetting important things.'

The problem here is with equating writing software to producing works of art. People are willing to go out of their way to learn and improve themselves to paint better or make beautiful music because it enables them to express themselves. It's emotionally satisfying. OTOH most software is programmed to achieve a certain utility and the programmer is faced with constraints e.g. having to use certain libraries etc. He is rarely able to express himself, and his work is subject to the whims of his bosses. For most everyday programmers, I think there is no real motivation to 'learn from the great masters'.

An exception might be the traditional hacking/cracking community where the members program for the sheer joy/devilry of it. I understand there is a fair amount of sharing of code/information/knowledge/learning from the great masters within their community.

It's managers and executives who make the decisions, and to them, whether it's a browser or mobile app or SaaS or whatever the latest trend is, who cares if you're reinventing the wheel as long as profits are up?

That hasn't changed either. Just the specific subject of the idiocy has changed. Idiotic managers are timeless. Lady Ada probably had the same thing to say about Charles Babbage.

On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

I've always felt like that quotation had another interpretation, one that's much more favorable to the MPs:

If you're an MP, you've probably had to deal with a lot of people asking for money to fund what is essentially snake oil. If you don't understand the underlying 'cutting edge' technology (both plausible and acceptable), one simple test is to ask a question to which you KNOW the answer must be "No": if the person answers anything else, they're bullshitting, and you can safely ignore them. And as reported, the question is phrased in such a way that it would sorely tempt any huckster to oversell his device. I think Babbage's lack of comprehension was due to his inability to grasp that the MP was questioning HIM, rather than the device.

Come to England and look at our MPs! You will then probably feel that it wasn't such an unfair interpretation on Babbage's part.

Seriously though, there are many people out there (and they tend to be non-technical) who simply do not understand computers. This lack of understanding means that they effectively interpret the actions of computers as magic, in that there is no way for them to reason about what a computer might do. Even pretty smart people fall prey to this.

The UK has never had a tradition of putting technically minded people into parliament.

There was a point made in the 30-year retrospective talk at last year's European Smalltalk conference. If you have two languages, one of which allows developers to be more efficient, then you will end up needing fewer developers for the same amount of work. Unless your entire company uses this language and never experiences mergers, this group of developers will be outnumbered. When you begin a new project or merge two projects, management will typically pick the language that more developers have experience with. If you have a team of 100 C++ programmers and another team of one SilverBulletLang programmer, then it seems obvious that you should pick C++ for your next project, because you have vastly more experience within the company in using C++. The fact that the one SilverBulletLang programmer was more productive doesn't matter. In the real world, languages tend not to be silver bullets, so the productivity difference is more in the factor-of-two-to-five range, but that's still enough that the less-productive language wins.

What I've seen in the 3 decades I've been in the industry is that the number of programmers using OldVanillaLang versus SilverBulletLang is less of an issue - managers are often willing to go with a more resource-efficient solution, given that IT/MIS departments are often considered overhead on the bean counters' spreadsheets. The thing that keeps managers on the OldVanillaLang track is the answer to these questions: "Suppose my SilverBulletLang guys leave - who takes over their code? How do I evaluate SilverBulletLang developers in interviews? And since they're rare, can I afford them?"

You're missing the point. Developers may pick the language, but if you've only hired a tenth as many programmers for language A as for language B (because those who use language A are ten times more productive), then when you come to start a new project you'll have ten times as many advocates for language B as for language A. Which language will your development team pick?

That's such a cop-out, and it's not true. Most of the managers making these decisions are technical managers who come from development backgrounds themselves.

There is a problem at a more fundamental level, even outside of determining what buzzwords to use for a product, and it's prominent even in some of the higher echelons of web society. The most obvious example I'll point out is HTML5 - it's a braindead spec full of utterly amateur mistakes that could've been avoided if only Ian Hickson had spent 5 seconds understanding why existing things were the way they were and why that mattered.

An obvious example is HTML5's semantic tags: using a study to determine a static set of tags to define semantic capabilities, in a spec that was out of date before the ink had even dried, was just plain stupid. The complaint that we needed more readable markup, rather than div soup, to make writing HTML easier was naive: firstly because amateurs just don't write HTML anymore, they all publish via Facebook, Wordpress and so forth, and secondly because there's a good reason markup had descended into div soup - genericness is necessary for future-proofing. Divs don't care if their ID is menu or their class is comment; they're content-neutral, they don't give a fuck what they are, but they'll be whatever you want them to be, which means they're always fit for the future. In contrast, HTML5 tried to replace divs with tags such as aside, header, footer and so forth, which would be great except that with a finite number of elements you end up with people arguing about what to do when an element doesn't fit. Do you go back to using divs for that bit anyway, or do you fudge in one of the new tags because it's kinda-loosely related - bastardising the semantics in the first place, because now nobody really knows what each semantic tag refers to once it's been fudged in where it doesn't make much sense?

The real solution was to provide a semantic definition language: the ability to apply semantics to classes and IDs externally. Does that concept sound familiar? It should, because we had the exact same problem with applying designs to HTML in the past, and we created CSS. We allowed design to be separate from markup with external stylesheets because this had many benefits, a few obvious ones being:

1) Designers could focus on CSS without getting in the way of those dealing with markup, making development easier.

2) We could provide external stylesheets for no-longer-maintained sites and have the browser apply them, meaning there is a method to retrofit unmaintained sites with new features.

3) Our markup is just markup: it defines document structure, doing one thing and doing it well, without being a complete mess of other concepts.

Consider that these could've been benefits for building a semantic web too, if HTML5 had been done properly. The fact that Ian Hickson failed to recognise this with HTML5 highlights exactly what the article is talking about. He has completely and utterly failed to learn the lessons before him as to why inline styling was bad, and on a more fundamental level he demonstrates a failure to understand the importance of separation of concerns and the immense benefits it provides - something already learnt the hard way by those who came before him. His solution? Oh, just make HTML5 a "living spec" - what? Specs are meant to be static for a reason: so that you can actually become compliant with them and remain compliant with them. Spec compliance, once you've achieved it, shouldn't be a moving target. When it needs to move, that's when you know you need to release a new spec.

It's a worrying trend because it's not just him; I see it amongst the Javascript community as they grow in their ambition to build ever bigger software while insisting that Javascript is all they need to do it. The horrendously ugly hacks they implement to try and fudge faux-namespaces into the language, in a desperate attempt to alleviate the fact that Javascript was just neve

I saw the Lady Gaga quip and Scott's fondness for effective ancient map-reducey techniques on unusual hardware platforms. It reminded me of things like the discovery of America. Did the Vikings discover it years before any other Europeans? Certainly. Did the Chinese discover it as well? There's some scholarly thought that maybe they did. But you know whose discovery actually effected change in the world? Lame old Christopher Columbus.

Perhaps there's a lesson to be learned here from people who want to actuall

Eh? Starting 15,000 years ago, various waves of people came here from Asia, and huge and important civilizations have risen and fallen in the Americas since then. Some of those peoples are still around, and their influence on art, food, and medicine continues in our culture. One group of those Asians was absolutely crucial to the United States winning its independence and also had influence on our Constitution. Talk about effecting change in the world - and they're still around, by the way.

Lady Gaga is mentioned because she is both a classically trained artist and a sui generis success at PopTart art through self-exploitation. Yes, the reference is recursive - this sort of folk are prone to be. They can also be rude, if you bother to click through, as they give not one shit about propriety - they respect skill and art and nothing else.

When I plussed this one on the Firehose I knew most of us weren't going to "get it" and that's OK. Once in a while we need an article that's for the outliers on the curve to maintain the site's "geek cred". This is one of those. Don't let it bother you. Most people aren't going to understand it. Actually, if you can begin to grasp why it's important to understand this you're at least three sigmas from the mean.

Since you don't understand why it's important, I wouldn't click through to the article and attempt to participate in the discussion with these giants of technology. It would be bad for your self-esteem.

For the audience though, these are the folk that made this stuff and if you appreciate the gifts of the IT art here is where you can duck in and say "thanks."

This is perhaps the most masterful parody of the passive-aggressive, cringe-inducingly self-congratulatory hipster attitude that I've seen on this site, and possibly anywhere on the internet, in some time. Bravo.

I find it curious that he didn't mention this [chris-granger.com] or this [vimeo.com] at the end, given that they're both about a year old and both flirt with death and/or the halting problem in order to offer better debugging features.

It's pretty damn obvious why this is: as an industry, we no longer shun those who should definitely be shunned.

Just look at all of the damn fedora-wearing Ruby on Rails hipster freaks we deal with these days. Whoa, you're 19, you dropped out of college, but you can throw together some HTML and some shitty Ruby and now you consider yourself an "engineer". That's bullshit, son. That's utter bullshit. These kids don't have a clue what they're doing.

In the 1970s and 1980s, when a lot of us got started in industry, a fool like that would've been filtered out long before he could even get a face-to-face interview with anyone at any software company. While there were indeed a lot of weird fuckers in industry back then, especially here in SV, they at least had some skill to offset their oddness. The Ruby youth of today have none of that. They're abnormal, yet they're also without any ability to do software development correctly.

Yeah, these Ruby youngsters should get the hell off all of our lawns. There's not even good money in fixing up the crap they've produced. They fuck up so badly and produce so much utter shit that the companies that hired them go under rather than trying to even fix it!

The moral of the story is to deal with skilled veteran software developers, or at least deal with college graduates who at least have some knowledge and potential to do things properly. And the Ruby on Rails idiots? Let's shun them as hard as we can. They have no place in our industry.

I think you were trolling, but there's a point under there. In the 70s you had to have a clue to get anything done. As more infrastructure and support system has been built, in the interest of not having to reinvent the wheel every project, you *can* have people produce things - or appear to produce things - while remaining clueless. Flash and sizzle have been replacing the steak.


Indeed. Half of today's programmers have roughly zero engineering education, yet want to be called software engineers. They have no idea, no idea at all, what their data structures look like in memory or why they are so damn slow. Heck, "data structure" is an unfamiliar term to many.

It's not entirely young vs old, either. I'm in my 30s. I work with people in their 50s who make GOOD money as programmers, but can't describe how the systems they are responsible for actually work.

How do we fix it? If you want to be good, studying the old work of the masters like Knuth is helpful, of course. Most helpful, I think, is to become familiar with languages at different levels. Get a little bit familiar with C. Not C# or C++, but C. It will make you a better programmer in any language. Also get familiar with high level. You truly appreciate object oriented code when you do GUI programming in a good Microsoft language. Then, take a peek at Perl's objects to see how the high level objects are implemented with simple low level tricks. Perl is perfect for understanding what an object really is, under the covers. Maybe play with microcontrollers for a few hours. At that point, you'll have the breadth of knowledge that you could implement high level entities like objects in low level C. You'll have UNDERSTANDING, not just rote repetition.
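The "what an object really is, under the covers" point can be made concrete. This is not Perl's actual mechanics, just the same idea sketched in Python with hypothetical names: an object is a record of state plus a table of functions that take the record explicitly, and a method call is a lookup followed by an ordinary call.

```python
# An "object" without the class keyword: state plus a method table.
def make_counter(start=0):
    state = {"n": start}
    def incr(self, by=1):
        self["n"] += by
        return self["n"]
    def value(self):
        return self["n"]
    state["methods"] = {"incr": incr, "value": value}
    return state

def send(obj, name, *args):
    # "Method dispatch": look the function up, pass the object as self.
    return obj["methods"][name](obj, *args)

c = make_counter()
send(c, "incr")
send(c, "incr", 4)
print(send(c, "value"))  # 5
```

Once you can write this by hand, class syntax in any language stops being magic: it's sugar over exactly this dispatch, which is also roughly what you would build if you implemented objects in low-level C.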

* none of this is intended to imply that I'm any kind of expert. Hundreds, possibly thousands of people are better programmers than I. On the other hand, tens of thousands could learn something from the approach I described.

Yes, back in the day it could be said that becoming familiar with assembly would make you a better C programmer. It can still be said, because it's true, and I said it above. That's the point of "spend a few hours playing with microcontrollers": to get a minimal familiarity with that level of intimacy with the hardware. Are you suggesting that it's not true, that C won't show you things that you don't learn from Ruby? Also the reverse - GUI programming in .NET will make you truly appreciate the value of obj

Sorry, my point was not as clear as I had hoped, since I never actually made it directly...

Are you suggesting that it's not true, that C won't show you things that you don't learn from Ruby?

Not as such. I'm suggesting that some people are simply impervious to learning. Back then, as now, people managed to struggle through careers in programming without seeming to gain knowledge, skill, or any understanding deep enough to write programs. For those people, they learned nothing really from C then and

To be honest, I sort of softened on Ruby on Rails after being forced to endure a project in it, and, much to my teeth-grinding resentment, actually found it a decent and productive environment (although I'd say Django more so, because of its relative lack of magic - and hey, who doesn't enjoy screwing around with Python?).

Now don't get me started on Javascript on the server and NoSQL systems. Somewhere between "let's call ourselves amazing because we got a god damn web browser script environment to implement a pa

This is one of the sayings that's always bugged me. It's true, but that's because the first thing that a good workman does is pick appropriate tools, or build them if they don't exist. Many of the great scientists, artists and craftsmen over the decades have been as well remembered for the tools that they created as for the things that they did with the tools, yet this saying is often used to mean 'put up with crappy tools, their limitations are your fault,' when it should mean 'if you are failing because of bad tools, it's your fault for not using better ones.'

Neither one of your meanings matches how I've always heard it. A poor worker will try to place the blame on someone else. "It couldn't have been my fault, they must have been bad tools." So the tools were the constant rather than the variable.

I can tell that you're young simply because you used C++ in a debate where someone slightly older would have used C. Either that, or you're a Windows programmer.

C utterly dominated open source (and thus the Slashdot community) until about 5 years ago. That's when the overwhelming number of universities switched to C++. Of course, before that it was Java, so you can see the trend.

Unless you're a Windows programmer, I'd stick with C, which is infinitely simpler, and provides you freedom to maintain competency i

The un-useful parts - overloading and inheritance, for example - detract from that.

C has overloading. The + operator is overloaded so that it operates differently on pointers, ints, chars, floats, doubles, and various combinations thereof. Could you imagine actually using a language without overloading?

I have used them and the result is not particularly pleasant.

Likewise for C++ which has *user defined* overloading. The idea of writing complex maths in C is horrible compared to C++.
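A quick sketch of the contrast being described - hypothetical names, with Python standing in for C++ since it also has user-defined overloading; the "C style" here means every operation spelled out as a named function call:

```python
# Why user-defined overloading matters for maths: compare the two styles
# on the same expression, a*b + c over complex numbers.
class Cplx:
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __add__(self, o):
        return Cplx(self.re + o.re, self.im + o.im)
    def __mul__(self, o):
        return Cplx(self.re * o.re - self.im * o.im,
                    self.re * o.im + self.im * o.re)
    def __repr__(self):
        return f"({self.re}+{self.im}i)"

# C-style: every operation is a named call, and nesting obscures the formula.
def c_add(a, b): return Cplx(a.re + b.re, a.im + b.im)
def c_mul(a, b): return Cplx(a.re * b.re - a.im * b.im,
                             a.re * b.im + a.im * b.re)

a, b, c = Cplx(1, 2), Cplx(3, 4), Cplx(5, 6)
print(c_add(c_mul(a, b), c))   # named calls:  (0+16i)
print(a * b + c)               # overloading:  (0+16i)
```

Both lines compute the same value; the overloaded form reads like the formula, which is the whole argument for writing complex maths in C++ rather than C.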

The designers of Ruby wanted Smalltalk with Perl syntax. I find it amazing that anyone could look at Smalltalk and think "the one thing this needs to make it better is Perl syntax". And you can substitute pretty much any language for Smalltalk in the last sentence.

These guys were openly publishing their brilliance before hiding how your shit works was even a thing. Believe it or not, once upon a time, if you invented a brilliant thing in code, you shared it for others to build upon, so everyone could learn and grow and benefit. Hiding it for profit wasn't even thought of yet. It wasn't just undesirable: the thought did not even occur. That was the golden age of much progress, as each genius built upon the last, standing upon the shoulders of giants while reaching for fame. Now that we're in a hiding era, we go around and around reinventing the same shit over and over, suing each other over who invented it first. It is madness. In the process we have moved backwards, losing decades of developed wisdom.

But it must all be re-discovered. If you know the what without the why, then you will not know where the true limits are, or when you can break them. Knowing the limits are there is a good thing, but knowing why is much, much more valuable. And experience (failures) gives you that information. Others often don't document their failures well, which leads to problems for those wanting to learn the "why" without having expensive failures of their own.

We just had an earthquake here, and one of the most heavily damaged buildings was

I'm offended that Ruby keeps getting thrown in with this Node/NoSQL stuff. Node has a couple of real use cases, but outside of those it's a waste of time. NoSQL has a couple of real use cases, but outside of them it's not something you build around.

Ruby, on the other hand, is a really interesting language that has the benefit of being so flexible it's made for creating DSLs: Puppet, Chef, Capistrano and Rails, just off the top of my head. Do some libraries have memory leak issues? Yes. Does its thread handli

Package management and fit-for-purpose tool chains don't exist in other languages? Is that a joke? Have you seen the Ruby gem ecosystem? Have you seen the Java ecosystem? You can do everything that you described in Ruby or Python without blinking, and you won't incur the technical debt that Node's global insanity creates. Node came along and people went "OMG! Non-blocking I/O!" and everybody else with a pulse looked at it and said... yeah, that's what background workers are for, but background workers encapsulate the logic instead of letting it all float around in one process. Eventually, Node code grows to insanity.

Mongo is awesome... for write-heavy applications. In most applications, that means one table could probably be better served by Mongo. For logging or cloud-based data aggregators it's EXCELLENT. It's a fantastic session store too. Also a great query cache. That doesn't make it the optimal tool for your entire system, where you might actually care about normalization, data compression, data integrity, or the amount of hard drive space required to store all the data bloat that comes with it.

I can build a fully functional ecommerce system with an API, payment gateways, account system and analytics in two weeks (and most of that is just setting up the payment gateway and merchant account) with Ruby, Python, Groovy or Scala. With one person. Having it do $100k/month in sales is a product of what it's offering, how effectively it's marketed, and how the supply-chain side of the business can scale with demand, and has absolutely zero to do with Node and/or Mongo.

That's why I don't work in areas where that can be done. In embedded systems you still need to know the basics and can't just rely on the web technology invented last week. Even in mobile devices they need bare-metal C coders (I just got a recruiter letter for one too). Gotta know hardware, operating systems, some assembler, debugging from core files, good algorithms, communication with other processors, etc. Try getting a $10/hour guy to do that, or someone who thinks a certificate is proof of qualificati

Computer science worked better historically in part because humorless totalitarian nincompoopery hadn't been invented yet. People were more concerned with solving actual problems than paying attention to idiots who feel a need to police productive people's language for feminist ideological correctness.

You may now go fuck yourself with a carrot scraper in whatever gender-free orifice you have available. Use a for loop while you're at it.

That's genius: comparing a "$100k/CPU" non-distributed database to a free distributed database. Also no mention that, yes, everyone hates Hive, and that's why there are a dozen replacements coming out this year promising 100x speedup, also all free.

And on programming languages, Locklin is condescending, speaking from his high-and-mighty functional-programming mountain, and makes no mention of the detour the industry first had to take into object-oriented programming to handle and organize the exploding size of software programs before combined functional/object languages could make a resurgence. He also neglects to make any mention of Python, which has been popular and mainstream since the late '90s.

One of the things that I still see is the idea that when a problem exists, you throw more people at it. The Mythical Man-Month pretty much threw that to the wind for software development, and I am sure there is a whole slew of books predating it that say essentially the same thing. Yes, advancements do mean that more people can communicate more directly, but there is still a limit, and I do not think it is as great as some believe. Define interfaces, define tests that ensure those interfaces behave as promised, and let small teams, even a single person, solve a small problem. What technological advance has done is make clock cycles very cheap, so there is less excuse to go digging around in code that works just to make it run a little faster.
Speaking of interfaces, we know that when data and processes are not highly encapsulated, it is nearly impossible to create a bug-free large project. One thing that object-oriented programming has done is create a structure where data and processes can be hidden, so they can be changed as needed without damaging the overall application. Now, many complain that the data is not really hidden, that it is just a formality. But all coding is just a formality, and a professional is mostly one who knows how to respect that formality to generate the most manageable and defect-free code possible.
One thing that has been lost with the generation of rapid-development systems quickly spitting out bad code is that code, and the ability to tweak it, is the basis of what we do.
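That formality-based hiding can be sketched in a few lines of Python (the class and names are invented for the example): the underscore prefix is pure convention, yet respecting it lets the internal representation change without breaking callers.

```python
class Account:
    """Sketch: the balance is 'hidden' behind a property. The underscore
    is only a convention, but respecting it lets the internal
    representation (integer cents here) change without touching callers."""

    def __init__(self):
        self._cents = 0                # internal detail, not the interface

    @property
    def balance(self):                 # the stable public interface
        return self._cents / 100

    def deposit(self, dollars):
        self._cents += round(dollars * 100)

acct = Account()
acct.deposit(10.55)
print(acct.balance)                    # 10.55
```

If the cents representation later becomes a Decimal or a ledger of transactions, only the class body changes; code that respected the formality keeps working.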

For any given software project there is an optimal team size. If the project is small enough, you can keep the team size down to what works with an agile development methodology. If the project is bigger than that, things get ugly. I started my career in a company that considered projects of 50 to 100 man-years to be small to medium sized. Big projects involved over a thousand man-years of effort and the projects were still completed in a few years calendar time. You can do the math as to what that mea

Yes. No code runs cross-platform on varying architectures - INCLUDING the stuff that supposedly does, like Java and JavaScript and all of the web-distributed stuff. All of it DEPENDS on an interpretation layer that, at some point, has to connect to the native environment.

Which is what BASIC was all about. And FORTRAN. Expressing the algorithm in a slightly more abstract form that could be compiled to the native environment, and then in the case of BASIC turned into interpreted code (Oh, you thought Java invented the virtual machine?)

There are a lot of things where, if source code were available, other people could build on it and make higher-quality products. In the absence of source code, people need to start from scratch, often reinventing the wheel.

Competition for money might get people to strive to make better pieces of art. But on the flip side, those same competitors will sue your pants off for any reason they can find so you don't compete with them either.

And on an unrelated note, I had an idea today for a zombie video game like Groundhog Day. Every time you die, it starts over at the beginning of the zombie pandemic. As you die and play through it over and over, you learn secrets about where weapons and supplies are. You find tricks you can use to survive and save people. Eventually you find out who caused the zombie pandemic. You can then kill him before he goes through with it. I'm not sure an ending where you serve time in prison is a good ending, though. I didn't think it the whole way through, but it sounded like a good premise for a zombie game.

There are a lot of things where, if source code were available, other people could build on it and make higher-quality products. In the absence of source code, people need to start from scratch, often reinventing the wheel.

That doesn't seem true for the most part.

All open source does with regard to code reuse is make it painfully obvious how much redundancy there is. The spat between the different Linux display managers is one recent example, but I'm sure you can think of many others.

We marvel that the runtime environment of the web browser can do things that we had working 25 years ago on the Mac.

Did the Mac, 25 years ago, allow people to load code from a remote server and execute it locally in a sandbox and in a platform independent manner all in a matter of a couple of seconds? No. No it did not.

25 years ago you couldn't transmit the data in a matter of seconds. You *could* execute BASIC bytecode, though. Dynamic link libraries were invented for MULTICS in the 1960s. IBM assembler macros in the '70s could do more than a C++ template function. (OTOH, IBM deliberately crippled the small-computer world by choosing an overlapped 24-bit address space instead of a 32-bit linear one (on the Motorola chips), because their mainframes were still linear 24-bit.)

Did the Mac, 25 years ago, allow people to load code from a remote server and execute it locally in a sandbox and in a platform independent manner all in a matter of a couple of seconds? No. No it did not.

Depends on how much leeway you are willing to grant. Around 1990 or so, the Mac could run SoftPC, a virtual-machine x86 emulator running DOS or Windows. The Mac could certainly network, and had file servers. So you should in fact have been able to download code from a file server and run it in the virtual machine, which from a Mac perspective would effectively be a sandbox. Although the PC DOS/Windows platform isn't "platform independent," it was nearly universal (minus Mac-only systems) at the time.

We marvel that the runtime environment of the web browser can do things that we had working 25 years ago on the Mac.

Did the Mac, 25 years ago, allow people to load code from a remote server and execute it locally in a sandbox and in a platform independent manner all in a matter of a couple of seconds? No. No it did not.

Dude, just ignore this guy. Of all people who have the right to indulge in a good, old-fashioned 'get off my lawn' rant, Dave Winer ranks last. This is the man who, for our sins, gave us XMLRPC and SOAP, paving the way for the re-invention of... well, everything, in a web browser.

The third link is mainly praise for APL, the programming language. Talk about odd.

It would be great if he'd actually given examples of why APL is a good language. I would be interested in that. Instead he says mmap is really interesting, which actually doesn't have anything to do with programming languages.
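For anyone who hasn't met the mmap being referenced: the OS maps a file into the process's address space so you can slice it like a byte string instead of looping over read() calls. A minimal sketch in Python (the file and its contents are made up for the demo):

```python
import mmap, os, tempfile

# Create a throwaway file to map (purely for the demo).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello mapped world")
    path = f.name

with open(path, "rb") as f:
    # Map the whole file read-only; no explicit read loop needed.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        print(mm[6:12])        # slice the mapping like bytes: b'mapped'

os.unlink(path)
```

The kernel pages the file in on demand, which is why mmap shows up in performance discussions rather than language-design ones.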

He says that old programmers have left a lot of examples of good source code. It would be great if he'd actually linked to their code.......

He says system performance is the same as it was way back then. He thinks that stuff just happened immediately on those systems because they were running very efficient code. So what. Here's a simple test. Go get one of those computers and set it next to yours. Turn them both on. Mine would be at a desktop before the old one even thinks about getting down to actually running the operating system. Or start a program. On a current system it loads now. As in, right now. Back then it was a waiting game. Everything was a waiting game. He must have simply forgotten or repressed those memories.

Also those old programs did a lot less than many of our new programs. People often forget that when complaining about performance.

That's not to say, of course, that modern programs couldn't be written more efficiently. Because of Moore's Law and other considerations, we have moved away from spending a lot of time on performance and efficiency.

What past were you from? When I had DOS 3.3 running on my XT (on a hard drive), it booted in a few seconds after POST. When I loaded Windows 3.1 (no network at home at the time, so didn't run W3.11) on an XT with 1M RAM, it would take forever. And DOS 3.3 from floppy was slow, and loud. But 3.3 from HD on an ancient XT was much faster than Windows is today. DOS programs loaded fast, granted I was running 300k programs, not 300 MB programs, but they were still fast on DOS 3.3 back in the day. What were you running on your ancient computer?

I had an Atari ST at college. It booted to a (graphical, no less) desktop pretty much instantly: say a few seconds if you had a slew of SCSI peripherals (especially a CD-ROM drive), but otherwise about half a second.

It was ready to go, too. None of this crap of *showing* the desktop and then spinning the busy cursor for another 30 secs...

When people don't learn from those who have made mistakes, or who have real workplace experience (not just years of academic experience), it's easy to end up making mistakes that only seem like good ideas in theory.

It's also similar to some of the cert-type stuff: the book says one thing, but in the real workplace that just doesn't work.

The best and brightest at Apple, MS, BeOS, Linux did learn from "the great masters" - thank you to all of them.
They faced the limits of the data on a floppy and CD.
They had to think of updates over dial-up, ISDN, ADSL.
Their art had to look amazing and be responsive on first-gen CPUs/GPUs.
They had to work around the quality and quantity of consumer RAM.
They were stuck with early sound output.
You got a generation of GUIs that worked, file systems that looked after your data, and over time better graphics and sound.
You got a generation of programming options that let you shape your 3D car on screen rather than write your own GUI and then have to think about the basics of 3D art for every project.
They also got the internet working at home.

Some people seem to think this article is about going back to the past. They miss the entire point. We're not saying that older programs were better, or that older computers were better, or that we should roll back the clock. We're saying that the old masters had to pay more attention to what they were doing, had to learn more and be broad-based, had to learn on their own, and so forth. When they had good ideas, the ideas were shared, not continually reinvented and presented as something new. They didn't rely on certification programs.

As a 66 year old life long geek I actually saw many of the things I worked with decades ago reinvented numerous times under a variety of names, but there is one thing I used extensively on IBM OS/360 that I have never seen in the PC world that was a very useful item to have in my tool kit. The Generation Data Set and by extension the Generation Data Group were a mainstay of mainframe computing on that platform the entire time I worked on it. When I moved on to Unix and networks in the last few decades of my career I looked for something similar, and never found anything quite as simple and elegant (in the engineering sense of the word) as the Generation Data Set was. Oh, you can build the same functionality in any program, but this was built into the OS and used extensively. If anyone has seen a similar feature in Unix or Linux I would love to know about it.
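For readers who never met Generation Data Sets: each new run of a job gets the next numbered "generation" of a dataset, and the system automatically prunes old generations back to a configured limit. The comment is right that you can build the same thing in any program; here is a minimal sketch of that behavior on a Unix filesystem (the `.G0001` naming is an assumed convention for the example, not an OS feature, unlike the real GDG):

```python
import os, re, tempfile

def new_generation(base, directory, keep=3):
    """Create the next generation of `base` and prune to the `keep` newest,
    roughly like a GDG's limit attribute."""
    pat = re.compile(re.escape(base) + r"\.G(\d{4})$")
    gens = sorted(int(m.group(1)) for f in os.listdir(directory)
                  if (m := pat.match(f)))
    nxt = (gens[-1] + 1) if gens else 1
    # Create the new (+1) generation.
    open(os.path.join(directory, f"{base}.G{nxt:04d}"), "w").close()
    # Remove the oldest generations beyond the retention limit.
    for g in gens[:max(0, len(gens) + 1 - keep)]:
        os.remove(os.path.join(directory, f"{base}.G{g:04d}"))

with tempfile.TemporaryDirectory() as d:
    for _ in range(5):
        new_generation("report", d, keep=3)
    print(sorted(os.listdir(d)))   # ['report.G0003', 'report.G0004', 'report.G0005']
```

The mainframe version did this in the catalog, with relative references like `REPORT(-1)` for the previous generation; that part has no simple filesystem equivalent.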

... which really means the late '60s into the '70s. Isaac Newton said that he saw far because he stood on the shoulders of giants. Bill Gates and Steve Jobs were *proud* of knowing nothing about the industry they were trying to overturn. The same free, open, do-your-own-thing attitude (partly based on the new abundance helped along by technological advancement) that permitted startups to overtake established manufacturers, also encouraged tossing out anything "established" as "outdated" whether it was useful or not.

You can learn a lot from Mozart because you can read all the notes he published. You can listen to many interpretations of his works by different people. We don't have the chance to read through 25-year-old Mac symphonies^W programs. We aren't even writing for the same instruments.

Chariots were masterpieces of art. They were often made of precious metals and had elegant design work. They were environmentally friendly, using no fossil fuels whatsoever. They didn't cause noise pollution, or kill dozens of people when they crashed.

Aircraft makers should learn from the past. They have totally functional designs, no semblance of artistry anywhere. Accommodations are cramped, passengers treated like cattle.

We should go back to the good old days, things were so much better back then.

I'm old enough at 55 to remember the past, and yes I did love APL briefly, but lamenting that the present isn't like the past is like wishing it were 1850 again so you could have slaves do all your work. Neither the web nor the modern mobile application is anything like the past, and what we use to write that code today is nothing like what I started with. Trying to relive the past is why old programmers get a reputation for being out of touch. The past is important in that I learned a lot then that still rings true today, but I can say that about every year since I started. Today is a new day, every day.

For instance: as a cyberneticist I'm fond of programs that output themselves; it's the key component of a self-hosting compiler. Such systems have a fundamental self-describing mechanism, much like DNA and all other "life". While we programmers continue to add layers of indirection and obfuscation (homomorphic encryption) and segmented computing (client/server), some of us are exploring the essential nature of creation that creates the similarities between such systems -- while you gloat over some clever system architecture, some of us are discovering the universal truths of design itself.
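The "programs that output themselves" mentioned above are quines. A minimal Python one is two lines, using %r to embed the program's own text as data (comments are omitted because a true quine's output must match its source exactly):

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly those two lines; feed the output back through the interpreter and it reproduces itself again, which is the self-describing property the comment is after.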

To those who may think Computer Science is a field that must be studied or else repeated, I would argue that there is no division in any field and that you haven't figured out two key things:
0. Such iteration is part of the cybernetic system of self-improvement inherent in all living things -- to cease is death, extinction.
1. Nothing in Computer Science will truly be "solved" until a self-improving, self-hosting computing environment is created...

So, while you look back and see the pains of Microsoft trying to implement POSIX poorly, I've studied the very nature of what POSIX tried and only partially succeeded to describe. While you chuckle at the misfortunes of programmers on the bleeding edge who are reinventing every wheel in each new language, I look deeper and understand why they must do so. While you look to the "great minds" of the past, I look at them as largely self-important figures who thought they were truly masters of something, but ultimately did not grasp what they claimed to understand at a fundamental level -- the way a quantum physicist might acknowledge pioneers in early atomic thinking... important, but not even remotely aware of what they were truly doing.

I think "learning from the old masters" really isn't the problem. It's not that we don't have lots of smart people writing software. I think the core problem is that we haven't figured out how to do upgrades and backward compatibility properly, which the old masters hadn't figured out either. You can go and develop an HTML replacement that is better and faster, sure, but now try to deploy it. Not only do you have to update billions of devices, you also have to update millions of servers. Good luck with that. It's basically impossible, and that's why nobody is even trying it.

In a way, HTML/JavaScript is actually the first real attempt at solving that issue. As messed up as it might be in itself, deploying an HTML app to billions of people is actually completely doable; it's not even very hard - you just put it on your web server and send people a link. Not only is it easy, it's also reasonably secure. Classic management of software on the desktop never managed to get anywhere near that ease of deployment.

If software is to improve in the long run, we have to figure out how to keep it from taking 10 years to add a new function to the C++ standard. So far we simply haven't. The need for backward compatibility and the slowness of deploying new software slow everything to a crawl.

There are a few problems which keep being rediscovered. In many cases, the "new" solution is worse than the original one.

Flattened representations of trees: Fairly often, you want to ship a flattened representation of a tree around. LISP had a text representation of S-expressions for that. XML managed to make a mess of the problem by viewing it as "markup". JSON is essentially S-expressions again.
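The JSON-is-S-expressions point is easy to see with a small example: the Lisp tree (+ 1 (* 2 3)) and its JSON serialization are the same flattened structure with different brackets.

```python
import json

# (+ 1 (* 2 3)) rendered as an S-expression-style nested list.
tree = ["+", 1, ["*", 2, 3]]

# JSON ships the same flattened tree over the wire; parens become brackets.
wire = json.dumps(tree)
print(wire)                        # ["+", 1, ["*", 2, 3]]
assert json.loads(wire) == tree    # round-trips losslessly
```

What XML obscured with markup, attributes, and entity rules, both notations express directly: a tree is nesting, nothing more.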

Concurrency primitives: This goes back to Dijkstra, who got the basic primitives right. We had to suffer through decades of bad UNIX/POSIX/Linux locking primitives. The Go language touts the rediscovery of bounded buffers as its big advantage.
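The bounded buffer Dijkstra described is the same discipline Go channels expose as a language feature, and Python's stdlib has had it as queue.Queue for years: the producer blocks when the buffer is full, the consumer blocks when it is empty. A small sketch:

```python
import queue, threading

# A bounded buffer of capacity 2: put() blocks when full, get() when empty.
buf = queue.Queue(maxsize=2)
results = []

def producer():
    for i in range(5):
        buf.put(i)        # blocks once two items are in flight
    buf.put(None)         # sentinel: no more items

def consumer():
    while (item := buf.get()) is not None:
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)            # [0, 1, 2, 3, 4]
```

The blocking put/get pair is exactly the producer/consumer synchronization Dijkstra built from semaphores in the 1960s; Go's `ch := make(chan int, 2)` is the same idea with nicer syntax.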

Virtualization: IBM had that in 1967. IBM mainframes got it right - you can run VM on VM on VM... x86 virtualization can't quite create the illusion of a bare machine.