As a "new" programmer (I first wrote a line of code in 2009), I've noticed it's relatively easy to create a program that exhibits quite complex elements today with things like .NET framework, for example. Creating a visual interface or sorting a list can be done with very few commands now.

When I was learning to program, I was also learning computing theory in parallel: things like sorting algorithms, principles of how hardware operates together, Boolean algebra, and finite-state machines. But I noticed that whenever I wanted to test out some very basic principle I'd learned in theory, it was always much more difficult to get started, because so much of the technology is hidden behind libraries, frameworks, and the OS.

Making a memory-efficient program was a necessity 40 or 50 years ago, because memory was scarce and expensive, so most programmers paid close attention to data types and to how their instructions would be handled by the processor. Nowadays, some might argue that due to increased processing power and available memory, those concerns aren't a priority.

My question is whether older programmers see innovations like these as a godsend or as an additional layer to abstract through, and why they might think so. And do younger programmers benefit more from learning low-level programming BEFORE exploring the realms of expansive libraries? If so, why?

Learning should be easy and get harder

Having cheap memory, enormous disks, and fast processors aren't the only things that have freed people from the need to obsess over every byte and cycle. Compilers are now far, far better than humans at producing highly optimized code when it matters.

Moreover, let's not forget what we're actually trying to optimize for, which is value produced for a given cost. Programmers are way more expensive than machines. Anything we do that makes programmers produce working, correct, robust, fully-featured programs faster and cheaper leads to the creation of more value in the world.

My question though is how do people feel about this "hiding" of lower-level elements. Do you older programmers see it as a godsend or an unnecessary layer to get through?

It is absolutely necessary to get any work done. I write code analyzers for a living; if I had to worry about register allocation or processor scheduling or any of those millions of other details then I would not be spending my time fixing bugs, reviewing performance reports, adding features, and so on.

All of programming is about abstracting away the layer below you in order to make a more valuable layer on top of it. If you do a "layer cake diagram" showing all the subsystems and how they are built on each other you'll find that there are literally dozens of layers between the hardware and the user experience. I think in the Windows layer cake diagram there's something like 60 levels of necessary subsystems between the raw hardware and the ability to execute "hello world" in C#.

Do you think younger programmers would benefit more from learning low-level programming BEFORE exploring the realms of expansive libraries?

You put emphasis on BEFORE, so I must answer your question in the negative. I'm helping a 12 year old friend learn to program right now and you'd better believe I'm starting them in Processing.js and not x86 assembler. If you start a young programmer in something like Processing.js they'll be writing their own shoot-em-up games in about eight hours. If you start them in assembler they'll be multiplying three numbers together in about eight hours. Which do you think is more likely to engage the interest of a younger programmer?

Now if the question is "do programmers who understand layer n of the cake benefit from understanding layer n - 1?" the answer is yes, but that's independent of age or experience; it's always the case that you can improve your higher level programming by understanding better the underlying abstractions.

As much as machines have gotten faster and bigger, software has gotten slower and bigger.

To be more constructive, what I proposed was that information theory, and its direct relevance to software, be part of computer science education. It is only taught now, if at all, in a very tangential way.

For example, the big-O behavior of algorithms can be very neatly and intuitively understood if you think of a program as a Shannon-type information channel, with input symbols, output symbols, noise, redundancy, and bandwidth.

On the other hand, the productivity of a programmer can be understood in similar terms using Kolmogorov information theory. The input is a symbolic conceptual structure in your head, and the output is the program text that comes out through your fingertips. The programming process is the channel between the two. When noise enters the process, it creates inconsistent programs (bugs). If the output program text has sufficient redundancy, it can permit the bugs to be caught and corrected (error detection and correction). However, if it is too redundant, it is too large, and its very size, combined with the error rate, causes the introduction of bugs.

As a result of this reasoning, I spent a good part of the book showing how to treat programming as a process of language design, with the goal of being able to define the domain-specific-languages appropriate for a need. We do pay lip service to domain-specific-languages in CS education but, again, it is tangential.

Building languages is easy. Every time you define a function, class, or variable, you are adding vocabulary to the language you started with, creating a new language with which to work. What is not generally appreciated is that the goal should be to make the new language a closer match to the conceptual structure of the problem. If this is done, then it has the effect of shortening the code and making it less buggy simply because, ideally, there is a 1-1 mapping between concepts and code.

If the mapping is 1-1, you might make a mistake and code a concept incorrectly as a different concept, but the program will never crash, which is what happens when it encodes no consistent requirement.
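To make that concrete, here's a small sketch in Java (the invoice domain and all the names are invented purely for illustration, not taken from the book): the same rule written against raw primitives, and then against vocabulary that mirrors the concepts.

```java
import java.time.LocalDate;

class ReminderRule {
    // Raw primitives: the concept is smeared across anonymous values.
    static boolean needsReminderRaw(double amount, LocalDate due, LocalDate today, boolean reminded) {
        return amount > 0 && due.isBefore(today) && !reminded;
    }

    // Vocabulary added to the language: the code reads like the requirement itself.
    static class Invoice {
        final double amount; final LocalDate due; final boolean reminded;
        Invoice(double amount, LocalDate due, boolean reminded) {
            this.amount = amount; this.due = due; this.reminded = reminded;
        }
        boolean isUnpaid()                     { return amount > 0; }
        boolean isOverdue(LocalDate today)     { return due.isBefore(today); }
        boolean needsReminder(LocalDate today) { return isUnpaid() && isOverdue(today) && !reminded; }
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice(120.0, LocalDate.of(2014, 1, 1), false);
        // "Send a reminder for unpaid, overdue invoices that haven't been reminded yet."
        System.out.println(invoice.needsReminder(LocalDate.now()));
    }
}
```

Both versions compute the same thing; the difference is that the second one names the concepts, so a mistake reads as a wrong concept rather than as an inscrutable tangle of primitives.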

We are not getting this. For all our brave talk about software system design, the ratio of code to requirements is getting bigger, much bigger.

It's true, we have very useful libraries. However, I think we should be very circumspect about abstraction. We should not assume if B builds on A and that is good, that if C builds on B it is even better. I call it the "princess and the pea" phenomenon. Piling layers on top of something troublesome does not necessarily fix it.

To wrap up a long post, I've developed a style of programming (which sometimes gets me in trouble) where:

- Invention is not a bad thing. It is a good thing, as it is in other branches of engineering. Sure, it may create a learning curve for others, but if the overall result is better productivity, it is worthwhile.

- Haiku-style minimalist code is valued. That goes especially for data structure design. In my experience, the biggest problem in software these days is bloated data structures.

All hail abstractions

High-level abstraction is essential to achieving ongoing progress in computing.

Why? Because humans can only hold so much knowledge in their heads at any given moment. Modern, large scale systems are only possible today because you can leverage such abstractions. Without those abstractions, software systems would simply collapse under their own weight.

Every time you write a method, you're creating an abstraction. You're creating a bit of functionality that's hidden behind a method call. Why do you write them? Because you can test the method, prove it works, and then invoke that functionality any time you want just by making the method call, and you don't have to think anymore about the code that's inside that method.
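A trivial example of that (mine, purely illustrative): once the method below is written and tested, every caller gets the behavior without re-deriving the edge cases.

```java
class MathUtil {
    // Clamp value into [lo, hi]; tested once, then reused without re-thinking the boundaries.
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    public static void main(String[] args) {
        System.out.println(clamp(300, 0, 255)); // 255
        System.out.println(clamp(-5, 0, 255));  // 0
        System.out.println(clamp(42, 0, 255));  // 42
    }
}
```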

In the early days of computing, we used machine language. We wrote very small, bare metal programs with intimate knowledge of the hardware we were writing them for. It was a painstaking process. There were no debuggers; your program usually either worked, or it crashed. There was no GUI; everything was either command-line or batch process. The code you wrote would only work on that particular machine; it would not work on a machine with a different processor or operating system.

So we wrote high-level languages to abstract all of that detail away. We created virtual machines so that our programs could be portable to other machines. We created garbage collection so that programmers wouldn't have to be so diligent about managing memory, which eliminated a whole class of difficult bugs. We added bounds checking to our languages so that hackers couldn't exploit them with buffer overruns. We invented Functional Programming so that we could reason about our programs in a different way, and we rediscovered it recently to take better advantage of concurrency.

Does all this abstraction insulate you from the hardware? Sure it does. Does living in a house instead of pitching a tent insulate you from nature? Absolutely. But everyone knows why they live in a house instead of a tent, and building a house is a completely different ball game than pitching a tent.

Yet, you can still pitch a tent when it is necessary to do that, and in programming, you can (if you're so inclined) still drop down to a level closer to the hardware to get performance or memory benefits that you might not otherwise achieve in your high-level language.

Can you abstract too much? "Overthink the plumbing," as Scotty would say? Of course you can. Writing good APIs is hard. Writing good APIs that correctly and comprehensively embody the problem domain, in a way that is intuitive and discoverable, is even harder. Piling on new layers of software isn't always the best solution. Software Design Patterns have, to some degree, made this situation worse, because inexperienced developers sometimes reach for them when a sharper, leaner tool is more appropriate.

A broken dream

The increase in the complexity of systems is relentless, oppressive, and ultimately crippling. For me as an older generation programmer, it is also bitterly disappointing.

I've been programming for well over 40 years, having written code in 50-100 different languages or dialects, and become expert in 5-10. The reason I can claim so many is that mostly they're just the same language, with tweaks. The tweaks add complexity, making every language just a little different.

I have implemented the same algorithms innumerable times: collections, conversions, sort and search, encode/decode, format/parse, buffers and strings, arithmetic, memory, I/O. Every new implementation adds complexity, because every one is just a little different.

I wonder at the magic wrought by the high flying trapeze artists of the Web frameworks and mobile apps, at how they can produce something so beautiful in such a short time. Then I realize how much they don't know, how much they will need to learn about data or communications or testing or threads or whatever before what they do becomes useful.

I learned my craft in the era of fourth generation languages, where we genuinely believed that we would produce a succession of higher and higher level languages to progressively capture more and more of the repetitive parts of writing software. So how did that turn out, exactly?

Microsoft and IBM killed that idea by returning to C for writing apps for Windows and OS/2, while dBase/Foxpro and even Delphi languished. Then the Web did it again with its ultimate trio of assembly languages: HTML, CSS, and JavaScript/DOM. It's been all downhill from there. Always more languages and more libraries and more frameworks and more complexity.

We know we should be doing it differently. We know about CoffeeScript and Dart, about Less and Sass, about template languages to avoid having to write HTML. We know, and we do it anyway. We have our frameworks, full of leaky abstractions, and we see what wonders can be done by those chosen few who learn the arcane incantations, but we and our programs are trapped by the decisions made in the past. It's too complicated to change or start over.

The result is that things that ought to be easy are not easy, and things that ought to be possible are nearly impossible, because of complexity. I can estimate the cost of making changes to implement a new feature in an established code base and be confident I'll be about right. I can estimate, but I can't justify it or explain it. It's too complicated.

In answer to your final question, I would strongly advise younger programmers to start as high on the layer cake as they possibly can, and only dive down to the lower layers when need or desire provides the impetus. My preference is for languages with no loops, little or no branching, and explicit state; Lisp and Haskell come to mind. In practice I always finish up with C#/Java, Ruby, JavaScript, Python, and SQL, because that's where the communities are.

Final words: complexity is the ultimate enemy! Beat that and life becomes simple.

54 Reader Comments

Ultralurker here. What young programmers need to understand is that they must concentrate solely on the task at hand before addressing other concepts. Coming from an app/server support background, I've found that I only want to look at the lower levels of the cake, the stack, the OSI model, or what have you when I need to troubleshoot my code. Doing this over time helps young programmers understand the mind-boggling relationships between their code, its dependencies, and their environment. It's an organic process. Once I feel capable in one thing, I start to learn about what makes my code tick.

It's been a difficult process, to be sure. But hey, that's how you learn things. Troubleshooting, and learning efficient methods of doing so, is the most important aspect of learning anything, IMO. Add print statements, use verbose logging whenever possible, and don't be too clever for your own good. Utilize the tools you have: strace, tcpdump, etc. Ask questions and use Google appropriately. While we have the hardware now, that doesn't excuse a young programmer from being excessively lazy.

TL;DR: The trick is to learn things the hard way, and never mind complexity until it truly needs to be addressed. And of course, ask questions.

I've noticed that all those libraries and frameworks let programmers do simple stuff easily. But once you hit a certain point, the libraries break down, and programmers who've learned only within the frameworks hit a wall they can't get over.

Take a really simple thing: reading a record from a TCP socket where the record is preceded by a 2-byte length. The newbies look at that and go "Sweet, I know the length, so this'll be simple," and they produce a quick bit of code using the library routines that works. Except it doesn't. It locks up forever the moment a faulty remote end sends a record shorter than the advertised length. It takes longer than allowed to finish if the remote is slow sending data (whether because of a problem with the remote or a network problem). And when they're told to handle timeouts, unexpectedly closed connections, over-long reads (where two records are sent back-to-back and the final read for the first record picked up the leading bit of the next one), and the like, they're lost, because the libraries don't directly handle any of that and they've never learned how the low-level stuff, the stuff the libraries themselves are built on, works.

At that point they're lost, unless someone like me has taken the time to write them a library that handles all that. And I've had arguments with managers about taking the time to package up that kind of code so the next guy doesn't have to worry about it, having to argue vehemently against the manager's not wanting to "waste" the time.
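Here's a rough sketch in Java of what handling those cases can look like (the class and method names are made up, and a production version would need a real error policy and an overall per-record deadline rather than just a per-read timeout):

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

/** Sketch: read one length-prefixed record (2-byte big-endian length) without the usual traps. */
class FramedReader {
    static byte[] readRecord(Socket socket, int timeoutMillis) throws IOException {
        socket.setSoTimeout(timeoutMillis);          // don't hang forever on a slow or dead peer
        DataInputStream in = new DataInputStream(socket.getInputStream());
        try {
            int length = in.readUnsignedShort();     // the 2-byte length prefix
            byte[] record = new byte[length];
            in.readFully(record);                    // loops until exactly 'length' bytes arrive,
                                                     // so a back-to-back next record is never consumed
            return record;
        } catch (EOFException e) {
            throw new IOException("peer closed the connection mid-record", e);
        } catch (SocketTimeoutException e) {
            throw new IOException("timed out waiting for record bytes", e);
        }
    }
}
```

None of that is rocket science, but you only know to write it if you know what the underlying stream can actually do to you.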

Frameworks and libraries are fine as long as you're only doing things the framework/library designer thought of and intended you to do, and as long as you're doing them the way the designer intended. But a fact of life in software development is that you won't always have the luxury of doing things the way someone else wants you to; you'll regularly be asked to do things differently, or to do things that weren't planned for. When that happens, if you don't know the building blocks, you'll be lost.

Abstraction itself is not the problem; abstraction without thought is the problem. If one views all programs as models of reality designed to run on a digital computer, then the question to be asked is: what is the appropriate level of abstraction to model the problem and create a "solution"? Some problems are best modeled at lower levels of machine abstraction, and others are best modeled at higher levels. So the question really revolves around what type of problem one is modeling, and then using the appropriate tools for that problem.

My theory is that every programming environment converges to the same level of complexity. In C/C++ it's hard to create objects on the heap correctly. Java is much easier, so people decided to add higher levels of abstraction and complexity: EJBs, dependency injection frameworks like Spring, and packaging solutions like OSGi. It would be too simple otherwise.

All of these technologies solve a specific problem, but often they are just idiotic and over-engineered:

- SOAP is a solution to a problem nobody had; luckily REST interfaces are winning.
- EJBs suck balls; in the end the database is handling the transaction isolation anyway. Again, systems like Hibernate that are simpler and lower level are winning. Or just program the DB layer yourself instead of learning another framework.
- Dependency injection frameworks are intellectual wank and should be banned. The technology adds way more complexity than it helps in simplification.
- Similar thing with pre-compilers like annotations: 99% of the time they solve non-existent problems, and a nice clean Java lib would solve the problem much better.

And so on and so forth. Higher levels of abstraction are great and sometimes needed, but I tend to prefer relatively lower levels, because each level of abstraction comes with a cost. So go as high as you have to, but not higher. In 20 years we will still need Java and C programmers, but Ruby programmers? No idea.

And I also oppose the idea that programmers do not need to know basic computer science algorithms. 90% of the time, perhaps, but I have seen too many O(n^3) implementations that hold up well during testing and then crap out once you reach 10,000 entries. Better compilers do not help you if you violate the rules of complexity theory, and in contrast to the newest little web development framework, knowledge about this will still be valuable in 100 years.
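Something like this contrived Java sketch is what I mean: both methods behave identically and pass a small test, but the first one quietly does O(n^2) work and falls over at exactly the scale mentioned above.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class DuplicateCheck {
    // Looks fine in testing with 100 entries; at 10,000+ entries the nested loop (O(n^2)) bites.
    static boolean hasDuplicatesNaive(List<String> items) {
        for (int i = 0; i < items.size(); i++)
            for (int j = i + 1; j < items.size(); j++)
                if (items.get(i).equals(items.get(j))) return true;
        return false;
    }

    // Same behavior, O(n) expected time with a hash set.
    static boolean hasDuplicates(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String item : items)
            if (!seen.add(item)) return true;   // add() returns false if the item was already present
        return false;
    }
}
```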

I don't think the complexity lies so much in any specific high-level abstraction, but in reinventing the wheel so many times. Used to be you could learn C, C++ (later Java), Perl, and SQL and pretty much handle anything that came along. Right now, we're at a time when no one is interested in standards, so we're seeing too much fragmentation where each company reinvents the wheel by providing their own language of choice, database, ORM tool, MVC layer, and so on. Each abstraction is fine, but if there are too many of them, it's hard to cope. This isn't innovation, it's fragmentation. There are way too many programming languages which are all similar but different, to the point it's hard to remember any specific language because they all blur together. By the time you learn one over-engineered ORM framework, another has come along to displace it.

Skills aren't portable enough. There's a "shortage" of skilled programmers, but a glut of them at the same time. Employers are looking for people with such narrow, specific skills that there just aren't that many people who have the skills. The "shortage" would be solved by industry standards, but employers don't seem to care. They're too busy chasing purple squirrels.

So abstractions and high-level libraries are good. There's no reason to write basic containers from scratch in 2014, or low-level socket code. Fragmentation and lack of industry standards, with the consequent lack of skill portability, is bad.

What I see is a stratification in the practice of software development.

There are people who only speak HTML who call themselves software developers. HTML is fine, as far as it goes, but you can't use it to write a database. There are application programmers who write in Java or C# with their big, helpful libraries of GUI and network stuff, but you can't (productively) implement Java bytecode compilers in Java. There are system programmers, writing in C or C++ and maybe some other languages, programming down on the bare metal, seeking high performance. And there are really specialized people with no name I can think of, optimizing a ten-instruction convolution loop on a digital signal processor or writing a 3D graphics engine on a GPU, sweating individual instruction timings and code size limitations.

Very high level languages limit the kind of conversation you can have to a bland and simplified domain. You can get a lot done quickly, but only if you want to talk about what the language has a vocabulary for. There just is a difference between a single haiku and a jet engine maintenance manual. Haiku can be beautiful and elegant, but describing how to service a jet engine takes a $#!^load of haiku.

I used to think that programming at lower levels was a more valuable and highly compensated skill than programming at higher levels. But the total market for low level coding is increasingly limited, so the difference is not as great as you'd expect.

I keep running into a brick wall when trying to brush up my coding to anything higher level than C. Functional Programming fits me like a glove, and I'm perfectly happy bit-banging outputs on a microcontroller and carting bytes around manually. C is a logical extension of that, and I can grok it just fine. But above that, things get gnarly, and you end up looking up at a vast stack of APIs and layers with "stuff you can interact with" perched at the top. I've tried just starting right at the top and ignoring everything underneath, but I just can't handle telling a magic box to 'do stuff' to an abstract representation of what I'm working on without diving into how it's doing it, and several hours later I give up after trying to dig down through the API stack. I just haven't found a way to maintain a state of blissful unawareness of how things are happening so I can take advantage of abstracted languages. I'd probably have been delighted to live in the days of FORTRAN and COBOL.

First, I am not a software developer, but I see a lot of the same issues on the sysadmin side of things. I like abstraction when it saves time and simplifies the work, but an abstracted tool tends to be a blunt one. If you have a nuanced problem, it helps to have the knowledge to go down a layer and troubleshoot or make your own tools. As a general rule, after getting comfortable with whatever layer I'm working in, I will try to familiarize myself a little with layer n - 1. That is often the difference between a junior and a senior sysadmin: the junior admin knows how to drive the car well; the senior sysadmin can rip open the dash and hotwire the car if the ignition stops working.

I personally feel "complexity" was over sold and bought in flood loads. People (experienced or amature alike) tend to escape to the nearest available "free and open source"/popular library/framework/"complexity absorber" and push a solution out the door. And unwanted features, code bloat, lack of control as well along with the proto-type which is believed to be the solution.

o.O I just turned 30, and I'll let you in on a secret: when I was in high school, I was learning how to write C compiled with DJGPP, and the 486s were on the left side of the classroom, the 386s on the right.

Anyway, let me add my own answer. I'll frame it with this: I'm currently a programmer in charge of maintaining and adding on to a legacy system of over 200,000 lines of Perl and SQL. Lol. The idea is that frameworks are an evolutionary outcome of the biological act of programming, meaning everyone will eventually abstract away the menial, repetitive stuff. This is a good thing.

Everyone solves problems a little differently, and if you look at the sellers of any given book on Amazon, you'll find like 30 of them. What do they do differently? Similarly with CakePHP, Plone, Django, Drupal, Wordpress, Objective-C, .NET, etc., etc. At the end of the day, I just want to read the book, or see text on a web site.

So anyway, my advice to a new programmer about frameworks and languages is: pay attention to the problem they are used to solve. Don't really worry about the underlying aspects of the language or framework unless it is not completely solving the problem you are using it for.

An exception would be if you're writing a framework for other people, like MongoDB libs or a new prototype.js. In that case, you're probably not a new programmer, and you've got a responsibility to be able to answer 'why' for pretty much everything.

Frameworks are a great starting point. You can look at what they try to simplify as an idea of what problems people are trying to solve. Then you read up on how the framework solves and simplifies those issues.

I'm sure there are more fun topics. I learned a lot about some of these topics from Ars back in the day; it got me so interested. I used to assume most programmers had knowledge of these topics, but I have been starting to doubt that over time.

Do I need to know the innermost workings of an engine to drive a car? No. Does having an understanding of what my inputs are doing make me a better driver? Absolutely.

Not sure that's an appropriate analogy. When programming a computer, you're not "driving"; that analogy is more applicable to the user of the computer. The programmer is more analogous to the next auto engineer, tasked with perhaps adding fuel injection to the engine, for which you should know how the engine works. Otherwise you'll probably end up with a Rube Goldberg contraption, which is unfortunately what a lot of the code developed by engineers coming out of most comp sci programs looks like.

A lot of the points I would have liked to make have been made already, and probably better than I could. So I'll have to take what's left...

In these discussions about how programming has evolved, there's always a little bit of history repeating. Many things that were said and arguments that were made in the '90s could pass as being from last week if you replaced some of the names. That's not to say they're wrong or bad; I think the opposite is true. One key aspect to me is that what's doable is limited not by technology but by humans: how much complexity and structure you can mentally oversee, for example, or the interesting fact that the number of bugs can be estimated better from the amount of code than from the amount of things a program ends up doing.

And to me it seems that we've indeed made a lot of progress, but not at the fast pace hardware has made and not in the linear way one might expect when just thinking about the number of abstraction levels. Complaints about how new languages or frameworks have made some things worse or cut possibilities off seem very familiar from almost any time. And I think they're true, but doing it better is, in a way, just too hard. Everybody who learned to program probably knows the feeling of not really getting how to solve a problem, with kind of a mental knot in their head, until, at one point, they get it and everything just seems obvious. And from then on, you can't imagine how you couldn't see what's now so obvious before. I believe it is the same on a bigger scale.

Many new languages, frameworks, and paradigms have been kind of stabs in the dark by people who didn't really get how to do what they wanted to achieve. They simply didn't get it quite right yet, because what's now obvious to everybody, thanks to a generally shifted perspective, was opaque to everybody when those things were created. After some iterations, better approaches have been developed for some things. It's basically the same kind of progress we have made in other intellectual disciplines, where real progress also takes long and, in hindsight, many side steps and back steps were made before another step forward happened. And looking back, we can't really imagine why it ever was that hard. In the Middle Ages, building Fachwerk was cutting-edge stuff for the best architects and builders. Today, you get to do the calculations for that in your first term of engineering. I got taught Newtonian mechanics in 10th grade.

So in a certain way, I think writing software hasn't changed. The specific problems have shifted, so have the tools and the limits, but the mistakes we make in general haven't changed and the reasons why it's hard haven't either. And I think it will stay like that for the foreseeable future. Progress has been made and will keep on happening, but at the usual human pace and in the usual, complicated human way.

If your program does not ever add up to a lot of time to run, or cost much in the way of resources, then probably the most important thing is for you to write it clearly, be able to understand it, and be sure it is correct. High-level languages and good libraries are ideal for that. Use a language with a clean design and a library that has a high reputation. None of them are perfect, and certainly not perfect for everything, but you will find a favorite which is productive for turning out everyday code.

Every so often you come across a project which is big. It uses petabytes of data, or runs for hours at a time, and many people will use it over a period of time. This is when it is good to know what is possible. Have you analyzed the algorithms to know how they scale? How many items per second should you be able to process, how many terabytes of DRAM or SSD, what should the network be doing, how many cores or computers do you need? How many frames per second can you draw, how many polygons can you model?

Knowing the real world is important to these big programs, and will help you understand if the language you are using is the right tool, or if your scaling really works. Is your program running within a factor of 2 of what you think it should, or within a factor of 10? Or even 100? I've seen production code even at the 100x slower level occasionally, mostly because no-one ever tried to estimate how fast it could be. It worked, it produced correct and useful answers, they threw hardware at it and called it job done.

Now, if all that hardware only added up to a few thousand dollars, and they got a million dollars of benefit, and they moved on to solve some other problem, then it is job done. But if it took a million dollars of hardware, maybe they should have taken a couple of days to go through the exercise of analyzing what it should do.

This is surprisingly rare. Much more important than worrying about which language to use is knowing how to estimate the quality of your result. If you are practiced with your language of choice and you know what the goal is, you probably can end up within 2x of the ideal with almost any competent set of tools. But if you have not first figured out what success looks like, it will hardly matter what tool you choose.

"New" programmer here. I am someone who was brought into programming learning Java and ActionScript and working with abstractions and libraries galore. It was only afterwards that I moved down the stack, learning C and C++, how UDP and TCP operated, how linkers and compilers operate at an OS-level, and the way memory allocation works. It's been a wild ride, and I now know how to write data structures of all varieties, sorting algorithms, and pointer management. Sure, I couldn't write a kernel, but I could certainly implement most of the libraries I use if necessary.

That said, I couldn't get done what I get done without libraries, and I find abstraction immensely valuable. I went through three stages of understanding: under-abstraction, when I didn't really grok how any of it worked; over-abstraction, when I figured everything needed to be under seven layers of indirection; and finally, something of a middle ground. It's an invaluable tool, whether done through functions or modules or objects, and increased abstraction is, from my perspective, a good thing.

That said, having more low-level knowledge, even if only surface-level, has been immensely helpful for understanding what's going on. If something goes wrong with a library and I don't understand how it works, there's not a lot I can do. On the other hand, even if I couldn't write the library myself, understanding how it works in theory helps when the leaky abstractions hit. That, IMO, is what's most important: enough knowledge to understand how what you're using works, even if it would take you a little more time to write it yourself.

My question is whether older programmers see innovations like these as a godsend or an additional layer to abstract through, and why they might think so?

Definitely a godsend.

Abstractions mean you write less code, and writing less code is not only faster but means fewer bugs.

However, you need to be careful. Make sure you *understand* any abstraction layer you choose to use. Do not treat an abstraction as something you don't need to learn, instead treat it as something you don't need to write yourself.

You should always understand the abstraction layer well enough that you could throw it away and replace it with your own code. And if you find a problem in the layer, then do go ahead and do it.

In my opinion, abstractions do not reduce the amount of learning a programmer needs to undertake; you still have to learn all the same stuff. You just don't have to write as much actual code yourself.

Point blank, using libraries that do far more than you understand isn't a problem. The problem comes when you can't tell whether you're using a library in a way it wasn't intended for, or beyond its scope. You can use a whole slew of libraries if you can identify which ones are causing performance problems, why they're causing problems, and how to fix them.

Naturally, the more programming experience one has, the easier it is to do, so more experienced programmers will have less of a problem with this. There is no easy way for new programmers to get around this hurdle.

An O(n^2) algorithm that didn't work in 1994 but works now is no different from an O(n^2) algorithm that didn't work in 1974 and worked in 1994. So it really isn't about age. Also, hardware changes. Before the '90s, caching and pipelining weren't really a concern. An ordered list of an 8-byte data type should have been a linked list prior to 1990; now it should probably be an array. So age can be a hindrance. But if programmers know their stuff, they know why and how this is a problem, and can adapt.
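Roughly what I mean, sketched in Java (not a rigorous benchmark, and boxing blunts the effect here; a primitive long[] versus a hand-rolled linked list would show it even more starkly): both are "ordered lists", but the contiguous array traverses far faster on modern hardware because of caching.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class TraversalSketch {
    static long sum(List<Long> list) {
        long total = 0;
        for (long value : list) total += value;   // LinkedList chases pointers all over the heap;
        return total;                             // ArrayList walks a contiguous backing array.
    }

    public static void main(String[] args) {
        List<Long> array = new ArrayList<>();
        List<Long> linked = new LinkedList<>();
        for (long i = 0; i < 1_000_000; i++) { array.add(i); linked.add(i); }

        long t0 = System.nanoTime(); sum(array);  long arrayNs  = System.nanoTime() - t0;
        long t1 = System.nanoTime(); sum(linked); long linkedNs = System.nanoTime() - t1;
        System.out.printf("array: %d ms, linked: %d ms%n",
                arrayNs / 1_000_000, linkedNs / 1_000_000);
    }
}
```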

Continuously learning all this stuff, and how it changes, is a pain in the ass. There is a huge burn-out rate in this industry. So while it may seem daunting for a new programmer, at least you haven't wasted time learning things that aren't relevant anymore.

I hear this a lot but today it is quite doable for a small team to develop an application that costs more to operate than the total compensation of the team. Sometimes it is cheaper to throw another programmer at a problem instead of more hardware.

My opinion based on 48 years programming: Yes learn the low-level basics. And the high level of abstraction in today's languages, libraries, and frameworks is good because it permits you to program in the language of the solution space (i.e. business or user experience) rather than the language of the engine (i.e. registers, bytes, pixels).

- Dependency injection frameworks are intellectual wank and should be banned. The technology adds way more complexity than it helps in simplification.

I disagree with this strongly. Before dependency injection frameworks (I mainly use Spring), I was writing factory objects, my code was not very testable, scoping of objects was really hard (or I didn't understand the concept at all), and I had copy-paste all over for cross-cutting concerns like transactions.

Of course you can do everything right without dependency injection frameworks, but when you start to do advanced stuff you need to write some boilerplate, and at that point you might as well use a ready-made framework.
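For context, the underlying pattern can be sketched without any framework at all (the names below are invented). Constructor injection is what makes the code testable; what Spring and friends add on top is automating the wiring, scoping, and cross-cutting concerns.

```java
// Minimal sketch of constructor injection, no framework involved.
interface PaymentGateway { boolean charge(String account, long cents); }

class RealGateway implements PaymentGateway {
    public boolean charge(String account, long cents) { /* talk to the real service */ return true; }
}

class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }   // dependency comes in from outside

    boolean placeOrder(String account, long cents) { return gateway.charge(account, cents); }
}

class OrderServiceDemo {
    public static void main(String[] args) {
        // In a test, inject a fake instead of the real gateway; no factory classes needed.
        PaymentGateway fake = (account, cents) -> cents < 10_000;
        OrderService service = new OrderService(fake);
        System.out.println(service.placeOrder("acct-1", 2_500));   // true
    }
}
```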

My parents bought a Commodore 16 for me, on which I tried to program Monopoly in BASIC. Unbeknownst to me, though, 4kB of the 16kB was reserved for graphics, and a further 8–10kB was reserved for system use when programming in the BASIC environment (I think I got that the right way around). So in other words, this "16kB" computer was actually a 2–4kB computer as far as the programmer was concerned! I would get through a certain amount of programming, and then, BLIP! "OUT OF MEMORY." Oh no… I didn't think to save my work to the tape drive (2–4 minutes of waiting) before writing the next few dozen lines of code… Everything is lost, all of my work from the last few hours! The machine won't say anything but this error message now, until I reboot it! I can't even review my code! There's no hope of completing any meaningful, creative code within the available memory! (A similar thing would happen if there was a half-second power cut or a slight supply-voltage dip in the middle or at the end of your coding session.)

So for my next project, I wanted to try out machine code (you could switch the machine into that mode with a special key combination or something like that); but machine code was totally undocumented in the manual! I had no idea where to even start.

The commercial software available for that machine was misleading: there was no way you could program similar software on the device itself (even in the machine code environment, you needed a certain amount of memory for the development environment). Rather, such software was written on another "developer" machine, perhaps even with a completely different architecture (something companies like Commodore didn't exactly advertise to mere "consumers"). Instead, in the 1980s, they all liked to give the impression (quite falsely) that all you needed to do was purchase one of their regular machines and you'd be programming advanced software in no time; when in actual fact, the chances of recovering your investment on a machine like this by writing commercially meaningful software were precisely zero!

The example programs that came with that Commodore 16? Basically, spirograph simulations (in vogue at the time, and possible within a dozen lines of code involving some mathematical functions/commands). Yes, I got some value out of modifying and adapting code like that, trying out every combination of parameters, writing similar graph software, etc.; but it wasn't what we'd expected for a few hundred 1980s GBP! Probably the best things I did were at school, on their nice 32kB and 64kB Acorn "BBC" machines: hacking BBC system software like that BBC Master sound-waveform generator and making it work better than it had been designed to work.

Modern PCs are quite a different story: if you want to do "software development", just start with one of the tutorial websites out there, or buy a machine with just a little extra memory and storage space, download a free IDE like NetBeans and a free database like SQL Server Express, and start programming! Instead of purchasing hundreds of dollars' worth of "programming reference manuals" (OS kernel manuals containing the secret dark arts of OS function calls) like I did in the 1990s, visit the free Wikipedia and W3C websites etc. to learn about HTML, CSS, JavaScript and so on!

The added "complexity" of PCs with gigabytes of memory, terabytes of storage, gigaflops of processing power, hundreds of CPU instructions, CUDA capability, networking, security access control, and multi-user environments is only a good thing if you ask me! You can use it if you want it, and not if you don't. OK, so we've almost lost that old kudos contest of "how much stuff can we fit into a 1kB program?" Instead we're competing purely on the actual meaning and power of our code (and to a slightly lesser extent, efficiency still matters: think big-O complexity, database indexing, etc.), which is a much more fulfilling (and no less challenging) contest in my opinion!

It is important to understand what is being programmed. Rarely does one program a computer these days. Instead, one is actually writing instructions for a "container", a framework, an environment or some such thing. These paradigms really do change what it means to program a computer.

All of these technologies solve a specific problem, but often they are just idiotic and over-engineered:

- SOAP is a solution to a problem nobody had; luckily REST interfaces are winning.
- EJBs suck balls; in the end the database is handling the transaction isolation anyway. Again, systems like Hibernate that are simpler and lower level are winning. Or just program the DB layer yourself instead of learning another framework.
- Dependency injection frameworks are intellectual wank and should be banned. The technology adds way more complexity than it helps in simplification.
- Similar thing with pre-compilers like annotations: 99% of the time they solve non-existent problems, and a nice clean Java lib would solve the problem much better.

Because they do solve a specific problem, it sounds like the real issue is a programmer/team not knowing when to use these technologies and when not to use them (something about hammers and nails...).

Final words: complexity is the ultimate enemy! Beat that and life becomes simple.

I dunno about that. I really don't. Too many in the programming ranks suffer from ADHD. As soon as it's simple enough, somebody will figure out how to make it complex again. Prime example: Java and the so-called lambda expressions.

As a programmer who began to learn just as computers were really starting to take off as desktop machines (mid-'90s), here's my take.

The machines really haven't grown as much in complexity as it seems they have. The same concept that applied then still applies now: learn how to find and use documentation in order to determine the correct tool for the job and how to use it effectively. This concept applies from the lowest to the highest levels.

I really like most of the advancements myself. A lot of the stuff you find in an API like the .NET Framework is pretty simple under the hood. It's probably worth creating your own versions of the features you use most, just to understand them better (not perfect clones; just implement the general idea in your free time). The major benefit of APIs like the .NET Framework is that they have been worked on and improved over a long period of time by multiple industry professionals and experts. The vast majority of it works as expected with little to no side effects. To create something of that quality yourself requires a lot of time.

It's very beneficial to know how things work. A lot of the bugs I see these days could easily be attributed to a programmer just not understanding the internals of the things they're working with. It's not absolutely necessary, though.

My experience as a software developer is that you should never assume you have more than enough memory and HD space.

As an example, I was asked to port the company's games from the PC platform to various mobile platforms. Even though we had used ANSI C/C++ and no Microsoft specifics, we had never considered memory and space, so porting was relatively easy as far as the code was concerned and needed only a few adjustments for compilation and linkage; but the games couldn't run on any mobile platform.

They consumed too much memory, and the number of open handles and resources couldn't be handled on the mobile devices.

Refactoring the code was a must to make the games lighter. Obviously the changes were beneficial for the PC platform too: now the games run faster on the PC and we can install them on older, lighter PCs, so the cost was reduced for the PC platform as well.

Arduino gives you a fairly easy way to get right to the bottom of the programming stack, if that's what you want. It makes sense to start with the Arduino software, but you can go just a little further and program that thing with gcc and a datasheet without spending years learning how to cut through all the layers of operating system cruft.

Not sure that's an appropriate analogy. When programming a computer, you're not "driving"; that analogy is more applicable to the user of the computer. The programmer is more analogous to the next auto engineer, tasked with perhaps adding fuel injection to the engine, for which you should know how the engine works. Otherwise you'll probably end up with a Rube Goldberg contraption, which is unfortunately what a lot of the code developed by engineers coming out of most comp sci programs looks like.

I know what you're saying, but my analogy really relates to abstraction in general. It's simply that understanding what is behind the interface you are using typically leads you down the path of using that interface in a more appropriate manner. And, FWIW, I would say programming is absolutely driving. You are making a series of calls into code to drive it to do what you want, but happy to agree to disagree there because that part is more subjective.

While a lot of people swear by the "make it work first, then profile" philosophy, that doesn't mean you shouldn't consider optimisation as you work. Thankfully one of the great things about where we are today is that many high-level languages, and many common design patterns, are structured such that you're strongly encouraged to produce optimal code in the first place. There's also more emphasis on creating clean, maintainable code, and clean code is often efficient (or at least, easy for a compiler to optimise) code.

There are still issues to consider though; while processors are faster than ever, memory more abundant than ever etc., we also expect more of our hardware too, so efficiency isn't something to ignore. However, with compilers handling a lot of the minutiae for us, we're free to focus on more important issues like atomicity, synchronisation and so-on, all of which can hurt performance far worse than a few wasted bytes.

Even so, it's still important to consider things like how many distinct variables you're using (as these need to translate into registers at some point) or how you're feeding large data sets into the CPU as you don't want to create a memory bottleneck or hurt the CPU's cache performance.
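As a concrete illustration of that last point (a sketch of my own, not production code): the two loops below do the same arithmetic, but the row-major version walks memory sequentially and typically runs several times faster on a large matrix because it doesn't thrash the cache.

```java
class TraversalOrder {
    static final int N = 2048;                        // ~32 MB of doubles
    static final double[][] matrix = new double[N][N];

    // Row-major: the inner loop walks each row's contiguous backing array; cache- and prefetch-friendly.
    static double sumRowMajor() {
        double total = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                total += matrix[i][j];
        return total;
    }

    // Column-major: consecutive reads land in different row arrays, so most of them miss the cache.
    static double sumColumnMajor() {
        double total = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                total += matrix[i][j];
        return total;
    }

    public static void main(String[] args) {
        // Crude timing only; a real comparison would use JMH with warm-up runs.
        long t0 = System.nanoTime(); sumRowMajor();    long rowNs = System.nanoTime() - t0;
        long t1 = System.nanoTime(); sumColumnMajor(); long colNs = System.nanoTime() - t1;
        System.out.printf("row-major: %d ms, column-major: %d ms%n",
                rowNs / 1_000_000, colNs / 1_000_000);
    }
}
```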

I keep running into a brick wall when trying to brush up my coding to anything higher level than C. Functional Programming fits me like a glove, and I'm perfectly happy bit-banging outputs on a microcontroller and carting bytes around manually. C is a logical extension of that, and I can grok it just fine. But above that, things get gnarly...

Have you considered getting into FPGA design? I'm a hardware engineer, but my favorite part of the job is getting my new hardware built and developing the code for the FPGAs.

You code in VHDL or Verilog, and the code directly represents gates and registers. You can even use variants of C that have the proper extensions, but I'd recommend one of the V* languages. One big difference is that some blocks of code will execute simultaneously on a clock edge instead of in the order presented. We also seem to use state machines vastly more than other types of coding do.

The FPGA manufacturers have core generators for complex functions that are black boxes of a sort, but you're not obliged to use them outside of meeting a schedule.

The circumstances under which programmers had to count bytes or machine cycles were always vanishingly rare. Even in the bad old days, when programming in assembler, the primary goal 99.99% of the time was pumping out functionality efficiently. Optimization was done only if necessary, and even then it touched vanishingly small portions of code.

Algorithmic literacy obviously counts for a lot. Knowing the ins and outs of how every clock cycle of a processor is spent probably isn't possible anymore. Choosing the wrong algorithm will produce problems eventually, even with lightning-fast modern processors. Rearranging C++ statements to get your integer instructions to dual-issue properly? Modern compilers do that sort of thing far more reliably than humans ever could.

In terms of the multiplication of complexity, I don't really see a difference. An API is an API, no matter how many layers of code lie between you and the hardware. And API design has improved dramatically over the decades since I started programming. (Not always, of course, but I don't imagine anyone will ever make the same kinds of mistakes that are present in the native Windows or native Linux APIs ever again.)

The single biggest change in software development isn't APIs or living close to the metal, but integrated IDEs. Most programming time used to be spent in the documentation for the APIs you were using, not pumping out lines of code. IDEs have delivered huge gains in productivity through things like prompted completion of parameter arguments and inline documentation tool-tips.

When I first learned to program in the 1960s, it was all about frameworks. Use what was available, like higher level languages, scientific libraries, disk access managers and so on. If you couldn't find what you needed, build it, but build it as a framework, because the odds were that you were going to use it again and again, and it might help others to have what you produced. That was a big point in the Louden IBM 1130 programming book, but it was a theme repeated.

For a while, computers kept getting more and more powerful and had more and more memory. Then in the mid-1960s, the minicomputers came out. If you were used to sprawling out on an IBM 360 or CDC 6600, you suddenly had to start counting bits and cycles again when you moved to a PDP-7 or Nova.

As time went by the minicomputers got bigger and more powerful. You had the PDP-6, the PDP-11, the big Data General machines, and, again, you could exhale. But, then in the late 1970s and through the 1980s, computers got small again. You had to squeeze into an Apple II or an IBM PC.

Eventually, these personal computers grew, getting bigger and more powerful, and that was the situation for over 20 years. A lot of old mainframe frameworks were resurrected and old CS theses went from musty back shelves to hot conference topics. But, then came the smart phones and tablets, and suddenly you are counting bytes and cycles again. Even the network link shrank with expensive, spotty and relatively slow cellular data.

Is this the end of the line? Maybe, but maybe not. There's the whole world of embedded computing which up until now has been the domain of specialized processor engineers writing code for microwave ovens, car engines and dishwashers. Those processors are starting to grow and it's like the late 1970s again when there were computers like the Kim and the Sinclair, except now they're called the Arduino, the Pi, or the Flora, and we're getting a glimpse of the future with products like the Nest and the AppleTV.

Is there a next step? We haven't even scratched the surface of medical and smart-material computing, where we will program at the cellular level or implement energy-collecting plasmon surfaces. (I'm handwaving here, but you can see hints if you follow modern biochemistry or materials science.)

Of course, the frontier is moving up as well. The really big computers are getting bigger and bigger, so there will be platforms for massively distributed computing that make Amazon and Google's current offerings look like IBM 1130s, and they will require high level abstractions, so one thing is sure. We will be pulling out what is left of our hair as we deal with the frameworks of the future.

Todd Knarr makes an important point but doesn't make a distinction: the people who build things out of standard libraries are at a much lower skill level than the people who fix the holes left by those libraries.

I started with machine code when microprocessors were new, and as I moved up the software food chain I left behind, first machine code, then assembler, then 3rd gen languages. But it has always been necessary to understand a level or two below the main language for the immediate task, if you are doing anything that accomplishes any heavy lifting. Otherwise, you're just a coder, not a programmer.

This is like a scene out of an ancient book called "Future Shock" by Alvin Toffler. Everyone is comfortable or stuck at a different layer or several layers of the stack. But the stack keeps growing. We need everyone from simple script only people down to assembly surgeons. Innovation happens at all the layers, but obviously the bulk of activity shifts higher over time.

The Mac made GUI popular, but at first you had to hand write the event loop. How barbaric!

Today you can download Couchbase and install and configure it in 5 minutes. Clustered even. A half hour later you can persist JSON data structures in it. Practically instant cloud application after deployment to Amazon or Azure etc. You just need to write the interesting bits on top. Simple and elegant.

For new programmers, the difference is obvious: new programmers always live in a golden age of computing where the problems of the past are now just base infrastructure.

The circumstances under which programmers had to count bytes or machine cycles were always vanishingly rare. Even in the bad old days, when programming in assembler, the primary goal 99.99% of the time was pumping out functionality efficiently. Optimization was done only if necessary, and even then it touched vanishingly small portions of code.

You're definitely grossly underestimating how often you had to count bytes and cycles in the olden days. For 8-bit computers, optimization and byte-counting were often necessary just to get efficient, working functionality, especially if you're talking about games (which were the vast majority of programs written for 8-bit computers).