I started programming in C++ at uni and loved it. In the next term we changed to VB6 and I hated it.

I could not tell what was going on: you drag a button onto a form and the IDE writes the code for you.

While I hated the way VB functioned, I cannot argue with the fact that it was faster and easier than doing the same thing in C++, so I can see why it is a popular language.

Now, I am not calling VB developers lazy; I am just saying it is easier than C++, and I have noticed that a lot of newer languages are following this trend, such as C#.

This leads me to think that as more businesses want quick results, more people will program like this, and sooner or later there will be no such thing as what we call programming now. Future programmers will tell the computer what they want and the compiler will write the program for them, like in Star Trek.

Is this just an under-informed opinion of a junior programmer, or are programmers getting lazier and less competent in general?

EDIT: A lot of answers say "why reinvent the wheel?" and I agree with this, but when there are wheels available, people are not bothering to learn how to make the wheel. I can google how to do pretty much anything in any language, and half the languages do so much for you that, when it comes to debugging, people have no idea what their code does or how to fix the error.

That's how I came up with the theory that programmers are becoming lazier and less competent: no one cares how stuff works, just that it works, until it does not.



"Is this just an under-informed opinion of a junior programmer or are programmers getting lazier and less competent in general?" - this isn't an either/or; both are true (just not for the reasons you say).
– Jon Hopkins, Jul 1 '11 at 11:17


How can anybody answer this without disproving the title?
– user1249, Jul 1 '11 at 11:19

29 Answers

No, developers haven't got lazier or less competent. Yes, there is a steadily decreasing need for actual development, in the sense that you know it. And yes, this is very much because businesses want quick results, and why shouldn't they?

However, there is an end-point. There will always be a need for some developers.

A lot of requirements are the same across different projects. The one you're talking about is UI code. Most UIs are made up of a specific set of fields - textbox, checkbox, radio, select, etc. - and there is really no point in developing these from scratch, over and over and over. So abstraction layers are put in to take away all of that boilerplate code.

Likewise the data layer, which is usually nothing but Insert This, Delete This, Replace This and a large number of different views of the same data. Why keep writing that over and over? Let's invent ORMs.

The only thing you should be developing is code that is unique to the business you're developing for.

But there will always be that uniqueness - where there isn't, there is a business opportunity - and there will always be a need for people to write code.

All that said, also bear in mind that there is a lot more to being a developer than writing code. Whether you are coding in pure assembly or knocking together Drupal components to make a content-driven site, you are translating the business need into something that the computer understands.

The most important part of being a software developer is being able to understand the business requirement well enough to explain it to the computer.

It doesn't really matter what language you're using to explain things to the computer, it only matters that you can. And this is hard work, nothing lazy about it.

There is a difference between being a developer and a programmer.
– Raynos, Jul 1 '11 at 10:55


+1. Exactly. Working software is what you're paid for. Code is a means to create software, an artifact. Pure "programming" is the easy and fun part of creating software.
– Joonas Pulakka, Jul 1 '11 at 11:00

When you define programming as 'knowing about memory usage, pointers, etc.', then yes, I guess that will become less important (as 'knowing about HTTP, OpenID, Unicode' gets more important).

But, in my opinion, that is all 'accidental complexity', and the real job of a programmer is making machines solve problems, by making sure one understands enough of the accidental problems to achieve the task. By that definition, someone conversing with a Star Trek computer still needs to be a programmer (i.e. have the same virtues as now).

@Raynos: soooo true. Especially depressing when these people are team leaders and make guidelines like 'when the data to send is less than X bytes, use GET; when more, use POST'.
– keppla, Jul 1 '11 at 12:10


@keppla - Your issue is not that your team leader didn't understand HTTP, it's that he was unwilling to change his opinion in light of evidence that he was wrong (assuming you tried to explain things to him). You can't know more than everyone who works for you about everything - the real crime is not accepting that someone else knows more about something than you.
– Jon Hopkins, Jul 1 '11 at 13:21


"Tea, Earl Grey, Hot" is declarative programming. It's the computer's job to find a contextually relevant outcome based on reasonable expectations. Producing steam from "hot tea" in this type of language would be an error on the part of the computer's design team, not the programmer. It should use the contextually relevant case unless a specific query is entered.
– diadem, Jul 1 '11 at 16:39


@Diadem: even when it's declarative, you need to know what to declare, and as a programmer, imho, you would not expect the computer to guess from the past what you will do next, because you will do something new. The interface that interprets your wishes is for end users.
– keppla, Jul 4 '11 at 6:23


@Zan Lynx: Maybe a better example: make the computer warn you every time someone is abducted (the computer does not seem to care about that in TNG). 'Computer: inform me when someone is abducted.' 'Please define abducted.' 'When he is taken against his will.' 'Please define will.' Etc. To come up with a solution like 'Inform the officer in charge when someone's location changes from known to unknown, and there is no log that a transportation officer beamed him away or that he entered a shuttle, and the ship is not in dock,' you still need a programmer's mindset.
– keppla, Jul 4 '11 at 13:35

Is a mechanic lazy and less competent because he is using a hydraulic wrench?

Imagine two guys, let's say Brad and Pete. They both work in garages, changing tires on a daily basis. Brad is a smart guy; he knows that using better tools can get his job done better and quicker. Using the hydraulic wrench helps him change more tires. Customers wait in a shorter queue - everybody is happy. Brad also knows that this wrench is sometimes too big and cannot help him with different kinds of screws.

On the other hand, Pete says that the hydraulic wrench is blasphemy and Brad is not a "real mechanic". Sure, Pete can only do half of what Brad does, but he does it the "right way".

Now what do you think: which garage would customers choose? The one with a 20-minute wait or the one with a 40-minute wait?

It's pretty similar with programming. C++ is a good language and has its purpose (mainly performance). What I like about languages like C# is that I can focus on the problem and think about the algorithm without all the noise C++ brings, like ambiguous compiler warnings, undefined behavior, et cetera. Developing is getting more and more complicated than in the old days of mainframes and the first PCs, yet the human brain stays the same - pretty much dumb. One app can run in the cloud, on mobile, on the desktop; there are a lot of dependencies, security issues and other problems. I want a better tool so I can focus on the more complicated problems and solve them.

I don't think the analogy works, because both Brad and Pete will still know how to remove the tire, and everything involved (wenches, wrenches, and beer).
– Kristofer Hoch, Jul 1 '11 at 16:08


+1 Great answer. I would add that no matter what tool you use, if you understand what it does, you will do your job right. On the other hand, if you don't, no matter how much of the work is being done by the tool, at some point you are going to screw something up.
– Jacek Prucia, Jul 1 '11 at 19:40


@Kristofer: Maybe it'd be better if Pete knows some electronics. While Brad only knows how to use the diagnostics computer and read off that the O2 sensor has gone bad, Pete sees that the sensor wire is a bit burned, gets out the meter to measure it and realizes that the voltage regulator has gone wonky and is burning out O2 sensors.
– Zan Lynx, Jul 1 '11 at 19:42

Programmers are not getting lazier. Programmers have always been lazy. Being lazy is part of the fundamental nature of the job. The problem is that people assume that being lazy is a negative. Being a "lazy" programmer is a virtue.

Remember the old adage, "Work smarter, not harder." This is the fundamental drive of programmers.

The guys who built and programmed the first computers didn't do it because they liked doing hard work, they did it to AVOID even harder work. (doing pages of calculations by hand)

Being 'lazy' is one of the fundamental reasons why programmers program. It is why we write new and ever-higher-level languages, better and better debuggers and IDEs, shell and build scripts, etc.

A programmer looks at a problem, or anything he or she does, and thinks:

"can I automate this?",
"how much time would that take?",
"how much time would that save me?"

We do this because we are lazy; we don't want to do a repetitive and boring task when we could be doing things that are far more fun.

If programmers were not lazy then no programmer would have ever seen the need to write a single new language or compiler.

As for the notion that a programmer is "lazy" because he has to "look things up": so what, who cares? The assumption that it is more work to learn every nuance of a particular language (and never have to look something up) than it is to find and use what you need when you need it is a fallacy. Besides, the process of looking things up is the process of learning, and the very reason sites like this exist.

If someone wants hard programming work, I would tell them to go hand-code some raw machine code in hex.
Done that? Want something harder? Now go debug it.

First of all, calling people who use, for example, languages with garbage collection lazy is kind of like calling people who drive cars with automatic transmission lazy. IMO it's a bit ridiculous.

As for competence, programming is a much more popular and egalitarian job than it used to be. So yes, there are many newcomers who lack knowledge. It doesn't, however, mean that there are suddenly fewer competent programmers. In fact, there are more. You're just looking at the wrong side of the bell curve.

People who drive automatics ARE lazy; there is nothing ridiculous about that. A manual with heel-and-toe gives a lot more control and performance out of the car, but requires a lot of skill and extra work.
– Coder, Jul 1 '11 at 11:18


@Coder: "requires extra work" - on highway it doesn't, in traffic jam it does, but then it gives you no advantage anyhow.
– vartec, Jul 1 '11 at 11:21


Manual transmissions also provide better fuel economy on the highway, though this is less true with lock-up torque converters.
– Dave Markle, Jul 1 '11 at 12:34


@Dave: actually, modern electronics have made the automatic more efficient on average. My Ford Fusion with the same options was rated almost a full mile per gallon less. I am sure there are times where the manual is still better in the micro, but overall the automatic has the lead.
– Chad, Jul 1 '11 at 15:26


@Coder - If you think driving a manual requires "a lot of skill", you need to look around at the thousands of incompetent drivers on the road with manual transmissions. ;)
– techie007, Jul 1 '11 at 16:55

I'm tempted to say, "yes, uninformed opinionated junior programmers have become lazy and less competent", but let's try a serious answer:

Many things have become easier, but more is expected from us. I'm currently creating a web app that has a lot of features typically found in well-made GUI apps (tabbed views, editable & sortable grids, Excel export, etc.). The tools I'm using (ExtJS etc.) make it reasonably inexpensive to create such an app.

Ten years ago, it would have been almost impossible, or at least very expensive, to create such an app. But ten years ago, a simple HTML form with an HTML table would have been sufficient for the customers. Today, with the same effort, better (or at least more beautiful) results are possible, and customers expect to get them!

In general, a software developer today needs to know more languages than a software developer 20 years ago. Back then, something like C and SQL was sufficient. Today, I'm using JavaScript, HTML, Groovy, Java and SQL all in the same project.

Programmers are becoming less competent and lazier in some ways, but more competent in others, though the C++ / VB divide isn't the reason or a symptom in my mind.

Using a GUI builder isn't lazy, it's just different, it's about tools for the job in hand. If an assembler programmer called a C++ programmer lazy you'd call bullshit on that (rightly) and the same is true of C++ and VB. VB allows you to do some stuff quickly at the expense of some control. The barriers to starting coding in it are certainly lower but that's a very different thing to laziness - you just learn different things and apply them in different ways. VB programmers are no more lazy than C++ programmers are unproductive, they just work and produce in different ways.

On the wider point, the education of programmers is generally better now than it's ever been. The idea of not using source control, for instance, is pretty abhorrent to pretty much everyone now, where 10 or 20 years ago that wouldn't have been so true. Similarly, they're more likely to understand and want to use automated unit tests, continuous integration and so on, so in that sense they're more competent than they were.

But what I think has changed is that people no longer know how to problem solve the way they used to and that's true of pretty much any mainstream language. The instant response to any issue now is Google and while that's great and works 95% of the time, I see too many programmers who have no idea what to do when it doesn't.

It's not that they don't understand the fundamentals (they don't but that's not actually that big a deal), it's that they can't break down the problems in such a way that they can even work out what fundamentals they need to be getting to grips with.

Pre-Google you had no choice. Your resources were your team, a few dozen physical books you might have access to, and your brain. That setup means that if you found a problem, the chances are you were solving it yourself from something close to first principles, so you either got pretty good at it or pretty unemployed quickly.

And this was true regardless of what language you used. VB is high level and hides a lot, but that means that when it came to problem solving there was more you needed to work around. If something didn't work you had to get more creative and work harder, as you had less control. As a VB programmer (and I speak from experience) you didn't know less than the C++ guys; you just knew different things, but you both knew how to solve problems.

But it's probably harsh to see it as a significant criticism of programmers these days, they don't develop the skills because they don't need them, but it is a weakness compared to those who picked up the skills from when they were necessary.

@Jon-Hopkins, I would say that the massive uptick in Google-dependent programming has to do with the massive number of APIs that we need nowadays. It's too difficult to keep track of it all. (But, in essentials, you are correct.)
– Jarrod Nettles, Jul 1 '11 at 13:16


@Skeith - Building a UI takes up about 5% of an average application developers time. What do you think they do the other 95%? The designer does not help much with backend code. You are clearly attacking a straw man. Most people know the tools they need for their job, or else they would not be employed.
– Morgan Herlocker, Jul 1 '11 at 13:23

I note from your profile that you're 23 years old. Let me put my teeth in and give you some perspective from someone about twice your age who's been doing this a very long time:

It used to be that there was a lot less of everything, starting with computing power, storage and network bandwidth, if you had a network at all. Those scarcities put limits on what you could reasonably do, making it much easier to wrap your head around everything. The software we run today is far more capable than things I worked with 25 or 30 years ago, and those capabilities mean there's a lot more of it. That makes gathering a fine-grained understanding of everything a lot harder for one person to do. Part of that has to do with the fact that things are going to continue to increase in complexity and part of it has to do with the side effects of age.

The computing ecosystem is becoming a lot like biological systems: humans are more complex than single-celled organisms, and parts of us have to specialize if we're going to get good at doing anything. My brain cells are awfully good at brainy things but would be lost if plunked into my kidney and expected to do renal things. Similarly, the guy who's good at writing digital signal processors might not have any idea how full-text indexing works, because that just isn't his specialty. But both could evolve a bit and learn to understand it if they needed to, but there are limits to how far you can spread yourself and still be effective at what you do.

...no one cares how stuff works just that it does until it does not.

When you have a job to do, you often have to take the leap of faith that a tool you're using (library, RDBMS, whole subsystem, etc.) works as it should. One of the things experience brings is the ability to pick which rabbit holes you're going to run down to ferret out failures in your tools, fix the problem and then get back to what you were doing.

Now, I am not calling VB developers lazy; I am just saying it is easier than C++, and I have noticed that a lot of newer languages are following this trend, such as C#.

That's all a matter of perspective. I was around to see C++ come into existence, and it follows that trend as well. C++ makes things much easier than C does, C makes things much easier than assembly and assembly makes things much easier than writing opcodes by hand. As someone who's written a lot of assembly and assembled a few things by hand from scratch, that would put you, as a C++ programmer, three steps down the "it's easier" path.

+1 pointing out that it's a matter of perspective. I was around when UNIX first came out of Bell Labs and there was a considerable amount of 'tsk tsk'ing that high level languages like 'C' were dumbing down the ancient and esoteric art of writing operating systems, and this surely would lead to no good. As our tools get better and take care of more mindless bookkeeping for us we can use the time saved to tackle harder, and more subtle problems.
– Charles E. Grant, Jul 1 '11 at 18:29

One of the greatest strengths of the Visual Basic language is that a beginner can learn to do many useful things fairly quickly.

One of the greatest weaknesses of Visual Basic programmers is that they will learn to do many useful things fairly quickly, and then they will stop learning anything.

When I taught programming, the first exercise on the first day of class was how to build an application in Notepad and compile it using VCC or VBC. Yes, these are things we (as programmers) do not do on a daily basis, but we should understand what happens when we press "F6".

Programmers are not (generally) getting 'lazier' so much as we are expecting to get more out of our tools. I have no need to type "get/set" 10,000 times a day. I LIKE that Visual Studio and other tools like CodeSmith and ReSharper do what I already know how to do, so that I can apply my effort to figuring out how to do "new" things. That does not make me lazier; that makes me "innovative".

As professional developers we should not be 'wasting time' reinventing the wheel, but we should clearly understand what goes into making the wheel we are going to be using. These are things we 'should' learn as student developers (but unfortunately, often do not). If a developer doesn't understand some "black box" technology, does that really make them less "competent"? Most developers only basically understand how an ODBC driver works; they just understand 'what' it does. Do I have to know how a transmission works to be a competent driver? I would say not. Does it make me a more competent car owner to know this? Yes.

The need for Rapid Application Development (obligatory wiki link: http://en.wikipedia.org/wiki/Rapid_application_development) has meant that developers write less code and newer developers understand less, because they don't need to understand how to implement a linked list when they've got something more high-level to focus on.

I can't catch, kill, skin, butcher and cure meat, and I doubt the guy in cafe downstairs can, but I still get my bacon sandwich from him, much like business guys get their apps from developers who don't know about pointers (like me!)

It has been said that the great scientific disciplines are examples of giants standing on the shoulders of other giants. It has also been said that the software industry is an example of midgets standing on the toes of other midgets.
— Alan Cooper

A good software developer is not one who reinvents the wheel. He is able to use the tools that have been built before him. He doesn't waste time rewriting the same old boring stuff that has been written hundreds of times, becomes tiresome quickly, and probably already exists out there in a higher-quality version.
If you give developers a language that already has round stone disks bundled, chances are good they won't spend too much time reinventing wheels. If I got a cent for every string copy routine ever written in C, I could probably buy the whole software industry.

Laziness is in fact one of the three great virtues of a programmer. The tools you speak of were built by good programmers for good programmers, to reduce redundancy and boredom and thereby increase productivity and motivation. Such tools can in fact have negative effects on beginners, as they inhibit a deeper understanding of the programming aspect they simplify.

Not joking - what you're experiencing is a sort of rite of passage for developers. It has been ever since the first higher-level languages supplanted assembly. Back then you'd have heard all the ASM programmers complaining about the same thing. Five years from now, all the Ruby on Rails devs will be complaining about how lazy yet another crop of new tools is making people.

This refrain will be repeated until the machines destroy us all:
"Does it seem like technology X is making developers lazier and worse than the technology Z that I've always used?"

The good news is, even though compilers have come a long long way, people still need assembly and C and all the other old stalwarts for many things... just not the majority of cutting edge technology innovation. If you want to be on that cutting edge, I suggest you update your skill set.

From my experience, yes and no, but it's not the fault of languages; it's the fault of the developers themselves. I have worked with many developers that cared nothing about doing things right, improving themselves, or really doing anything other than churning out the same crap they have done for years. Trying to get these people to improve is like talking to a brick wall - half the time they're ignorant of anything that they haven't used in the past or are totally unwilling to "take a chance" with something that could improve their productivity.

More advanced languages aren't the problem, it's programmers who don't treat this profession as a constantly evolving craft. You don't have to be intimately aware of everything new, or jump on every new bandwagon, but you should at least try to become better at what you do.

For a concrete example: I'm a .NET Developer by trade. I would expect a competent .NET developer to be aware of things like LINQ, Entity Framework, WPF, MVC and the like; they don't have to have used it, or be pushing it at the workplace, but at least a passing understanding of "This exists" is better than absolute cluelessness that I see far too often.

I've only been coding for about four years at work now, and that has been almost entirely C#. I did learn C++ in college and uni, but I wouldn't be able to do much with it now.

So for GUI development, it could be seen as lazy. But then again, could it not be seen as letting you focus less on coding that part and more on developing the logic of the application itself?

So maybe, rather than becoming less competent, programmers are moving their focus, probably a lot of it towards communication and distributed systems, e.g. cloud computing and SOA. Though these could just as well be the thoughts of an intermediate programmer.

It is probably true that the barrier to entry for programming jobs has been getting lower each year. For instance, it is now possible for engineers whose specialty is not primarily software, and for artists, to write code using scripting languages.

This implies that the level of competence has actually increased, if you consider the breadth. That artists can program also means there are now more programmers with artistic skills.

by competence I meant programming; all other skills are irrelevant except mathematics.
– Skeith, Jul 1 '11 at 10:51


@Skeith - "by competence I meant programming, all other skills are irrelevant except mathematics" - this is so wrong. One of the biggest improvements in the industry in the past 30 years is that communication skills are now understood to be absolutely key. Give me a basically competent programmer with great maths skills or one with great communication skills, and I'll take the guy with communication skills every single time.
– Jon Hopkins, Jul 1 '11 at 11:22


@Skeith: So you only need to know programming and math to be a good programmer? What world are you in? You need to know how to use a computer, how to communicate with customers and other programmers, how to write documents, etc. What you don't have to know is math. Sure, there is some overlap between math and programming, but knowing the programming part alone is enough.
– Martin Vilcans, Jul 1 '11 at 18:23

There is a difference between a "programmer" and a "real programmer". Please don't call HTML a programming language, yet there are lots of "HTML programmers". Every one of you (programmers/developers) can run an experiment with colleagues: just turn off the Internet (or rather, don't allow them to use search engines), and you'll see that a huge number of "programmers" will sit without a job. All they know is that if they need, for instance, searching in text, they should search for 'text searching in %language_name%'. They can't tell you the differences between the Boyer-Moore and Knuth-Morris-Pratt algorithms.

So, IMO, programming means solving problems and knowing at least one programming language very well, with its 'STL' and other important things. Programming is an art and a way of life; it's not something that can be done by everyone.

Sorry for more sarcasm than needed, but I think this article says it better than I can.

Am I wrong?

UPD: The main and important thing is knowledge of the fundamentals, such as algorithms, data structures, etc. How many of you could implement the libraries/standard functions in case today's were accidentally removed? IMO, a programmer should use well-developed, well-debugged 'alien' code (libraries/frameworks/etc.), but should be able to reinvent the wheel, always!

My only issue with this is that I've worked as a programmer (a proper programmer, not web/HTML/script) for 20 years and have no idea about the Knuth-Morris-Pratt algorithm. For most programmers this sort of theory doesn't impact their day-to-day lives, as this stuff is bundled in libraries.
– Jon Hopkins, Jul 1 '11 at 12:26


The standard libraries I work with have thousands of classes and hundreds of thousands of lines of code. Are you saying I should be able to reimplement all that without documentation? If not, you need to clarify how big something can get before it ceases to be a wheel.
– Peter Taylor, Jul 1 '11 at 12:40


Humans don't have the lifespan required to learn how to reinvent all wheels invented so far, nor learning how to reinvent the wheels being invented right now.
– Macke, Jul 1 '11 at 12:43


@Dehumanizer: I will hopefully be trained and have more than a C compiler at my hands to save the world, otherwise I'll be screwed anyhow. (BTW Why even a C compiler? Why not just a USB-stick, an oscilloscope and a 9V battery? Seriously....)
– Macke, Jul 1 '11 at 13:03


Just turn off their compilers and you'll see most people just sit around while the REAL programmers type out machine code straight to a file!
– Philip, Jul 1 '11 at 15:14

Regarding VB being easy to use, and lazy programmers picking VB because of it:

I think VB is surrounded by one big myth: that it is easy to use. This myth was originally somewhat true: back in the days around 1991-1994, when dinosaurs walked the earth, there were only two real RAD tools around, VB and Delphi. They were quite similar, but NOTE THIS: Delphi and VB were equally easy to use! The only notable differences between them were that VB had a completely illogical syntax and produced incredibly sluggish programs.

C/C++ GUIs were written either in MFC or in raw Win API. So VB was certainly easier to use than the Microsoft alternative. Then the rumour mill went like this:

VB is easier to use than Microsoft C/C++ / Win API. ->

VB is easier to use. ->

VB is easy to use. ->

VB is the easiest.

This rumour then lived on, even though Delphi was always equally easy, if not easier, since Pascal is a sane and logical language.

Then in the late 90s Borland released a C++ equivalent to Delphi: C++ Builder. Now there were 3 equally easy tools. Around this time, the few remaining rational arguments to use VB died. Yet the myth lived on still. "VB is the easiest".

Then Java came along and there were several RAD tools for it as well (and for its Microsoft fiasco version called J++). Yet the VB myth lived on.

Then Microsoft added RAD support for C++ too, and also came up with C#, baking it all into one big goo called .NET. Since the VB myth still lived on, they were able to trick old VB developers into using VB.NET instead of C++ or C#, even though VB.NET was quite incompatible with earlier VB versions.

Today, VB is a completely redundant language. The RAD tool is not easier than any other RAD tool. The language syntax is downright horrible, so bad that it actually encourages bad program design and bad programming practice.

There is a huge variety of activities lumped together under the banner of "programming", and an ever larger number of workers involved at the "technician" end of the scale. You don't need to be capable of writing compilers, or even of selecting from among a set of algorithms to solve a particular problem, to put together a website in PHP. Industry/society needs lots of people producing said websites (apparently), and also a certain number of programmers working on harder problems. That second group isn't lazy or incompetent as a whole, or our aeroplanes would be going down in flames, ATMs delivering random amounts of cash, X-ray machines delivering fatal doses of radiation, financial markets going berserk, etc. Hang on, forget about that last one :-)

One side of this that I think all the other answers are only glancing at is that this is just the generalized trend going from low-level languages to high-level languages.

Yes, the industry of software is shifting from low-level languages to high-level languages, always has, and will probably continue to do so as long as we build better tools. Yes, this could be considered getting lazy, as you had to work really hard to do stuff that is basic by today's standard. But I wouldn't say less competent. The competency is simply moving from implementation to design.

Low Level
It's somewhat subjective, but at a low level, you are working closer to the hardware. There is less hand-holding and assumptions of intent. The basic tools are presented and getting things done is left to the programmer. Low-level languages came first of course, and are usually the tools of the old guard since the higher-level languages didn't exist when they started. There will always be some low-level development. But I wouldn't make a website in assembly.

High level
The goal at high levels is to automate the basic functionality and make programming simpler. It lowers the bar to entry for new programmers, gets stuff done faster, and standardizes how we represent and process data, often with some overhead. Consider a string. In the early days, someone probably used 1-26 for a-z, using only 5 bits per character, and just had to know how long his words were. Then the ASCII standard was developed and we had C strings with a terminator character. Now we have objects that handle things to avoid buffer overflows, and special subtypes that disallow escape characters.
Or a loop. A "for" loop is ever so slightly higher level than a "while" loop. And a "while" loop is really just a structured way of writing GOTO. Which is shorthand for an opcode number specific to a chip.

Also,

Future programmers will tell the computer what they want and the compiler will write the program for them like in star trek.

Welcome to the future! That's exactly what compilers do. In the olden days people had to write out the machine code by hand. Now we've automated that and simply tell the computer how to write the machine code for us.

I think somewhere along the way you lost sight of what programmers get paid to do.

Our deliverable is not Code, it is working software.

We are not building furniture where hand-cut dovetails somehow impart extra value because of all the manual "craftsmanship" that went into it.

We get paid to solve business problems on computers. If you can deliver the same product in less time for less money, then I think it is our OBLIGATION to drop the pretense that C++ programs are superior simply because they are more complex to build.

Ratio of (core program developers/developer count) is decreasing because:

Tools are getting easier, so less talent is needed for the same problem

People are getting used to IT technologies, so more of them are willing to spend money on customized tools

Computer Science literature is growing exponentially; specialization and division of labor are increasing, so there are no more "Aristotle" figures who talk about everything (and because of abstraction layers, they don't actually need to know everything)

More jobs are offered, so the filter is looser

More automation is needed in every walk of life; demand is increasing and supply is not keeping up

The ratio of developers to the general population is increasing.

So people are not getting lazier or less competent; the average falls because computing is a more open field now.

A lot of answers say why re invent the
wheel and I agree with this but when
there are wheels available people are
not bothering to learn how to make the
wheel.

You are undermining your entire point via the fact that somehow, wheels still get made. I see your point, but I've noticed that in any discipline, there are enough people interested in the low-level stuff to keep that going. For instance, I use Qt to build GUIs. That tool didn't arrive by magic; people developed the link between the low-level stuff and the stuff I do. Do fewer people understand the low-level stuff? Yes. Fewer people can also kill their own food or fix their own car, but society manages to survive.

Before the 1940s, computers were hard-wired circuits. Then von Neumann came up with the stored-program idea. I am sure those programmers at MIT thought he was going to degrade their trade into something too easy. Then came assembly, then FORTRAN, then C, then Ada, then C++, then Java, and so on. The point is that the purpose of a language is to allow further and further abstraction. That has always been the goal, and it is the reason that C++ caught on and then Java after it.

My biggest beef is that universities aren't teaching students anything about computers anymore. I don't hire C# programmers if they don't know C++ like the back of their own hand. Why? Because it is too easy to be a bad programmer the more abstract the language becomes. They need to understand pointers, memory management, dynamic binding, etc. inside and out before they could possibly understand C# to the level where I trust them to contribute to our code-base. I also make them struggle through makefiles before I allow them to use Visual Studio. That said, I love C# and a good IDE, but they are good as tools only when they are properly understood. In my opinion, an abstraction is most useful when you understand the particulars which are being abstracted; that's a very old idea, see Thomas Aquinas on the relation of abstraction to particulars.

I think another good exercise for entry-level developers is to make them write a few applications using the Windows API. Then, after they finish, have them make it object-oriented, where every form inherits from a generic window class. Have them encapsulate the event loop and put some function pointers shooting back to their form class. Then say: good job, Microsoft already did this for you, it's called System.Windows.Forms. Have fun.

If they are to be web developers, have them write a few CGI programs so that they understand POST, GET, etc., and then script out the page. It makes ASP.NET and PHP make much more sense.

If they are working on something lower level on a network, make them write a few apps using sockets before introducing them to the libraries that have already done it.

I have found that this improves productivity in the long run because it gives them the tools and correct intuitions to solve their own problems.

Universities are supposed to be doing this, but they aren't so we have to.

That said, I agree that it is becoming harder and harder to find programmers that are worth a damn coming out of college, mainly because they weren't weeded out by being forced to write recursive algorithms and linked lists. Also, they usually have only had Java or .NET courses and therefore don't understand a damn thing about the way they work. Still, the abstraction is quite useful when properly used.

-Stop changing subject!
-I thought our love was special.
-Stop changing subject!
-I'm not changing the subject.
-You are! I'm trying to talk about your inability to understand what we are talking about.
-No, it's not even close. My favorite Beatles song is Across the Universe. What is yours?

I believe that only those programmers who don't get this point are kind of doomed.

They can't answer this: what are the differences between the Boyer-Moore and Knuth-Morris-Pratt algorithms?

And by "this point" I mean: it's wrong to try to outmatch computers at what they do best, algorithms. Instead, the programmer is supposed to aid the computer with context, telling it about the problems we are trying to solve.

Tools themselves don't fix problems, they just (sometimes) make programmers more efficient.

Absolutely not. In my experience, using the correct development tools allows for rapid application development without sacrificing quality. In fact, I would argue that, for the most part, quality has increased because of our "over-reliance on tools." In addition, development tools can decrease the learning curve and introduce more people to programming. This, of course, has a downside, as there are many more novice programmers, but all in all, it aids the creative process and pushes technology forward.

I started programming in C++ at uni and loved it. In the next term we changed to VB6 and I hated it.

I could not tell what was going on, you drag a button to a form and the ide writes the code for you.

Yes, indeed. Your experience at uni speaks to the very caveat that I mentioned.

If you don't know what problem your tool is solving, or you're incapable of troubleshooting things when your tool has problems of its own, then that's a huge red flag. This circumstance doesn't necessarily imply laziness, but it probably implies inexperience.

I think there are two flavors of programmers. There are programmers who just program to get the job done, maybe because of a deadline or maybe just to be more productive. I would say they are lazy. I simply believe they have no interest in "how" a computer does what it does or "how" a program does what it does.

Then there are passionate programmers, like myself. Passionate programmers, like myself, like to know exactly what is going on in the CPU. Just like a good psychologist tries to figure out what is going on in the human head, progchologists, like myself, want to know what is going on inside the CPU. So we learn, dissect and analyze a program and use tools, such as Reflector and disassemblers, to try to figure out how a program works.