Posted
by
Zonk
on Saturday February 03, 2007 @05:52PM
from the comedy-and-software-are-in-the-same-club dept.

GoCanes writes "Salon's Scott Rosenberg explains why even small-scale programming projects can take years to complete, one programmer is often better than two, and the meaning of 'Rosenberg's Law.' After almost 50 years, the state of the art is still pretty darn bad. His point is that as long as you're trying to do something that has already been done, then you have an adequate frame of reference to estimate how long it will take/cost. But if software is at all interesting, it's because no one else has done it before."

It's essentially the problem with any large company. Any project coming out of a large company spends 90% of its time in committees and debates, and generally gets watered down to the few things those committees can agree on. Personally I disagree with the idea that two coders aren't as good as one. One coder rarely has both the motivation and the insight to see clearly his way through a project (getting stuck on a problem and having another viewpoint is ALWAYS helpful). But there is definitely a diminishing return as project teams get larger.

This is a point I've come to also after 72 years, the last 35 or so of it coding this and that, although not much recently, as I seem to be fading into the dim sunset of SS mentally.

When working in assembly, it may be that one person is the optimum number of coders. I've done some never-before-done stuff in assembly several times in the early years, sometimes with hand assembly (on an 1802 board), where you look up the mnemonic and enter the hex equivalent in a hex monitor. It took me about 6 months to fine-tune about 3k of code, but it was still running 12 years later when I last checked in at that station, and still saving the station 2-3 man-hours a day while giving them a better air product at the same time.

But with a higher-level language, I think two can be more productive, particularly when one knows what he wants to do and the other knows how to do it once it's properly outlined. Many times the coder himself is simply too close to the code to see the job it has to do, but the partner in turn has a good idea of what it's got to do. The genesis of at least two fairly well-known Amiga programs was the mind of a younger man in another dept at the tv station: he would hack up what he thought might work but didn't, but once I knew the requirements, the final code more than likely came from my keyboard. He had the imagination that I lacked, possibly due to my advancing age, and was in turn concentrating on his job's duties, which I wasn't always aware (I had other responsibilities too) were being done by less than optimal methods. We sure made a good combo crew though.

I have NDI how many man-years are in Vista right now, but I dare say it is a substantial investment in both time and programmer salaries. I'd also wager that at least 75% of any one programmer's day was spent conferring with other programmers about the best way to do it and get it done within the generally immutable confines of the .h header files. This is NOT, to me, the best use of the programmers' time, so the what-it-does and the how-it-does-it really ought to be separated in any large project.

As for re-using known-good code ideas, or a 150-line snippet here and there, it is to be encouraged at every staff meeting. Re-inventing the wheel is not a good use of a programmer's time and, as others have said, only serves to introduce new bugs that then have to be run down and fixed. Programmers really should get over the "I can write it quicker" attitude and spend more time reviewing older code to see if it can be recycled. There is much knowledge in 10-year-old code that's still in use every day.

Will my little treatise make any difference at the end of the week? Don't be silly. This is, after all, /. :-)

"But if software is at all interesting, it's because no one else has done it before."

"Interesting" to me means something new and/or unknown... mostly. There are exceptions. Treading new ground always requires greater effort. If I cut my way through virgin jungle, then those who follow have a path.

I take game programming classes. One of the instructors made some very good points related to innovation. His context was games, but since my background is business application programming, I can easily see how it applies here.

When you innovate in a game, make only one, maybe two, innovations. Otherwise you skew so far away that you usually end up a complete failure. Applying it here: sure, keep things interesting by doing some piece new, but keep it manageable by keeping the rest of it "boring". You gain predictability while retaining "fun".

Hah, game programming classes. You won't find one person that's important in the industry that's taken those (grunt work is OK for a lot now, I guess). As for your instructor's words, that's the EA method right there. You want to be part of that? Games that fail, fail for a number of reasons that aren't because they did some things that were innovative. Poor implementation, yes, but that's not because they did too many new and interesting things. If you poorly implement an idea you're going to fail (or at least

Actually, every instructor I've had works in the industry. Not *DID WORK*... but *WORKS*. Classes are at night. It's in Austin, so there are plenty of studios to pull from. I've had instructors who have worked on games from all eras and genres. Some of the companies that list represents: Sony and SOE, Midway, NCSoft, and Microsoft. Plenty who have started their own studios after having worked at bigger ones, too.

It's not a degree program (yet), but I'm not too worried about that since I already have a CS degree. For me, it's more about having fun, learning some new stuff, and making good contacts for when I'm ready to jump into the industry.

Check out the list of names on the Advisory Board and the list of Instructors. There are some influential names on that list.

Daunting at first. But once you get some instruction, it is actually a whole lot easier than it seems. I've even learned to make 3D models, another thing that, once you find out how to do it right, is really pretty easy.

In fact, I think that some of my business experience helps me more than others in the program. I have a better feel for structure within a process than most of them. Scope / Requirements / Design / Code / Test translates into the game world as Pitch / Game Design / Technical Design / Code / Test (with Art being like having a Web Tech team that does the HTML and Styles for you).

But if you ever did any old school Windows programming (where you had to actually hand roll your event loops), that's basically the core of game programming. Everything else is event handling (fire event, score event, death event, etc.) and calling libraries (graphics, sound, etc.). Granted, that's boiling it way down, but equating it that way should give you an idea of how easy it really is.

I read somewhere that in science fiction writing this is called "The Tooth Fairy Principle": don't introduce more than one exotic technology or idea. I immediately realised that it applied even more strongly to software development. New areas represent areas of high risk; adding even a few to a project can change the risk from moderate to very high. I've participated in a few projects that broke this principle... as usual, commenting on the risk this implied only made me sound like a Cassandra when the prediction eventually bore fruit.

However, the major reasons I see for software projects becoming late are: clients repeatedly wanting to change design after the design phase (in one surreal case we had a client change a fundamental design issue 24 hours before going live!), poor resource allocation (a very large subject), management saying yes to unrealistic deadlines, bleeding edge technology (Tooth Fairy Principle - high buzzword compliance).

Respectfully, I have to disagree. Some of my best ideas have come from pondering over a problem. Pondering can be effort. It's not like daydreaming. To think about a problem and apply logic to try and come up with a resolution requires effort in many, if not most, cases.

I believe his point isn't that you're not doing work, but rather that scheduling pondering is impossible. Otherwise, give me a fairly firm estimate of when you will either prove P = NP or prove P ≠ NP. Logical deduction isn't precisely the same as "resolving the unknown". One doesn't provide a timetable for when the Twin Prime conjecture will be solved. I can apply logical deduction to lots of problems, but I can't necessarily provide a firm estimate of when I'll find the solution to a problem.

Any time you provide an estimate of how long it will take to do anything in "problem solving", you are using statistical conjecture about how long you think it should take, given that you've solved other similar issues. How long will it take me to resolve a logic puzzle? How long will it take to construct a proof of something? You think logically on those, but you don't provide a schedule. If you tell me you're going to give me 30 different distance-rate-time story problems geared for a high school freshman, I can tell you that I'll be done in about an hour. If you tell me you'd like me to prove Fermat's last theorem without using reference material, I know it's true, and I know that I can't provide a schedule for it; even if I took the rest of my life, it's highly unlikely I could do it. Both require deduction and logical thought. One is on an entirely different scale than the other.

When working in the unknown, you can't provide a schedule. Otherwise, you'd be working either in the known, or very close to the edge of known.

"What I did not say is that thinking is easy."

No, you didn't. You said, "Ideas are not the product of labor."

Definition of labor according to Merriam-Webster, just the first/primary definition:

Main Entry: labor
Pronunciation: 'lA-b&r
Function: noun
Etymology: Middle English, from Anglo-French labur, from Latin labor; perhaps akin to Latin labare to totter, labi to slip -- more at SLEEP
1 a : expenditure of physical or mental effort especially when difficult or compulsory

That's what I was doing; and within the context of a specific provided example.

"Think about it. It might take some effort."

Okay troll, right. I've put some effort into it and I'm still clueless. Are you talking about "Ideas may come in a flash, or evade forever."? If so, I consider that a partial truism. Ideas also come about from a slow, plodding, methodical effort. Your generalization is half-assed. If you've got a point to make, please do so. You haven't stated how you disagree with my (and the general use) definition of "labor" and you certainly haven't clearly provided your interpretation of the context involved in the "specific provided example".

"Are you talking about 'Ideas may come in a flash, or evade forever.'?"

Which is the clue that I'm not talking about thinking.

If you've got a point to make, please do so.

"Think."

Yup, I'm still clueless. Are you talking about a "flash of inspiration"? If so, doesn't some prior thought have to have gone into the problem? No one has a flash of inspiration without having put the thought into identifying a problem or goal. If so, you still haven't stated how that is not labor. I've already put way too much

I'll reiterate, if you've got a point to make, do so...without obfuscation.

"No."

Hmmm. I could take that several different ways. How about this, I'll take it in the least negative way possible and assume that you have as good or even better sense of humor than I do. That being said, let me know if you're ever in Charlotte NC and I'll buy you a beer.

One programmer is better than two for the same reason that one woman in the kitchen is better than 2. You have to get on a pretty large scale before you need multiple cooks/programmers.

Software programming in general is hard for 2 reasons:
1. Computers aren't built for interfacing with humans, thus UI is terribly time-consuming.
2. The environments people like to drop an app into can be so bizarre that rock-solid stability is very difficult to achieve.

Which real world? In this era of multiculturalism, we need to learn to accept that there are places where women essentially never leave the kitchen. We must respect this and feign that we admire it so as not to offend.

Most cooking projects don't take more than 10 man-hours, but pretty much every programming project does. And, furthermore, mostly when the chef makes a mistake it's obvious to her.

Neither condition holds for programming. It's for this reason that I think, in general, *two* programmers can program faster than one. At least, my partner and I produce code that's more bug-free together than when we program separate projects, and that makes a difference. If the project is sufficiently large, i.e. takes longer than about 10 hours, the cost of communication between two people is less than the cost of switching. :)

While we're at it, I think there's another misconception in this interview.

programmers are programmers because they like to code -- given a choice between learning someone else's code and just sitting down and writing their own, they will always do the latter

Two of the five developers at my little software company are programmers because they like to figure things out. So we almost always figure someone else's code out before we do anything ourselves. There are varying degrees of this in a lot of the developers we've got there. I would say that none of us will write anything ourselves unless it saves us a considerable period of time.

But even more, if you had a relative who was always wondering, "What is it that you do all day?" you could hand my book to that relative and say, This is what my work is really like.

No. I couldn't. My experience as a developer is nothing like what he's described. And he didn't talk about the phenomenon of unknowns that I've noticed: for every project I do, if I estimate how long the known things will take, dealing with unknowns will generally take 60% longer (so multiplying time estimates by 3 is generally correct). He didn't talk at all about testing.

Almost everything he talked about are things that I thought would be true when I started but that have turned out more or less untrue. Disciplined coding makes a difference. Automated unit testing catches most problems, regression testing finds almost all the rest, and not everybody does these things.

"Software programming in general is hard for 2 reasons..."

Actually there is only one master reason. So far, there is no mathematical way to prove that a given non-trivial software program will actually work as intended. When designing a physical thing, such as a bridge, machine or electrical circuit, there are precise mathematical formulas that can be applied to give a reasonable expectation that the building or machine etc. will perform according to expectations. There are no mathematics that can

Mostly programmers are trained in the technical details of languages and the libraries/APIs associated with them. They don't gain skills in knowing what users really want, and are hurried into producing barely-working stuff, fast.

Whatever testing is done often only tests that the product produces the correct answers when fed the proper input; no account is taken of how the program reacts to incorrect or incomplete data. Changes are requested faster than they can be implemented and often are not communicated very well.

In short, there are systemic failures throughout the whole process, from inception through to delivery. There is no single answer to why software is hard, and there won't be until the industry matures and people start to get thrown out of the business for acting unprofessionally.

I think this is one of the biggest problems with software today. Too many untrained or undertrained people working on too much software that they are not qualified to be working on. The only reason the term "software engineering" is a joke to most people is that most people who work on software do anything but engineering. It's not just me either. Everybody I talk to works with people who have no idea what they are doing and should not be working in the software field. Granted, neither I nor any of my friends who work with these people are perfect, but some of the stories I've heard are almost unbelievable. I'm surprised software ends up working at all in most cases.

Yeah, specifications *are* the biggest problem. Unless I happen to be the end user, rarely does the end user know what he wants me to build until I show him/her the first prototype. At that point they might start to clarify what they want. If I am lucky, their ideas are heavily colored by what they saw in the prototype and we can go from there; if not, it's back to square one, and usually several more trips there after that.

I get requests like "we need a program that users can run from the web that shows them if batch scans are in balance." I am then left to resolve on my own things like:
* What is a batch scan
* Where can one find a batch scan
* What does it mean for it to balance
* Can the thing just print a big Y or N in size-48 font on the screen, or does it need to detail something
Then comes the anticipatory stuff:
* Is the user going to expect to be able to correct an imbalance
* Now that I understand what a batch scan is, *maybe* I see all this other stuff; should I also report on those things
* etc., etc.

People like to expect programming to go like engineering or architecture. It's not that the coders don't want to apply discipline to their craft; it's that they can't. Nobody would dream of approaching an engineer or an architect without a pretty good idea of what they want to do. That is not to say the professional is not going to have to help them work out most of the details, but the basics are going to be pretty clear. Imagine if I sent a request to an architect asking for a "structure that will be used by people" and gave no other information. I really doubt I would get a call back.

You can argue that PAs are not as good as they should be about requirements gathering, but I think there could be much more professionalism on the part of requesters as well. It's a waste of my time and theirs when they engage me as a developer before they have put even the most basic thought into what it is that they want. Some of this stuff is really esoteric, and I understand that I will have to help them figure out how to solve the problem. I do feel they should know what the problem is.

"Software is still an extremely new field. I'm not sure if things will get more reliable in my lifetime, but I'm sure that eventually we'll get stuff figured out, just like we have for bridges."

It is true that software and computers are relatively new compared to bridges. However, because software is immaterial, it is fundamentally different. There are NO fundamental immutable laws of physics that govern software production. Software is a pure product of mind only. That is the primary reason why

This is also a problem with some programmers. Most geeks place more emphasis on the tools than on the objectives. Some don't even care about the objectives (basically the needs of the users) and just want to use a shiny new tool. Or they want to do whatever the task is in the same tool no matter what (there is a saying that a determined Fortran programmer can write Fortran programs in any language).

You are right on the money. Programmers tend to put more priority on building re-usable components (tools, modules, etc.) than on actually building the damn product. I know I've been as guilty of that as anyone.

We're all taught that re-usability, modularity and portability are great ideas. But if you look around at many software projects, these principles are often given top priority, and the cart goes squarely in front of the horse. Few people realize that early architecting can be as evil as early

I design buildings for a living, and I've dabbled in programming, and I think architecture and software development have a whole lot in common.

Your step one in "building a house" can go through all 6 of the steps that you have listed for software development. We get hired by clients, sometimes they have a good idea what they want, sometimes they don't. Sometimes what they want is feasible, sometimes it isn't. It's not unusual for even smaller projects to drag on for years, because the client keeps changing his/her mind. Many projects that cross our desks will never be built.

Many projects don't follow the traditional design phase -> building phase sequence. The phases often overlap, and it's pretty messy.

I could go on for paragraphs with the similarities that I see between software design and architecture, but I'll save that for another post.

I think one thing they all have in common is that they are always custom jobs. It isn't like going to a car dealership and asking for model X in dark blue. Software is more like "I'd like a car with extra wheels on top in case it flips and purple stripes and only 1 door...". Standardization is very limited.

Now imagine if every single weld was a unique, custom job that had never been done before, and if any of them is imperfect, the car crashes.

Right, because there are absolutely no "standard recipes" in software. There just isn't anything you could describe as a "Cookbook" [atomz.com] providing standard solutions to common problems that make up the basic nuts, bolts and welds of a lot of software.

And then, just as you finish the car, with wheels on top, hybrid Ferrari/DeLorean door, as requested, the customer reveals that it must travel up a smooth vertical surface and carry 2000 people. Sorry, did you need to know that earlier? Oh, and it has to provide oxygen, heat and cooling. What do you mean that will cost more? Product launch is March 1 and we've already been advertising the new system and hired all the new drivers! This will cost us billions, you stupid developers this is all your fault!

I've worked with architects. One thing I think is different about software is that you can change things once they are in place; it's as if you could move the whole building 100m down the road. That very changeability tempts people to change things, which causes problems. Secondly, I think people know better what they want when it comes to buildings, because the options are more restricted. In software there are no restrictions on what can be built (although you can run into performance restrictions).

Well, another problem is that in building a house there is a definite line drawn between architect and contractor/builder. You generally know whose fault it is when something goes wrong with a building. With software you often have part-time contractors who think they are architects, and no one really knows better.

The clients who don't know what they want aren't so bad - most of them will accept whatever you give them. The real problem is with the ones who do know what they want, but can't describe it properly. Or the ones who want the impossible (but those are usually easy to spot early on).

Or the ones who say "hey, that gives me an idea... it'd be really neat if that could do X, too!" throughout the project. Feature creep is probably the single largest reason why programs don't meet deadlines.

Actually, I say, if you want to see the silliness of the venerable construction metaphor, show how we'd really build houses if we built houses the way we build software. First, building a house is a solved problem, so you'd never hire an architect or builders. You'd go down to Best Buy and buy Microsoft House for $89.95. Any reasonable requirement you can think of is covered by Microsoft House; you have to really try to throw it for a loop.

Well, in Rome this is essentially what they did. They invented a high-quality concrete as well as other building technologies we take for granted today. Imagine the frustration of a builder in those days. Though I'm sure they were at least appreciated...

Yes, writing software is hard, especially writing good software. The hardest part is to make things simple, even harder is to make things simple AND flexible. The need for a thorough analysis is greatly underappreciated.

Incompetent developers tend to make things more complex than necessary. From that point on, under economic pressure, workarounds are needed to get things done. This in turn makes things even more complex than necessary. THAT is what makes writing software hard. The problem is, it is difficult to be aware of the skills that we lack. As such, a lot of programmers with a huge ego don't deserve one.

I'm not into Extreme Programming per se, but I've noticed that if multiple people look at a piece of software, chances of problems going undetected get smaller and smaller. Yes, even if you, a master programmer, show your code to a rookie, the chance of bugs going undetected will reduce. In fact, it will inevitably result in more bugs being detected before rolling them out to customers.

Let's see: the idea is that technology is hostile and hard for humans, and the conclusion is that a decade from now everyone will either be a digital native who has no problems or a digital immigrant who is learning. Kind'a contradictory.

I don't think I can fully agree. I think software development may be hard, but that's never the main reason projects fail. The main reason projects fail in my 10+ years experience is because of product managers, not coders.

Product managers I have seen (and I have seen many) often don't know zilch about technology, but even worse they usually also don't know much about their market, target audience/users, User Interfaces, project management, etc. Consequently, they simply don't know what they want and aren't able to explain it in one coherent paragraph. If they were able to explain it, the actual coding would be half as bad.

So if this guy complains that their projects back in the day at Salon went bad, I'm not surprised. He's not a coder after all; he was a typical clueless product manager. He started out as a journalist and suddenly was responsible for a type of product he knew nothing about (CMSs), in addition to having no other qualification in software development or a related area (UI design, project management).

So am I surprised this project didn't succeed? LOL, of course not.

You wouldn't let a journalist build a space shuttle or a car now, would you? But software? Sure, software is easy, anyone can do it. In the end, it's probably not harder than building a car, but not easier either. It just takes proper skills for all roles in the team, is all.

"Sure, software is easy, anyone can do it."

...which is why we get the perennially insightful thread titles about "why software is hard" followed by a zillion posts saying, essentially, "no shit." It really raises the question: if only one could get management lackeys to simply understand "hey, software IS hard!", perhaps we could get on without having to perpetually explain why. I mean, you don't go to your surgeon and say "hey, Doc, why can't I just get a bunch of community college kids to swap this heart o

Product managers I have seen (and I have seen many) often don't know zilch about technology, but even worse they usually also don't know much about their market, target audience/users, User Interfaces, project management, etc.

As a product manager, I think you're right to an extent. If a product manager doesn't know who the product is being built for and what that customer needs, then the product is doomed to fail. If they know those needs, but can't communicate them to their development team, then the

I think that once someone improves the state of software architecture and programming languages so that programmers don't have to mess with ad-hoc hacks but can instead write the logic they want to implement, software will cease to suck.

The main problem is operating systems architecture and programming languages. Due to lack of time, I will only list a few of the operating-system problems that weren't solved after more than 30 years of OS development:

They don't allocate resources sanely. One program (even worse when it has many threads) that wants more memory and more CPU will grind the entire user interface to a halt, even though guaranteeing the required resources for a smooth UI is so cheap (i.e., instead of guaranteeing 0.5% of the memory/CPU to the UI so it's always smooth, even that 0.5% goes as an extra boost to the program that's already got 99.4%).

They offer an unnecessarily (historically) complicated model to programs, with multiple spaces of memory (malloc'able/sbrk memory, and file-system space), even though these memory types are actually interchangeable: when you malloc, your RAM may be moved to disk, and when you use a file, it often allocates RAM. Instead, operating systems should just expose one type of memory that is always non-volatile and persistent, so that programs don't have to worry about converting/serializing back and forth between these memory types. This would also get rid of the unnecessary bootup/shutdown sequence all programs currently deal with.

They do not offer a high-level world of network-transparent primitives that allows all method calls to run transparently over a network. If this existed, we would not see the abomination that is web forms + AJAX and the rest of this ultra-complicated world that still does not work nearly as well as local GUIs. Instead of extending the web to support GUI functionality (poorly), we should have seen GUIs extended to reach transparently over the network. The X protocol is similar, but not good enough: it transmits primitives that are too low-level (pixel data and mouse movements), and it is an alternative rather than the standard GUI API offered by the operating system.

The security model, using users and groups and assigning those to objects, is of very rough granularity, requires a system administrator to modify the model (users/groups), and does not allow fine-grained control over the access of entities (processes) to objects (i.e., as a non-administrator, I cannot prevent my mp3 player from accessing the network or deleting the files it can read). Instead, a capability-security model should be used (not POSIX capabilities, but EROS/KeyKOS-type ones), which is much simpler to use and verify, and much more powerful and fine-grained. This would also facilitate secure movement of components between computers, which could be done automatically by the OS to improve performance. More on that in a later post.

Number 2 really isn't desirable. The overhead would be quite painful for a lot of programs that need realtime performance. Also, clearing RAM is sometimes desirable, such as when a program fails to maintain an invariant and then goes into an infinite loop; if the variable is persistent and non-volatile, we have problems. Also, some data *shouldn't* be persistent, like passwords.

I like the general idea. In C you have the "static" keyword that makes a variable keep its value between calls of a function. I would not propose an "always persistent" memory, because too many calculations are temporary, but a "persistent" keyword in the most-used programming languages would be a very nice thing. However, there is the entire databases question; they use another paradigm that should be taken into account.

1) How do you tell a GUI application from a non-GUI one? What about programs that are run locally but viewed remotely, and vice versa? What constitutes a "GUI" application?

2) But you are allocating different types of "memory"! See Leaky Abstractions [joelonsoftware.com] for more information on this. Your "everything is memory" model sounds nice, but lacks a few key components.... When I fclose() a file, I have a STRONG assurance that the file has been saved and wouldn't go away if the power failed. That's not the case in your "everything is memory" model...

3) You are either talking about a security nightmare or pixie dust. How does computer B know that it's OK to run code from computer A? See other comments on #4

4) Capability security requires somebody to set up all those !#@!@# permissions. POSIX, by contrast, is very simple and requires little effort to maintain. Is POSIX ideal in all situations? No. But it's adequate in most circumstances without a lot of effort, and it's usually better to have a "just barely suits" option with a decent default than a perfect option with a lousy default. Perhaps that explains why your touted EROS operating system died on the vine?

One thing I've noticed about companies is that they try to treat programmers like factory workers: expect each one to be interchangeable and able to jump in anywhere on the "assembly line," at any time, for any piece of code. However, programming takes understanding, and complex programming takes complex understanding. Even a good programmer fixing a bug may need to analyze surrounding code for several hours before changing a single line. Unlike most engineering projects, which are completed and done, most programming is a living, growing process that is constantly changed, modified, and improved.

That implies there is a need for specialization and clear boundaries: assign "ownership" of, or "territory" over, certain parts of the code to a programmer who understands it and gets the last say on how it's changed, with clear, non-arbitrary rules for changing that "territory." Like in open source projects: if you want a kernel fix, you submit it to the proper maintainers, or make your own fork, but no corporate bureaucrat comes along and micromanages how the code is merged and managed.

Implementing a good design is usually half the battle. Creating a good design is usually the other half, but in practice, a solid design is almost always the part that gets skipped. Let me bore you with a brief anecdote.

I have a large, global project underway. User requirements are done, and we're turning those requirements into things we can code or deliver ("View a workorder," "Print asset detail," "Group revisions into single document"). Of that, we have 150-odd deliverable items, not to mention all the fit/finish work we may have to do; and all of this barely touches on reports, security roles for users, etc.

The reason we're going to make our date, despite the 1280 discrete requirements we need to test, is that we've taken the time to look at the requirements from a few different angles and come up with a solid design plan, before even thinking about implementation. Each piece will build on another, really hard parts are identified early, blockers and such are flagged ASAP. We know things will emerge that we didn't expect, but we've got the biggest chunks identified and working together on paper. We have the flows mapped out, exceptions and variations listed, and a user group that has to sign off on every iteration of the incremental build (we're spiraling out functions and features).

The only thing "hard" about all of this is the incessant thinking about the details, and discipline required to focus on the un-fun part of software construction, i.e. the planning and design walkthroughs. The itch to code something already is growing, but delayed gratification means that when the time comes to actually write something, the design will almost certainly lead to a working, if not optimal, solution. We can refactor as we go, but it needs to work completely before it can work efficiently.

I've been following Chandler off and on, somewhat through Spolsky's references to it and some stray links around the web, and it sounds like the design didn't go deep enough into what it would really take to build some of the pieces.

Why doesn't anyone complain about how hard brain surgery is?
Why doesn't anyone complain about how hard building space exploration vehicles is?
Why doesn't anyone complain about how hard creating a successful marketing campaign is?
Software engineering is difficult because it's a complex subject that takes a combination of intelligent people and training to produce good results. Just because businesses are too stupid to realize this doesn't make the problem go away. You can't throw complex projects at untrained, stupid, incompetent people and expect them to produce quality software. You can't just invent some magic formula for software development that will work 100% of the time to maximize efficiency.
Software engineering is NOT manufacturing. Accept it and move on for fuck's sake.

Here we are, using a construction kit (a modern OO language) that is 100% predictable and designable, yet beta software and patch cycle after patch cycle are the norm. You couldn't do that when building a skyscraper; they get it right the first time. It's the level of design, prototyping, QA, and intrinsic belt-and-braces attention to detail that is missing in most software projects.

The simple fact is there are no analogies for software development, software, or the software business.

This thread is full of analogy after analogy.

Software dev isn't like constructing a building: a building can't be used while incomplete, a building won't have to be changed, and a building doesn't have to inter-operate with and depend on other buildings.

Software dev isn't like engineering cars or spacecraft; there is no finished product in software, and again you can use software even when incomplete.

Selling software isn't like selling cars; cars can't be copied.

Selling software isn't like selling music/books/any other IP; other forms of IP are usually only used once, software is the only usable system that is entirely IP.

I think if you want to have a discussion among people who develop software, you have to ditch the analogies, because none apply. The reasons software development takes longer than you might think, or the reasons software is difficult to create, sometimes doesn't give the expected return, sometimes is buggy, etc., have nothing to do with cars or buildings or spacecraft.

Even dealing with a high-level language like that in Adobe's Director, there are too many unknowns and edge cases, and it is not painfully clear just how everything plugs together. Building a structure for doing all the tasks you need to accomplish is also paramount. As I said, "The structure allows you the luxury to focus on the details." Build the structure.

I think that possibly one reason that software isn't getting better is that hardware hasn't caught up to software in terms of abstraction. Why is there a hardware stack with special instructions for pushing and popping data onto it? It's not absolutely necessary.

Why are there certain data types like int, float, etc?

Why is there hardware supported virtual memory? Didn't that start as software? Why didn't it stay that way?

It's because these are extremely useful abstractions over bits. The hardwa

Fred Brooks had much the same material in _The Mythical Man-Month_: communication overhead spirals out of control in large groups, project scope creeps out to infinity without a budget, overconfident people try to do too much and fail, it's impossible to know what the customer wants and (in a new area) even what works until you've built something and watched how it fails, only make changes to known-good baselines, etc.

This author had to discover Fred Brooks after he'd started a career of big projects. TMM should have been in his school curriculum.

Everybody knows some footballers are worth a million dollars and others are not worth a fig, but somehow hardly anyone realises that some programmers are worth a million dollars, while with some others it would be worth a million dollars to be shot of them (see the daily wtf [thedailywtf.com] for details).

The analogy is that programming is today being approached in a manner that is far more limiting than it needs to be.

There are those who claim programming is nothing more than mathematical algorithms, but it is more than that: programmers create higher-level abstractions to deal with lower-level abstractions faster. Sure, it can all be boiled down to mathematical algorithms, and even down to binary or machine language, but it is at the higher level of abstraction where software is created today.

The science of software has failed at, or been distracted from, the genuine objective of identifying and defining abstraction physics. For it is abstraction that is the essence of programming, and there most certainly is a physics that applies to our creation and use of abstractions.

We create abstractions in order to simplify or automate the complexity of lower-level abstractions, down to binary. The failure is that of not recognizing what we all constantly do: what action constants we apply in our creation and use of abstractions.

It's like doing chemistry before we came up with the understanding to create the table of elements. We didn't understand the underlying mechanics. But once we understood those underlying mechanics, we created chemical megaplants.

Though we would not create software megaplants, in understanding abstraction physics we would do what was accomplished with the conversion of math from Roman numerals to the Hindu-Arabic decimal system. We'd make programming easy enough that the average user would do a lot more for themselves, just as the general population, in that conversion of symbols (Roman numerals to decimal), was able not only to do math for themselves but to do more advanced math than the Roman-numeral elite accountants were able to do.

Of course, the problem is in the conversion, as it took 300 years for that conversion to happen. It took 350 years for Galileo to be exonerated... ask the Catholic church why... and know why the industry of programming, regardless of what side of the fence you are on (proprietary or open source), presents resistance to the needed change.

Programming is hard because the industry wants it to be, so as to keep the elitism, social status, and pay scale.

Of course, social demands weigh in on the change happening, as computers today could not have been created using the Roman numeral system of math. It won't take hundreds of years for this change to happen, as we already can't keep up using the lesser/harder route.

If it were easy, then yes, we would have push-button frameworks that magically created programs. What bothers me more than the ignorance of the "no silver bullet" mindset are the pushers of programming environments that supposedly will solve all our problems. Bullshit. CORBA sucks. EJBs suck. All these things suck. Just write good software and stop looking for the golden chalice.

Software development is hard because of a misconception that knowing how to program in a particular programming language makes someone a good developer. Just because someone can learn a foreign language (e.g., an American whose native language is English can learn to read and write French), it does not mean that person will be able to write a good novel in that language.

I'm currently working towards my Masters in computer science, and although I don't intend to ever work as a software developer, programming concepts make up the majority of the curriculum. The majority of the graduate program in computer science is made up of international students (the university is in the US), of whom most hail from India. After working with these students for some time now, I have learned that they obtained their undergraduate degrees in India, where computer science is taught in theory; that is, they got their degrees without ever touching a computer. Combine that with the fact that many have never owned a computer themselves, and I doubt the quality of their education. For those of you who know some C++, you will understand their level of knowledge when I say that they enter the program without understanding the concept of pointers (dynamically allocated memory as opposed to compile-time arrays), structures, or object-oriented programming concepts. Luckily, many of these students do not make it past the remedial courses of the graduate program, though most of them switch to the IT program under the business college of the university. Unfortunately, I am sure many of these students go on to be programmers in their country, where there is a big demand due to the trend of outsourcing coding projects there.

I have spoken with other students in similar programs at different universities, and this seems to be a widespread scenario. So people who barely understand the language they are programming in are being asked to write programs for consumer use. It seems that very little time goes into educating students on how to program. Here I am using the word "program" to mean: given a problem, utilize one's analytical skills and artistic ability (yes, programming takes artistic ability) to conceive a solution and write it in a programming language in such a way that the compiler and linker produce an executable program. With this in mind, add the fact that virtually no effort goes into teaching people how to move beyond this basic level of understanding of specific programming languages, to a point where any competent programmer would consider them fluent in the language they are developing in. Would anyone buy a book written by someone who is not fluent in the language the book is written in? I think not.

Beyond the fact that the majority of programs are being written by people who are not competent by any standard is the fact that programming is not just an application of knowledge of a specific language, but an abstract concept created from the individual's understanding of the problem and their ability to conceive a solution entirely within their own mind. This is where the artistic ability comes in. If you give 100 people a problem and ask for a solution in the form of a program, you will get 100 different solutions. Out of those 100, only a handful of the solutions will meet the standards of what could be considered efficient coding.

Consider that the programmers I am speaking of here are people who entered a postgraduate computer science program. These people do not make up the majority of programmers out there. What makes up the body of the world's programmers are people who have not attempted to progress to this level. Most enter the workforce after receiving their undergraduate degree, and many have not received any higher education at all. It is my belief that, aside from what can be taught to an individual, programming requires an inherent ability within that individual for them to be able to produce what I would call quality code.

If some barrier would have kept CPU speeds below 100Mhz then I imagine that by now people would be developing very efficient code and that we would still have the same level of application performance that we enjoy today with our dual core 2Ghz processors.

Sorry to break this to you, but apps today are not faster than the ones they replaced. We used to write more efficient code, and had more spartan interfaces, because we had to. But I don't think Microsoft Office today is noticeably faster than the WordPerfect / Lotus 1-2-3 type apps we wrote in assembler under DOS. All that extra CPU power is more than eaten up by layer upon layer of slower and slower software.

All this allegedly to make programmers more productive. I haven't seen that either.

The article's title gives an indication of this (as some other comments have pointed out): it talks about "programming" but not "software engineering" as a whole.

Still, many companies hire people without enough computer science knowledge to perform software engineering tasks. You can do this, but the results will not be good (at least in the long term).

I think this is because software is usually successful in the short term. It apparently solves the problem and the customer is satisfied. Therefore, why "waste" time and money making documents (where experience gets archived) or doing a good design?

If you create software, how often do you (your organization) apply these concepts?:

"Disclaimer: Scott Rosenberg was responsible for Salon's hiring me 10 years ago, was my editor and boss for many years, and is a close personal friend. My daughter baby-sat his twin sons as I 'interviewed' him. So I'm utterly biased and completely partial. But 'Dreaming in Code' is still a darn good book." That right there is one of the reasons why software sucks. I have seen too many cases of friends hiring friends, not because the people being hired are the best that could be found (or that interviewed) for the job, but because they are friends of friends. This touches on all avenues of life and business: the best people for the job are not hired, because friends of friends are hired, and those people more often than not are not going to do the job right.

One should either recommend a person for a position or interview the person, but not both.

Hate to break it to you, but you should learn to read the entire comment and not just the first sentence. Just in case it is too difficult for you to understand what is said here: "This touches on all avenues of life and business: the best people for the job are not hired, because friends of friends are hired, and those people more often than not are not going to do the job right." See? Had you read the rest of the comment, your comment would have been totally unnecessary, and so would have been this ex

A person needs to cross a river, and so believes a bridge needs to be built.

What he should be doing is asking: "I need to cross this river; can you help me?"

Instead, he asks someone to build a bridge: a bridge of such-and-such dimensions so his particular vehicle can pass, and a bridge with this and that feature because "it would be nice if...". He also wants a bridge that looks a certain way, because since he is paying for the bridge, he wants it to look the way he wants it to.

First he was crossing a river; now he is building a bridge. Will the bridge help him cross the river? Who knows? Maybe a bridge like he imagines can't be built, or can only be built poorly.

It's the same thing with software. People want the strangest things. In fact, what they WANT is the only thing they know. They don't know what they NEED.

So you get an organisation, a business, with an office that has a problem, any problem. The problem will ALWAYS occur because some OTHER process isn't working somewhere else in the organisation. You get bad data from here, and the clerk is expected to output good data out there. In the '50s, I bet they wanted more filing capability or better typewriters to solve these problems; in this day and age, they want IT. They want a system!

So instead of using Excel, Notepad, or even a piece of paper like any other sane human being would to keep track of the information, a yell echoes down the corridors: "We need software to support our business!"

So some programmers are hired. They get a description of the problem, which isn't logical in the first place, and they are expected to solve it. Their tool, software, is built using a logical language. It is used to describe the data and to solve the problem by adding a few flows for that data under certain conditions.

So the programmer (or bridge builder) sits himself down. The first 10% of code/thought he outputs is usually all that should be done about the perceived problem. That is, a description of the Need.

The next 90% of coding is the programmer trying to coerce a logical language around non-logical flows and non-optimal solutions, hammering a square button into a round hole, with GUIs, buttons, and special extra functions for special extra cases, plus those extra wants on top that really describe other problems.

So we end up with a mishmash of buggy code that describes the wants of the customer: the Want to solve a problem that should have been solved with organizational changes, or changes in the work processes. But hey, now everyone is happy again. Software is supposed to be a bit buggy, the organisation is obviously still working non-optimally (software can't fix that), but at least the clerks now have webpages to input the bad data into, as long as the servers are up.

Optimally, the programmer (yes, the programmer) should look at the problem, trace it down through the whole organisation, yea, trace the customer's Want to the REAL problem and the REAL need, proceed to make the organizational changes, and be done without an IT system at all.

We all know that won't happen, but that is what should be done. Don't expect a logical function (software) that describes a non-optimal situation to function in an optimal way.

Now it's so easy to get on the internet, do your Hotmail and digital photos, and connect with others in new and unexpected ways.

It's all about the WOW, and thankfully everything is now so much easier, including software development programming.

And it's the existence of people who swallow this sort of shit that contributes to making software hard. These sorts of people, when they involve themselves at whatever unwanted level in the process of developing software, end up making the whole game look much more difficult than it really is.

And then, one fateful day, when the mangled bastard children of their best creative efforts need to be interfaced with, then yes, at that point in time, software development truly is a difficult thing.

"we need to listen to what Brooks said... more specifically their knowledge and experience."

A large number of the modern books suggest UML as a solution. Anyone who has actually employed UML knows that it's virtually nothing but hype. Yes, a UML class diagram may be somewhat useful when demonstrating how existing code is structured, and a sequence diagram may prove helpful in showing the flow of messages between objects. But it's unsuitable for the design of a large-scale system. Once you get beyond 10 or so classes, UML diagrams become too complex to work with, and are basically useless.

Utter bullshit. I use UML for not only analysis, but design, programming and working on things in daily life. It's a matter of understanding the techniques. I've designed four cooperative wire transfer subsystems using it myself.

I sometimes wonder if it's a question of people who naturally see things as images versus people who don't.

Anyway, I'm with the grandparent. A small class or state diagram can be useful to me, but I get lost very quickly in a big or detailed one. And when I go into details, I soon find that what I want to say is easier to express in text, where I am not limited to the few languages permitted by UML.

Wrong for whom? They have provided myself and many others a very respectable living for decades. And surely you don't doubt the overall usefulness of their application to society over the same period?

Comprehensive requirements, good design, elegant code, and adequate testing require experience, knowledge, patience, and bags of money. And that's just for version 1.0; wait until the users get hold of it and tell you what you "should" have done (often as a response to you pointing to the requirements a

Let me know when we have languages other than assembly and C/C++ on consoles (such as the PS2 and PS3) that don't impose a run-time overhead.

Languages are only a minor part of the problem. You can write beautiful, elegant code or trash code in any language. The bigger questions are: How many side effects are there? How manageable is it? How long will it take to come up with a design that is both (a) fast and (b) flexible?