I was once assigned the role of coordinator between project managers and technical staff on a highly sensitive internal project to improve our cost accounting processes. One of the difficulties that needed to be overcome was how to distribute payroll data across four states without this data being compromised. It was decided that we would use Lotus Notes to do the distribution and I was asked to find the programmers who could determine what information we would need to implement this solution.

In another instance, a coworker of mine sat down in a meeting for a non-critical project and was informed that the project needed to be finished two months from that date. Part of her role was to determine the scope of the project and the resources available for it. The deadline had been chosen so that the people working on the project would not have to deal with it during their Christmas holiday.

Both of those scenarios have a serious flaw in their reasoning. In both cases, the scope of the problem had not yet been determined, but specifics of the solution had. These are the sort of subtle issues that we tend to overlook, yet they come back to haunt us. Worse than haunting us, these decisions often make us look like fools.

On Perlmonks, we routinely hear posters say things like "I've chosen this technology, but I am not sure how to implement it." When I hear someone say something like that, my gut reaction is to think "I'm glad I'm not working with that person." Technology is part of the answer, not part of the question. Don't make choices only to then try to figure out how to twist the problem in such a way as to fit your choice. This will often result in your solution being more convoluted than my previous sentence.

When you have a problem, any problem, the first thing you need to do is gather all of the information about it that you possibly can. What problem are you trying to solve and why are you trying to solve it? After you've clearly outlined the scope of the problem, figure out what resources you have available in terms of time, money, people and technologies. Further, figure out your opportunity costs. Opportunity cost is a fancy economic term that boils down to a simple concept: what do you give up by continuing on your current project? If you want to divert your resources to Project Foo, which is projected to increase revenues by 3%, but the identical resources need to be taken from Project Bar, which is projected to increase revenues by 7%, Project Foo is dead in the water.
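
The opportunity-cost comparison above reduces to simple arithmetic. Here is a minimal sketch in Python, using the hypothetical 3%/7% figures from the paragraph (the revenue figure is an assumption for illustration):

```python
# Opportunity cost: what you give up by choosing one option over the best
# alternative. Figures: identical resources can fund Project Foo (+3%
# revenue) or Project Bar (+7% revenue); annual revenue is assumed.
def opportunity_cost(chosen_gain, best_alternative_gain):
    """Return the gain forgone by not taking the best alternative."""
    return best_alternative_gain - chosen_gain

revenue = 1_000_000            # assumed current annual revenue
foo_gain = revenue * 3 // 100  # Project Foo: +3% -> $30,000
bar_gain = revenue * 7 // 100  # Project Bar: +7% -> $70,000

# Choosing Foo forgoes Bar's larger gain, so Foo is "dead in the water".
cost_of_choosing_foo = opportunity_cost(foo_gain, bar_gain)
print(cost_of_choosing_foo)  # 40000: choosing Foo gives up $40,000/year
```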

Here's a slightly more complicated example. Imagine that a software glitch is costing your company $5,000 per month in lost revenue. That's $60,000 a year. After analyzing the situation, you determine that the problem stems from a combination of limited hardware capacity and a bad database design. The project to redesign the database, rewrite the code base, and upgrade the hardware is expected to cost $250,000 and take a year. Maybe $60,000 isn't that bad of a loss. But wait! You still need to think further. Are your sales expected to increase, and the losses along with them? If the cost of fixing the problem stays the same while the losses grow, maybe it is worth fixing after all.
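
That "think further" step can be made concrete. A rough sketch, using the $5,000/month and $250,000 figures above and assuming (as the paragraph hints) that losses grow with sales while the fix cost stays flat:

```python
# Compare the cumulative cost of living with the glitch against a one-time
# fix. Figures from the text: $60,000/year lost, a $250,000 fix. The sales
# growth rate is an assumption for illustration.
def years_until_fix_pays_off(annual_loss, fix_cost, growth_rate):
    """Return the year in which cumulative losses exceed the fix cost."""
    cumulative, year = 0.0, 0
    while cumulative < fix_cost:
        year += 1
        cumulative += annual_loss
        annual_loss *= 1 + growth_rate  # losses scale with sales
    return year

print(years_until_fix_pays_off(60_000, 250_000, 0.0))   # 5: flat sales
print(years_until_fix_pays_off(60_000, 250_000, 0.25))  # 4: 25% yearly growth
```

With flat sales the fix takes five years to pay for itself; with growing losses the break-even point moves closer, which is exactly why the projection matters.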

Of course, there's also the matter of your company's reputation. Are customers also affected by these losses? Are their orders delayed and sometimes incorrect? It's tough to quantify how that affects revenue. Further, do the people who would resolve the problem have other projects to do? If they're just sitting on their hands waiting for the next big contract, having them fix the glitch might at least minimize losses while you wait for more work for them (though if you're waiting a year for your next big contract, you have bigger problems :).

If you're just doing a small project at home, these concerns usually aren't a big deal. But if you're doing work that you're getting paid for, it's unethical to start making implementation decisions without having clearly defined the scope of the problem. Choosing the database, the programming language, or the due date is usually a terrible thing to do when you don't even know the question (of course, sometimes financial and legal constraints interfere with "proper" decisions).

I'll finish this with a perfect example of someone "missing the point." I read this in a debate over whether or not MySQL is a reliable database (this was before MySQL started implementing transactions; a tip o' the keyboard to chromatic for alerting me to that):

Next, I'd like to say that transactions are not always necessary, even when dealing with e-commerce. Imagine two e-commerce sites, where Widgets are being sold at $100 each:

Company Foo is using PostgreSQL. It needs to be restarted once per day to deal with stability/memory loss issues. During the restart (while the purchasing web page is unavailable), Company Foo loses 10 web-based sales, for a guaranteed (minimum) net loss of $365,000 in sales per year.

Company Bar is using MySQL. It does not need a nightly restart. But some random disaster occurs (which Postgres, because of its transaction handling, could theoretically recover from better). The MySQL database loses half a day's worth of sales, say, $150,000 worth.

Well, that almost sounds reasonable, yet this is another person I probably wouldn't hire. It's a slipshod analysis of the situation. Let's say we go ahead and spend $20,000 on an Oracle license, another $20,000 to train our DBAs, and, say, another $50,000 to implement the database. We get great scalability and transaction support, and we no longer lose even the $150,000 in sales that MySQL loses us. That's a net gain of $60,000 for the first year and much more in subsequent years. Further, our Web site winds up with a better reputation for reliability. Who knows what that is worth? While my analysis is just as slipshod (I plucked the numbers out of thin air), it does show that an ad hoc analysis of a situation is not appropriate. Further, the person posting the comment above simply didn't get it: don't pick a technology and then figure out how to justify it. Figure out what your needs are and then pick an appropriate technology.
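
That counter-analysis (with its admittedly made-up numbers) amounts to this back-of-the-envelope calculation:

```python
# First-year net of switching to a transactional database, using the
# deliberately made-up figures from the text above.
license_cost   = 20_000   # database license
training_cost  = 20_000   # DBA training
implementation = 50_000   # implementation effort
avoided_loss   = 150_000  # the half-day outage the old setup cost us

switch_cost = license_cost + training_cost + implementation
first_year_net = avoided_loss - switch_cost
print(first_year_net)  # 60000: net gain in year one; later years keep
                       # the full $150,000 with no switch cost
```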

I think we all have a few horror stories like this. Mine involves a dot-com that I worked at for several years. Originally we had a convoluted back-end "build" that created nearly static HTML pages (well, really SSI-enabled .shtml files). We soon ran into scaling issues with this method (mostly on the development side; it was tough to get more than a few people working efficiently with this setup). I was tapped to come up with something better.

I wrote up a report suggesting mod_perl and Mason, partly because we already had a mod_perl box running, and partly because it was a good solution. The cost (I thought) was also a benefit, because we hadn't really figured out how to monetize our visitors (we never did, either). I happened to go on vacation for a week shortly after submitting my recommendations.

It was about this time that Vignette went public and gained something like 500% on its opening day. When I got back from vacation, I learned that Vignette had come in and pitched their multi-million dollar Story Server content management system. Our management thought it was the best thing they had ever seen. (Of course, a lot of that was envy of Vignette's highly successful IPO.)

So I sat down and wrote up a side-by-side comparison of the two. It turned out that mod_perl/Mason matched Vignette feature for feature, with some extras to boot. Of course, no one was listening at that point; they had decided on their flashy solution without really knowing why.

Anyway, we wasted a huge amount of money (over a year's worth of revenue) on that software "solution," and it was broken from day one. Granted, we opted for their should-have-been-beta 5.0 release, but it still felt like the darn thing hadn't even been tested outside of an NT environment.

Guess what... eventually I retooled our old mod_perl box, and we ran the most dynamic portions of our site off of it. The Vignette "solution" was relegated to a multi-million dollar back-end build process for nearly static content. Imagine this for a moment: four beefy Sun boxes, running expensive proprietary software, getting smoked by a single[*] x86/Linux/mod_perl machine with $0 spent on software.

This was a case of management saying "We're not sure what we want to do, but we're sure we want to use technology X to do it with." I should have walked away the day they forced that brain-dead logic upon me....

[*] We eventually rolled out a 16 node pseudo-cluster of x86 machines, but at the time it was a single box.

Boy, does that sound familiar! To be frank, I fudged a bit on the Lotus Notes story. What really happened was that we were talking about how to distribute the data, one person said "Lotus Notes," and I suggested we at least consider alternatives. I was told to research them, and after much discussion we concluded that either Lotus Notes or a Web-based system was the way to go. I got together some Web developers and Notes developers, and we looked at the development time, the cost, the security, and the ongoing maintenance costs of each solution. It was unanimously decided (yes, even by the Notes developers) that Web-based data distribution was the most viable alternative.

I brought up my findings at the next meeting and was told "we've already decided to go with Lotus Notes". As I found out later, we were sinking a lot of money into that license and some PHB who couldn't grasp the concept of "sunk cost"1 decided we needed to get our money's worth out of Lotus Notes.

1. A sunk cost, for the curious, is money already spent on something. Since the money has already been spent, it is completely irrelevant to future decisions. In other words, if I have spent X dollars on Foo, but I now have to decide between Foo and Bar, X doesn't matter. All that matters is the financial comparison between Foo and Bar from this point forward. If Foo costs another $10,000 to implement and an identical solution with Bar costs $50, it doesn't matter that I've already spent $20,000 on Foo, ceteris paribus (another economics term, but I'm tired of explaining them ;)
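
The footnote's point can even be stated as code: a rational comparison function never looks at money already spent. A tiny sketch with the footnote's numbers:

```python
# Sunk cost: money already spent is irrelevant to the choice ahead.
# Figures from the footnote: Foo needs another $10,000; an identical
# solution with Bar costs $50. The $20,000 already sunk into Foo
# appears nowhere below -- deliberately.
def cheaper_option(remaining_costs):
    """Pick the option with the lowest *remaining* cost.

    Note what is absent: there is no 'already_spent' parameter.
    Sunk costs simply do not enter the comparison.
    """
    return min(remaining_costs, key=remaining_costs.get)

print(cheaper_option({"Foo": 10_000, "Bar": 50}))  # Bar
```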

All of this reminds me of the business folks I do short contract work for. I think they all went to the same school: many of them use MS Excel as a database and run IIS or another web server without an admin or even Webmin. Then they wonder why their "database" gets slow, data vanishes, and customers complain.
Typically I move them to Perl and MySQL on Linux. A few of them like to put out the larger chunks-o-change for M$ SQL Server (personally, if you are going to pay, go Oracle; there's no comparison on reliability or development). Most of them cite advertising as the reason they bought into anything, or better yet, "a friend of mine who writes software said."
The L.A.M.P. model is a great way to start. IMO it is more reliable than M$, more secure, and more flexible (and, if it matters, much less expensive up front). And if you stay generic (ANSI), you can move to almost any other platform anyway.

As a project member, there are many ways to respond to poor management decisions. An unfortunate but common response is to perform poorly in the assignment: "If it is not worth doing, it is not worth doing well."

Another response is "passive noncompliance." That's when you say, "Sure, we are using Story Server," and instead you use mod_perl.

If you are a technical leader, you should work hard to avoid inappropriate top-down technical "solutions." This is a very old problem. My attempts to avoid it are inspired by Benedict de Spinoza's Treatise on the Emendation of the Intellect, which is available at Project Gutenberg and elsewhere. It is not as much fun as design antipatterns, but it provides deep insights into the elements of a compelling technical argument.

An excellent book on this kind of thing is Rapid Development by Steve McConnell (author of Code Complete). Among other things, he lays out useful strategies, warns of subtle traps, and explains the trade-offs. And it isn't as simple as you might think. For instance, it is often worthwhile to try a technology on a project as a trial of that technology. When you do so, you take the risk of the project failing; but to never take that risk guarantees that you will not keep up with changing technology.

At the other end of the trade-off, not that long ago I worked on a project that, when stated naively, had every indicator of failure. We had an open-ended task, a hard deadline, fixed resources, and pre-chosen technologies. How is that for a recipe for failure?

But it really wasn't as bad as it looks. The open-ended task was to take something live for a conference that would cause buzz. The hard deadline was the date of the conference. We could ask for anything, but given the time it was unrealistic to try to bring more bodies on board. As for technologies, we had a body of technologies that we were familiar with and could estimate. New technologies, even ones that might massively improve productivity, were largely out because we didn't have time to learn, evaluate, and integrate them.

So we did the dreaded site redesign. Everyone had goodies they wanted to put into the mix, must-do projects for it. We took that list, went around to everyone, and said that we weren't doing that; we didn't have time. Then we asked them to prioritize. What proceeded was two hard months of triage. We dropped all other projects, and had considerable downtime due to exhaustion later. It wasn't very fun. But we delivered the project on time, it was tested, it looked good, and our client feedback was outstanding. We made a lot of sales, and we got a lot of sales opportunities that we have been pursuing since.

As Steve McConnell says, you need to think about what kind of rapid development you are facing. The task of delivering whatever you can by a fixed date is rather different from delivering a fixed project as fast as possible. And when you think your task is to deliver a project as fast as possible, it is important to know whether you are aiming for the fastest schedule you might be able to make, or the fastest schedule you can pretty much guarantee. (Most people who are aiming for best possible speed really need best guaranteed speed.) All of these differ substantially from trying to maximize the overall long-term rate of development. (And that is what most companies should really be aiming for.)

But if you are maximizing the overall rate of development, then from time to time you will get projects where you are told to do a given task with a given technology. And while there is no way to justify the decision based on the needs of the project, it is important to approach the task with an open mind. For all that we talk about choosing the right tool for the job, the fact is that we can't possibly make those choices well unless we periodically spend time and energy re-evaluating our beliefs about what different tools can do.

That said, we live in an industry that is seemingly fixated on finding silver bullets. Even though it may feel like we are being mauled by werewolves, most of the time we would be far better off learning to spot potential ambushes and aim normal bullets. Most of our problems are predictable and preventable with some common sense and a little bit of maintenance. And after you have been through the routine a couple of times, watching the ongoing mistakes starts to look like an ongoing episode of the Three Stooges. When that happens to you, from time to time you have to sit back and enjoy the show rather than focusing on being part of it...

For the last few years I've increasingly been of the conviction that a lot of the glitches I've seen in system development result from the corners we work ourselves into by not explicitly recognizing that software systems are evolutionary, and that they are never really done. I get criticism for that one, but I think that even a superficial examination of virtually any non-trivial system reveals a changing environment, one which is frequently almost impossible to anticipate. The events of September 11 are a good example.

I find appeal in the algorithmic nature of development strategies such as those advanced in McConnell's RAD book, and in their explicit recognition of the dynamic nature of the process. I personally get criticized for not being a big fan of planning, if for no other reason than that you rarely get users, of whatever level, to tell you what they want before they see something and tell you that ain't it. (As McConnell says, build one recognizing that you're probably going to throw it away.) It's my opinion that most plans are worthless by the time they are codified, because they address where we've already been. Kinda like that old dr commercial that had cavalry facing tanks (and where are they now?). Successful plans seem to me to have very broad strokes, with considerable flexibility for implementation details. Few planners, of whatever nature, see things that way.

My horror story is management/marketing saying "we want SuperProduct 3.5 to be released in October, with this feature set" ... without actually sizing how long each feature would take to implement.

And guess what? We missed the deadline(s)... and who looked bad? And it doesn't help to complain about it at the time, because you just end up with a reputation for being difficult, negative, not "can-do," etc.