Suppose I give my developers a screaming-fast machine. WPF-based VS2010 loads very quickly. The developer then creates a WPF or WPF/e application that runs fine on his box, but much more slowly in the real world.

This question has two parts...

1) If I give a developer a slower machine, does that mean that the resulting code may be faster or more efficient?

2) What can I do to give my developers a fast IDE experience, while giving 'typical' runtime experiences?

Update:

For the record, I'm preparing my even-handed response to management. This isn't my idea, and you folks are helping me correct the misguided requests of my client. Thanks for giving me more ammunition, and references to where and when to approach this. I've +1'ed valid use cases such as:
- specific server side programming optimizations
- test labs
- possibly buying a better server instead of top-of-the-line graphics cards


Maybe have them test the application in a virtual PC!
– Mark C, Oct 21 '10 at 19:13

I'm speechless that this is even a question. How could it result in anything other than slower development and poor morale?
– Fosco, Oct 21 '10 at 19:23

Develop on the state-of-the-art. Test on the worst machine you can find.
– Adam, Oct 21 '10 at 23:34

Does cleaning the floor with a toothbrush rather than a mop result in a cleaner floor? Sure it does, eventually. A mop operator can't spot the dirt from 150 cm away quite as well as a toothbrush operator from 30 cm away. Don't try with a large floor.
– dbkk, Oct 22 '10 at 9:33

Note to self: never work for MakerofThings7
– matt b, Oct 22 '10 at 17:13

45 Answers

I'm tempted to say "No" categorically, but let me share a recent experience: Someone on our project was working on some code to import data into the database. At the time he had the oldest PC in our group, maybe even the entire organization. It worked fine with VS 2008, but of course a faster machine would have made the experience better. Anyway, at one point the process he was writing bombed while testing (and that's before it was fully-featured). He ran out of memory. The process also took several hours to execute before it bombed. Keep in mind, as far as we knew, this is what the users would have had to use.

He asked for more RAM. They refused, since he was getting a newer machine in 3-4 weeks and the old one was going to be discarded.

Keep in mind that this guy's philosophy on optimization is: "We have fast machines with lots of RAM" (his and a few other machines excluded, anyway), so why waste valuable programmer time optimizing? But the situation forced him to change the algorithm to be more memory-efficient so that it would run on his 2 GB machine (running XP). A side effect of the rewrite was that the process also ran much, much faster than it did before. Also, the original version would eventually have bombed even with 4 GB once more data was being imported - it was a memory hog, plain and simple.
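The answer doesn't include the actual import code, but the rewrite it describes - stream the input and write it out in batches instead of holding every row in memory - is a familiar pattern. Below is a minimal C# sketch of that idea only; the file name, table, columns, and connection string are hypothetical placeholders, not details from the project in question.

```csharp
// Hypothetical sketch: stream the source file and insert rows in fixed-size batches
// instead of loading every row into memory before touching the database.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;

class StreamingImporter
{
    const int BatchSize = 5000;

    static void Main()
    {
        var batch = new List<string[]>(BatchSize);

        // File.ReadLines enumerates lazily, so memory use stays flat no matter
        // how large the input file grows (unlike File.ReadAllLines).
        foreach (var line in File.ReadLines("import.csv"))
        {
            batch.Add(line.Split(','));
            if (batch.Count == BatchSize)
            {
                WriteBatch(batch);
                batch.Clear();   // release the rows we have already written
            }
        }
        if (batch.Count > 0)
            WriteBatch(batch);
    }

    static void WriteBatch(List<string[]> rows)
    {
        // Placeholder connection string and schema, for illustration only.
        using (var conn = new SqlConnection("Server=.;Database=Staging;Integrated Security=true"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (var fields in rows)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO ImportedRows (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@c1", fields[0]);
                        cmd.Parameters.AddWithValue("@c2", fields[1]);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
        }
    }
}
```

The point of the sketch is simply that memory use stays flat regardless of input size, which is the property the 2 GB machine forced onto the real module.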

Soooo... While generally I'd say "No", this is a case where the developer having a less powerful machine resulted in a better-optimized module, and the users will benefit as a result. (Since it's not a process that needs to run very often, he initially had no intention of optimizing it either way, so they would have been stuck with the original version if the machine had had enough RAM to run a few large tests.) I can see his point, but personally I don't like the idea of users having to wait 8 hours for a process to complete when it can run in a fraction of that time.

With that said, as a general rule programmers should have powerful machines because most development is quite intensive. However, great care should be taken to ensure that testing is done on "lowest common denominator" machines to make sure that the process doesn't bomb and that the users won't be watching paint dry all day long. But this has been said already. :)

In reading the question, and the answers, I'm kind of stunned by the vehemence of the NO case.

I've worked in software development for 25 years now, and I can say without any hesitation that programmers need a bunch of things to develop good code:

A REASONABLE development environment. Not a dinosaur, but it doesn't need to be bleeding edge either. Good enough not to be frustrating.

A good specification (how much is done with NO written specification?)

Good and supportive management.

A sensible development schedule.

A good understanding of the users AND THE ENVIRONMENT the users will have.

Further, on this last point, developers need to be in the mindset of what the users will use. If the users have supercomputers and are doing atom-splitting simulations or something where performance costs a lot of money, and the calculations run for many hours, then thinking performance counts.

If the users have 286 steam-powered laptops, then having developers build and test on the latest 47 GHz Core i9000 is going to lead to some problems.

Those who say "give developers the best and TEST it" are partly right, but this creates a big MENTAL problem for the developers. They have no appreciation of the user experience until it's too late - when testing fails.

When testing fails, architectures have been committed to, promises have been made to management, lots of money has been spent, and then it turns into a disaster.

Developers need to think like, understand, and be in the zone of the user experience from day 1.

Those who cry "oh no, it does not work like that" are talking out their whatsit. I've seen this happen, many times. The developers' usual response is "well, tell the CUSTOMERS to buy a better computer", which is effectively blaming the customer. Not good enough.

So this means that you have several problems:

Keep the devs happy and piss off the management, increasing the chances of the project failing.

Use slower machines for development, with the risk of upsetting the devs, but keeping them focussed on what really matters.

Put 2 machines on the devs' desks AND FORCE THEM TO TEST ON THE CLUNKER (which they won't do because it is beneath contempt... but at least it's very clear then if there are performance problems in test).

Remember batch systems and punch cards? People waited an hour or a day for turnaround. Stuff got done.

Remember old unix systems with 5 MHz processors? Things got done.

Techno-geeks love chasing the bleeding edge. This encourages tinkering, not thinking - something I've had arguments about with more junior developers over the years, when I urge them to get their fingers away from the keyboard and spend more time reading the code and thinking.

In development of code, there is no substitute for thinking.

In this case, my feeling is - figure out WHAT REALLY MATTERS. Success of the project? Is this a company making / killing exercise? If it is, you can't afford to fail. You can't afford to blow money on things that fail in test. Because test is too late in the development cycle, the impacts of failure are found too late.

[A bug found in test costs about 10x as much to fix as a bug found by a dev during development.

And a bug found in test costs about 100x as much to fix as that bug being designed out during the architectural design phase.]

If this is not a deal breaker, and you have time and money to burn, then use the bleeding edge development environment, and suffer the hell of test failures. Otherwise, find another way. Lower end h/w, or 2 machines on each desk.

"Techno-geeks love chasing the bleeding edge. This encourages tinkering, not thinking." Gosh, you sound like you'd be a blast to work for. ;) Has anyone ever accused you of being pompous, arrogant, or prone to making broad generalizations?
– Jim G., Oct 25 '10 at 0:23

@Jim G - To be fair, I think @quickly_now may be referring to copy-and-paste coders, and those who don't really understand what's going on. I've seen guys bash on things (3rd-party components, SQL joins, etc.) with no idea where they are going, and who are OK checking in a solution they don't understand and can't support.
– makerofthings7, Nov 19 '10 at 23:19

This theory is simple-minded and outdated. It was true back in the day.

I remember spending a lot of time micro-optimizing my Turbo Pascal code on my pre-Pentium computer. That made sense before Y2K, and much less so ever since. Nowadays you don't optimize for 10-year-old hardware; it's sufficient to test-run the software to find bottlenecks. But as everyone here agrees, that doesn't mean developer (and thus optimization) productivity correlates with giving them outdated hardware for development.

I say developers need the best development system available - but that doesn't necessarily mean the fastest. It may well mean a modern but relatively slow system with all-passive cooling, to keep noise to a minimum, for example.

One thing - a development system should be reasonably new, and should absolutely have multiple cores.

An old PC may sound attractive in a show-performance-issues-early kind of way, but a Pentium 4, for example, may actually be faster (per core) than some current chips. What that means is that by limiting a developer to using a P4 system (actually what I'm using now - though that's my personal budgeting issue)...

You encourage the development of non-concurrent software that will not benefit from the current mainstream multi-core systems.

Even if multi-thread software is developed, bugs may not be noticed (or at least not noticed early) because concurrency-related issues may not show up in testing on a single-core system.
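To make that point concrete, here is a small hypothetical C# example (not from the answer) of the kind of concurrency bug that tends to stay hidden on a single-core machine: an unsynchronized counter. On one core the threads are time-sliced and rarely get interrupted in the middle of an increment, so the total often comes out right (or nearly right) and the bug is easy to miss; on a multi-core box a large chunk of the increments is lost on virtually every run.

```csharp
// Hypothetical lost-update race, for illustration only.
using System;
using System.Threading;

class RaceDemo
{
    static int counter;

    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 1000000; n++)
                {
                    // Not atomic: read, add, write. The thread-safe version would be
                    // Interlocked.Increment(ref counter).
                    counter++;
                }
            });
            threads[i].Start();
        }
        foreach (var t in threads)
            t.Join();

        // Expected 4,000,000. On multi-core hardware the printed total is almost
        // always lower, because concurrent increments overwrite each other.
        Console.WriteLine(counter);
    }
}
```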

Multi-threaded software can cause serious performance issues that may get much worse with multi-core processors. One example is disk head thrashing (which can result in data access that is many thousands of times slower), where individual threads each do sequential access, but each to a different part of the disk. This can even go away on older, slower PCs: with, say, two old 160 GB drives instead of one 1 TB drive, those threads may no longer be fighting each other for access to the same disk.
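As a hypothetical illustration of that thrashing scenario (the file paths and sizes below are invented, not from the answer): each copy is strictly sequential on its own, but run in parallel against one physical spindle the reads and writes interleave across four file positions, so the head seeks constantly. Putting the files on two physical disks, or on an SSD, removes the contention without changing the code.

```csharp
// Hypothetical sketch of two "sequential" workloads fighting over one disk.
using System.IO;
using System.Threading;

class ThrashingDemo
{
    static void CopySequentially(string source, string dest)
    {
        var buffer = new byte[64 * 1024];
        using (var input = File.OpenRead(source))
        using (var output = File.Create(dest))
        {
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }

    static void Main()
    {
        // Each thread streams its own large file; on a single spindle the disk head
        // has to seek back and forth between the two streams on nearly every read.
        var t1 = new Thread(() => CopySequentially(@"C:\data\big1.bin", @"C:\data\copy1.bin"));
        var t2 = new Thread(() => CopySequentially(@"C:\data\big2.bin", @"C:\data\copy2.bin"));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }
}
```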

There are also issues with PCs that are too limited to support virtual machines well - e.g. for testing on multiple platforms.

The run-time speed of the developer's machine is largely irrelevant, unless you want to take revenge on your developers, or punish them, for writing slow code and ignoring the target deployment environment.

As the manager, you should make sure the developers know the objectives of the project and always ensure they are on track. As for the target-machine issue we are discussing, it can be prevented by testing early and frequently on a slow machine, not by giving developers a slow machine to use and making them suffer.

A slow run-time also slows down development, as most programmers use a code-and-test method. If each run is slow, their tasks will be slow too.

Let's go against the flow here: YES.
Or at least that's been the general wisdom in the industry for decades (except of course among developers, who always get angry when they aren't treated like royalty and given the latest gadgets and computers).

Of course there's a point where reducing the developer's machine will become detrimental to his work performance, as it becomes too slow to run the applications he needs to run to get his job done.
But that point is a long way down the line from a $10,000+ computer with 6 GB of RAM, two 4 GB video cards, a high-end sound card, 4 screens, etc.

On the job I've never had a high-end machine, and it has never slowed me down considerably as long as it was decent (and the few truly sub-standard machines were quickly replaced when I showed how they slowed me down).

THIS IS THE MOST DISGUSTING THING I HAVE EVER READ... I think giving your developer a two-legged stool to sit on would have the same desired effect... "he will work for you, and when you are not looking, seek other employment"

Boy I'll get clobbered for this, but there's something people don't want to hear:

Nature abhors a vacuum.

Of course programmers want faster machines (me included), and some will threaten to quit if they don't get it. However:

If there's more cycles to be taken, then they get taken.

If there's more disk or RAM to fill up, it gets filled up.

If the compiler can compile more code in the same time, then more code will be given to it.

It is assumed that the extra cycles, storage, and code all serve to further gratify the end user; one may be permitted to doubt that.

As far as performance tuning goes, just as people put logic bugs into their programs, they also put in performance bugs.
The difference is that they take out the logic bugs, but not the performance bugs - not if their machine is so fast they never notice them.

So, there can be happy users, or happy developers, but it's hard to have both.

@Peter: Sure there are, in principle - except where I look. We even have test machines and virtual machines, and if a tester says something is taking too long, what do they say? "This blasted virtual machine is too slow!" I think the coder him/herself has to feel the pain, and needs an IDE to do something about it.
– Mike Dunlavey, Nov 20 '10 at 0:54