Generally speaking, what type of optimizations do you typically slant yourself towards when designing software?

Are you the type that prefers to optimize your design for

Development time (i.e., quick to write and/or easier to maintain)?

Processing time

Storage (either RAM, DB, Disc, etc) space

Of course this is highly subjective to the type of problems being solved, and the deadlines involved, so I'd like to hear about the reasons that would make you choose one form of optimization over another.

All three of the above but I want to throw in generality (which relates to maintenance). When you take your time to design a really efficient data structure widely applicable to your software's needs, for example, and thoroughly test it, it'll serve you for years and prevent you from having to write many more data structures narrowly suited to solving individual problems.
– user204677 Dec 10 '17 at 1:53

+1, if you optimize for maintainability to start with, then it will be easier to optimize for speed or storage later on if it proves necessary.
– Carson63000 Nov 9 '10 at 5:00

You still need to at least consider processing time and storage so you do not pick an extremely excessive approach.
– user1249 Nov 9 '10 at 6:14

@Thorbjørn, if you are optimizing for developer time you'd probably (more than likely) pick the algorithms that are easier to read/write in a given language. Only later, and only if performance becomes an issue, would you bother to (as @Tim said it) pick up a profiler.
– Jason Whitehorn Nov 9 '10 at 11:33

@Jason, I strongly disagree. You should be familiar with the performance characteristics of the algorithms you choose so you choose a suited implementation. I.e. choosing an ArrayList when you need it primarily for looking up zip-codes might not scale well.
– user1249 Nov 9 '10 at 11:55
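A minimal sketch of the difference this comment is pointing at, using hypothetical zip-code data (the names and values are illustrative, not from the discussion): a linear scan over an ArrayList costs O(n) per lookup, while a HashMap keyed on the zip code answers the same question in O(1) on average.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ZipLookup {
    // O(n) per lookup: scans the whole list in the worst case.
    static String linearLookup(List<String[]> entries, String zip) {
        for (String[] e : entries) {
            if (e[0].equals(zip)) return e[1];
        }
        return null;
    }

    // O(1) average per lookup: hashes straight to the entry.
    static String hashLookup(Map<String, String> index, String zip) {
        return index.get(zip);
    }

    public static void main(String[] args) {
        List<String[]> list = new ArrayList<>();
        list.add(new String[] {"90210", "Beverly Hills"});
        list.add(new String[] {"10001", "New York"});

        Map<String, String> index = new HashMap<>();
        for (String[] e : list) index.put(e[0], e[1]);

        // Both answer the same question; only the scaling differs.
        System.out.println(linearLookup(list, "10001"));
        System.out.println(hashLookup(index, "10001"));
    }
}
```

With a handful of entries either one is fine; the point of the comment is that knowing this characteristic up front costs nothing extra to write.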

@Thorbjørn, I do not disagree with you. In fact, I think you are correct in that attention should be given to the algorithms at hand. However, I think where we differ in opinions is that in my opinion the idea of which algorithms to choose is something learned through experience, and fixed only when a problem presents itself. You are correct in your point, I just don't see the need to optimize for that at the expense of less readable/longer to implement code.
– Jason Whitehorn Nov 9 '10 at 12:02

Development Time

Processing and storage is cheap. Your time is not.

Just to note:

This doesn't mean do a bad job of writing code just to finish it quickly. It means write the code in a fashion that facilitates quick development. It also depends entirely on your use cases. If this is a simple, two or three page web site with a contact form you probably don't need to use a PHP framework. A couple of includes and a mailer script will speed development. If the plan is instead to create a flexible platform on which to grow and add new features it's worth taking the time to lay it out properly and code accordingly because it will speed future development.

In the direct comparison to processing time and storage, I lean towards a faster development time. Is using the CollectionUtils subtract function the fastest and most memory-efficient method of subtracting collections? No! But it's faster development time. If you run into performance or memory bottlenecks you can resolve those later. Optimizing before you know what needs to be optimized is a waste of your time, and that is what I'm advocating against.
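To make the trade-off concrete, here is a sketch of the "readable one-liner now, optimize later" approach. The answer refers to Apache Commons' CollectionUtils.subtract; this sketch uses the plain-JDK equivalent so it stands alone, with purely illustrative data.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubtractExample {
    // The quick-to-write version: one readable call, not the fastest possible.
    // (Apache Commons' CollectionUtils.subtract(a, b) is the same idea.)
    static List<String> subtract(List<String> a, List<String> b) {
        List<String> result = new ArrayList<>(a);
        result.removeAll(b); // O(n*m) on lists -- fine until a profiler says otherwise
        return result;
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("a", "b", "c", "d");
        List<String> done = Arrays.asList("b", "d");
        System.out.println(subtract(all, done)); // prints [a, c]
    }
}
```

If profiling later shows this in a hot path, swapping in a HashSet-backed version is a local change; paying that cost up front on every collection operation is the premature optimization the answer argues against.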

Moore's law ended about two years ago. It might be time to start thinking about concurrency and using those extra cores. That's the only way you're going to get cheap clock cycles in the future.
– Robert Harvey♦ Nov 9 '10 at 4:50

To be correct, Moore's law is still going strong, with the number of transistors on a chip doubling approximately every 2 years, which is what is enabling the placement of multiple cores on a single die. What has ended is the 'free lunch' of ever-escalating clock speeds.
– Dominique McDonnell Nov 9 '10 at 5:12

"Processing and storage is cheap." CPU cache and the bus speed are not. They are the main performance bottlenecks today.
– mojuba Nov 9 '10 at 14:59

I completely agree with this. Maintaining readable code, using appropriate tools for the task, and adhering to your company's agreed-upon standards will significantly reduce your time spent actually typing code into a computer, saving your company a ton of money. Your time is better spent engineering than typing.
– Matt DiTrolio Nov 9 '10 at 15:40

@Bill: Or if you're doing embedded, where you may have hard limits that will significantly increase product cost if you exceed them. Or for server software, sometimes - if somebody could improve processing on Google servers by 1%, that'd be quite a bit of savings.
– David Thornley Nov 9 '10 at 16:17

User experience.

This is the only value that matters to your customer.

Development Time is less important. I can write a fully featured command line application a lot faster than a GUI, but if Mrs. Jane can't figure out how to make it spit out reports she wants, it's useless.

Maintenance is less important. I can repair a seesaw really quickly, but if it's in the middle of a forest, users can't find it.

Processing Time is less important. If I make a car that goes 0 to light speed in 60 seconds, users can't steer.

Aesthetics is less important. I can paint a Mona Lisa, but if she's hidden behind a wall no one gets to see her.

User Experience is the only value that matters. Making an application that does exactly what the user wants in the way the user expects is the ultimate achievement.

Processing Time

My user's time is not cheap. What comes around goes around.

I just upgraded an application I use this last year. They had completely rewritten the app, and boy was it slow. I finally had to buy a new computer to run it quickly. I guarantee you that wasn't cheap, but my time is more valuable.

Interesting take on the processing time slant. Care to share what type of applications you develop? I am intrigued.
– Jason Whitehorn Nov 9 '10 at 4:21

Processing time is important if you run on lots of computers. For example, if it's a choice between spending an extra 2 months optimizing, or upgrading 10,000 PCs to newer hardware, in that case the developer's time does not win out. But of course, it's a compromise. If you only run on half a dozen servers, the developer's time likely wins out in that case.
– Dean Harding Nov 9 '10 at 4:33

@Jason, I have it easy right now, working with Excel and VBA in a conglomeration of spreadsheets (which I've been condensing rapidly). My users work in the next room, and they let me know if I have any problems. My perspective comes from using computers for thirty years, and watching applications keep bloating up, forcing upgrades just to compensate. I know that developers can do better; they just have to get in the habit of writing efficient code.
– user1842 Nov 9 '10 at 5:04

+10 for the efficient code. That's far too often overlooked, especially in modular programming. Every module runs at a reasonable speed, but the sum of all can be horrendously slow.
– Joris Meys Nov 9 '10 at 21:50

I tend to slant towards limiting memory consumption and allocations. I know it's old school, but:

Most of the non-throwaway code I write is heavily parallel. This means that excessive memory allocation and garbage collection activity will serialize a lot of otherwise parallelizable code. It also means there will be a lot of contention for a shared memory bus.

My primary language is D, which doesn't have good 64-bit support yet (though this is being remedied).
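The answer is about D, but the first point can be sketched in Java (illustrative code, not from the answer): allocating a scratch buffer inside every parallel task keeps the allocator and garbage collector busy across threads, while a per-thread reusable buffer keeps the hot loop allocation-free.

```java
import java.util.stream.IntStream;

public class ReuseBuffers {
    static final int CHUNK = 1024;

    // Allocation-heavy: a fresh scratch array per task means every parallel
    // worker is hitting the shared heap and feeding the garbage collector.
    static double wasteful(int task) {
        double[] scratch = new double[CHUNK];
        for (int i = 0; i < CHUNK; i++) scratch[i] = i * (task + 1);
        double sum = 0;
        for (double v : scratch) sum += v;
        return sum;
    }

    // Allocation-free inner loop: each thread reuses one scratch array,
    // so parallel workers never contend on allocation.
    static final ThreadLocal<double[]> SCRATCH =
            ThreadLocal.withInitial(() -> new double[CHUNK]);

    static double frugal(int task) {
        double[] scratch = SCRATCH.get();
        for (int i = 0; i < CHUNK; i++) scratch[i] = i * (task + 1);
        double sum = 0;
        for (double v : scratch) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        double a = IntStream.range(0, 100).parallel()
                .mapToDouble(ReuseBuffers::wasteful).sum();
        double b = IntStream.range(0, 100).parallel()
                .mapToDouble(ReuseBuffers::frugal).sum();
        System.out.println(a == b); // same answer, far fewer allocations
    }
}
```

The arithmetic is identical either way; the difference only shows up under a profiler as allocation rate and GC pauses, which is exactly the serialization of parallel work the answer describes.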

Memory hogging programs can be run on 64-bit systems, by and large. That's what we did when one of our apps ran into memory issues (it legitimately uses large amounts of memory). The first bullet point is important when performance is.
– David Thornley Nov 9 '10 at 15:11

I would say I optimise toward efficiency, with efficiency being defined as a compromise between development time, future maintainability, user-experience and resources consumed. As a developer you need to juggle all of these to maintain some kind of balance.

How do you achieve that balance? Well, first you need to establish a few constants, such as what the deadline is, what hardware your application will be running on and what type of person will be using it. Without knowing these you cannot establish the correct balance and prioritise where it is needed.

For instance, if you are developing a server application on a powerful machine, you might want to trade off some performance to ensure you hit an immovable deadline. However, if you're developing an application that needs to respond quickly to user input (think a video game), then you need to prioritise your input routine to ensure it is not laggy.

Whatever virtualization technology I'm using

Remember the days when systems with more than 512 MB of RAM were considered bleeding edge? I spend my days writing code for machines like that.

I work mostly on low level programs that run on the privileged domain in a Xen environment. Our ceiling for the privileged domain is 512 MB, leaving the rest of the RAM free for our customers to use. It is also typical for us to limit the privileged domain to just one CPU core.

So here I am, writing code that will run on a brand new $6k server, and each program has to work (ideally) within a 100 KB allocated ceiling, or eschew dynamic memory allocation completely.

Concisely, I optimize for:

Memory footprint

Sorts (where most of my code spends most of its time)

I also have to be extremely diligent when it comes to time spent waiting for locks, waiting for I/O or just waiting in general. A substantial amount of my time goes into improving existing non-blocking socket libraries and looking into more practical methods of lock-free programming.

Every day I find it a little ironic that I'm writing code just like I did 15 years ago, on systems that were bought last month, due to advancements in technology.

This is typical for anyone working on embedded platforms as well, though even many of those have at least 1 GB at their disposal. As Jason points out, it is also typical when writing programs to be run on mobile devices. The list goes on: kiosks, thin clients, picture frames, etc.

I'm beginning to think that hardware restrictions really separate programmers from people who can make something work without caring what it actually consumes. I worry (down-vote me if you must) about what languages that completely abstract away type and memory checking are doing to the collective pool of common sense that (used to be) shared amongst programmers of various disciplines.

+1 for the memory foot print angle. I've never coded against the particular constraints that you are dealing with, but remove the first section talking about Xen and replace that with iPhone and I know exactly where you are coming from :-)
– Jason Whitehorn Nov 9 '10 at 12:15

Research Results

As an academic, I figured I should share what I optimize for. Note that this isn't quite the same as optimizing for a shorter development time. Often it means that the work might support some research question, but not be a deliverable, polished product. This might be viewed as an issue with quality, and it could explain why many say that (academic) computer scientists don't have any "real world" experience. (E.g., "Wouldn't they know how to develop a deliverable product otherwise?")

It's a fine line. In terms of impact, you want your work to be used and cited by others, and Joel's Iceberg Effect comes into play: a little polish and shine can go a long way. But if you aren't making a foundation for other projects to be built on, you just might not be able to justify the time spent making a deliverable product.

Most of what I do is constrained heavily by processing capability and memory, but does not go through very many, if any, significant changes in the average year.

I have in the past worked on projects where the code is changed frequently, so maintainability becomes more important in those cases.

I have also worked on systems in the past where the amount of the data is the most significant issue, even on disk for storage, but more commonly the size becomes an issue when you have to move the data a whole lot, or over a slow link.

Expressiveness of my intent.

I want someone reading my code to be able to easily see what operations I was trying to invoke on the domain. Similarly I try to minimize non-semantic junk (braces, 'function' keywords in js, etc) to make scanning easier.

Of course you gotta balance that against maintainability. I love writing functions that return functions and all sorts of advanced techniques, and they DO further my goal, but if the benefit is slight I will err on the side of sticking to techniques that solid junior programmers would be familiar with.

All of them

Processing time

Today's computers are fast, but far from fast enough. There are many, many situations where performance is critical, such as streaming media servers.

Storage

Your customer might have a big disk, say 1 TB. That can be taken up by 1,000 HD movies; if you want to build a service on it, that's far from enough, isn't it?

Development time

Well, I'm not sure if this counts as "optimization", but what I do is use Java instead of C++, and development gets 10 times faster. I feel like I'm telling the computer what I think, directly; very straightforward, and it totally rocks!

BTW, I believe that to speed up your development process you should choose Java; never try rubbish like Python... which claims it can shorten your dev time.