I was wondering how software optimization and hardware optimization compare when it comes to the impact they have on speed and performance gains of computers.

I have heard that improving software efficiency and algorithms over the years has made huge performance gains. Obviously both are extremely important, but what has made the bigger impact in the last 10, 20 or 30 years?

And how do hardware changes affect the software changes? How much of software optimization is a direct result of hardware improvements and how much is independent of the hardware?

To be clear, I am asking about software optimizations at the level of compilers and operating systems. Obviously using better high-level algorithms will result in the largest speedups (think: quicksort vs. bubble sort), but this question is about the underlying reason why computers are faster today, in general.

Hardware is maxed out by the rate at which items are processed; software's execution sequence determines that rate of processing. An invention/improvement in one definitely affects the other (for the better :-)
– PhD, May 23 '12 at 15:55

With software, if your predecessor was a moron, you can get a speedup of 100x. (As I did, just by keeping a tail pointer in a singly linked list.) You are much less likely to get that kind of speedup in hardware.
– Steven Burnap, May 23 '12 at 20:08
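The tail-pointer fix mentioned in the comment above can be sketched as follows (a minimal Python sketch; the class and method names are hypothetical, not the original code):

```python
class SinglyLinkedList:
    """Singly linked list that keeps a tail pointer for O(1) appends."""

    class _Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    def __init__(self):
        self.head = None
        self.tail = None  # without this, append must walk the whole list: O(n)

    def append(self, value):
        node = self._Node(value)
        if self.tail is None:      # empty list
            self.head = node
        else:
            self.tail.next = node  # O(1): jump straight to the end
        self.tail = node

    def to_list(self):
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next
        return out
```

Appending n items is O(n) overall with the tail pointer, versus O(n²) when every append re-walks the list from the head, which is exactly the kind of gap where a 100x speedup can come from.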

6 Answers

In terms of software, one of the biggest changes in the past 30 years is that we don't write nearly as much low-level code as we used to. For example, software now relies on automatic compiler optimizations rather than hand-written assembly, and makes extensive use of existing frameworks and patterns which have matured over the past few decades. On the other hand, software has become increasingly complex, and there have been corresponding performance hits.

However, hardware capabilities have improved mostly in accordance with Moore's Law, and CPU speeds and memory bandwidth have increased hundreds of times over the past 30 years. Manufacturing processes have improved, allowing components to become smaller and faster because more transistors can be packed together. One of the biggest things that has sped up computers is faster memory access and the use of caching: CPU cache sizes are now bigger than total RAM used to be, and low-level programs have shifted to make better use of this. Also, when 64-bit CPUs became commonplace, a corresponding instruction set (i.e. x86-64, the use of which might still qualify as "software") was required to take proper advantage of them. In that way, it is a combination of improvements in hardware that are taken better advantage of by shifts in software design.
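The cache point can be illustrated by access order. The sketch below is in Python (whose lists don't model caches faithfully), so treat it as a picture of the idea: in a language with flat arrays, the row-order loop walks memory sequentially and benefits from the cache, while the column-order loop jumps a full row per step and tends to miss. The matrix size is an arbitrary choice for illustration.

```python
N = 256
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    """Visit elements in the order they are laid out: cache-friendly."""
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    """Visit one element from each row per step: cache-hostile in a flat array."""
    total = 0
    n = len(m)
    for j in range(n):
        for i in range(n):
            total += m[i][j]
    return total
```

Both compute the same sum; in C with a contiguous array, the row-major version is often several times faster purely because of cache behaviour, with no algorithmic difference at all.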

In short, the biggest incremental strides in performance come from hardware; however, changes to software are often required to make optimal use of new hardware. Neither one really works without the other!

Since he's talking about tools: often the tools are optimized for a specific platform, so in that case it's a combination of the two.
– Paul, May 23 '12 at 19:45

While Moore's law continues to allow more transistors per square inch, the ability to increase clock speed seems to be slowing. So this requires optimization of software: if you don't write it to support multiple cores, you aren't going to see the hardware gains, so threading a program could provide significant improvement. But! If we are only talking about tools, I don't believe most of those provide automatic threading, so I don't know whether this counts...
– Paul, May 23 '12 at 19:48

In fact, most improvements in software performance are only needed because better hardware has allowed computers to tackle much larger and more complex problems. In simplistic terms, when your biggest possible dataset is 64 KiB, it may not matter so much if you use a bubble sort; but bubble sort takes time quadratic in the problem size, so twice the memory (and twice the problem size) means 4 times as much work, and the CPU is only twice as fast... Also, a sophisticated optimizing compiler won't run in 64 KiB anyway.
– Steve314, May 23 '12 at 19:49
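The quadratic-growth point in the comment above is easy to verify by counting comparisons (a small sketch; the worst-case input and the counting helper are my own, for illustration):

```python
def bubble_sort_comparisons(n):
    """Count the comparisons a naive bubble sort makes on n items."""
    data = list(range(n, 0, -1))  # worst case: reverse-sorted input
    comparisons = 0
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            comparisons += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return comparisons
```

Doubling the input from 100 to 200 items takes the comparison count from 4950 to 19900, roughly 4x the work, so a CPU that is merely twice as fast falls behind.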

I have heard that improving software efficiency and algorithms over the years has made huge performance gains

I think what you have heard here is related to better algorithms, but that is what you explicitly excluded from your question.

At the level of operating systems, I don't think many applications run faster today than 10 years ago because the OS has improved (in most cases the opposite is true: most OSes get bigger each year, producing more overhead). The only exception may be better support for multiple processors and parallel computing, but that cannot be seen independently of hardware improvements.

For compilers, there have been some improvements over the decades, but the ones with the most impact on performance were those that support features of new hardware. IMHO parallel computing features (multicore, SIMD etc.) and 64-bit processing are the most important things to mention here. EDIT: there is one aspect where improvements in compiler software have increased application performance by around an order of magnitude in the last 10-20 years: just-in-time compilers, especially the ones for Java (since ~1999), and later JavaScript.

So it comes down mainly to hardware. Looking back 20 or 30 years, it was mainly processor clock speed, where the increase correlated roughly 1:1 with application performance for a lot of applications (that's not perfectly true; the real gain was smaller, since application performance also depends on other hardware components like memory access speed, hard drive speed, GPU speed etc.). In the last 10 years, processor clock speed has not increased much, since processor vendors ran into practical manufacturing limits. Instead, there was a paradigm shift to parallel computing. Multi-core processors, GPU computing (which is also a form of parallel computing), SIMD in the mainstream etc. have dominated the last decade.
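The paradigm shift is visible in code: a program only benefits from extra cores if it is restructured to split work across them. A minimal sketch of that restructuring, using Python's standard thread pool (note that in CPython the GIL prevents real speedup for pure-Python CPU-bound work; the structure shown is what matters, and the gains appear with processes or native code):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, workers=4):
    """Split the data into chunks and sum each chunk concurrently.

    A serial program gets nothing from a multi-core CPU; this chunked
    structure is the software change the new hardware demands.
    """
    chunk = (len(values) + workers - 1) // workers
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))
```

The answer computed is identical to a plain serial `sum(values)`; only the execution structure changes, which is why this kind of optimization sits on the boundary between hardware and software.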

Scheduling algorithms have gotten better, so applications do respond quicker to user input. The difference is tiny, though - possibly even negligible.
– Izkata, May 23 '12 at 18:00

@Izkata: yes, you are right, and I think there are a lot of other aspects where OSes were improved at a low level, but I don't know of any single OS improvement that gave an order-of-magnitude application performance gain.
– Doc Brown, May 23 '12 at 19:35

That mainly covers it; however, in relation to multithreaded/multicore processors and the speedup, I'd mention Amdahl's law. Some problems are hard or even impossible to parallelize, and such problems benefit little or not at all from parallelism.
– usoban, May 25 '12 at 8:13

If hardware gives a speedup ratio of 2x, then that makes everything 2x faster than before.

If a new compiler optimization gives a speedup ratio of 2x, then that also makes everything 2x faster than before.

Together these will make things 4x faster than before.
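The multiplication above holds when each improvement covers the whole program. When an optimization only touches part of the runtime, Amdahl's law (mentioned in a comment above) caps the gain; a small sketch of the arithmetic:

```python
def amdahl_speedup(fraction, factor):
    """Overall speedup when only `fraction` of the runtime is
    accelerated by `factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# Two independent 2x improvements that each apply to the whole
# program compose multiplicatively:
combined = 2 * 2  # 4x overall

# But a 2x software optimization covering only 80% of the runtime:
partial = amdahl_speedup(0.8, 2)  # 1 / (0.2 + 0.4) ≈ 1.67x
```

This is why "everything 2x faster" is the best case: real optimizations usually touch a fraction of the program, and the untouched remainder dominates as the factor grows.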

Before that cheers you up, there's a very old rule: Nature Abhors a Vacuum.
If there's spare room on RAM, somebody will fill it up.
If there's spare room on disk, somebody will fill it up.
If there are spare cycles on your CPU, somebody will use them up.

Don't think so? Find that old discarded laptop with the 500 MHz processor and 3 GB disk and try to install some new software on it.

Programmers (including me) tend to write code that uses up the available cycles, no matter how many the hard-working engineers give us, and not always for more functionality.

30 years ago, a PC had a Z80 microprocessor running at maybe 1 MHz, and if you were lucky, another Z80 running the display.
Today, a typical PC has a CPU with 2 or 4 cores running at 2-3 GHz. It also has a GPU with hundreds of parallel processors, again clocked in the GHz range.
Then we have 8-bit vs. 64-bit bus widths.

With all that, why are computers not running ten thousand times faster than their 30-year-old counterparts? The hardware is. I would argue that software optimizations are not as good as your question supposes.

Assuming no gross algorithmic inefficiency in the software, it can still be unclear where the biggest gains are. Most recent hardware improvements are useless without the software optimization side of things anyway. Is task parallelization and vectorization a software or a hardware optimization? It is both.
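Vectorization is a good example of that "both" answer. Real SIMD happens in hardware, exploited by a compiler or a library; the sketch below only shows the shape of the software-side change, namely rewriting a per-element loop into independent fixed-width lanes that a SIMD unit (or auto-vectorizer) can consume. Function names and the lane width are illustrative assumptions.

```python
def saxpy_scalar(a, x, y):
    """Compute a*x + y one element at a time, as a scalar loop would."""
    out = []
    for xi, yi in zip(x, y):
        out.append(a * xi + yi)
    return out

def saxpy_lanes(a, x, y, width=4):
    """Same computation restructured into independent lanes of `width`
    elements: the access pattern a SIMD unit needs. In Python this is
    just an illustration; a C compiler turns such loops into one
    vector instruction per lane."""
    out = []
    for i in range(0, len(x), width):
        out.extend(a * xi + yi
                   for xi, yi in zip(x[i:i + width], y[i:i + width]))
    return out
```

The results are identical; the point is that the data layout and loop structure, a software decision, are what let the hardware feature do any work at all.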

Furthermore, speed is sometimes realized through a sea change in hardware capability and software philosophy. For example, due to an explosion of both memory capacity and disk sequential throughput, many software engineers are ditching the concept of seek-heavy relational databases in favor of slamming all the data into memory.

The general point is that hardware optimizations without the corresponding software optimizations often buy you only incremental gains, and the same holds vice versa.