As any ‘family source of computer information’ will testify, every so often a family member will want an upgrade. Over the final few months of 2012, I did this for my brother’s machine, fitting him out with a Sandy Bridge CPU, an SSD and a good GPU to tackle the newly released Borderlands 2, all for free. The only problem he really had up until that point was dismal FPS in RuneScape.

The system he had been using for the two years previous was an old hand-me-down I had sold him – a Core2Duo E6400 with 2x2 GB of DDR2-800 and a pair of Radeon HD4670s in Crossfire. While he loves his new system with double the cores, a better GPU and an SSD, I wondered how much of an upgrade it had really been.

I have gone through many upgrade philosophies over the past decade. My current advice to friends and family who ask about upgrades is this: if they are happy installing new components themselves, then upgrade one component at a time to one of the best in its class, rather than settling for an overall mediocre setup, as much as budget allows. This tends towards outfitting a system with a great SSD, then a GPU, then a PSU, and finally a motherboard/CPU/memory upgrade with at least one of those three being great. Over time the other two of that trio also get upgraded, and the cycle repeats. Old parts are sold and some cost is recouped in the process, but at least some of the hardware is always on the cutting edge, rather than a middling off-the-shelf computer shop system that could be full of bloatware and dust.

As a result of upgrading my brother's computer, I ended up with his old CPU/motherboard/memory combo, full of dust, sitting on top of one of my many piles of boxes. I decided to pick it up and run the system, paired with a top-range GPU and an SSD, through my normal benchmarking suite to see how it fared against the likes of the latest FM2 Trinity and Intel offerings, both at stock and with a reasonable overclock. Certain results piqued my interest, though for normal web browsing and such it still feels as tight as a drum.

My recent testing procedure in motherboard reviews pairs the motherboard with an SSD and an HD7970/GTX580, and given my upgrading philosophy above, I went with these for comparable results. The other systems in the results used DDR3 memory in the range of DDR3-1600 C9 for the i3-3225 to DDR3-2400 C9 for the i7-3770K.

The Core2Duo system was tested at stock (2.13 GHz and DDR2-533 5-5-5) and with a mild overclock (2.8 GHz and DDR2-700 5-5-6).

Gaming Benchmarks

Games were tested at 2560x1440 (another ‘throw money at a single upgrade at a time’ possibility) with all the eye candy turned up, and results were taken as the average of four runs.

Metro2033

The E6400 makes an admirable effort, and overclocking helps a little, but the newer systems retain the edge. Interestingly the difference is not that large, with an overclocked E6400 coming within 1 FPS of an A10-5800K at this resolution and settings when using a 580.

Dirt3

The bump by the overclock makes Dirt3 more playable, but it still lags behind the newer systems.

Computational Benchmarks

3D Movement Algorithm Test

This is where it starts to get interesting. At stock the E6400 sits at the bottom, though within reach of an FX-8150 at 4.2 GHz, but with an overclock the E6400 at 2.8 GHz easily beats the Trinity-based A10-5800K at 4.2 GHz. Part of this can be attributed to the way the Bulldozer/Piledriver CPUs handle floating point calculations, but it is incredible that a July 2006 processor can beat an October 2012 model. One could argue that a mild bump on the A10-5800K would put it over the edge, but in our overclocking of that chip anything above 4.5 GHz was quite tough (we perhaps got a bad sample for overclocking).

Of course the situation changes when we hit the multithreaded benchmark, with the two cores of the E6400 holding it back. However, if we were using a quad core Q6600, stock CPU performance would be on par with the A10-5800K in an FP workload, although the Q6600 would have four FP units to calculate with and the A10-5800K only has two (as well as the iGPU).
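For context on what a benchmark like this stresses, here is a minimal single-threaded sketch in the spirit of a 3D particle movement test. This is not the actual 3DPM code, just an illustrative floating-point workload built on the same idea of particles taking random steps on a unit sphere, so it leans on the FPU (sin/cos) rather than memory or thread scheduling:

```python
import math
import random
import time

def move_particles(n_particles=10000, n_steps=100):
    """FP-heavy toy benchmark: each particle takes random unit-length
    steps in 3D, with the direction drawn via spherical coordinates."""
    random.seed(42)  # fixed seed so repeated runs do identical work
    x = y = z = 0.0
    for _ in range(n_particles):
        px = py = pz = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, math.pi)      # polar angle
            phi = random.uniform(0.0, 2.0 * math.pi)  # azimuthal angle
            s = math.sin(theta)
            px += s * math.cos(phi)
            py += s * math.sin(phi)
            pz += math.cos(theta)
        x += px
        y += py
        z += pz
    return x, y, z

start = time.perf_counter()
result = move_particles()
elapsed = time.perf_counter() - start
print(f"1M particle steps in {elapsed:.3f} s")
```

A loop like this has essentially no working set, which is why an old dual core with weak memory bandwidth can still post a competitive single-threaded score when its clock and FP throughput are decent.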

FastStone Image Viewer

Despite FastStone being single threaded, the increased IPC of the later generations usually brings home the bacon - the only exception being the Bulldozer-based FX-8150, which is on par with the E6400.

Xilisoft Video Converter

Similarly with XVC, more threads win the day in this integer-heavy workload.

x264 HD Benchmark

Conclusions

When I start a test session like this, my first test is usually 3DPM in single thread mode. When I got that startling result, I clearly had to dig deeper, but the conclusion produced by the rest of the results is clear. In terms of actual throughput benchmarks, the E6400 is comparatively slow to all the modern home computer processors, either limited by cores or by memory.

This was going to be obvious from the start.

In the sole benchmark which does not rely on memory or thread scheduling and is purely floating point based, the E6400 gives a surprise result, but nothing more. In our limited gaming tests the E6400 copes well at 2560x1440, with that slight overclock making Dirt3 more playable.

But the end result is that if everything else is upgraded, and the performance boost is cost effective, even a move to an i3-3225 or A10-5800K will yield real world tangible benefits, alongside all the modern advances in motherboard features (USB 3.0, SATA 6 Gbps, mSATA, Thunderbolt, UEFI, PCIe 2.0/3.0, Audio, Network). There are also significant power savings to be had with modern architectures.

My brother enjoys playing his games at a more reasonable frame rate now, and he says normal usage has sped up a bit, making watching video streams a little smoother if anything. The only question is where Haswell will come into this, and that is a question I look forward to answering.


136 Comments

I have thought about this. One issue is that as you go further back, the connections change. Moving back to IDE, AGP and beyond shifts where the bottleneck is, and it is harder to keep consistency between the setups. Go far enough back and an OS change is needed too, which puts a different spin on things. What may be a 10 second benchmark today could be a 48 hour test if you go back too far :) Although I do wish I had more AMD chips for comparison in these graphs, such as the Athlon, Athlon X2, Phenom and the like.

That's true, but since we can't use just the CPU, we use the system; using the hardware that was available at the time for each system provides the relevant results you would be looking for. On the software side it might be hard to find the best benchmarks, since ideally you would have to use the same version of the software. In the end you should be able to figure out a reasonable solution, and I do hope you find the energy to give it a try. Including ARM would be fun too, but would be too limiting on the software side.

The only thing I can think of is something similar to SuperPi. It only tests the CPU, but it's probably the only thing that could be tested on all machines no matter what age they come from and what OS they use. I have a working IBM-compatible 286 computer from 1986 at my parents' house; would be fun to compare that to something more modern ;D

Should you decide to give it a try some time, Linux would take much of the OS incompatibility away, and a game like Spring RTS would be ideal for testing single threaded CPU performance by watching the same replay on each machine and noting the min/max/avg FPS (and, on the really old stuff, the time to complete the run).

A PCI/SATA card would also allow the use of an SSD, providing more I/O performance than those machines could ever saturate and thus eliminating that bottleneck.

Would be one hell of a project though and I'm sure people here would be willing to donate hardware to the project. I for one could contribute a couple of Athlon64/X2 CPUs.

I'm sure ATI released an AGP card not long back as well, which would keep that bottleneck away (other than the interface itself, but that's all part of the evolution).

I think for the vast majority of home computer use, a Core2 is comfortably fast enough for most people. I'm actually running dual Core2 based Xeons and a Radeon 5870 with a 10k Raptor from 2006 and 4GB of RAM, and I never have any issues doing what most people do at home - web surfing, Netflixing, a bit of light gaming (in fact, the Radeon 5870 might be overkill for that last part).

With enough RAM and fast enough storage, these machines could last a very long time, especially if OSes and apps stay constant or even speed up slightly.

I always enjoy seeing some older hardware compared to the latest stuff. Gives a clear perspective on just how large a difference is really there.

Those chips can overclock significantly further though. When Core 2 came out I was among the first to buy in, with the E6300 and a budget overclocker board from Gigabyte. It would hit 3.5 GHz easily at reasonable temperatures on a top-end cooler for sustained load operation (F@H). Going from a 965P to a midrange P35 board allowed me to attain that golden 100% overclock at the same voltage (1.86 GHz to 3.73 GHz), which did wonders for performance, as these results from a smaller 670 MHz boost illustrate.

Games love having that integrated memory controller. But for the CPU-centric tests I'd still love to see how a 3.4 GHz or higher Q6600 would fare, especially against AMD's offerings.

About a year ago I upgraded an old Core 2 Duo computer and was extremely pleased with the results.

It was a Dell Optiplex 755 desktop with a 2.33 GHz CPU. Originally, it only had 2 GB of DDR2-667 RAM (two 1 GB sticks), a sucky 80 GB hard drive and even more sucky on-board graphics. I went to the Dell website, entered the machine's Service Tag number, and discovered that it could be upgraded to 8 GB of RAM using four 2 GB sticks. At the time, DDR2-800 RAM was still cheap (although prices have gone up recently), so just for the hell of it, I pulled the DDR2-667 sticks and replaced them with 8 GB of DDR2-800 I bought online. Then I replaced the 80 GB hard drive with a 120 GB SATA II SSD. Finally, I bought an ATI 6750 single slot graphics card with 1 GB of GDDR5 and a 128-bit bus. Although I would have preferred a more powerful graphics card with a 256-bit bus, I was limited to a single slot solution because the CPU fan shroud was too close to the PCIe x16 slot to accommodate a dual slot card. Oh, yeah, and I upgraded the OS from 32-bit WinXP SP2 to 64-bit Win7 SP1.

The new WEI numbers were impressive. Although the CPU stayed at 5.8, the RAM went from 5.8 to 6.1, the graphics went from 3.4 to 7.3, and the disk I/O score went from 5.6 to 7.8.

Including the cost of a new 650 watt power supply (necessary because the old 350 watt Dell power supply didn't have the 6-pin connector needed for the graphics card), the total upgrade cost came to about $350. Keep in mind that this machine (with a DVD burner and CD-ROM) originally sold for about $750. So for less than 50% of the original cost I wound up with a computer that boots in 25 seconds, plays 1080p H.264 video, and runs most games at 1920x1080 with medium settings.

I agree with the poster who said 2560x1440 gaming was a poor choice for your review. 1080p scores would have been far more useful to Core 2 Duo owners. I also agree that Core 2 Duo owners don't care about the multi-threaded benchmarks you included. Let's face it, the average computer user doesn't do advanced encoding and such, and anyone who does would have junked their Core 2 Duo machine long ago.

Although I have several more modern computers at my home and office, I find that this upgraded Dell is worth keeping around, probably for another 2 or 3 years. Although it looks like 2560x1440 monitors will become more popular as time goes on (and prices drop), the average user will probably still be using 1080p monitors for a long time to come, so an upgraded Core 2 Duo is still a worthwhile project.

Made a quick calculation comparing the single-threaded 3DPM bench of the i7-3770K and the stock C2D. Taking the difference in clock speeds into account, the i7 turns out to be merely 4% faster per clock (assuming full turbo boost). Has the IPC really not improved, or is it simply a matter of the benchmark not using AVX or any of the other new extensions?
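For anyone wanting to redo that back-of-the-envelope figure, the method is simply score per GHz. The scores below are hypothetical placeholders to show the arithmetic, not the actual chart values; the clocks assume the i7-3770K holds its full 3.9 GHz turbo against the E6400's stock 2.13 GHz:

```python
# Clock-normalized comparison: divide each score by its clock speed,
# then compare the per-GHz results. Scores are illustrative only.
def per_clock(score, clock_ghz):
    """Return benchmark score per GHz of clock speed."""
    return score / clock_ghz

e6400_per_ghz = per_clock(100.0, 2.13)  # E6400 at stock (placeholder score)
i7_per_ghz = per_clock(191.0, 3.9)      # i7-3770K at full turbo (placeholder)
gain = i7_per_ghz / e6400_per_ghz - 1.0
print(f"per-clock advantage: {gain:.1%}")  # → per-clock advantage: 4.3%
```

A small per-clock gap in this one test would fit the article's observation that 3DPM's single-threaded loop is pure legacy floating point, so newer extensions like AVX never get a chance to widen the gap.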