
136 Comments

Thanks for posting these results, since it is much more realistic to have people upgrading from a Core 2 to Ivy/Haswell/Piledriver than to have somebody who just ran out and got a high-end Ivy Bridge two months ago upgrading to Haswell.

Despite all the talk that modern CPUs aren't getting any faster, your benchmarks make it clear that even in single-threaded performance there is a big step up from Core 2 to more modern systems, and Haswell will continue the trend.

I upgraded from a C2D E6300 to an i5-3570K for the sole purpose of gaming, and the perceptible difference is substantial. Not to mention my SSD became much faster with the addition of SATA 6 Gbps, which isn't available on the previous platform. If you only play games older than 2010, an upgrade isn't necessary. But for anything beyond that, a CPU upgrade is very, very useful.

The benchmarks run are the ones from my motherboard testing suite. Go look at the motherboard reviews to see the parity between the two. As part of those reviews Metro 2033 and Dirt 3 are my primary gaming benchmarks.

As a trained scientist and academic with peer-reviewed and published papers, I should take offence to your supposition that a conclusion was made before anything was written. No definite conclusion was made before testing began at all - only that the newer platforms would perform better in the testing suite. If you know more about my presupposition before testing than I, then praise be your powers of clairvoyance - there are institutions that will pay good money if you can prove it under laboratory conditions. If anything I thought the gap between C2D and the modern Ivy Bridge would be larger than the results I got, and the 3DPM ST result stunned me to begin with.

In case you missed it, the brief overview was to see how much of an upgrade my brother got in terms of my normal testing routine.

Well said, Ian! Have you ever noticed how some people like to make ridiculous statements in this and other forums, just to try and get an argument started?

I thoroughly enjoyed reading the results of your comparison. I am using an HP Pavilion Entertainment Laptop PC with a Core 2 Duo processor. It is still great for word processing, spreadsheet, database and email tasks, as well as surfing the web. I do not play games. This laptop serves my purposes perfectly and I do not plan to upgrade, except maybe the operating system; this laptop shipped with Vista.

I don't have the $ to get a new rig, so recently I've decided to upgrade my 6-7 year-old desktop. It's spent many years in my basement because I was using a laptop instead. I didn't want to spend a lot, as the whole thing is meant to serve as a "hot fix" until I can throw some cash at a brand new desktop machine.

All I bought was a decent SSD, 2 GB more RAM and one of the cheapest GPUs I could find to run two 1080p monitors and be able to play Blu-ray quality videos enjoyably (I'm not a gamer or anything, so it's fine).

Yes, the SSD is SATA 3 and my mobo only supports SATA 2 so the speed of my SSD is limited, and yes, 4x1 GB of RAM performs worse than 2x2 GB would. But still, although I couldn't agree more with the conclusion of the article (more on that later), the performance gains of the upgrade were huge.

I only use my rig for studying, communicating, browsing, working (a lot) in Excel, mindmapping, watching movies, and playing online poker on 16-24 tables across multiple poker sites while using tracking software. And, more often than not, several of those things simultaneously. Before the upgrade, multi-tabling was almost impossible, and when I was studying or researching stuff (meaning Chrome with 15+ tabs in multiple windows; Outlook, OneNote, Word and Excel running; Viber, Skype, qBittorrent and the like in the background) it was a real headache. A restart took almost 3 minutes. I didn't know what to expect from the upgrade. I knew an SSD would speed things up and adding more RAM wouldn't hurt either, but in real life that translated into a more than noticeable performance improvement.

After clicking 'restart', I have full control over my desktop within 80 seconds (don't forget that the BIOS part of the boot sequence on these ancient mobos takes a lot longer than on recent ones), and that boot time now includes loading Outlook, OneNote and Chrome with all the windows and tabs from my last session, because I've added them to my startup programs. Before the upgrade, this would've been a very, very stupid thing to do. Now, right after entering my password, these apps just pop up like it's nothing.

Even when many applications are running, the system loads new ones INSTANTLY, and while I was forced to close tabs and disable some extensions in Chrome to reduce memory load before the upgrade, I haven't had to do that since. I literally haven't experienced ANY LAG. So, for just a bit more-than-average load, it could seem like a good idea to upgrade your old machine. With the proper parts it gets the job done really well. Way better than I thought, to be honest.

BUT!!!!!

First of all, with an old system like that, I'm missing features and connectivity that are standard in a modern-day PC. And many of these make a noticeable difference, even if you don't want to render 6K videos or do hardcore gaming or Photoshopping. I can't make use of the full potential of my SSD because I only have SATA 2 (half the bandwidth of SATA 3), and I can't transfer large amounts of data quickly to external devices due to the lack of USB 3 ports (I copied 800 GB to an external HDD yesterday, and at 30 MB/s it was by no means a seamless experience…). DDR2 is also heavily outdated compared to DDR3. My RAM's maximum transfer rate is somewhere around 2 GB/s as I recall, and recent chips are easily 3-4 times faster than that; not to mention response times, etc. I'm no expert but I think those differences are so huge they could actually be noticeable even under average use.
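To put that USB bottleneck into perspective, a back-of-the-envelope calculation for the 800 GB copy above looks like this (a quick sketch; the 30 MB/s figure comes from the comment, while the 150 MB/s USB 3.0 number is an assumed typical external-HDD speed, not a measurement):

```python
# Back-of-the-envelope copy times for the 800 GB example above. The 30 MB/s
# USB 2.0 figure comes from the comment; the 150 MB/s USB 3.0 external-HDD
# figure is an assumed typical value, not a measurement.
def transfer_hours(size_gb, speed_mb_s):
    """Hours needed to move size_gb gigabytes at speed_mb_s megabytes/sec."""
    return size_gb * 1000 / speed_mb_s / 3600

print(f"USB 2.0: {transfer_hours(800, 30):.1f} h")   # roughly 7.4 hours
print(f"USB 3.0: {transfer_hours(800, 150):.1f} h")  # roughly 1.5 hours
```

Roughly seven and a half hours versus an hour and a half, which is why the lack of USB 3 ports hurts even under "average" use.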

Also, prices of old but powerful components are unreasonably high. After upgrading and realizing all the benefits, I was playing with the thought of maxing out my mobo with one of the most powerful CPUs for the same socket (Intel's quad-core Q6600, that is) and 8 GB of 800MHz DDR2 RAM. I thought old stuff would be cheap, so I'd spend almost nothing and use a decent desktop while saving money for a new one. Not really. The Q6600 sells for $280 and 4x2 GB of quality 800MHz DDR2 RAM is also over $200 (these are prices from Amazon for good measure; in my country they are even more expensive). So I'd have to shell out about $500 for outdated technology that is way below even mid-class by today's standards. Not to mention reliability, power consumption, compatibility, support, etc…

Conclusion:


Have an old desktop? Thinking of upgrading and using it for a few more years? DON'T. Get a new one. Don't have the money and want/need good performance ASAP? Get a decent SSD (it'll work with your future rig, so why not?) and some cheap RAM. Well… 30 bucks for 2 GB is certainly not cheap, but for a few months of better performance, I'm sold. There you go. You have a neat and completely functional 'temporary PC' for $30 (plus the SSD, but you're going to use it anyway).

I don't think anyone has said CPUs aren't getting faster, just that we've reached a point where even entry-level CPUs are "good enough". You have to spend a great deal of money just for small bumps in performance that most people won't even notice with their workloads.

I don't anticipate upgrading my i5-2500k @ 4.4GHz for quite a while, unless something unexpectedly craps out. I also generally look at AMD solutions for notebooks since the CPU performance is more than acceptable while the integrated GPU is far superior to Intel's, especially as GPU-acceleration is becoming more common.

There's a lot of moaning about the "5-15%" increase in performance in existing applications that Haswell is supposed to bring about. As this article shows quite clearly, it isn't about existing applications; it's about how future applications run. Look how terribly the C2D performs in comparison to even AMD machines in modern applications. Six years down the road, the C2D chokes. Even a dual-core i3 fares much, much better.

I'm curious as to the FX's performance in FP tests. Sure, there's one FPU per two cores, but each FPU is theoretically twice as capable at 128-bit calculations as any other FPU. They must've really castrated the units, which might also explain the poor AVX performance compared to SB and IVB. Bet they're regretting that now.

It's not that simple. Remember that most of the code is Intel optimised. If you want a comparable result on FP you would have to use a 128-bit optimised code path. Their strategy is totally different: if you want heavy FPU calculations, then use the on-die GPU cores.

Future applications? For your parents'/brother's/grandparents' PC? They will still be Office 2002, an email client of some description (but usually gmail/hotmail/yahoo via browser), a browser aaaaand maybe some sort of movie player (VLC?). That last one is a big maybe too.

6 years down the line? They are likely to be using the same things still.

And no offence but surely the upgrade time should then be when the c2d chokes as opposed to when it still works ok? :)

For geeks like us, yes, a C2D is long since past its prime. But we've also long since upgraded to whatever suits us better, no?

The REAL problem is that in using modern PCs what one should care about in terms of UI is not throughput but "snappiness". The issue is no longer "how long will it take to transcode my mp3s", it's "does the machine respond instantly to everything I do? How often do I have to wait?" Obviously SSDs have done a lot to move us into this world.

The problem for a site like AnandTech, then, is that classical benchmarks do a truly lousy job of quantifying "snappiness". We need a new set of relevant benchmarks. You see the same sort of thing (at a more virulent level) in Android vs iOS fights, where both sides claim that their OS runs "smoother", more "like butter", but once again, in the absence of actual numbers, both sides are talking past each other. (And the situation is not helped by the fact that the last iPhone the Android guy used was an iPhone 3GS, while the last Android phone the iOS guy used was a Galaxy Nexus. Vague recollections of a phone three years old, and ZERO actual numbers, do not make for enlightening discussion.)

A further complication is that, for internet interaction, the properties of your TCP stack (perhaps tweaked), your router, and your ISP (does it offer "turbo boost" for the first 1MB of downloads?) all also affect perceived snappiness, so it's no longer about the pure CPU+RAM.

I don't have an answer, but I do see these sorts of benchmarks as becoming less and less relevant every year, and the web site that comes up with an alternative will RULE this space.

Snappiness has been solved since 2009, when SSDs and Windows 7 (RC) came around. Since then, I haven't owned a laggy computer. Even my wife's 2nd-gen MacBook Air (Core 2 Duo, 2 gigs of memory) is butter all the time.

On smartphones, it was a talking point when powerhouse Android phones running Gingerbread were choking compared to old iPhones and Windows Phone 7 devices. Ever since Ice Cream Sandwich: snappiness achieved.

Microstutter can still be an issue when gaming. Depending on the focus of the site, and on whether all the reviewers are gamers, the best ways to evaluate this are either to collect the time between each frame and measure the number of times a frame is slow, like Tech Report does; or, like [H]OCP, to play parts of the games at various settings to determine which combination gives the best graphics while remaining fast enough to be playable.

The latter is IMO the gold standard, but is only really doable if all the reviewers are gamers. Tech Report's data gathering is more like the common FPS numbers in that anyone with a loaded Steam account can collect the data.
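The Tech Report-style analysis mentioned above is simple enough to sketch: given a log of per-frame render times, sum the time spent beyond a target and look at a high percentile rather than the average. (A rough illustration only; the list-of-milliseconds input format and the 16.7 ms / 60 fps threshold are assumptions.)

```python
# Tech Report-style frame-time analysis: rather than average FPS, measure
# how much time is spent on frames slower than a target (16.7 ms = 60 fps)
# and what the worst-case percentile looks like. The list-of-milliseconds
# input format is an assumption (e.g. parsed from a frametimes log).
def time_beyond(frame_times_ms, threshold_ms=16.7):
    """Total ms spent beyond the threshold, summed over all slow frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def percentile_frame_time(frame_times_ms, pct=99):
    """Frame time at the given percentile; a high value means stutter."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

frames = [14, 15, 16, 15, 40, 14, 15, 33, 15, 14]  # made-up sample, in ms
print(time_beyond(frames), percentile_frame_time(frames))
```

Two logs with identical average FPS can produce very different numbers here, which is exactly why the frame-time method catches microstutter that plain FPS averages hide.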

One could measure maximum and average latencies between "sending task out" and "finished". It's a tough job to select meaningful benchmark scenarios suitable to such measurements and to interpret that data carefully. Anyway, pioneering work could be done here.

Yup, I'm going to keep my 3.6GHz E8400 for a little longer now. I thought it would significantly bottleneck if I put a decent GPU alongside it, but now I know that I can get 95% of the potential gaming performance with this highly clocked CPU.

Well I daresay this (fairly poor by Anand's standards) "review" shows that actually the ANCIENT E6400 is still good enough for most people.

And that there is barely any reason to upgrade. But yeah all them "modern advances" which are once again only very arguably useful... (thunderbolt? really? UEFI? for your family's pc? USB 3.0 is also pretty pointless atm)

In the end a pointless review. Ancient cpu's are a bit slower than new ones. Well yeah? We all knew that.

It's one thing to know, another to actually publish results in a testing environment to show the difference. Oh right, but you already knew. Then why bother reading, surely it's a waste of your time?

For the record, my father and grandfather already take advantage of USB 3.0, with my father moving his band music around and the grandfather with his veteran meets. Stick in some WiDi for less cables, or a simple UEFI so if something goes wrong I can tell them what to do over the phone with a lot less hassle.

This is despite the fact that I'm upgrading my brother's computer here and testing his old one. He plays a lot of games, he watches a lot of paintball streams and often does both across two monitors at the same time. As stated, he feels a benefit, and the numbers in the regular testing can at least quantify that to some degree based on the upgrade methodology listed.

Your bench shows it clearly: most Core 2 owners, especially owners of an E8xxx or Q9xxx, can still hang onto their rig, upgrade their GPU and get an SSD for an overall speed-up. The DDR2 may limit the performance, but it's not like kits of low-timing DDR2-1066 aren't available, and they may provide better performance than generic DDR3-1333. And there are lots of second-hand socket 775 CPUs and motherboards (some DDR3), which may provide an incentive to fix instead of replace.

Hm, halving the time it takes to unpack a .rar or doubling/tripling/quadrupling the speed of my encodes is not a trivial thing. If all you do is game and use Word, maybe hanging on to the old stuff is good. But I think this shows the benefits of upgrading quite clearly. I'd love to see something similar done with the CPU I have (3.8GHz i7 860). I'm holding off on upgrading that for Haswell if I have the money then; I don't want to upgrade to a dead platform again. :D

Actually I think you've got it backwards. Doubling/tripling/quadrupling the speed of encodes isn't a big deal at all to me, and I'd guess to most people. They spend MUCH more time gaming than bootlegging ripped DVDs or the occasional re-encode of a video file for YouTube. A few more minutes to finish an encode isn't a big deal to most people, yet if they can't get playable framerates in Diablo 3 or SimCity then they need to upgrade. It's about whether or not their computer can run their apps rather than how fast they are. If a game is a slideshow then it is unplayable; if you encode at 3 fps then you still get it done, it just takes longer. It all depends on the applications that you run. If you run only office tasks and the internet, then even my parents' single-core Sempron 140 feels snappy working in Word, taking out red eye from a photo, or even with web video, because the 780G graphics accelerates it.

Also, is unpacking RARs really CPU-limited? It's always been storage limited for me. 7zip seems to write the RAR as it unpacks it into a temporary file, then copy it and delete the temp file - 2x the optimal disk writes. Any more efficient and equally functional ZIP/RAR software?

I used to think this as well, but you may be surprised if you benchmark it. The suffix sort in the BWT used by bzip2 and the Markov chain predictors used in LZMA/7zip are actually quite slow, and even a high end CPU may be slower than disk. This is especially true (for compression) if you get a good compression ratio, since you end up writing that much less data to disk.
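One quick way to check this on your own box is to time the codec on purely in-memory data, taking the disk out of the picture entirely, then compare the MB/s figure against your drive's sequential write speed. A rough sketch using Python's stdlib lzma (the same LZMA family 7-Zip uses); the data mix and preset here are arbitrary assumptions:

```python
import lzma
import os
import time

# Time LZMA compression on in-memory data (no disk involved), then compare
# the MB/s figure against your drive's sequential write speed to see which
# side is the bottleneck. preset=1 is the fast mode; higher presets are
# slower and make the CPU look even more like the limiting factor.
data = os.urandom(2 * 1024 * 1024) + bytes(2 * 1024 * 1024)  # 4 MB mixed

start = time.perf_counter()
packed = lzma.compress(data, preset=1)
elapsed = time.perf_counter() - start

mb_per_s = len(data) / (1024 * 1024) / elapsed
print(f"compressed {len(data) >> 20} MB in {elapsed:.2f}s ({mb_per_s:.1f} MB/s)")
```

On a fast modern CPU the in-memory rate may well exceed a hard drive's write speed, but on an older dual core at higher presets it often won't, which is the point made above.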

And this is true. But in the scope of the review above "for most people" (I know that's too general) how big is the rar they open going to be?

1 GB tops, if it's some stolen game. In reality usually even less. So say it's some rar with some photos or some random songs? 200 MB? Takes what, 10 seconds to unpack? With a C2D it would take 20. Oh, the horror.

However, I have used other compression software that doesn't have this issue with drag and drop (most recently, PeaZip - don't use it now as the interface doesn't work well for me) so I'm not sure I trust their explanation of it :-)

You seem to misunderstand my intent: the OP talked like his usage case encompasses nearly all users. I was specifically talking about my own usage cases where there were huge improvements. And I also mentioned that gaming is an outlier case here. So all you have done seems to be to reiterate my own post. I spend about as much time ripping my BDs and DVDs as I do playing these days, and having the ability to get the encode done way faster, as well as having a more responsive system in the meantime, is a huge boon. Btw, not everyone here is a torrent kiddie; I buy my stuff, tyvm.

To be honest I kind of fear for AMD. Trinity didn't really enhance the A-series computationally, so if Intel releases a GT3/GT4 HD graphics i3 part they could seriously encroach on AMD's A-series niche market. Not only that, but as FM2 isn't truly available in Mini-ITX form factors, a graphics-heavy i3 would be even more versatile and enticing from a SFF/HTPC perspective, not to mention laptops.

I know Trinity is only a few months old, but it feels like AMD needs to release something very soon after Haswell in order to stay relevant.

When talking low tech in 2013, my brother still uses the desktop I built him back in 2007 as his sole household PC. It's a single core S754 2.0ghz A64 running the classic nVidia 6100 IGP. I asked him if he wanted me to upgrade it, and he said not yet. :p

He must not click on the HD tab for YouTube video. I have a similar setup at my office for when I forget my laptop. It's a S754 at 2.4ghz with a 9600 Pro, and it feels OK on the web but falls over with video. Putting a real video card in it would fix mine, but it's AGP. The nVidia 6100 is PCIe, so you could actually help him a lot by getting a used 3450 video card for $10 online. I bet that system would last for several more years if it had video acceleration.

Eh? Trinity *did* enhance the A-series computationally as compared to Llano. It wasn't enough to catch Intel (not that anyone seriously expected it would be), but progress is certainly being made, and the Piledriver cores are a good step up from Bulldozer cores.

I don't think AMD needs to equal Intel's CPU prowess in order to stay relevant. They just need to leverage their GPU advantage, which is what they're doing. When using a dGPU, there's no reason to get an AMD CPU. When using an iGPU, there's no reason to get an Intel CPU, IMO, unless all you need is enough GPU to animate UI elements. Although Intel's GPUs in the mobile space are still struggling even with that, Haswell notwithstanding.

If Haswell is all it's hyped up to be (which I don't think it will be), then AMD may be in trouble if they aren't able to enhance their APU offerings to surpass it in graphics prowess fairly quickly.

Thanks for the article, but I object to the screen resolution of 2560x1440 on the two gaming benchmarks.

Is your brother sporting a $1000+ screen ?

Do you think that people with a core2 generally are ?

Your pci-e was only 1.1, there's an awful lot of pci-e 2.0 core2 out there - and higher than E6400.

So the gaming tests were important but useless due to resolution.

OC'ing the CPU was useful, but brought it to a bare minimum, as a lower-clocked Core 2 would make an easy $40 upgrade to, say, an E5700 or something like that, or a $49 E8400 with 6MB cache and easy 3.6GHz potential.

I find the whole idea great, but dropping in a decent cheap cpu and gaming at 1650x or 1920x would have done the article proud for people here IMO.

No one with a core2 gives a darn about the Computational benchmarks.

You can't find C2D chips unless you hop onto eBay nowadays, which makes this sort of 'quick' upgrade more difficult than it was just a few years ago. I agree that this, plus upping RAM to as close to 4GB as possible (a lot of people with earlier C2D systems have 1-2GB), is a much more realistic, painless procedure.

The cheap Korean panels are less than $400. At this resolution and settings, any CPU driven calculations can be a bottleneck. It also helps that all the other results I have for other CPUs were done at that resolution and those settings, otherwise this quick test for a mini article would have run into a couple of weeks of testing, rather than a day.

As for PCIe, it was just the board taken out of an essentially random system, but happened to be the one my brother was using. He has a dual screen setup at home, often running a video/audio stream on one while playing RuneScape almost full-screen on the other. We can always argue 'why didn't you test XYZ', but the truth of the matter is this is what I had to test.

If I didn't upgrade his machine, or didn't have the capacity to, then a motherboard or CPU swap would indeed be on the cards if he couldn't afford a full system. As you note, there are some cheap and cheerful prices to be had for s775, although a jump to Sandy Bridge could be as little as $125 new.

The point was to compare the system using the SAME benchmarks used on the modern systems, that way he could have numbers to compare to. It's not like he actually benched all of those machines for this article, he just benched the core2 system, and used pre-existing benchmarks to compare against.

What I did with a stepfather's system a couple of years back was to get one of those low-cost 45nm C2D Pentiums and add a stick of RAM I wasn't using anymore, replacing an E4300 or equivalent, as well as doubling the RAM. As long as you're not doing video processing or 3D rendering, those late Wolfdales seemed to be more than powerful enough for the job. I've done the same with my system by using a mid-tier Q9x00 CPU. Most of my gaming is console-based these days anyway, and aside from that my only gripe is that I'm stuck with 4GB of RAM till I upgrade to something better.

I'm looking forward to finally upgrading with Haswell, but much of the drive for that's been due to the power-saving features.

By any chance did you also log the minimum and maximum fps for the games?

I found that when I upgraded from my Q6600 (OC 3.2GHz) to a stock 3570K, while keeping the same GPU and SSD, I all but eliminated low fps spikes and max fps nearly doubled. Some games are immensely quicker while others simply no longer stutter.

Interesting. How does that compare to the newer systems? Do you have minimums for those too?

A year and a half ago, I ran a similar upgrade-as-much-as-you-can Socket 939 Opteron machine with a then-new 6870 in it, and when the card was carried over to a new i7 2600 build, it was the minimum frame rates that really improved the most. When the old machine was CPU limited, it was really CPU limited.

Just upgraded my dad from a Pentium D, 4GB DDR2, 9500 GT system to a Core i7 920, 12GB DDR3, GTX 460 768MB hand me down system for his Christmas 2012 present. Let's see if he notices the difference?!

I think the biggest relative upgrade I performed was for my in-laws though. Took them from a Pentium 4 3.0Ghz to a Core 2 Quad q9400. Now that's an upgrade! They're still using that system and it still runs like a champ by the way.

I treated myself to an i7 3770k, 16GB DDR3, SLI 680 system this past fall.

I am so utterly jealous of how much money you have xD. I will be rocking a Core i3-3220 + GTX 650 and a decent Ivy Bridge laptop probably until Broadwell. I might pick up a Surface Pro-like machine with a Broadwell chip, and a quad-core desktop system by then.

Although Broadwell might really be more for mobile, so I might end up waiting for the revision after that to get a CPU, and upgrade my GPU first.

Well, I actually had the GTX 460 768MB as a dedicated PhysX card alongside my original GTX 680. Swapped out the motherboard, CPU, and RAM that I ended up repurposing for my dad's Christmas present, and also repurposed the GTX 460 for his rig. That left an empty PCIe 16x slot in my current rig, which obviously is a dangerous thing for my pocketbook, as I ended up going SLI. Found a great deal on the FS/FT boards that I couldn't pass up. ;-)

I gave my parents a similar bump in two stages a few years ago. From an Athlon 900 and 128 MB of PC-100, initially to an A64 2.0 and 1 GB of DDR2-400, and a year or two later to a 2.4GHz dual core and 3GB of RAM when I retired my S939 box.

Arguably it's due for another upgrade soon, but they almost never use it anymore. Mom mostly uses my old netbook from her recliner, while Dad buys and batters a cheap 15" laptop to death on the road every 18 months or so. The desktop only ever really gets used if the wifi is down, or by my dad to use the networked printer. He only does this because he's terrified that connecting his laptop to wifi will screw up his AT&T dongle's software (this apparently happened once, half a dozen years and at least two dongles ago). I've gone as far as offering to image his drive first to guarantee I can undo anything if it breaks, but he won't let me touch it... *sigh*

Interesting article. I'd be interested to see power consumption differences in the C2D and the latest and greatest as I imagine that may be a more decisive win for the modern hardware than the actual performance difference. Thanks so much for sharing this.

More striking is how little you gain when replacing a 6.5-year-old CPU; 6.5 years used to be an eternity, yet now it's still functional. If you went back 6.5 more years you would land in the Pentium 3 era; think how slow that was compared to Core 2 Duo. And this gave me an idea: it would be cool to test every gen from Intel and AMD, as far back as you can manage to go, and plot it on a timeline to see how perf evolved (and slowed down) - of course you would have to exclude the extreme (pricing) series to get relevant results.

I have thought about this. One issue is that when you get older the connections change. Moving back to IDE, AGP and further back adjusts where the bottleneck is and it is harder to keep consistency between the setups. When you get far enough back an OS change is needed too, which puts a different spin on things. What may be a 10 second benchmark today could be a 48 hour test if you go back too far :) Although I do wish I had more AMD chips for comparison in these graphs, such as Athlon, Athlon X2, Phenom and the like.

That's true, but since we can't use just the CPU, we use the system; using the hardware that was available at the time for each system provides the relevant results you would be looking for. On the software side it might be hard to find the best benchmarks, since ideally you would have to use the same version of the software. In the end you should be able to figure out a reasonable solution, and I do hope you find the energy to give it a try. Including ARM would be fun too, but would be too limiting on the software side.

The only thing I can think of is something similar to SuperPI. It only tests the CPU, but it's probably the only thing that could be tested on all machines no matter what age they come from and what OS they use. I have a working IBM-compatible 286 computer from 1986 at my parents' house; it would be fun to compare that to something more modern ;DReply

Should you decide to give it a try some time, Linux would take much of the OS incompatibility away, and a game like Spring RTS would be ideal for testing single-threaded CPU performance by watching the same replay on each machine and noting the min/max/avg FPS (and on the really old stuff, the time to complete the run).
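The bookkeeping for such a replay run is trivial; here's a minimal sketch, assuming a hypothetical log with one frame time in milliseconds per line (Spring doesn't necessarily produce this format out of the box, it's made up for illustration):

```python
# Summarize a replay benchmark run: min/max/avg FPS from frame times.
# Assumes a hypothetical log format: one frame time in ms per line.

def fps_stats(frame_times_ms):
    """Return (min, max, avg) FPS from a list of frame times in ms."""
    fps = [1000.0 / t for t in frame_times_ms if t > 0]
    return min(fps), max(fps), sum(fps) / len(fps)

if __name__ == "__main__":
    # Sample frame times: 25 ms = 40 FPS, 50 ms = 20 FPS, 10 ms = 100 FPS.
    times = [25.0, 50.0, 25.0, 10.0]
    lo, hi, avg = fps_stats(times)
    print(f"min {lo:.1f} / max {hi:.1f} / avg {avg:.1f} FPS")
```

Running the same replay on each machine and feeding the logs through something like this would give directly comparable numbers across generations.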

A PCI SATA card would also allow the use of an SSD, delivering more I/O performance than these machines were ever originally capable of, thus eliminating that bottleneck.

Would be one hell of a project though and I'm sure people here would be willing to donate hardware to the project. I for one could contribute a couple of Athlon64/X2 CPUs.

I'm sure ATI released an AGP card not long back as well, which would keep that bottleneck away (other than the interface itself, but that's all part of the evolution).Reply

I think for the vast majority of home computer use, a Core 2 is comfortably fast enough for most people. I'm actually running dual Core 2 based Xeons and a Radeon 5870 with a 10K Raptor from 2006 and 4GB of RAM, and I never have any issues doing what most people do at home - web surfing, Netflixing, a bit of light gaming (in fact, the Radeon 5870 might be overkill for that last part).

With enough RAM and fast enough storage, these machines could last a very long time, especially if OSes and apps stay constant or even speed up slightly.Reply

I always enjoy seeing some older hardware compared to the latest stuff. Gives a clear perspective on just how large a difference is really there.

Those chips can overclock significantly further, though. When Core came out I was among the first to buy in, with the E6300 and a budget overclocking board from Gigabyte. It would hit 3.5GHz easily at reasonable temps on a top-end cooler for sustained load operation (F@H). Going from 965P to a midrange P35 allowed me to attain that golden 100% overclock at the same voltage (1.86GHz to 3.73GHz), which did wonders for performance - and these results clearly illustrate the gains available from a mere 670MHz boost.

Games love having that integrated memory controller. But for the CPU-centric tests I'd still love to see how a 3.4GHz or higher Q6600 would fare, especially against AMD's offerings.Reply

About a year ago I upgraded an old Core 2 Duo computer and was extremely pleased with the results.

It was a Dell Optiplex 755 desktop with a 2.33GHz CPU. Originally, it only had 2GB DDR2-667 RAM (1GB times 2 sticks), a sucky 80GB hard drive and even more sucky on-board graphics. I went to the Dell website, entered the machine's Service Tag number, and discovered that it could be upgraded to 8GB RAM using four 2GB sticks. At the time, DDR2-800 RAM was still cheap (although prices have gone up recently) so just for the hell of it, I pulled the DDR2-667 sticks and replaced them with 8GB of DDR2-800 I bought online. Then I replaced the 80GB hard drive with a 120GB SATA II SSD. Finally, I bought an ATI 6750 single-slot graphics card with 1GB GDDR5 and a 128-bit bus. Although I would have preferred a more powerful graphics card with a 256-bit bus, I was limited to a single-slot solution because the CPU fan shroud was too close to the PCIe x16 slot to accommodate a dual-slot card. - Oh, yeah, and I upgraded the OS from 32-bit WinXP SP2 to 64-bit Win7 SP1.

The new WEI numbers were impressive. Although the CPU stayed at 5.8, the RAM went from 5.8 to 6.1, the graphics went from 3.4 to 7.3, and the disk I/O score went from 5.6 to 7.8.

Including the cost of a new 650 watt power supply (necessary, because the old 350 watt Dell power supply didn't have the 6-pin connector needed for the graphics card) the total upgrade cost came to about $350. Keep in mind that this machine (with a DVD burner and CD-ROM) originally sold for about $750. So for less than 50% of the original cost I wound up with a computer that boots in 25 seconds, plays 1080p H.264 video, and runs most games at 1920x1080 with medium settings.

I agree with the poster who said 2560x1440 gaming was a poor choice for your review. 1080p scores would have been far more useful to Core 2 Duo owners. I also agree that Core 2 Duo owners don't care about the multi-threaded benchmarks you included. Let's face it, the average computer user doesn't do advanced encoding and such, and anyone who does would have junked their Core 2 Duo machine long ago.

Although I have several more modern computers at my home and office, I find that this upgraded Dell is worth keeping around, probably for another 2 or 3 years. Although it looks like 2560x1440 monitors will become more popular as time goes on (and prices drop) the average user will probably still be using 1080p monitors for a long time to come, so an upgraded Core 2 Duo is still a worthwhile project.Reply

Made a quick calculation comparing the single-threaded 3DPM bench of the i7-3770K and stock C2D. Taking the difference in clock speeds into account the i7 turns out to be merely 4% faster (assuming full turbo boost). Has the IPC really not improved or is it simply a matter of the benchmark not using AVX or any of the other new extensions?Reply
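The arithmetic behind that kind of estimate is just a per-GHz normalization; a quick sketch with placeholder numbers (these are NOT the article's actual 3DPM scores, just illustrative values chosen to land near a 4% gap):

```python
# Clock-for-clock (rough IPC) comparison sketch.
# Scores and clocks below are placeholders, not the article's data.

def per_ghz(score, clock_ghz):
    """Normalize a benchmark score by the clock it was achieved at."""
    return score / clock_ghz

# Hypothetical single-threaded scores:
c2d = per_ghz(100.0, 2.13)  # e.g. a stock E6400
ivb = per_ghz(190.0, 3.90)  # e.g. an i7-3770K at full single-core turbo

speedup = ivb / c2d - 1
print(f"clock-for-clock speedup: {speedup:.1%}")
```

The key assumption is that the chip actually sustains its maximum turbo for the whole run; if it doesn't, the per-GHz figure understates the newer core.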

In that case: Good on Intel! That's a more than 3-fold improvement on a core-frequency basis if we're talking multithreaded here. Too bad this improvement does not come automatically though. There's probably a whole lot of programs that don't make use of these extensions.

BTW, nice to see a chemist on Anandtech. Keeps my fantasy of seeing a Gaussian09 bench on a Xeon Phi alive :)Reply

Things always go boom when I'm in the lab. At least it takes me a couple of years to burn out a $300 GPU rather than a couple of minutes to have $10k of chemicals explode in my face / get washed down the drain :)Reply

I just did an upgrade from a Core 2 Quad Q9450 to a Core i5-3570K and some new Corsair DDR3-1600 memory. I went with the Asus P8Z77-V LK motherboard. I already had a Corsair 60GB SSD I had been using as a boot drive. For the new system, I moved the SSD to being used for Smart Response on a 1TB Western Digital (EARZX) drive. Those two went on the SATA 6Gbps ports, with some other data drives and optical on the SATA 3Gbps ports. I kept my EVGA GeForce GTX 560 Ti graphics card. I decided to stick with Windows 7 Home Premium 64-bit for the upgrade.

I haven't benchmarked games, but in general I am really happy with the new system. Everything in the OS happens significantly faster, though boot time is a little slower. The old system was having stability issues, so this was as much a repair as it was an upgrade. Also, some things like opening and closing my Chrome session are much faster. Games do seem to be more responsive as well.Reply

Different times. Back then I was at university, where money was a scarce resource, and after paying for my own new build I wanted to recoup some of the cost. Now, in the world of jobs and such, it's less of an issue, and since he drives and I do not, his runabouts at my request have grown over time and I wanted to repay him.

I'm running a Core 2 Duo E8400 (stock 3.0GHz but OC'd to 3.7GHz) in a system that I bought used off Craigslist for $380 in mid 2010 (it was quite a steal). It originally had an HD 4870 512MB, but that died and I bought a friend's NVIDIA GTX 460 1GB. It still runs any game I throw at it, usually at High or Highest settings (with or without anti-aliasing, depending on the game) @ 1080p.

A Core 2 Duo E8400 with 6MB of L2 cache, overclocked, is quite a potent combo. Definitely a powerful performer from that generation, at a decent price. And in single/two-threaded workloads, it's not THAT much slower than today's offerings. It's definitely fast enough to be responsive in day-to-day tasks (like JS-heavy webpages, Facebook and HD video streaming).

The only area that I REALLY feel the lack of power is in video encoding (which I don't do that much of) and in multi-tasking situations where I'd love to have a full screen Twitch.tv video stream open on my second monitor while playing an intensive 3D game on my main. Not enough cores :P Also, the other time I feel the lack of speed is probably in boot-up and installation of certain things, because I have a Vertex 3 SSD (only in 3Gbps mode, though) which is fast enough to remove the HD bottleneck for most things.Reply

I have a similar setup, but with an E5200 OC'ed to 2.7GHz with 2MB L2, and get a similar experience to you. CPU performance is irrelevant for the majority of tasks if you do not have other high-end components. Reply

@Ian: I am a big fan of AnandTech, and I have read your articles on the QX9650 and X48 motherboards several times.

Your article gave me an idea: I am running a QX9650 @ 3952MHz, DDR2 @ 1115MHz and a Gigabyte Nvidia GTX 580 SOC (Super Overclock) on an ASUS P5Q Deluxe PCIe 2.0 mobo. Why don't you guys do the same for my QX9650? I only play WoW and it can hold its own quite well. I play at 1920x1080, all settings at ULTRA.

Pretty sure that my QX9650 would score substantially higher on an X48 board with DDR3 memory and a higher 4.2GHz OC. Why don't you guys do a similar article on the QX9650 vs modern CPUs and see whether it is worth it to upgrade? I know the QX9650 is one of your favourite chips (you even managed to fry one!).

Just focus on a 1920x1080 resolution or even a bit lower, as really very few people have 2560x1600 monitors.

You might be getting me confused with Rajinder Gill, the previous motherboard reviewer. He tackled X48 - I've never touched a QX 9650 :) Though I would like to. I have some ideas for future articles :)

A Core 2 Duo from years ago can beat the A10 in single threads? That's gotta hurt. I knew AMD was lacking single threaded performance but I thought they had at least crawled past the Core 2 Duo.

My laptop is a Core 2 Duo t6500 (2.1GHz Penryn), and while I would definitely want something more for gaming, I must agree that it is still plenty capable for what the vast majority of people do on the computer. A few die shrinks down the road, when Core 2 Duo like power becomes the standard for smartphone/tablet power, I think desktops and laptops will start to shrink at an even faster rate than they are now. Some will still need them of course, like some need trucks, but for the majority a tablet will do just as well. Reply

The older Stars cores performed much better in single thread, as shown by the X6-1100T, and the OC'ed E6400 only beat the A10-5800K at stock in a single non-memory related benchmark. Just to put it in perspective ;)

I too had a launch-model Core 2 Duo, the E6300 (1.86GHz at stock). I was running it a bit faster (3.29GHz), but had definitely started to notice its age (GTX 260 for graphics, 8GB of DDR2).

I opted for a whole new PC in May (it was nearly GPU upgrade time, I wanted an SSD, and I was sick of my old case) and the speed difference is actually quite astounding. A lot of the general responsiveness I put down to the SSD, but Photoshop, gaming and compiling all got significantly faster with the upgrade to a 4GHz Ivy Bridge quad core.

I gave my old computer away and the mate I gave it to was pretty stoked (he had a Pentium 4 and a Radeon 1950 or something like that), but I couldn't be happier that I've upgraded.

I only upgrade my CPU/mobo every four to five years (with other upgrades as needed) and the difference when changing to the newer platform is always very significant.Reply

While it certainly can be frustratingly slow when it comes to computation, I can still run most of the newest games well enough @ >= medium settings.

While I recently had enough cash to upgrade to an i3, SLI mobo and 8GB, I really couldn't find myself able to justify it still due to the lack of a major step in performance, or rather, due to the continued stubbornness of the old 775.

Hopefully I'll actually get a full time permanent job soon so it'll be easier to stomach a decent upgrade (K series and xxx(x)GPU).Reply

The 'newer' E7200 (2.53GHz, 45nm) continues to serve me well. The E7200 still makes an awesome home server, with exceptionally low power consumption (even decent by today's standards!) and more horsepower than any 3 Atoms ever built. Until this week, I ran my home server (with 2 VMs on top of Windows) off of it.

This week, it's my desktop again...

My Phenom II motherboard went kaput last week, so I swapped back to the old Core 2 Duo and motherboard as my desktop until a replacement arrives. With RAID SSDs and a good graphics card, I have no complaints except the 4GB of RAM, which is why I upgraded in the first place - at the time, 8GB of DDR2 cost as much as the Microcenter Phenom II CPU+mobo deal and 8GB of DDR3 combined.

For those who aren't power users or serious gamers, any 45nm Core 2 Duo should last at least a couple more years with an SSD and enough RAM. Any upgrade less than a Sandy Bridge or Ivy Bridge i5 isn't worthy.Reply

I have a very similar system (E7300 with 4GB of DDR2-800 + a HD4670 GPU) which I mainly use as an HTPC with occasional gaming at 1366x768. Overall I'm very satisfied with it.

My main concern is the power consumption. I know it's based on a newer 45nm architecture (the reason I chose this particular CPU was its power efficiency back in 2008).

I just wanna know how much I would benefit, from a power consumption standpoint, from using a modern, say Ivy Bridge, Core i3 instead of my current rig. Since I can't build a new PC right now, I thought it'd be better to just upgrade what I currently have - maybe add an SSD and a new GPU.

Since you mentioned you used yours as a server, which might have been on 24/7, I thought you'd know the estimated power consumption?

The G31 is dated even by Socket 775 standards, so with underclocking/undervolting and a better motherboard, you can probably drop that - quick and dirty, either use SpeedStep to lower the multiplier or drop the FSB from 1066 to 800MHz, then reduce voltage until it starts crashing :-)

I know the Radeon 4670 was very efficient for its day, but not sure what might be a good upgrade. First idea that comes to mind is G45+integrated graphics?Reply

I'm using a 4-year-old X58 i7-920 system. It has since been upgraded with 24GB of the cheapest RAM and a new graphics card. Aside from USB 3, I don't see any reason to upgrade in the next 4 years. Long gone are the days where you couldn't play MP3s on a 486, a DivX video on a Pentium II 266, or 1080p x264 on a C2D in a laptop.Reply

...shame that the benches here are totally pointless in regards to the usual situation of 'hand-me-down PCs'.

Would have been more useful to see testing on how fast a Word document opens and closes compared to a top-end i7, how fast Facebook opens, maybe how playable The Sims was on both machines, or FarmVille performance.

That's more real-world stuff that normal people do. I'm pretty sure the differences would have been negligible.

Yes, the tests show how far things have advanced, but they don't address how pointless all that extra power is for 95% of users in general and 99.999% of hand-me-down owners.

The most I do now in such cases is just make sure it has a dual core and at least 2GB of RAM, and maybe slap a cheap SSD in it. Good to go for quite some time.

Second point: I'm intrigued to know what your brother's home looks like. That's one dusty PC for 2 years of use. I can always tell what a person's home looks like when they bring me a PC. If it's 5 years old and spotless inside, the home, 99 times out of 100, is spotless too. If it arrives full of dust and spiders, then I know it's a hell hole. Proven when I arrive to take it back and I don't stop long.Reply

I just recently upgraded from a Core 2 Quad Q9550 to a Core i5-3570K; I wanted a consistent 1920x1200 @ 60 fps in Borderlands 2. I tried overclocking the C2Q first, but 3.4GHz still wasn't enough, so I gave up on that idea.

Now BL2 pushes enough frames. Probably Alan Wake too, if I ever get back to it, although FPS drops weren't as visible in it. Other than that I haven't noticed much of a difference. Then again, I'm not doing anything heavy other than games at the moment.

I also like the single-page format of this article, since I read this initially on a tablet.Reply

I'm running an i5 with an SSD array and plenty of toys, but must say a good word about my semi-retired C2D.

A bit newer than this article:

C2D E8400 (@4GHz on air, easily) - 6GB DDR2-1066 OCZ Reaper - Asus P45 mobo - a pair of OCZ Vertex 2 60GB drives in RAID 0 for boot, 2 x 1TB WD Black, and an Asus 4850 with AC kit. For the last "Rah!" of the high-performance duals it still kicks damn good, especially with the SSD array... makes a HUGE difference.Reply

As an owner of an old E4300 (overclocked to 2.4 GHz, though), if I'm reading this correctly, buying a modern graphics card (I have a Radeon 4670 - don't laugh) would be enough to push me into the realm of "decent gaming at 1680x1050"? I don't have money for a full computer upgrade, and I do occasionally feel the urge to play a game that isn't half a decade old.

So if I buy something like a 7850, that would work reasonably well, right...? Help me out - I've been out of the hardware loop for many years now.Reply

I've been using ATI cards since the VGA days, when only genuine "built by ATI" cards had analog output comparable to Matrox. I really don't know much about the new cards, but I figured something like a 7850 is "standard".

If it's CPU-limited, then I guess I could max out the graphics settings anyway...?

I saw a passively cooled 7750. It's interesting (silence is golden!), but costs only slightly less than the 7850, which could be a future-proof upgrade in case I somehow find money to upgrade the MBO, CPU and RAM in the next year or so.Reply

Got an old-fashioned C2D E8400 @ 4.1GHz on air - 4GB DDR2 Corsair Dominator - DFI X48 LT T2R rocking mobo! - and a Sapphire HD 7970 GHz Vapor-X (up from an HD 5870). Getting modded Skyrim running at 1080p on a 27" Full HD monitor at no less than 40 fps. Only gaming and Photoshop editing with this rig! See no reason to upgrade at this time :D I think that for gaming at a decent fps, even at 1080p, today's CPU horsepower helps but is not vital... much is done by the graphics card; I have seen an 80% to 100% increase in playing comfort from upgrading the GPU!Reply

Although I think that a system where every part is lacking is better replaced than upgraded, I thoroughly enjoyed this piece. For a person with an aging system, it's nearly impossible to find published tests of what targeted upgrades yield. I have no doubt this article will be very useful to folks still running old hardware and evaluating upgrades.Reply

Something must be lost in translation here. You are apparently buying a very nice computer system for your late (i.e. dead) mother. That seems very generous, but even if your mother is still living (which I hope she is) it's unlikely that she would need that much computational power given that she's been working just fine on a Pentium 4 system.

Even if she keeps the computer for 8 years (like her previous P4 system), the reality is that the difference between an i5 and an i7 will be trivial compared to the difference between it and any modern computer system in the future. But then again, only the best will do for mom, who is hopefully just late for Bingo and not actually departed!Reply

I recently replaced a C2D 6400 system I built in January 2007 with an Ivy Bridge Core i5 system. It started out with 2 GB and eventually got up to the maximum 6 GB PC6400 the Intel board would allow. Began with the Intel video, then got an Nvidia 210 card. A USB 3.0 card was added. Original OS was Vista beta, then Vista final, then Win7 beta, Win7 final, then Consumer and Release Previews of Win8.

The old C2D still has plenty of utility but I haven't the space to let it keep a position on the KVM. Still, it's sitting in reserve in case some situation comes up to put it in service again. For day to day use it had finally gotten old enough that a new machine could be justified, more for the assorted niceties beyond the CPU than for processing power.

It helps that Microsoft has been making an effort to reduce Windows resource requirements. In an earlier era a machine this old would be showing its age much more when running the latest Windows release.Reply

Really should've tested multi-GPU configs; the CPU has been a serious bottleneck for gaming rigs since Nehalem. The results are even more apparent in multi-GPU configs, where there is little or no improvement in performance scaling from additional GPUs with a slower CPU that isn't overclocked. Reply

I like seeing the comparison to current CPUs...probably useful for people looking to upgrade too.

I actually have a Core 2 x2 @ 2.4GHz that I use regularly. I notice a big difference between it and Sandy Bridge for web browsing and the like (obviously not as big as between the Core 2 and a C-50, let alone my iPad, but you still notice it).

It all depends on what you want. I have a very nice 21" monitor that I bought 5 years ago that only runs at 1680x1050, not 1920x1080. The Nvidia 8800 GT I have had for years is more than adequate for most games I play.

On the other hand, an Atom netbook I have had for about 3 years is incredibly and frustratingly slow.Reply

Another area where the E8400 held/holds its own was encoding, with that big 6MB cache. Re-encoded many vids with it at 4.3GHz using a Thermalright 120 HSF with no problems, even when doing other stuff. It was the mutt's nuts until the Q6600 launched and smoked everything.

to keep up the rant...before you sell off or give away an older system...

PLEASE try running it with an SSD boot drive.... any size will do! ;)Reply

It's nice to see reviews like this, coming from reviewers. It brings things down to earth for the every-day, average end-user.

I would like to note that the significance of power consumption (not exactly glossed over by the author, but not expressly backed with data either) is really where the bottom line is going to be. I mean, even over the last 2 generations of hardware, power consumption has dropped dramatically while still delivering a small but noteworthy increase in performance. To me, this is the more important selling point of newer hardware than anything else.

Either way, I certainly appreciate the time spent to reaffirm common knowledge in the component world. :)Reply

I wouldn't count Core 2 out just yet. I have a Core 2, and it's doing a hell of a job keeping my Crysis 2 (with the high-res textures and MaLDo pack in DX11 mode, everything on the very highest) at a minimum of 20 fps. I would only add that a highly overclocked Core 2 (whether 45 or 65nm) is not only for browsing, so people shouldn't throw them away or count them out so fast; they are fast. I did some benchmarks of my own and found that my Core 2 at its current clocks (overclocked to the maximum) is about equal in performance to an i3 (I suspect a Penryn at over 3600MHz would be faster than an i3). Also, in the games where performance does matter, it's more often a graphics card problem and not (as it used to be in the pre-Core 2 era) the processor. If you are asking me Core 2 at stock or i3, then i3 for sure, but I think most people who visit this site are not about stock clocks :)Reply

I also have many people who come to me to upgrade their systems. For many of these people who had Core 2 Duos, I merely upgraded them to Windows 7, added more memory, gave them an SSD boot drive, and used their old drive as a storage drive. Some got upgraded video cards depending on their needs.

The vast majority of people feel like they got a brand new, faster computer for less money than a full upgrade would cost them. Anecdotal evidence is great, but seeing the numbers quantified in this article was very interesting. It makes me wonder how an E8400 @ 3.0GHz would fare, or perhaps a Core 2 Quad Q9650 @ 3.0GHz. Pair one of these processors with a GTX 680 and see how they handle the gamut of modern games. I would like to see if they would render decent enough framerates to put off an upgrade and justify spending $500 on a video card.Reply

I just went and read all these comments. Come on, AnandTech! Many of your readers are interested in this type of investigative journalism. We all have systems that we have pieced together for ourselves, or friends, or both. There is a lot of interest in what targeted upgrades can do for a system.

If I were a manufacturer, I know I would want you to test my part and recommend it to your audience as your "Gold Award" upgrade route.

For gaming, I'd have to say yes, especially those that work the CPU hard too. These are probably the more extreme examples and the majority of the games do still play fine on the old Q6600.

For regular use, like papa and mama reading emails, browsing the web and streaming videos, the Core 2 Duos and Core 2 Quads with 4GB+ RAM still provide plenty of power. The SSD upgrade would certainly be cheaper and would make things snappier.Reply

These results are definitely missing from this comparison. Because the gaming tests were limited, this comparison might lead to the false belief that an OC'd E6400 can cope nearly as well with today's games as current CPUs. As stated above, the difference in performance will be exaggerated in more CPU-intensive games. I can confirm this (only qualitatively) by the stutter I experience while playing BF3 or BioShock Infinite due to my E6400 @ 3GHz hitting 100%. By contrast, my 6-year-old machine copes well with less CPU-heavy titles such as the Assassin's Creed series.Reply

I have 2 older systems built in 2008 and 2009 before the first generation Core i-series. One, which was my gaming system for a while, had a E8600 and the other is a Quad Q6700. They both still serve their purpose today, despite me not gaming on PC anymore. The E8600 is being used for a HTPC and the Q6700 is my main machine I use for productivity and surfing the internet as well as some light gaming. I have no need for a high-end Ivy Bridge processor but if I came across a Sandy Bridge setup for a decent price, I'd bite.Reply