
PC Performance Tested

Kaya Systems
Feb 12, 2014


With our lab coats donned, our test benches primed, and our benchmarks at the ready, we look for answers to nine of the most burning performance-related questions.

If there’s one thing that defines the Maximum PC ethos, it’s an obsession with Lab-testing. What better way to discern a product’s performance capabilities, or judge the value of an upgrade, or simply settle a heated office debate? This month, we focus our obsession on several of the major questions on the minds of enthusiasts. Is liquid cooling always more effective than air? Should serious gamers demand PCIe 3.0? When it comes to RAM, are higher clocks better? On the surface, the answers might seem obvious. But, as far as we’re concerned, nothing is for certain until it’s put to the test. We’re talking tests that isolate a subsystem and measure results using real-world workloads. Indeed, we not only want to know if a particular technology or piece of hardware is truly superior, but also by how much. After all, we’re spending our hard-earned skrilla on this gear, so we want our purchases to make real-world sense. Over the next several pages, we put some of the most pressing PC-related questions to the test. If you’re ready for the answers, read on.

Core i5-4670K vs. Core i5-3570K vs. FX-8350

People like to read about the $1,000 high-end parts, but the vast majority of enthusiasts don’t buy at that price range. In fact, they don’t even buy the $320 chips. No, the sweet spot for many budget enthusiasts is around $220. To find out which chip is the fastest midrange part, we ran Intel’s new Haswell Core i5-4670K against the current champ, the Core i5-3570K, as well as AMD’s Vishera FX-8350.

AMD’s FX-8350 has twice the cores of the competition, but does that matter?

The Test:
For our test, we socketed the Core i5-4670K into an Asus Z87 Deluxe with 16GB of DDR3/1600, an OCZ Vertex 3, a GeForce GTX 580 card, and Windows 8. For the Core i5-3570K, we used the same hardware in an Asus P8Z77-V Premium board, and the FX-8350 was tested in an Asus CrossHair V Formula board. We ran the same set of benchmarks that we used in our original review of the FX-8350 published in the Holiday 2012 issue.

The Results:
First, the most important factor in the budget category is the price. As we wrote this, the street price of the Core i5-4670K was $240, the older Core i5-3570K was in the $220 range, and AMD’s FX-8350 went for $200. The 4670K is definitely on the outer edge of the budget sweet spot while the AMD is cheaper by a bit.

Intel’s Haswell Core i5-4670K slots right into the high end of the midrange.

One thing that’s not disputable is the performance edge the new Haswell i5 part has. It stepped away from its Ivy Bridge sibling in every test we ran by respectable double-digit margins. And while the FX-8350 actually pulled close enough to the Core i5-3570K in enough tests to go home with some multithreaded victories in its pocket, it was definitely kept humble by Haswell. The Core i5-4670K plain-and-simply trashed the FX-8350 in the vast majority of the tests that can’t push all eight cores of the FX-8350. Even worse, in the multithreaded tests where the FX-8350 squeezed past the Ivy Bridge Core i5-3570K, Haswell either handily beat or tied the chip with twice its cores.

The Core i5-3570K was great in its day, but it needs more than that to stay on top.

Even folks concerned with bang-for-the-buck will find the Core i5-4670K makes a compelling argument. Yes, it’s 20 percent more expensive than the FX-8350, but in some of our benchmarks, it was easily that much faster or more. In Stitch.Efx 2.0, for example, the Haswell was 80 percent faster than the Vishera. Ouch.
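The bang-for-the-buck math is easy to reproduce. This Python sketch uses the Stitch.Efx 2.0 times and street prices quoted above; the "value ratio" metric is our own illustration, not an industry standard:

```python
# Rough value math from the text: the i5-4670K costs ~20% more than the
# FX-8350 but finished Stitch.Efx 2.0 ~80% faster (836 s vs. 1,511 s).
# Prices and scores come from this article; the metric itself is ours.

def speedup(slow_seconds: float, fast_seconds: float) -> float:
    """How much faster the fast part is, as a fraction (0.80 == 80% faster)."""
    return slow_seconds / fast_seconds - 1.0

def perf_per_dollar(seconds: float, price: float) -> float:
    """Work per second per dollar; higher is better for timed benchmarks."""
    return 1.0 / (seconds * price)

i5_4670k = {"price": 240, "stitch_sec": 836}
fx_8350 = {"price": 200, "stitch_sec": 1511}

print(f"price premium: {i5_4670k['price'] / fx_8350['price'] - 1:.0%}")
print(f"speedup:       {speedup(fx_8350['stitch_sec'], i5_4670k['stitch_sec']):.0%}")
ratio = perf_per_dollar(i5_4670k["stitch_sec"], i5_4670k["price"]) / \
        perf_per_dollar(fx_8350["stitch_sec"], fx_8350["price"])
print(f"value ratio:   {ratio:.2f}x in the Intel chip's favor")
```

Even after weighting for price, the Haswell chip comes out ahead in this particular test.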

So where does this leave us? For first place, we’re proclaiming the Core i5-4670K the midrange king by a margin wider than Louie Anderson. Even the most ardent fanboys wearing green-tinted glasses or sporting an IVB4VR license plate can’t disagree.

For second place, however, we’re going to get all controversial and call it for the FX-8350, by a narrow margin. Here’s why: The FX-8350 actually holds up against the Core i5-3570K in a lot of benchmarks, has an edge in multithreaded apps, and its AM3+ socket has a far longer roadmap than LGA1155, which is on the fast track to Palookaville.

Granted, Ivy Bridge and 1155 is still a great option, especially when bought on a discounted combo deal, but it’s a dead man walking, and our general guidance for those who like to upgrade is to stick to sockets that still have a pulse. Let’s not even mention that LGA1155 is the only one here with a pathetic two SATA 6Gb/s ports. Don’t agree? Great, because we have an LGA1156 motherboard and CPU to sell you.

Benchmarks

| Benchmark | Core i5-4670K | Core i5-3570K | FX-8350 |
| --- | --- | --- | --- |
| POV Ray 3.7 RC3 (sec) | 168.53 | 227.75 | 184.8 |
| Cinebench 10 Single-Core | 8,500 | 6,866 | 4,483 |
| Cinebench 11.5 | 6.95 | 6.41 | 6.90 |
| 7Zip 9.20 | 17,898 | 17,504 | 23,728 |
| Fritz Chess | 13,305 | 11,468 | 12,506 |
| Premiere Pro CS6 (sec) | 2,849 | 3,422 | 5,220 |
| HandBrake Blu-ray encode (sec) | 9,042 | 9,539 | 8,400 |
| x264 5.01 Pass 1 (fps) | 66.3 | 57.1 | 61.3 |
| x264 5.01 Pass 2 (fps) | 15.8 | 12.7 | 15 |
| Sandra (GB/s) | 21.6 | 21.3 | 18.9 |
| Stitch.Efx 2.0 (sec) | 836 | 971 | 1,511 |
| ProShow Producer 5 (sec) | 1,275 | 1,463 | 1,695 |
| STALKER: CoP low-res (fps) | 173.5 | 167.3 | 132.1 |
| 3DMark 11 Physics | 7,938 | 7,263 | 7,005 |
| PCMark 7 Overall | 6,428 | 5,582 | 4,408 |
| PCMark 7 Storage | 5,300 | 5,377 | 4,559 |
| Valve Particle (fps) | 180 | 155 | 119 |
| Heaven 3.0 low-res (fps) | 139.4 | 138.3 | 134.4 |

Best scores are bolded. Test bed described in text.

Hyper-Threading vs. No Hyper-Threading

Hyper-Threading came out 13 years ago with the original 3.06GHz Pentium 4, and was mostly a dud. Few apps were multithreaded, and even Windows’s own scheduler didn’t know how to deal with HT, making some apps actually slow down when the feature was enabled. But the tech overcame those early hurdles to grow into a worthwhile feature today. Still, builders are continually faced with choosing between procs with and without HT, so we wanted to know definitively how much it matters.

The Test:
Since we haven’t actually run numbers on HT in some time, we broke out a Core i7-4770K and ran tests with HT turned on and off. We used a variety of benchmarks with differing degrees of threadedness to test the technology’s strengths and weaknesses.

The Results:
One look at our results and you can tell HT is well worth it if your applications can use the available threads. We saw benefits of 10–30 percent from HT in some apps. But if your app can’t use the threads, you gain nothing. And in rare instances, it appears to hurt performance slightly—as in Hitman: Absolution when run to stress the CPU rather than the GPU. Our verdict is that you should pay for HT, but only if your chores include 3D modeling, video encoding or transcoding, or other thread-heavy tasks. Gamers who occasionally transcode videos, for example, would get more bang for their buck from a Core i5-4670K.

Benchmarks

| Benchmark | HT Off | HT On |
| --- | --- | --- |
| PCMark 7 Overall | 6,308 | 6,348 |
| Cinebench 11.5 | 6.95 | 8.88 |
| Stitch.EFx 2.0 (sec) | 772 | 772 |
| ProShow Producer 5.0 (sec) | 1,317 | 1,314 |
| Premiere Pro CS6 (sec) | 2,950 | 2,522 |
| HandBrake 0.9.9 (sec) | 1,200 | 1,068 |
| 3DMark 11 Overall | X2,210 | X2,209 |
| Valve Particle Test (fps) | 191 | 226 |
| Hitman: Absolution, low res (fps) | 92 | 84 |
| Total War: Shogun 2 CPU Test (fps) | 42.4 | 41 |

Best scores are bolded. We used a Core i7-4770K on an Asus Z87 Deluxe, with a Neutron GTX 240 SSD, a GeForce GTX 580, and 16GB of DDR3/1600, running 64-bit Windows 8.
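Working the scores above into percentages makes the 10–30 percent claim concrete. A quick sketch, using a subset of the table's numbers; note the gain formula flips for the timed tests, where lower is better:

```python
# Percent gain from Hyper-Threading, computed from this section's table.
# Scores are the article's; "lower_is_better" flags the timed tests.

benchmarks = [
    # (name, HT off, HT on, lower_is_better)
    ("Cinebench 11.5",        6.95,  8.88, False),
    ("Premiere Pro CS6 (s)",  2950,  2522, True),
    ("HandBrake 0.9.9 (s)",   1200,  1068, True),
    ("Valve Particle (fps)",  191,   226,  False),
    ("Hitman: Absolution",    92,    84,   False),
]

for name, off, on, lower_better in benchmarks:
    gain = off / on - 1.0 if lower_better else on / off - 1.0
    print(f"{name:24s} {gain:+.1%}")
```

The loop shows gains from roughly +12 to +28 percent in the threaded apps, and the small regression in Hitman: Absolution.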


Air Cooling vs. Water Cooling

There are two main ways to chill your CPU: a heatsink with a fan on it, or a closed-loop liquid cooler (CLC). Unlike a custom liquid loop, a CLC never needs to be drained, flushed, or checked for leaks; the "closed" part means that it's sealed and integrated. That integration also reduces manufacturing costs and makes the setup much easier to install. If you want maximum overclocks, custom loops are still the best way to go, but it’s a steep climb in cost for a modest improvement beyond what current closed loops can deliver.

But air coolers are not down for the count. They're still the easiest to install and the cheapest. However, the prices between air and water are so close now that it's worth taking a look at the field to determine what's best for your budget.

The Test:
To test the two cooling methods, we dropped them into a rig with a hex-core Intel Core i7-3960X overclocked to 4.25GHz on an Asus Rampage IV Extreme motherboard, inside a Corsair 900D. By design, it's kind of a beast and tough to keep cool.

The Budget Class

The Results:
At this level, the Cooler Master 212 Evo is legend…ary. It runs cool and quiet, it's easy to pop in, it can adapt to a variety of sockets, it's durable, and it costs about 30 bucks. Despite the 3960X's heavy load, the 212 Evo averages about 70 degrees C across all six cores, with a room temperature of about 22 C, or 71.6 F. Things don’t tend to get iffy until 80 C, so there's room to go even higher. Not bad for a cooler with one 120mm fan on it.

Entry-level water coolers cost substantially more, unless you're patient enough to wait for a fire sale. They require more materials, more manufacturing, and more complex engineering. The Cooler Master Seidon 120M is a good example of the kind of unit you'll find at this tier. It uses a standard 120mm fan attached to a standard 120mm radiator (or "rad") and currently has a street price of $60. But in our tests, its thermal performance was about the same as, or worse than, the 212 Evo's. To meet an aggressive price target, you have to make some compromises: the pump is smaller than average, for example, and the copper block you install on top of the CPU is not as thick. The Seidon was moderately quieter, but we have to give the nod to the 212 Evo when it comes to raw performance.

The Cooler Master 212 Evo has arguably the best price-performance ratio around.

The Performance Class

The Results:
While a CLC has trouble scaling its manufacturing costs down to the budget level, there's a lot more headroom when you hit the $100 mark. The NZXT Kraken X60 CLC is one of the best examples in this class; its dual–140mm fans and 280mm radiator can unload piles of heat without generating too much noise, and it has a larger pump and apparently larger tubes than the Seidon 120M. Our tests bear out the promise of the X60's design, with its "quiet" setting delivering a relatively chilly 66 C, or about 45 degrees above the ambient room temp.

It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers.

Is there any air cooler that can keep up? Well, we grabbed a Phanteks TC14PE, which uses two heatsinks instead of one, dual–140mm fans, and retails at $85–$90. It performed only a little cooler than the 212 Evo, but it did so very quietly, like a ninja. At its quiet setting, it trailed behind the X60 by 5 C. It may not sound like much, but that extra 5 C of headroom means a higher potential overclock. So, water wins the high end.
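One detail worth copying from our methodology: compare coolers by their delta over ambient, not raw load temperature, since room temperature drifts between runs. A sketch using the quiet-mode readings from this section's benchmark chart:

```python
# Delta over ambient is the fair way to compare coolers tested at
# slightly different room temperatures: subtract ambient from load.
# Readings (quiet mode, degrees C) are from this section's chart.

results = {
    "Seidon 120M": {"ambient": 22.1, "load": 78.3},
    "212 Evo":     {"ambient": 20.5, "load": 70.0},
    "Kraken X60":  {"ambient": 20.9, "load": 66.0},
    "TC14PE":      {"ambient": 20.0, "load": 70.3},
}

# Rank coolers by how much heat they leave on the CPU above room temp.
for cooler, r in sorted(results.items(),
                        key=lambda kv: kv[1]["load"] - kv[1]["ambient"]):
    print(f"{cooler:12s} +{r['load'] - r['ambient']:.1f} C over ambient")
```

Ranked this way, the Kraken X60 leads, with the 212 Evo and TC14PE close together behind it.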

Benchmarks

| | Seidon 120M (Quiet / Performance) | 212 Evo (Quiet / Performance) | Kraken X60 (Quiet / Performance) | TC14PE (Quiet / Performance) |
| --- | --- | --- | --- | --- |
| Ambient Air | 22.1 / 22.2 | 20.5 / 20 | 20.9 / 20.7 | 20 / 19.9 |
| Idle Temperature | 38 / 30.7 | 35.5 / 30.5 | 29.7 / 28.8 | 32 / 28.5 |
| Load Temperature | 78.3 / 70.8 | 70 / 67.3 | 66 / 61.8 | 70.3 / 68.6 |
| Load - Ambient | 56.2 / 48.6 | 49.5 / 47.3 | 45.1 / 41.1 | 50.3 / 48.7 |

All temperatures in degrees Celsius. Best scores bolded.

Is High-Bandwidth RAM Worth It?

Today, you can get everything from vanilla DDR3/1333 all the way to exotic-as-hell DDR3/3000. The question is: Is it actually worth paying for anything more than the garden-variety RAM?
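Before testing, it helps to know the theoretical ceiling. Peak DDR3 bandwidth is just the transfer rate times eight bytes per 64-bit channel, times the channel count; a quick sketch, assuming the dual-channel configuration of a board like our Z87 test bed:

```python
# Theoretical peak bandwidth for DDR3: transfers per second x 8 bytes
# per 64-bit channel x number of channels. Real-world results (e.g.,
# the Sandra scores elsewhere in this article) land below these peaks.

def ddr3_peak_gbs(mt_per_s: int, channels: int = 2) -> float:
    bytes_per_transfer = 8          # one 64-bit channel moves 8 bytes
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

for speed in (1333, 1600, 1866, 2400):
    print(f"DDR3/{speed}: {ddr3_peak_gbs(speed):.1f} GB/s peak (dual channel)")
```

Going from DDR3/1333 to DDR3/2400 nearly doubles the theoretical ceiling; the question is whether your apps ever touch it.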

The Test:
For our test, we mounted a Core i7-4770K into an Asus Z87 Deluxe board and fitted it with AData modules at DDR3/2400, DDR3/1600, and DDR3/1333. We then picked a variety of real-world (and one synthetic) tests to see how the three compared.

The Results:
First, let us state that if you’re running integrated graphics and you want better 3D performance, pay for higher-clocked RAM. With discrete graphics, though, the advantage isn’t as clear. We had several apps that saw no benefit from going from 1,333MHz to 2,400MHz. In others, though, we saw a fairly healthy boost, 5–10 percent, by going from standard DDR3/1333 to DDR3/2400. The shocker came in Dirt 3, which we ran in low-quality modes so as not to be bottlenecked by the GPU. At low resolution and low image quality, we saw an astounding 18 percent boost.

To bring you back down to earth, you should know that cranking the resolution in the game all but erased the difference. To see any actual benefit, we think you’d really need a tri-SLI GeForce GTX 780 setup, and even then we expect that the vast majority of games won’t actually give you that scaling.

We think the sweet spot for price/performance is either DDR3/1600 or DDR3/1866.

Benchmarks

| Benchmark | DDR3/1333 | DDR3/1600 | DDR3/2400 |
| --- | --- | --- | --- |
| Stitch.Efx 2.0 (sec) | 776 | 773 | 763 |
| PhotoMatix HDR (sec) | 181 | 180 | 180 |
| ProShow Producer 5.0 (sec) | 1,370 | 1,337 | 1,302 |
| HandBrake 0.9.9 (sec) | 1,142 | 1,077 | 1,037 |
| 3DMark Overall | 2,211 | 2,214 | 2,215 |
| Dirt 3 Low Quality (fps) | 234 | 247.6 | 272.7 |
| Price for two 4GB DIMMs (USD) | $70 | $73 | $99 |

Best scores bolded.


One High-End GPU vs. Two Midrange GPUs

One of the most common questions we get here at Maximum PC, aside from details about our lifting regimen, is whether to upgrade to a high-end GPU or run two less-expensive cards in SLI or CrossFire. It’s a good question, since high-end GPUs are expensive, and cards two rungs below them in the product stack cost about half the price. That naturally raises the question: Are two $300 cards faster than a single $600 card?

Before we jump to the tests, note that dual-card setups suffer from a unique set of issues. First is frame pacing, where the cards are unable to deliver frames evenly, so even though the overall frames per second is high, there is still micro-stutter on the screen. Both Nvidia and AMD dual-GPU configs suffer from this, but Nvidia’s SLI has less of a problem than AMD’s CrossFire at this time. Both companies also need to offer drivers that allow games and benchmarks to see both GPUs, but they are equally good at delivering drivers the day games are released, so the days of waiting two weeks for a driver are largely over.
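Frame pacing is easiest to understand in frame times rather than fps. This toy example (made-up frame times, not measured data) shows two runs with identical average frame rates but very different smoothness:

```python
# Why average fps can hide micro-stutter: two runs with the same average
# frame rate but very different frame pacing. Frame times here are
# invented for illustration, not benchmark data.

def run_stats(frame_times_ms):
    """Return (average fps, 99th-percentile frame time in ms)."""
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    return avg_fps, worst

smooth = [20.0] * 100           # every frame delivered in 20 ms
stutter = [10.0, 30.0] * 50     # alternating fast and slow frames

for name, run in (("smooth", smooth), ("stutter", stutter)):
    fps, p99 = run_stats(run)
    print(f"{name}: {fps:.0f} fps average, {p99:.0f} ms 99th-percentile frame")
```

Both runs report 50 fps, but the stuttering run's worst frames take three times as long as its best, which is exactly what the eye notices.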

The Test:
We considered using two $250 GTX 760 GPUs for this test, but Nvidia doesn't have a $500 GPU to test them against, so, this being Maximum PC, we rounded up to the $300 GTX 660 Ti. This video card was recently replaced by the GTX 760, causing its street price to drop a bit below $300, but since $300 is its MSRP, we're using it for this comparison. We got two of them to go up against the GTX 780, which costs roughly $650, so it's not a totally fair fight, but we figured it's close enough for government work. We ran our standard graphics test suite in both single- and dual-card configurations.

The Results:
It looks like our test was conclusive—two cards in SLI provide a slightly better gaming experience than a single badass card, taking top marks in seven out of nine tests. And they cost less, to boot. Nvidia’s frame-pacing was virtually without issues, too, so we don’t have any problem recommending Nvidia SLI at this time. It is the superior cost/performance setup as our benchmarks show.

The Test:
For our AMD comparison, we took two of the recently released HD 7790 cards, at $150 each, and threw them into the octagon with a $400 GPU, the PowerColor Radeon HD 7970 Vortex II, which isn't technically a "GHz" board, but is clocked at 1,100MHz, so we think it qualifies. We ran our standard graphics test suite in both single-and-dual card configurations.

Two little knives of the HD 7790 ilk take on the big-gun Radeon HD 7970.

The Results:
Our AMD tests resulted in a very close battle, with the dual-card setup taking the win by racking up higher scores in six out of nine tests, and the single HD 7970 card taking top spot in the other three tests. But, what you can’t see in the chart is that the dual HD 7790 cards were totally silent while the HD 7970 card was loud as hell. Also, AMD has acknowledged the micro-stutter problem with CrossFire, and promises a software fix for it, but unfortunately that fix is going to arrive right as we are going to press on July 31. Even without it, gameplay seemed smooth, and the duo is clearly faster, so it gets our vote as the superior solution, at least in this config.

Benchmarks

| Benchmark | GTX 660 Ti SLI | GTX 780 | Radeon HD 7790 CrossFire | Radeon HD 7970 GHz |
| --- | --- | --- | --- | --- |
| 3DMark Fire Strike | 8,858 | 8,482 | 8,842 | 7,329 |
| Catzilla (Tiger) Beta | 7,682 | 6,933 | 6,184 | 4,889 |
| Unigine Heaven 4.0 (fps) | 33 | 35 | 30 | 24 |
| Crysis 3 (fps) | 26 | 24 | 15 | 17 |
| Shogun 2 (fps) | 60 | 48 | 51 | 43 |
| Far Cry 3 (fps) | 41 | 35 | 21 | 33 |
| Metro: Last Light (fps) | 24 | 22 | 13 | 14 |
| Tomb Raider (fps) | 18 | 25 | 24 | 20 |
| Battlefield 3 (fps) | 56 | 53 | 57 | 41 |

Best scores are bolded. Our test bed is a 3.33GHz Core i7-3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All tests, except for the 3DMark tests, are run at 2560x1600 with 4X AA.

PCI Express 2.0 vs. PCI Express 3.0

PCI Express is the specification that governs the amount of bandwidth available between the CPU and the PCI Express slots on your motherboard. We've recently made the jump from version 2.0 to version 3.0, and the PCI Express interface on all late-model video cards is now PCI Express 3.0, causing many frame-rate addicts to question the sanity of placing a PCIe 3.0 GPU into a PCIe 2.0 slot on their motherboard. The reason is that PCIe 3.0 has nearly double the theoretical bandwidth of PCIe 2.0: one PCIe 2.0 lane can transmit 500MB/s in one direction, while a PCIe 3.0 lane can pump up to 985MB/s. Multiply that by the 16 lanes a graphics card uses, and the difference is substantial. However, that extra bandwidth only matters if it’s actually needed, which is what we wanted to find out.
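The arithmetic is simple enough to sketch, using the per-lane spec figures quoted above:

```python
# Total one-direction bandwidth for a full x16 graphics slot:
# per-lane rate (500 MB/s for Gen 2.0, 985 MB/s for Gen 3.0) x 16 lanes.

LANE_MBS = {"PCIe 2.0": 500, "PCIe 3.0": 985}

for gen, per_lane in LANE_MBS.items():
    total = per_lane * 16 / 1000          # convert MB/s to GB/s
    print(f"{gen} x16: {total:.1f} GB/s per direction")
```

That works out to roughly 8GB/s for a Gen 2.0 x16 slot versus almost 16GB/s for Gen 3.0; whether a single GPU ever fills either pipe is the open question.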

The Test:
We plugged an Nvidia GTX Titan into our Asus P9X79 board and ran several of our gaming tests with the top PCI Express x16 slot alternately set to PCIe 3.0 and PCIe 2.0. On this particular board you can switch the setting in the BIOS.

The Results:
We had heard previously that there was very little difference between PCIe 2.0 and PCIe 3.0 on current systems, and our tests back that up. In every single test, Gen 3.0 was faster, but the difference is so small it’s very hard for us to believe that PCIe 2.0 is being saturated by our GPU. It’s also quite possible that one would see more pronounced results using two or more cards, but we wanted to “keep it real” and just use one card.

Benchmarks

| Benchmark | GTX Titan PCIe 2.0 | GTX Titan PCIe 3.0 |
| --- | --- | --- |
| 3DMark Fire Strike | 9,363 | 9,892 |
| Unigine Heaven 4.0 (fps) | 37 | 40 |
| Crysis 3 (fps) | 31 | 32 |
| Shogun 2 (fps) | 60 | 63 |
| Far Cry 3 (fps) | 38 | 42 |
| Metro: Last Light (fps) | 22 | 25 |
| Tomb Raider (fps) | 22 | 25 |

Best scores are bolded. Our test bed is a 3.33GHz Core i7-3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All games are run at 2560x1600 with 4X AA except for the 3DMark tests.

PCIe x8 vs. PCIe x16

PCI Express expansion slots vary in both physical size and the amount of bandwidth they provide. The really long slots are called x16 slots, as they provide 16 lanes of PCIe bandwidth, and that’s where our video cards go, for obvious reasons. Almost all of the top slots in a motherboard (those closest to the CPU) are x16, but sometimes those 16 lanes are divided between two slots, so what might look like a x16 slot is actually a x8 slot. The tricky part is that sometimes the slots below the top slot only offer eight lanes of PCIe bandwidth, and sometimes people need to skip that top slot because their CPU cooler is in the way or water-cooling tubes are coming out of a radiator in that location. Or you might be running a dual-card setup, and if you use a x8 slot for one card, it will force the x16 slot to run at x8 speeds. Here’s the question: Since a x16 slot provides twice the bandwidth of a x8 slot (roughly 15.8GB/s versus 7.9GB/s in one direction for PCIe 3.0), is your performance hobbled by running at x8?

The Test:
We wedged a GTX Titan into first a x16 slot and then a x8 slot on our Asus P9X79 motherboard and ran our gaming tests in order to compare the difference.

The Results:
We were surprised by these results, which show x16 to be a clear winner. Sure, it seems obvious, but we didn’t think even current GPUs were saturating the x8 interface. Apparently they are, so this is an easy win for x16.

The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white).

Benchmarks

| Benchmark | GTX Titan PCIe x16 | GTX Titan PCIe x8 |
| --- | --- | --- |
| 3DMark Fire Strike | 9,471 | 9,426 |
| Catzilla (Tiger) Beta | 7,921 | 7,095 |
| Unigine Heaven 4.0 (fps) | 40 | 36 |
| Crysis 3 (fps) | 32 | 37 |
| Shogun 2 (fps) | 64 | 56 |
| Far Cry 3 (fps) | 43 | 39 |
| Metro: Last Light (fps) | 25 | 22 |
| Tomb Raider (fps) | 25 | 23 |
| Battlefield 3 (fps) | 57 | 50 |

Tests performed on an Asus P9X79 Deluxe motherboard.

IDE vs. AHCI

If you go into your BIOS and look at the options for your motherboard’s SATA controller, you usually have three options: IDE, AHCI, and RAID. RAID is for when you have more than one drive, so for running just a lone-wolf storage device, you have AHCI and IDE. For ages, we always just ran IDE, as it worked just fine. But now there’s AHCI, too, which stands for Advanced Host Controller Interface, and it supports features IDE doesn’t, such as Native Command Queuing (NCQ) and hot swapping. Some people also claim that AHCI is faster than IDE due to NCQ and the fact that it's newer. Also, for SSD users, IDE does not support the Trim command, so AHCI is critical to an SSD's well-being over time. But is there a speed difference between IDE and AHCI for an SSD? We set out to find out.
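To see why NCQ matters, picture a spinning disk with several requests outstanding. This toy model (made-up logical block addresses; real drives are far more sophisticated) compares head travel when servicing requests in arrival order versus sorted order:

```python
# A toy model of what Native Command Queuing buys a spinning disk: with
# a queue of outstanding requests, the drive can service them in LBA
# order (an elevator pass) instead of arrival order, cutting head travel.
# Purely illustrative; SSDs gain from NCQ via internal parallelism instead.

def head_travel(lbas, start=0):
    """Total distance the head moves servicing requests in the given order."""
    travel, pos = 0, start
    for lba in lbas:
        travel += abs(lba - pos)
        pos = lba
    return travel

queue = [900, 100, 850, 50, 700, 200]      # requests in arrival order
print("FIFO travel:  ", head_travel(queue))
print("Sorted travel:", head_travel(sorted(queue)))
```

In this contrived queue, sorting cuts head travel by nearly a factor of five, which is the intuition behind the feature even though the mechanics differ drive to drive.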

The Test:
We enabled IDE on our SATA controller in the BIOS, then installed our OS. Next, we added our Corsair test SSD and ran a suite of storage tests. We then enabled AHCI, reinstalled the OS, re-added the Corsair Neutron test SSD, and re-ran all the tests.

The Results:
We haven’t used IDE in a while, but we assumed it would allow our SSD to run at full speed even if it couldn’t NCQ or hot-swap anything. And we were wrong. Dead wrong. Performance with the SATA controller set to IDE was abysmal, plain and simple.

Benchmarks

| Benchmark | Corsair Neutron GTX IDE | Corsair Neutron GTX AHCI |
| --- | --- | --- |
| CrystalDiskMark avg. sustained read (MB/s) | 224 | 443 |
| CrystalDiskMark avg. sustained write (MB/s) | 386 | 479 |
| AS SSD compressed avg. sustained read (MB/s) | 210 | 514 |
| AS SSD compressed avg. sustained write (MB/s) | 386 | 479 |
| ATTO 64KB file read (MB/s, 4QD) | 151 | 351 |
| ATTO 64KB file write (MB/s, 4QD) | 354 | 485 |
| Iometer 4KB random write 32QD (IOPS) | 19,943 | 64,688 |
| PCMark Vantage x64 | 6,252 | 41,787 |

Best scores are bolded. All tests conducted on our hard drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.


SSD RAID vs. Single SSD

This test is somewhat analogous to the GPU comparison, as most people would assume that two small-capacity SSDs in RAID 0 would be able to outperform a single 256GB SSD. The little SSDs have a performance penalty out of the gate, though, as SSD performance usually improves as capacity increases because the controller is able to grab more data given the higher-capacity NAND wafers—just like higher-density platters increase hard drive performance. This is not a universal truth, however, and whether or not performance scales with an SSD’s capacity depends on the drive’s firmware, NAND flash, and other factors, but in general, it’s true that the higher the capacity of a drive, the better its performance. The question then is: Is the performance advantage of the single large drive enough to outpace two little drives in RAID 0?
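For reference, RAID 0's layout is simple to model. This sketch uses the 64K stripe size we chose for the array; it's a simplified mapping for illustration, not any vendor's actual firmware logic:

```python
# How RAID 0 lays data out: logical blocks are split into fixed-size
# stripes that alternate across drives, so large sequential transfers
# hit both drives at once. Simplified model, not real controller code.

STRIPE = 64 * 1024   # bytes, matching the 64K stripe size in our test

def locate(offset: int, drives: int = 2):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe_no, within = divmod(offset, STRIPE)
    drive = stripe_no % drives
    drive_offset = (stripe_no // drives) * STRIPE + within
    return drive, drive_offset

# A 256 KB sequential read touches four stripes, alternating drives 0,1,0,1:
for off in range(0, 256 * 1024, STRIPE):
    print(f"logical offset {off:>7} -> drive {locate(off)[0]}")
```

Because consecutive stripes land on alternating drives, sequential throughput roughly doubles, while a small random write still lands on a single drive, which is why RAID 0 shines in sequential benchmarks more than in OS-style workloads.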

Before we jump into the numbers, we have to say a few things about SSD RAID. The first is that with the advent of SSDs, RAID setups are not quite as common as they were in the HDD days, at least when it comes to what we’re seeing from boutique system builders. The main reason is that it’s really not that necessary, since a stand-alone SSD is already extremely fast. Adding more speed to an already-fast equation isn’t a big priority for a lot of home users (this is not necessarily our audience, mind you).

Even more importantly, the biggest single issue with SSD RAID is that the operating system is unable to pass the Trim command to the RAID controller in most configurations (Intel 7 and 8 series chipsets excluded), so the OS can’t tell the drive how to keep itself optimized, which can degrade performance of the array in the long run, making the entire operation pointless. Now, it’s true that the drive’s controller will perform “routine garbage collection,” but how that differs from Trim is uncertain, and whether it’s able to manage the drive equally well is also unknown.

The lack of Trim support in RAID 0 is a scary thing for a lot of people, so it’s one of the reasons SSD RAID often gets avoided. Personally, we’ve never seen it cause any problems, so we are fine with it. We even ran it in our Dream Machine 2013, and it rocked the Labizzle. So, even though people will say SSD RAID is bad because there’s no Trim support, we’ve never been able to verify exactly what that “bad” means long-term.

It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive.

The Test:
We plugged in two Corsair Neutron SSDs, set the SATA controller to RAID, created our array with a 64K stripe size, and then ran all of our tests off an Intel 520 SSD boot drive. We used the same protocol for the single drive.

The Results:
The results of this test show a pretty clear advantage for the RAIDed SSDs, as they were faster in seven out of nine tests. That’s not surprising, however, as RAID 0 has always been able to benchmark well. That said, the single 256GB Corsair Neutron drive came damned close to the RAID in several tests, including CrystalDiskMark, ATTO at four queue depth, and AS SSD. It’s not completely an open-and-shut case, though, because the RAID scored poorly in the PCMark Vantage “real-world” benchmark, with just one-third of the score of the single drive. That’s cause for concern, but with these scripted tests it can be tough to tell exactly where things went wrong, since they just run and then spit out a score. Also, the big advantage of RAID is that it boosts sequential-read and -write speeds since you have two drives working in parallel (conversely, you typically won’t see a big boost for the small random writes made by the OS). Yet the SSDs in RAID were actually slower than the single SSD in our Sony Vegas “real-world” 20GB file encode test, which is where they should have had a sizable advantage. For now, we’ll say this much: The RAID numbers look good, but more “real-world” investigation is required before we can tell you one is better than the other.

Benchmarks

| Benchmark | 1x Corsair Neutron 256GB | 2x Corsair Neutron 128GB RAID 0 |
| --- | --- | --- |
| CrystalDiskMark avg. sustained read (MB/s) | 512 | 593 |
| CrystalDiskMark avg. sustained write (MB/s) | 436 | 487 |
| AS SSD compressed avg. sustained read (MB/s) | 506 | 647 |
| AS SSD compressed avg. sustained write (MB/s) | 318 | 368 |
| ATTO 64KB file read (MB/s, 4QD) | 436 | 934 |
| ATTO 64KB file write (MB/s, 4QD) | 516 | 501 |
| Iometer 4KB random write 32QD (IOPS) | 70,083 | 88,341 |
| PCMark Vantage x64 | 70,083 | 23,431 |
| Sony Vegas Pro 9 Write (sec) | 343 | 429 |

Best scores are bolded. All tests conducted on our hard-drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.

Benchmarking: Synthetic vs. Real-World

There’s a tendency for testers to dismiss “synthetic” benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics got their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion, those actions would actually hurt performance in real games and applications.

The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren’t untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, does that really make it neutral? There is the argument that it doesn’t matter because if there’s “cheating” to improve performance, that only benefits the users. Except that it only benefits those using a certain piece of hardware.

In the end, it’s probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable—as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn’t mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.

Avoid the Benchmarking Pitfalls

Even seasoned veterans are tripped up by benchmarking pitfalls, so beginners should be especially wary of making mistakes. Here are a few tips to help you on your own testing journey.

Put away your jump-to-conclusions mat. If you set condition A and see a massive boost—or no difference at all when you were expecting one—don’t immediately attribute it to the hardware. Quite often, it’s the tester introducing errors into the test conditions that causes the result. Double-check your settings and re-run your tests and then look for feedback from others who have tested similar hardware to use as sanity-check numbers.
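One habit that catches many of these errors: script your runs and compare medians of several passes rather than single numbers. A minimal sketch, with hypothetical run times rather than article data:

```python
# Compare two configurations by the median of repeated runs, not a
# single pass, so one noisy run can't fake a "result". The run times
# below are invented for illustration.

from statistics import median

def pct_delta(baseline_runs, test_runs):
    """Median-to-median change; positive means the test config was slower."""
    base, test = median(baseline_runs), median(test_runs)
    return (test - base) / base

config_a = [101.2, 99.8, 100.4]     # seconds, three runs of config A
config_b = [98.9, 99.5, 99.1]       # seconds, three runs of config B

print(f"median delta: {pct_delta(config_a, config_b):+.1%}")
```

A delta of a percent or so, as here, is within run-to-run noise for many benchmarks, which is exactly the kind of "result" the jump-to-conclusions mat turns into a headline.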

When trying to compare one platform with another (certainly not ideal)—say, a GPU in system A against a GPU in system B—be especially wary of the differences that can result simply from using two different PCs, and try to make them as similar as possible. From drivers to BIOS to CPU and heatsink—everything should match. You may even want to put the same GPU in both systems to make sure the results are consistent.

Use the right benchmark for the hardware. Running Cinebench 11.5—a CPU-centric test—to review memory, for example, would be odd. A better fit would be applications that are more memory-bandwidth sensitive, such as encoding, compression, synthetic RAM tests, or gaming.

Be honest. Sometimes, when you shell out for new hardware, you want it to be faster because no one wants to pay through the nose to see no difference. Make sure your own feelings toward the hardware aren’t coloring the results.


For nearly 20 years, Maximum PC has been considered by enthusiasts to be the absolute source for the latest hardware reviews, in-depth analysis, and breaking news on the latest PC hardware. Our team of industry experts gives you the guidance you need to make the most informed buying decisions and delivers the best guides on how to use and optimize your experience. If you’re looking for the definitive reference on PC hardware, you’ve found it.