An anonymous reader writes "With Intel's motherboard chipsets supporting both DDR2 and DDR3 memory, the question now is whether DDR3 is worth all that extra cash. Trustedreviews has a lengthy article on the topic, and it looks like (for the moment) the answer is no: 'Not to be too gloomy about this, but the bottom line is that it can only be advised to steer clear of DDR3 at present, as in terms of performance, which is what it's all about, it's a waste of money. Even fast DDR2 is, as we have demonstrated clearly, only worthwhile if you are actually overclocking, as it enables you to raise the front-side bus, without your memory causing a bottleneck. DDR3 will of course come into its own as speeds increase still further, enabling even higher front-side bus speeds to be achieved. For now though, DDR2 does its job, just fine.'"

Who in their right mind would pay so much for RAM? The only people I can think of are middle-to-upper-class teenagers with lots of money. The ones who run 8800 Ultras in SLI thinking that two cards = twice the performance, when it's more like a 30-50% increase. Most educated system builders won't spend more money than they have to, and DDR3 is just overpriced.

Heck, DDR2 is only now worth it, since it's cheaper than DDR. I have a DDR 400 system at home that's more than 20% OC'd on the memory bus and rock stable. When I bought the 2GB that's in that machine, DDR was about half the price of DDR2.

I'll jump on the DDR2 bandwagon with my next system, unless DDR3 drops to the same or less than DDR2 prices.

What's more, you're spending all that money on a system that will be obsolete in a few months (obsolete for what you want it for, anyway), and on top of that, there aren't any games that require that much speed! By the time the new games that do require it come out, you will have gotten the new latest and greatest. Who needs 140 FPS anyway? Are there even screens that can display it?

If there are, I doubt they are available to the general public. Few of those previously mentioned young people realize that if they have a 60-70 Hz LCD monitor, it doesn't matter what FPS you get over 70. Some claim to be able to tell the difference between 60 and 90 FPS. Like they can even see it.

The interesting point is the absolute minimum FPS, measured as 1/(max time between two frames). Even with an average of 70, you can sometimes go above 16 ms between frames. And we're quite sensitive to these things; as an example, I notice a somewhat uncanny effect when video is scaled across two TFTs with different panels. They have different response times and possibly different processing, but they should still be at most a (60 Hz) frame or so apart. The effect is nonetheless very visible.
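
A minimal sketch of that measurement, in Python with made-up frame times (illustrative numbers, not data from any real run):

```python
# Hypothetical per-frame render times in seconds (illustrative numbers only).
frame_times = [0.012, 0.014, 0.013, 0.021, 0.012, 0.015]

# Average FPS over the window: number of frames divided by total elapsed time.
avg_fps = len(frame_times) / sum(frame_times)

# Absolute minimum FPS: 1 / (max time between two frames), as described above.
min_fps = 1.0 / max(frame_times)

print(round(avg_fps, 1), round(min_fps, 1))  # the average hides the worst-case stutter
```

A run like this averages near 70 FPS while the worst frame dips below 50, which is exactly the stutter the average hides.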

There's a frequency, somewhere between 50 and 72 Hz, at which the perception of flicker ends for most people. I know that on a CRT a 60 Hz refresh rate is quite bothersome to me but 72 Hz is not, while on an LCD screen a 60 Hz refresh rate doesn't bother me at all. This makes me believe my perceptions are related to the overall luminance of the screen (which is evened out on an LCD by the backlighting), rather than the display rate of the bits themselves.

That's because LCDs don't refresh. They don't have a beam that scans the screen 60 times a second, like CRTs do. Instead, their pixels remain at that value until they are given the signal to change, and the faster that change happens, the faster the screen is. That's why an LCD could be at 1 Hz and you wouldn't notice anything (until the picture changed, anyway).

That's because LCDs don't refresh. They don't have a beam that scans the screen 60 times a second, like CRTs do. Instead, their pixels remain at that value until they are given the signal to change, and the faster that change happens, the faster the screen is. That's why an LCD could be at 1 Hz and you wouldn't notice anything (until the picture changed, anyway).

False, actually.

LCDs are refreshed much the same way as CRTs are. You start at the upper left, write the pixel's data, then the next pixel, until y

Not quite: most monitors will display a [pick your color, usually a shade of black] screen for a few seconds upon detecting loss of sync before powering down to standby, thereby clearing whatever happened to be displayed within 20ms of the VGA/DVI/HDMI/whatever cable plug being pulled.

If you really want to see Active-Matrix TFT persistence, you would fare better with yanking the power plug out of the wall socket and shining a very bright light on the LCD. Un

I can tell the difference between 60 and 90 fps easily, but the visual side is just the start of it: most games sync the gameplay to the FPS. Higher FPS = moving faster, jumping higher, potentially shooting faster, regenerating things faster. I could cite sources if needed, but basically anything based off Quake 1 (incl. future versions of Quake and their derivatives) functions this way. If you play competitively, it's worth it. Of course, if you're the type to really care this much, you probably don't use a 70 Hz LCD,
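
The gameplay-tied-to-framerate effect is easy to demonstrate with a toy movement loop; the functions and numbers below are illustrative, not Quake's actual code:

```python
def naive_distance(fps, step_per_frame):
    """Move a fixed step every frame: distance covered in one second scales with FPS."""
    return sum(step_per_frame for _ in range(fps))

def dt_distance(fps, speed):
    """Scale each step by the frame's delta time: distance no longer depends on FPS."""
    dt = 1.0 / fps
    return sum(speed * dt for _ in range(fps))

# A 90 fps player outruns a 60 fps player under the naive per-frame scheme...
print(naive_distance(60, 1.0), naive_distance(90, 1.0))
# ...but both cover the same ground once movement is delta-time scaled.
print(dt_distance(60, 60.0), dt_distance(90, 60.0))
```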

Um, I run my monitors at SXGA and 100 FPS with VSync on and it looks nice. I think the VGA standard will allow something like 160-200 FPS at 640x480, but until DVI CRTs come out we won't be able to get more than 100 FPS on a CRT at SXGA. Also, in games, 200+ FPS will help tremendously with shot registration, but lag is usually the biggest factor in registration once you are over 100 FPS or so.

I play World of Warcraft at 120 Hz and many other games at 160 Hz on my Eizo F930. With that kind of refresh rate, you can turn off vsync without any noticeable effect in most games.
Personally I am very sensitive to refresh rates, and it doesn't get comfortable until I get at least 90 Hz on a CRT. (I run 1600x1200@100Hz for desktop usage.) You can definitely feel/see the difference, even in games; it depends on what game it is and how bright it is, of course.
For a comparison, try scrolling a page like slashdot o

Take two machines. Machine A renders a low-complexity scene at 140 fps. Machine B renders the same scene at 70 fps. Which is more likely to render a high-complexity scene at >= 60 fps?

Even if your machine is rendering the highest complexity scenes at twice your monitor's refresh rate, programs can still put excess GPU horsepower to work with full-scene motion blur. Draw a scene with objects positioned midway between the last frame and this frame, and blend it 50-50 with this frame, and things start
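
The blending step can be sketched on a toy grayscale scanline; averaging pixel values stands in for actually re-rendering objects at their midway positions, so treat this as an illustration of the 50-50 blend only, not any engine's real code:

```python
def blend_5050(frame_a, frame_b):
    """Average two frames pixel by pixel (a 50-50 blend)."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

# Toy 4-pixel scanline: a bright object's edge moves right between frames.
last_frame    = [255, 255, 0, 0]
current_frame = [0, 0, 255, 255]

# Stand-in for a scene rendered with objects midway between the two frames.
midway_frame = blend_5050(last_frame, current_frame)

# Final motion-blurred output: the midway scene blended 50-50 with the current frame.
print(blend_5050(midway_frame, current_frame))
```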

Actually, with the 8800s, the benchmarks I've seen reflect more like a 60-80% increase. The new DX10 hardware design makes SLI much more powerful. I've got two 8800 GTSs in SLI and I've seen improvements similar to the 60-80% I mentioned. With older cards that's obviously not the case, but the newer stuff is pretty impressive.

That's really only useful if you are using super-high resolutions or multiple monitors. My 8800 GTS (320 MB version) can play any game at my monitor's native 1680x1050 with everything maxed out at 40 FPS, which is the minimum I would go.

Somewhere around, oh, I think 6-7 years ago, I happened to be at the local computer store to buy some stuff. (These days I buy most components online, so that's not to say it hasn't happened since, just that I wasn't there to see it.)

So an older guy came in and said he wanted them to build him a system. He was pretty explicit that he really didn't want much more than to read email and send digital photos to his kids. You'd think entry-level system, right? Well, the guy behind the counter talked him into buying a system that was vastly more powerful than my gaming rig. (And bear in mind that at the time I was upgrading so often to stay high-end that the guys at the computer hardware store were greeting me happily on the street. Sad, but true.) They sold him the absolute top-end Intel CPU, IIRC some two gigabytes of RAM (which at the time was enterprise-server class), the absolute top-end NVidia card (apparently you really, really need that for graphical stuff, like, say, digital photos), etc.

So basically don't underestimate what lack of knowledge can do. There are a bunch of people who will be just easy prey to the nice man at the store telling them that DDR3 is 50% better than DDR2, 'cause, see 3 is a whole 50% bigger than 2.

And then there'll be a lot who'll make that inference on their own, or based on some ads. DDR3 is obviously newer than DDR2, so, hmm, it must be better, right?

Basically, at least those teenagers you mention read benchmarks religiously, with the desperation of someone whose penis size depends (physically) on his 3DMark score and how many MHz he's overclocked. If, god forbid, his score falls 100 points short of the pack leader, he might as well have "IMPOTENT, PLEASE KILL ME" tattooed on his forehead. At 1000 points less, someone will come to his door with rusty garden scissors and revoke his right to pee standing. So they'll be informed at least roughly about what difference it makes, or at least about whether the guys with the biggest e-penis are on DDR2 or DDR3.

I worry more about moms and pops who don't know their arse from their elbow when it comes to computers. Now _normally_ those won't go for the highest-end machine, but I can see them swindled out of an extra 100 bucks just because something's newer and might hopefully make their new computer less quick to go obsolete.

I agree, but it's no trivial task becoming an educated system builder. Back when Computer Shopper could kill small pets if dropped and I read it monthly, I could build a system no problem. Fast forward a decade: my career is different and I'm not as well informed. But I need to build a specialized system. For FEA on large problems (100,000+ nodes) you need masses of fast RAM. Fast everything. But I own a big chunk of my business and have to pinch pennies, or it comes out of my pocket. Even if I go to someone

I'm so used to crap like c|net that I immediately went searching for a "printer-friendly" (aka, ad-free) version of the article, but lo and behold, that's not necessary. To think, I could actually read an article online without having to navigate through the usual nightmare... what an intriguing concept!

Firefox extension idea: automatically redirect to the printer version of such pages.

A lot of sites have redone their "printer friendly version" to 1. use PDF and 2. require more effort from the user. Now you get more "free registration required" and even "subscription required" on sites such as Ars Technica.

Every time I see "the need isn't there" or "there's more than enough memory bandwidth" I check their figures, they're only measuring the CPU memory needs. Well, hate to break it to you, but there's more to a computer than just the CPU. Having that extra bandwidth means that those lovely PCI Bus Mastering devices (such as my SCSI 3 controller, and quad firewire card) aren't fighting with the CPU for memory access. Frankly, add in a game accelerator like the Phys-X and a high-end GPU fetching data from the main memory for local cache, and even DDR3 starts looking a bit narrow....

Having that extra bandwidth means that those lovely PCI Bus Mastering devices (such as my SCSI 3 controller, and quad firewire card) aren't fighting with the CPU for memory access.

With a SCSI 3 card and 4 port Firewire you'd be looking at about 360MB/s of bandwidth assuming that they reach their max theoretical speed (and of course PC hardware always reaches its maximum theoretical speed). Unless they're both on the PCI bus in which case 133MB/s max for both. Which is fairly minor compared to the 6GB/sec of memory bandwidth that I get with shitty DDR2 on a shitty motherboard.
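
The comparison works out like this (using the post's own round figures, with 6 GB/s taken as 6000 MB/s for simplicity):

```python
peripheral_mb_s = 360     # SCSI + FireWire at their combined theoretical maximum
memory_mb_s = 6000        # ~6 GB/s of DDR2 memory bandwidth, per the post

# Worst-case fraction of memory bandwidth the bus-mastering devices could claim.
share = peripheral_mb_s / memory_mb_s
print(f"peripherals claim {share:.1%} of memory bandwidth at worst")
```

Even in the best case for the peripherals, they consume a small sliver of what the memory controller can deliver, which is the point being made.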

Unless you can provide evidence to the contrary, I am going to go out on a limb and suggest that the performance increases you are expecting do not actually exist. Unless your primary workloads involve running memory benchmarks and Prime95 in which case I would point out that you accidentally posted to Slashdot instead of the Xtremesystems forums.

I don't see why the faster memory is worth paying enough extra that I could buy an entire extra computer instead, when I will only use it in the rare case I'm maxing out both I/O bandwidth and CPU bandwidth.

Agreed.

When I buy memory, it's always the best value stuff that I get. Do I get 1GB DDR2-533 (no-name brand) for $55au, or 1GB DDR2-800 (Corsair brand) for $140au?
Gee... it's a tough choice, but I think the no-name stuff is the goer here...

Pretty much all new memory technologies have been historically ridiculously overpriced for the first many months following their initial introduction. It takes a while for people to adopt new memory technologies because they do not want to pay the full introductory price. It takes a while for manufacturers to ramp up production because they do not want to end up with excessive inventory caused by slow initial uptake. It takes a while for new technologies to become mainstream but it will happen in due time as

Pretty much all new memory technologies have been historically ridiculously overpriced for the first many months following their initial introduction.

Well, yes. But as the title of this article is "DDR3 Isn't Worth The Money - Yet", I don't see anyone disagreeing with this. The point is that it isn't worth it, for the vast majority of people, to buy this technology if they're upgrading their computers right now.

The point is that it isn't worth it, for the vast majority of people, to buy this technology if they're upgrading their computers right now.

The vast majority of people use mainstream systems built with mainstream components. DDR3 is still quite early in its ramp-up and about one year away from becoming mainstream technology - most DRAM makers are waiting for DDR3 support on AMD's side before pushing volumes.

The currently ridiculously large premiums combined with marginal performance gains (3-4X the price, 5-1

When the price comes down: I think that's the key point being made. Of course faster is better, just not if it costs a stupid amount of money. So when DDR3 costs the same as or not much more than DDR2, it will indeed become an attractive proposition.

Well yes, but who cares right now? My system still uses AGP - even though I knew it was "obsolete" when I built the system (2 years ago), it had the right price/performance at the time, and by the time I need a better video card I will also need a new CPU, a new motherboard to accommodate it, and much more memory - so it's going to be easier to build a new system. It was the right decision at the time, and I don't regret it one bit. It's the same sit

Anyone remember when DDR2 was rolled out and was actually *slower* than the standard of the day, regular DDR? It took about a year, IIRC, for the speed of the newer RAM to catch up and overtake the older RAM, and even then it was still pricey. I expect with the current glut of DDR2 in the market that it will take quite a while for DDR3 to be considered a worthy upgrade.

Part of the reason that DDR2 was so much slower at most clockspeeds is because of the added latency. The lower speed DDR2 can have more than twice the tested latency of DDR400. The problem is that apparently JEDEC, or whoever standardizes memory now, isn't thinking about what direction is best for DDR to take. They're going in the same direction as the manufacturers, trying to sell higher "Megahertz" and "gigabytes per second" ratings, even when they're effectively meaningless now.

Does it really matter if your computer can do 6 GB/s, or 12 GB/s, or 14 GB/s? Where does it stop? And even then, that's mostly theoretical, particularly in the case of DDR2. But a very important distinction is that a great many memory accesses are very small. On basically all of those accesses, the data transfer itself takes far less time than the latency before the command completes and another request can be issued.
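
That latency-dominates-small-accesses argument can be put into rough numbers. For a single access, total time is approximately fixed latency plus size divided by bandwidth; the latency and bandwidth figures below are assumptions for illustration, not measurements of real modules:

```python
def access_time_ns(size_bytes, latency_ns, bandwidth_gb_s):
    """Total time for one memory access: fixed latency plus transfer time.
    At 1 GB/s, one byte moves in roughly one nanosecond."""
    transfer_ns = size_bytes / bandwidth_gb_s
    return latency_ns + transfer_ns

CACHE_LINE = 64  # bytes: the typical size of a small memory request

# Assumed figures: the higher-bandwidth part also carries higher latency.
lower_latency_part    = access_time_ns(CACHE_LINE, latency_ns=50, bandwidth_gb_s=6.4)
higher_bandwidth_part = access_time_ns(CACHE_LINE, latency_ns=55, bandwidth_gb_s=10.6)

print(round(lower_latency_part, 1), round(higher_bandwidth_part, 1))
```

With numbers like these, the higher-bandwidth configuration actually loses on a cache-line-sized request, because the transfer term is tiny next to the latency term.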

Way back when, Intel motherboards tried out RDRAM for the 'higher end' boards, and the Nintendo 64 also used it. Both were fairly large fiascos in that sense, with more or less all technical reviews noting that the increased latency more than cancelled out the improved bandwidth. Now we're looking at DDR3, with far higher latencies than classic RDRAM, for a relatively minor bandwidth improvement that only extremely large memory requests benefit from (such as those an extremely large-scale database or scientific research application would theoretically issue).

It reminds me acutely of the early Pentium 4s. A 600 MHz Pentium 3 could beat up to a 1.7 GHz Pentium 4 in most applications and benchmarks, and the (rare and expensive) 1.4 GHz Pentium 3s were real monsters. But people kept trying to tailor benchmarks to hide that, so people would buy more product.

Overclocking has also generally demonstrated that regular 'old' DDR1, while a bit pricier (mostly due to the virtual elimination of production nowadays), scales better and puts up far better numbers than DDR2 and the like. A DDR600 equivalent is extraordinarily zippy, and (of course) real-world latency is also absurdly low.

It makes me feel like the 'governing bodies' here have really let people down. Instead of trying to standardize on and promote what's best for general computing, they're trying to push a greater volume of merchandise that offers no meaningful improvement, and in fact usually a notable decline, over what we've already had for years. The bottom line for them is money, and it's just wrong to put their own pocketbooks above the long-term well-being of computing technology and the needs of the consumer.

The increased latency is a real problem, but the argument is that the aggregate improvement over time is better. That is, there was no further way to improve standard DDR other than to start dual- or quad-channeling it (making 512-bit buses on the motherboard). There is a clear frequency ceiling unless you start increasing latency and pipelining memory accesses. There is a penalty, yes, and with latency-sensitive applications that do a lot of pointer-hopping, it can mean that the application will actual

Part of the reason that DDR2 was so much slower at most clockspeeds is because of the added latency. The lower speed DDR2 can have more than twice the tested latency of DDR400.

It is not quite that simple.

The latency is ultimately limited by the characteristics of the DRAM array which has a specific access time after the row and column addresses are provided. When you compare the latencies of DDR to DDR2 or DDR2 to DDR3, you need to take into account the interface clock speed. Internally, DDR-400, DDR2-800
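
The cycles-versus-nanoseconds point can be made concrete: CAS latency is quoted in interface clocks, and a DDR interface clock runs at half the data rate, so a bigger CL number at a faster clock can still mean fewer nanoseconds. The timings below are typical-looking examples for illustration, not any particular module's spec:

```python
def cas_ns(data_rate_mt_s, cas_cycles):
    """CAS latency in ns: cycles times the clock period.
    The DDR clock is half the data rate, so the period is 2000/data_rate ns."""
    return cas_cycles * 2000.0 / data_rate_mt_s

print(cas_ns(400, 3))             # e.g. DDR-400 CL3
print(cas_ns(800, 5))             # e.g. DDR2-800 CL5
print(round(cas_ns(1333, 9), 1))  # e.g. DDR3-1333 CL9
```

DDR2-800 at CL5 comes out around 12.5 ns against DDR-400 CL3's 15 ns, despite the larger CL figure, which is why comparing raw CL numbers across generations misleads.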

Intel's C2Ds love their memory bandwidth. Even the extreme low end, such as the E4xxx, can profit from something like DDR2-800 and an asynchronous 1:2 FSB:RAM. The E6xxx with their 266 MHz FSB can run at 2:3 with DDR2-800 and perform better than with 1:1 and slightly lower latencies.

Besides, the price difference between DDR2-533 and DDR2-800 is really small. You might as well go for it, if only for futureproofing your system.

There is no such thing as "futureproofing" a computer. I thought that once too, and spent ridiculous amounts of money on computers that were supposed to last very long. They did, but while I could run most future programs well and fast, the people I knew bought new computers for much cheaper that did the same stuff faster than my futureproofed machine, in the end buying more PCs for less money. While they had 3 machines over that time, and I only one, they always had the faster machines except for the first 6

Well, "futureproofing" in this context means replacing your E4400 @ 200 MHz FSB with a new quad-core Penryn that has an FSB of 333 MHz. It would be a noticeable upgrade for gaming, development, video encoding, etc. I agree with you that trying to buy the latest and greatest is a bad idea; my old PC lasted since 2000, and it was only replaced this year with medium-range components. Anyway, the point is that if you buy that Penryn, your "good enough" DDR2-533 (266 MHz FSB) you bought with the E4400 isn't guara

Anyway, the point is that if you buy that Penryn, your "good enough" DDR2-533 (266 MHz FSB) you bought with the E4400 isn't guaranteed to work as DDR2-667 needed for the new CPU. If you have an "overkill" DDR2-667, it'll feel right at home with the Penryn...

Well, tell that to the people that bought an AMD64 Socket 754 or even 939... I know, the article is about Intel, but that's one way one loses faith in futureproofing. The machine I used in my example was a PPro 200. It was a great machine, but Int

...or when PC-133 SDRAM first came out. Or when 72-pin DIMMs first came out. Or when you could stuff 4MB onto a 286 instead of just 1 or 2.

Each step was nice, but hampered by the tech that used those parts (e.g. DOS and its apps were still fighting each other between EMS and XMS for using anything over 640k, back when boxes started coming out with 1, then 2 MB of RAM on 'em).

...and don't get me started on how frickin' worthless that 512k RAM cartridge turned out to be on my old Commodore 64. It took

It's similar, but not the same. When DDR2 came out, we were comparing low-latency DDR-400 to high-latency DDR2-533; the latency wiped out the marginal bandwidth improvement. With DDR3, we have higher latencies again, but the bandwidth boost is much bigger (some of the best modules are rated for 2x the bandwidth of JEDEC-spec DDR2), yet it's still not yielding much of a performance boost. At this point the problem is what's using the RAM; Intel's P35 memory controller + C2D design can't properly use a

Really, memory and CPU bottlenecks are not the biggest issue right now. The problem is and has been storage speed. It doesn't matter if we can crunch bits faster on the mainboard if we can't get them in and out to begin with.
Memory and CPU speeds are skyrocketing and hard disk performance has stayed rather flat for years. Until drive performance catches up we'll still be waiting forever for the OS to boot up or apps to load.

That may be true for things like application start-up and OS boot time, but I don't think those things are a priority for speed-up. Most applications nowadays can run almost entirely out of RAM (and store their data sets in RAM); 2 GB of memory is not uncommon. This makes memory speed the predominant limit on the speed of a computer in most applications. Having Photoshop filters run faster, or having iTunes transcode your "collection" of Simpsons episodes so you can play it on your iPod, are all things that are c

Photoshop filters and video transcoding are predictable interruptions that come in big chunks. You can then use your other core(s) to do other tasks, for example to load some files from HDD to RAM... Opening photos for editing in Photoshop and opening video files for transcoding are tasks that are limited by HDD performance. This HDD lag comes in tiny bits all the time. You can't avoid it. It's also ANNOYING and pisses you off (some call it "micro stress"). Let's say you lose 10 seconds every two minutes tha

Considering I'm writing this from work, I don't think computer speed is the limitation to my productivity.

And while HD lag is annoying, the concern for most computational limits, IMO, has been with processing-heavy workloads (simulation time, gaming, processing filters, etc.). The actual time it takes to load a picture from the HD is quite trivial compared to waiting 5 minutes for a black-and-white filter.

I run 4 GB now with memory prices being what they are. 64-bit operating systems (I run Vista Ultimate 64) can take advantage of it, and high-end apps like Photoshop and Premiere seem to love the extra headroom. Add in a high-end graphics card (8800 GTS) and it's all good.

Nearly every incremental step in technology is met with a barrage of "it's too expensive, it doesn't work right, it's not worth it, nobody will go there..." at which point it goes on to become the norm.

On a dual-processor Intel machine, you have to move to FB-DIMMs. I'm not sure if there are currently DDR3 FB-DIMMs, but I don't think so. If there were DDR3 FB-DIMMs, they'd also be quad-channel.

On a dual-processor AMD machine, you have NUMA (non-uniform memory architecture), so each processor (processor, not core) has its own set of memory and its own bus, meaning you have 2 dual-channel busses.

Is it just me or does it seem like every new memory technology disappoints? I've built systems since before EDO DRAM was all the rage, and we've seen lots of advances since... Burst EDO, SDRAM, RDRAM, DDR, DDR2, DDR3... but every time one of these supposed breakthroughs debuts, the review sites quickly go to work and reveal (at most) 5-10% performance increases over the previous generation. Often it's in the 1-2% benefit range. It seems like it's very difficult to squeeze extra performance out of memory with

Intel, when they are prototyping a new CPU, run it in a simulator. This simulates an entire computer, and is very tweakable. A few years ago, they did an experiment; they made every CPU operation take no simulated time. Effectively, this meant that the CPU was infinitely fast. In their standard benchmark suite, they showed a 2-5x performance improvement overall. After doing this, however, increasing the speed of RAM and the disk gave significant improvements.
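
The arithmetic behind that result: if making every CPU operation free speeds the whole run up by only some factor, the rest of that time was being spent waiting on memory and disk. A toy check using the post's 2-5x range:

```python
def non_cpu_fraction(speedup_with_free_cpu):
    """If zeroing out all CPU time yields this overall speedup, the surviving
    (memory/disk) work must have been 1/speedup of the original runtime."""
    return 1.0 / speedup_with_free_cpu

print(non_cpu_fraction(2.0))  # half the runtime wasn't CPU at all
print(non_cpu_fraction(5.0))  # even in the best case, a fifth wasn't
```

So even with an infinitely fast CPU, a fifth to a half of the benchmark time remained, which is why speeding up RAM and disk then paid off.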

I'm looking for a motherboard that has DDR2 and DDR3 slots, but also a firewire port (and eSATA would be a plus), necessary for video editing. Any takers? I could only find one by Gigabyte on newegg but the reviews are mixed.

I'm looking for a motherboard that has DDR2 and DDR3 slots, but also a firewire port (and eSATA would be a plus), necessary for video editing.

Check again in nine days. There should be at least a few more boards with both DDR2 and DDR3 slots when Intel's X38 chipset is "officially" launched on September 23 [xbitlabs.com] (early X38 boards are starting to appear in stores). Since X38 will be Intel's "performance" chipset, most motherboards should have firewire and eSATA ports (in addition to PCI Express 2.0).

The real question I have is whether or not DDR2 is worth upgrading over DDR1. I have 2 gigabytes of DDR RAM in my computer, and I recently started thinking that upgrading might be a good idea. But would I notice a performance increase by upgrading to DDR2? I don't want to spend $150 on a new motherboard and RAM only to get a marginal speed boost.

It depends on your current CPU and hard disk. If you have a very old CPU and a slow hard disk, then no. If you have more recent hardware (which seems a bit unlikely on a DDR motherboard) then your RAM may be holding you back.

As default luser said, the difference would be marginal. I probably wouldn't bother upgrading your RAM unless you were also upgrading your CPU to a Core 2 Duo (which doesn't have any motherboards that use regular DDR, IIRC). Here's a benchmark. [anandtech.com] I have a E6600 which is stable and quiet when overclocked to 3.474 GHz with a Scythe Infinity...and 31 degrees C.

Athlon X2 CPUs have on-die memory controllers, so you'd also have to upgrade your CPU (in addition to the motherboard and memory). That seems like a waste (to me) since the X2 4600+ is still a pretty sweet CPU. If you're currently using DDR, then your CPU and motherboard uses Socket 939. To use DDR2, you would need to get a Socket AM2 CPU and motherboard.

With that setup, would my RAM be holding me back?

Not by much, if at all. Since your next memory upgrade will require a CPU upgrade, your next upgrade should probably have quad-core CPUs in mind (or octo

I'm no expert but I wouldn't expect a big performance boost from upgrading from DDR to DDR2. Memory performance in general isn't the bottleneck in a typical desktop system; memory CAPACITY might be, but if you have 2GB already that's not the issue.

If you're looking for an easy speed boost, a new motherboard plus a new CPU would be the way to go; CPU performance has been increasing dramatically lately. Here's a chart from THG [tomshardware.com] that illustrates the progress; even the mid-range Core 2 Duos benchmark at 2-3 tim

Because of latency, DDR2 is only faster than DDR if you have a CPU over a 2 GHz clock speed. And pretty much all speed boosts are marginal nowadays. The only way you really notice the difference is through aggregated marginal increases: CPU + mainboard + hard drive + RAM, etc. Typically you can't see much of a difference from changing out one part anymore.

I appreciate that some users make heavy use of graphics software and/or games etc., but for regular office use I am willing to bet that 90% of people have an absolute overkill of a system. I'm using a 1.6 GHz Pentium 4 with 640 MB of RAM (oblig: it should be enough for everybody ;) ), and currently about 258 MB of that is used (when accounting for buffers and cache) to run my desktop environment and most of the software I ever use. Essentially, I expect that in perhaps 2-3 years' time I might actually consider to

I hate these freaking articles that tell ME, a hard-working, well-paid computer consumer, that something is "too expensive". That is a relative term, and relative to most everyone else's income, maybe it ISN'T too expensive for me. A product is worth what somebody will pay for it; everyone else who isn't buying it can STFU and butt out. Just because YOU can't afford it doesn't mean it is "too expensive" (whatever that means).