Posted
by
CmdrTaco
on Monday January 28, 2008 @09:41AM
from the heckuva-lot-of-video-cards dept.

MojoKid writes "AMD officially launched their new high-end flagship graphics card today and
this one has a pair of graphics processors on a single PCB.
The Radeon HD 3870 X2 was codenamed R680 throughout its development.
Although that codename implies the card is powered by a new GPU, it is not. The
Radeon HD 3870 X2 is instead powered by a pair of RV670 GPUs linked together on
a single PCB by a PCI Express fan-out switch. In essence, the Radeon HD 3870 X2
is "CrossFire on a card" but with a small boost in clock speed for each GPU as
well.
As the benchmarks and testing show, the Radeon HD 3870 X2 is one of the
fastest single cards around right now. NVIDIA is rumored to be readying a dual
GPU single card beast as well."

No mention in the article summary of whether this card is covered by ATI's recent decision to release driver source code. If you buy this card, can you use it with free software?

While AMD has done a good thing and released a lot of documentation for their cards, it has not been source code, and it has not yet included the necessary bits for acceleration (either 2D or 3D). That said, I'm watching what I'm typing right now courtesy of the surprisingly functional radeonhd driver [x.org] being developed by the SUSE folks for Xorg from this documentation release. While it lacks acceleration, it's already more stable than ATI's fglrx binary blob and free of that driver's numerous show-stopper bugs.

Dunno yet if this latest greatest chunk of silicon is supported, but being open source and actively developed, I'm sure that support will arrive sooner rather than later.

Actually, what did they really release? I remember some time ago there was a lot of excitement right here on /. about ATI releasing the first part of the documentation, which was basically a list of names and addresses of registers, with little or no actual explanation. (Although I guess if you have programmed graphics drivers before, you'd be able to guess a lot from the names...)

The point is, it was said that these particular docs were only barely sufficient to implement basic things like mode-setting.

What's the real status of the released docs? Is there enough to do a real implementation with all the little things like RandR, dual-head support, TV-out and 3D support, or is ATI just stringing us along, pretending to be one of the good guys?

RandR and dual head work, based on what's running on my desk right now. Better than fglrx.

No idea about TV-out. Some 2D acceleration is in the works, but the 3D bits were not in the released docs (although there are rumors of people taking advantage of standardized calls).

Um? Actually, video cards are an inherently parallelizable problem set. You see this in every modern video card, where the difference between the top and bottom of a product line is often simply the number of parallel execution units that passed QC. All they are doing here is combining two of the largest economically producible dies into one superchip. Oh, and I already have multiprocessor network cards; they're called multiport TCP offload cards =)

When can I have a quantum graphics card that displays all possible pictures at the same time ?

Quantum algorithm for finding properly rendered pictures:
1. Randomly construct a picture, splitting the universe into as many possibilities as exist.
2. Look at the picture.
3. If it's incorrectly rendered, destroy the universe.

But now, with Quantum Graphics, you don't have to destroy the unfit universes - the card will take care of it for you! Buy now!

In the case of graphics processors, this has been the trend for quite some time (and in fact, has always been the trend). Each generation of graphics processors has been made more powerful than the previous by adding more parallel pixel processors. The main clocks for these chips have been kept steadily in the sub-GHz region. In fact, my old GeForce 4200 operates at 250 MHz. It makes sense, since the processing of a pixel's shading and texture data is very parallel. In theory you could have up to one processor per pixel of your resolution.

Two GPUs on a single card? Who the hell needs that kind of power? Besides, don't modern graphics cards waste ridiculous amounts of energy even when they're simply drawing your desktop? For those who haven't been following the recent releases of ATI graphics cards, it's probably interesting to note that the ATI HD 3850 and HD 3870 use only 20 watts when idling (most low-end cards use at least 30 W nowadays, and high-end cards are often closer to 100 W).

Isn't AMD working on a system which switches back to a low-power on-board graphics chip when drawing the OS?

I don't know what AMD/ATI is currently working on, but you cannot draw an operating system. You can, however, draw a windowing system, for instance Xorg rendering KDE or Gnome. This is Slashdot, us nerds are pedantic.

Perhaps you meant having a low-power chip which can take over for simple 2D graphics. I believe Aero (hopefully I got the name correct) uses 3D graphics now, and it's all the rage in Vista.

Actually, both nVidia and ATi are working on a system that allows a lower-powered onboard GPU core to handle things like Veesta Aero, then switch to the octal-SLi GeForce 10000 GTX when rendering Crysis 2 or something. I believe it's part of nVidia's Hybrid SLi and ATi's Hybrid CrossFire. It's supposed to save a lot of power, because not only does it divert light rendering load to a chip that can easily handle it, it suspends the main GPU, saving a lot on idle power draw (current cards, especially high-end ones, draw significant power even at idle).

Don't get pedantic if you're not going to go all the way. An operating system operates all parts of the computer, including the video output parts. Even if it's running in text mode, something has to tell it what characters to draw where. Even DOS had to be "drawn" on the screen, so to speak. If you want to get even more pedantic, then yes, what you saw *most* of the time was command.com or some other program, but the OS itself had video output routines too, specifically "Starting MS-DOS..." and the like.

In that case, allow me to give you a quick grammar lesson. If you're going to use a phrase like "us [sic] nerds are pedantic," there's a simple rule for determining whether to use "we" or "us." The sentence should be grammatically correct without the additional descriptive word you've added (nerds in this case). Following that rule, you would consider two possibilities: "we are pedantic" and "us are pedantic." Obviously, the latter is incorrect.

Who needs it? Probably graphics artists who are rendering amazingly complex scenes. I can imagine it would help some game designers and potentially even CAD architecture-types. Probably not so much with films because I think they're rendered on some uber-servers.

Who wants it? Gamers with more money than sense and a desire to always be as close to the cutting edge as possible, even if it only gains them a couple of frames and costs another £100 or more.

Who needs it? Probably graphics artists who are rendering amazingly complex scenes. I can imagine it would help some game designers and potentially even CAD architecture-types. Probably not so much with films because I think they're rendered on some uber-servers.

Not necessarily. Most standard rendering engines eat system CPU a lot more than they ever would GPU - especially when it comes to things like ray tracing, texture optimization, and the like.

Most (even low-end) rendering packages do have an "OpenGL mode", which uses only the GPU, but the quality is usually nowhere near as good as you get with full-on CPU-based rendering. Things may catch up as graphics cards improve, but for the most part, render engines are hungry for time on that chip on your motherboard, not the one on your video card.

Two GPUs on a single card? Who the hell needs that kind of power? Besides, don't modern graphics cards waste ridiculous amounts of energy even when they're simply drawing your desktop?

For those who haven't been following the recent releases of ATI graphics cards, it's probably interesting to note that the ATI HD 3850 and HD 3870 use only 20 watts when idling (most low-end cards use at least 30 W nowadays, and high-end cards are often closer to 100 W).

So that should mean that this new card should eat about 40W when idling, making this card not just the most powerful graphics card today, but also less wasteful than nVidia's 8800GT. Not a bad choice if you're in dire need of more graphics power. Although personally I'm planning to buy a simple 3850.

*Raises hand.* Who needs this kind of power? Ever done any solid modeling? Real-time rendering? Engineering computations that can be offloaded onto a GPU that can do massive floating-point calculations? As a mechanical engineer, I want to be able to do this without buying a $3k FireGL card or a competing card from nVidia, and I also want to be able to deal with multimedia compression and other tasks that those cards aren't designed to solve.

Or, pick up a pair of 8800 GTs for roughly the same price as AMD's X2, and get more performance (most likely). This is assuming you have an SLI-capable board. An X2 from nVidia is gonna cost an arm and a leg, most likely.

Am I the only one underwhelmed by almost every new graphics card announcement these days?

Graphics cards have long since been really fast for 99.9999% of cases. Even gaming. These companies must be doing this for pissing contests, the few people who do super-high-end graphics work, or a few crazy pimply-faced gamers with monitor tans.

Actually, graphics power isn't fast enough yet, and it will likely never be fast enough. With high-resolution monitors (1920x1200, and such), graphics cards don't yet have the ability to push that kind of resolution at good framerates (~60fps) on modern games. 20-ish FPS on Crysis at 1920x1200 is barely adequate. This tug-of-war that goes on between the software and hardware is going to continue nearly forever.

Me, I'll be waiting for the card that can do Crysis set to 1920x1200, all the goodies on, and 50-60fps. Until then, my 7900GT SLI setup is going to have to be enough.

That's the biggest problem that I see with PC gaming. Last week, I went out and bought an Nvidia 8800 GTS for $300 so that I could play some of the more recent PC games at an acceptable frame rate at my primary monitor's native resolution (1680x1050). My computer is fairly modern, with a 2.66 GHz dual-core processor and 2 GB of DDR2-800. The problem is, even with this upgrade, I could only play Crysis at medium settings. While it was definitely a performance improvement over my 6800 SLI setup, the quality improvement wasn't what I'd hoped for.

Play the game and enjoy it at the best settings you can get. I downloaded the Crysis demo last night for my 20" iMac booted into WinXP (2.33 GHz C2D, 2 GB RAM, 256 MB X1600 video card - hardly an ideal gaming platform, eh?). I read that I wouldn't be able to play it on very good settings, so I took the default settings for my native resolution and played through the entire demo level with no slowdowns. It looked great.

The real problem here is people feeling like they are missing out because of the higher settings they can't reach.

You've made my point for me. The game looks GREAT with the default settings (great being a relative term). I'm sure it looks even better with better hardware, but that doesn't make the game any less fun to play. I seriously doubt there are any graphical items missing that take away from the gameplay. The highest settings require expensive video cards that don't quite justify the expense, in my book. In the case of Crysis, I read they developed it to play at higher settings than are even possible with current hardware.

I don't know why people try to argue that graphics don't matter; if they didn't, high-end graphics cards wouldn't sell and Crysis would look like Pong.

Crysis doesn't look like Pong, even on a crappy low-ish end X1600 video card. Unfortunately, I have an iMac, so newer cards pushing down prices are of no benefit to me ;-) Perhaps FEAR requires the most subtle of graphics capabilities, but not at the expense of a $500 video card. I'll just play FEAR next year, when I can build an entire PC with a decent video card (that will be outdated, but cheap) for less than the cost of that same video card now. For the record, I've played the FEAR demo on my iMac a while back.

I don't want to sound argumentative, but I don't really understand what you don't understand. I'm not trying to make this about an iMac. I'm simply trying to point out that a much-less-than-ideal computer can handle the latest crop of games to a "much better than decent" level. I'd even argue that a cheap 3D card coupled with a strong Core 2 Duo chip and lots of RAM is better than the best video card with a slower CPU and less RAM. From what I can tell, it is cheaper to buy a faster CPU and more RAM.

Crysis looks *beautiful* on medium settings. The fact that it will look even better on new hardware a year from now is an advantage for people who buy that hardware, and completely irrelevant to anyone who doesn't. At least for people who don't have some sort of massive jealousy issue that makes it so they can't handle the idea that someone might, at some point in the future, have nicer toys than they do.

Actually, graphics power isn't fast enough yet, and it will likely never be fast enough. With high-resolution monitors (1920x1200, and such), graphics cards don't yet have the ability to push that kind of resolution at good framerates (~60fps) on modern games. 20-ish FPS on Crysis at 1920x1200 is barely adequate. This tug-of-war that goes on between the software and hardware is going to continue nearly forever.

Me, I'll be waiting for the card that can do Crysis set to 1920x1200, all the goodies on, and 50-60fps. Until then, my 7900GT SLI setup is going to have to be enough.

But then you'd just be complaining that resolution Xres+1 x Yres+1 can't be pushed at N+1 FPS. Honestly, you only need 24 to 32 FPS, as that is pretty much where your eyes top out (unless you have managed to time travel and get ultra-cool ocular implants that can decode things faster). It's the never-ending b(#%*-fest of gamers - it's never fast enough - doesn't matter that you're using all the resources of the NCC-1701-J Enterprise to play your game.

You're wrong about many things there. The first is framerate. If you can't tell the difference between 24 and 60 FPS, well, you probably have something wrong. It is pretty obvious in computer graphics, due to the lack of the motion blur present in film, and even on a film/video source you can see it. 24 FPS is not the maximum number of frames a person can perceive; rather, it is just an acceptable amount when used with film.

So one goal in graphics is to be able to push a consistently high frame rate, probably somewhere in the 75fps range as that is the area when people stop being able to perceive flicker. However, while the final output frequency will be fixed to something like that due to how display devices work, it would be useful to have a card that could render much faster. What you'd do is have the card render multiple sub frames and combine them in an accumulation buffer before outputting them to screen. That would give nice, accurate, motion blur and thus improve the fluidity of the image. So in reality we might want a card that can consistently render a few hundred frames per second, even though it doesn't display that many.
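A toy sketch of that accumulation-buffer idea (the "renderer", the 20-pixel scanline, the sub-frame count, and the 60 Hz interval are all invented for illustration):

```python
# Toy accumulation buffer: render several sub-frames at slightly
# different times within one display interval, then average them.
# The averaged trail of the moving object approximates motion blur.

def render_subframe(t):
    # Stand-in "renderer" (hypothetical): one 20-pixel scanline with
    # a bright object whose position depends on time t.
    return [1.0 if abs(x - t * 10) < 1 else 0.0 for x in range(20)]

def accumulate(frame_time, subframes=4, refresh_hz=60):
    # Spread the sub-frames across one refresh interval and average.
    buf = [0.0] * 20
    for i in range(subframes):
        t = frame_time + (i / subframes) * (1.0 / refresh_hz)
        buf = [a + b for a, b in zip(buf, render_subframe(t))]
    return [v / subframes for v in buf]

frame = accumulate(0.5)
# The object's pixel stays at full intensity, while the pixel it is
# moving toward gets a partial-intensity trail: blur.
```

Real hardware does this in an accumulation buffer on the card rather than in Python lists, of course; the point is only that rendering N sub-frames per displayed frame multiplies the rendering work by N.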

There's also latency to consider. If you are rendering at 24fps that means you have a little over 40 milliseconds between frames. So if you see something happen on the screen and react, the computer won't get around to displaying the results of your reaction for 40 msec. Maybe that doesn't sound like a long time, but that has gone past the threshold where delays are perceptible. You notice when something is delayed that long.
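The numbers here are pure arithmetic, nothing vendor-specific:

```python
def frame_interval_ms(fps):
    # Time between displayed frames: a hard floor on how quickly the
    # screen can reflect your input at a given frame rate.
    return 1000.0 / fps

print(round(frame_interval_ms(24), 1))  # → 41.7 (the "little over 40 ms")
print(round(frame_interval_ms(60), 1))  # → 16.7
```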

In terms of resolution, it is a similar thing. 1920x1200 is nice and all, and is about as high as monitors go these days, but let's not pretend it is all that high rez. For a 24" monitor (which is what you generally get it on) that works out to about 100PPI. Well print media is generally 300DPI or more, so we are still a long way off there. I don't know how high rez monitors need to be numbers wise, but they need to be a lot higher to reach the point of a person not being able to perceive the individual pixels which is the useful limit.
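The "about 100 PPI" figure is easy to check (panel size assumed to be the 24" mentioned above):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixel density: length of the pixel-grid diagonal divided by the
    # physical diagonal of the panel.
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1200, 24)))  # → 94, i.e. roughly 100 PPI
```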

Also, pixel oversampling is useful just like frame oversampling. You render multiple subpixels and combine them into a single final display pixel. It is called anti-aliasing, and it is very desirable. Unfortunately, it does take more power, since you have to do more rendering work, even when you use tricks to do it (and it really looks best when done as straight super-sampling, no tricks).
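A minimal sketch of straight super-sampling (the grayscale values and the tiny 4x4 "scene" are invented for illustration):

```python
def supersample(hi_res, factor=2):
    # Straight super-sampling anti-aliasing: render at factor x the
    # target resolution, then average each factor x factor block of
    # subpixels down to one display pixel.
    h, w = len(hi_res), len(hi_res[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [hi_res[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard diagonal edge rendered at 4x4 becomes a softened 2x2 image:
edge = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0]]
print(supersample(edge))  # → [[1.0, 0.25], [0.25, 0.0]]
```

This makes the cost obvious too: 2x2 super-sampling means rendering four times as many pixels as you display, which is exactly why it eats GPU power.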

So it isn't just gamers playing the ePenis game; there are real reasons to want a whole lot more graphics power. Until we have displays so high-rez that you can't see individual pixels, and cards that can produce high frame rates at full resolution with motion blur and FSAA, we haven't gotten to where we need to be. Until you can't tell it apart from reality, there's still room for improvement.

Honestly, I doubt you play FPS games, because the difference between the 24-32 FPS range and the 50-60s is way noticeable. Forget the theoretical technicalities of the human eye's capabilities for one second: I'm sure that when a game's FPS drops into the 30s, other factors make it sluggish, and all of that together makes the difference between the 30s and the 60s an important one.

I hear what you're saying, esconsult1, in that the top-of-the-range GPUs are capable of hoovering up the most demanding apps and games at ridiculous resolutions and so product announcements such as this are neither groundbreaking nor exciting.

In terms of the progression of GPU technology as a whole, however, I for one shall be acquiring a new 'multimedia' laptop in about six months, and I need a fairly high-spec graphics card that will, for example, support gameplay of the latest titles but (1) will not drain the battery in short order.

Am I the only one underwhelmed by almost every new graphics card announcement these days? Graphic cards have long since been really fast for 99.9999% of cases. Even gaming.

Do you play current games? They keep getting more demanding, and people who want to play those games also want hardware to match. If your current hardware suits your needs... Good for you. Please realize that others will have different needs.

Pfft, I could just about run Crysis on medium at 1024x768 on my 7950GT; numerous other games needed settings turning down and/or resolution decreasing to run smoothly, never mind 4xAA or 16xAF. I recently upgraded to a G92 8800GTS and it's great actually being able to run everything at my monitor's native resolution again, and remembering what "smooth" meant. Now I'm thinking about getting a 30" monitor; 2560x1600 -- ruh oh, now my card needs to be twice as powerful again to avoid having to run at non-native resolution.

I am really not that impressed. It's not much faster than the 8800 GT which is MUCH MUCH MUCH less expensive. I am sure you can pick up two 8800 GT's for the price of this card. Of course then you have to deal with the noise, but overall it looks to me that the price/performance ratio of this card is not that great.

No matter how well they designed the card, at the end of the day price/performance is what you are looking for in a graphics card. This card delivers performance that teeters around what the 8800 Ultra gives, at a much lower cost, and produces about the same noise and power ratios. ATI announced that they won't sell cards for over 500 dollars, and I think that gives them a good standing in the marketplace. If you are willing to spend 450 dollars: http://www.newegg.com/Product/Product.aspx?I [newegg.com]

ATI/AMD's drivers can make you cry. But their Crossfire already scales much better than Nvidia's SLI which is a comparative disaster. Most games use Nvidia's cards/drivers for development so Nvidia cards hit the ground running more often. As manky as ATI drivers can be, when they say they will be getting better they tell the truth. ATI drivers tend to show substantial improvements after a cards release.

Hm, well if that's the case, then nobody should run out and buy this card. WRT Crossfire... I had a friend who wanted to buy Intel because they're "the fastest." Hence, he was stuck with ATi for video cards. Except the latest driver broke Crossfire, and he spent a couple of hours uninstalling the driver to reinstall the older one. Doesn't that sound like fun?

nVidia's drivers aren't better because they're used for development; they're better because nVidia knows "IT'S ALL ABOUT THE DRIVERS, STUPID". ATi still hasn't figured that out.

They probably are pulling a Matrox. Release partial specs, promise to release more, rake in $$$$$$$$$$ from gullible members of the Open Source community, fail to deliver on promises. Great short-term strategy, but it only works once before said community stops trusting you, especially those who were dumb enough to go for your promises like I was back in 1999. Ever since I made the mistake of buying a Matrox G200 (partial specs - more complete than what ATI has released so far, as I understand it - and a promise of full specs that never materialized), I've been wary.

My reply: Intel's graphics cards won't get faster if no one buys them. Other companies won't open-source their drivers if you keep buying cards with closed-source drivers. Other companies will only open their drivers if they see it working for Intel.

I haven't heard anything about any specs for 3d operations being released from AMD. I know they were talking about it, but what happened then? Did they release anything while I wasn't looking?

They released another 900 pages of 2D docs around Christmas; 2D/3D acceleration is still coming "soon", but given their current pace it'll take a while to get full 3D acceleration. So far my experience with the nVidia closed-source drivers has been rock solid. I have some funny issues getting the second screen of my dual-screen setup working, but it has never crashed on me.

Drivers are something for the here and now; they don't have any sort of long-term implications like, say, what document format you use.

The summary failed to mention the most important factor: the new AMD card is actually much cheaper than the 8800 Ultra and at the same time a lot faster in many tests. In addition, it seems that the X2 equivalent of nVidia is delayed by one month or more, so AMD does have the lead for at least another month.

AMD would have the lead for another month if they would ship actual product. But they haven't yet, in usual ATI form, and I wouldn't recommend holding your breath... I would not be at all surprised to see nVidia's competitor, while delayed, in the hands of actual consumers around the same time as the 3870 X2.

Anyone remember the ATI Rage Fury MAXX [amd.com]? I've still got one in use. It was a monster in its day: dual Rage 128 Pro GPUs and 64 MB of RAM. But for some reason the way they jury-rigged them onto one board didn't work properly in XP, so it only uses one. Oh well, still a nifty conversation piece.

The Rage 128 Pro was never close to the top of the line for a graphics accelerator (and doesn't really qualify as a GPU since it doesn't do transform or lighting calculations in hardware). It was around 50% faster than the Rage 128, which was about as fast as a VooDoo 2 (although it also did 2D). You had been able to buy Obsidian cards with two VooDoo 2 chips for years before the Maxx was released, and the later VooDoo chips were all designed around putting several on a single board.

Why only PCIe 1.1? A 2.0 switch would better split the bus to the two GPUs.

Because there simply aren't any 3-way PCI Express 2.0 switches available on the market yet. Waiting for one would have delayed the product substantially for very little in the way of real-world gains.

Work is in the pipeline for a board which can house all your computer's necessary components, including a multiple core CPU that can handle graphics AND processing all-in-one! It will be the mother of all boards.

Whatever happened to the physics card that some company released a while back? It seemed like a pretty good idea, and I wonder if it could be modified to fit onto a graphics card as well. I just think that would be a nice coupling, because I like small towers rather than the huge behemoth that I have in my Mom's basement (no, I don't live at home any more - wanna take my geek card back?). It's nice that they are putting an extra chip into their cards; I can definitely imagineer that as being pretty helpful.

That is only the case on lower-end CrossFire boards. The better ones not only have two full x16 slots, but those slots are PCIe v2.0 with 8 GB/sec full duplex. So a 3870 X2 on a new 790FX board allows each GPU the 4 GB/sec of bandwidth that a single PCIe v1 x16 slot provides.

You admitted that you didn't even RTFA before asking, your question is covered in TFA, and you said you were about to read it. Kinda like asking a mechanic how much oil your car takes while you start to open the car's manual.

http://anandtech.com/video/showdoc.aspx?i=3209 [anandtech.com]
Anandtech's article compares the 3870x2 against 8800 GT SLI (a good comparison since they cost almost exactly the same). 8800 GT SLI wins in almost every case. 3870x2 is still a damn good card for people with only one PCIe x16 slot though.

Interesting thing is what happens when you stop looking at synthetic benchmarks... and start looking at real gameplay. Take a read through HardOCP's review [hardocp.com] for an example.

As to why AMD released now? Well, my understanding is that nVidia is looking to release their own 2-GPU card (9800 GX2) in Feb/March. Given the benchmarks of the current cards, I can't see the 3870 X2 holding up well... so... beat 'em to market. Although when you factor price in, I'd imagine it'll still be competitive; just not anywhere near the top.

Not really. This codename was created in remembrance of those who gave their lives in the 'Crossfire' Revolution of 680 AD, when the French (or the Gauls, as they were known back then) ambushed the Germans with their Black Widow catapults from opposite sides of a treacherous ravine, and accidentally killed each other in the process. WTF are you expecting from a codename? o_0

In the 7th Century what we know as France today, along with the low countries and some of western Germany, was known as Francia [wikipedia.org] and was ruled, at least in theory, by the Merovingian [wikipedia.org] line of Frankish kings. This century saw the rise of the Carolingian [wikipedia.org] dynasty within Francia, which reached their height in the late 8th and early 9th Centuries with the reign of Charlemagne [wikipedia.org].

Germany wasn't a single political entity until the 19th Century, and the Franks were Germanic [wikipedia.org], which is more of a group of identities, but I digress.

Really? The benchmarks I've seen put it at a fair bit faster than the 8800 Ultra, which makes sense considering it's got two GPUs. And uh, the 8800 Ultra costs $700, this costs $450, so I don't know what crazy inside deals you've got but there is no way you could get another of those for way less than one of these.