I hate to knock a great idea, but this is something that should have been invented 3-4 years ago, not now. As far as I know, there's quite literally no reason to SLI anything. Single cards with multiple cores do the job more than well enough, and nothing out there is going to require some crazy multiple-card setup.

Beyond that, I can't see any use for this. It's a great idea, but not very useful.

What this is supposed to do is bring up the FPS from what you get with SLI or Crossfire. Since there are dual-GPU cards running in SLI (4 GPUs total) to run games like Crysis at a better frame rate than a single dual-GPU card could manage, there is definitely a reason for this chip... IF (and that's a BIG if) it performs as claimed, or even close to it. Your average SLI or Crossfire bump is 40% or so, and that's IF the game supports the solution you have (if not, you have up to 600 dollars' worth of paperweight in your rig). This is supposed to get you double (or close to it) the FPS when you put a second card in, rather than 40% more. So yeah, there's definitely a need for this technology. Now if we can just get games that have more replay value....

If I'm reading the schematics correctly, the threads are sent to the video card as they would be if the Hydra weren't there, except the Hydra only sends a thread to a card if it's ready for one. So it would, as far as I can tell, use the memory of both cards (which would really be an improvement, and probably where they get the scalability claim). Think of the Hydra chip as a traffic cop: it sends data down the appropriate channel when that channel is ready for more data, and lets the card handle the rendering using all of its tools.
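A minimal sketch of that traffic-cop idea, assuming a purely hypothetical dispatcher that only hands a unit of work to whichever card reports itself ready (the `GPU` class, speeds, and work items are all invented for illustration, not Lucid's actual design):

```python
from collections import deque

class GPU:
    """Toy model of a card that can accept one work item at a time."""
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed          # work units it retires per tick
        self.busy = 0               # ticks until it is ready again
        self.done = 0               # items finished so far

    def ready(self):
        return self.busy == 0

    def submit(self, work):
        self.busy = max(1, work // self.speed)

    def tick(self):
        if self.busy:
            self.busy -= 1
            if self.busy == 0:
                self.done += 1

def dispatch(work_items, gpus):
    """Traffic cop: only send work down a channel that is ready for it."""
    queue = deque(work_items)
    while queue or any(not g.ready() for g in gpus):
        for g in gpus:
            if g.ready() and queue:
                g.submit(queue.popleft())
        for g in gpus:
            g.tick()
    return {g.name: g.done for g in gpus}

fast, slow = GPU("fast", speed=4), GPU("slow", speed=1)
print(dispatch([4] * 10, [fast, slow]))   # {'fast': 8, 'slow': 2}
```

The point of the sketch is just the load balancing: the faster card asks for work more often, so it naturally ends up with more of the items, without the dispatcher knowing anything about either card's internals.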

I do not think it will use memory the way you want it to. A lot of data needs to be preloaded locally on each card for fast access and high bandwidth, and that would require the same set of data on both cards, like with Crossfire or SLI. Look at the HD5870's memory bandwidth: 153 GB/s (or 159 GB/s for the GTX 285). The PCIe bus, with its 16 GB/s of bandwidth, does not come even close to being able to feed that data on demand, not to mention the increased latency. I do not think they can predict which data will be needed on each card, and even if they could, that would mean a lot of loading and unloading of data on each card as circumstances change, which would require a lot of bandwidth as well.
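The bandwidth gap the poster describes is easy to put into numbers (the VRAM figures come straight from the comment; 16 GB/s is the rough theoretical peak of a PCIe 2.0 x16 slot):

```python
# Local VRAM bandwidth vs. the PCIe link that would have to feed it on demand
hd5870_vram_gbps = 153.0   # GB/s, HD 5870 local memory bandwidth (from the comment)
gtx285_vram_gbps = 159.0   # GB/s, GTX 285 (from the comment)
pcie2_x16_gbps   = 16.0    # GB/s, rough theoretical peak for PCIe 2.0 x16

for name, local in (("HD5870", hd5870_vram_gbps), ("GTX285", gtx285_vram_gbps)):
    ratio = local / pcie2_x16_gbps
    print(f"{name}: local VRAM is ~{ratio:.1f}x faster than the PCIe bus")
```

So even at theoretical peak, the bus delivers roughly a tenth of what the GPU's local memory can, before latency is even considered; that's the core of the objection to sharing memory across cards.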

If you want to use all of the video memory effectively, one card would need to be able to directly access data in the other card's memory, like in multi-CPU setups. That would mean a change in the video cards themselves. And even then, I'd expect it to appear on X2 cards first, if ever, because between two physical cards that access would still have to go over PCIe, with low bandwidth and increased latency. On the X2 cards they could introduce another connection between the GPUs, kind of like QPI, which would allow access to the other GPU's memory. This could later become part of the PCIe connection for multi-card interconnects.
But honestly, I think the answer is two GPU dies in the same package, kind of like how the Q6600 is two E6600s in one package, to be able to use all the memory effectively.

All of this is just my somewhat educated guess; I am not a GPU engineer, so I could be wrong.

That could be true, but we will have to wait and see, I guess. It's not like they are releasing this info or anything, LOL. On another site I saw them playing UT3: one of the cards was plugged into one monitor, the other was plugged into a second monitor, and each of them was drawing a portion of the screen. I've not used SLI or Crossfire (too expensive for my blood), so I'm not sure if you can do that with the current SLI/Crossfire tech or not. If not, then it could be that the cards are working independently and sending the data through the Hydra and out the primary card to the display. But you could be right too; I have no clue... The biggest tell that it won't work at all like everyone is hoping is that they aren't waving their arms in the air saying "look at what we built! It doubles your FPS! Buy it buy it buy it!" you know?

Can we please, please, please see some proof of the frame rate increase? I'm happy they are putting out actual silicon, but if it doesn't do what it's intended to do, I don't want to have to go out and get an entirely new motherboard to replace my Asus P6T Deluxe V2 just to find out. You've got a working one with an ATI and an Nvidia card in it and Bioshock... show us the FPS count with it on and with it off, PLEEEEEASE???

... but geez, yet another thing to add an additional $75 onto the price of motherboards? The gulf in price between what a high-spec system costs and the box your average gamer uses (which has what, a 3-year-old single GPU and an early C2D?) is getting pretty crazy.

Personally, I think more work on this front needs to be done in the drivers, not by dedicating another hunk of silicon and board real estate to the task, and $75 seems a bit on the high side even then. I suppose we're paying for a large chunk of the R&D at the onset.

My biggest concern is that the technology is somehow dependent on the ability of the monitor to accept dual input.

Now, I've heard that SLI/X-fire synchronization can occur over the PCIe bus. Is this a possibility with this technology? If so, does it have any drawbacks, i.e. reduced bandwidth?
What about the new QPI bus, if the chip were put on an X58 motherboard?

I'd be very surprised if this doesn't do near enough what it says on the box. For a start, Intel was the main investor in this technology, and as mentioned previously, MSI have now got involved too. Given that Nvidia/AMD already have their own (but inefficient) multi-GPU methods, there'd be no place for this if it weren't any better than SLI/Crossfire, so I really don't think they'd have bothered. 100% scaling may be a little optimistic, but around ~90% is probably more likely. Also, this tech was already demoed working last year (anyone remember the photos of the two Unreal III scenes split across 2 monitors?), and this is now an improved version, so I do have fairly high hopes that it's going to work.

But don't ATI and Nvidia use different colours? Like, two greens aren't exactly the same colour; we see them as the same, but they're technically two different colours. How would this work with both? And does it mean that if an older card only supports DX9, a new DX11 card will only be doing DX9 to work with the older card?

Using your old X850 with a 4850 = no DX10 and no Shader Model 3.0 (no Bioshock?). Plus the ratio of extra electricity vs. extra FPS = meh.

Throwing in a pair of 5870s for perfect scaling might still be a no-go: since you're limited to 16x to the CPU, it's still effectively 8x by 8x. On an X58 mobo, by contrast, you actually get 16x by 16x. I'd hope the "high-end model" would actually be 32x to the CPU. Maybe I'm wrong; the benchies will be interesting.

I may consider this when prices come down and if/when it becomes mainstream technology for multi-GPU! It sounds amazing so far! The only thing I don't want is the stuttering/microstuttering that SLI/X-fire has!

That's my question. If you use SLI or have ever browsed the SLIZone forums, you'd notice SLI can be used without a bridge, but the downside is that it runs over the PCIe bus and can run slow. On cards like the 8600GT, they sometimes recommend or need to run without a bridge and it runs fine, but on quicker cards, like a 9800GTX or the 200 series, a bridge is required because it's too much to push over the PCIe bus without a large performance hit.

I'll just be curious to see, when more information comes out, how it actually works. Using only 2 somewhat older games to demonstrate it is questionable too. It seems like there are going to be A LOT of different variables and MANY different configurations that people could use. I can see this being hit and miss depending on what hardware you use and what they support.

It's at least something to be excited about, but of course I think everyone has their doubts. It's been quite a long time since anything big came across the PC scene.

What gives me a good feeling about this is they haven't been hyping the hell out of it - a company of few words and a lot of action maybe?

At any rate, whether this thing flops or not, at least for now it's nice to see something exciting in the immediate future. If it does indeed work as they claim, it'll be a major milestone in PC gaming hardware.

If it scaled linearly, it would outdo Nvidia's and ATI's own multi-GPU solutions. This claim is IMHO way too ambitious: ATI and Nvidia haven't been able to achieve it even without the constraints the Hydra has to work under.

I suppose we can assume that we can never upgrade our current motherboards with this exciting chip.

Right now, is it only slated to be available on this particular MSI P55 board? What about the higher-end X58 boards?

It would be a shame to have to purchase a new board, but this technology sounds so exciting.

My brother upgraded from a Radeon X300 to a Geforce 7900GT to his current Radeon HD4850. It would be very nice if he could use his two older cards collecting dust in the closet to gain a few extra precious frames per second.

Honestly I think everybody needs to settle down and wait. I see all sorts of reasons why this won't work the way everybody hopes.

Memory sizes? DirectX feature sets? Image filtering, hue, etc.? I mean, if I mixed an ATI and an Nvidia card, how would my colors look? If I mixed a 5870 and a 2900, how would my AA or AF look?

What are the overheads? I don't buy linear scaling, sorry, people. Half of the work is being done on another card (in reality, some of the work is going to be duplicated to keep the same textures and scene data on both cards), and then you have to copy it from one card to the other for final rendering over the PCIe bus. That's latency, and parts of that operation could be blocking and definitely require synchronization. That means slower. I would guess pairing an X300 and a 4850 would probably be slower than just using the 4850.
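The poster's doubt about linear scaling can be put in an Amdahl-style toy model: the render work splits across cards, but some fixed per-frame cost (duplication, PCIe copies, synchronization) does not. All the numbers below are invented purely for illustration:

```python
def fps(render_ms_per_frame, fixed_ms_per_frame, n_gpus):
    """Toy model: render work divides across GPUs,
    fixed overhead (copies, sync) does not."""
    frame_time = render_ms_per_frame / n_gpus + fixed_ms_per_frame
    return 1000.0 / frame_time

single = fps(25.0, 0.0, 1)   # 40 fps on one card, no overhead
dual   = fps(25.0, 3.0, 2)   # two cards, plus 3 ms of copy/sync per frame
print(f"1 GPU: {single:.0f} fps, 2 GPUs: {dual:.0f} fps "
      f"(scaling: {dual / single:.0%}, not 200%)")
```

With these made-up numbers, the second card gets you about 161% of single-card throughput rather than 200%; any fixed serial cost per frame caps the scaling below linear, which is the substance of the objection.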

My thoughts exactly. I already have a GTX 260 216 and was thinking of upgrading to a P55 and Lynnfield anyway. I can get all that plus a 5870 for single-GPU DX11, or get a free performance boost using two different cards for DX10 titles, plus PhysX!

I would be interested in this myself. I just ordered an HD5870 to replace my 9800GTX, so PhysX support would be leet if it works, since it would fix that whole PhysX-not-working-if-an-ATI-card-is-detected thing. Disabling it when running in this type of mode would be shooting themselves in the foot, especially if it scales better than their own technology, at no cost to them other than simply optimizing their drivers for it.

Anand, how were you able to verify Bioshock was running in mixed-GPU mode? From this bit from the article:

quote:If for any reason Lucid can't run a game in multi-GPU mode, it will always fall back to working on a single GPU without any interaction from the end user.

It seems it would've been difficult to determine whether you were running in single-GPU or mixed mode without comparing to single-GPU performance for either Nvidia part. Not to mention Bioshock runs quite well on any single GT200 or RV670 part. It just seems VERY misleading to claim you saw Bioshock running in mixed mode without expanding on how you came to that conclusion.

quote:Lucid claims to be able to accelerate all DX9 and DX10 games, although things like AA become easier in DX10 since all hardware should resolve the same way.

Which brings us to price... a $72 premium for what is already provided for free, or for a very small premium, is a lot to ask. My main concern besides compatibility would of course be latency and input lag. I'd love to see the comparisons there, especially given that many LCDs already suffer 1-2 frames of input lag.

No, it just means they were misled to believe the configuration was properly running in mixed-GPU mode, which is my point.

I'm not saying Anand was purposefully misleading; it's quite possible he was also misled to believe multi-GPU was functioning properly, when there's really no way he could've known otherwise without doing some validation of his own.

I never claimed it was a review or anything comprehensive, but if a product is highly anticipated for a few features, say:

1) Vendor Agnostic Multi-GPU

and

2) Close to 100% scaling

And the "preview" directly implies they observed one of those major selling points functioning properly without actually verifying that that's the case. That'd be misleading, imo, especially given the myriad questions regarding the differences in vendor render outputs.

But getting back to the earlier fella's question: I guess I'm old enough to engage in critical thinking and know better than to take everything I read on the internet at face value, even on a reputable site like Anandtech. As people who seem genuinely interested in the technology, I'd think you'd want these questions answered as well, or am I wrong again? ;)

I'd written this off, thinking that it was nothing more than smoke and mirrors... from the looks of it, I'm wrong, and also glad about it... ^^

A couple of questions:

1> Will we be able to see this in AMD systems in the future, or is it an Intel exclusive?
2> Regarding the optional monitors in the pics, would this work with Eyefinity? (I assume it does.)
3> If this catches on (and it will, if it delivers linear graphics acceleration without latency), what would this mean for driver development at Nvidia and ATI? I know Hydra is supposed to be driver agnostic, but:

a> It would render most SLI/X-fire work an exercise in futility. (?)
b> Drivers could be optimized to take advantage of it. (?)

I'm sure the latency isn't going to be that big of a problem... remember all the hype about 1156 having an integrated PCIe controller and the massive frame rates from how the latency was going to be so low?

How about 1366 vs 1156? 1366 has an IMC while 1156 doesn't, yet memory performance is basically the same if both are run in dual channel (yeah, 1366 is slightly faster in dual channel, but it's like less than 1%).

I don't remember any "massive" hype, but then I don't buy much into hype until I see something working. I really don't think this will turn into much. I bet a few people are going to buy it, like those Killer NICs, and maybe there will be a slight speed increase from running an 8800GT and a 3870 compared to just using your one Nvidia 8800GT, but I think there won't be much benefit to Lucid Hydra over SLI or Crossfire.

Of course, all this speculation can end in 30 days, right? Though if it sucks, they will say there is a driver issue that needs to be sorted out...

At a very simple level, this is essentially a DIY video card that you plug your own Frankenstein GPU combos into. For example, instead of the "old way" of slapping two 4890s together in Crossfire to have them render alternate frames (which means you "need" an x16 connection for each card), here you plug two 4850s and a 4770 into the Hydra to get one 5870 (minus the DX11) that only requires a single x16 connection.

Maybe I don't know that much about parallelization, but isn't compartmentalizing complicated scenes a very difficult problem?

For example, most modern games have surfaces that are at least partially reflective (mirrors, metal, water, etc.). Would that not mean that the reflecting surface and the object it's reflecting need to be rendered on the same GPU? Say you have many such surfaces (a large chrome object). Isn't it a computationally hard problem to decide which surfaces will be visible to each other surface, so as to effectively split that workload between GPUs of different performance characteristics without "losing anything", every 1/FPS of a second?

This is pretty much the problem, yes. Modern graphics engines do a *lot* of render-to-texture stuff, which is the crux of the problem. If one operation writes to the texture on one GPU, and then the other operation writes to the texture on the other GPU, there's a delay while the texture is transferred between GPUs. Minimizing these transfers is the big problem, and it's basically impossible to do so since there's no communication between the game and the driver as to how the texture is going to be used in the future.

SLI/CrossFire profiles deal with this by having the developers at NV/ATI sit down and analyse the operations the game is doing. They then write up some rules from these results, specific to that game, on how to distribute the textures and operations.

Lucid are going to run into the same problem. Maybe their heuristics for dividing up the work will be better than NV/ATI's, maybe they won't. But the *real* solution is to fix the graphics APIs to allow developers to develop for multiple GPUs in the same way that they develop for multiple CPUs.
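The render-to-texture ping-pong described above can be illustrated with a toy ownership tracker that counts a PCIe copy whenever an operation touches a texture last written on the other GPU (the op streams and texture names here are entirely hypothetical):

```python
def count_transfers(ops):
    """ops: list of (gpu_id, texture_name) pairs, in submission order.
    A transfer is needed whenever a GPU uses a texture whose current
    copy lives on the other GPU."""
    location = {}           # texture -> GPU holding the up-to-date copy
    transfers = 0
    for gpu, tex in ops:
        if tex in location and location[tex] != gpu:
            transfers += 1  # copy over PCIe before this op can run
        location[tex] = gpu
    return transfers

# Alternating work on the same shadow map forces a copy on every step:
bad  = [(0, "shadow"), (1, "shadow"), (0, "shadow"), (1, "shadow")]
# Keeping all shadow-map work on one GPU needs no copies at all:
good = [(0, "shadow"), (0, "shadow"), (0, "shadow"), (1, "hud")]
print(count_transfers(bad), count_transfers(good))   # 3 0
```

This is exactly what the per-game SLI/CrossFire profiles are tuning: the schedule of which operations land on which GPU, so that texture dependencies stay local as often as possible.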

Wait... how does it work now? Do they have to develop a different rendering engine for each GPU or GPU family? I thought APIs like DX actually took care of that shit and standardized everything :S

DirectX and OpenGL provide a standard *software* interface. The actual hardware has a completely different (and more general) structure. The drivers take the software commands and figure out what the hardware needs to do to draw the triangle or whatever. The "problem" is that DirectX and OpenGL are too general, and the driver has to allow for all sorts of possibilities that will probably never occur.

So, there's a "general" path in the drivers. This is sort of a reference implementation that follows the API specification as closely as possible. Obviously there's one of these per family. This code isn't especially quick because of having to take into account all the possibilities.

Now, if a game is important, NV and ATI driver developers will either analyze the calls the game makes, or sit down and talk directly with the developers. From this, they will program up a whole lot of game-specific optimizations. Usually it'll be at the family level, but it's not unheard of for specific models to be targeted. Sometimes these optimizations are safe for general use and speed up everything; these migrate back into the general path.

Much more often, these optimizations violate the API spec in some way, but don't have any effect on this specific game. For example, the API spec might require that a function does a number of things, but in the game only a portion of this functionality is required. So, a game-specific implementation of this function is made that only does the bare minimum required. Since this can break other software that might rely on the removed functionality, these are put into a game-specific layer that sits on top of the general layer and is only activated when the game is detected.

This is partially why drivers are so huge nowadays. There's not just one driver, but a reference implementation plus dozens or even hundreds of game- and GPU-specific layers.

So from the game developer's point of view, yes, DirectX and OpenGL hide all the ugly details. But in reality, all it does is shift the work from the game developers to the driver developers.
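The layering described above, a spec-compliant reference path plus per-game layers that shadow only the calls a given game is known to make, can be sketched roughly like this (the class names, the `clear` call, and the "stencil skipped" optimization are all invented for illustration):

```python
class GeneralPath:
    """Reference implementation: follows the API spec exactly, slow."""
    def clear(self, flags):
        return f"full spec-compliant clear({flags})"

class GameSpecificLayer:
    """Overrides only the calls this one game is known to make;
    everything else falls through to the general path."""
    def __init__(self, fallback, overrides):
        self._fallback = fallback
        self._overrides = overrides

    def __getattr__(self, name):
        # __getattr__ fires only for names not found on the instance,
        # so _fallback/_overrides themselves never recurse here.
        return self._overrides.get(name) or getattr(self._fallback, name)

general = GeneralPath()
# Hypothetical: profiling showed this game never needs the stencil part of clear()
crysis_layer = GameSpecificLayer(general, {
    "clear": lambda flags: f"bare-minimum clear({flags}), stencil skipped",
})

detected_game = "crysis"   # the driver activates the layer only on detection
driver = crysis_layer if detected_game == "crysis" else general
print(driver.clear("color|depth"))   # bare-minimum clear(color|depth), stencil skipped
```

Multiply this by dozens of games and several GPU families and you get the driver bloat the comment mentions: one reference path plus a stack of thin, game-activated override layers.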

Generally speaking (off the top of my head; I really have no idea if this is true or not), I think this is accomplished by having all the data needed to render a scene (geometry, textures, shaders, etc.) on both cards, but each card is only given half of the scene to actually render.
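That duplicate-everything, split-the-screen guess is essentially split-frame rendering, and can be sketched as follows (a deliberate oversimplification; the `render_rows` function and the row split are invented for illustration):

```python
def render_rows(scene, rows):
    """Pretend renderer: records which rows this card drew.
    Note the full scene data is passed in, duplicated, to every card."""
    return {row: f"drawn from {len(scene)} scene objects" for row in rows}

scene  = ["geometry", "textures", "shaders"]   # duplicated on both cards
height = 8
top    = render_rows(scene, range(0, height // 2))       # card 0: rows 0-3
bottom = render_rows(scene, range(height // 2, height))  # card 1: rows 4-7

frame = {**top, **bottom}            # composite the two halves
assert len(frame) == height          # every row drawn exactly once
print(sorted(frame))                 # [0, 1, 2, 3, 4, 5, 6, 7]
```

The duplication is the key limitation: both cards hold the whole scene, so two 1 GB cards still give you roughly 1 GB of usable scene memory, which ties back to the earlier memory discussion in this thread.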

Even besides the performance aspect, for which they apparently didn't give any real-world figures to ponder (I may have overlooked them, though), a great many people will not buy it at such a high premium. Over $70 for the higher-lane part is nutty. I will be perfectly content with AMD's and Nvidia's approach, thank you....

I'm having trouble seeing what is so great about this. It looks like paying $75 to $150 just to cut your graphics bandwidth in half. If you throw it on the P55 motherboard, you aren't losing anything in SLI mode (only one x16 lane anyway), but why pay extra money for less bandwidth?

I will say this: my hat is off to anyone who buys this and succeeds in making at least one fanboy's head explode (ATI and Nvidia in the same board???!!!EleventyOne!!!????).

Who cares about lanes? If they accomplish what they promised, almost linear scaling, that's a hell of a lot more than the 60% average performance increase from single to dual on both Nvidia and ATI.

True, but keep in mind that, assuming it works, it is quite an amazing chip and does far more than a lot of other things can. Plus, demand for the chip is going to be fairly low: unlike a chipset, which is needed in EVERY computer, this will only be featured on higher-end motherboards. Basic economics...

It's like momma always said: if it sounds too good to be true, it probably is.

But if it works as advertised, Hydra will be going on all kinds of motherboards. Who would ever want to do SLI or Crossfire when you can buy a motherboard with this chip and get linear performance scaling with each additional video card you add?

I hope the hype lives up to the expectations, but I'm prepared for disappointment.

That noise you hear is the sound of countless wallets slamming shut for the next 30 days. You'd have to be a fool to buy or build a new gaming PC until we find out how well this actually works (and exactly how much it will cost).

We know that not all GPUs render the same way. AMD AA looks different than Nvidia AA. They have different modes and make different trade-offs for perf.

I suspect the best results will still come from matched GPUs. The win is (possibly) the ability to get better load balancing than you get with SLI or Crossfire. Some games scale poorly beyond two GPUs; will this fix that? If so, then THAT is the win.

What I do foresee is a giant pile of app compat issues. Tons of forum posts that go "Hey, how do you get X working with GPUs FOO and BAR? Anyone? BUMP"

This technology would convert me to multi-GPU. In the past I would just buy one card, and by the time I needed another I would just upgrade to the latest, which was usually significantly faster. I didn't want the hassle of having to get the exact same brand and model. Now I could just buy the latest and greatest and put that one in as well.

Lol, not mine. I just ordered an HD5870 1GB. With that said, if it isn't up to par, I won't have any issues just getting a GT300 as well when it comes out and throwing it in there if it works properly, lmao. I was originally planning on getting a second card in a month anyway, but that perfectly coincides with the expected launch of the Big Bang, so I will know whether to spend my cash there instead or not, since I was going to get an i7 afterward anyway to replace my EP45-UD3P.

Ooh, also: how are single-card multi-GPU solutions handled? They run SLI/CF on the card, so wouldn't that pose some interesting issues? Or could ATI/NV just use this instead of whatever silicon they are using now to bridge the cards? The low-end model would be well suited to what they need to do, for a fair bit cheaper, and it would also solve the scaling issues some of these cards can have in some games, lol.