and the best need only be better than SLI/Crossfire. Does this mean we will begin to see major changes in motherboard design? Twelve or more x16 slots? Real-time weather-system modeling on desktop enthusiast platforms? Games that shame Crysis right around the corner, simply because near-linear scaling would mean the end of performance ceilings? Four GPUs for 60fps at 1920x1200, but ten GPUs to unlock 4K at 60fps? Reply

Assuming this technology works as advertised, it won't be long until ATI and Nvidia either work out licensing deals or begin work on cloning the technology themselves.

If they can license it for a decent amount, it could well work to their benefit. They won't have to spend time developing and modifying their drivers for Crossfire and SLI; they can just spend time making efficient drivers that work.

Game companies will no longer have to develop a game to work with one vendor or the other, and ATI/Nvidia will no longer have to worry about game-specific profiles. Who knows, this might just be the beginning of something really awesome: graphics cards that work evenly all around with fewer hassles.

True, they don't give us too many hassles now, but I for one would welcome a world with no game-specific configurations and no worrying about driver upgrades potentially breaking older games. Can you imagine it? I can, and it makes me happy. Reply

This is clearly Intel trying to become a graphics monopoly. I am sure that AMD and Nvidia will try to break this chip via their drivers in order to retain Crossfire and SLI. Crossfire works great, and SLI is pretty good too. I hope this brings AMD and Nvidia closer together, and hopefully they can report Intel to the European Commission for more anti-competitive, monopolising behaviour... And no, I am not anti-Intel, as I own a QX9650 and Intel Atom netbooks. In fact, I have never owned an AMD chip... This will be a total flop... mark my words. Lucid should concentrate on Intel GPUs such as Larrabee. Also, I am really starting to think that Anand is an Intel fanboy... Just my two cents. Reply

Well, it really only threatens Nvidia's nForce 200 implementation on the X58 and any chipsets bought over Intel's for SLI/Crossfire purposes. While this chip will directly replace the nForce 200, other chipsets will still sell if people like how they perform. You also need to remember that Lucid is not hurting the GPU market but actually helping it: anything that makes people more comfortable buying more GPUs is a plus for both AMD and Nvidia. Reply

Another question I have: if you put one of these on a card that connects to four GPUs, can you also put one on a motherboard that connects to four such cards, so that Hydra 100s cascade off of each other while maintaining linear performance? Or can they only connect to GPUs, not to other Hydra 100s? Reply
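For what it's worth, if Hydra chips could cascade like that, the GPU count would grow geometrically with switch depth. A toy calculation, assuming a hypothetical four-port fanout per chip and ignoring real-world bandwidth and software limits:

```python
def max_gpus(fanout: int, levels: int) -> int:
    """Number of GPU endpoints reachable through a tree of identical
    switches, each with `fanout` downstream ports (pure speculation)."""
    return fanout ** levels

# One Hydra feeding four GPUs directly:
direct = max_gpus(4, 1)      # 4 GPUs
# A motherboard Hydra feeding four cards that each carry their own Hydra:
cascaded = max_gpus(4, 2)    # 16 GPUs
```

Whether the scheduling software could keep scaling linearly across two levels of switches is a completely separate question from the raw port math.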

As there is no DX or OGL "command stream" reaching the graphics cards directly, it's Lucid's software that intercepts the calls. I'm still not sure how it distributes the workload, though; I guess that has to happen entirely before the cards' drivers. It seems kind of odd having it as a PCIe switch, but I suppose it needs the x16 2.0 interface, and motherboards don't need any more lanes to deal with. I guess it does some framebuffer magic to combine the results into one image. Reply
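Nobody outside Lucid knows the actual mechanism, but the basic idea of intercepting API calls ahead of any vendor driver can be sketched. This is a toy model with hypothetical names (a real implementation would hook the D3D/OpenGL runtime, and Lucid's split is reportedly smarter than simple round-robin):

```python
from itertools import cycle

class HydraDispatcher:
    """Toy model: intercept draw calls before any vendor driver and
    hand each one to the next GPU in round-robin order."""
    def __init__(self, gpus):
        self.gpus = gpus
        self.queues = {g: [] for g in gpus}   # per-GPU pending work
        self._next = cycle(gpus)

    def submit(self, draw_call):
        # In reality this sits where the D3D/OGL runtime would be;
        # here we just pick a card and queue the call for it.
        gpu = next(self._next)
        self.queues[gpu].append(draw_call)
        return gpu

    def compose(self):
        # Stand-in for the "framebuffer magic": merge per-GPU results.
        return [c for g in self.gpus for c in self.queues[g]]

d = HydraDispatcher(["gpu0", "gpu1"])
targets = [d.submit(f"draw_{i}") for i in range(4)]
```

The point is only that the interception and the recombine step both live above the drivers, which matches what Lucid has said publicly.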

We all know that SLI (and maybe Crossfire) loads a scene into the memory of each card. Say you have a 1GB scene and two 512MB cards. The scene is not split into two 512MB portions; instead, half of it is sent to both cards to process, then the other half is sent to both cards. So usable GPU RAM is limited by the card with the least RAM in the system.

Is this chip going to use its own RAM (or system RAM) to hold the entire scene and send parts of it to each GPU, so that with two 1GB GPUs, each GPU can process a different 1GB of a 2GB scene? Or does the chip not need RAM to split the scene into parts that each GPU can process without knowing about the others? Or are we again limited to the GPU with the least RAM? Reply
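The mirrored-memory point is easy to put in numbers. A back-of-the-envelope comparison (my own framing of the two scenarios above, not anything Lucid has confirmed):

```python
def usable_vram_mirrored(card_vram_mb):
    # SLI/Crossfire-style: every card holds a full copy of the scene,
    # so the smallest card sets the ceiling.
    return min(card_vram_mb)

def usable_vram_split(card_vram_mb):
    # Hypothetical Hydra-style: each card holds only its own share,
    # so capacities add up.
    return sum(card_vram_mb)

cards = [1024, 1024]                      # two 1GB cards
mirrored = usable_vram_mirrored(cards)    # 1024 MB scene ceiling
split = usable_vram_split(cards)          # 2048 MB scene ceiling
```

If Hydra really does carve the scene up per GPU, the second model is the interesting one; if each card still needs all the textures locally, we are back to the first.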

So basically, a 3GB workload is given to the Hydra 100. The Hydra 100 splits the work and distributes the segments: two 512MB segments with low processing demands go to two 512MB mid-range GPUs, and two 1GB segments go to two GeForce GTX 280s, with the chip staying tightly coupled to the GPUs' RAM to know how it all fits together. Reply

The Lucid Hydra 100 is just a PCIe switch plus an accelerator chip for software that sits BEFORE the graphics driver. It doesn't mess around with the low-level work of talking directly to the hardware; the CPU in the Hydra 100 is just a 225MHz 32-bit processor with 16KB of cache. The load balancing happens before the graphics cards' drivers, and each card just renders some of the DX/OGL calls. It also detects the power of the cards, so it won't send as much work to a card if it senses that card is being overloaded. The Hydra software won't load or store textures, and it doesn't mess with communication between the cards; as said, it just directs work from the application to the drivers (and thus the GPUs). Textures and the like don't pass through the Hydra. Reply
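The "detects the power of the cards" part suggests some kind of weighted split rather than an even one. A minimal sketch of that idea, assuming relative performance scores are already known (all names and numbers here are hypothetical, not Lucid's actual scheme):

```python
def split_workload(num_calls, gpu_scores):
    """Divide num_calls draw calls across GPUs in proportion to a
    relative performance score, giving any remainder to the fastest."""
    total = sum(gpu_scores.values())
    shares = {g: (num_calls * s) // total for g, s in gpu_scores.items()}
    leftover = num_calls - sum(shares.values())
    fastest = max(gpu_scores, key=gpu_scores.get)
    shares[fastest] += leftover
    return shares

# A GTX 280 paired with a slower mid-range card, rated 2:1 (made-up ratio):
plan = split_workload(300, {"gtx280": 2, "mid_range": 1})
```

With a 2:1 rating, the faster card gets two thirds of the calls, which is the kind of asymmetric split you'd need for mixed-card setups not to be dragged down to the slowest card's pace.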

So, if the Hydra intercepts the calls before the driver and then decides the split based on objects, my guess is the improvement may not be linear... because it will mess up certain bandwidth-saving tricks the cards use, like Z-occlusion culling, won't it? Since at any given point a graphics card is not seeing the entire scene, only a few objects within it, it will not know whether the object it is rendering is partially or fully obstructed by another object.
Also, I wonder how it will handle pixel shader routines, because some shading techniques tend to "blend" objects together. Reply
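The occlusion worry can be illustrated with a toy depth test: a card can only skip a hidden object if the occluder is in the subset of the scene that card was given. This is a pure illustration of the concern, not Lucid's actual behavior:

```python
def must_render(obj, visible_scene):
    """Toy 1D occlusion test: skip obj only if some other object in
    visible_scene sits in front of it at the same screen position."""
    return not any(o["x"] == obj["x"] and o["depth"] < obj["depth"]
                   for o in visible_scene if o is not obj)

wall = {"name": "wall", "x": 0, "depth": 1.0}
crate = {"name": "crate", "x": 0, "depth": 5.0}   # hidden behind the wall

full_scene = [wall, crate]
# With the whole scene visible, the crate can be culled:
culled = not must_render(crate, full_scene)       # True
# But if the wall went to a different GPU, the crate gets drawn anyway:
wasted = must_render(crate, [crate])              # True
```

So an object-level split could, in the worst case, make each card do work the other card's objects would have let it skip; whether Lucid's software accounts for this is exactly the open question.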

Well, I'm still wondering about all this myself; I guess the software has to know all of that. It's not clear exactly how it works from their own site, and the PCPer article hints that the software isn't really ready: they only have DX9 support right now. However, they also hint that the software will know how to distribute the scenes and combine them, so I guess they need drivers that know how to handle pretty much every game. It could work great, but we will have to wait until next year to see how it does on the commercial market. Reply

Having a separate box with its own power supply (or supplies) is ideal. That way, if you want to add two or three more GPUs to your Hydra system, you don't have to rip apart your computer and put in a different motherboard and power supply. I imagine this system will probably come with its own mainboard and power supply, with several separate PCIe x16 slots for scalability. Time to put that external PCI Express specification to good use! Reply

There is next to no latency, according to Lucid. The problem is that Vista doesn't allow two graphics card drivers to run at the same time, so you cannot mix ATI and Nvidia cards in a Hydra setup until Microsoft fixes that, if that's even possible.

The catch is how much the Hydra is going to cost. If you look at DailyTech's preview, you will see how the graphics are rendered: it splits up the scene before it ever hits the graphics driver, so there are none of the latency or bandwidth issues of AFR.

The Lucid chip recognizes the instruction pattern, i.e. rendering triangles, textures, etc. What about recognizing CUDA or other computational applications designed to run on graphics cards? If it can't split those instructions effectively, then Nvidia's idea of heavy computation done on multiple GPUs is compromised, and this is exactly what Intel wants :), which is why they fund the research at Lucid. Intel needs to wash away the competition in heavy computing and eventually carve out a dominant market space for Larrabee, i.e. multiple x86 Intel cores. Reply

Given Intel's direction with Larrabee, where only the compute is in hardware and DX and OpenGL are implemented in software, I think they have this in mind. Gaming graphics are always a useful showcase, but you can bet they want GPGPU sales as well. Reply

There has to be some catch. There's no way ATI or Nvidia, with their respective R&D budgets, wouldn't have implemented this if it works as promised. I'm sure both companies have put many times more money into researching this sort of thing than this small company no one has heard of. Reply

How much is spent on R&D has nothing to do with it. Large corporations like NVIDIA and ATI frequently have trouble thinking outside the box (in other words, beyond "how can we get people to buy two of our cards?").

I assume you are talking about the scalability of this product versus current Crossfire/SLI configurations. Both ATI and NVIDIA are unlikely to think of placing something between the graphics cards and the rest of the computer. Lots of new ideas come from the little guys; if an idea is good and plausible enough, they can become a major player in their field overnight. The best thing about this for all the companies involved? The video card manufacturers still sell their cards, and Lucid won't have any real competition if this plays out and proves true. I am skeptical, but I don't think NVIDIA or ATI would have thought this far outside the box. Who knows; only time will tell. Reply

I wouldn't be surprised to hear that ATI/Nvidia actually have the expertise to do this but still prefer Crossfire/SLI because it locks consumers into their format. Perhaps ATI is actually performing similar logic on their multi-GPU boards (their dual-GPU boards do perform quite well, after all), and it was only a matter of time before the technology migrated to one of AMD's platforms.

This is probably a stupid question, but will this chip be designed to work with a specific API? For example, will there be a "Hydra" chip for every new DirectX release? Reply

You know, the reason NVidia and ATI aren't too enthused about this idea is that once it's implemented with a wide bus, you wouldn't have to buy new architecture; you could just take old architecture and scale it. Of course, newer architectures use less energy overall, but it won't be long before energy use is so low that it becomes insignificant.

In other words, this could REALLY hurt sales of new graphics cards. ATI and NVidia may soon have to shift towards becoming motherboard manufacturers.

What's strange is that technology companies generally have a broader view of the future when it comes to drumming up ideas to make money, while manufacturers are somewhat more limited in what they can innovate on. The chip designers walk a shakier tightrope, perhaps, because if they fail to perform or innovate, they die. On the other hand, manufacturers have a harder time dealing with economic fluctuations.

This little dazzler could change all that, forcing manufacturers to become even stronger innovators while making the high-end GPU market a bit more obsolete. If you have three solid chip designs that use very little power and scale linearly, what wide demand would there be for new tech? Just add another GPU; forget buying a new, expensive one with the cost of recent R&D figured in. Reply

Yep. After all, why would you pay $600 for the hot new gaming graphics card when you can drop in another $250 card and get greater performance for less than the new card costs? Some people would be willing to spend the money, sure, but there does come a point where buying big yields increasingly less performance per dollar (think "Extreme Edition"). Reply
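That trade-off is easy to put in numbers. A quick comparison with made-up prices and frame-rate gains, just to show the performance-per-dollar logic:

```python
def perf_per_dollar(fps_gain, price):
    """Frames per second gained per dollar spent on an upgrade."""
    return fps_gain / price

# Hypothetical upgrade paths from a single mid-range card:
new_flagship = perf_per_dollar(35, 600)   # +35 fps for a $600 flagship
second_card = perf_per_dollar(30, 250)    # +30 fps for a $250 second card
```

Under these made-up figures the second card delivers roughly double the performance per dollar, which is exactly why near-linear multi-GPU scaling would pressure flagship pricing.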