Posted by timothy on Sunday July 27, 2008 @03:01AM
from the many-many-little-dots dept.

lindik writes "As part of their research efforts aimed at building real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' composed of eight GeForce 9800 GX2s donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies like IBM's Cell Broadband Engine processor (included in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."

We should PAY ATI to use nVidia's drivers. I learned this on the Radeon 9800s. Solid, well-performing card with fairly good 3D performance; the drivers were utter and complete garbage. They used more memory and caused random crashes. I had to reinstall XP after I sold the card (and after I had re-installed XP twice before to fix the 'feature') just to get rid of .NET 2.0. Got a GeForce 4 Ti to replace it, and was able to put a fan right over the GPU. The computer went months without crashing, no more blue screens (AMD 1.6 GHz dual). If I ever see the word 'Catalyst' I read it as 'crap_is_this_sys'.

I now have a pair of nVidia 7600GTs on a CrossFire motherboard (yeah, it should have ATIs on it), but with the driver hack I can play a whole weekend with no problems. I can't remember this new box (AMD 2.0 GHz) ever requiring a reboot except for Windows Updates. I basically bought the rig to play Far Cry, and it's great on Far Cry. I can't wait to get Crysis.

"I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating."

Well remembered? Perhaps... but I wouldn't sing their praises just yet. Advances in memory are critically necessary to keep the pace of computational speed up. The big elephants in the room are heat, memory bandwidth, and latency. Part of the reason this round of GPUs was less impressive is that scaling memory bandwidth linearly will sooner or later stop paying off the same way; at some point the geometry of information comes into play and the law of diminishing returns sets in for any given architecture or memory type, at least at a price people can afford to pay.

Next up, 32-bit addressing is starting to be a real pain in the ass. The move to 64-bit operating systems is critical if we expect GPUs to keep increasing their memory (1GB+ of local memory on a card now).

Supreme Commander was one of the few games to hit the 32-bit address-space limit, and more and more games will definitely do so in the future. I know you were talking about other areas of computing, but without the games market I don't see any serious reason for a regular person to upgrade their computer's video at all. The many who donate to distributed computing did so as an afterthought, not as the main reason they bought the card. As for the wider non-gaming market, we'll have to see whether or not GPU computing is going to be more widely adopted.

Lastly, let us not forget that one of the primary reasons GPU computing is so fast is memory bandwidth; delays in better memory technology will have big impacts on GPU performance. As we've seen with this generation of GPUs, Nvidia's lack of GDDR5 and of a smaller process for the GTX 280 hurt them a lot.

The GPU architecture has been progressively moving toward a more "general" system with every generation. Originally the processing elements in a GPU could only write to a single fixed memory location each; now the hardware supports scattered writes, for example.
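To make the gather-vs-scatter distinction concrete, here is a minimal sketch in plain C (function and variable names are made up for illustration). On a GPU each loop iteration would be one thread: older GPUs could only "gather" (read from anywhere, write to a fixed per-element output slot), while newer hardware can also "scatter" (write to a computed, arbitrary address).

```c
#include <assert.h>

/* Gather: each element i reads from an arbitrary index but writes to
 * its own fixed slot dst[i] -- the only pattern early GPUs allowed. */
void gather(const float *src, const int *idx, float *dst, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[idx[i]];      /* read anywhere, write in order */
}

/* Scatter: each element i writes to a computed address dst[idx[i]] --
 * the capability added in later GPU generations. */
void scatter(const float *src, const int *idx, float *dst, int n) {
    for (int i = 0; i < n; i++)
        dst[idx[i]] = src[i];      /* write to an arbitrary address */
}
```

Note that scatter also raises the possibility of two threads writing the same address, which is part of why it took longer for GPU hardware to support it.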

As such, I think the GPGPU method of casting algorithms into GPU APIs (CUDA et al.) is going to die a quick death once Larrabee comes out and people can simply run their threaded code on these fine-grained co-processors.