
lindik writes "As part of their research efforts aimed at building real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' composed of 8 x 9800GX2s donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies like IBM's Cell Broadband Engine processor (included in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."

I don't see that at all. There is, at least in the second shot, an incredible difference in the mid-foreground detail. The second shot shows it off the best, and the background is really 3D-looking, whereas the other shots look like a Bollywood set. I'm loading and stripping (vLite) Vista next weekend, so I'll have a look at DX10, as well as hacking DX10 to work under Vista.

I would say it's a tossup too, because the realism is a give-and-take with the speed. No realtime ray tracing there, but it's more than detail. Call it grokking: looking at the whole picture's realism, the color, depth of field, camera tricks, detail. It was fun grabbing all the shots, slideshowing through them and not looking at the source until I had analysed the pictures. Most of it is quite striking, and I can't wait to get my hands on Crysis now. I have a box running XP DX9, XP DX10, and Vista DX

I - The shaders offered by the two APIs are different (shader model 3 vs 4). None of the DX9 screenshots does self-shadowing. This is especially visible on the rocks (but even in action on the planks of the fences). So there *are* additional subtleties available under Vista.

II - The driver architecture is much more complex in Vista, because it is built to enable cooperation between several separate processes all using the graphics hardware at the same time. Even if Vista automatically disables Aero when games are running full-screen (and thus the game is the only process accessing the graphics card), the additional layers of abstraction have an impact on performance. It is especially visible at low quality settings, where the software overhead is more noticeable.

It's quite noticeable on Page 2 [gamespot.com]; see the cliffs in the last shot: Vista has shadows where XP has none. Not terribly exciting though, especially given the additional FPS impact; woo, a few shadows ;)

Some things you probably have to see in motion, though. E.g. BioShock uses more dynamic water with DX10 (which has better vertex shaders or so?), and it responds more to objects moving in it.

That's LARGE FANS. There are probably about 3 fans per actual GPU: one on the card, one on the box, and one on the power supply/etc...

You could just as easily bathe the thing in cooling oil. Although I am not a fan of water cooling, I can't see it as being any more unreliable than fans; done well, water cooling will outlast the machine.

No it isn't. Duke Nukem Forever will be released when a powerful enough computer is assembled. The game will just manifest itself in the machine once powered up. But you have to have downloaded 20TB of porn and covered the internals with a thin layer of cigar smoke first.

I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating.

This is a classic moment when tech takes the branch that was unexpected. GPGPU computing [gpgpu.org] will soon reach ubiquity, but for right now it's the fledgling being grown in the wild.

Of course I'm not earmarking this one particular project as the start point, but this year has 'GPU this' and 'GPGPU that' start-up events all over it. Some even said in 2007 that it would be a buzzword in 08 [theinquirer.net].

And of course there's nothing like new tech to bring out [intel.com] a naysayer.

Folding@home [stanford.edu] released their second generation [stanford.edu] GPU client in April 08, while retiring the GPU1 core in June of this year.

I know I enjoy throwing spare GPU cycles at a distributed cause, and whenever I catch sight of the icon for the GPU [stanford.edu] client it brings back the nostalgia of distributed clients [wikipedia.org] of the past. [Near the bottom].

Oh yes, I realize places like the infamous Sandia will be using the GPU to rev up atom splitting. But maybe if they keep their bombs IN the GPU it'll lessen the chances of seeing rampant proliferation again.

Huh? GPGPU was a buzzword in 2005 (at least, that's when I first saw large numbers of books about the subject appearing). Now it's pretty much expected - GPUs are the cheapest stream-vector processors on the market at the moment.

"I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating."

Well remembered? Perhaps... but I wouldn't sing their praises just yet. Advances in memory are critically necessary to keep the pace of computational speed up. The big elephants in the room are heat, memory bandwidth, and latency. Part of the reason the GPUs this time round were not as impressive is that linearly increasing memory bandwidth is starting not to have the same effect.

I think it's great to see that we can finally start using GPUs to do things beyond gaming, but I also don't see it as the Great Second Coming of high-speed computing.
GPUs are designed to tackle only one kind of problem, and a highly parallel problem at that. If you are a researcher and you can see huge gains in performance by using GPUs, then great! But GPUs are hardly general purpose, and will simply not address most of our computing needs.
I see the rise of GPUs as similar to computing in the 60's(?)

Crysis ran "well" for me at Medium settings on an 8800 GTX and a 2.6GHz dual core at my monitor's native resolution of 1680x1050. (Using DirectX 10 on Vista!)

But, it ran everything on "zomg high amazing ponies!" when I connected it to my lower-resolution 720p television.

(I love doing that to Xbox fanboys - "You think Team Fortress 2 looks "amazing" on your little toy? Come over here and see it played at 60fps with more antialiasing than you could fit in the 12 dimensions of an X-hypercube, let alone an X-b

AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same? What do they have to hide?

In terms of actually being totally non-proprietary, Nvidia has to worry about ATI stealing their drivers (which they would, or at least "borrow" a lot from them), since Nvidia generally has that as their trump card over ATI no matter who has the better hardware. On the other hand, Nvidia has no interest in "borrowing" from ATI's drivers. ATI knows that, and that's why their

Tom's Hardware [tomshardware.com] did a pretty good job detailing the ups and downs of ATI and Nvidia with many of the major games of last year (BioShock, World in Conflict, etc). Overall, both companies fared well, but they reported quite a few crashes due to the ATI drivers. I've had an ATI card before, the 9800XT, back in 2003-04 when Nvidia was producing their horrible 5xxx series that was totally worthless. The 9800XT was a good card for everything (gaming, graphical apps, etc). Sorry, I should have cited sources. Wasn't trolling on purpose, though I know that writing anything positive about Nvidia on Slashdot is borderline blasphemy.

You are claiming ATI would outright steal from Nvidia. Whether one driver is better than the other doesn't matter; I want you to back up your claim that they would do something like that.

Would you like me to call up ATI and ask them?

ATI Customer Service: What can I help you with today?
Me: If Nvidia made their drivers OSS, would you borrow from them?
ATI Customer Service: I'm sorry sir, we cannot answer that at this time. Is there anything else I can help you with?
Me: Nope, thanks.

Also, isn't the concept of open source to share information to better the overall technology? If Nvidia feels that giving out their driver code will give ATI better video cards than their own, it would be insane for Nvidia to release it (which implies that ATI cards are more or less hardware-equivalent to Nvidia's). It may improve the overall tech, but only in favor of ATI (assuming ATI's own drivers do not improve from their own advancements unrelated to what they could potentially gain from Nvidia's). H

We should PAY ATI to use nVidia's drivers. I learned this on the Radeon 9800s. Solid, well-performing card, fairly good 3D performance. Drivers: utter and complete garbage. Used more memory, caused random crashes. I had to reinstall XP, after I sold the card (and after I had re-installed XP twice before to fix the 'feature'), to get rid of .NET 2.0. Got a GeForce 4 Ti to replace it. Was able to put a fan right over the GPU. Computer went MONTHS without crashing, no more blue screens. (AMD 1.6 GHz dual). If I eve

The parts specific to the windowing system (including context switching / multiplexing).

The parts specific to the 3D API.

The parts specific to the hardware.

ATi could conceivably steal parts of the first two from nVidia, but it's doubtful that they could steal anything from the last part, since their hardware designs are sufficiently different to make this hard.

The problem nVidia are going to have is that the new Gallium architecture means that the first two parts are abstracted away and reusable, as is the fall-back path (which emulates functionality any specific GPU might be missing). This means that Intel and AMD both get to benefit from the other company (and random hippyware developers and other GPU manufacturers / users) improving the generic components, while nVidia are stuck developing their own entire alternative to DRI, DRM, Gallium, and Mesa. The upshot is that Intel and AMD can spend a tiny fraction of the time (and, thus, money) that nVidia do on developing drivers. In the long run, this means either smaller profits or more expensive cards for nVidia, and more bugs in nVidia drivers (since they don't have the same real-world coverage testing).

Now, if you're talking just about specs, then you're just plain trolling. Intel doesn't lose anything to AMD by releasing the specs for the Core 2 in a 3000 page PDF, because the specs just give you the input-output semantics, they don't give you any implementation details. Anyone with a little bit of VLSI experience could make an x86 chip, but making one that gives good performance and good performance-per-Watt is a lot harder. Similarly, the specs for an nVidia card would let anyone make a clone, but they'd have to spend a lot of time and effort optimising their design to get anywhere close to the performance that nVidia get.

AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same?

Nvidia has to worry about ATI stealing their drivers {...} ATI knows that, and that's why their drivers are open.

We are not speaking about releasing the source code of current drivers. In fact, ATI/AMD's fglrx *IS NOT* open. At all. What is open are 2 *separate* driver projects, which are developed using the *technical data* released by AMD.

You're confusing the situation with Intel. (They paid Tungsten Graphics to write an open source driver for i8xx/i9xx to begin with. There's no such thing as a proprietary Intel driver on Linux, only an open source driver written by TG.)

I upgraded my X800XL to an 8800GT. With Windows, I never had a problem with my X800XL and I still have not seen a problem with the 8800GT. The X800XL just worked and the 8800GT just works.

With Ubuntu, the X800XL was working nicely (open source drivers) and the 8800GT is a piece of crap. NVidia's drivers are horribly slow and a lot of users are reporting the same thing. I have an old computer with an even older GeForce 4 MX and it displays things faster.

Maybe they work for you: I find NVidia drivers quite painful, especially for non-Windows operating systems. And a 'third party open source driver' can't get the details of the NVidia API to work from, which means a huge amount of reverse engineering, especially of their proprietary OpenGL libraries, which are at the core of their enhanced features on non-Windows operating systems.

Wait, so you're telling me you have troubles with the Windows drivers too? It's a single download for the platform you're on, and next, next, done. Granted, the Linux ones have a couple more steps than that, but it's still rather trivial for most people, considering it's the most frequently used driver for 3D on Linux (besides possibly Intel).

I've had to clean up when someone trying to fix their PC and driver problems went and re-installed drivers from their media after I'd updated from NVidia's site, and monitors became completely unavailable on dual-display cards from the previously working display, and it was impossible to fix without dragging in another monitor with the other connector type and fixing things from the other display. It's compounded on systems with built-in displays and add-on graphics cards.

You laugh, but it seems like my eyes have gotten faster. I used to not care about a 60Hz refresh rate, but now I can't stand it. Looking straight ahead at a CRT monitor running 60 hertz, it looks like a rapidly flickering/shimmering mess. 70Hz is still annoying b/c my peripheral vision picks it up.

I attribute my increased sensitivity to flicker to playing FPS's.

Oh and when a decent brain-computer interface comes out I'll be getting one installed.

I know this is Slashdot, so you have probably not spent much time around females, at least not those of our species, but let me tell you, an angry one is a dangerous creature. All of them do get pissed off some of the time. You can be the greatest guy ever and sooner or later you will make a mistake. The good news is, if you are a good guy they will forgive you, but the period between your screw-up and their forgiveness can be extremely hazardous.

Do you know the human brain has about 100 billion neurons? Each neuron can be represented as a weighted average of its inputs; a typical human neuron has some 1,000 inputs and does around a hundred operations per second.

So, yes, *maybe* there could be some very smart algorithm that mimics human reasoning, but that's not how it's done in the human brain. It's raw computing power all the way.
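(Quick back-of-the-envelope on the parent's figures, just to put a number on "raw computing power": 100 billion neurons x ~1,000 inputs x ~100 operations per second is on the order of 10^16 synaptic operations per second. A trivial sketch, if you want to check the arithmetic; the figures are the parent's, not mine:)

    /* Rough estimate from the figures above: 1e11 neurons,
       ~1e3 inputs each, ~1e2 operations per second. */
    #include <stdio.h>
    int main(void) {
        double neurons = 1e11, inputs = 1e3, rate = 1e2;
        printf("~%g synaptic ops/sec\n", neurons * inputs * rate); /* ~1e16 */
        return 0;
    }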

I keep seeing all these articles about bringing more types of processing applications to the gpu, since it handles floating point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question. But how do you determine what types of problems will perform well (or are even possible to be solved) through the use of GPUs, and just how "general purpose" can you get on such specialized hardware?

As a rule of thumb, if your problem requires solving many instances of one simple subproblem, independent of each other, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is not as general-purpose as your Intel; rather, each core is optimized for solving some small problem (without optimizing for the frequent load/store/switching operations etc. that a general CPU handles quite well).

So if you see an easy parallelization of your problem, you might think of using a GPU. There are problems that are believed not to be efficiently parallelizable (Linear Programming is one such problem). Also, even if your problem can easily be made parallel, it might be tricky to benefit from a GPU, as each subroutine might be too complex.

I don't program, but my guess would be that if you can see the solution to your problem consisting of a few lines of code running on many processors and gaining anything, a GPU might be the way to go.
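To make that concrete, here's a minimal sketch of the "many independent instances of one simple subproblem" shape described above, written as a CUDA-style kernel (the names and launch sizes are made up for illustration): every element is computed independently of every other, which is exactly what a GPU eats up.

    // Embarrassingly parallel: each thread computes one element,
    // and no element depends on any other.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Host side (error checking omitted):
    //   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);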

I think you did a good job explaining; one point though: the sub-problems need not be independent.

Many problems such as weather prediction use finite element analysis with a "clock tick" to synchronise the results of the sub-problems. The sub-problems themselves are cubes representing X cubic kilometers of the atmosphere/surface, and each sub-problem depends on the state of its immediate neighbours. The accuracy of the results depends on the resolution of the clock tick, the volume represented by the sub-pr
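A rough sketch of that "clock tick" pattern, under assumptions of my own (a toy 1-D grid where each cell averages its neighbours; the update rule and sizes are invented for illustration): each tick is one kernel launch, and the buffer swap is what synchronises the sub-problems.

    // Each cell reads its neighbours from the previous tick ('in')
    // and writes its new state to 'out'.
    __global__ void step(int n, const float *in, float *out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
    }

    // Host loop: one launch per tick, then swap the buffers.
    //   for (int t = 0; t < ticks; ++t) {
    //       step<<<blocks, threadsPerBlock>>>(n, d_in, d_out);
    //       cudaDeviceSynchronize();   // the "clock tick"
    //       float *tmp = d_in; d_in = d_out; d_out = tmp;
    //   }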

I have been using my own GPU to do this very same thing, by automatically converting images to vertex format and using the GPU to scale, shade, etc.; this way I can do shape recognition by simply measuring the closest match in the frame buffer. There are more complex ways to use the GPU to do pseudo-computation in parallel. I still think that a commonly available CAM, or near-CAM, would speed up neural-like computations by being an essentially completely parallel process. It would be better to allow more people to experiment with the methods, because the greatest gain and cost is the software itself; specialized hardware for a single purpose allows better profit but limits innovation.

A GPU executes shader programs. These are typically kernels - small programs that are run repeatedly on a lot of inputs (e.g. a vertex shader kernel would run on every vertex in a scene, a pixel shader on each pixel). You can typically run several kernels in parallel (up to around 16 I think, but I've not been paying attention to the latest generation, so it might be more or less). Within each kernel, you have a simple instruction set designed to be efficient on pixels and vertexes. These are both four-
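(The comment above gets cut off, but the four-component point is just that pixels (RGBA) and vertices (XYZW) are handled as single 4-wide units. A hypothetical CUDA-flavoured illustration of mine, not the actual shader instruction set:)

    // A pixel as a 4-component unit (RGBA); scale the colour, keep alpha.
    __global__ void brighten(int n, float4 *pixels, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float4 p = pixels[i];
            pixels[i] = make_float4(p.x * gain, p.y * gain, p.z * gain, p.w);
        }
    }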

The GPU architecture has been progressively moving to a more "general" system with every generation. Originally the processing elements in the GPU could only write to one memory location, now the hardware supports scattered writes, for example.

As such, I think the GPGPU method of casting algorithms into the GPU APIs (CUDA et al.) is going to die a quick death once Larrabee comes out and people can simply run their threaded codes on these finely-grained co-processors.

1) Your task has to be highly parallel. You really need something that can be made parallel to a more or less infinite level. Current GPUs have hundreds of parallel shader paths (which are what you use for GPGPU). So you have to have a problem that can be broken down in to a bunch of small parallel processes.

2) Your task needs to be single precision floating point. The latest nVidia GPUs do support double precision, but they are the only ones, and they take a major, major speed penalty (way over 50%) to do

I'm still eager to see PhysX running on my dual 8800M GTX laptop. I've run all the drivers from 177.35 up and I'm running the 8.06.12 PhysX drivers as required.
Apparently it's just the mobile versions :(

Maybe so, but why not build just two machines? The only reason I can think of is that this sounds cooler. Maybe they save a bit of money by having a single cooling solution/power supply, but I don't see it. Strangely enough, the machine doesn't seem to be symmetric. They've probably put one motherboard upside down, otherwise you would have to split the case. Let's hope the magic doesn't leak out.

On June 30 of this year, The New Yorker magazine published a fascinating, if at moments disturbing article entitled The Itch [newyorker.com]. The article discusses, among other things, the human mind's perception of the reality of its environment based on the various nervous inputs it has, vision included. Apparently this is an oft debated topic among the scientific community, but it was new information to me.

One of the things I found intriguing was the note that the bulk (80%) of the neural interconnections going into t

I looked through each of the TFAs linked in the story, and I don't see any technical details on this system. Whereas when the FASTRA people at Univ. of Antwerp put together their 4 x 9800-GX2 system for CUDA, they published all the nitty gritty down to specific parts, etc. The pictures are interesting, but not enough.

CAE's Tropos image generators use 17 GPUs per channel in a commercially available package. Each image channel (there are usually at least 3 in a flight simulator) uses 4 quad-GPU Radeon 8500 cards in addition to the onboard GPU which is only used for the operator interface. I've been working on these things for a couple of years now.