NVIDIA Corp. today introduced its next generation CUDA GPU architecture, codenamed “Fermi”. An entirely new ground-up design, the “Fermi” architecture is the foundation for the world’s first computational graphics processing units (GPUs), delivering breakthroughs in both graphics and GPU computing.

“NVIDIA and the Fermi team have taken a giant step towards making GPUs attractive for a broader class of programs,” said Dave Patterson, director of the Parallel Computing Research Laboratory at U.C. Berkeley and co-author of Computer Architecture: A Quantitative Approach. “I believe history will record Fermi as a significant milestone.”

Presented at the company’s inaugural GPU Technology Conference in San Jose, California, “Fermi” delivers a feature set that accelerates performance on a wider array of computational applications than ever before. Joining NVIDIA’s press conference was Oak Ridge National Laboratory, which announced plans for a new supercomputer that will use NVIDIA GPUs based on the “Fermi” architecture. “Fermi” also garnered the support of leading organizations including Bloomberg, Cray, Dell, HP, IBM and Microsoft.

“It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry.”

As the foundation for NVIDIA’s family of next-generation GPUs (GeForce, Quadro and Tesla), “Fermi” features a host of new technologies that are “must-have” features for the computing space, including:

NVIDIA Parallel DataCache - the world’s first true cache hierarchy in a GPU, which speeds up algorithms such as physics solvers, ray tracing, and sparse matrix multiplication, where data addresses are not known beforehand

NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (e.g., PhysX fluid and rigid-body solvers)
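The Parallel DataCache bullet is about data-dependent memory access. A CSR sparse matrix-vector multiply makes the point concrete: the reads from the input vector follow the matrix's sparsity pattern, so their addresses cannot be known (or staged into scratchpad memory) ahead of time, which is exactly where an on-chip cache hierarchy helps. This is a minimal illustrative sketch, not code from NVIDIA's materials; all names are made up.

```cuda
#include <cuda_runtime.h>

// One thread per row of a CSR-format sparse matrix.
__global__ void spmv_csr(int n_rows,
                         const int   *row_ptr,  // row offsets, length n_rows + 1
                         const int   *col,      // column index of each nonzero
                         const float *val,      // value of each nonzero
                         const float *x,        // dense input vector
                         float       *y)        // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += val[j] * x[col[j]];  // gather at an address only known at runtime
        y[row] = sum;
    }
}
```

The gather `x[col[j]]` is the access the press release is alluding to: its locality depends entirely on the data, so a hardware cache can exploit reuse that software-managed memory cannot plan for.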
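The concurrent-kernel bullet can be sketched with CUDA streams: two independent kernels from the same context are launched into separate non-default streams, and hardware that supports concurrent kernel execution may overlap them, while earlier parts would serialize them. Kernel names below are placeholders (standing in for, say, fluid and rigid-body solvers); this is an illustrative sketch, not NVIDIA sample code.

```cuda
#include <cuda_runtime.h>

__global__ void fluid_step(float *p, int n)      { /* solver body elided */ }
__global__ void rigid_body_step(float *q, int n) { /* solver body elided */ }

void launch_both(float *d_p, float *d_q, int n)
{
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;

    // Kernels in different non-default streams have no implied ordering,
    // so a GPU with concurrent kernel execution may run them simultaneously.
    fluid_step<<<blocks, threads, 0, s1>>>(d_p, n);
    rigid_body_step<<<blocks, threads, 0, s2>>>(d_q, n);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
}
```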

Intel will and AMD does support DX11. NVIDIA must support DX11 to stay competitive in the graphics market. I doubt NVIDIA would kill GeForce to save CUDA/Tesla, although I'm certain the thought has crossed their minds.

Uh, of course nVidia will support DX11, and? I fail to see how your post relates to mine.

OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows only software will be inclined to use the DirectX variety while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.

That is true, everything else being equal. However, CUDA supports C++ and a plethora of other languages. From what I have heard, it's a simple solution to use: just drop in the libraries and go. So if you are a *insert application here* developer who does not know OpenCL and you have all FORTRAN, C, or whatever developers on your team, CUDA is tons cheaper, faster, and more convenient than OpenCL.

Now I'm always wary of proprietary stuff, but sometimes a proprietary standard blows away the open-source one in terms of actual performance and functionality. I definitely think that this is the case here.
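For what "drop in and go" looks like in practice, here is a minimal end-to-end CUDA C program: to a developer who already knows C, the kernel is ordinary C with one extra qualifier and a launch syntax. Sizes and names are made up for the sketch.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// An ordinary C function body, marked __global__ so it runs on the GPU.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float h_a[1024], h_b[1024], h_c[1024];
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %.1f\n", h_c[10]);   // 10 + 20 = 30.0

    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
    return 0;
}
```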

Well, most supercomputers run on Unix or Linux, and those don't play well with DX, so CUDA has the same chances as OpenGL/CL hardware acceleration, which is also in its infancy. At the moment Nvidia are the only GPU manufacturer going for the server/HPC environment, so I think CUDA is here to stay.

OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows only software will be inclined to use the DirectX variety while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.

CUDA is not Windows-only. OpenCL has been supported for a while now. The Linux driver supports both. Everything included in the Windows version of the driver is also included in the Linux version. nVidia is cross-platform, at least as far as the major platforms go.

I don't think that's what he meant.

I think he meant Linux types will use the open standards, while Windows types will use DX11 rather than CUDA.

Now, will it do all it claims to AND be a 5870 killer? If yes, then ATI must be getting a little po'd at having their launches spoiled.

Something tells me this thing is still several months out... AMD hard-launches a great new GPU and the best nVidia can scrounge up is a few slides and some guy from U.C. Berkeley? I don't think nVidia is spoiling it at all, nor will they. The only thing at stake here is Huang's ego when gamers and general consumers alike choose AMD, because at some point you've gotta accept that it's a graphics card, not a co-processor. You can't design one to compete against the other...

nVidia is going to face (and probably already has faced) massive technical issues on this one, only to be compounded by ridiculous TDP and a price they can't possibly turn profitable. Maybe if Larrabee were out we'd be looking at a different competitive landscape, but I think for now gamers are more interested in gaming than spending an extra $100-200 to fold proteins.

(That said, this may end up benefiting their Quadro line significantly. Those sales are way too low-volume to save them if this thing fails in the consumer market though...)

That said, that market also has much higher margins.

I see this as a direction shift from Nvidia: they're starting to look at different areas for revenue (HPC, etc.). They'll still be big in the discrete GPU market, but it won't be their sole focus. They may lose market share to ATI (and eventually Intel), but if they offset that with increased profit elsewhere then it won't matter. Indeed, they may be more stable as a company for having a more diverse business model.

I've fried plenty o' parts when not OCing. Need to start doing it again to avoid problems...

I think nVidia are a little bit behind with their mentality. They need to think of the future and invest in the same Open Source standards, or at least in universally accepted standards. Right now, it seems to me like they're trying to use their power to push their own standards, which is perfectly natural in the business world, but as they lose discrete graphics market share to ATi/AMD and eventually Intel, as gumpty predicted, they will lose the power to enforce these proprietary standards.

For Nvidia to not have a DX11 card ready in 2009 is a major fail. This card could be 4-5 months away, and I doubt even die-hard Nvidia lovers will be prepared to wait until next year while there are 5850s and 5870s around.

I think nVidia are a little bit behind with their mentality. They need to think of the future and invest in the same Open Source standards, or at least in universally accepted standards. Right now, it seems to me like they're trying to use their power to push their own standards, which is perfectly natural in the business world, but as they lose discrete graphics market share to ATi/AMD and eventually Intel, as gumpty predicted, they will lose the power to enforce these proprietary standards.

Which Open Source standard is supported by ATi/AMD or Intel and not supported by nVidia?

For Nvidia to not have a DX11 card ready in 2009 is a major fail. This card could be 4-5 months away, and I doubt even die-hard Nvidia lovers will be prepared to wait until next year while there are 5850s and 5870s around.

No, it's not, since there are no DX11 titles to play. In 3 months maybe there will be a few, and nVidia will have its cards just in time for that. The 5850 and 5870 are just an incremental upgrade. For what Fermi is promising, I would wait another year.

ATI started with high-precision stream cores back on the X1K, and now it has branched into DX11, with CUDA as the competing platform. This is all going to come down to consumers; this will be another "format war".

I am liking the offerings of the green team this round. I love the native code drop-in and the expected performance at common tasks and folding, but I will probably hate the price. ATI might have a real problem here if they don't get their ass in gear with some software to run on their hardware, and show it to be as good as or better than NV's. I for one am tired of paying either company for a card, hearing all the options, and only having a few actually made and working. I bought a high-end high-def camcorder, and a card I understood was going to handle the format and do it quickly. I still use the CPU-based software to manipulate my movies. FAIL......