GTC 2010 is the place to be on September 20-23. What should you expect? Four amazing days of GPU computing sessions, advanced technology tutorials, inspiring keynotes and collaborative networking, right in the heart of Silicon Valley. This is a one-of-a-kind opportunity to meet colleagues, peers, researchers, entrepreneurs and investors from around the world.

Here's what industry analyst Rob Enderle is saying about it:

"The GPU Technology Conference will have sessions covering advancements in artificial intelligence driven automobiles and robotics, because GPU computing is wonderful for AI. It will have sessions on advances in medical and modeling… and there will be examples of projects ranging from mapping the weather to exploring outer space that wouldn't have been completed had it not been for the introduction of GPU computing."

NVIDIA Japan recently held a CUDA class in Tokyo. This event was especially noteworthy because it was for high school students! The participating students had both an interest in GPU computing and a programming background. The distinguished guest speaker was Professor Takayuki Aoki of Tokyo Tech. The class was taught by Mr. Kei Tagwa of Fixstars. Steve Furney-Howe, Steven Zhang and Masaaki Sawai presented on behalf of NVIDIA.

Professor Aoki commented: "Supercomputers in the future will be powered by GPUs without any doubt… Younger people have already started using GPUs, thus they have a huge advantage." The class was sponsored by NVIDIA partners Fixstars, Acer, ELSA, Dell, Unitcom, DOSPARA and Mousecomputer.

CAPS of Rennes, France, is offering a 3-day CUDA training class in late November. Participants will learn about the CUDA C programming model and how to handle multi-GPU applications. The training includes a hands-on lab. See: www.caps-entreprise.com
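For readers new to the CUDA C programming model that courses like this cover, the canonical first exercise is a vector addition: the host allocates device memory, copies data over, launches a grid of threads where each thread handles one element, and copies the result back. The sketch below is a generic illustration of that pattern, not material from the CAPS curriculum.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                    // guard: grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    // Host buffers with some test data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device buffers, host-to-device copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Round the block count up so every element is covered.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);   // 10 + 20 = 30

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Multi-GPU versions of this pattern, also on the CAPS agenda, typically split the array across devices selected with cudaSetDevice.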

SciComp of Austin, Texas, provides scientific computing solutions to the financial markets. Their Monte Carlo GPU code generation capabilities have been extended in the latest release of SciFinance, which supports CUDA 3.0 and the NVIDIA Tesla 20-Series. SciComp reports that "refinements have led to a further 30-50% performance increase in already very fast multi-factor Monte Carlo models." See SciComp's latest newsletter: http://www.scicomp.com/news/Newsletter_08_10?=source=nwsl#7
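Monte Carlo pricing maps well to GPUs because each simulated path is independent. SciFinance generates its pricing code automatically, so the following is not SciComp's code; it is just a toy sketch of the general shape of a GPU Monte Carlo simulation (estimating pi), with each thread drawing its own random samples via the CURAND device API and the host reducing the per-thread counts.

```cuda
#include <cstdio>
#include <cstdlib>
#include <curand_kernel.h>

// Each thread draws samplesPerThread points in the unit square and counts
// how many land inside the quarter circle: pi ~= 4 * inside / total.
__global__ void monteCarloPi(unsigned long long seed, int samplesPerThread,
                             unsigned int *counts) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    curandState state;
    curand_init(seed, tid, 0, &state);   // independent stream per thread

    unsigned int inside = 0;
    for (int i = 0; i < samplesPerThread; ++i) {
        float x = curand_uniform(&state);
        float y = curand_uniform(&state);
        if (x * x + y * y <= 1.0f) ++inside;
    }
    counts[tid] = inside;
}

int main() {
    const int threads = 256, blocks = 64, samplesPerThread = 4096;
    const int nThreads = threads * blocks;

    unsigned int *d_counts;
    cudaMalloc(&d_counts, nThreads * sizeof(unsigned int));
    monteCarloPi<<<blocks, threads>>>(1234ULL, samplesPerThread, d_counts);

    unsigned int *h_counts =
        (unsigned int *)malloc(nThreads * sizeof(unsigned int));
    cudaMemcpy(h_counts, d_counts, nThreads * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);

    // Final reduction on the host.
    unsigned long long inside = 0;
    for (int i = 0; i < nThreads; ++i) inside += h_counts[i];
    double pi = 4.0 * (double)inside / ((double)nThreads * samplesPerThread);
    printf("pi estimate: %f\n", pi);

    cudaFree(d_counts);
    free(h_counts);
    return 0;
}
```

A production pricer replaces the quarter-circle test with path simulation of the underlying factors and a payoff evaluation, but the thread-per-path structure is the same.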

The gpucomputing.net website is a central site for GPU computing research. The organizers have started a new initiative called the GPU Computing Research Forum, which will offer frequent Webex presentations (with live Q&A) from GPU researchers, users and vendors. The first session is on Sept. 15 and features NVIDIA's David Luebke. If you're interested in presenting virtually at this Forum (to introduce your lab, describe your project, issue a call for solutions, etc.), contact Laurie Talkington at talkngtn@ad.uiuc.edu.

CUDA is NVIDIA’s parallel computing architecture. NVIDIA provides a complete toolkit for programming on the CUDA architecture, supporting standard computing languages such as C, C++ and Fortran as well as APIs such as OpenCL and DirectCompute.
Send comments and suggestions to: cuda_week_in_review@nvidia.com