Not anymore, I think. JHH mentioned he's comfortable with the supply-demand balance.

EDIT: I looked at the wrong con call and blew it. My bad. Wish we had strike-through as a format option:

No, they are still supply-limited on 28nm. NV expected that to be true and has been pleased with TSMC's efforts to mitigate that particular problem - so he is comfortable insofar as NV is where it expected to be. I think they would be happier with more supply, because there is a question as to how much money NV left on the table due to lower supply.

"Q : Could you talk about 28-nanometer supply, if that was a limitation at all this quarter, and what impact that may have on your January quarter outlook?
A : Well, the 28-nanometer yield and 28-nanometer supply situation have both improved substantially. And so we feel pretty good about the balance of supply and demand at the moment."

There will be two versions of K20:
K20: 13 SMX, 1.13 TFLOPS, 5GB
K20X: 14 SMX, 1.31 TFLOPS, 6GB

This came from a news article that was taken down immediately, but some sites have copied it:

Quote:

Nvidia and Advanced Micro Devices on Monday announced high-performance graphics chips for supercomputers.
Nvidia announced GPUs (graphics processing units) called K20 and K20X, with the latter being used in Titan, a 20-petaflop supercomputer at the U.S. Department of Energy's Oak Ridge National Laboratory. AMD announced the FirePro SM10000 graphics processor, which is targeted at high-performance computers and servers in virtualized environments.
Co-processors like GPUs are considered more powerful than CPUs for specific tasks such as scientific and math applications. The GPUs are important in providing more computing power for simulation and experimentation in research areas such as biosciences, climate, energy and space. IBM and Intel also offer accelerators for supercomputers.
Some of the world's fastest supercomputers today harness the processing power of CPUs and graphics chips for complex calculations. The Titan supercomputer pairs 18,688 Nvidia Tesla K20X GPUs with 18,688 AMD 16-core Opteron 6274 CPUs, with the GPUs handling 80 to 90 percent of the processing load. Other supercomputers that pair CPUs and GPUs include the Tianhe-1A at the National Supercomputer Center in Tianjin, China.
Nvidia has a big lead over AMD in GPUs for supercomputing, said Dan Olds, principal analyst at Gabriel Consulting Group.
Nvidia pushed parallel programming tools many years ago so coders could write applications for GPUs, Olds said. AMD has virtually no presence in the supercomputing market and needs to foster a programming environment for parallel frameworks like OpenCL to be a worthy alternative to Nvidia, Intel and other companies, Olds said.
Nvidia's K20 has 5GB of memory and delivers 1.17 teraflops of double-precision performance and 3.52 teraflops of single-precision performance. Double-precision performance is more important for supercomputing applications as it carries higher precision for floating-point calculations than single-precision calculations. The faster K20X has 6GB of memory and delivers 1.31 teraflops of double-precision performance. The K20X is three times faster than its predecessor, the Tesla M2090, which was released in the middle of last year.
The K20 products will be available in computers from companies such as Hewlett-Packard, IBM, Asus, Fujitsu, Tyan, Quanta Computer and Cray. Nvidia declined to provide pricing, saying the GPUs would be sold through server vendors.
The new chips have thousands of small processing cores that will be able to more effectively execute application code simultaneously. The Hyper-Q feature will speed up execution of legacy code through smarter scheduling of code execution.
AMD claimed that its FirePro SM10000 delivered 1.48 teraflops of peak double-precision performance. The graphics card has 6GB of memory.
The SM10000 is designed for multiple server deployments, AMD said in a slide presentation. GPUs in servers are capable of deploying virtual desktops to client devices like PCs and tablets. The GPU can accelerate graphics on the server side for full high-definition virtual desktops on client devices.
The faster processing speed of SM10000 could also help deploy virtual machines at a faster rate in computing environments, AMD said. A single graphics card will be able to deploy many virtual machines, and AMD has worked with Citrix, VMware and Microsoft to boost virtualization performance on the GPU.
AMD is also targeting the FirePro SM10000 at workstations. The company did not immediately comment on questions related to single-precision performance and pricing for the graphics card.
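The article's double-precision figures line up with GK110's published layout of 64 FP64 units per SMX, each doing one fused multiply-add (2 FLOPs) per cycle. A quick sanity check, using the commonly reported core clocks of 706 MHz (K20) and 732 MHz (K20X), which are not stated in the article itself:

```python
# Peak double-precision throughput for GK110-based Tesla parts.
# GK110 has 64 FP64 units per SMX; each does one FMA (2 FLOPs) per cycle.
FP64_UNITS_PER_SMX = 64
FLOPS_PER_FMA = 2

def peak_dp_tflops(smx_count, clock_mhz):
    """Peak DP TFLOPS = SMX count * FP64 units * 2 FLOPs * clock."""
    return smx_count * FP64_UNITS_PER_SMX * FLOPS_PER_FMA * clock_mhz * 1e6 / 1e12

# Clock figures are the commonly reported values (an assumption here).
print(f"K20  (13 SMX @ 706 MHz): {peak_dp_tflops(13, 706):.2f} TFLOPS")  # ~1.17
print(f"K20X (14 SMX @ 732 MHz): {peak_dp_tflops(14, 732):.2f} TFLOPS")  # ~1.31
```

Both results match the article's 1.17 and 1.31 TFLOPS numbers, which suggests the 13/14 SMX split is consistent with the announced specs.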

What do yields have to do with the binning process? nVidia is limited by power, not by size. They started the ramping process a few months ago. They must test all chips and check whether they are within the power envelope and reliable enough.

If K20 and K20X have the same TDP, why not use K20X from the beginning?

You have process yields and binning yields. You might have 100 functional dies on a wafer out of a possible 200 (50% process yield; the other 100 dies are completely dead and unusable), but not all of those 100 work with 14 or 15 SMX enabled at a given frequency. It is normal that larger dies have an increased defect probability, so by disabling 1 or 2 defective SMX you can salvage a chip that you would otherwise have to throw away.

I believe you always get more partially defective chips than completely "healthy" chips from a wafer. Simple probability math.
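That "simple probability math" can be made concrete with a toy Poisson defect model. The die area is roughly GK110-sized, but the defect density here is purely illustrative, not actual TSMC 28nm data:

```python
import math

# Toy Poisson yield model for a large die. With an average of `lam`
# defects per die, the fraction of dies with exactly k defects is
# exp(-lam) * lam^k / k!. All numbers are illustrative assumptions.
die_area_cm2 = 5.5      # ~550 mm^2, roughly GK110-sized (assumption)
defect_density = 0.3    # defects per cm^2 -- made up for illustration
lam = die_area_cm2 * defect_density

def p_defects(k):
    """Poisson probability of exactly k defects on one die."""
    return math.exp(-lam) * lam**k / math.factorial(k)

p_perfect = p_defects(0)      # candidate for a full 15-SMX part
p_partial = 1 - p_perfect     # at least one defect somewhere on the die
print(f"fully clean dies:     {p_perfect:.1%}")
print(f"dies with >=1 defect: {p_partial:.1%}")
```

Even with this modest defect density, only about a fifth of dies come out clean, so partially defective dies dominate, which is exactly why salvaging them via disabled SMX matters so much for a die this large. (Not every defect conveniently lands in an SMX, of course; some kill the die outright.)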

If K20 and K20X have the same TDP, why not use K20X from the beginning?

Because there are not enough chips after the ramping process. Why do you think the mobile versions of the mid- and high-end chips come out much later? They are limited by power. nVidia and AMD bin enough chips before they sell them to the OEMs.

Quote:

You have process yields and binning yields. You might have 100 functional dies on a wafer out of a possible 200 (50% process yield; the other 100 dies are completely dead and unusable), but not all of those 100 work with 14 or 15 SMX enabled at a given frequency. It is normal that larger dies have an increased defect probability, so by disabling 1 or 2 defective SMX you can salvage a chip that you would otherwise have to throw away.

I believe you always get more partially defective chips than completely "healthy" chips from a wafer. Simple probability math.

nVidia announced the M2090 7 months after the GTX 580, with less performance.

The M2050 came to market 3 months after the GTX 480 and had only 69% of its compute performance.

So do you really believe that in both cases the yields were so bad for the 15 SM (GF100) and 16 SM (GF110) chips that nVidia needed to sell them in the GeForce market first?

K20X may be a tiny bit above 225W TDP, but no way (capital LOL) does it have a 300W TDP with that very small bump in clock speed. Keep in mind core clocks affect TDP more than the number of functional units active when comparing the same chip.

I am sticking to my original prediction that GeForce will get a 14 SMX GK110 as its initial flagship (GTX 780), followed by a 15 SMX version in late summer/early fall (GTX 785). The 13 SMX version will be the GTX 770. I wonder if nVidia will go with an asymmetric memory configuration if the 13 SMX version has a 320-bit memory interface.
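The claim that clocks drive TDP harder than unit count follows from dynamic power scaling roughly as units × V² × f: if voltage is flat, power scales linearly with clock, but if voltage has to rise with clock (typical near the top of the V/f curve), it scales closer to f³. A back-of-envelope sketch, using the commonly reported 706/732 MHz clocks and a deliberately crude V ∝ f assumption:

```python
# Rough dynamic-power sensitivity: P ~ N_units * V^2 * f.
# The V ~ f worst case turns the clock term into f^3. Illustrative only.
k20_smx, k20x_smx = 13, 14
k20_mhz, k20x_mhz = 706, 732   # commonly reported core clocks (assumption)

units = k20x_smx / k20_smx                # contribution of the extra SMX
clock_lin = k20x_mhz / k20_mhz            # clock contribution, flat voltage
clock_cubed = (k20x_mhz / k20_mhz) ** 3   # clock contribution, V rising with f

print(f"extra SMX alone:       +{units - 1:.1%}")
print(f"clock alone (P ~ f):   +{clock_lin - 1:.1%}")
print(f"clock alone (P ~ f^3): +{clock_cubed - 1:.1%}")
print(f"worst case from 225 W: {225 * units * clock_cubed:.0f} W")
```

Even stacking the pessimistic f³ case on top of the extra SMX only pushes a 225W part to roughly 270W, nowhere near 300W, which is consistent with the "tiny bit above 225W" guess.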

What do yields have to do with the binning process? nVidia is limited by power, not by size. They started the ramping process a few months ago. They must test all chips and check whether they are within the power envelope and reliable enough.

Quote:

Originally Posted by boxleitnerb

If K20 and K20X have the same TDP, why not use K20X from the beginning?

You have process yields and binning yields. You might have 100 functional dies on a wafer out of a possible 200 (50% process yield; the other 100 dies are completely dead and unusable), but not all of those 100 work with 14 or 15 SMX enabled at a given frequency. It is normal that larger dies have an increased defect probability, so by disabling 1 or 2 defective SMX you can salvage a chip that you would otherwise have to throw away.

I believe you always get more partially defective chips than completely "healthy" chips from a wafer. Simple probability math.

Quote:

Originally Posted by ShintaiDK

You know this is mainly TDP based binning, not defects?

Had they been sold as GeForce cards, they would most likely almost all be 14 SMX, since there ain't a 225W TDP limit there.

This is nitpicking, but yields don't necessarily mean functional yields. Transistors running out of spec (parametric yield) most likely hit harder in this case. Still, this is definitely die harvesting, and whether the SMX outright do not work, or need so much voltage that it's more feasible to cut them than to lower clock targets, makes little difference. Arguing that TSMC's 28nm process is doing what Nvidia wants it to do and everything is nice and pretty and perfect is just silly.
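TDP-based harvesting of the kind being described can be sketched as a toy simulation: each die draws a random power figure at the target clock from the process spread, ships as a full 14 SMX part only if it fits the cap, and otherwise has an SMX fused off to pull power down. Every number here is made up for illustration; none is real K20 binning data:

```python
import random
from collections import Counter

# Toy TDP-binning sketch. Hypothetical numbers throughout.
random.seed(42)

TDP_CAP_W = 225.0
POWER_PER_SMX_W = 15.0   # assumed marginal power of one SMX at target clock

def bin_die():
    """Assign one die to a bin based on its (random) power at full config."""
    power_14smx = random.gauss(mu=230.0, sigma=15.0)  # process spread
    if power_14smx <= TDP_CAP_W:
        return "K20X (14 SMX)"
    if power_14smx - POWER_PER_SMX_W <= TDP_CAP_W:
        return "K20 (13 SMX)"
    return "rebin or scrap"

bins = Counter(bin_die() for _ in range(10_000))
for name, n in bins.most_common():
    print(f"{name:16s}: {n / 100:.1f}%")
```

With the mean sitting just above the cap, most dies land in the 13 SMX bin even though all 14 SMX are functional, which is exactly the parametric-harvesting scenario: nothing is "defective", the transistors just run out of spec at that power budget.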

Anyway, with Big Kepler taking the high end, are the 760/760 Ti SKUs expected to be pretty much underclocked 670/680s? I'd love to get a decent 15" laptop under 900€, and it would be nice to get a high-end GK106 / low-end GK104 GPU with it.