[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Regular

> It is not an exaggeration. Compare like price points and you can see that the average die size of a GPU at a given cost is greater than that of a CPU costing the same amount.

Yes, and how is that relevant? As I said, Core 2 has no competition; nVidia and ATi don't have that luxury, so they need to keep prices low. With the Pentium 4/Pentium D, which were competing with the K8, the cost per die size was completely different.
Even so, I still think the difference is exaggerated.
Aside from that, the discrete GPU market is still MUCH lower volume than the CPU market.
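The price-per-die-area comparison being argued over can be made concrete with a quick sketch. The prices and die sizes below are illustrative placeholders, not measured figures for any real product:

```python
# Sketch: compare retail price per mm^2 of die for a CPU and a GPU.
# All numbers below are illustrative placeholders, not actual figures.

def price_per_mm2(retail_price_usd, die_area_mm2):
    """Retail price divided by die area -- a rough proxy only, since
    retail price includes far more than the silicon itself."""
    return retail_price_usd / die_area_mm2

cpu = {"name": "hypothetical quad-core CPU", "price": 266.0, "area": 214.0}
gpu = {"name": "hypothetical high-end GPU", "price": 299.0, "area": 324.0}

for chip in (cpu, gpu):
    print(f"{chip['name']}: ${price_per_mm2(chip['price'], chip['area']):.2f}/mm^2")
```

With these placeholder numbers the CPU carries a higher price per mm² of die, which is the pattern the quoted post claims; swap in real prices and die sizes to test it for any given pair of chips.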

LegendAlpha

The switch happened, or at least started, in 2001.
AMD only finished retiring its 200mm equipment last year, so it obviously wasn't a universal transition.

The trend was that such increases would happen every ten years.
200mm was done in 1991, and 300mm was done in 2001.

Equipment manufacturers are unhappy because they still have to make back the expense of the last transition when the fabs push for another increase.
Because 300mm research was expensive, and not everyone made the switch, the break-even point was pushed way back.

Even fewer manufacturers could make a 450mm transition, and the equipment and R&D aren't getting cheaper, so break-even would be very far out (and probably not until after Intel or someone else pushes for another increase to line its own pockets).
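The economics behind these wafer-size transitions come down to area. A rough sketch, using a common first-order dies-per-wafer estimate (wafer area over die area, minus an edge-loss correction; it ignores yield and scribe lines, which matter a lot in practice):

```python
import math

def usable_dies(wafer_diameter_mm, die_area_mm2):
    """Very rough dies-per-wafer estimate: wafer area / die area,
    minus a first-order edge-loss correction term."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# The 200mm -> 300mm -> 450mm progression discussed above:
for diameter in (200, 300, 450):
    print(diameter, "mm wafer:", usable_dies(diameter, 100), "dies of 100 mm^2")
```

Because the edge loss shrinks relative to total area, each step up yields slightly more than the raw area ratio would suggest (300mm gives a bit over 2.25x the dies of 200mm), which is exactly why fabs keep pushing for larger wafers despite the equipment makers' objections.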

Veteran

> Yes, and how is that relevant? As I said, Core 2 has no competition; nVidia and ATi don't have that luxury, so they need to keep prices low. With the Pentium 4/Pentium D, which were competing with the K8, the cost per die size was completely different.
> Even so, I still think the difference is exaggerated.
> Aside from that, the discrete GPU market is still MUCH lower volume than the CPU market.

Here are relevant statistics on the largest CONSUMER-GRADE microprocessor for sale from each respective company:

Veteran

> The switch happened, or at least started, in 2001.
> AMD only finished retiring its 200mm equipment last year, so it obviously wasn't a universal transition.
>
> The trend was that such increases would happen every ten years.
> 200mm was done in 1991, and 300mm was done in 2001.
>
> Equipment manufacturers are unhappy because they still have to make back the expense of the last transition when the fabs push for another increase.
> Because 300mm research was expensive, and not everyone made the switch, the break-even point was pushed way back.
>
> Even fewer manufacturers could make a 450mm transition, and the equipment and R&D aren't getting cheaper, so break-even would be very far out (and probably not until after Intel or someone else pushes for another increase to line its own pockets).

Thanks for the history lesson and the explanation of current events. It all makes sense now.

Veteran

> Uhm, that's not quite extreme enough, the overall BoM has little to do with chip revenue...

You are of course correct, but you're also strengthening my argument. GPUs aren't directly comparable to CPUs in the manner Scali is attempting to compare them. The fact that GPUs are not sold separately, as CPUs are, only illustrates this point further.

No, those are retail prices, which have little to do with production costs.
Yorkfield is more expensive because it has less competition, not because TSMC can produce chips more cheaply than Intel can.
In fact, the Q6600 is currently cheaper than the 45 nm quad-cores. Does that mean the Q6600 is cheaper to produce? Unlikely.
So yes, GPUs are (currently) cheaper to consumers per unit of die size, but your conclusion that nVidia produces its GPUs more economically than Intel could is just wrong.
In fact, even comparing 65 nm die sizes against 45 nm chips is quite strange... Are you holding it against Intel that they have superior technology that lets them make smaller chips with better performance per mm^2?

Regular

Indeed, as I already tried to explain, production costs have little to do with retail price.
Just because a chip is expensive for a consumer doesn't mean it was expensive to produce.
In fact, the actual cost of producing a CPU, from raw material to finished end product, is ridiculously low.
The real cost lies in the investments in R&D and manufacturing facilities the company had to make in order to produce the chips.
Prices of the same chip change massively over its lifetime... Take the Q6600, for example: I believe it was introduced at about $850, but now it sells for less than $200, for the exact same chip. So the price has little to do with production cost, and much more with the investments and the strategy for returning on those investments.
In fact, I'm a bit shocked that people on this forum don't seem to be aware of this, and instead just pick random CPUs and GPUs to 'prove' their claims. As Jawed demonstrates, pick a different model of GPU (Quadro and Tesla are basically still just G80/G92 designs) and the tables are turned.
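The $850-at-launch, sub-$200-later pattern falls out of simple fixed-cost amortization. A minimal sketch, with all the dollar figures invented for illustration (the real fixed and marginal costs are company secrets, as noted elsewhere in this thread):

```python
# Sketch: why the same chip can launch at $850 and later sell near $200.
# Marginal (per-unit) production cost stays low and roughly constant;
# what changes is how much of the fixed R&D/fab investment each unit
# still has to recover. All numbers are illustrative placeholders.

def break_even_price(fixed_cost, marginal_cost, units):
    """Price at which selling 'units' chips exactly recovers
    fixed investment plus per-unit production cost."""
    return marginal_cost + fixed_cost / units

fixed = 1_000_000_000   # hypothetical R&D + fab investment, USD
marginal = 40.0         # hypothetical cost to actually make one chip, USD

for units in (2_000_000, 10_000_000, 50_000_000):
    price = break_even_price(fixed, marginal, units)
    print(f"{units:>11,} units -> break-even at ${price:.2f}")
```

Early in a chip's life, few units have shipped, so each one must carry a large slice of the fixed cost; late in its life, the investment is largely recovered and the price can fall toward the (low) marginal cost without losing money.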

I see no reason why Intel couldn't compete with nVidia/TSMC. In fact, doesn't Intel already compete with TSMC, because AMD outsources some of its production there?

Moderator / Legend / Veteran

Feel free to continue disregarding basic arithmetic; I just hope for Intel's sake that Otellini takes this stuff more seriously than you do, despite his non-engineering background. I have made the exact same points myself in a wide variety of threads with different and more 'best-case' (for Intel) examples; the *cost* difference isn't as massive as implied above, but it's still very big. Oh, and AMD doesn't outsource CPUs to TSMC, only to Chartered.

Moderator / Legend / Veteran

I'll presume you didn't see my edits, sorry about that - I tend to abuse my mod privileges a bit too much by editing a post within the first ~5 minutes. Anyhow, I'm definitely not disregarding basic economics; I have more than enough experience there, thank you very much. Once again, please consider Intel's gross margins.

Regular

But you ARE disregarding basic economics.
You have to admit that production costs basically come down to how you speed-bin your chips.
The rest comes down to how much you invested in the design and production facilities, and the resulting performance determines how much of a profit you can make (the market leader defines price/performance; all others just have to adapt to the scale set by the market leader).
Look at what happened with the Radeons... AMD is now practically giving them away because they perform poorly compared to the GeForces.
If AMD's R700 turns out to outperform the 8800s and 9800s, then nVidia will have to drop their prices.
That has little to do with production costs, and everything to do with how much performance your design delivers.
In fact, I'm quite sure the Radeon 2900 was actually about as expensive for ATi to produce as the 8800GTX/Ultra were for nVidia. But its lacking performance meant the 2900 fell into the 8800GTS 640 price bracket.
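The speed-binning point can be sketched too: one wafer yields a spread of chips, and revenue depends on how that spread is sliced into SKUs and what the market will pay for each. The SKU ladder, prices, and clock distribution below are all invented for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical SKU ladder: (minimum stable clock in GHz, selling price in USD).
# The 0.0 GHz entry is a salvage/low-end part that catches everything else.
skus = [(3.0, 300), (2.66, 200), (2.4, 130), (0.0, 80)]

def bin_chip(max_clock_ghz):
    """Assign a chip to the highest-priced SKU it qualifies for."""
    for min_clock, price in skus:
        if max_clock_ghz >= min_clock:
            return price

# Simulate a batch of dies whose max stable clock varies around 2.7 GHz.
clocks = [random.gauss(2.7, 0.25) for _ in range(10_000)]
revenue = sum(bin_chip(c) for c in clocks)
print(f"average selling price: ${revenue / len(clocks):.2f}")
```

Note that the production cost per die is identical across all four bins; only the binning outcome and the competitive price ladder change the average selling price, which is the argument being made above.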

Veteran

> No, those are retail prices, which have little to do with production costs.

I don't see how production cost affects the consumer at all WRT this discussion. Besides, I don't have access to the production costs of any of the products in question, and I doubt you do either, as such information is certainly a company secret.

> In fact, even comparing 65 nm die sizes against 45 nm chips is quite strange... Are you holding it against Intel that they have superior technology that lets them make smaller chips with better performance per mm^2?

My comparison was one between the two largest consumer-grade products currently shipping from each company, nothing more than that. Quite fair if you ask me.

If you want to compare products which are no longer in production against ones that are, let's take your absolute best-case scenario and compare the firesale-priced Q6600 to the still-full-priced G92-based 9800 GTX:

However, this is only if you use the closeout pricing, and not established MSRP/average sale price.

So you see, even in this absolute best-case scenario for Intel, in which every break is given to them and none to NV, they still BARELY come out on top. Any *fair* product comparison will show the opposite, and by quite a large margin. Compare G80 to Kentsfield and it's G92 vs. Yorkfield all over again. Hmm, I'm seeing a trend there...

I think you misunderstand why GPUs have larger die sizes than CPUs; it has NOTHING to do with any manufacturing-process advantage or disadvantage either company has. It is because cache is much denser than logic, and Intel allocates more of its transistor budget (percentage-wise) to cache than NV does.
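The cache-vs-logic density argument can be illustrated with a toy area model. The density figures below are invented for the sketch, not measured process data; the only point is that the same transistor budget yields very different die areas depending on the cache/logic split:

```python
# Sketch: why a cache-heavy die packs more transistors per mm^2.
# Density figures are illustrative placeholders, not real process data.

def die_area_mm2(total_transistors_m, cache_fraction,
                 cache_density_mtr_per_mm2=8.0,
                 logic_density_mtr_per_mm2=2.0):
    """Estimate die area from a transistor count (in millions) and the
    fraction of those transistors spent on (dense) cache vs (sparse) logic."""
    cache_tr = total_transistors_m * cache_fraction
    logic_tr = total_transistors_m * (1 - cache_fraction)
    return (cache_tr / cache_density_mtr_per_mm2
            + logic_tr / logic_density_mtr_per_mm2)

# Same transistor budget, different cache/logic split:
cpu_like = die_area_mm2(800, cache_fraction=0.7)  # cache-heavy (CPU-like)
gpu_like = die_area_mm2(800, cache_fraction=0.2)  # logic-heavy (GPU-like)
print(f"cache-heavy: {cpu_like:.0f} mm^2, logic-heavy: {gpu_like:.0f} mm^2")
```

With identical transistor counts, the cache-heavy chip comes out much smaller, which is why a logic-dominated GPU ends up with the bigger die even on a comparable process.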

Regular

So basically you needed 20 quotes to say 'margins'.
But what is your point?
If we go back to the post that started this discussion:
"GPUs are much bigger than CPUs and generate much lower revenues. If Intel could magically cut its fab costs in half they would still have trouble matching NVIDIA's economics. The idea that they will all the sudden outperform GeForce because of a process advantage is highly dubious."

So the argument was made that it would be impossible for Intel to match "nVidia's economics"... so, margins, basically. Then the whole nonsense about retail prices and die sizes started as 'proof' of this statement. I only mentioned the die sizes to show that even though GPUs are larger than the x86 CPU dies Intel currently sells, Intel has produced MUCH larger dies than these GPUs, so manufacturing GPU dies would not be much of a technical challenge for them.
I really don't think there's any relation between die size and retail price. Apparently you agree, even though you still seem to think Intel is not going to match nVidia. I think there's no such relation, and that Intel will be able to balance its R&D investments and production costs so its margins are adequate to match nVidia. And unlike nVidia, Intel doesn't need its GPUs to be cash cows, as already mentioned: Intel could afford to lose money on its GPUs, so for Intel it doesn't even have to be about margins in the first place if they so choose.

Veteran

> In fact, I'm quite sure the Radeon 2900 was actually about as expensive for ATi to produce as the 8800GTX/Ultra were for nVidia. But its lacking performance meant the 2900 fell into the 8800GTS 640 price bracket.

Smaller die size, no NVIO chip, and less GDDR means it was almost definitely not as expensive to produce as the GTX/Ultra.

Moderator / Legend / Veteran

ShaidarHaran: I don't think cache vs logic is really the main factor here, and clearly you've got a die size budget, not a transistor budget.
Scali: Once again, do you have ANY idea how high the ASPs are for Montecito? I agree there's no technical challenge for Intel here, but that's not the point. I'm most definitely not the one disregarding basic economics here. In fact...
