Haswell CPUs will contain vector processors and a more powerful on-die GPU. The chips are designed to power the next generation of "Ultrabooks". (Source: ASUSTeK)

An Intel corporate blog post seemed to confirm both the presence of vector coprocessor silicon and a 2013 release date for the 22 nm Haswell. (Source: Intel)

Company looks to new 22 nm architecture to hold off AMD and ARM Holdings

Intel Corp. (INTC)
has dropped a few hints about its upcoming 22 nm Haswell architecture,
currently under development by the company's secret Oregon team. In a
post on the Intel Software Network blog titled
"Haswell New Instruction Descriptions Now Available!", the company
reveals that it plans to launch the new CPU in 2013.

The vector coprocessor, which will work with the on-die GPU, was a major focus of
the post. The company is preparing a series of commands called Advanced
Vector Extensions (AVX), which will speed up vector math. It writes:

Intel AVX addresses the continued need for vector floating-point
performance in mainstream scientific and engineering numerical applications,
visual processing, recognition, data-mining/synthesis, gaming, physics,
cryptography and other areas of applications. Intel AVX is designed to
facilitate efficient implementation by wide spectrum of software architectures
of varying degrees of thread parallelism, and data vector lengths.
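In concrete terms, the "vector math" AVX accelerates is data-parallel arithmetic: AVX widens SIMD registers to 256 bits, so one instruction applies the same operation to eight single-precision floats at once. A minimal Python sketch of the idea follows -- illustrative only, since real AVX code would be C intrinsics or compiler-vectorized loops:

```python
# Sketch of SIMD ("vector") execution. An AVX 256-bit register holds
# 8 single-precision floats, so one instruction (e.g. vaddps) adds
# 8 pairs of values at once. This is a conceptual model, not real AVX.

LANES = 8  # floats per 256-bit AVX register

def vector_add(a, b):
    """Elementwise add, processed one 8-wide 'register' at a time."""
    out = []
    for i in range(0, len(a), LANES):
        # One AVX add instruction would handle this whole chunk at once,
        # instead of 8 separate scalar additions.
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return out

print(vector_add([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
```

The scientific, visual-processing, and physics workloads Intel names above all reduce to long loops of exactly this shape, which is why widening the registers pays off.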

Intel has a ways to go to meet that objective -- its on-die GPU in Sandy
Bridge marked a significant improvement over past designs (which were
traditionally housed in a separate package), but it still fell far short of the GPU found in Advanced
Micro Devices' (AMD)
Llano Fusion APUs.

Intel has enjoyed a love/hate relationship with graphics makers AMD and NVIDIA
Corp. (NVDA).
While it's been forced to allow their GPUs to live on its motherboards
and alongside its CPUs, the company has also fantasized about usurping the
graphics veterans. Those plans culminated in the company's Larrabee project, which aimed to offer
discrete Intel graphics cards.

Now that a commercial release of Larrabee
has been cancelled, Intel has seized upon on-die integrated graphics as its
latest answer to try to push NVIDIA and AMD out of the market. Intel is heavily promoting the concept of ultrabooks --
slender notebooks like Apple, Inc.'s (AAPL)
MacBook Air or ASUSTeK Computer Inc.'s (TPE:2357) UX21,
which feature low voltage CPUs and -- often -- no discrete GPU.

Mr. Kilroy reportedly wants ultrabook manufacturers using Haswell to
shoot for a target MSRP of $599 USD, which would put them roughly in line
with this year's Llano notebooks from AMD and partners.
That's about $100 USD less than current Sandy Bridge notebooks
run.

Well, they didn't say exactly which of today's discrete cards it would rival. If they are saying top-of-the-line ATI Radeon 6XXX-series performance will be available in an integrated GPU, that is pretty impressive, even two years away. But discrete graphics cards come in a broad range, and they didn't really elaborate, so it is hard to know if I should be impressed or not.

quote: I hope they do I would like a lot more physics in my games. More than just a torn flag waving or other cheap eye candy.

Really, are you that bothered by the fact that when you toss a grenade it has a perfect parabolic trajectory, completely disregarding the air friction? There's a reason Ageia failed: there's no need for complex calculation when the result very much resembles what PhysX can approximate today.
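The grenade example is easy to make concrete. A quick sketch comparing the ideal drag-free range against a simple Euler integration with quadratic air drag (the drag coefficient here is an arbitrary illustrative value, not tuned to any real projectile):

```python
import math

def range_no_drag(v0, angle_deg, g=9.81):
    """Ideal parabolic range: R = v0^2 * sin(2*theta) / g."""
    th = math.radians(angle_deg)
    return v0 * v0 * math.sin(2 * th) / g

def range_with_drag(v0, angle_deg, k=0.05, g=9.81, dt=1e-3):
    """Euler-step the trajectory with drag acceleration -k * |v| * v.

    k is a made-up drag constant chosen for illustration.
    Returns the horizontal distance when the projectile returns to y = 0.
    """
    th = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    while True:
        speed = math.hypot(vx, vy)
        vx += -k * speed * vx * dt
        vy += (-g - k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:
            return x

print(range_no_drag(20.0, 45.0))    # ideal parabola, roughly 40.8 m
print(range_with_drag(20.0, 45.0))  # noticeably shorter with drag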

Ageia failed because the framerates were horrible when the PPU was being used. Think back to the original 3D accelerator cards, where 3DFX dominated, and WHY. There were the 3DFX Voodoo cards, and then you had all the others, most of them not really improving the framerates, even though graphics clarity WAS better.

The Voodoo and Voodoo 2 offered better graphics AND faster performance -- a win/win. Look at physics, where if you turn it on, your framerates drop so much you want to turn it off. Only compared against software acceleration with the feature on would the PPU seem like an improvement.

Ageia was always doomed, even in their best-case scenario of delivering something in their discrete product that couldn't be matched on pure performance by the GPU makers. The best they could hope for was to turn a big profit on a buyout by one of the big GPU companies.

The problem was that they could never hope to have a slam-dunk, "either you have our card or your stuff is hopelessly inferior" situation. The most they could hope for was a brief window when a few games would get a lot of mileage out of the card. The technology wasn't sufficiently different from what GPGPU enables, meaning the next GPU generation would always offer a substantial chunk of what the discrete card offered, but effectively for free in the end user's eyes.

Worse, high-end users faced a choice between adding a PPU card or a second GPU card. The latter choice was arguably more versatile, albeit more costly in most cases where the card is fairly recent and strong.

So, even if it worked perfectly and gained wide support, Ageia couldn't ever offer a killer value proposition against the likes of Big GPU.

Hearing Intel talk up their 'next-gen graphics' is like listening to a broken record, and dates back about as far as vinyl as well.

I simply fail to imagine how Intel will out-engineer AMD in GPU design. Having an enemy in nVidia and a direct competitor in AMD, they will have to source their own engineering team, which no matter how well funded, will simply not outpace nVidia or AMD in 2 years.

quote: I simply fail to imagine how Intel will out-engineer AMD in GPU design.

By and large I'd have to agree with you there. However, it is fair to say that Intel has superior manufacturing capabilities, and that its recent stumbles (the Cougar Point SATA bug) have been relatively minor (contrast with Phenom). AMD/ATI and NVIDIA, in comparison, are rather tied to TSMC, and neither TSMC nor GlobalFoundries et al. can match Intel's process technology.

Intel has also shown somewhat surprising adaptability recently in terms of CPU architecture. AMD has seemingly been stagnant -- though quite the opposite is likely true -- perhaps due to budget / cash-flow difficulties. Which goes to show once again that a sound economic position can help you [buy engineers].

I wouldn't underestimate Intel. Don't forget they can afford to throw much more money at the project -- probably much more than AMD and nVidia could put in together. That, in combination with the technological advantage they currently have, could create a path to a really competitive GPU. Not saying it will, just that it could be possible.

quote: Really, are you that bothered by the fact that when you toss a grenade it has a perfect parabolic trajectory, completely disregarding the air friction?

What a strangely specific example.

I'm of the opinion that stuff looks more real if it moves like it's real. You can slap all the textures and shaders and effects onto a 3D model that you like, even to the point of making it look photorealistic for stills, but if it still moves like a mannequin having a stroke, the effect is ruined.

That's what I was looking for: an example where real physics will have a noticeable impact. 'Cause we have ragdoll physics right now. All I have seen in tech demos was smoke, cloth (both of which will go unnoticed in a fast-paced game -- maybe useful in an RPG, but that's more about the story and gameplay) and particles. And particles are useless, since we saw that while you may be able to compute a million of them each second, no video card will cope with displaying that many.

Actually getting something close on paper wouldn't be that hard. On paper Sandy Bridge is 25% as fast as Llano (lower in reality due to the memory bottleneck), and GPUs are still seeing enough design improvements to double yearly instead of every 18 months. 22 nm could get them halfway there all by itself; and Intel is claiming tri-gate's performance boost is the equivalent of a process node, which gets them most of the rest of the way there.
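The comment's back-of-envelope arithmetic can be written out. All the numbers below are the commenter's assumptions (25% starting point, ~2x per node shrink, tri-gate worth roughly another node), not Intel figures:

```python
# Napkin math for the "Intel could catch Llano on paper" argument.
# Every factor here is an assumption from the comment, not measured data.
start = 0.25          # today's Intel GPU vs. Llano-class, per the comment
process_node = 2.0    # one full shrink (32 nm -> 22 nm), roughly 2x
tri_gate = 2.0        # Intel claims tri-gate is worth about another node

relative = start * process_node * tri_gate
print(relative)  # 1.0 -> parity with a Llano-class GPU, on paper
```

Which is exactly the comment's point: the shrink gets them "halfway there" (one of the two needed doublings), and tri-gate supplies most of the other -- on paper, while AMD's target keeps moving.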

For all we know -- and most likely -- they'll have an on-die GPU that'll be equal to today's discrete video cards... since they are NOT specific, and Intel is, thankfully, bad at video (just stick to CPUs and SSDs), they could be talking about the ATI 6450, a bottom-end video card, which is a bit better than what AMD is offering in their Fusion C-series CPUs/APUs.

So... two years from now, AMD will have their 9000-series cards (again) and Intel will be 2-4 years behind, as usual. And of course, AMD Fusion chips will be more advanced than what they are today.

Biggest thing I can think of in their recent "GPU competition" mode was ripping out half of the GPU in most of the Sandy Bridge chips sold. Whose stupid idea was it to include the HD 3000 only in the 'K'-series overclocker parts?

Intel's own stupid idea, of course! If they had the HD 3000 in the lower-end chips, they would be competing against themselves, which is a bad idea. So crippling the lower-end chips is a "marketing" decision.

I still think they ought to ship variants of the SB chips without any GPU cores in there, since they suck and OEMs put in discrete GPUs anyway -- so why pay for an energy-wasting part that goes unused?

A quick look over today's PC games shows a very simple reality: Huge numbers of them are console ports, designed to run on hardware that's 6 years old already. The 360 isn't scheduled for replacement until 2015.

You don't need a 2013 graphics card to play a console port. You don't even need a 2011 graphics card unless you're doing something like 6x multi-monitor with anti-aliasing.

In that light...well, a CPU that can smoothly play WoW and the vast menagerie of console ports without a graphics card doesn't seem quite so stupid.