My guess is that AMD is going to have some major design wins with Fusion. Having a single chip that will meet the needs of a large section of the computing population will be very valuable, in both cost and power consumption.

Apple especially may be interested in Fusion for their products. They don't need a large number of chips compared to a vendor such as Dell, but they demand a certain amount of performance and battery life (in the case of their laptops). Look at the current MacBooks: they have three chips that could potentially be replaced with a single chip (Intel CPU + Intel GPU + NVIDIA GPU). It's not hard to imagine a Fusion chip replacing all three.

With Intel and NVIDIA not playing nicely, it's not hard to imagine that their products will be even harder to integrate in the future - Optimus isn't exactly an ideal approach. Makes me wonder if Intel would eventually buy out NVIDIA, even if that isn't something NVIDIA would probably ever go for.

As for integrated GPU products eliminating the need for discrete GPUs, I think that is very unlikely. With technology such as Crossfire or SLI, it doesn't seem hard to imagine a situation where you could add in a discrete GPU and have it work in tandem with an integrated GPU.

From what I have heard, the folks working on Larrabee graphics are being transitioned to integrated graphics. The Larrabee HPC product still has all of the graphics hardware in it, but there is no active development on a discrete GPU product. There will be some pathfinding missions and experiments for sure, but no products are planned.

"Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and that integrated graphics will become good enough for most of what we want to do, even from a gaming perspective."

This is key. Intel is predicting a future in which the discrete graphics card goes the way of the discrete sound card. Remember those? They're basically a niche solution today and only a handful of people need them.

I think Intel is right to think that graphics will be similar. As PC gaming loses market share to consoles and integrated solutions become more and more capable of handling PC games, the size of the market for discrete cards will shrink. The vast majority of all discrete cards sold are in the sub-$100 market, and that market will cease to exist in about five years. I'm not saying that discrete cards won't exist--they will, at least for the next few years--but Intel doesn't see this as a market worth investing in, because only highly specialized applications with a fairly small audience will have an interest in these products. The barriers to entry in the discrete card market are large because it takes a lot to get a graphics platform working, and the future is neither long nor bright for discrete graphics cards in the sector where they currently make money. There's just not much money to be made competing with two established manufacturers like Nvidia and ATI.

Discrete graphics aren't going to go anywhere because integrated graphics cannot be as good as add-on graphics. Discrete graphics are going to exist so long as computer game producers keep pushing the boundaries of realism with each new game. Sound cards died because the integrated ones were just as good as the add-on ones. The same can never be said of integrated graphics.

Until people stop wanting to play computer games at the highest settings, discrete graphics will not go away. The people who demand the best may be a small market, but there are enough of them to be very profitable.

Just want to say that I agree. What philosofool is saying is completely off and Intel's take on this is obviously influenced by them just having shut down their own venture into discrete graphics, possibly because of technological challenges that simply became too much to handle.

Larrabee was an interesting project and I'm sorry to see it go, but the GPU market will do pretty well even without Intel.

You misunderstood. I didn't say that they were going away, I said that they would become a decreasingly important segment of the market--it's *not* about the continued existence of discrete cards, it's about the continued existence of a large and profitable section of the market.

Currently, about 90% of the graphics cards Nvidia and ATI sell are in the sub-$100 range. That means that most of the money in the discrete card business is in the cheap cards, not the gamer cards. But integrated graphics will compete at non-gamer performance levels within the next few years. As the memory bandwidth of new DDR generations increases and integrated graphics move on-chip, where they can share a cooling solution with the CPU, the limitations of integrated graphics will shrink, and the gap between an Intel IGP and a low-end discrete card will shrink with it. That means a smaller market, and a smaller market means less money.

The analogy isn't very good... the reason most of us don't need a discrete sound solution is that sound processing has already been stretched to very near its potential. Remember, in the past there were just beeps, but now we can play sound files containing sounds we can't even hear. What's the point in advancing past that?

On the graphics side, well, there are more parameters. Sure, cards can go 100+ fps, but TVs and monitors keep getting bigger, and cards will struggle to do that many fps on the next HD revision. BTW, what is it? 2160p and 4320p? So there's a need for faster and faster chips with more and more memory.

What Intel is doing is trying not to depend so much on graphics-oriented companies like Nvidia for everyday graphics usage. AMD was clever to merge with ATI, and Intel does well to devote more resources to this segment.

I just ordered an Asus Xonar Essence STX. Why? Because quality matters. You get what you pay for, and watching HD movies or listening to music with crappy onboard sound just sucks. Invest in a good sound card + THX-certified speakers and you will stop being a bean counter. Good sound cards and good graphics cards will never die.

All nice and well for performance at first glance. However, one shouldn't forget that fast graphics also requires a lot of bandwidth, naturally. No amount of on-CPU cache is going to change this. By moving the GPU into the CPU we constrain the bandwidth for both. To get performance even comparable to a mainstream graphics card we'd need a couple more memory channels and higher memory clocks for the CPU, which additionally means a more expensive socket and motherboard. And that's aside from the cost of the memory itself and the huge graphics portion within the CPU.
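To put rough numbers on that bandwidth argument: peak DRAM bandwidth is just transfer rate × bus width × channels. Here's a back-of-envelope sketch (my own illustration; the example parts and speeds are assumptions, not figures from the article):

```python
def peak_bandwidth_gbs(transfers_mt_s, bus_width_bytes=8, channels=1):
    """Peak theoretical DRAM bandwidth in GB/s.

    transfers_mt_s:  transfer rate in MT/s (e.g. 1600 for DDR3-1600)
    bus_width_bytes: 8 bytes for a standard 64-bit DDR channel
    channels:        number of memory channels
    """
    return transfers_mt_s * bus_width_bytes * channels / 1000

# Dual-channel DDR3-1600, shared between the CPU *and* an on-die GPU:
print(peak_bandwidth_gbs(1600, channels=2))          # 25.6 GB/s

# A hypothetical mainstream card: 128-bit GDDR5 at 4000 MT/s, all to itself:
print(peak_bandwidth_gbs(4000, bus_width_bytes=16))  # 64.0 GB/s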

If this graphics engine can also be used as a co-processor for general-purpose tasks, we may have a deal... but if not, and we'd have to pay for this in each and every office box, I'd rather say "forget IGP gaming".

Now, in say 5 years, we'll be at what? 22-20, or maybe 16nm. And haven't the naysayers been calling the discrete GPU dead for god knows how long? 3D gaming? There's not an IGP on earth that will pull 3D at any acceptable level. While the IGPs continue to evolve, so does gaming, and within the timespan mentioned we'll have new consoles, and thus another huge leap in development and requirements. This industry fuels itself. I just don't see discrete cards dying off.

I think that the CPU and GPU will merge eventually, so there will be no discrete GPUs in the end. There will be multicore, floating-point-capable CPUs. And then, instead of SLI, you'll be putting more CPUs with more cores into the PC. That way it will be easier to program and faster.

My main interest in graphics chips is their potential to offer vastly faster general computing through their evolution into GPGPUs (example: Folding@Home, a distributed protein folding program, runs 40X faster on a PS3 than on a modern CPU).

So can anyone explain the ramifications of Intel's announcement on this front? Is their (perhaps temporary) abandonment of high-performance integrated graphics bad news for using GPUs to take on CPU tasks, or not?

Sorry, correction: I meant Intel's "abandonment of high-performance DISCRETE graphics..." I should add that the reason I raised the issue is that it was my understanding that Larrabee's architecture was supposed to make it particularly suitable for GP use (what I don't know is whether this would put them ahead of ATI and Nvidia in that regard).

Larrabee is still a good idea, just not necessarily for a discrete graphics controller. Take a look at the forthcoming SCC chip that similarly uses "tiles" of simplified x86 cores, but wires them together via an on-chip mesh network with 256 GB/s bandwidth. Each of 24 tiles has its own address space and 2 x86 cores in an SMP configuration sharing that memory. The 48 cores are Atom-like, but have message passing hardware. Every tile runs its own OS in its own address space, so it is an HPC on a chip, but with much faster networking. Now consider the on-chip optical signaling stuff Intel is working on that will increase the on-chip network bandwidth to the point that message passing is just as fast as if all cores accessed the same RAM. This gets around GPGPU's problem of continually having to shuffle data between texture memory and off-chip system memory over PCIe. The GPGPU is very fast at fully parallel tasks, but the SCC can move data between cores and other cores or system RAM at much higher rates. Real world tasks are often a serial set of parallel sub-tasks.
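As a software analogy for the message-passing model described above (my own sketch using Python threads, not Intel's SCC interface): each "tile" keeps its state private and exchanges data only through explicit messages, never shared memory.

```python
import queue
import threading

def tile(tile_id, inbox, outbox):
    """A 'tile' that sums the numbers sent to it and reports its total.

    All state (total) is private to the tile; the only communication
    is via explicit messages on the inbox/outbox queues.
    """
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut the tile down
            break
        total += msg
    outbox.put((tile_id, total))

inboxes = [queue.Queue() for _ in range(4)]
results = queue.Queue()
workers = [threading.Thread(target=tile, args=(i, inboxes[i], results))
           for i in range(4)]
for w in workers:
    w.start()

# Scatter work by message, two values per tile, then shut each tile down.
for i, q in enumerate(inboxes):
    q.put(i * 10)
    q.put(i)
    q.put(None)
for w in workers:
    w.join()

totals = dict(results.get() for _ in range(4))
print(totals)   # {0: 0, 1: 11, 2: 22, 3: 33} (key order may vary)
```

The point of the model is that there is no contended shared state to synchronize; on the SCC the hardware mesh plays the role these queues play here.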

And who says it has to be one or the other? Why not include a GPGPU tile with its own on-chip texture memory? Then it would transfer data over the on-chip network as opposed to the much slower PCIe. It's not as dumb an idea as some seem to think.

Jaybus, thanks for the thoughtful reply. I also wonder if it might be easier to get integrated graphics units (as opposed to discrete graphics cards) to perform CPU-like tasks, either because of their architecture or their integration with the CPU.

Then maybe AnandTech and Tom's will stop doing all these waste-of-time gamer reviews about cards that have ruined the hardware market. The current GPU "technology" is a farce; it's the MOST metered market in the PC world... ATI and Nvidia LOVE metering out worthless "next-gen" cards that aren't next-gen anything. They've long developed a product line based off one core and then chopped it up 5-6-7-8-9 times. The stupid gaming masses endlessly fuel this low-tech approach (really, single cores in 2010 are still the majority... vs CPUs' quad and hex cores? lmfao)

The GPU industry is a pure JOKE... about as bad as Apple and their ULTRA-rigid control of Intel hardware and their goofball attempt at a GUI-based OS.

GPU cards need to be modular like a mobo; we should be able to drop in a single multi-core GPU as an "upgrade," vs the endless giant-PCB chase. All of the major CPU advances in power savings and multi-core load sharing should have LONG been implemented in GPU manufacturing... But that's another 10 yrs away according to the METERED profit plan.

Single core? Fermi has 480 and 448 "cores". ATI has 1600, 1440 and 1120 (at the top end). They also have dual-die/chip cards. They long ago got past the single-core chip. SLI/Crossfire is like dual Intel or AMD processors.

I do think that they could make bigger jumps in performance from generation to generation though. I agree with you on that.

The complexity behind a GPU these days is frankly mind-boggling at times; from the hardware point of view, the stuff behind keeping so many threads in flight is just madness. While they haven't quite got the flexibility of a CPU, for doing what they do (batch processing of floating point data) they blow it away.

The discrete graphics product isn't going to die any time soon; on the hardware end the maths simply doesn't work (memory bandwidth, memory access patterns, heat, power, the ability to scale). Heck, if you want high performance then you are just shifting the heat and power consumption somewhere else.

The 'future' is divided into two groups going forward:
- those who will 'make do' with on-chip 'fusion' style GPUs, i.e. those who could live with the current crop of 'on board' GPUs which come on motherboards (such as AMD's solutions)
- those who want high resolution graphics at high framerates, but who will also benefit from a 'fusion' GPU/CPU combo as it will allow off-loading of tasks to a 'close' FPU array

So, yeah, your crazy, misinformed and misguided wish isn't going to happen... or if it does, it won't be for a long, long time, kinda like when we can work out the competing memory access patterns and fights over bandwidth which CPUs and GPUs are going to have.

Intel entered the discrete graphics market too late. Larrabee might be a good GPGPU, but it's still impractical for the average consumer if it doesn't have optimized drivers for 3D games and other video software. Sound cards and NICs are a lot simpler than graphics cards. I still don't believe discrete graphics will decline in the next 5 years; users who need discrete graphics today will favor a more powerful graphics card over a moderate integrated solution.

Comparing the demands of audio and video is a farce at best and mostly disingenuous. Audio hasn't been a major hog of CPU cycles in YEARS, if not decades. And there's no adequate way to appreciate better audio quality without paying through the nose for good speakers (besides, space for audio on a disc is not limitless). Its demands grew linearly.

Graphics on the other hand are already outpacing next-gen GPUs like nobody's business. The power of a video card will ALWAYS be a limiting factor. Believing that IGPs will one day be "good enough" is like thinking an IGP today can run Doom 3 flawlessly at 60fps at high settings, and that's a game that came out SIX YEARS AGO!

Kinda sad to see Intel lose focus, but a company like ARM is marginalizing Intel and that's dangerous. Probably a smart move to focus on mobile devices and try to bring at least a competitive product, else we'll get Apple setting ever lower standards for years to come.

That's how I read it, but remember you have to take into account how fast the graphics industry as a whole is moving. If the mainstream Sandy Bridge IGP is only as fast as a 9400M was back in 2008/2009, and a higher performance version is 80% faster (around 4500 in 3DMark06), where will mobile GPUs be in January 2011? If TSMC can get their head out of their butt and get 28nm fabrication online late this year, maybe in Q2 we'll see GPUs from Nvidia that are 2x as powerful as the 300-series we see now, with the same power/heat footprint.

This has been coming for a long time. It's obvious that Intel can't mix it with Nvidia or ATI. Plus, with the problems Nvidia has been having with their fab process, it's time for Intel to buy Nvidia, just like AMD did with ATI.

I think the point of selling discrete GPUs was to have consumers pay for the development of Larrabee while Intel started taking share from Nvidia in the HPC market. Unfortunately, that wasn't the case, since their GPU design would not stand a chance against Nvidia's or AMD's GPUs. They wouldn't sell enough GPUs if they pushed through with the plan.

They will just focus on creating a Fusion-like CPU before AMD beats them to it, which is not that far removed from creating good SoCs. Meanwhile, they'll allow Nvidia to take the HPC market for now.

I don't know about that. Intel is taking a calculated risk based on what they see going on in the graphics sector. I believe more and more people are choosing integrated graphics and you will see discrete graphics become somewhat of a niche market mostly for gamers and folders. I don't think discrete graphics are going anywhere in the short term (probably not the long term either).

PCs are becoming more and more of a disposable device to people. Most people buy one till it breaks, and go back to BestBuy or some other store and buy their next one. Those PCs generally have integrated graphics. I think Intel realized it was late to the game in discrete graphics and was entering a market that at best probably won't experience much growth. I really have to wonder about performance of Larrabee though, I mean the people who want discrete graphics will go with whatever is best at the time in terms of performance, power, etc. Maybe Larrabee just couldn't cut it.

In a perfect world, the GPU would use only a few watts when I am browsing or idle, and scale up to 100+W when I am gaming. But of course, we know that unless there is some major transistor tech breakthrough, this is not going to happen in the next 5+ years.

So back to an ideal world: the IGP should be VERY low power, with superior 2D rendering performance, a programmable DSP that lets many if not all of the FFmpeg codecs get hardware acceleration, and 3D performance focused on UI and effects workloads, browser canvas, and other vector acceleration, with gaming-related work as a final consideration.

As it currently stands, the IGP is fairly large yet provides performance we don't need 90% of the time. And when we do need performance, it is not capable anyway. So why waste transistors on it? Intel could give us an extra core or more L2 cache for CPU performance instead.

I don't understand why a previous poster said Optimus is not the way forward. I see Optimus as the future, at least in the near term.

And a final note: the worst thing about Intel HD is not the hardware itself. It's the fact that the drivers for Intel HD are poor, slow to update, and the main cause of poor gaming performance. Nvidia has more software engineers than hardware engineers working on their drivers. It just shows GPU and CPU are completely different beasts.

There's one more reason why Intel's IGP products can't be relied upon: they discontinue driver support very early, even for architectures they are still developing.

A lot of bugs have been discovered in Intel's GMA 950 and GMA X3100 drivers - but Intel has stated on its support forums that they will not be fixed. It's more than weird, considering that GMA 950 is the same basic architecture as GMA 3150, and GMA X3100 is the same basic architecture as HD Graphics (Clarkdale).

If these were Nvidia or ATI products, they would still be supported and I would be able to rely on my applications being able to run - albeit slowly - on all of my hardware generations; now, to have that reliability, I am forced to purchase discrete graphics cards, even though the performance of Intel's products might be enough for me.

1) "Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and that integrated graphics will become good enough for most of what we want to do, even from a gaming perspective. In Intel’s eyes, discrete graphics would only serve the needs of a small niche if we reach this future where integrated graphics is good enough."

Intel should recognise that they are partly to blame for the lack of progress with integrated graphics. ATi and nVidia would have released far more powerful solutions if it became apparent that Intel was serious about providing a better gaming experience.

2) "Anything can happen, but by specifically calling out the Atom segment I get the impression that Intel is trying to build its own low power GPU core for use in SoCs. Currently the IP is licensed from Imagination Technologies, a company Intel holds a 16% stake in, but eventually Intel may build its own integrated graphics core here."

I'd rather Intel stopped developing its own solutions right now and just pumped money into Imagination Technologies. Everyone knows how good Kyro was despite its clock speed disadvantage, and, despite being told it couldn't be done, they paired a T&L unit with their deferred rendering system. Every phone worth its salt uses PowerVR graphics, and it can't be difficult to scale these up; plus you don't need to throw the most powerful components at them due to their unmatched efficiency.

Besides which, I'd LOVE to see PowerVR back where it belongs. I still wonder how powerful Kyro III (PowerVR Series 4) would have been compared to the competitors of the time (the Radeon 9800 and GeForce FX 5900).

Once they get their graphics architecture right, there will be advantages to making them in-house, and I bet it doesn't end at licensing costs either. Whether they'll get that on their LPIA products is another problem altogether.

We'll see if PowerVR can scale up again, but other than papers claiming they can, there hasn't been a PC-centric part since the Kyro days. Whether it's hardware or drivers, there is indeed some merit to the claim that TBDR is hard to implement for modern shader architectures.

Well, they could, but I think the price is a little too expensive, probably close to $1B. If $4B for ATI was expensive, then $1B for a small company like Imagination Tech is A LOT MORE expensive...

I have to admit that I have been anticipating this announcement for quite a while. I say that Intel should leave the GPU segment to ATI and nVidia and should focus their energy on the segment that they are excellent at - the CPU. :)

More proof that the x86 architecture sucks. It's too old, too bloated and inefficient. And the only reason it is still on the market is Intel's monopoly on the PC and the huge amount of money they invest in marketing.

I bet there are at least 2^3 architectures that could replace x86 if they had the chance.

Keep up these types of articles! You don't need to report on every facet of tech, but if you've got something to say, it's probably worth hearing, and I'd like to hear it! You do a great job of going past the press release and news, and you give us a very informed opinion of what it actually means to the industry and the consumer. That's something most websites just don't have the time or ability to do well. Thanks.

Intel doesn't seem to understand that a serial-based architecture stinks at massively parallel tasks. Even if you increase the number of serial processors, you still have the fact that the processing units are based on a serial architecture.

Rasterization is parallel down to the individual pixel; that is, every pixel is rasterized independently of the others. And at the end of the day, the transformation is nothing more than a 3D matrix being mapped to a 2D space. As such, the extra power a CPU can give is mostly going to waste.
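To illustrate that per-pixel independence, here's a minimal pure-Python coverage test (my own sketch, not any real pipeline): each pixel's inside/outside decision depends only on the triangle and that pixel's own coordinates, so a GPU can run all of them at once; the serial loops here merely stand in for that parallelism.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of the edge A->B the point P lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the set of pixels covered by a 2D triangle.

    Assumes counter-clockwise vertex order. Each pixel center is
    tested against all three edges independently of every other
    pixel -- a GPU evaluates these tests in parallel.
    """
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = set()
    for py in range(height):
        for px in range(width):
            cx, cy = px + 0.5, py + 0.5       # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, cx, cy)
            w1 = edge(x2, y2, x0, y0, cx, cy)
            w2 = edge(x0, y0, x1, y1, cx, cy)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((px, py))
    return covered

pix = rasterize(((0.0, 0.0), (8.0, 0.0), (0.0, 8.0)), 8, 8)
print(len(pix))   # 36 pixels covered by this right triangle
```

Nothing in the inner loop reads or writes shared state, which is exactly why the workload maps so well onto thousands of small parallel units and so poorly onto a few big serial cores.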

I figured it would come to this... the GMA-based graphics on current 1156 processors are a stopgap. Intel will take its Larrabee design, break off 8-16 cores, and use those as its next-generation IGP.

From the article: "Sandy Bridge was supposed to be out in Q4 2010, but we’ll see it shipping in Q1 2011. It’ll offer a significant boost in integrated graphics performance. I’ve heard it may finally be as fast as the GPU in the Xbox 360."

Is this a typo? Even if Sandy Bridge's IGP doubles the performance of the Clarkdale IGP (as you state is to be expected, later in the article) I can't see it as being as fast as the 360's GPU.

I've got an i3-530 in one system and I also have an Xbox 360, and I don't think the two are in the same league as far as graphics capabilities are concerned; doubling the performance of Clarkdale's IGP still wouldn't put it in reach of the 360.

The original Xbox's GPU, though - I could definitely see Sandy Bridge's IGP matching that level of performance.

The Xbox 360 GPU is really outdated compared to today's PC graphics. The reasons you see better graphics on the Xbox 360 are:

1. Xbox 360 runs at a lower resolution.
2. It has much better optimized drivers and system software for gaming.
3. Embedded memory gives much higher bandwidth; you get zero-cost AA on Xbox 360.
4. Intel graphics driver support is POOR.
5. No game was ever designed or optimized for Intel graphics.

The last two points could easily cost it 50% of its maximum potential. So while the Xbox 360 GPU is technically only slightly more than twice as fast as the i3's GPU, it ends up 4 times as fast due to the software difference.

Sorry if this has already been posted, but I don't have time right now to read the whole thread.

How would you go to multiple chips (multiple sockets) with the GPU integrated into the CPU? Multi-socket boards in their current form factor are large and expensive. I have no doubt that Intel and AMD will integrate GPUs into their CPUs. It would be a waste to have 8 CPU cores be "standard" at 22 nm, so the mainstream will have a few CPU-like cores and an integrated GPU on a single die.

If you want to use the same chip across the board, though, and use multiple chips/sockets for higher performance markets, then IMO it makes more sense to change the system architecture to effectively integrate the CPU in with the GPU, and move the system memory out farther in the memory hierarchy. Just put the CPU-GPU on a graphics-type board with high-speed memory, and page in and out of the main system memory on the system board. This is a much better form factor for multiple processors than a large, expensive 4-socket board, for example. This also assumes that the software is capable of spreading the workload across multiple chips, which should get easier, or at least more efficient, as you do more and more processing per pixel.

Other form factors are possible with different packaging options. We could have some memory chips in the same package, or even stacked on top of other chips. If you are going to use a lot of chips for just processing, then it may make sense to have a large amount of embedded memory in the chipset on the mainboard.

The massive single-chip GPUs that Nvidia is still focusing on do not make too much sense, IMO. It would be better to target the "sweet spot" for the process tech and use multiple chips for higher performance. Memory is still the limiting factor, though, since it is too expensive to connect memory to multiple GPUs on the same board. It would be better to share the memory somehow, or package memory in with each chip. By paging the directly connected memory in and out of system memory, you should be able to make better use of the local memory to reduce duplication. Right now, most data is duplicated in several places across system memory and graphics memory.

I didn't expect Larrabee to do that well, since I expected it to take significantly more hardware to reach performance comparable to Nvidia and AMD offerings. Intel has the capacity to throw a lot more hardware at the problem, so I have to wonder if it is a software or power issue. If they have to throw a lot more hardware at the problem, then that would also mean a lot more power. Intel can't market a card that consumes 500 watts or something, even if it has similar performance.

"Intel is effectively stating that...integrated graphics will become good enough...even from a gaming perspective."

That's a bold conclusion, and I disagree. The quote said Intel missed milestones, and upon assessment, they decided to focus on IGP, HD video, and mobile. I don't see that as saying IGP will be good enough for gaming, but rather that Intel can't compete in discrete right now.