OSNews: http://www.osnews.com/story/18493/AMD_Fusing_Chip_Plans_for_2009
Exploring the Future of Computing
Copyright 2001-2015, David Adams (adam+nospam@osnews.com)
OSNews.com, http://www.osnews.com
Sun, 02 Aug 2015 21:13:16 GMT
Not like the 486 [http://www.osnews.com/thread?265067]

...and Intel's decision to integrate a floating-point unit into it. The idea of integrating a math co-processor into a standard code grinder is nothing like integrating a graphics chip into a modern multipurpose processor.

He stepped back in chip history to liken the Fusion project to Intel's decision to integrate a floating-point processor into the 486 chip.

Otherwise, it's an interesting idea, especially for general, non-gaming systems and non-high-end graphics systems.

Having built a few of these non-gaming systems for people who want only to surf, email, and use office productivity suites, I can say that motherboards with integrated graphics chips are, without a doubt, the biggest cost savings. Having the same option at the processor level should drive that cost down even further.

-- SReilly, Wed, 22 Aug 2007 21:28:00 GMT

RE: Not like the 486 [http://www.osnews.com/thread?265071]
Pardon my ignorance, but wouldn't a GPU integrated on the same die as the CPU, communicating with it at full bus speed, be faster than any PCI GPU?

-- Fransexy, Wed, 22 Aug 2007 21:47:00 GMT

3D Desktop [http://www.osnews.com/thread?265086]
CPU+GPU integration in the same processor should work hand in hand with Compiz Fusion + GNOME/KDE, accelerating 3D desktop adoption.

-- fffffh, Wed, 22 Aug 2007 22:37:00 GMT

Solves the 99% user scenario [http://www.osnews.com/thread?265093]
OK, so I'm pulling that statistic out of my rear (as most statistics seem to be harvested from someone's rear), but most people aren't that concerned with top graphics performance on today's computers. As long as the drivers are stable, and they aren't into heavy 3D games or CAD-type work, most users don't push even the most mediocre video accelerator to the max with whatever they're doing.

While such integration should logically produce faster graphics processing, because there is a minimal amount of signal delay between the main CPU and the GPU, physics comes together in a couple of ways to keep the top-performing solutions on discrete chips. Why? First, there's die space: top-end modern GPUs are often larger than the general-purpose CPUs available at the same time. Second, there's heat: both modern general-purpose CPUs and modern top-end GPUs generate a huge amount of waste heat in a small amount of space, so the thermal envelope is another major factor in what's feasible to put in the same package.

-- JonathanBThompson, Wed, 22 Aug 2007 22:56:00 GMT

Interesting [http://www.osnews.com/thread?265096]
Wonder what this'll mean for people who tend to go through two or three video cards per CPU upgrade?

It'll be interesting to see the results nonetheless.

-- blitze, Wed, 22 Aug 2007 23:08:00 GMT

RE: Solves the 99% user scenario [http://www.osnews.com/thread?265097]
"First, there's the die space: top-end modern GPUs are often larger than the current general purpose CPU available at the time."

That's because of differences in manufacturing process: CPUs are usually fabricated on a more advanced process and therefore occupy less space. Of course, you are also right that nowadays GPUs tend to have many more transistors than CPUs.

"a huge amount of waste heat in a small amount of space, so the thermal envelope is another major factor in what's feasible to put in the same package."

You again forgot about the manufacturing process, which, if more advanced, helps produce less heat. And maintaining one cooling system for both GPU and CPU is generally a good idea, too, as it allows a single, more advanced and efficient cooling system (e.g. an advanced and expensive heatpipe) instead of two simpler, cheaper ones that together cost as much as the advanced one.

-- cromo, Wed, 22 Aug 2007 23:10:00 GMT (edited 2007-08-22 23:18)

RE[2]: Not like the 486 [http://www.osnews.com/thread?265102]
That's a good point, but you seem to be missing something. Gamers usually upgrade their GPU several times before they upgrade the CPU, and even though having the two integrated on the same die would result in faster data transfer between them, it would be very inflexible, and upgrades would end up costing much more in the long run.

If you look at the advances in game technology, the latest 3D accelerator is a must-have for big-budget games. Just look at when Doom 3 and Quake 4 came out. I remember one article jokingly claiming that a Cray was needed to run both games, and that was just for the general system specs, not the graphics chip specs.

On the other hand, integrating a physics processor (like the AGEIA PhysX chip) onto either the CPU or the GPU die would be an instant sell for any gamer I know.

-- SReilly, Wed, 22 Aug 2007 23:33:00 GMT

RE: Interesting [http://www.osnews.com/thread?265130]
It'll mean nothing to them. It means a lot to the guy in the office who may save $100 on each PC, or who wants to make a lower-power HTPC, etc., but needs the features or performance of a low-end card right now (or the guy whose kid wants to get games from the dollar store to hose his Windows install...). Imagine not that AMD can replace a $300 video card with this move, but that they can replace a $100 card with it.

-- cerbie, Thu, 23 Aug 2007 01:48:00 GMT (edited 2007-08-23 01:51)

interesting if more can be done [http://www.osnews.com/thread?265238]
I was just thinking: considering that AMD has to decide between GDDR and plain old DDR(2?) RAM for the integrated part, I've long thought it would be great to be able to add extra RAM to a video card. It would sure make my old 6800 last a little longer. But then people would upgrade video cards less often, I guess (cynic alert).

-- garfield, Thu, 23 Aug 2007 14:47:00 GMT

RE: 3D Desktop [http://www.osnews.com/thread?265241]
Especially if AMD/ATI decide to get off their butts and give us decent drivers in a timely and efficient manner, before or shortly after the products are released.

Or, better yet, release the specs, since:

A.) they don't seem to be shooting for the fastest GPU with this product

B.) this is DIFFERENT, so maybe it'll lack a lot of the patent-laden old code that they cite as a reason for holding on to the specs so very tightly....

We can hope? When I started on my ancient Compaq with Win95, I NEVER saw computers becoming what they are now!

-- ThawkTH, Thu, 23 Aug 2007 14:53:00 GMT

RE: 3D Desktop [http://www.osnews.com/thread?265386]

"CPU+GPU integration in same processor should work hand in hand with Compiz Fusion + GNOME/KDE, accelerating 3D Desktop adoption."

Unless, of course, they require proprietary drivers.

-- B. Janssen, Thu, 23 Aug 2007 23:23:00 GMT

RE[2]: 3D Desktop [http://www.osnews.com/thread?265458]
Intel never required a special proprietary driver for the 80x87 coprocessor or the SSE extensions. The entire x87 instruction set is publicly documented.

Shouldn't AMD do the same for its own processors?

A GPU can be used for other types of computation, as a vector coprocessor: not only for executing OpenGL calls, but also for scientific or engineering applications.
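As a minimal sketch of the kind of workload that maps well onto a GPU used as a vector coprocessor, here is SAXPY (y = a*x + y) in plain Python. The function and the toy data are illustrative, not taken from any particular GPU API; the point is that each output element is independent of the others, which is exactly the data-parallel shape a GPU's vector units exploit.

```python
def saxpy(a, x, y):
    """SAXPY: compute a*x + y element-wise.

    Each output element depends only on the inputs at the same index,
    so every iteration is independent -- a GPU could execute them all
    in parallel instead of one at a time like this CPU loop does.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]

# Toy vectors; a real GPU workload would be millions of elements.
result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

Scientific codes are full of loops with this structure, which is why the same silicon that pushes pixels can also accelerate engineering number-crunching.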

dear amd, please integrate the entire chipset, not just a gpu. hopefully you already know this, and have some idea how useless chipsets are.

the conventional role of the chipset has been to connect memory and i/o devices to the cpu. amd has already moved the memory controller onto the cpu die, and this news item says that the highest-bandwidth output device in the computer is going to be integrated too. the remaining roles for a chipset are the usb controller, sound card, ethernet card, pcie bus, and the other i/o devices commonly found in pc systems. chipsets also hold the bios bootcode for the system.

now that the two highest-performance subsystems (memory & graphics) have been integrated onto the cpu, there is no need to keep offering a separate i/o chip: just integrate the remaining peripherals on die. x86 is the only market where chipsets are still used, and it's a heapload of complexity that serves no advantage, particularly for anyone in the embedded space (where complexity comes at a tangible cost in board space).

so, put the sound controller, ethernet controller, pcie host, and all those little things into the cpu. it's 2007, and x86 is the only architecture still hampered by being a processor that is completely useless without a high-bandwidth connection to a chipset whose only apparent purpose is to boot the cpu and host its peripherals. get rid of this relic, put everything on die, and bring x86 into the embedded space. every other embedded cpu has a fairly complete i/o complement built in, while x86 is stuck in the past with massively over-complex system engineering.

(i've also heard rumors that fusion is not going to be x86, but no matter the architecture, get rid of the chipset)