
The teraflop chip is a real CPU that has 80 cores, each running at 3.16 GHz.
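The arithmetic behind the "teraflop" name can be sanity-checked from those numbers. A quick sketch, assuming each core retires 4 floating-point operations per cycle (that per-core throughput figure is my assumption for illustration, not something stated in the thread):

```python
# Rough peak-throughput estimate for the 80-core research chip.
# Core count and clock are from the post above; the 4 FLOPs per
# cycle per core is an assumed figure for illustration.
cores = 80
clock_hz = 3.16e9          # 3.16 GHz per core
flops_per_cycle = 4        # assumed per-core throughput

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} teraflops")  # -> 1.01 teraflops
```

Under that assumption the chip lands at almost exactly one teraflop of peak throughput, which is consistent with the name.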

The 300 GHz or whatever that IBM achieved was not a CPU at all, just a handful of transistors. You need a couple hundred thousand transistors or more to make a CPU, and they all have to run in close proximity to each other. Being able to run a few transistors at 300 GHz while cooling them to -255°C really means nothing.

And OP, yes, this has been posted a couple of times already. It's OK though.

Yeah, that's sort of what I was getting at. As for the teraflop chip, it's also specialized for that particular kind of operation, and who knows what instruction set it uses. I don't think it's any more than a way for them to put out press releases that sound promising. Similarly, graphics cards already achieve comparable processing power by using many simplified cores.

Here is an excerpt:

The cores used on the research chip are much smaller and simpler than those used in Intel's latest line of chips

Smaller and simpler cores, which are most likely also more specialized.

The research chip has 100 million transistors on it, about one-third the number on Intel's current line of chips.

It's not like a CPU the way we know it, but it certainly is good at floating point, which makes me wonder if the PC will start using several different specialized processors instead of a single general-purpose CPU. There are actually some trends and reverse trends that have been happening in this area. One example is graphics cards: in a sense, these have become co-processors that specialize in graphics. In other examples, there has been talk of re-integrating the graphics card and processor, which is the reverse of the previous trend.

In other examples, there has been talk of re-integrating the graphics card and processor, which is the reverse of the previous trend.

AMD's Fusion is looking to do exactly that, and it's not just a theory either. It's supposed to be out sometime in late 2008 or 2009. I think the emphasis is on mobile applications, but who knows what process shrinks could bring in the not-so-distant future.