AMD opens up the Opteron architecture to other microprocessor R&D companies

Today AMD unveiled what it calls the evolution of enterprise-level computing: Torrenza. The new platform, says AMD, will utilize next-generation multi-core 64-bit processors that have the capability to work alongside specialized co-processors. DailyTech previously reported that AMD was considering working with co-processing design firms such as ClearSpeed to develop and design platforms able to utilize specialized processors for specific duties alongside the general host processor in a traditional Opteron socket.

With Torrenza, AMD has designed what it calls an open architecture, based on the next wave of Opteron processors, that allows for what AMD calls "Accelerators." Using these add-in accelerators, a system will be capable of performing specialized calculations, similar in fashion to the way GPUs are used today.

Because of its flexibility, the HyperTransport protocol allows a multitude of co-processor designs that are already compatible with systems on other platforms. For example, with Torrenza, specialized co-processors are able to sit directly in an Opteron socket and communicate directly with the entire system. During the conference, Cray Inc. noted that it had worked with AMD to design systems that can contain up to three different co-processors, all dedicated to specialized tasks. All three processors would communicate harmoniously with the Opteron processors and the system chipset. The open-ended nature of Torrenza will allow companies to design specialized processors that plug in and work with Torrenza-enabled Opteron systems. Although AMD acknowledges that many of these applications can run off PCIe and other connection technologies, Torrenza emphasizes HT-3 and HTX in particular.

AMD representatives said that because of the architecture, Torrenza allows very low-latency communication between the chipset, main processor and co-processors. According to both Cray and AMD, applications can be written in a way where all the various processing architectures are recognized and fully usable. Torrenza-aware applications are on the way, said Cray, but the company did admit that developing them was very much "rocket science."
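Torrenza itself is a hardware platform, and no public programming API was described in the announcement, but the pattern Cray alludes to — an application detecting which processing architectures are present and routing work accordingly, with a fallback to the host CPU — can be sketched generically. Everything below (the device names, the probe function, the task labels) is hypothetical illustration, not any real Torrenza interface:

```python
# Hypothetical sketch of the dispatch pattern a "Torrenza-aware"
# application might use: enumerate available co-processors, then route
# each workload to the best match, falling back to the host CPU.
# None of these names correspond to an actual Torrenza API.

def detect_coprocessors():
    """Pretend probe; a real system would query the platform/chipset."""
    return {"fpu_array": True, "crypto_engine": False}

def run_task(task, data, available=None):
    if available is None:
        available = detect_coprocessors()
    # Route to a specialized co-processor when one is present...
    if task == "vector_math" and available.get("fpu_array"):
        return ("fpu_array", sum(x * x for x in data))
    if task == "checksum" and available.get("crypto_engine"):
        return ("crypto_engine", sum(data) % 255)
    # ...otherwise fall back to the general-purpose host CPU.
    if task == "vector_math":
        return ("host_cpu", sum(x * x for x in data))
    return ("host_cpu", sum(data) % 255)
```

The point of the sketch is only the shape of the control flow: the same application logic runs regardless of which accelerators a given box contains, which is presumably what makes writing such software hard to get right.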

Comments


I'm a consumer and I think it would be a great idea for my home PC. We once all added in a co-processor called a GPU, and they're bandwidth hungry. If all our important "co-processors" had a spot on the board and a HyperTransport pipe to the CPU, the processing power would be insane. AMD has thought this technology through, and we all know they're making a pretty decent decision by coming out with this tech.

Latency would improve, but it is not a real concern when it comes to GPUs. Besides, we can't even saturate AGP or PCIe with current GPUs, so there's no real need to add another socket with direct HyperTransport.

I agree. Or, at least they should have offered two versions (like video cards) - PCI and PCIe. But I guess to save money and to cover people that still have AGP boards (still a great deal), they decided to release only PCI.
I also don't understand why there isn't more stuff for all those empty 1x/4x PCIe slots.

Yeah. We'll solve issues, hit problems, get stuck for a while, until eventually the mechanical parts near light speed and we hit the event horizon and have to start accounting for what special relativity says about time. That will be a huge issue. When time is going at one rate on the computer's end, and a different rate at our end, how do we account for certain things, and monitor in real time? We'll have to learn how to use it to our advantage, i.e., do the inverse and speed time up on the computing end to enable massive leaps in technology on our end (what if you could get 100 years of processing done in one second? CPU speed and technology leaps become irrelevant at that point). If you don't understand cosmology, please don't say I'm a quack. Go look and see what happens to time when you near a black hole, or what happens when you near light speed. We really do need to at least begin to consider different approaches as options as computing as a science is explored and exploited to its fullest. I'll reiterate: if I could use something such as that to change our perspective on computing, hardware tech and developers as we know them would become obsolete, but how much would technology leap?
Yes, this was a bit random, but I'm an astronomer and a tech, so I tend to combine my knowledge of both. Heck, they've already created a micro black hole in a lab (incredibly bad idea, I might add), so we are approaching the time when things like this will be possible. Time travel is impossible. Warping spacetime is not. LOL, I'm WAY off topic.

100 years of processing in one second would only get you a burned-out CPU in a second :)
Your thread reminds me of Prince of Persia: Warrior Within. You go forward in time, and everything is in ruins.
Anyway... although you don't see scientists talking much about hitting the event horizon, they are dealing with problems nowadays too. I am sure your concerns have been thought through more than a million times by genius electronics engineers. It's just not worth investing in right now.
We're not talking about something that could require decades to develop; we're talking about what right now we describe as impossible in our existence.

PS: I would never let you put a black hole in my computer to alter time :D

This certainly would be interesting if it could be integrated down to the desktop level, not to say that what could be done on the grand server level isn't exciting.

We know both ATi and Nvidia have at least dabbled in the idea of a GPU socket, or an AIB with fixed RAM plus a socket, so one could keep their current card but change the GPU without paying for the RAM, PCB, etc. all over again. It would be interesting if GPUs/PPUs etc. could use this tech, and we could just put them in a socket. I wonder where all the memory bandwidth would come from, though... Would it use system memory, or would we use the PCIe cards just for the next generation of G-RAM? I dunno. :P

Uh, I doubt ATi/Nvidia will use that socket because:
1) The memory architecture that ATi/Nvidia use is different from AMD's.
2) ATi/Nvidia use higher-bandwidth memory (GDDR2/3) compared to AMD.
3) Chips designed for that specific socket would still have to be reworked to fit into Intel systems.

You won't see a GPU socket on a mobo because routing high-speed, wide (256-bit) memory buses to dedicated memory sockets demands manufacturing tolerances too tight to be practical, and system memory bandwidth and latency still lag far behind what even mid-range GPUs use.

What is needed is some way of providing a fast bus to a graphics-core that has its own high-speed memory. The ideal situation would be some sort of card that combines the GPU with the dedicated memory, which could be inserted into a compatible slot on a motherboard.

quote: What is needed is some way of providing a fast bus to a graphics-core that has its own high-speed memory. The ideal situation would be some sort of card that combines the GPU with the dedicated memory, which could be inserted into a compatible slot on a motherboard.

Also known as a "Graphics Card," which is inserted into a compatible PCIe or AGP slot...

However, AMD seems to change sockets about every 18 months. Would that mean these "socket chips" have to be redesigned every 18 months to support a new socket?

The other limiting factor is that Torrenza would seem to be AMD-only... so if Ageia made a Torrenza-compatible chip, then it could only work on AMD systems; i.e., they've cut their customer base in half by ignoring Intel systems.

Of course, Ageia could develop PCIe for Intel and Torrenza for AMD... but that's twice as much development work... plus, do you then have to add embedded memory?

At last we have something that's unique and a very interesting concept.
The technocrats at AMD have started to accept what the marketing boys always say: "We know what the buyer/user wants, so give them what they need/want, and this is what they want."
This technology/product is exclusively for business applications; home users need not bother.
This technology is the start of something big, and opportunities are plenty. For example, companies like IBM, Sun, Toshiba, Hitachi, Samsung, NEC, Nvidia, etc. can jump in with their expertise and experience.
AMD should also encourage those small start-ups (venture-capital funded). It's these small start-ups that bring some really exciting products into the market.
AMD should not go it alone, but rather bring in as many partners as possible to make this technology/product first a reality, then a success.

Years and years ago, hard drives were connected serially, processors had separate math co-processors, and there was no FSB or South Bridge. We have come almost full circle now. It will be interesting to see how AMD pulls this off. I would love to buy, say, Far Cry, and run it with a specialized co-processor that gives it a 190 fps speed boost, J/K. But seriously, this would give the Opteron a serious performance boost. I manage several DB servers with Opteron processors, and I can see where this would make AMD the king of the server proc heap.
