This week at IDF, Intel officially announced its response to AMD's Torrenza technology. AMD made waves earlier this year when it announced that it would open up its Opteron platform to the industry, allowing other manufacturers to develop add-in components that communicate directly with the system processor and memory. Going further, AMD also said that Torrenza would allow companies to create accelerators or co-processors that plug directly into an Opteron socket.

Intel said that, like AMD, it also plans to open up its chipset platform technology. The move would be unprecedented for Intel, which has closely guarded its platform for years. Intel's primary goal is to introduce an alternative to AMD's HyperTransport. The technology would allow devices to communicate over a much faster pathway than PCI Express alone could muster. By interfacing directly with the front-side bus (FSB), devices will be able to communicate directly with the processor and/or other accelerators. Non-Intel chips will be able to plug into a Xeon socket, for example, and work in parallel with the main processor or processors.

With the introduction of an open FSB platform, Intel will also be moving toward integrating memory controllers directly onto its processors, something AMD has been doing since the original Opteron several years ago. DailyTech previously reported that a number of large companies were already partnering with AMD to create accelerators and other co-processors. The decision to open up its platform has propelled AMD into the enterprise market in a very big way. It will be interesting to see what Intel's move into an open space will do for the industry.

Currently, the technology is expected to be introduced sometime in the next one to one and a half years. Some analysts speculate that Intel will show off an open FSB specification in 2008 on Itanium, and on the Xeon sometime in 2009. Reports say that Intel is currently working with several companies to create co-processors -- they too would be able to plug directly into a Xeon or Itanium socket.

No, they need to keep waiting and improving it, and only release it 6 months before it's really needed. I don't really want a whole new architecture for the measly 2%-4% speed bump it'd give today. I'd rather they spend their efforts like they have, concentrating on the 20-30% speed bumps without a massive architecture change.

Granted, large speed bumps without an architecture change will eventually become more and more difficult, to the point where the greatest gains could come from an architecture change, but that time isn't today.

I fail to see how frequent updates are a bad thing. Between a company that releases a 30% faster product after 3 years, and another that releases a 9% faster product every year, I'll take the latter. It might be 0.5% slower in the last year, but it kept me ahead of the game in the previous two.
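The arithmetic here checks out; a quick sketch (the percentages are from the comment, the comparison itself is my own illustration):

```python
# Compare three compounded annual 9% bumps against one 30% jump in year three.
incremental = 1.09 ** 3  # three yearly 9% improvements, compounded
big_jump = 1.30          # a single 30% improvement after three years

print(f"Compounded yearly gains: {incremental:.3f}x")  # ~1.295x
print(f"Single big jump:         {big_jump:.3f}x")     # 1.300x
# The yearly cadence lands about 0.5 percentage points behind in year three,
# but it was ahead of the single-jump product during years one and two.
```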

The problem with Intel is that they keep changing sockets, chipsets, memory types, and so on. That's what AMD finally figured out with its "stable platform" initiative: committing to a platform for 5 years earns it support from the enterprise segment. Most companies (apart from a handful of IT mega-corporations) couldn't care less about Torrenza. But every company likes to know that the servers it buys today will be upgradable in 3 or 4 years, and won't have to be completely replaced because mighty Intel decided that everyone should jump onto a new, untested, more expensive, and sometimes slower platform.

That is why AMD is growing in the server space, and is likely to keep on growing (unless they seriously screw up). Intel is going to need at least another 16 months to catch up in terms of interconnect technology (which is the major issue in the enterprise server space), and that's assuming its management doesn't shoot itself in the foot, as it did so often in the recent past. Netburst is what happens when you let the marketing department run a technology company.

You only get a big benefit by changing your architecture if your old architecture sucked (e.g., Netburst).

AMD can't just pull a rabbit out of its hat and get a 40% performance increase/40% power decrease by changing its architecture, because its architecture is pretty good.

It will have to get the increases the hard way (smaller processes, moving to quad core, K8L extensions, on-die L3 cache). This is very similar to how Intel improved the performance of its P4 chips before it finally got rid of them (smaller processes enabled higher clockspeeds, dual core with Pentium D, added HT, boosted cache sizes). (and I mean no disrespect to K8 by comparing it to P4!)

That said, as we move into the future, with smaller and smaller processes and better interconnect, radical new design approaches (e.g., mini cores) might start to make sense. I doubt mini-cores would have made sense when CPUs were being manufactured at 180 nanometers, like the first P4s.

Evolutionary improvements to those new architectures will not only make them better (faster, with more performance per watt); the chipmaker will also learn how to further optimize future processors. If a chipmaker completely turned its back on evolutionary improvements, it could end up making radical new designs with many of the same minor flaws as its old designs.

"Nowadays, security guys break the Mac every single day. Every single day, they come out with a total exploit, your machine can be taken over totally. I dare anybody to do that once a month on the Windows machine." -- Bill Gates