Back in March of this year, Intel launched a slew of third generation Core "Ivy Bridge" processors. At the high end sat the Core i7-3770K with 4 cores, Hyper-Threading, a 3.5 GHz clockspeed (3.9 GHz Turbo Boost), 8 MB of L3 cache, and a 77W TDP for $332. The lineup went down in features – and price – from there all the way to the Core i5-3330S. The 3330S had four cores, 6 MB of L3 cache, a 65W TDP, and a clockspeed of 2.7 GHz (3.2 GHz Turbo Boost). Further, just about every CPU that was not a K, S, or T edition came equipped with the cut-down HD 2500 integrated processor graphics. While the list comprised 18 new processors, the lower-end Core i3 Ivy Bridge CPUs were noticeably absent.

Fortunately, FanlessTech has managed to get ahold of pricing and specifications for five of those lower cost Intel chips. The new additions to Intel's lineup include three Ivy Bridge processors and two Sandy Bridge CPUs. Specifically, we have the i3-3240T, i3-3220T, Pentium G2100T, Pentium G645T, and Pentium G550T. All of those parts have a TDP of 35W and are priced very affordably.

Model          | Architecture | Cores / Threads | Clockspeed | L3 Cache | TDP | Launch Price ($USD)
i3-3240T       | Ivy Bridge   | 2/4             | 2.90 GHz   | 3 MB     | 35W | $138
i3-3220T       | Ivy Bridge   | 2/4             | 2.80 GHz   | 3 MB     | 35W | $117
Pentium G2100T | Ivy Bridge   | 2/2             | 2.60 GHz   | 3 MB     | 35W | $75
Pentium G645T  | Sandy Bridge | 2/2             | 2.50 GHz   | 3 MB     | 35W | $64
Pentium G550T  | Sandy Bridge | 2/2             | 2.20 GHz   | 2 MB     | 35W | $42

The Core i3-3240T and i3-3220T are dual core Ivy Bridge processors built on a 22nm process, and are priced at just over $100. The cheapest Ivy Bridge CPU is actually the Pentium G2100T at $75, so the barrier to entry for Intel’s latest chips is much lower than it was a few months ago. Intel’s second generation Core architecture is still alive and kicking as well, with the Pentium G645T and G550T at $64 and $42 respectively.

Two specifications are still unknown: Turbo Boost clockspeeds (if any) and which version of processor graphics these chips will feature. On the graphics front, I think HD 2500 is a safe bet, but Intel may throw everyone a curve ball and pack the higher-end processor graphics into the low end units – which are arguably the systems that need the better GPU the most.

Granted, these lower cost processors are not going to give you anywhere near the performance of the i7-3770K that we recently reviewed, but they are still important for low power and budget desktops. Bringing the power efficiency improvements of Ivy Bridge down to under $100 is definitely a good thing.

As far as availability goes, you can find some of the new low TDP processors at online retailers now (such as the Core i3-3220T), but others are not for sale yet. While I do not have any exact dates, they should be available shortly.

AMD has good news for those looking to build or upgrade an AMD powered system: the company is lowering prices on processors across the board as well as adding the new quad core Socket AM3+ FX-4130, with a 3.8 GHz base clock and 3.9 GHz in Turbo. It is not yet for sale but is expected to retail for $112, easily affordable for most users looking for a lower cost system.

"The value proposition for the first generation AMD A-Series APUs is also compelling: A quad-core CPU and a DirectX® 11 highly-capable gaming GPU on a single-chip with more than 500 GFLOPs of compute power, for under $100 (A8 3850). Working together, the CPU and GPU can accelerate a range of applications to outperform a stand-alone CPU in some use cases. The lower-power first generation AMD A-Series APUs are even more affordable and are receiving positive reviews for small-form factor HTPCs as well. Price reductions across the first generation AMD A-Series APUs stack are in effect now, so please check your local retailer!"

The new Ivy Bridge processors introduced a new member of Intel's processor graphics family, the HD 2500, which has received less than positive reviews, as the previous generation HD 3000 outperforms it. However, those tests covered Windows applications and games, whereas the testing at Phoronix specifically pertains to performance under Linux. They compare the i5-2400S, i5-2500K, i5-3470, and i7-3770K in a series of benchmarks to test not only performance but also compatibility with Linux. It seems that the performance of the HD 3000 and HD 2500 is much closer under Linux than it was under Windows, though both still lose out to the HD 4000.

"Since the launch of Intel's Ivy Bridge processors earlier this year there have been many benchmarks of the Intel Core i7 3770K with its integrated HD 4000 graphics and then more recently have been Linux testing of the Intel Core i7 3517UE from the CompuLab Intense-PC and Intel Core i7-3615QM as found on the Apple Retina MacBook Pro. The newest Intel Ivy Bridge chip to play with at Phoronix is the Intel Core i5 3470, which bears an Intel HD 2500 graphics core. In this article are benchmarks of the Intel HD 2500 Ivy Bridge graphics with the open-source Intel Linux graphics driver stack."

It is that time of year again: another installment of the PC Perspective Hardware Workshop! Once again we will be presenting on the main stage at Quakecon 2012, being held in Dallas, TX, August 2nd-5th.

Main Stage - Quakecon 2012

Saturday, August 4th, 2pm CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year. We love giving back to the community of enthusiasts and gamers that drive us to do what we do! Get ready for 2 hours of prizes, games and raffles and the chances are pretty good that you'll take something out with you - really, they are pretty good!

Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out! Our thanks to NVIDIA, MSI Computer and Corsair!!

Live Streaming

If you can't make it to the workshop - don't worry! You can still watch the workshop live on our page right here as we stream it over one of several online services. Just remember this URL: http://pcper.com/workshop and you will find your way!

Case Mod Competition

Along with the Hardware Workshop, PC Perspective is working with Modders Inc on the annual case mod contest! There are two categories for the competition: "Scratch Built" and "In the Box" that will allow those that build their computer enclosures from the ground up to compete separately from those that heavily modify their existing cases and systems.

During a European roadshow, Gigabyte showed off a new Mini-ITX form factor motherboard for the first time. Called the GA-H77N-WIFI, the motherboard is well suited for home theater and home server tasks. Based on the H77 chipset, it is compatible with the latest Intel Core i3 (coming soon), i5, and i7 "Ivy Bridge" processors. The board goes for an all-black PCB with minimal heatsinks on the VRMs, and the form factor is the same size as the motherboard that Ryan recently used in his Mini-ITX HTPC build.

The GA-H77N-WIFI features an LGA 1155 processor socket, two DDR3 DIMM slots, a single PCI Express slot, two SATA 3Gbps ports, two SATA 6Gbps ports, and an internal USB 3.0 header. There are also two Realtek Ethernet controller chips and a Realtek audio chip.

Rear IO on the Mini-ITX motherboard includes:

1 PS/2 port

2 USB 3.0 ports

2 HDMI ports

1 DVI port

2 Antenna connectors (WIFI)

4 USB 2.0 ports

2 Gigabit Ethernet ports

1 Optical S/PDIF port

5 Analog audio jacks

The dual Gigabit Ethernet ports are interesting. The board could easily be loaded with open source routing software and turned into a router/firewall/Wi-Fi access point. To really take advantage of the Ivy Bridge support, you could put together a nice media server and HTPC recording/streaming box (using something like SiliconDust's HDHomeRun networked tuners or Ceton's USB tuner, since this board offers little in the way of PCI-E slots). What would you do with this Mini-ITX Gigabyte board?
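
The router idea can be sketched with a few standard Linux commands – a minimal NAT setup, assuming one port faces the internet and one faces the LAN (the interface names eth0/eth1 are placeholders and will differ per system):

```shell
#!/bin/sh
# Minimal sketch: turn a dual-NIC Linux box into a NAT router.
# Assumes eth0 = WAN (internet-facing) and eth1 = LAN; adjust names to taste.

# Enable IPv4 forwarding between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade LAN traffic leaving through the WAN interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Let replies back in, and forward anything from LAN out to WAN
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
```

Dedicated routing distributions handle all of this (plus DHCP, DNS, and Wi-Fi AP duty) out of the box, but the underlying mechanics are roughly the above.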

Unfortunately, there is no word yet on pricing or availability, but the motherboard is likely coming soon. You can find more information on the motherboard over at tonymacx86, who managed to snag some photos of the board.

According to VR-Zone, an Intel roadmap has surfaced that outlines the upper end of the company’s CPU product line through the end of the third quarter of 2013. The most interesting, albeit also most confusing, entry is the launch of Ivy Bridge-E processors in the quarter after the Haswell mainstream parts.

The latest Intel CPU product roadmap outlines the company’s expected product schedule through to the end of Q3 2013. The roadmap from last quarter revealed that Intel’s next architecture, Haswell, would be released in the second quarter of 2013 with only Sandy Bridge-E SKUs to satisfy the enthusiasts who want the fastest processors and the most available RAM slots. It was unclear what would eventually replace SBE as the enthusiast part and what Intel expects for their future release cycles.

Latest rumors continue to assert that Sandy Bridge-E X79 chipset-based motherboards will be able to support Ivy Bridge-E with a BIOS update.

The downside: personally, I am not a big fan of upgrading CPUs frequently.

In the past I have never kept a motherboard and replaced just the CPU. While I have gone through the annoyance of applying thermal paste – and guessing where Arctic Cooling stains will appear over the next two weeks – I tend to just use the default thermal pad which comes with the stock coolers. I am not just being cheap or lazy, either; I simply tend not to feel a jump in performance unless I allow three to five years of CPU product cycles to pass by.

But that obviously does not reflect all enthusiasts.

But how far behind on the enthusiast architectures will Intel allow themselves to get? Surely someone with my taste in CPU upgrades should not have to wait 8-10 years to upgrade their processor if this doubling of time between releases continues?

What do you think is the future of Intel’s release cycle? Is this a one-time blip from trying to make Ivy Bridge scale up, or do you expect Intel to start releasing high-end parts progressively less frequently?

There has been quite a bit of news lately from AMD, and very little of it good. What has perhaps dominated the headlines throughout this past year is the number of veteran AMD employees who have decided (or were pushed) to seek employment elsewhere. Not much has been said by these departing employees, but Rory Read certainly started things off with a bang by laying off some 10% of the company just months into his tenure.

Now we finally have some good news in terms of employment. AMD has hired a pretty big name in the industry. Not just a big name, but a person who was one of the primary leads on two of AMD’s most successful architectures to date. Jim Keller is coming back to AMD, and at a time when it seems AMD needs veteran leadership that is very much in touch with not just the industry, but CPU architecture design.

Jim was a veteran of DEC and worked on some of the fastest Alpha processors of the time. Much could be written about DEC and how it let what could have been one of the most important and profitable architectures in computing history sit essentially on the back burner while it focused on seemingly dinosaur age computing. After the Alpha was sold off and DEC itself was sold, Jim found his way to AMD and played a very important role at that company.

His first project was helping to launch the K7, working primarily on system engineering. The vast majority of design work for the K7 was finished by the time he signed on, but he apparently worked quite a bit on integrating it into the new socket architecture that was derived from the DEC Alpha. Where Jim really earned his keep was in co-authoring the x86-64 specification and serving as lead architect on the AMD K8 series of processors. While he left in 1999, the mark he left on AMD is essentially indelible.

After AMD he joined SiByte (later acquired by Broadcom) and was lead architect on a series of MIPS processors used in networking devices. This lasted until 2003, when he again left a company seemingly more prosperous than when he began.

PA Semi was the next stop, where he again worked primarily on networking specific SoCs, this time utilizing the PowerPC architecture. So far, counting on fingers, Jim has worked on five major ISAs (Alpha, x86, x86-64, MIPS, and PowerPC). These chips were able to power networking devices with 10 Gb throughput. PA Semi was then purchased by Apple in 2008.

At Apple, Jim was Director of Platform Architecture and worked with yet another major ISA: ARM. He helped develop several major and successful products in the A4 and A5 processors that have powered the latest iPhone and iPad products from the Cupertino giant. To say that this individual has had his fingers in some very important pies is an understatement.

Jim now rejoins AMD as CVP and Chief Architect of CPU Cores. He will report directly to Mark Papermaster. His primary job is to improve execution efficiency and consistency, as well as implement next generation features into future CPU cores which will keep AMD competitive with not only Intel, but other rising competitors in the low power space. This is finally some good news for AMD, as they are actually adding talent rather than losing it. While Jim may not be able to turn the company around overnight, he does look to be an important piece of the puzzle, with a huge amount of experience and knowhow across multiple CPU ISAs. If there is anyone who can tackle the challenges in front of AMD in the face of a changing world, this might be the guy. So far he has had a positive impact at every stop he has made, and perhaps this could prove to be the pinnacle of his career. Or it could be where his career goes to die. It is hard to say, but I do think that AMD made a good hire with Jim.

Eurogamer and Digital Foundry believe that a next-generation Xbox developer kit somehow got into the hands of an internet user looking to fence it for $10,000. If the rumors are true, a few interesting features are included in the kit: an Intel CPU and an NVIDIA graphics processor.

A little PC perspective on console gaming news…

If the source and the people who corroborate it are telling the truth, somehow Microsoft lost control of a single developer kit for its upcoming Xbox platform. Much like their Cupertino frenemies, who lost an iPhone 4 prototype in a bar that was then sold for $5,000 to a tech blog, the current owner of the Durango devkit is looking for a buyer at a mere $10,000. It is unlikely he found it on a bar stool.

One further level of irony: the Xbox 360 alpha devkits were repurposed Apple Power Mac G5s.

Image source: DaE as per its own in-image caption.

Alpha developer kits will change substantially externally but often do give clues to what to expect internally.

The first Xbox 360 software demonstrations were performed on slightly altered Apple Power Mac G5s. At that time, Apple was built on a foundation of PowerPC chips from IBM, while the original Xbox ran Intel hardware. As it turned out, the Xbox 360 was indeed based on the PowerPC architecture.

Huh, looks like a PC.

The leaked developer kit for the next Xbox is said to be running x86 hardware and an NVIDIA graphics processor. The kit is also said to carry 8GB of RAM, although since devkits typically carry extra memory, that only suggests the next Xbox will have less than 8GB. With RAM as cheap as it is these days, a great concern for PC gamers is that Microsoft could load the console to the brim with memory and remove the main technical advantage of our platform. Our PCs will still have that advantage once our gamers stop being scared of 64-bit compatibility issues. As a side note, those specifications are fairly similar to the equally nebulous specs rumored for Valve’s Steam Box demo kit.

The big story is the return to x86 and NVIDIA.

AMD is not fully ruled out of the equation if they manage to provide Microsoft with a bid they cannot refuse. Of course, practically speaking, AMD has only an iceball’s chance in Hell of having a CPU presence in the upcoming Xbox – upgraded from snowball. More likely than not, Intel will pick up the torch that IBM kept warm for them, thanks to their superior manufacturing.

PC gamers might want to pay close attention from this point on…

Contrast the switch for Xbox from PowerPC to x86 with the recent commentary from Gabe Newell and Rob Pardo of Blizzard. As Mike Capps alluded to – prior to the launch of Unreal Tournament 3 – Epic is concerned about the console mindset coming to the PC. It is entirely possible that Microsoft could be positioning the Xbox platform closer to the PC. Perhaps there are plans for cross-compatibility in exchange for closing the platform around certification and licensing fees?

Moving the Xbox platform closer to the PC in hardware specifications could renew Microsoft's attempts to close the platform, something they failed to do with the Games for Windows Live initiative. What makes the PC platform great is the lack of oversight over what can be created for it and the ridiculous time span of compatibility for what has been produced for it.

It might be no coincidence that the two companies who are complaining about Windows 8 are the two companies who design their games to be sold and supported for decades after launch.

And if the worst does happen, PC gaming has been a stable platform despite repeated claims of its death – but would the user base be stable enough to handle a shift to Linux? I doubt that most would even understand the implications of proprietary platforms on art enough to consider it. And what about Adobe and the other software and hardware tool companies who have yet to consider Linux a viable platform?

Yesterday ARM announced a multi-year partnership with fab TSMC to produce sub-20nm processors that utilize 3D FinFET transistors. The collaboration and data sharing between the two companies will allow the fabless ARM SoC company the ability to produce physical processors based on its designs and will allow TSMC a platform to further its process nodes and FinFET transistor technology. The first TSMC-produced processors will be based on the ARMv8 architecture and will be 64-bit compatible.

The addition of 3D transistors will allow ARM processors to be even more power efficient and better suited for mobile devices. Alternatively, it could allow for higher clockspeeds at the same TDP ratings as current chips. The other big news is that the chips will be moving to a 64-bit compatible design, which is huge considering ARM processors have traditionally been 32-bit. By moving to 64-bit, ARM is positioning itself for server and workstation adoption, especially with the ARM-compatible Windows 8 build due to be released soon. Granted, ARM SoCs have a long way to go before taking market share from Intel and AMD in the desktop and server markets in a big way, but they are slowly but surely becoming more competitive with the x86-64 giants.

TSMC’s R&D Vice President Cliff Hou stated that the collaboration between ARM and TSMC will allow TSMC to optimize its FinFET process to target “high speed, low voltage and low leakage.” ARM further qualified that the partnership would give ARM early access to the 3D transistor FinFET process that could help create advanced SoC designs and ramp up volume production.

I think this is a very positive move for ARM, and it should allow them to make much larger inroads into the higher-end computing markets and see higher adoption beyond mobile devices. On the other hand, it is going to depend on TSMC to keep up and get the process down. Considering the issues with creating enough 28nm silicon to meet demand for AMD and NVIDIA’s latest graphics cards, a sub-20nm process may be asking a lot. Here’s hoping that it’s a successful venture for both companies, however.

AMD recently released its Q2 2012 earnings (as did Intel), and things are continuing to look bleak for the number two x86-64 processor company. The company stated that the lower than expected numbers were the result of a weak economy and a time of year when people are not buying computers. There may be some truth to that, as the second quarter sits in the post-holiday lull and before the big back-to-school retail push. On the economy front, it’s harder for me to say, but without going political or armchair economist on you, the market seems better than it has been, though it is really still recovering – at least from a consumer perspective.

AMD reported revenue of $1.41 billion in the second quarter of 2012, which does not seem terrible on its own. But when compared to Intel’s $13.5 billion Q2 revenue, and considering that AMD’s number represents an 11 percent drop from last quarter and a 10 percent decrease versus Q2 2011, it’s easy to see that things are not looking good for the company.
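
As a quick sanity check on those percentages, here is a sketch of the arithmetic; the prior-quarter figure of roughly $1.59 billion is an approximation used for illustration:

```python
# Quarter-over-quarter revenue change; figures in billions of USD.
# Q1 2012 revenue of ~$1.59B is an approximate figure, for illustration only.
def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new (negative means a decline)."""
    return (new - old) / old * 100.0

q2_2012 = 1.41
q1_2012 = 1.59  # approximate

print(f"{pct_change(q2_2012, q1_2012):.1f}%")  # roughly -11%, matching the reported drop
```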

According to Paul Lilly over at MaximumPC, when breaking AMD’s numbers down by business segment it gets even worse. Its Computing Solutions business fell 13-percent versus the previous quarter and Q2 2011. On the other hand, the company has the ever-so-slightly better news that the graphics card division stayed the same versus last year and was down 5-percent versus last quarter. The company was quoted as stating that the respective revenue drops were due to lower desktop sales in China and Europe and a “seasonally down quarter.”

PC Perspective’s Josh Walrath recently wrote an editorial (note: pre-earnings call) about AMD's new plan to focus on APUs, take on less risk, and push out new products faster. As a forward-looking article, it talks about the impact of the company’s upcoming Vishera and Kaveri processors as well as AMD’s increased focus on heterogeneous system architectures. It remains to be seen whether the new path will help the company make money or hurt it. AMD cautions that Q3 2012 may not see increased revenue, but here’s hoping that they will be able to pull together a strong Q4 and sell chips during the big holiday shopping season.

I for one am excited about the prospects of Kaveri and believe that HSA could work; it is what AMD needs to focus on, as it is one advantage they have over NVIDIA and Intel – NVIDIA does not have an x86-64 license, and Intel’s processor graphics leave room for improvement, to put it mildly. AMD may not have the best CPU cores, but theirs is not an inherently bad design, and where they are moving with the full convergence of the CPU and GPU is much farther ahead of the other big players.

If you are in the San Diego area today or tomorrow, you should make it a point to stop by Belo San Diego (http://www.belosandiego.com/ 438 E Street), a night club near the convention area, to visit with AMD and the Geek and Sundry group.

Felicia Day, best known for her role in the web series The Guild, will be part of the ongoing event between 10am and 2am both today (the 12th) and tomorrow, sponsored by AMD. She is excited to be there - just look!

If you stop by the Belo nightclub during those hours you can take home a FREE AMD A8-3870K APU (with accompanying motherboard) if you agree to use your social media outlets (Twitter and Facebook) to tell your friends about the experience. You will in fact become an AMD Social Media Reviewer!

Sorry – if you aren't in the San Diego area, you are out of luck on this promotion. This is just another reason why attending ComicCon is so enticing!

Taking a half dozen PandaBoard ES boards, each carrying a 1.2GHz dual-core ARM Cortex-A9 processor from Texas Instruments, Phoronix built a 12-core ARM machine to test against AMD's E-350 APU as well as Intel's Atom Z530 and a Core i7-3770K. Before you assume that the ARM boards will be totally outclassed by any of these processors, note that Phoronix is testing performance per Watt; the ARM system uses a total of 31W when fully stressed and idles below 20W, which gives ARM a big lead on power consumption.

Phoronix tested these four systems and the results were rather surprising, as it seems Intel's Ivy Bridge is a serious threat to ARM. Not only did it provide more total processing power, its performance per Watt tended to beat ARM, and, more importantly to many, it is cheaper to build an i7-3770K system than it is to set up a 12-core ARM server. The next generation of ARM chips has some serious competition.

"Last week I shared my plans to build a low-cost, 12-core, 30-watt ARMv7 cluster running Ubuntu Linux. The ARM cluster that is built around the PandaBoard ES development boards is now online and producing results... Quite surprising results actually for a low-power Cortex-A9 compute cluster. Results include performance-per-Watt comparisons to Intel Atom and Ivy Bridge processors along with AMD's Fusion APU."
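
The metric Phoronix is chasing is simple to compute; a sketch with made-up numbers (not their actual results) illustrates why total power draw matters as much as raw score:

```python
# Hypothetical performance-per-Watt comparison. The scores and wattages
# below are invented for illustration and are NOT Phoronix's measurements.
def perf_per_watt(score: float, watts: float) -> float:
    """Benchmark score divided by measured full-load power draw."""
    return score / watts

systems = {
    "12-core ARM cluster": (310.0, 31.0),    # assumed score, 31W full-load draw
    "Core i7-3770K":       (1400.0, 120.0),  # assumed score, whole-system draw
}

for name, (score, watts) in systems.items():
    print(f"{name}: {perf_per_watt(score, watts):.1f} points/W")
```

Even with a 4x power-draw disadvantage, a fast enough chip can come out ahead on this metric, which is exactly the surprise in the article above.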

Intel does not respond well when asked about Larrabee. Though Intel received a lot of bad press from the gaming community about what they were trying to do, that does not necessarily mean Intel was wrong about how they set up the architecture. The problem with Larrabee was that it was being considered as a consumer level product with an eye toward breaking into the HPC/GPGPU market. For the consumer level, Larrabee would have been a disaster; Intel simply would not have been able to compete with AMD and NVIDIA for gamers’ hearts.

The problem with Larrabee and the consumer space was a matter of focus, process decisions, and die size. Larrabee is unique in that it is almost fully programmable and features really only one fixed function unit. In this case, that fixed function unit was all about texturing. Everything else relied upon the large array of x86 processors and their attached vector units. This turns out to be very inefficient when it comes to rendering games, which is the majority of work for the consumer market in graphics cards. While no outlet was able to get a hold of a Larrabee sample and run benchmarks on it, the general feeling was that Intel would easily be a generation behind in performance. When considering how large the die size would have to be to even get to that point, it was simply not economical for Intel to produce these cards.

Xeon Phi is essentially an advanced part based on the original Larrabee architecture.

This is not to say that Larrabee does not have a place in the industry. The actual design lends itself very nicely to HPC applications. With each chip hosting many x86 processors with powerful vector units attached, these products can provide tremendous performance in HPC applications which can leverage those units. Intel utilized x86 processors instead of the more homogeneous designs that AMD and NVIDIA use (lots of stream units doing vector and scalar work, but no x86 units or traditional networking fabric connecting them), and this gives Intel a leg up on the competition when it comes to programming. While GPGPU applications are working with products like OpenCL, C++ AMP, and NVIDIA’s CUDA, Intel is able to rely on the many current programming languages which can target x86. With the addition of wide vector units on each x86 core, it is relatively simple to make adjustments to utilize these new features, as compared to porting something over to OpenCL.

So this leads us to the Intel Xeon Phi. This is the first commercially available product based on an updated version of the Larrabee technology. The exact code name is Knights Corner. This is a new MIC (many integrated cores) product based on Intel’s latest 22 nm Tri-Gate process technology. The details are scarce on how many cores this product actually contains, but it looks to be more than 50 of a very basic “Pentium” style core; essentially low die space, in-order, and all connected by a robust networking fabric that allows fast data transfer between the memory interface, PCI-E interface, and the cores.

Each Xeon Phi promises more than 1 TFLOP of performance (as measured by Linpack). When combined with the new Xeon E5 series of processors, these products can provide a huge amount of computing power. Furthermore, with the addition of the Cray interconnect technology that Intel acquired this year, clusters of these systems could provide for some of the fastest supercomputers on the market. While it will take until the end of this year at least to integrate these products into a massive cluster, it will happen and Intel expects these products to be at the forefront of driving performance from the Petascale to the Exascale.

These are the building blocks that Intel hopes to utilize to corner the HPC market. With powerful CPUs and dozens if not hundreds of MIC units per cluster, the potential compute power should bring us to the Exascale that much sooner.

Time will of course tell whether Intel will be successful with Xeon Phi and Knights Corner. The idea behind this product seems sound, and the addition of powerful vector units attached to simple x86 cores should make the software migration to massively parallel computing just a wee bit easier than what we are seeing now with GPU based products from AMD and NVIDIA. The areas where those other manufacturers have advantages over Intel are their many years of work with educational institutions (research), software developers (gaming, GPGPU, and HPC), and industry standards groups (Khronos). Xeon Phi has a ways to go before being fully embraced by these other organizations, and its future is certainly not set in stone. We have yet to see third party groups get a hold of these products and put them to the test. While Intel CPUs are certainly class leading, we still do not know the full potential of these MIC products as compared to what is currently available in the market.

The one positive thing for Intel’s competitors is that it seems their enthusiasm for massively parallel computing is justified. Intel just entered that ring with a unique architecture that will certainly help push high performance computing more towards true heterogeneous computing.

Last year after that particular AFDS, there was much speculation that AMD and ARM would get a whole lot closer. Today we have confirmation of that in two ways. The first is that AMD and ARM are founding members of the HSA Foundation. This endeavor is a rather ambitious project that looks to make it much easier for programmers to access the full compute power of a CPU/GPU combo, or as AMD likes to call it, the APU. The second confirmation is one that has been theorized for quite some time, though few people hit upon the actual implementation: AMD is licensing ARM cores and actually integrating them into its x86 based APUs.

AMD and ARM are serious about working with each other. This is understandable as both of them are competing tooth and nail with Intel.

ARM has a security technology that it has been working on for several years now called ARM TrustZone: a set of hardware and software products that provide a greater amount of security for data transfer and transactions. The hardware basis is built into ARM licensed designs and is implemented in literally billions of devices (though not all of them have it enabled). The biggest needs this technology addresses are secure transactions and password protected logins. Money is obviously quite important, but with identity theft and fraud on the rise, secure logins to personal information or even social sites are reaching the same level of importance as large monetary transactions.

AMD will actually be implementing a Cortex-A5 processor in its APUs to handle the security aspects of ARM TrustZone. The A5 is the smallest Cortex processor available, so it makes sense to use it in a full APU, where it will not take up an extreme amount of die space. When made on what I would assume to be a 28 nm process, a single A5 core would likely take up only a small fraction of the die area.

This is not exactly the licensing agreement that many analysts had expected from AMD, but it is a start. I would generally expect AMD to be more aggressive in the future with offerings based on ARM technologies. Remember that some time ago AMD's Rory Read pronounced their GPU technology "the crown jewel" of their IP lineup; it makes little sense for AMD to limit this technology to standalone GPUs and x86-based APUs. If AMD is serious about heterogeneous computing, I would expect them to expand eventually, perhaps not into the handheld ARM market initially, but certainly into more server-level products based on 64-bit ARM technology.

Cortex-A5: coming to an AMD APU near you in 2013/2014. Though probably not in quad core fashion as shown above.

AMD made a mistake once by selling off their ultra-mobile graphics group, Imageon. This was sold off to Qualcomm, who is now a major player in the ARM ecosystem with their Snapdragon products based on Adreno graphics (“Adreno” is an anagram of “Radeon”). With the release of low powered processors in both the Brazos and Trinity line, AMD is again poised to deliver next generation graphics to the low power market. Now the question is, what will that graphics unit be attached to?

Today is a big day for AMD as they, along with four other major players in the world of processors and SoCs, announced the formation of the HSA Foundation. The HSA Foundation is a non-profit consortium created to define and promote an open approach to heterogeneous computing. The primary goal is to make it easier for software developers to write programs for the parallel power of GPUs, both integrated and discrete; the HSA (Heterogeneous Systems Architecture) Foundation wants to enable users to take full advantage of all the processing resources available to them.

On stage at the AMD Fusion Developer Summit in Bellevue, WA, AMD announced the formation of the consortium in partnership with ARM, Imagination Technologies, MediaTek, and Texas Instruments; some of the biggest names in computing.

The companies will work together to drive a single architecture specification and simplify the programming model to help software developers take greater advantage of the capabilities found in modern central processing units (CPUs) and graphics processing units (GPUs), and unlock the performance and power efficiency of the parallel computing engines found in heterogeneous processors.

There are a lot of implications in this simple statement, and many open-ended questions that we hope to get answered this week at AFDS. The idea of a "single architecture specification" sets a lot of things in motion and makes us question the direction in which both AMD and the traditionally ARM-based companies of the HSA Foundation will be moving. AMD has had the APU, and the eventual complete fusion of the CPU and GPU, on its roadmap for quite a few years and has publicly stated that in 2014 it will have its first fully HSA-capable part. We are still assuming that this is an x86 + Radeon based part, but that may or may not be the long-term goal; ideas of ARM-based AMD processors with Radeon graphics technology AND of Radeon-based ARM processors built by other companies still swirl around the show. There are even rumors of Frankenstein-like combinations of x86 and ARM based products for niche applications.

Looks like there is room for a few more founding partners...

Obviously ARM and others have their own graphics IP (ARM has Mali, Imagination Technologies has PowerVR), and those GPUs can be used for parallel processing in much the same way that we think of GPU computing on discrete GPUs and APUs today. ARM processor designers are well aware of the power and efficiency benefits of utilizing all of the available transistors and processing power correctly, and the emphasis on an HSA-style system design makes a lot of sense moving forward.

My main question for the HSA Foundation concerns its goals: obviously it wants to promote a simpler approach for programmers, but what does that actually translate to on the hardware side? It is possible that both x86 and ARM-based ISAs could continue to exist, with libraries and compilers built to correctly handle applications for each architecture, but that would seem to me to run against the goals of such a partnership of technology leaders.
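To make that concrete, the per-ISA status quo the Foundation wants to replace can be sketched as a runtime-dispatch registry: the application calls one function, and plumbing underneath picks the build for the ISA it is running on. All names below are illustrative, not any real HSA API.

```python
import platform

# Hypothetical sketch of today's per-ISA plumbing. HSA's stated goal is to
# bury this kind of dispatch below the runtime so developers never write it.
_impls = {}

def register(isa):
    """Register an implementation for a given machine architecture string."""
    def wrap(fn):
        _impls[isa] = fn
        return fn
    return wrap

@register("x86_64")
def _saxpy_x86(a, x, y):
    # stand-in for an AVX-tuned kernel
    return [a * xi + yi for xi, yi in zip(x, y)]

@register("aarch64")
def _saxpy_arm(a, x, y):
    # stand-in for a NEON-tuned kernel
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy(a, x, y):
    # fall back to the plain path if the running ISA is unrecognized
    fn = _impls.get(platform.machine(), _saxpy_x86)
    return fn(a, x, y)

print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # [5.0, 8.0]
```

Every new ISA means another entry in the table and another tuning effort, which is exactly the duplication a single architecture specification is meant to eliminate.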

In a meeting with AMD personnel, the most powerful and inspiring idea from the HSA Foundation is summed up with this:

"This is bigger than AMD. This is bigger than the PC ecosystem."

The end game is to make sure that all software developers can EASILY take advantage of both traditional and parallel processing cores without ever having to know what is going on under the hood. AMD and the other HSA Foundation members continue to tell us that this optimization can be completely ISA-agnostic, though the technical hurdles for that to take place are severe.

AMD will benefit from the success of the HSA Foundation by finally getting more partners involved in promoting the idea of heterogeneous computing, and powerful ones at that. ARM is the biggest player in the low-power processor market, responsible for the Cortex and Mali architectures found in the vast majority of mobile processors. As those partners trumpet the same cause as AMD, more software will be developed to take advantage of parallel computing, and AMD believes its GPU architecture gives it a definite performance advantage once that takes hold.

What I find most interesting is the unknown: how will this affect the roadmaps of all the hardware companies involved? Are we going to see the AMD APU roadmap shift to an ARM-IP system? Will we see companies like Texas Instruments fully integrate the OMAP and PowerVR cores into a single memory space (or ARM with Cortex and Mali)? Will we eventually see NVIDIA jump onboard and lend their weight towards true heterogeneous computing?

We have much more to learn about the HSA Foundation and its direction for the industry, but we can easily say that this is probably the most important processor-company collaboration announced in many years – and it happened without the 800-pound gorilla that is Intel in attendance. By going after the ARM-based markets where Intel is already struggling to compete, AMD can hope to create a foothold with technological and partnership advantages and return to a seat of prominence. This harkens back to the late 1990s when AMD famously put together the "virtual gorilla" alliance with many partners to take on Intel.

While Trinity is currently rated at 726 GFLOPS, the Kaveri APU, due late in 2012 or early 2013, will have at least 1 TFLOPS of total compute performance. That is at least a 37% boost over the previous generation.
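The uplift quoted above is easy to sanity-check, taking 1 TFLOPS as the 1,000 GFLOPS floor:

```python
# Figures from the article: Trinity at 726 GFLOPS, Kaveri at >= 1 TFLOPS.
trinity_gflops = 726
kaveri_gflops = 1000  # the "at least 1 TFLOPS" floor

uplift = (kaveri_gflops / trinity_gflops - 1) * 100
print(f"{uplift:.1f}% boost")  # 37.7% boost
```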

While perusing the listings and descriptions of sessions and presentations for the upcoming AMD Fusion Developer Summit, I came across an interesting one that surprised me. Tomorrow, June 11th, at 5:15pm PST, you can stop by the Grand Hyatt in Bellevue to learn about the upcoming AMD Wireless Display technology.

AWD (AMD Wireless Display) is a multi-platform application family that enables wireless display technology in much the same way Intel has been pushing with WiDi. While Intel's take requires very specific Intel wireless controllers and is only recently, with the release of Ivy Bridge, getting a full-steam push from Intel, AMD's take on it is quite different.

Intel introduced WiDi in 2010

According to the brief on this AFDS session, AMD wants to create an API and SDKs for application developers to integrate AWD into software, and to leverage the Wi-Fi Alliance for an open-standards-compliant front end. Using AMD APUs, the goal is to provide lower latency for encoded video and audio while still using the required MPEG2TS wrapper. We are also likely to learn that AMD hopes to open AWD to a wider array of wireless devices.

AMD often takes this "open" approach to new technologies, with mixed results: CUDA has been entrenched for many years while OpenCL adoption is only starting to take hold, and NVIDIA's 3D Vision is still the standard for 3D gaming on the PC.

After having had quite a few chances to use Intel's Wireless Display (WiDi) technology myself, I can definitely say that the wireless approach is the one I am most excited about, and the one with the most potential to revolutionize the way we work with displays and computing devices. I am eager to see which partners AMD has been working with and what demonstrations they will have for AWD next week.

Phoronix has been very busy lately getting to grips with the functionality of Ivy Bridge on Linux, and since these processors are much more compatible than their predecessors, that has resulted in a lot of testing. The majority of the testing focused on the performance of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64 on an i7-3770K, using a wide variety of programs and benchmarks. Their initial findings favoured GCC over all other compilers, as in general it took top spot, with LLVM having issues in some of their tests. They then started to play around with the instruction sets the processor was allowed to use: by disabling some of the newer features they could emulate how the Ivy Bridge processor would perform as if it were from a previous generation of chips, which is a good way to judge the improvement in raw processing power. They finished up by testing its virtualization performance with BareMetal, the Kernel-based Virtual Machine (KVM), and Oracle VM VirtualBox. You can see how they compared right here.

"From an Intel Core i7 3770K "Ivy Bridge" system here is an 11-way compiler comparison to look at the performance of these popular code compilers on the latest-generation Intel hardware. Among the compilers being compared on Intel's Ivy Bridge platform are multiple releases of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64."

While I don't know exactly what surprises will be on display this year I am looking forward to seeing the improvement from software developers after having another 12 months to work on APU-accelerated applications. HSA (heterogeneous system architecture) has been getting a lot of buzz from AMD and the industry as we push towards a combined memory address space and the ultimate acceleration of programs across both serialized and parallel processors on the same die.

Technical tracks and sessions to learn about HSA and programming for it

If you can't make it, though, you should definitely follow the whole event right here at PC Perspective – the easiest way is to keep track of our AFDS tag to make sure you don't miss any of the potentially industry-shifting news!