In June of last year, Apple announced that it would migrate all of its systems to Intel platforms by its Worldwide Developers Conference (WWDC) in 2007. Earlier this week Apple announced that the transition was complete, finalized by the introduction of the brand new Intel based Mac Pro and Xserve systems for professional workstation and server customers, respectively. That puts Apple a full year ahead of "schedule", although very few expected the transition to take until 2007, considering that the x86 version of OS X had been in development in parallel with the PowerPC version for the past six years.

It's been a long and not always exciting road, going from the PowerPC G4 and G5 based systems to the new Core Duo and Xeon based Macs. Externally, little has changed with the new Macs, but on the inside these machines are full blown PCs running Mac OS X. The entire transition has honestly been quite impressive on Apple's part, as switching CPU architectures this seamlessly is not easy to do.

We've looked at previous Intel based Macs, the MacBook Pro and the new iMac, and have generally come away quite pleased with the move to Intel. There are still some hiccups here and there, mostly thanks to applications from companies like Adobe and Microsoft that have yet to provide Universal Binary support, but for the most part the end user isn't aware that Apple's OS and software have gone through dramatic changes over the past year.

We will be bringing you full coverage of the new Mac Pro, including a complete review of the system compared to other PCs as well as its predecessors. However, we found ourselves talking so much about the specs of the new Mac Pro that we decided to put that discussion in an article before our review goes live. We're still awaiting our review sample and hope to begin testing in the coming weeks, but until then there are a number of items worth discussing about the new Mac Pro. We'll examine the price impact of Apple's choice of Xeon over Core 2 processors, fully explain FB-DIMMs and what they mean to you, and talk about the chipset, graphics and storage options on the new platform (while offering some cheaper alternatives to Apple's Build-to-Order upgrades).

Unlike the outgoing PowerMac G5, the Mac Pro only ships in one standard configuration with the following specs:

Apple Mac Pro

CPU: 2 x Intel Xeon 5150 processors (2.66GHz)
Memory: 2 x 512MB DDR2-667 ECC FB-DIMMs
Graphics: NVIDIA GeForce 7300 GT
Hard Drive: 1 x 250GB SATA 3Gbps
Optical: 1 x SuperDrive (DVD+R DL/DVD±RW/CD-RW)
Price: $2499 ($2299 with Educational Discount)

The point of this article is to help those of you ordering today analyze and understand the specs, as well as to provide some of the necessary background information for the review that will follow in the coming weeks. Without further ado, let's talk about one of the most important aspects of the new Mac Pro: the CPUs.

CPU Analysis

The biggest part of Apple's Mac Pro announcement is of course the move to Intel processors, and as many had predicted, Apple chose to go with Intel's Woodcrest based Xeon processors instead of Core 2 for the Mac Pro. Architecturally, the Woodcrest based Xeons are no different than the Conroe based Core 2 processors, so you get the same level of performance we showcased in our Core 2 review. With Xeon you do get the ability to go to multi-socket systems and a faster FSB, neither of which is possible with Core 2. Note that the Woodcrest based Xeons use a 771-pin LGA socket that is different from the 775-pin LGA socket used by the desktop Core 2 processors, so you couldn't swap them even if you wanted to.

One of our biggest fears with Apple's use of Xeon instead of Core 2 is that it would put pricing of the Mac Pro above and beyond reasonable, but consulting Intel's price list left us pleasantly surprised:

Core 2   Clock/Cache/FSB    Price     Xeon   Clock/Cache/FSB    Price
X6800    2.93GHz/4M/1066    $999      5160   3.00GHz/4M/1333    $851
E6700    2.66GHz/4M/1066    $530      5150   2.66GHz/4M/1333    $690
E6600    2.40GHz/4M/1066    $316      5140   2.33GHz/4M/1333    $455
E6400    2.13GHz/2M/1066    $224      5130   2.00GHz/4M/1333    $316
E6300    1.86GHz/2M/1066    $183      5120   1.86GHz/4M/1066    $256
-        -                  -         5110   1.60GHz/4M/1066    $209

Believe it or not, Intel's Xeon 5160, a faster alternative to the Core 2 Extreme X6800, is actually priced lower. At the very high end, from a purely processor standpoint, it makes sense for Apple to opt for the Xeon over the desktop Core 2 route because it's cheaper. It's not often that you see server/workstation processors priced lower than their desktop counterparts, so Core 2 Extreme owners should feel a bit ripped off (though the excellent performance can definitely numb the pain).

The Xeon 5160 aside, you're basically paying a premium for going with a Xeon over a Core 2, despite the fact that most of the time all you're getting is a faster FSB. While the 1333MHz FSB will provide some benefit, in the case of the Xeon 5150 vs. the Core 2 Duo E6700 you're paying 30% more for that advantage. Compared to the E6400, the Xeon 5130 costs 40% more and is clocked lower, although it has a larger L2 cache.
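The premiums quoted above come straight out of Intel's price list; a few lines of Python make the arithmetic easy to check (prices are the per-CPU figures from the table earlier):

```python
# Price premium of each Xeon over its closest desktop Core 2 counterpart,
# using Intel's list prices. A negative premium means the Xeon is cheaper.
pairs = {
    "Xeon 5160 vs. X6800": (851, 999),
    "Xeon 5150 vs. E6700": (690, 530),
    "Xeon 5140 vs. E6600": (455, 316),
    "Xeon 5130 vs. E6400": (316, 224),
}

for name, (xeon, core2) in pairs.items():
    premium = (xeon - core2) / core2 * 100
    print(f"{name}: {premium:+.0f}%")
```

The 5150 works out to roughly +30% over the E6700 and the 5130 to roughly +41% over the E6400, while the 5160 comes in about 15% under the X6800.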

If you want to put Apple's performance in perspective, the slowest Mac Pro you can get is outfitted with a pair of Xeon 5130s. In single threaded applications, we'd expect the system to perform similarly to a Core 2 Duo E6400 system. In well multi-threaded applications, you'd be looking at significantly higher performance (dual dual core vs. single dual core).

The Xeon 5150 will obviously be a bit faster than the equivalently clocked Core 2 Duo E6700, thanks to the faster FSB. In our Core 2 review, most Windows desktop applications showed a 0 - 7.5% increase in performance due to the faster FSB, with the average increase being 2.3%. Multithreaded applications won't necessarily take better advantage of the faster FSB; it really depends on the application itself.

And obviously the Xeon 5160 will be faster than the current fastest desktop processor, Intel's Core 2 Extreme X6800. The performance advantage won't be tremendous, but it will be there.

If we were simply looking at single CPU configurations, Apple's decision to choose Woodcrest/Xeon over Conroe/Core 2 would have been an effort to keep average selling prices high, but none of the Mac Pros are single socket systems. Instead, Apple made an expensive but important move with the Mac Pros; by choosing Xeon, Apple can implement two sockets on the motherboard, which today means you can execute four simultaneous threads (dual dual core). By the end of this year, Intel will be shipping Clovertown, a quad core version of the dual core Xeons you see in today's Mac Pros. If Apple chooses to, with minimal effort, it could release 8-core Mac Pro systems in a matter of months (assuming Intel keeps its accelerated CPU schedule).

With four cores on a single die, the faster FSB matters that much more, so the 0 - 7.5% increase due to the 1333MHz FSB that we saw in our Core 2 review will go up. Seeing as how we were playing with quad core Kentsfield processors back in late May/early June, you had better believe that Apple designed its Mac Pro motherboards with support for Clovertown. While Apple isn't really touting processor upgradability with the new Mac Pro, it wouldn't be too far fetched to think that you could swap a pair of Clovertowns in these systems with no more than a firmware update.

The Chipset

A little talked about aspect of the new Mac Pro is the chipset used, which appears to be Intel's 5000X. The only other option is the Intel 5000P, but the 5000P only has x8 PCIe slots off of the MCH, and thus it wouldn't make sense given that Apple is only really touting single GPU or multi-display configurations with the Mac Pro.

The 5000X is by no means a desktop chipset: it supports up to four FB-DIMM memory channels and has two independent 64-bit FSB interfaces, one for each Xeon socket. With two FSBs running at 1333MHz apiece, there's a total of 21.3GB/s of bandwidth between the chipset and the CPUs, which matches up perfectly with the 21.3GB/s of memory bandwidth offered if you populate all four FB-DIMM channels on the motherboard. Note that if you only use two FB-DIMMs, you'll only be running in two channel mode, which will limit you to 10.67GB/s of memory bandwidth. While we have yet to test it, there may be a performance penalty when running in two channel mode.
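The bandwidth matching is simple arithmetic: each 64-bit bus moves 8 bytes per transfer, so a quick sanity check looks like this:

```python
# Peak bandwidth check for the 5000X's two FSBs and four FBD channels.
# Each FSB is 64 bits (8 bytes) wide at 1333MT/s; behind each AMB sits an
# 8-byte-wide DDR2-667 (667MT/s) DRAM interface.
GB = 1e9

fsb_bw = 2 * 8 * 1333e6       # two FSBs x 8 bytes x 1333MT/s
mem_bw_4ch = 4 * 8 * 667e6    # all four FBD channels populated
mem_bw_2ch = 2 * 8 * 667e6    # only two FB-DIMMs installed

print(f"FSB total:    {fsb_bw / GB:.1f} GB/s")
print(f"Memory, 4ch:  {mem_bw_4ch / GB:.1f} GB/s")
print(f"Memory, 2ch:  {mem_bw_2ch / GB:.2f} GB/s")
```

Both the FSB total and the four-channel memory total land at 21.3GB/s, while the default two-DIMM configuration is limited to 10.67GB/s.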

The 5000X MCH (the "System Controller") supports a total of 24 PCIe lanes, divided into one x16 and one x8. The x8 appears to connect to the ICH (labeled in the graphic above as the "I/O Controller") while the x16 is what drives the primary PCIe slot (the one that has enough room for a double height card).

The ICH has another 12 PCIe lanes coming off of it, and it looks like Apple splits them into two x4s and one x1 for its remaining PCIe slots. Apple continues to exclusively use physical x16 slots, so each slot can be used by any sort of card (video card or not), rather than having x1 and x4 slots on the motherboard. Because of the Mac Pro's four x16 slots, you can order the system with up to four GeForce 7300 GTs for some 8-monitor action.

The ICH used on the motherboard is what we believe to be Intel's 6321ESB and it supports up to 6 SATA devices and 2 PATA devices, which is where you get the expansion capabilities that are built into the system. You've got four SATA hard drive bays and support for up to two SuperDrives. Apple still relies on OS X to provide RAID support, so only RAID 0 and RAID 1 are supported through software.

Understanding FB-DIMMs

Since Apple built the Mac Pro out of Intel workstation components, it unfortunately has to use more expensive Intel workstation memory. In other words, cheap unbuffered DDR2 isn't an option; it's time to welcome ECC enabled Fully Buffered DIMMs (FBD) to your Mac.

Years ago, Intel saw two problems with most mainstream memory technologies: 1) as we pushed for higher speed memory, the number of memory slots per channel went down, and 2) the rest of the world was going serial (USB, SATA and more recently HyperTransport, PCI Express, etc.), yet we were still using fairly antiquated parallel memory buses.

The number of memory slots per channel isn't really an issue on the desktop; currently, with unbuffered DDR2-800 we're limited to two slots per 64-bit channel, giving us a total of four slots on a motherboard with a dual channel memory controller. With four slots, just about any desktop user's needs can be met with the right DRAM density. It's in the high end workstation and server space that this limitation becomes an issue, as memory capacity can be far more important, often requiring 8, 16, 32 or more memory sockets on a single motherboard. At the same time, memory bandwidth is also important as these workstations and servers will most likely be built around multi-socket multi-core architectures with high memory bandwidth demands, so simply limiting memory frequency in order to support more memory isn't an ideal solution. You could always add more channels, however parallel interfaces by nature require more signaling pins than faster serial buses, and thus adding four or eight channels of DDR2 to get around the DIMMs per channel limitation isn't exactly easy.

Intel's first solution was to totally revamp PC memory technology; instead of going down the path of DDR and eventually DDR2, Intel wanted to move the market to a serial memory technology: RDRAM. RDRAM offered significantly narrower buses (16 bits per channel vs. 64 bits per channel for DDR), much higher bandwidth per pin (at the time a 64-bit wide RDRAM memory controller would offer 6.4GB/s of memory bandwidth, compared to only 2.1GB/s for a 64-bit DDR266 interface) and of course the ease of layout benefits that come with a narrow serial bus.

Unfortunately, RDRAM offered no tangible performance increase, as the demands of processors at the time were nowhere near what the high bandwidth RDRAM solutions could deliver. To make matters worse, RDRAM implementations were plagued by higher latency than their SDRAM and DDR SDRAM counterparts; with no use for the added bandwidth and with higher latency, RDRAM systems were no faster, if not slower, than their SDR/DDR counterparts. The final nail in the RDRAM coffin on the PC was pricing; your choices at the time were either to spend $1000 on a 128MB stick of RDRAM, or to spend $100 on a stick of equally performing PC133 SDRAM. The market spoke and RDRAM went the way of the dodo.

Intel quietly shied away from attempting to change the natural evolution of memory technologies on the desktop for a while. Intel eventually transitioned away from RDRAM, even after its price dropped significantly, embracing DDR and more recently DDR2 as the memory standards supported by its chipsets. Over the past couple of years however, Intel got back into the game of shaping the memory market of the future with this idea of Fully Buffered DIMMs.

The approach is quite simple in theory: what caused RDRAM to fail was the high cost of using a non-mass-produced memory device, so why not develop a serial memory interface that uses mass produced commodity DRAMs such as DDR and DDR2? In a nutshell, that's what FB-DIMMs are: regular DDR2 chips on a module with a special chip that communicates over a serial bus with the memory controller.

The memory controller in the system no longer has a wide parallel interface to the memory modules; instead it has a narrow 69-pin interface to a device known as an Advanced Memory Buffer (AMB) on the first FB-DIMM in each channel. The memory controller sends all memory requests to the AMB on the first FB-DIMM in each channel, and the AMBs take care of the rest. By fully buffering all requests (data, command and address), the memory controller no longer has a load that significantly increases with each additional DIMM, so the number of memory modules supported per channel goes up significantly. The FB-DIMM spec says that each channel can support up to 8 FB-DIMMs, although current Intel chipsets can only address 4 FB-DIMMs per channel. With a significantly lower pin count, you can cram more channels onto your chipset, which is why the Intel 5000 series of chipsets features four FBD channels.
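The daisy-chain arrangement is easy to picture as code. Below is a toy Python model (the class and method names are ours, purely illustrative, not an actual protocol model): the controller hands every request to the first AMB, which either services it from its own module or forwards it down the chain.

```python
# Toy model of one FBD channel: the memory controller only ever talks to the
# first AMB; each AMB services requests aimed at its own module and forwards
# everything else to the next AMB in the chain.
class AMB:
    def __init__(self, module_id, next_amb=None):
        self.module_id = module_id
        self.next_amb = next_amb

    def handle(self, target_module, hops=1):
        if target_module == self.module_id:
            return f"serviced by module {self.module_id} after {hops} hop(s)"
        if self.next_amb is None:
            return "no such module on this channel"
        return self.next_amb.handle(target_module, hops + 1)

# Build a channel with 4 FB-DIMMs, chained from the last module back to the
# first; `channel` ends up being the AMB the controller talks to directly.
channel = None
for mid in reversed(range(4)):
    channel = AMB(mid, channel)

print(channel.handle(0))  # nearest DIMM: 1 hop
print(channel.handle(3))  # farthest DIMM: 4 hops
```

The hop counts are exactly why farther modules in the chain add latency, which we'll come back to shortly.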

Bandwidth is a little more difficult to determine with FBD than it is with conventional DDR or DDR2 memory buses. During Steve Jobs' keynote, he put up a slide that listed the Mac Pro as having a 256-bit wide DDR2-667 memory controller with 21.3GB/s of memory bandwidth. Unfortunately, that claim isn't totally honest, as the 256-bit wide interface does not exist between the memory controller and the FB-DIMMs. The memory controller in the Intel 5000X MCH communicates directly with the first AMB it finds on each channel, and that interface is actually only 24 bits wide per channel, for a total bus width of 96 bits (24 bits per channel x 4 channels). The bandwidth part of the equation is a bit more complicated, but we'll get to that in a moment.

Below we've got the anatomy of an AMB chip:

The AMB has two major roles, to communicate with the chipset's memory controller (or other AMBs) and to communicate with the memory devices on the same module.

When a memory request is made, the first AMB in the chain figures out whether the request is to read/write its own module or another module. If it's the former, the AMB parallelizes the request and sends it off to the DDR2 chips on the module; if the request isn't for this specific module, it passes the request on to the next AMB and the process repeats.

As we mentioned before, the AMB interface is only 24 bits wide thanks to its high speed serial nature, but there's far more to this bus than meets the eye. The AMB bus is split into a 14-bit read bus ("Northbound" lanes) and a 10-bit write bus ("Southbound" lanes), with these buses operating at 6 times the DDR2 frequency (e.g. if you're using DDR2-667 FB-DIMMs, the AMB links run at 667MHz x 6, or 4GHz). By having dedicated read and write buses, reads and writes can happen simultaneously, increasing performance in some circumstances. The read bus is wider than the write bus because your system reads from memory more often than it writes to it.

In each bus there are no dedicated lines for addresses, commands and data; all three types of signals are sent over the same pins. In conventional parallel interfaces, the address of the memory request is placed on a dedicated set of address pins and the data at that address is then placed on another set of data pins. With FBD, the data is sent in packets or frames (much like network traffic); each frame generally consists of either address/control signals or command and data signals. The data frames are 15 bytes for writes and 21 bytes for reads, but not all of that is raw data; some of it is ECC data that we don't normally count when comparing bandwidths, so we'll have to strip that out.

For northbound traffic (reads), each frame is 12 cycles long and each frame that contains data can have a maximum of 16-bytes of data, meaning that our peak bandwidth with DDR2-667 FB-DIMMs is 5.34GB/s. For southbound traffic (writes), each frame is still 12 cycles long but only 8 bytes are transferred per frame, giving us a peak data bandwidth of 2.67GB/s.
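Those peak numbers fall directly out of the frame format; here's the arithmetic as a short Python sketch (DDR2-667, 12-cycle frames):

```python
# Peak FBD data bandwidth per channel with DDR2-667: the serial links run at
# 6x the DDR2 rate, a frame occupies 12 link cycles, and a data frame carries
# 16 bytes northbound (reads) or 8 bytes southbound (writes).
link_rate = 667e6 * 6            # ~4GHz serial link
frames_per_sec = link_rate / 12  # one frame every 12 link cycles

read_bw = frames_per_sec * 16    # bytes/s, northbound
write_bw = frames_per_sec * 8    # bytes/s, southbound

print(f"Read:  {read_bw / 1e9:.2f} GB/s")
print(f"Write: {write_bw / 1e9:.2f} GB/s")
print(f"Total: {(read_bw + write_bw) / 1e9:.2f} GB/s")
```

That gives the 5.34GB/s read and 2.67GB/s write peaks quoted above, or just over 8GB/s combined per channel.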

Total data bandwidth then weighs in at just over 8GB/s for a single channel, but keep in mind that not every frame will be a data frame, so the effective bandwidth will be noticeably lower. What we're touching on here is one of the major drawbacks of serial buses: there's greater overhead than with a parallel bus. Although there is more peak bandwidth on the AMB bus than there is between the AMB and its DDR2 devices (8GB/s vs. 5.34GB/s), there may actually be less peak read bandwidth once you factor in the overhead of the serial bus. There's of course less peak write bandwidth available, but writes take much longer to complete and generally can't ever reach peak bandwidth numbers. At the end of the day, despite the best efforts, there may be some situations where you are actually bandwidth limited by your AMB in an FBD system. How frequently those situations occur and what the average performance impact is are unfortunately both very complicated questions to answer, and beyond the scope of this already long article.

The FBD proposition gets a little less appetizing when you look at the other major aspect of memory performance: latency. Since the protocol calls for point-to-point communication between AMBs, there's an additional latency penalty for each AMB that has to be contacted in the search for the right FB-DIMM to fulfill the read/write request. Intel states that the additional delay is in the range of 3 - 5 ns per FB-DIMM, meaning that a configuration of 8 x 1GB FB-DIMMs will be slower than 4 x 2GB FB-DIMMs. The argument here in favor of FBD is that even though you give up some latency, you make up for it in the ability to cram more memory channels on your memory controller and support configurations with more DIMMs.
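To put the 3 - 5ns figure in context, here's a rough worst-case comparison of the two 8GB configurations mentioned above (illustrative arithmetic only, assuming the DIMMs are spread evenly across the four channels):

```python
# Worst-case extra AMB latency using Intel's quoted 3-5ns penalty per
# FB-DIMM hop: a request to the last module in a channel's chain crosses
# every AMB on that channel once.
def worst_case_extra_ns(dimms_per_channel, ns_per_hop):
    return dimms_per_channel * ns_per_hop

configs = [("8 x 1GB (2 DIMMs per channel)", 2),
           ("4 x 2GB (1 DIMM per channel)", 1)]

for label, per_channel in configs:
    low = worst_case_extra_ns(per_channel, 3)
    high = worst_case_extra_ns(per_channel, 5)
    print(f"{label}: {low}-{high}ns added latency, worst case")
```

The 8 x 1GB configuration's worst case is roughly double that of 4 x 2GB, which is why fewer, denser modules are preferable once you're past four DIMMs.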

There's one more issue worth talking about and that is power consumption. The AMB on each FB-DIMM has a pretty big job, converting the 4GHz serialized memory requests into 667MHz parallel requests that can be serviced by regular DDR2 memories. This translation process consumes quite a bit of power and thus causes the AMB to dissipate a noticeable amount of heat. The Mac Pro page on Apple's website states the following about the FB-DIMMs it uses:

"To help dissipate heat, every Apple DIMM you purchase for your Mac Pro comes with its own preinstalled heat sink. This unique heat sink lets fans run slower - and quieter - yet keeps the memory cool enough to run at full speed."

That heatsink is made necessary by the AMB on each FB-DIMM, which seems to dissipate somewhere between 3 - 6W. The reason there's a range is that an AMB's workload depends on how close it is to the memory controller. The first AMB in the chain has to service all requests from the memory controller, passing them along as needed, while the last AMB in the chain only receives those requests that are specifically targeted at its module. With 8 FB-DIMM slots in the Mac Pro, you're looking at up to another ~40W of power if you've got all slots populated.
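Multiplying the quoted per-AMB range across all eight slots brackets that ~40W estimate (a trivial back-of-the-envelope sketch):

```python
# Back-of-the-envelope AMB power for a fully populated Mac Pro:
# 8 FB-DIMMs at roughly 3-6W per AMB, depending on chain position.
slots = 8
low_total = slots * 3    # every AMB at the low end of the range
high_total = slots * 6   # every AMB at the high end of the range
print(f"AMB power with all {slots} slots filled: ~{low_total}-{high_total}W")
```

With a realistic mix of busy and idle AMBs, the total lands near the middle of that 24 - 48W range.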

Despite using a lower pincount bus, current FB-DIMMs have the same number of pins as DDR2 DIMMs. The reason is that each AMB needs two sets of buses: one to communicate with the FB-DIMM before it, and one to communicate with the module after it, so approximately 120 signaling pins are needed for each AMB. Once you add power and ground pins, not to mention pins reserved for future use, you're not that far off the 240 pins used on current desktop DDR2 DIMMs. Rather than introducing a brand new connector and module design, FB-DIMMs simply take the current DDR2 DIMM design and key it differently so it only works in FB-DIMM slots. Remember that the signal routing from the chipset to the first memory slot still only uses 69 signaling pins, since the chipset doesn't have to communicate with anything "before" it in the chain, so you do still get the benefits of a lower pincount interface.

Front and back of a FB-DIMM

The major benefit that the Mac Pro seems to get from the use of FB-DIMMs is that its memory bus and FSBs can offer identical bandwidths at 21.3GB/s (ignoring the unknowns we discussed earlier about the efficiency of FBD). By using a lower pincount interface, Intel was able to fit four FBD channels on its 5000 series chipset and thus offer the bandwidth equivalent of a 256-bit wide DDR2 memory controller. However the additional memory bandwidth comes at the high cost of additional latency, power consumption and more expensive DIMMs.

There are a couple of things you can do to maximize performance and minimize the cost of additional memory on your Mac Pro, and it starts with the number of FB-DIMMs you configure your system with. The Mac Pro ships with a default configuration of 2 x 512MB FB-DIMMs; unfortunately, that means you're only using two of the four available memory channels, cutting your peak theoretical memory bandwidth in half. You'll want to upgrade to at least four FB-DIMMs so that you can run in quad-channel mode. In the coming weeks we'll be running some tests to figure out exactly how much additional performance you'll gain by doing that, and whether it's noticeable or not.

If you do find yourself about to fill all 8 memory slots on the Mac Pro, we would suggest moving to 4 higher density modules instead. Remember that you add 3 - 5ns of latency (at minimum) with each FB-DIMM hop, so the fewer FB-DIMMs you have, the lower your worst case memory latency will be. But since you still want to be running in quad-channel mode, you don't want to drop below four FB-DIMMs, making four the magic number with the Mac Pro.

As always, Apple's pricing for memory upgrades is much higher than what you can get elsewhere. We are going to test memory compatibility once our Mac Pro system arrives, but there's no reason that FB-DIMMs that work on current Xeon motherboards shouldn't work in the Mac Pro. We would recommend holding off on ordering a Mac Pro with any of Apple's memory upgrades until we can verify that 3rd party memory will work; if it does, the table below will give you an idea of the savings possible:

The prices above were for Kingston ECC DDR2-667 FB-DIMMs, which may or may not work on the Mac Pro. We will find out for sure in the coming weeks but the price differential is great enough that you may want to hold off on ordering a lot of memory just in case you can get it a lot cheaper from elsewhere.

Drive Options

One of the biggest complaints about the PowerMac G5s was that the chassis, despite being absolutely massive, only featured two 3.5" hard drive bays and no room for extra optical devices. The new Mac Pro chassis, although identical from the outside, has been totally revamped on the inside to accommodate a total of four 3.5" SATA hard drives and up to two optical drives.

Apple allows you to order your system with as few or as many of these bays populated as you'd like, but if you look closely at the pricing, you may want to avoid letting Apple upgrade your hard drives for you. Below we compared the price of Apple's drive upgrades to the best prices we were able to find for various 500GB drives on our Real Time Price Engine:

For $200 more, Apple will swap the default 250GB SATA drive for a 500GB unit, manufacturer and model unspecified. Compared to the cost of a standalone 500GB drive, Apple's upgrade is actually the cheapest you can get; however, the better route is to have Apple leave the 250GB drive intact and simply pay $209 for another 500GB drive, giving you a total of 750GB of storage for the same price Apple would charge you for 500GB.

Populating the 2nd, 3rd and 4th drive bays with 500GB drives costs $400 a pop at Apple, so the choice here is simple: buy the drives on your own. Note that you can currently buy Seagate's 750GB drive for as little as $346, a far better bargain than Apple's $400 upgrades. Unlike video cards, which require Mac specific versions, all SATA hard drives should work just fine in the Mac Pro.

Apple also moved to a two optical drive layout with the new Mac Pro; currently you have the option of upgrading to two SuperDrives, but we'd expect that in the future one of those bays may end up with a Blu-ray or HD-DVD drive in it.

GPU Options

We had hoped for more extensive GPU options with the Mac Pro; unfortunately, Apple only gave us three. The options are a GeForce 7300 GT for those who aren't doing any real 3D work, an ATI Radeon X1900 XT and an NVIDIA Quadro FX 4500.

The Radeon X1900 XT used in the Mac Pro appears to have a 1.3GHz memory clock, which is slower than the 1.45GHz clock of the PC version. The core clock is also slower than the PC version at 600MHz, instead of 625MHz. Historically, ATI Mac Edition cards have always been clocked lower than their PC counterparts; ATI explained the reasoning behind this disparity as having to do with basic supply and demand. The demand for Mac video cards is lower than their PC counterparts, so ATI runs them at lower clock speeds to maintain their desired profit per card regardless of whether they are selling to Mac or PC markets.

The interesting offering on the Mac Pro is the Quadro FX 4500, which is basically a higher clocked version of the GeForce 7800 GTX with some additional workstation class features. With a 450MHz core clock (compared to 430MHz on the 7800 GTX) and a 1050MHz memory clock, the Quadro FX 4500 should actually be slower than the Radeon X1900 XT on the Mac Pro. If you compare the X1900 XT to NVIDIA's offerings on the PC, you generally need a 7800 GTX 512MB (with its faster clock speeds) or a 7900 GTX to outperform the X1900 XT; a vanilla 7800 GTX won't cut it. However, Apple's own benchmarks indicate that the Quadro FX 4500 is faster in games than the Radeon X1900 XT; even though Doom 3 and Quake 4 are the titles of choice, ATI should still be faster. It's tough to say which will run cooler/quieter; the X1900 XT is built on a 90nm process while the Quadro FX 4500 is a 110nm GPU, but with different clocks, transistor counts and fans, we'll just have to find out for ourselves.

We're working on getting both cards in house for a head to head comparison, but there could be some explanations for the performance standings being what they are today. NVIDIA's OpenGL drivers may be better than ATI's under OS X or it's also possible that some of the GPU-level enhancements enabled on Quadro GPUs are somehow coming into play in the Quake 4/Doom 3 benchmarks that Apple is reporting.

Finally, there's the default configuration option, the NVIDIA GeForce 7300 GT. If you don't plan on doing any serious 3D work, the 7300 GT is more than enough for OS X, especially since it comes equipped with 256MB of memory. Even 30" Cinema Display owners will have a fairly smooth Exposé experience with 256MB of video memory.

Final Words

In many ways, the fact that Apple didn't change the exterior of the PowerMac G5 chassis symbolizes the upgrade that is the Mac Pro. The focus is entirely on the inside of the chassis: the Woodcrest based Xeon processors, the FB-DIMMs and the four SATA drive bays. However, we'd caution you against spending too much on a brand new Mac Pro right away, despite Apple boasting immediate availability. By the end of this year, Intel will begin shipping Clovertown, which should be a drop-in quad-core replacement for the current dual-core Woodcrest based Xeon CPUs. It should allow Apple to replace its dual 3GHz Xeon 5160 option with a pair of quad-core Clovertown based Xeon CPUs. We're not yet sure what the clock speeds or price points will be, but it may be worth waiting for if you can. If you can't wait, there is always the possibility that you may be able to simply upgrade the CPUs in the Mac Pro later on, in which case you may want to go with the lowest end option now and drop in Clovertown later. Over the coming days we will hopefully be able to figure out what will and won't be possible with the new Mac Pro design, but we're hoping that the move to Intel means good things for the cost of upgradability on these systems.

By using Xeon on the high end Mac Pros, Apple also paves the way for a more mid-range offering utilizing Intel's Core 2 Duo processor. While it's inevitable that the iMac will be revamped to use Core 2, we're wondering if Apple will decide to introduce a lower end Mac Pro or simply a vanilla "Mac" desktop offering with a single Core 2 processor.

Honestly, however, we did have higher hopes for some aspects of the Mac Pro. Like many Apple users, we were looking forward to a revamped case design; while the internals have changed, the fact of the matter is that the standard configuration of the Mac Pro still weighs in at a hefty 42.5 lbs. The bulky chassis does scream high end workstation, but it would have been nice to see something a bit different, possibly slimmed down. Then again, based on how Apple has handled transitioning all of its previous products to Intel platforms, we shouldn't have expected anything different on the chassis side. Revamped Apple designs should really start to surface in 2007, most likely starting with the iMac and MacBook/MacBook Pro, as those models are still using older Yonah based Core Duo processors.

On the technology side, we wanted to see a bit more from Apple. For a company usually on the forefront of adopting new technologies and standards we were surprised not to see any eSATA ports on the Mac Pro. Given that OS X's GUI performance is sometimes tied to the amount of video memory you have, we would also have expected Apple to opt for one of the 512MB GeForce 7300s as an option on the Mac Pro, although admittedly 256MB is pretty good even at 30" Cinema Display resolutions. We thought we might see more of a focus on RAID, especially with the default system configuration, but alas a 250GB hard drive is all you get.

Maybe we're hoping for too much, and maybe we just need to get our hands on a system for review, but until then hopefully this discussion will help those of you buying today make a better decision.