Like I said earlier, Apple might be doing much more of the driver work in-house and saving AMD some effort. AMD might also love that dual GPUs as standard could be great for them, and might even see Apple as a natural ally against CUDA by pushing OpenCL. But they are still going to want a premium from Apple, over and above the consumer parts, to use the FirePro moniker. I have no doubt Apple will be getting a big discount, maybe industry best, but how much is Apple going to discount a pair of cards that retail for $8K to the consumer?

The cards are stuck in the Mac Pro, so you could never resell them. That allows a large price difference.

Does it matter about the specs, though? AMD isn't going to let Apple throw $200 consumer graphics chips into their computers and call them FirePros. While pro GPU unit sales are tiny compared to consumer GPU unit sales, the revenue is not: http://farm3.staticflickr.com/2835/9104 ... 9a_z_d.jpg

ETA: I saw that MR thread before. It is pretty cool that the guy got an instant W9000 upgrade, but I think all that shows is that Apple has the drivers in 10.9 for testing. I have a feeling it will be taken away if they add an identifier check to make sure it really is a FirePro card.

Heh, that was actually meant to be a reply to cc bcc's post about just the cost to Apple, but you got a reply in before me. But yeah, I still expect a chunk of change because of the branding too. It'll be interesting to see what the MBP and iMacs get; some exclusivity and buckets of cash can sway a supplier. Just thought about the 10.9 drivers some more: I guess they could be used to figure out what else is working and might end up in the Mac Pro. Edit: looks like the 7870 XT is recognized as a W8000, according to this.

But PCIe slots are a thing of the past on the Mac platform. For AMD, getting their parts into this system is the only way to get any 'pro' revenue on the Mac at all. Unless Wintel customers start buying Mac Pros rather than PC workstations with retail-priced FirePros in significant numbers, AMD has literally nothing to lose here.

It's also likely that effectively no vendor of any significant size has ever come to AMD and said, "We want to put your pro graphics in every single workstation we make." In most workstation models, AMD is competing head to head with NVIDIA and is only winning (per that article) ~15% of the time. With some not-entirely-crazy assumptions about Mac Pro sales volume, Apple could be offering to increase AMD's pro unit sales 50%-100% overnight.

Edit: Actually, given that there are two GPUs in every one of these machines, and dual GPUs are still pretty uncommon in the workstation market, it's probably more than that. To show some math here, assume that the Mac Pro accounts for 3% of Apple's unit sales, and Apple sells 15M Macs per year. That's 112.5K Mac Pros per quarter. The total workstation market is about 900K units per quarter. Assume these customers are also the market for pro graphics. Let's take AMD's 15% of the pro graphics market, and assume the typical workstation has 1.2 GPUs. That would be 162K units per quarter for AMD. Now Apple comes along and says "We'd like to buy 225K FirePros a quarter."
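That arithmetic can be checked with a quick back-of-the-envelope script. Every input below is one of the post's stated assumptions (15M Macs/year, 3% Mac Pro share, 900K workstations/quarter, 15% AMD win rate, 1.2 GPUs per workstation), not a published figure:

```python
# Back-of-the-envelope check of the unit-sales math above.
# Every input is an assumption from the post, not a published figure.
macs_per_year = 15_000_000          # assumed total Mac sales per year
mac_pro_share = 0.03                # assumed Mac Pro share of unit sales
mac_pros_per_quarter = macs_per_year * mac_pro_share / 4          # ~112.5K

workstations_per_quarter = 900_000  # assumed total workstation market
gpus_per_workstation = 1.2          # assumed average GPUs per workstation
amd_pro_share = 0.15                # AMD's assumed share of pro graphics
amd_pro_gpus_per_quarter = (workstations_per_quarter
                            * gpus_per_workstation
                            * amd_pro_share)                      # ~162K

apple_firepros_per_quarter = mac_pros_per_quarter * 2   # dual GPUs standard, ~225K

print(round(mac_pros_per_quarter), round(amd_pro_gpus_per_quarter),
      round(apple_firepros_per_quarter))
print(f"Apple's order vs. AMD's current pro volume: "
      f"{apple_firepros_per_quarter / amd_pro_gpus_per_quarter:.0%}")
```

On these assumptions Apple's hypothetical order is roughly 139% of AMD's existing quarterly pro volume, which is why "probably more than that" is fair against the earlier 50%-100% guess.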

Using retail prices is meaningless. Apple is buying at top-tier wholesale, and using those cash reserves to push prices ever lower.

Yeah, Apple obviously doesn't pay retail, but it forms a basis of comparison. Like if they put $2,800 of CPU into a $6,200 machine on the high end, you'd probably figure Apple is getting those at nearly half retail.

I guess it makes sense that two in every Mac Pro will be a significant chunk of their sales, and not easily popped out and upgraded to boot. Still, I'm not so optimistic that Apple will reduce the price to the consumer as aggressively. When I look at their historical upgrade prices on the BTO sheets, like $750 for very old 512GB SSDs, or crazy RAM prices even though they are getting those things wholesale for pennies on the dollar, I'm not sure Apple won't see this as a surefire margin booster. We'll see; hopefully you're right and Apple ends up mainstreaming these pro GPUs and they become much more reasonable.

PCIe slots are a thing of the past internal to the Apple chassis. But you can still have PCIe slots, just external, across Thunderbolt.

One other side to this is that it may promote OpenCL at the expense of CUDA. Enough apps supporting OpenCL can change the landscape of GPU marketing a bit.

It might be possible to roughly estimate Apple's actual chip price by taking the MSRP of the machine (currently unknown) and applying overall margins combined with the BOM numbers occasionally cranked out by iSuppli. The result would have a high margin of error, but it would be interesting to see how much it deviates from the published CPU prices over at Intel.
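As a sketch of that estimate, where every number is a hypothetical placeholder (the real MSRP, Apple's margin on this machine, and the iSuppli-style BOM breakdown are all unknown):

```python
# Work backward from a machine's MSRP through an assumed gross margin to an
# implied bill of materials (BOM), then subtract the estimated non-CPU parts.
# What remains is a crude implied CPU cost. All figures are hypothetical.
def implied_cpu_cost(msrp, gross_margin, non_cpu_bom):
    """Implied CPU cost = MSRP * (1 - margin) - everything that isn't the CPU."""
    implied_bom = msrp * (1 - gross_margin)
    return implied_bom - non_cpu_bom

# Hypothetical inputs: a $6,200 machine, 38% gross margin, $2,500 of other parts.
print(f"implied CPU cost: ${implied_cpu_cost(6200, 0.38, 2500):,.0f}")
```

With these made-up inputs the implied CPU cost comes out around $1,344; the interesting part would be comparing a real figure against Intel's published tray prices.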

It depends on whether pro apps really use tons of bandwidth for that to be feasible. But even so, if the machine already has two of their GPUs built in, I don't see why it would be worse if people wanted to add more, especially because they'd be paying retail. I think with TB2 things are better, but TB3 or optical Thunderbolt is where it will probably start to become a more interesting possibility.

Yeah, that's why I mentioned earlier that it might be a hook, in that AMD really wants Apple as an ally against CUDA. It makes sense: CUDA is a proprietary standard to which OpenCL offers an alternative, so it is beneficial to both of them to work against it.

I wouldn't be surprised to see Apple get closer to AMD. Two high-level Apple tech execs and former AMD employees, Jim Keller and Raja Koduri, have gone to AMD in the last year. AMD has been promoting their new Heterogeneous System Architecture (HSA) initiative, and their first unified-memory APU (CPU and GPU addressing and accessing the same memory space) ships this fall. Imagination and ARM are also founding members of the HSA Foundation. Apple could eventually even get AMD to do a semi-custom APU for the Mac Pro if the machine is really narrowly targeted at video editing and CPU performance is of marginal value compared to GPU performance.

Yes. Many pro apps are cross-platform. AMD landing the contract to supply Mac Pro GPUs means that effectively every one of these apps will need to implement first-rate OpenCL support, which will likely also make it into their Windows versions.

Plus, as 'not' pointed out, when is the last time a workstation actually attracted media attention? There's quite a lot of promotional value for AMD in being on this machine.

I don't think we're assigning enough importance to the fact that Schiller name checked OpenCL during his Mac Pro presentation.

Yup. Between that and the switch from dual CPUs as standard[1] to dual GPUs as standard, it's clear that Apple sees a shift to GPU-centric computing in the Mac Pro's market. They've likely believed this for some time — they are the ones that took the initiative in creating OpenCL, after all.

[1] Yes, single CPU Mac Pro configurations were available, but the platform was clearly designed around dual CPUs.

So it's not just higher-ups, either. Kinda makes you wonder, if Apple is so interested in their own SoCs, if/when they may want to start their own fabs. You figure AMD's market cap is less than $3B; Apple could buy them with a month's profit. Not that they'd want to do that, but I was kinda surprised how low AMD was.

Yeah, I don't think it would be something they'd want, but Steve used to have that dream about dump trucks full of sand going in one end of the factory and Macs coming out the other end. Apple is so secretive, too, that they would love to be able to build chips without having to show anything to third parties. But yeah, fabs don't seem to be a business that would work for Apple.

I don't think we're assigning enough importance to the fact that Schiller name checked OpenCL during his Mac Pro presentation.

"For those of you using OpenCL, and you all know you should, this delivers over 7 teraflops of compute power to your applications."

That's a big tell.

The new FCP X this fall will be OpenCL optimized and dual-GPU aware, and once again Apple will lead by eating its own dog food. "Move your compute cycles to the GPUs" is the message of WWDC.

Why would FCP X need to be OpenCL optimized? It already uses OpenCL. Core Image on Mavericks is built on OCL, which will likely ship with the Mac Pro. And according to Guy English (http://kickingbear.com/blog/archives/349), one of the GPUs in the Mac Pro can be allocated solely to OpenCL, leaving the dual GPU awareness to the OpenCL layer.

It's not a problem now, but the first and/or second generation of Aluminum iMacs had awful thermals. I saw one in an audio studio that ran in full wind tunnel mode nearly all the time even when it was only being used for word processing and printing stuff out. It was ridiculous. I think they actually recalled that generation because they would get very very hot.

I own one of the first-gen aluminum iMacs, and I hate to say it, but you are completely wrong. It doesn't run hot, and neither it nor the model after it was recalled.

I've heard various people point out that there isn't the driver differences between Radeons and FirePros on Mac OS X that there is on Windows, because Apple writes all of the drivers anyway.

So, question: if I put this thing into Boot Camp, will I need to install the Windows FirePro drivers? Or will Radeon drivers work? Or will I need to use completely different drivers for Windows supplied by Apple?

I think we are all speculating on just what the relationship between Apple and the GPU makers exactly is when it comes to drivers, and on the resulting possibility that AMD might be willing to sell cheaper pro GPUs to Apple because more of the heavy lifting in driver support will be on Apple's shoulders. To be clear, many people argue that the FirePro GPUs are essentially identical to the gaming GPUs, but the rest of the card usually is not, i.e. the ECC memory and the identifier. Your question is an interesting one for other reasons, though. The deal Apple makes with AMD might mean Apple doesn't want to let people have cheaper FirePros on Windows via their Mac Pro. The Windows driver would have to come from AMD, though, and the driver support would be all theirs. Would they want to support the Mac card under Windows when they sold the cards to Apple at a large discount precisely because they were avoiding those support costs?

ETA: the gaming cards showing up under 10.9 as FirePros is a bit of a red herring. Apple doesn't really care; the drivers are only there because they wanted them for testing in the new MP, and they hadn't anticipated this happening. Sooner or later, though, I'd bet they might start checking and disable the pro drivers on non-pro cards. Or they might not care about Hackintoshes enough to even bother, especially considering that after this year no Macs are even going to have PCIe slots to put a graphics card into anyway.

Yeah, I'm kind of thinking Apple wouldn't care on the Mac side because of the limited group it'd be useful to, and if their Radeon and FirePro drivers are essentially the same thing then it doesn't really matter either way (unless they just block high-end Radeons completely?). Otherwise, for AMD and Windows, who knows. If Apple dumped enough cash, it might be worth it for that and the push against CUDA, particularly if any Mac Pros running Windows would otherwise likely be PCs with Nvidia cards. And there's still that whole possible product range; the higher-end ones will still likely cost enough (plus the volume) to be worth it for AMD.

Sure, Apple could have gone with Nvidia for the graphics processors, easing some of the transition pain, but this would have been seen as an attempt to hedge the OpenCL bet, which would in turn have fractured development effort and slowed down the transition.

The OS X development story is very unified. System/CLI programming is done in C. The window server and everything above it is in Objective-C. Compute is (now) OpenCL.

The CUDA/OpenCL story is similar to the Carbon/Cocoa story. Like Carbon, CUDA was first. Like with Carbon, modern Apple never really liked CUDA and favored a home-grown and more technically elegant replacement. Like with Carbon, Apple tolerated CUDA for a while out of necessity. And like with Carbon, Apple abruptly squashed CUDA and for all practical purposes removed it from its platform.

The way this is being handled is straight out of the Apple playbook. Apple prefers to drive these sorts of transitions rather than just allowing them to happen on their own.

If Apple wants to promote the adoption of OpenCL, they don't need to use AMD hardware. All they need to do is make OpenCL work. Right now, OpenCL promises cross-platform portability but doesn't deliver it. My very first attempt at writing OpenCL code went like this: I took a simple operation (2D image convolution) and wrote and tested an OpenCL kernel on a Windows/Nvidia machine. Then I tried to run the same code on Windows/AMD: it didn't work. Fixed it, tried on OS X/Nvidia: it didn't work. Fixed it and tried on OS X/AMD: it didn't work. Why are so many programs using CUDA and not OpenCL? Because it works. Read the archives of the Blender/Cycles development mailing list if you don't believe me.

OpenCL and CUDA are not like Cocoa and Carbon: they are like Qt and Cocoa. One of them kinda works everywhere, sorta, somehow. The other runs on hardware from one vendor only, and it works beautifully (mostly). I don't see anyone here complaining that OS X is proprietary and that Apple should use Linux, so why is CUDA's being proprietary perceived as so evil? Just like Apple, Nvidia is providing software to let developers get the best out of their hardware, and nothing more.

I'm not anti-OpenCL: I attended WWDC 08 and was thrilled when I saw it announced. I would love to use OpenCL; it just doesn't cut the mustard yet. Throwing bigger hardware at it won't solve the problems still lingering in the software implementations.
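For scale, the kernel described above (a simple 2D image convolution) is tiny, which is what makes the portability failures so striking. Here is a plain-Python reference version of what such a kernel computes; an OpenCL implementation would run the inner accumulation once per output pixel, one work-item each:

```python
# Reference 2D convolution with zero padding at the borders: the same
# arithmetic a per-pixel OpenCL kernel would do, written for clarity.
def convolve2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2               # kernel center offsets
    out = [[0.0] * iw for _ in range(ih)]
    for y in range(ih):                     # in OpenCL: one work-item per (y, x)
        for x in range(iw):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    sy, sx = y + ky - oy, x + kx - ox
                    if 0 <= sy < ih and 0 <= sx < iw:   # zero-pad out of bounds
                        acc += image[sy][sx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# 3x3 box blur of a single bright pixel: every output's 3x3 window
# includes the center 9, so every output is ~1.0.
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
blur = [[1 / 9] * 3 for _ in range(3)]
print(convolve2d(img, blur))
```

A few dozen lines of arithmetic like this is all the kernel in question was; when it needed fixing on each of four platform/vendor combinations, the problem was the OpenCL implementations, not the code.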

Pretty much this. Chaos Group said something similar: they support both OpenCL and CUDA, but recommend nVidia hardware for OpenCL, since AMD hardware doesn't implement OpenCL properly, and on nVidia hardware they recommend using CUDA anyway. Basically, the takeaway is: use nVidia hardware, and use CUDA. Chaos Group doesn't care which standard "wins," but for what they are doing, CUDA just works and OpenCL clearly doesn't.

This is the primary reason nVidia is eating AMD's lunch, at least in my field. CUDA/OpenCL promises some amazing advantages in 3D rendering, since waiting 20 minutes for a PREVIEW really breaks the workflow, and the same preview could be done on the GPU in under a minute. AMD hardware just doesn't work well enough yet.

That text also makes me wonder if maybe the HDMI port on the new Mac Pro can actually be used for broadcast monitoring. This was arguably hinted at in the keynote as well — Schiller goes right from describing how the machine supports three 4K displays to showing a guy using FCP X with three displays, one of which is a broadcast monitor displaying a full-screen video image. This is a long shot, but if this machine is as heavily targeted at pro video as it seems to be, it would be a pretty cool standard feature.

I did play around a bit with OpenCL this week and didn't run into any horrible problems, but whatever.

Portable GUI code is always going to be intrinsically worse than platform specific GUI code. There is an inherent quality vs portability tradeoff.

For compute code, there shouldn't be any tradeoff. Compute code should be portable. Regardless of whether Apple is moving too soon, they're doing the right thing.

I don't do OpenCL or CUDA development myself, but according to what I've seen, the issue is that AMD's implementation of OpenCL simply isn't feature complete, and if you have to use nVidia anyway, their CUDA implementation is better than the OpenCL implementation.