Enabling CrossFire under Windows between the new Mac Pro's two AMD FirePro GPUs can deliver a significant frame-rate boost in specific graphics-intensive applications such as games, according to a report by BareFeats.com.

AMD's CrossFire technology links the two GPUs to make them appear to be one fast GPU, conceptually similar to grouping drives in a RAID configuration.

The potential for CrossFire on Apple's Macs

Apple's OS X doesn't currently support CrossFire, however, as the technology requires firmware and driver support that AMD supplies only for Windows XP, Vista, and 7, and for Linux.

AMD acquired CrossFire as part of ATI's graphics technology portfolio in 2006. It's never made it to the Mac largely because Macs haven't played much of a role among gamers willing to pay hundreds of dollars for multiple video cards. And while Macs have begun to play a significant role in video editing, the higher end 3D applications targeted by CrossFire are principally used on Windows PCs or Linux workstations designed to accommodate multiple GPUs.

On Apple's end, support for AMD's CrossFire was limited not only by the niche market for dual-video-card Macs, but also because CrossFire (like NVIDIA's competing SLI) is proprietary and therefore works only across specific models of each vendor's GPUs. Apple has preferred the independence of being able to switch back and forth between the two GPU vendors.

The fact that all of Apple's Mac Pros now ship standard with a narrow selection of dual AMD FirePro GPUs makes it plausible that Apple could invest in supporting CrossFire in future versions of OS X, squeezing more performance from the new machine's architecture without requiring apps to include hardware-specific optimizations.

Without CrossFire, apps and games running on the Mac Pro must be custom written to take advantage of the two GPUs. According to AnandTech, Apple has configured the Mac Pro so that "by default, one GPU is setup for display duties while the other is used exclusively for GPU compute workloads."

The site noted that "it is up to the game developer to recognize and split rendering across both GPUs, which no one is doing at present," and that, "unfortunately firing up two instances of a 3D workload won't load balance across the two GPUs by default."
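AnandTech's point is that without OS-level GPU teaming, splitting work across the two GPUs is entirely the application's job. A minimal sketch of one common approach, alternate-frame rendering with the app doing the scheduling (pure Python; `GPU0`/`GPU1` and `render_frame` are illustrative stand-ins, not a real graphics API):

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(device, frame):
    # Stand-in for real per-GPU rendering work.
    return (device, frame)

def render_afr(frames, devices=("GPU0", "GPU1")):
    """Alternate-frame rendering: even frames go to one device, odd
    frames to the other. This scheduling is the application's burden
    when the OS doesn't team the GPUs for you."""
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [pool.submit(render_frame, devices[i % len(devices)], f)
                   for i, f in enumerate(frames)]
        return [fut.result() for fut in futures]
```

The same round-robin idea is what a game engine would have to implement itself against the real graphics API on each GPU.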

Apple's support for GPU-specific development

In December, Apple's own Final Cut Pro 10.1 update delivered custom support for using both of the new Mac Pro's dual GPUs to speed compute tasks, with the company specifically noting "optimized playback and rendering using dual GPUs in the new Mac Pro."

And of course, Apple's Grand Central Dispatch in OS X is designed to spread tasks across all GPU and CPU cores available on a system, but only for apps that are written to package tasks in a way that the system can manage.
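The "package tasks in a way the system can manage" requirement can be illustrated with a rough queue-and-workers analogy to GCD's model (plain Python threads, not the real libdispatch API; the app only describes independent blocks of work, and the pool decides where each runs):

```python
import queue
import threading

def dispatch_map(tasks, workers=4):
    """Run independent task blocks from a shared queue, GCD-style:
    the caller packages work as callables; the pool schedules them."""
    q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                idx, fn = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            r = fn()
            with lock:
                results[idx] = r

    for item in enumerate(tasks):
        q.put(item)
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[i] for i in range(len(tasks))]
```

Code written as a single sequential loop gives the system nothing to schedule; code expressed as independent blocks, as above, does.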

Over the past five years, Apple has been working to use GCD and OpenCL to delegate more tasks to the GPU that would historically have run on the main CPU alone. With Mac sales increasing overall, and with the new Mac Pro standardizing upon a dual configuration of a specific family of GPUs, it's increasingly likely that Apple will offer new and expanded support for tools to ease developers' use of the available hardware.
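The kind of work OpenCL moves to the GPU is uniform, data-parallel math. A sketch of what that looks like (the kernel string is illustrative OpenCL C; `scale_add_cpu` is a plain-Python reference for the same math, since no GPU or driver is assumed here):

```python
# Illustrative OpenCL C kernel: one work-item per array element.
KERNEL_SRC = """
__kernel void scale_add(__global const float *a,
                        __global const float *b,
                        __global float *out,
                        const float k) {
    int i = get_global_id(0);   // each work-item handles one index
    out[i] = k * a[i] + b[i];
}
"""

def scale_add_cpu(a, b, k):
    """CPU reference for the kernel above: out[i] = k*a[i] + b[i]."""
    return [k * x + y for x, y in zip(a, b)]
```

On the GPU, the runtime launches one work-item per element instead of looping, which is exactly the shape of task that benefits from offloading.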

If Apple does add support for AMD's CrossFire (or a similar GPU-teaming technology) in an upcoming release of OS X, it will also make it more likely that the company will enhance its other Mac models to incorporate support for multiple GPUs, providing a more flexible, cost effective and energy efficient route to faster overall performance without relying entirely upon larger, faster Intel CPUs.

Apple's newest iOS devices already incorporate multiple GPU cores, which are used to support the computationally intensive task of pushing pixels to high density Retina Display screens.

The massive shift to mobile devices at the expense of conventional Windows PCs in recent years has accelerated developers' interest in OpenGL and OpenCL and driven development resources and efforts toward Apple's Cocoa development tools, benefitting Apple's desktop Mac platform in the process.

There you go, this thing gets more absurd by the second. You need to put Windows in the shiny trash can; they're made for each other. And the iOS years continue.

Gaming was always better under Windows. The 2nd GPU is meant for computing, to take the place of another CPU. For apps that are written to take advantage of the dual GPUs, like DaVinci Resolve, it performs very well.

It's not as if the 2nd GPU adds a huge expense. The top-end extra GPU is $500.

Quote:

Originally Posted by GrangerFX
Dual 12 core Xeons would help me a lot more than dual ATI GPUs.

It would cost $3500 more. Like I say, the extra GPU is $500. If you want dual 12-core, you'd buy two 12-core machines.

Quote:

Originally Posted by DarkLite
why is the 2010 Mac Pro doing better than the 2013 Mac Pro on these benchmarks?

I think the R9 270 is a newer GPU, but consumer cards like it are meant for gaming and don't always come with as much memory, and not ECC memory. The FirePro is meant for different workloads. It still maxes out most games with one GPU, though.

I'm glad my D700 work machine will also be able to play games better--in ANY OS--than I've ever had them before. (I'm a 1st-person shooter fan but have never felt like spending the money on either high-end detail or dual machines--or even dual OS's. I wanted them! But not enough to pay. I think the time has come.)

Booting into Windows is pretty unacceptable--I like my stuff in one place, and I'd rather maintain one OS than two. But since I may need a Windows setup to test stuff for my Windows clients anyway, some recent Mac improvements make Boot Camp gaming JUST barely acceptable--if a game comes along where I can't live without Crossfire:

1. SSD makes booting back and forth between two OS's less of a pain.

2. The Mac side reboots with all your windows and documents restored. Less of a workflow interruption, then.

3. Steam lets me buy a game on Mac and have it on Windows too when I feel like doing that. (But I still favor the App Store, given the choice.)

I hope to have this machine long enough that I'll start to feel gaming is slow--and then Crossfire, even if it never comes to OS X, will be welcome.

And seeing as you probably don't own one, be thankful for the rest of us who get dual FirePro GPGPUs that are OpenCL 1.2/2.x ready, unlike Nvidia, which will never extend support beyond OpenCL 1.1. The latest CUDA 6.0.1 tops out at that level. They have made zero commitment to OpenCL 1.2/2.x.

Apple and AMD are fully committed to it.

Apple can coordinate with AMD and have full Mantle API support opened up and then you'll be glad Nvidia wasn't the solution.

Quote:

Gaming was always better under Windows. The 2nd GPU is meant for computing, to take the place of another CPU. For apps that are written to take advantage of the dual GPUs, like DaVinci Resolve, it performs very well.

It's not as if the 2nd GPU adds a huge expense. The top-end extra GPU is $500.

It would cost $3500 more. Like I say, the extra GPU is $500. If you want dual 12-core, you'd buy two 12-core machines.

I think the R9 270 is a newer GPU, but consumer cards like it are meant for gaming and don't always come with as much memory, and not ECC memory. The FirePro is meant for different workloads. It still maxes out most games with one GPU, though.

Actually, the problem with most modern games is that they use pre-rendered graphics, which is why they need so much storage space and power to process all those images fast enough.

However, if the games were written properly using on-the-fly graphics, it seems that with the power of the A7 and beyond they could indeed play Crysis-style games.

To prove my point: a few years ago a demo coder created a full-on Quake 3 style game in 986KB (yes, kilobytes), and it ran faster and smoother than Quake 3 on high-end gear at the time. All the levels and graphics were created on the fly. There were a few bugs, but it proved that, thanks to big RAM and big HDDs, programmers are by and large content to push out the latest games rather than create games where the big CPU could really be put to work -- i.e., using the processing power for unbelievable AI instead of rendering the pretty graphics and sound that were ultimately all Crysis was.
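The technique the commenter is describing is procedural generation: assets are computed from compact code at load time instead of being shipped as stored images. A toy sketch of the idea (illustrative only; `value_noise` is an ad-hoc integer hash, not the demo's actual algorithm):

```python
def value_noise(x, y, seed=0):
    """Cheap deterministic pseudo-random value in [0, 1) from grid coords."""
    n = x * 374761393 + y * 668265263 + seed * 982451653
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def generate_texture(size, seed=0):
    """Procedurally 'create the graphics on the fly': a size x size
    grayscale texture from a few bytes of code instead of stored assets."""
    return [[value_noise(x, y, seed) for x in range(size)] for y in range(size)]
```

Because the same seed always reproduces the same texture, a game only has to ship the generator, not the output, which is how a full game fits in under a megabyte.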

A PS4 version of a game shouldn't need much more than a recompile to run on a Mac or Linux machine. They'll keep some exclusivity (even just timed) on the console, but it means good things for developers looking to publish titles to as large an audience as possible. Tomb Raider was brought over to the Mac last month.


Quote:

And seeing as you probably don't own one, be thankful for the rest of us who get dual FirePro GPGPUs that are OpenCL 1.2/2.x ready, unlike Nvidia, which will never extend support beyond OpenCL 1.1. The latest CUDA 6.0.1 tops out at that level. They have made zero commitment to OpenCL 1.2/2.x.

Apple and AMD are fully committed to it.

Apple can coordinate with AMD and have full Mantle API support opened up, and then you'll be glad Nvidia wasn't the solution.

Is there any reason you prefer OpenCL? As far as I can tell, there are more developers working on CUDA stuff, and CUDA seems to be a bit nicer.


I'm sorry to say this but you should buy a PC.
Apple doesn't care about the kind of scientific computing you are probably doing.

It is not clear that Apple cares about ANY kind of scientific computing; their machines are more geared to image processing (and if it works for scientific computing, great, but that's not their goal in life).

Quote:

It is not clear that Apple cares about ANY kind of scientific computing; their machines are more geared to image processing (and if it works for scientific computing, great, but that's not their goal in life).

Having attended a scientific computing conference where two Apple employees were the featured guests, I have to disagree.

Quote:

Having attended a scientific computing conference where two Apple employees were the featured guests, I have to disagree.

I am glad to hear it, but a lot of scientific computing is done on compute farms made of generic hardware -- even the front end machines are often Linux workstations (because that's what the back end runs). Since a lot of people carry around apple laptops (e.g., yours truly), there should be a bigger role for Apple, but they have not seemed very aggressive about pursuing it.

Quote:

a lot of scientific computing is done on compute farms made of generic hardware -- even the front end machines are often Linux workstations (because that's what the back end runs). Since a lot of people carry around Apple laptops (e.g., yours truly), there should be a bigger role for Apple, but they have not seemed very aggressive about pursuing it.

You do know who came up with OpenCL? Also, your statement about generic hardware counters what you were saying about CUDA. If it's generic then OpenCL is much more suitable.

The term scientific computing is broad and nebulous. What you appear to be talking about is not specifically scientific computing at all. You are talking about clusters. Scientific software makes scientific computing; a cluster can be used for compute-intensive tasks, scientific or otherwise. Clusters are usually built from white-box computer components and powered by Linux. Each node is cheaper than an iPhone.

Where is the room for Apple aggression in this scenario? You can build a fairly nice working cluster in an afternoon. If you can't, then Apple's help is probably not all that you need.

Quote:

You do know who came up with OpenCL? Also, your statement about generic hardware counters what you were saying about CUDA. If it's generic then OpenCL is much more suitable.

nVidia is pretty dominant in the graphics card market, and compute farms are quite likely to have Fermi coprocessors. This is my personal experience. As for OpenCL, I am well aware of its history, but again, Apple was more interested in image processing (which, of course, does come up in non-video production applications, but video WAS their prime mover).

Quote:

The term scientific computing is broad and nebulous. What you appear to be talking about is not specifically scientific computing at all. You are talking about clusters. Scientific software makes scientific computing; a cluster can be used for compute-intensive tasks, scientific or otherwise. Clusters are usually built from white-box computer components and powered by Linux. Each node is cheaper than an iPhone.

Where is the room for Apple aggression in this scenario? You can build a fairly nice working cluster in an afternoon. If you can't, then Apple's help is probably not all that you need.

Well, as a matter of fact, my comment was inspired by my current experience. I have been considering buying a Mac Pro, because I do some GPU-based stuff, and while I do have servers off in Central Europe somewhere doing computations, it would be nice to have greater immediacy for some things. Problems: (a) the Mac Pro only supports 64GB of RAM; (b) the Mac Pro has AMD GPUs. A lot of the best tools I know of (google the Parakeet compiler for Python, or Numba or NumbaPro) are fairly CUDA-specific. CUDA out of the box from nVidia has nice libraries, and when CUDA 6.0 comes out, you will be able to just slot them into your code without thinking about GPUs at all.
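For what it's worth, the "slot it into your code" style the poster describes looks roughly like this with Numba's `@vectorize` (a hedged sketch: `saxpy` is an illustrative name, and the block falls back to a plain function when Numba isn't installed; a CUDA target for the same decorator is what NumbaPro offered for GPUs):

```python
try:
    from numba import vectorize

    # Compiled ufunc; the element-wise function is written once and the
    # library handles broadcasting/compilation. A GPU target would move
    # this same code to the GPU without changing any call sites.
    @vectorize(["float64(float64, float64, float64)"])
    def saxpy(a, x, y):
        return a * x + y
except ImportError:
    # Plain-Python fallback: identical math, identical call site.
    def saxpy(a, x, y):
        return a * x + y
```

The appeal is exactly what the poster notes: the numeric code doesn't mention the GPU at all, so the accelerator choice becomes a deployment detail.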

If Apple cared, they would certainly be able to address problem (a), and they also have sufficient resources to address the larger problem (b); both of these are by no means specific to me, and interestingly the first is more relevant for symbolic vs. numeric computing. But they don't. So, most likely, my money will be spent on upgrading one of my local Linux boxes with a K40 GPU.