ESXi 6.5 support for Apple Mac Pro 6,1

I know several of you have reached out asking about ESXi 6.5 support on the Apple Mac Pro 6,1, but as of right now, the Mac Pro 6,1 is not supported with ESXi 6.5. I know this is not ideal, especially for customers who wish to take advantage of the latest vSphere release. The good news is that VMware is in the process of testing the Apple Mac Pro 6,1 for ESXi 6.5; however, there is no ETA on when this will be completed.

Some of you might be wondering why this did not happen earlier. The primary reason is that hardware certification for ESXi is actually performed by the hardware vendors. Once a vendor completes the certification for a particular hardware platform or component, they submit the results to VMware and the VMware HCL is updated. If there is a piece of hardware that is not on the VMware HCL today, it is definitely worth reaching out to your hardware vendor to inquire about its status.

In Apple's case, it is unfortunate that they do not participate in VMware's Hardware Certification program for ESXi, which makes certification challenging. VMware intends to continue supporting customers who require Mac OS X virtualization and will work towards getting the Mac Pros certified for the latest version of vSphere, as mentioned earlier. Historically, testing and certifying ESXi for Apple hardware takes additional time, and in some cases code changes may even be required due to unexpected hardware changes from Apple.

I hope this gives customers some additional insights into how Apple hardware is certified for ESXi. If you would like to see this improved in the future, you may want to reach out to Apple and provide them with your feedback.

Now ... before you close this blog post thinking it will take a while before there is an update regarding ESXi 6.5 and the Mac Pro 6,1, please continue reading 🙂

UPDATE (07/28/2017) - ESXi 6.5 Update 1 GA'ed just yesterday and fully supports all current Apple Mac Pro 6,1 systems (as you can see on the HCL here); the workaround mentioned below is no longer required. This means you can install ESXi without any modification to the image.

UPDATE (03/25/2017) - VMware has just published VMware KB 2149537, which outlines the officially recommended workaround to install ESXi 6.5 onto the Apple Mac Pro 6,1. The VMware HCL has also been updated to include the Apple Mac Pro 6,1 4-Core, 6-Core, 8-Core & 12-Core systems. In a future release of ESXi, the workaround will not be required and ESXi will install out of the box. This temporary workaround enables customers who wish to run the current versions of ESXi 6.5, which include the GA release, 6.5a and 6.5p01.

Disclaimer: The following section is not officially supported or recommended by VMware. Please use at your own risk.

Early last week, a customer reached out to me after attempting an install of ESXi 6.5 on their Mac Pro 6,1. They were already aware that the platform was not officially supported with ESXi 6.5, but wanted to see if I had any ideas they could try. When attempting to boot the ESXi installer (upgrade or fresh install), they saw the following error message in the ESXi logs:

The customer theorized that there might be an issue with the AHCI driver, but since the system would not boot further, there was not much more they could do. Looking at the error, I agreed the issue might be related to the AHCI driver, which gave me an idea. The specific driver shown in the logs is the AHCI Native Driver, which is new in ESXi 6.5. Perhaps the new driver was not able to claim the disk drives and was preventing the boot-up. I recommended that the customer fall back to the "legacy" vmklinux driver to see if that would let them progress further and, to my surprise, it worked. Not only did the installer boot completely, but the customer was able to perform both a fresh install of ESXi 6.5 and an upgrade from ESXi 6.0 to 6.5 on the Mac Pro 6,1 without any issues.

Of course, we do not know if this is the real fix or if there are other issues. So far the customer has not reported any problems, but customers who want official support for the Mac Pro 6,1 and ESXi 6.5 should still hold off until it is certified by VMware. For those who wish to push the "Not Supported" boundaries a bit, below are the instructions for getting ESXi 6.5 booted and installed on the Mac Pro 6,1.
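If you do get ESXi booted this way, you can check from the ESXi Shell which driver actually claimed the storage controller. Below is a minimal sketch, guarded since esxcli only exists on an ESXi host (vmw_ahci is the new native driver; ahci is the legacy vmklinux module):

```shell
# Run from the ESXi Shell or over SSH. Guarded so the sketch is safe to try
# elsewhere; on a non-ESXi machine it just prints a note.
if command -v esxcli >/dev/null 2>&1; then
  # Shows each storage adapter and the driver that claimed it
  esxcli storage core adapter list
  # Loaded AHCI-related modules (vmw_ahci = native, ahci = legacy vmklinux)
  esxcli system module list | grep -i ahci
else
  echo "esxcli not found - run this on the ESXi host"
fi
```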

Add the following ESXi boot option (it is persistent) by pressing SHIFT+O when you are presented with the initial boot screen.

preferVmklinux=True

At this point, you can successfully boot the ESXi 6.5 installer and perform either a fresh install or an upgrade. You will NOT need to perform this operation again, as the change is persistent. If you prefer not to add the ESXi boot option by hand, you can create an ESXi bootable USB key and then simply edit both boot.cfg and efi/boot/boot.cfg, appending the option as shown below:

kernelopt=runweasel preferVmklinux=True
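As a sketch of that edit, the kernelopt line can also be patched with sed. Here a scratch boot.cfg stands in for the real files; on an actual USB key you would apply the same edit to both boot.cfg and efi/boot/boot.cfg at the root of the key:

```shell
# Scratch stand-in for the installer's boot.cfg; on a real USB key, edit
# boot.cfg and efi/boot/boot.cfg in place instead.
printf 'kernelopt=runweasel\n' > boot.cfg

# Append the option to the kernelopt line (-i.bak works on GNU and BSD sed)
sed -i.bak 's/^kernelopt=runweasel$/kernelopt=runweasel preferVmklinux=True/' boot.cfg

cat boot.cfg   # kernelopt=runweasel preferVmklinux=True
```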

I will be sure to share this information with our Engineering folks working on testing the Mac Pro 6,1, but at least we know it's possible to install ESXi 6.5 🙂 Big thanks to Andrew for reaching out; I think we were both pleasantly surprised by the outcome.

FYI - For customers who use the Apple Mac Mini, ESXi 6.5 seems to run fine (both fresh install and upgrade). I have not heard of any major issues, so you should be fine. Please note that the Apple Mac Mini is not an officially supported hardware platform; please use at your own risk.

Comments

I have accomplished this using rEFInd. I simply dd the ISO to an img and then dd the img to USB; with patience, I went back into the Mac by clearing the SID and NVRAM (the "Windows installer" hack), removed all partitions, and then installed ESXi on a secondary thumb drive.
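For anyone curious what that dd step looks like, here is an illustration using scratch files; the real target would be the raw USB device (a name like /dev/disk2 on macOS is only an example, and writing to the wrong device destroys its contents):

```shell
# Scratch files stand in for the downloaded ISO and the USB device
printf 'ESXi installer payload' > esxi.iso

# Raw block copy, as the commenter describes (on real hardware: of=/dev/diskN)
dd if=esxi.iso of=usb.img bs=4k 2>/dev/null

# Verify the copy is byte-for-byte identical
cmp -s esxi.iso usb.img && echo "copy verified"
```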

What version of Mac Mini is supported? I recently found your article about ESXi 6.0 on an Xserve. I'm looking at picking up some used equipment for a homelab. The Mini sounds wife-approved in size, power, and noise levels.

Thanks so much for taking the time to help with this! We used this for some development hosts and so far things appear to be working OK. Looking forward to official support and putting this into production.

We’ve had a much better experience installing macOS Sierra VMs on 6.5 (with the new Guest OS version OS X 10.12 setting). Previously we’ve had issues installing Sierra from ISO (converted in various ways from the InstallESD.dmg) and sometimes issues booting into the OS but now it appears to work without issue.

Indeed, thanks very much! We have it up and running as well. Going through and testing it on the Pro (cluster), Mini (single hosts) and Xserve (cluster) in production simulations now. Luckily it looks like many of the HBA and 10gig cards we use are still on the HCL, so it's all going smoothly.

I think your last sentence about the Mac Mini is a little bit too optimistic 😉

I tried the 6.5.0 installer on a brand new Mac Mini 7,1 (16GB, 512GB SSD) today. It got stuck at a few random spots before I went back to 6.0. Symptoms exactly like the ones you describe for the Mac Pro.

Ridiculous that this isn’t supported by VMware yet. I’ve been using 6 on this hardware since beta 2 with similar ‘hacks’ listed above to get it to work, along with third party drivers for the thunderbolt NICs and other changes to make it boot off of the internal disk. With all the solutions published in their forums you would think they would have adopted it by now.

I’ve been running 6.5 on a Mac Pro 6,1 without issues. My issue is that no external drives are recognized when attached to the Mac Pro. I’d like to attach some USB and FireWire drives to create extra datastores, but the devices never show up in the ESXi web interface. Have I missed a step?

I can confirm that 6.5 works perfectly with no modifications on a 2009 Xserve with 8-cores (the last Xserves made). Even my USB3 PCI cards seem to be recognized and allowed for pass-through if desired.

I’ve tried to find other potential uses for my Mac Pro 6,1. Before finding this site, I’d only focused on Linux. Thanks! However, I found the video portions of my two D700 cards were shown greyed out/disabled for passthrough in the ESXi 6.5 hardware dialog; only the audio portions could be enabled. Any methods or suggestions for passing the D700 GPUs through to a guest OS? Thanks!! By the way, FYI, with the newest ISO I downloaded, I did not need to use ‘preferVmklinux=True’ to get an event-free boot.

No, I am not running multiple OS X instances. Instead, I am planning to run multiple Linux systems, both host and guest. I’ve managed to pass through my two D700s to two Linux guests at the same time. Although the guests could successfully initialize the cards and got correct parameters like RAM size and clock speed, the final OpenCL programs failed. Only the very beginning, like getting platform and device counts, worked; real programs just hung. This is the furthest I can get right now.

VMware can’t provide guidance on how to interpret other vendors’ EULAs. The recommendation I’ve given to many of our customers is to have your organization work w/Apple to understand the requirements. Several customers in the past have mentioned success w/Volume Licensing, but ultimately this is an agreement between your organization and Apple.

Does ESXi 6.5 improve fan behaviour on the 2012 Mac Mini? The fans were running constantly at a low speed, without any management from the OS, so I attached a USB cable to the power leads on the fan and ran it from a USB port on the Mini. That runs the fan at probably 60% speed constantly (due to USB power), but it’s better than nothing.

I followed all the steps above and got ESXi installed on my Mac Pro 6,1, but it doesn’t have network connectivity. I tried connecting via Thunderbolt and Ethernet, but no luck getting an IP. Any thoughts?

I was going to inquire about an interesting topic via LinkedIn, but I’ll start the conversation here. On the 2013 Mac Pro, using either RHEL (maybe CentOS)/Ubuntu or Windows 2008/2012 R2, has anyone been able to successfully get OpenCL running on a virtualized instance with passthrough (or do you require it)? Does it function like NVIDIA GRID technology?

I ask because I have a huge Dell PowerEdge tower that, frankly, runs too hot and loud for my office lab. I need the GPU for professional requirements and I need OpenCL applications to recognize the GPU (at least one of them). I think I read that the legacy AMD driver under 14.04 *fglrx* was being used. One of the issues I have come across so far on bare hardware recently with AMD is that the vendor-supplied drivers are messy. AMDGPU-PRO is the driver used under 16.04 LTS that supports the newer GCN GPU architectures, which I thought these should be, at least if they are what people claim they are variants of here: https://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/

I’ve followed your blog for years, running my first DFIR lab with a MacMini server model, later a full blown Dell R610 with dual Xeon hex-cores and now, I had to downgrade that (because it was just too hot in my lab) to a Mac Pro. This will be running my SIEM/UEBA, AD-DS/NPS server along with my SOA and ITSM solution – I’m a little concerned on applying https://kb.vmware.com/s/article/52345

As some of this is done at microcode level by the vendor, we all know Apple could give a hoot about DellEMC or VMware – will this even work?

Look, I like Dell servers. They are made in Round Rock and I support that. I don’t support not being able to run a consumer GPU on a 16x PCI lane (just because) or proprietary drives so I’d really like to know I can patch this OS without bricking it. Your thoughts on mitigation for the hypervisor host?

@lamw Seems the installer fails now; the current Boot ROM for Apple is MP61.0120.B00. The usual nfs4client error, then "no network adapters found" and the installer must exit. Really expensive brick to run Splunk and Nexpose on!

We are seeing the latest firmware (MP61.0120.B00 – released in conjunction with macOS 10.13) for the Mac Pro cause many varying issues after the install of ESXi 6.5U1 (note we don’t see any errors during the install).

We have seen the following behavior on identical Pros with the above firmware after we set management network info and then reboot:

-Management network IP and root password are forgotten (even though we’ve logged in with the password just minutes earlier before the reboot) OR

-“BANK5: invalid update counter. No hypervisor found.” OR

-“kernel= must be set in /boot.cfg. Fatal error: 32 (Syntax)”

We have tried multiple install media (to rule out a bad CD or USB stick), and we have zeroed out the SSDs in the Pros. The only thing the errors have in common is the latest firmware version (MP61.0120.B00), and using the preferVmklinux=True workaround is the only mitigation we’ve found.

Happy to provide more information or run any tests, but for now we’ll continue to use the workaround.

Thank you @specter345. I know this is the case with that firmware, it is intended to address the Thunderkit (Sonic Screwdriver) exploit which Rich Smith of Duo Labs and I confirmed.

My issues today are now with EFI MP61.88Z.0116.B17.1602221600 which IS listed as compatible with 6.5u1. The installer works without issue, if you don’t mind being pwned at ring -2, -1 and 0 should that exploit be used against you which is only possible with the Thunderbolt to Gigabit Ethernet adapter.

The main concern I have is with this supported EFI release above. I have that EFI firmware as well, and while I am aware of its vulnerabilities, both Broadcom BCM577xx onboard pNICs are recognized, yet the Thunderbolt adapters needed for additional connectivity are not. I have six of these and plan to use at least four in a managed network. If the vmkernel doesn’t recognize them with supported firmware, that is a whole different ball of wax that VMware needs to address, as these should appear essentially no different than any pNIC connected directly to the PCI-E bus.

Apologies for the delay, I’ve been on paternity leave and yesterday was literally my first day back. Can I ask if either of you have filed an official VMware SR (this is the recommended approach so we can properly track issues/requests), if so, can you provide that to me?

I’m still catching up from being away, but I did drop a note to a few of the Engineers. They confirmed that they’ve got the latest ESXi 6.5u1 running on both MP61.88Z.0116.B17.1602221600 (listed on the VMW HCL) as well as MP61.88Z.0120.B00.1708080652 (included w/MacOS 10.13.3) without any issues. If you’re still having trouble installing ESXi, it would be great to get an SR filed (if you haven’t already) with all the details and steps you’ve taken.

With respect to the comment from Brian on the Thunderbolt to Ethernet adapter, this was never officially supported by VMware. I’m not sure if you were aware or had assumed otherwise; although it has mostly worked since the Apple device uses the tg3 driver, it was mostly luck that I discovered it rather than something VMware officially blesses and lists on the HCL.

I will definitely do that. I have some other things to run by you guys as we are really stuck.

1. The Mac Pro has officially passed all OS X tests by a certified tech.
2. When we boot ESXi in our pre-deployment lab on this 2017 Mac Pro, it works fine. We can even reboot it without issues.
3. We then move the Mac Pro into a Sonnet case and continue rebooting in the lab, and it's fine.
4. We then move the Mac Pro into the data center and it crashes like clockwork, coming up with different errors depending on the version of ESXi.

We are stumped as to what it could be. How can it work fine in our pre-deployment lab, be rebooted, have the power cord pulled and plugged back in, and still boot, yet fail to boot immediately once moved into our data center?

The thermal characteristics of your data center may be very different from your pre-deployment lab. The latest firmware updates from Apple are known to make adjustments to the board’s thermal controls that may extend the life of the MacPro 6,1 components. So one theory is that interaction between the new thermal profiles and the old ESXi code is causing instability. Other things to look at are, of course, differences in power and NIC modes between the two environments.

OK, so it's not the hardware. We installed OS X on it and rebooted it in the data center, and it's up. It's something with VMware, as stated by @tateconcepts... firmware perhaps? However, we have the exact same Mac Pro beside it running the same firmware, and it's fine... ???

I’m seeing this same issue with a Mac Pro 6,1 8-core with the stock factory SSD and ESXi 6.5 and 6.7. After a reboot, network/password settings are lost. Eventually, after several reboots, the server starts crashing with Fatal Error 33. I’ve been on the phone with VMware support for 2 hours and they have no idea. Has anyone found a solution to this issue?

Hi Joel and all, does vDGA (graphics card passthrough) work on 6.7? We have also had major issues with passthrough running on 6.5u1. Although the card shows up in Windows and we install the driver, it still says there is an issue with the video card driver. We have opened a ticket with VMware; however, they say since passthrough is “working”, it’s a Windows 10 issue? Any ideas?

– ESXi 6.7 running stable for a month now
– 5 VMs running: Windows flavors, a Linux-based SA, MacOS (aka macOS) flavors
– You should use an external fan to cool down the MacMini, which gets hot
– Any consumer summer fan will do fine

– We plan to update to 6.7 U1, which was just released by VMware on 2018/October/17
– So, in a nutshell, the MacMini has not caused any hardware-related headaches so far

NOTE: As for the MacPro Black 6,1, it's a different story unfortunately. We have exactly the same issues as Claudio reported above, with 6.7, because we have no 6.5 or 6.0 installations. We have found a workaround for the MacPro Black problem; if you folks are interested, I can describe it in a separate post. Our MacPro btw has firmware MP61.88Z.0124.B00.1804111223 installed, because this machine was previously running MacOS 10.13.x (to be precise, 10.13.6).

Hi…
I'm very interested in your workaround for 6.7.
We have an MP 6,1 with the MP61.88Z.0124.B00.1804111223 firmware, and after a reboot the network and password settings are lost.
If I boot from a USB drive… no problem.
Thanks a lot!

Since we purchased a VMware vSphere Essentials Kit and want to run one or two other VMs on the server, we decided to migrate from MacOS/Fusion to ESXi.

4) ESXi Setup in the OFFICE

4a) Because we need a monitor, we set up the MacPro in our office rooms
we connect an Apple 27″ Thunderbolt Display to the MacPro
4b) We connect 2 ethernet cables to the MacPro's two ethernet ports
we boot the ESXi installer from a USB SSD
4c) Installing ESXi runs flawlessly
When asked, we disconnect the USB SSD and hit enter to reboot
4d) ESXi boots just fine in DHCP mode
We configure the network settings for a static IP; no reboot required
We successfully connect to the ESXi Web Interface
The ESXi host is standalone, thus not added to our VCSA
4e) We reboot again to check that everything works fine
It does.
Shutting down the system/host
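For reference, the static-IP part of step 4d can also be done from the ESXi Shell instead of the DCUI. A hedged sketch, assuming the default vmk0 management interface and example addresses (adjust both for your network), guarded since esxcli only exists on an ESXi host:

```shell
# Run on the ESXi host (ESXi Shell or SSH); addresses below are examples only.
if command -v esxcli >/dev/null 2>&1; then
  # Switch the management vmkernel interface to a static IP
  esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0
  # Set the default gateway
  esxcli network ip route ipv4 add -n default -g 192.168.1.1
else
  echo "esxcli not found - run this on the ESXi host"
fi
```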

5) Relocating the ESXi MacPro to our [small] ServerCenter in the basement
==========================================================

Connecting two ethernet cables and the power cable
Powering on the server, waiting a few minutes
The server is not accessible on the static IP we had configured
We transport a monitor to our ServerCenter
—> Magenta screen (which, as I understand it, is the Mac equivalent of the Windows blue screen)
—> Fatal error; another reboot does not remedy the issue

6. Relocating the MacPro AGAIN to the OFFICE
======================================

Doing the procedures of 4) and 5) again
—> Still ending with a Magenta Screen

7. One Ethernet Port Approach
========================

Doing steps 4) and 5) again, BUT this time with just 1 ethernet cable connected to ethernet port 1 of the MacPro

—> ESXi boots fine, static network settings work fine
—> HOWEVER, as soon as an ethernet cable is connected to ethernet port 2,
we end up with a magenta screen again, forcing us to set up ESXi completely anew

8. It’s a workaround not a Solution
==========================

Because this is a productive server, for the time being we have decided

– to stay with this WORKAROUND (it's not a solution)
– and, in order to prevent anyone from plugging an ethernet cable into port 2,
– we have simply masked ethernet port 2 of the MacPro with yellow tape

9. Questions from our side to William Lam or Others
========================================

– Does setting the ‘preferVmklinux=True’ boot option do anything good for ESXi 6.7?
– Or does this only apply to ESXi 6.5?

Do not yet upgrade your MacPros and MacMinis to ESXi 6.7U1 (Status 2018/October/26)

We thought the 6.7U1 update would potentially remove bugs and problems, but when installing 6.7U1 on a (luckily) test quad-core MacMini Late 2012, we ran into the "Multiboot buffer is too small." problem. Others are having this problem with 6.7U1 too.

I would invite William or some other folks from VMware to sit down with some Apple folks in Cupertino and jointly work out a pragmatic solution. The MacPro is a nice host for ESXi (extremely compact, lots of cores, fast PCIe SSD).

So, although I'm aware that Apple is not doing the thorough job for ESXi that Dell, HP et al. are doing in certifying their servers, I am confident that, with VMware folks taking a friendly lead, the issues can be resolved together with Apple.

ALSO, this would be a GREAT opportunity to initiate closer collaboration between VMware and Apple. If macOS is to be LEGALLY virtualized, well-working Mac hardware sure is helpful for doing it legally.

I have tried versions of ESXi including 6.5.0u1 and 6.5.0u2. After the initial installation, all works perfectly, but after a full power down, disconnection of cables and relocation of the server, I get the wonderful PSOD. I have tried setting the ‘preferVmklinux=True’ boot option on various occasions and still end up with the same result.

Our current workaround is to not power down the server! This is obviously very problematic and not always possible.

I have 3 other Mac Pro 6,1s on site running 6.5.0 and earlier versions of ESXi perfectly. The only difference I can think of is the firmware version, as the problematic Mac Pros were purchased later and shipped with newer firmware.

In order to verify your experience, I have shut down the MacPro from above twice.

1. Shutdown:

– ESXi starts seamlessly, but the VCSA VM on the server is marked red as an invalid VM
—> This might be just an unpleasant coincidence
—> The VM was beyond repair; I had to re-install and reconfigure the VCSA

2. Shutdown:

– After shutdown, I swapped the ethernet cable, but put the new cable into the same ethernet port 1 and the same switch port, to be sure that nothing had changed except the cable
– After booting, we end up with "error loading /state.tgz fatal error" "buffer too small"
– Nothing helps except re-installing ESXi
—> On the positive side: re-installing while preserving the store with the VMs does work

Conclusion:

• Indeed, things are shaky, and yes, it's probably because of the newer firmware versions (10.12, 10.13)

• My workaround from above is only of limited help, but re-installing without losing the entire store with the VMs – if it is actually the workaround from above that contributes – is still better than losing everything

• You're right, simply keeping the MacPro running whenever possible is safest, AND you should NOT remove network cables at all.

Remedy:

William, we would all be very pleased if you guys from the VMware team could sit down with some folks from Apple to make ESXi 6.7 and 6.7U1 compatible with the newer MacPro firmware versions.
Thanks in advance.

We have tried to upgrade several Mac Pro 6,1 systems from ESXi 6.0 to both ESXi 6.5.0u1 and 6.5.0u2. After the initial installation, all works perfectly, but after a full power down (not a reboot, but power off/on), we get the PSOD. This is reproducible across multiple Mac Pro systems. I've repeated the upgrade 8 times already. Always the same result.

The differences between them are the firmware versions:
MP61.88Z.0116.B25.1702171857 (which is already beyond the MP61.88Z.0116.B17.1602221600 listed on the HCL)
and MP61.88Z.0125.B00.x (I believe from 10.13.6)

I have tried setting the ‘preferVmklinux=True’ boot option on various occasions and still end up with the same result on all systems: PSOD after a shutdown/restart.

I tried to search for methods to revert the firmware on the Mac Pro 6,1, but I see information on Apple's website: "this restore method cannot be used to return an Intel-based Macintosh computer's firmware to a previous version if a successful update has already been performed. You can only use this to restore the firmware after an interrupted or unsuccessful update."
I see that William states he has had success with firmware MP61.88Z.0120.B00.1708080652, but again, there is no way to revert the firmware.

I hope that upgrading to 6.7 has better results, similar to what Joel Cannon posted in August. Fingers crossed…


Author

William Lam is a Staff Solutions Architect working in the VMware Cloud on AWS team within the Cloud Platform Business Unit (CPBU) at VMware. He focuses on Automation, Integration and Operation of the VMware Software Defined Datacenter (SDDC).