Before joining Turbonomic, Eric Wright served as a systems architect at Raymond James in Toronto. With a background in virtualization, OpenStack, business continuity, PowerShell scripting and systems automation, Eric has been named a VMware vExpert and Cisco Champion. He has worked in many industries, including financial services, health services and engineering firms. As the author behind DiscoPosse.com, a technology and virtualization blog, Eric is also a regular contributor to community-driven technology groups such as the vBrownBag community, and he leads the VMUG organization in Toronto, Canada. He is an author for Pluralsight, the leading provider of online training for tech and creative professionals. Eric’s latest course is “Introduction to OpenStack,” which you can check out at pluralsight.com.

Messing around with Mesos: A DCOS Primer – Part 2

In our first article we took an initial look at Mesos and the idea of the Mesosphere DCOS. Mesos and the Mesosphere offering bring more options to the data center, and the timing is ripe: companies are revisiting their IT strategies and really seeing how next-generation applications may drive the need for a next-generation data center OS strategy.

An offer you can’t (or can) refuse

The resources inside a Mesos cluster are advertised to the master in order to provide a view of the available CPU and memory. Once the slave nodes report to the master node, an allocation policy is invoked to make resource offers to the framework schedulers, which decide where to place application workloads across the environment.

For those who want to dive into the deep end of the pool, there is a research paper on Dominant Resource Fairness, which is at the heart of the Mesos resource allocation model. It is exciting to see this interesting algorithm in play as Mesos begins to really gain some steam in the industry. It may not be long before we see more application providers and integrators build their own custom allocators.
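To give a flavor of the idea, here is a minimal sketch of the greedy DRF loop: each user's "dominant share" is the largest fraction of any one resource they hold, and the next task always goes to the user with the smallest dominant share. This is an illustration reusing the worked example from the DRF paper (9 CPUs, 18 GB of memory), not the actual Mesos allocator code.

```python
def drf_allocate(total, demands):
    """Greedy DRF: repeatedly give one task to the user with the
    lowest dominant share, until no user's next task fits."""
    used = {r: 0 for r in total}
    alloc = {u: {r: 0 for r in total} for u in demands}
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # Largest fraction of any single resource this user holds.
        return max(alloc[u][r] / total[r] for r in total)

    while True:
        # Users whose next task still fits in the remaining capacity.
        eligible = [u for u in demands
                    if all(used[r] + demands[u][r] <= total[r] for r in total)]
        if not eligible:
            return tasks
        u = min(eligible, key=dominant_share)
        for r in total:
            used[r] += demands[u][r]
            alloc[u][r] += demands[u][r]
        tasks[u] += 1

# Worked example from the DRF paper: 9 CPUs, 18 GB total.
total = {"cpu": 9, "mem": 18}
demands = {"A": {"cpu": 1, "mem": 4},   # memory-dominant user
           "B": {"cpu": 3, "mem": 1}}   # CPU-dominant user
print(drf_allocate(total, demands))     # → {'A': 3, 'B': 2}
```

As in the paper, user A ends up with three tasks and user B with two: each gets roughly two-thirds of its dominant resource rather than a naive 50/50 split of any single resource.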

The great thing about the recent GA launch of the DCOS is that you can easily test drive the environment on Amazon Web Services. The tryout process is free on the Mesosphere side, but it will of course incur charges on AWS.

I’ve created my AWS keypair, which will be attached to the new cluster, so now it is as simple as walking through the wizard to get started:

Next we go through the setup process and select the parameters for our DCOS environment. Since I’m trying a three-node master setup, I will also put some slave nodes in place to get a real sense of building out a properly sized cluster.

Once my stack is created with all the necessary parameters, I can follow along in my CloudFormation console to see the progress:
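If you prefer to watch the progress from code rather than the console, the same information comes back from a CloudFormation describe-stacks call. This is a hedged sketch: the helper below only parses the response shape; with boto3 you would fetch the live response with `boto3.client("cloudformation").describe_stacks(StackName="dcos-demo")`, where the stack name is a placeholder for whatever you named yours.

```python
def stack_status(response):
    """Return (status, is_done) for the first stack in a
    DescribeStacks-style response."""
    status = response["Stacks"][0]["StackStatus"]
    done = status.endswith("_COMPLETE") or status.endswith("_FAILED")
    return status, done

# Sample response, trimmed to just the fields we read.
sample = {"Stacks": [{"StackName": "dcos-demo",
                      "StackStatus": "CREATE_IN_PROGRESS"}]}
print(stack_status(sample))  # → ('CREATE_IN_PROGRESS', False)
```

You could wrap this in a polling loop and wait for the status to flip to `CREATE_COMPLETE` before moving on.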

With the process being this simple, you can just imagine why it is a powerful shift in how organizations may want to build and deploy applications. As a long-time user of many different hypervisor and cloud platforms, I find that the DCOS process adds a new level of abstraction, which could be the trigger for people to take a long look at the Mesosphere architecture.

Now, let’s check our status:

With our creation process completed, simply go to the public URL provided in the outputs of your CloudFormation stack, which will bring you to the master console:
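That public URL is published as a stack output, so it can also be pulled out programmatically. A sketch of extracting it from a DescribeStacks-style response follows; note that the output key `DnsAddress` and the sample value are assumptions here, so check the Outputs section of your own stack for the actual key and address.

```python
def find_output(response, key):
    """Return the value of a named stack output, or None if absent."""
    for out in response["Stacks"][0].get("Outputs", []):
        if out["OutputKey"] == key:
            return out["OutputValue"]
    return None

# Sample response, trimmed to just the fields we read.
# The key and value below are illustrative placeholders.
sample = {"Stacks": [{"Outputs": [
    {"OutputKey": "DnsAddress",
     "OutputValue": "dcos-demo-elb-123456.us-west-2.elb.amazonaws.com"}]}]}
print(find_output(sample, "DnsAddress"))
```

Paste that address into a browser and you land on the master console.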

Yes, it is just that easy 🙂

Note that Mesos, the open source product, and Mesosphere, the commercial product, sometimes get used interchangeably. This is tricky because Mesos as a platform is made easier to use by the Mesosphere management tools. That means that when an enterprise consumer wants to dive in with the DCOS, it is good to learn early whether adopting a commercial product built on the open source project is the right approach. As a big fan of getting the best of both worlds, I would lean that way myself.

Hopefully this gives a view of how easy it is to set up a DCOS environment, and we can explore more in the future.