Automating Docker Swarm and REX-Ray installs in GCE with Ansible

When it comes to managing your infrastructure and environments, you shouldn’t be doing anything by hand. We are big believers in “everything as code” here, from application to infrastructure. We are excited to announce a REX-Ray role for Ansible and a REX-Ray module for Puppet.

Ansible and Puppet are two widely used open source configuration management tools. They are most often leveraged to quickly produce repeatable, consistent deployments with minimal effort, allowing a single engineer to manage tens or hundreds of nodes the same way they would manage one.

The greatest advantage of using these tools is their ability to automatically populate REX-Ray’s config.yml file. Filling this file out correctly is best left to automation: let Ansible or Puppet’s templating engines make sure that the correct YAML entries for the given storage drivers are in place.
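As a rough sketch of how that templating works, a Jinja2 template along these lines could render a driver-specific config.yml. This is purely illustrative, not the template shipped with the role, and the variable names are assumptions:

    # Illustrative Jinja2 template for config.yml -- variable names are
    # assumptions, not the role's real interface.
    rexray:
      storageDrivers:
        - {{ rexray_storage_driver }}
    {% if rexray_storage_driver == 'ec2' %}
    aws:
      accessKey: {{ aws_access_key }}
      secretKey: {{ aws_secret_key }}
    {% endif %}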

We’ll first take a look at the specifics of the new Ansible role and Puppet module. After that, we’ll show off the power of these new tools with a detailed walk-through: using Ansible to deploy a Docker Swarm in Google Compute Engine (GCE) that is automatically configured to use REX-Ray as a Docker volume plugin.

Ansible

The Ansible role is available on Ansible Galaxy and the source code is also available on GitHub. Installing the role is as simple as:

$ ansible-galaxy install emccode.rexray

With the role installed, simply add it to the hosts where you want REX-Ray installed. The most important step in setting up the role is choosing which storage driver(s) to enable and passing the appropriate role variables.
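As a minimal sketch, a playbook applying the role with AWS credentials might look like the following; the variable names are illustrative assumptions, so check the role’s README for the real ones:

    # site.yml -- illustrative sketch; consult the emccode.rexray README
    # for the role's actual variables.
    - hosts: storage
      roles:
        - role: emccode.rexray
          rexray_storage_driver: ec2
          aws_access_key: "{{ vault_aws_access_key }}"
          aws_secret_key: "{{ vault_aws_secret_key }}"

Note the vault_-prefixed variables, which brings us to the next point.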

Now is a good time to point out that Ansible has great support for encrypted variables through Ansible Vault, which would be the perfect place to store your AWS credentials. With the role variables safely stored and encrypted in Vault, you would then run your playbook with the extra --ask-vault-pass switch, resulting in a command like:

$ ansible-playbook site.yml --ask-vault-pass
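The vaulted variables file itself can be created with Ansible’s own tooling, for example (the file path is just an illustrative convention):

$ ansible-vault create group_vars/all/vault.yml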

As a final treat, the GitHub repo includes a Vagrantfile for those wanting to give Ansible and REX-Ray a test drive. Vagrant 1.8+ is required because the Vagrantfile makes use of the ansible_local provisioner, which means you don’t even need Ansible installed on your Vagrant host machine.

Puppet

The Puppet module is available on Puppet Forge and the source code is also on GitHub. To install the module on your Puppet master, execute:

$ puppet module install emccode-rexray

With the module installed, you must include the class in the manifest for any nodes where you wish REX-Ray to be installed, again using AWS as an example.
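As a hypothetical sketch, the class parameters could equally be supplied through Puppet’s automatic Hiera lookup rather than inline in the manifest. The rexray:: parameter names below are illustrative assumptions, not taken from the module’s documentation, so consult its Forge page for the real interface:

    # Hypothetical Hiera data for the rexray class -- parameter names
    # are illustrative, not the module's documented interface.
    rexray::storage_driver: 'ec2'
    rexray::aws_access_key: 'AKIAEXAMPLEKEY'
    rexray::aws_secret_key: 'example-secret'

With data like this in place, the node manifest only needs an include rexray statement.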

Here’s how

Let’s walk through a more practical demonstration of what we can do with one of these tools and the roles we’ve published. We are going to deploy a REX-Ray-enabled Docker Swarm on Google Compute Engine, all with Ansible.

Connecting Ansible to GCE requires some one-time setup: installing the GCE support libraries and pointing Ansible at your service-account credentials. That may seem like a lot of steps, but it is only necessary the first time. If you are already an Ansible user, a GCE user, or both, then many of these steps will be familiar or already completed.
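As a sketch of that one-time setup (the details vary with the Ansible version): the GCE modules of this era depend on the Apache Libcloud Python library, and the gce.py dynamic inventory script reads your GCE project ID and service-account key from its companion gce.ini file.

$ pip install apache-libcloud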

Launch nodes in GCE

Next, let’s create our Swarm nodes in GCE. The Ansible playbooks referenced here are available on GitHub. To launch the default number of nodes (3), run:

$ ansible-playbook -i gce.py create_gce_swarm_nodes.yml

Here we are passing gce.py as Ansible’s inventory file. The playbook queries GCE to see if our nodes already exist and only creates new ones as needed.

At this point we’ve deployed one node designated swarm-master and two agent nodes (swarm-node-1 and swarm-node-2). These nodes have a base Ubuntu 14.04 image, your SSH key, and nothing else.

Configure nodes with Docker and REX-Ray

Now that we have our nodes created, it’s time to install Docker and REX-Ray, and then configure a Docker Swarm. To do that, we use our second playbook:

$ ansible-playbook -i gce.py configure_gce_swarm.yml -b -u <user>

We are again using our gce.py inventory script, which lets our playbook know how to SSH to all of our nodes. We are also using -b (“become”), which makes the remote commands run as root, and -u <user> to specify the user we want to SSH as. If you specify -u codenrhoden, Ansible will SSH to each node as user codenrhoden and then run everything through sudo.

This installs the rexray binary, starts the service, and configures REX-Ray to use the GCE storage driver. The playbook pre-stages the GCE JSON key to /root/gce_key.json, using ~/Downloads/gce_key.json as the source by default. If you want to upload the key from a different location, pass --extra-vars="gce_json_file=/path/to/key" to ansible-playbook when you run it.
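The rendered config.yml should end up looking something like the sketch below, which follows the REX-Ray 0.3-era configuration format; treat it as an approximation rather than the playbook’s exact output:

    # Approximate REX-Ray config.yml as rendered for GCE
    rexray:
      storageDrivers:
        - gce
    gce:
      keyfile: /root/gce_key.json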

There’s too much output from the play to capture here, so let’s skip straight to the end result.

The end result

Everything has run, but what do we really have? Let’s check it out. First, get the IP address for your swarm master:
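A hedged sketch of what that check might look like, assuming the gcloud CLI is configured; the Swarm manager port is an assumption here, so use whatever port the playbook configured:

$ gcloud compute instances list
$ docker -H tcp://<swarm-master-ip>:2375 info
$ docker -H tcp://<swarm-master-ip>:2375 run -ti --volume-driver=rexray -v mydata:/data busybox sh

The last command asks REX-Ray to create (or attach) a GCE persistent disk named mydata and mount it at /data inside the container.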

And there you have it – a functioning Docker Swarm cluster, running on GCE, and using REX-Ray as a volume driver to mount persistent volumes natively through GCE. Want to scale up your Swarm cluster? It’s as simple as running two steps:
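Presumably these are just re-runs of the two playbooks above; a sketch, assuming the node count is exposed as a variable (node_count is an illustrative name, not necessarily the playbook’s real variable):

$ ansible-playbook -i gce.py create_gce_swarm_nodes.yml --extra-vars "node_count=5"
$ ansible-playbook -i gce.py configure_gce_swarm.yml -b -u <user>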

Next Steps

What’s next for REX-Ray and configuration management? We’ll keep up with the progression of REX-Ray features like volume mount pre-emption and continue to make sure it’s easy to deploy REX-Ray in the most {code}-like ways possible. We can also add a Vagrantfile to the Puppet module to make getting started with Puppet just as easy as getting started with Ansible.

Right now, Ansible and Puppet are the hottest automation and orchestration tools out there, but they are certainly not the only ones! We’d love to hear from you about what tools you are using, especially when it comes to deploying and managing Docker or Mesos at scale – environments where REX-Ray is extremely useful. Are you using Chef instead? Let us know in the comments!

Introducing ViPR Command

The ViPRCommand project started as an idea to provide current ViPR/CoprHD users the ability to interact with the storage platform the way any “shell lover” would in their Linux/Unix systems.

The main concept emerged as a way to generate a directory structure from the resources exposed in the REST API (e.g. the endpoint GET /hosts becomes a directory /hosts), in which users execute commands (as in bash) to manipulate those resources (e.g. create new hosts, create volumes, and so on).

Command ‘ls’ lists the resources as directories

The shell commands are generated from a WADL file provided by ViPR/CoprHD, which describes all the available resources, their commands, and their parameters; these are then processed into a dynamically generated directory structure. It is this dynamic generation of the model that allows ViPRCommand to be used against most versions of ViPR/CoprHD without tying a specific version of the tool to a specific release.
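Conceptually, the WADL yields a tree keyed by resource path, with each resource’s HTTP methods determining which shell commands are available at that node. A toy illustration follows; the structure and method mapping are assumptions for explanation, not ViPRCommand’s actual internal model:

    # Toy model of the WADL-derived tree -- illustrative only.
    /hosts:
      methods: [GET, POST]        # 'ls' maps to GET; creation maps to POST
    /tenants:
      methods: [GET]
      children:
        "{tenant_id}":
          methods: [GET, PUT, DELETE]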

Commands ‘cd’ and ‘ls’ open and list tenants in the /tenants directory. Command ‘ll’ (similar to ‘ls -la’) shows detailed information about those resources.

ViPRCommand has many commonly expected shell commands available like:

‘cd’ – changes the resource directory

‘ls’ – lists the resources in the directory

‘ll’ – lists the URNs of the resources

‘find’ – finds a directory path

The most powerful commands are those that allow you to create, retrieve, modify, or delete resources.

There are many other commands; just type ‘help’ in the shell to view them, or ‘help’ followed by a command name to get more information about it. The tool looks very promising, and the team expects the community to help it evolve and mature as it is adopted.

The tool, written in Python, is available today. It can be downloaded, together with the source code and documentation, from GitHub (https://github.com/emccode/ViPRCommand) under the MIT license for those shell developers who would like to contribute.

Getting Free and Frictionless with ScaleIO

EMC {code} is excited to be at the forefront of helping contribute to an EMC future state where we thrive by aligning with a technology landscape focused on Open-Source technologies and DevOps artifacts.

So what is all this business with “free and frictionless”? How are products relevant to companies that focus on purely Open-Source strategies? The answer is not always simple, but we believe that products can still hold a ton of value in this changing world.

A first and very relevant example of this is ScaleIO. For companies looking to move in a direction similar to large web-scale companies like Google, Facebook, or Twitter, this is an important topic. Those web-scale pioneers have changed how they consume resources from their data centers: the shift is toward ensuring that any resource in the data center can do anything at any time. These companies look for software decoupled from hardware, which shifts the playing field and allows them to embrace methodologies focused on running everything as software, infrastructure-as-code, and hardware homogeneity.

In comes our Free and Frictionless motto at EMC. In cases where our technology or products are relevant and it makes sense to open source or release them for use in a free and easily consumed manner, we will strive to do so.

Let’s show what it means to be Free and Frictionless with ScaleIO!

Free

Download

This past week, EMC announced that ScaleIO is available to download and use for free.

Free from Pay-Walls

This is currently available through ftp.emc.com, so there are no forms to fill out.