The ideal candidates for this role come from a background in bare-metal Linux infrastructure, have experience with configuration management tools such as Chef or Puppet, and are skilled in automation. Prior experience with AWS is a big plus.

Collaborate with software engineers on architectural decisions that enable scalable, stable, and secure infrastructure; automate the provisioning and configuration of production systems using Chef, Terraform, and Docker.
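As a loose illustration of the kind of provisioning automation this involves (not this team's actual code, which runs through Chef and Terraform), here is a minimal Python/boto3 sketch that launches and tags a single EC2 instance; the region, AMI ID, instance type, and tag values are all hypothetical placeholders.

```python
# Minimal sketch: programmatic provisioning of one EC2 instance.
# All identifiers (region, AMI, instance type, tags) are placeholders;
# real provisioning here would be driven by Chef/Terraform.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Role", "Value": "app-server"},
            {"Key": "ManagedBy", "Value": "automation"},
        ],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")
```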

The infrastructure is built heavily on Linux and AWS, and you will work extensively with AWS tooling and Docker, as well as configuration management and continuous integration tools. You will also need experience with code deployment.

Build tools to analyze and optimize CPU, core, memory, and disk utilization of services that run on our Aurora and Hadoop clusters. Build a cost-effective and seamless way to run pipelines across our on-premises Hadoop cluster and Amazon EMR.
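For a sense of what running a pipeline on the EMR side can look like, here is a minimal Python/boto3 sketch that submits a Spark step to an existing EMR cluster; the cluster ID, step name, and S3 script location are hypothetical placeholders, not references to our actual jobs.

```python
# Minimal sketch: submitting a pipeline step to an existing Amazon EMR cluster.
# The cluster ID, step name, and S3 path are hypothetical placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE12345",          # placeholder EMR cluster ID
    Steps=[{
        "Name": "daily-aggregation",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/aggregate.py"],
        },
    }],
)

print("Submitted step:", response["StepIds"][0])
```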

This role is for a seasoned IT professional who loves solving problems that help their coworkers work faster and more efficiently, who knows the landscape of existing tools and when to use them, and who isn't afraid to dive into a bit of code when existing tools don't match our needs.