How do we become a high-performing organization, one that moves faster and builds more secure and resilient systems? That's the $64,000 question!

A16Z strikes again! Andreessen Horowitz's epic podcast hosts world-class guests on all sorts of startup & new technology topics. This week they interview Jez Humble and Nicole Forsgren, who run DORA (DevOps Research and Assessment), which shows organizations how to realize the advantages of devops in the real world.

Technology does not drive organizational performance

Check out the 16:04 mark in the podcast…

“the point of distinction comes from how you tie process and culture together with technology through devops”

It’s the classic Amazon model. They’re running hundreds of experiments in production at any one time!

Mainframes or Kubernetes?

What about tooling? Is that important? Here’s what Jez has to say. Jump to 29:30…

“Implementing those technologies does *not* give you those outcomes. You can achieve those results with Mainframes. Equally you can use Kubernetes, Docker and microservices and not achieve those outcomes.”

I recently ran across this interesting question on a technology forum.

“I’m an engineering team lead at a startup in NYC. Our app is written in Ruby on Rails and hosted on Heroku. We use metrics such as the built-in metrics on Heroku, as well as New Relic for performance monitoring. This summer, we’re expecting a large influx of traffic from a new partnership and would like to have confidence that our system can handle the load.”

“I’ve tried to wrap my head around different types of performance/load testing tools like JMeter, Blazemeter, and others. Additionally, I’ve experimented with scripts which have grown more complex and I’m following rabbit holes of functionality within JMeter (such as loading a CSV file for dynamic user login, and using response data in subsequent requests, etc.). Ultimately, I feel this might be best left to consultants or experts who could be far more experienced and also provide our organization an opportunity to learn from them on key concepts and best practices.”

It used to be you'd point a LoadRunner-type tool at your webpage and let it run, then watch the load, memory & disk on your webserver or database. Before long you'd find some bottlenecks. A shortage of resources (memory, CPU, disk I/O) or slow queries were often the culprits. Optimizing queries, and ripping out those pesky ORMs, usually did the trick.

Today things are quite a bit more complicated. Yes, JMeter & BlazeMeter are great tools. You might also get New Relic installed on your web nodes, which will give you instrumentation on where your app spends its time. Still, it may not be easy. With microservices, you have the docker container & orchestration layer to consider. In the AWS environment you can have bottlenecks on disk I/O, where provisioned IOPS can help. But instance size also impacts network performance in the weird world of multi-tenant. So there's that too!

What's more, a lot of frameworks are starting to steer back towards ORMs again. Sadly this is not a good trend. On the flip side, if you're using RDS, your default MySQL or Postgres settings may be decent. And newer versions of MySQL are getting some damn fancy & performant indexes. So there's lots of improvement there.

There is also the question of simulating real users. What is a real user? What is an ACTIVE user? These questions may seem obvious, although I've worked at firms where engineering, product, sales & biz-dev all had different answers. But let's say you've answered that. Does our load test simply log in the user? Or use a popular section of the site? Or how about an unpopular section? Often we are guessing what "real world" users do and how they use our app.

The basics aren't tough. You need to know the anatomy of a Dockerfile, and how to set up a docker-compose.yml to ease the headache of docker run. You should also know how to manage docker images, use docker ps to find out what's currently running, and get an interactive shell (docker exec -it containerid bash). You'll also make friends with inspect.
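If you want a quick cheat sheet, the day-to-day commands look something like this (the container id is a placeholder):

$ docker images                       # list local images
$ docker ps                           # list running containers
$ docker exec -it 3f4e5d6c7b8a bash   # interactive shell inside a running container
$ docker inspect 3f4e5d6c7b8a         # config, volumes & network details

But what else?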

1. Manage image bloat

Docker images can get quite large. Even as you try to pare them down, they can grow. Why is this?

Turns out the architecture of docker means that as you add more stuff, it creates more "layers". So even as you delete files, the lower or earlier layers still contain them.
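The fix is to install and clean up inside a single RUN instruction. Here's a sketch of the pattern in a Dockerfile (the package is just a placeholder):

RUN apt-get update && apt-get install -y \
        build-essential \
    && rm -rf /var/lib/apt/lists/*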

This immediately cleans up the crap that apt-get leaves behind, without it ever becoming permanent in that layer. Cool! As long as you use "&&" it is part of that same RUN command, and thus part of that same layer.

Another option is to flatten a big image. Something like this should work:
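(A sketch, with myimage as a placeholder. Note that flattening this way drops image metadata such as CMD, ENV & EXPOSE, so you may need to re-specify those on import.)

$ docker create --name temp myimage:latest
$ docker export temp | docker import - myimage:flat
$ docker rm temp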

2. Orchestrate

Running docker containers on dev is great, and it can be a fast and easy way to get things running. Plus it can work across dev environments well, so it solves a lot of problems.

But what about when you want to get those containers up into the cloud? That's where orchestration comes in. At the moment you can use Docker's own Swarm, or choose Fleet or Mesos.

But the biggest players seem to be Kubernetes & ECS. The former of course is what all the cool kids in town are using, and coupled with the Helm package manager, it becomes a very manageable system. Get your pods, services, volumes, replicasets & deployments ready to go!

On the other hand, Amazon is pushing ahead with its Elastic Container Service, which is native to AWS, and not open source. It works well, allowing you to apply a JSON manifest to create a task. Then, just as with Kubernetes, you create a "service" to run one or more copies of that. Think of the task as a docker-compose file. It's in JSON, but it basically specifies the same types of things: entrypoint, ports, base image, environment, etc.
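To make that concrete, here's a hedged sketch of a minimal task definition (the family, image, ports & environment values are all placeholders):

$ cat > task.json <<'EOF'
{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "memory": 256,
      "portMappings": [{ "containerPort": 3000, "hostPort": 80 }],
      "environment": [{ "name": "RAILS_ENV", "value": "production" }]
    }
  ]
}
EOF
$ aws ecs register-task-definition --cli-input-json file://task.json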

For those wanting to go multi-cloud, Kubernetes certainly has appeal. But Amazon is on the attack. They have announced a service to further ease container deployments, dubbed Amazon Fargate. Remember how Lambda allowed you to deploy just your *code* into the cloud, and let Amazon worry about the rest? Imagine doing that with containers, and that's what Fargate is.

3. Registries & Deployment

There are a few different options for where to store those docker images.

One choice is Docker Hub. It's not feature rich, but it does the job. There is also Quay.io. Alternatively you can run your own registry. It's as easy as:

$ docker run -d -p 5000:5000 registry:2
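Once it's up, tag & push your images to it (myimage is a placeholder):

$ docker tag myimage localhost:5000/myimage
$ docker push localhost:5000/myimage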

Of course if you're running your own registry, now you need to manage it, and think about its uptime and its dependability to your deployment pipeline.

If you're using ECS, you'll be able to use ECR, a private docker registry that comes with your AWS account. You can use it even if you're not on ECS, though the login process is a little unusual.
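With a recent AWS CLI, the login dance looks something like this (the account id & region are placeholders):

$ aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
$ docker tag myimage 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest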

Once you have those pieces in place, you can do some fun things. Your Jenkins deploy pipeline can use docker containers for testing, spinning up a copy of your app just to run some unit tests. Or it can build your images and push them to your registry, for later use in ECS tasks or Kubernetes manifests. Awesome sauce!

I put together some of the most common questions I've heard, and my thoughts on the right solutions.

1. How would you load test?

Here’s an interesting question. Do you talk about tools? How do you approach the problem?

The first thing I talk about is simulating real users. If your site normally has 1,000 active users, how will it behave with 5,000, 10,000, 100,000 or 1 million? We can simulate this by using a load testing tool, and monitoring the infrastructure and database during the test.
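As a first rough cut, even a simple ApacheBench run will shake out the obvious bottlenecks (the URL & numbers are placeholders); JMeter then lets you script logins & multi-step user flows on top of that:

$ ab -n 50000 -c 500 https://www.example.com/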

But how accurate are those tests? What do active users do? Log in to the site? Edit and change some data? Where do active users spend most of their time? Are there some areas of the site that are busier than others? What about some dark corner of the site or product that gets less use, but is also less tuned? Suddenly a few users decide they want that feature, and performance slides!

Real world usage patterns are unpredictable. There is as much art as science to this type of load testing.

2. Why is Amazon S3’s 99.999999999% promise *not* enough??

I’ve heard people say before that S3 is extremely reliable. But is it?

Amazon's stated durability is eleven nines. What does this mean? Durability is confidence that a file is saved, that you will not lose it. It's on storage, and that storage has redundant copies. Great. You can be confident you will never lose a file.

What about uptime? That SLA is 99.9%, which allows roughly 45 minutes a month. Surprise! That amounts to most of an hour of DOWNTIME per month. And if your product fails when S3 files are missing, guess what, your business is down for that hour too.

That’s actually quite a *lot* of downtime.

Solution: You better be doing cross-region replication. You have the tools and the cloud, now get to work!
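A sketch of what that looks like with the AWS CLI (bucket names, role ARN & region are placeholders; versioning must be enabled on both the source and destination buckets):

$ aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled
$ cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::my-bucket-replica" }
    }
  ]
}
EOF
$ aws s3api put-bucket-replication --bucket my-bucket \
    --replication-configuration file://replication.json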

3. Why is continuous integration not about tools?

I hear a lot of talk about continuous integration. I’ve even seen it as a line item on a todo list I was handed. Hmmm…

I asked the manager, “so it says here setup CI/CD. Are there already unit tests written? What about integration tests?” Turns out the team is not yet on board with writing tests. I gently explain that automated builds are not going to get very far without tests to run. 🙂

CI/CD requires the team to be on-board. It’s first a cultural change in how you develop code.

It means more regular code checkins. It means every engineer promises not to break the build.

4. What can VPC peering do for you?

Amazon has made VPC peering a lot easier. But what the heck is it?

In the world of cloud networking, everything is virtual. You have VPCs, and inside those, public and private subnets. What if you have multiple AWS accounts? VPC peering can allow you to connect backend private subnets, without going across the public internet at all.

As security becomes front & center for more businesses, this can be a huge win.
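The CLI steps are roughly as follows (all ids & the CIDR are placeholders; you'll add a mirror-image route on the peer side too):

$ aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222 --peer-owner-id 123456789012
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333
$ aws ec2 create-route --route-table-id rtb-44444444 \
    --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-33333333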

5. Why go with a New York based resource?

As more and more small startups put together teams to build their MVP, the offshore market has never been hotter. And there are very talented engineers in faraway places, from Eastern Europe, to India and China. And South America too.

At ⅓ to ¼ the price, why hire US-based people? Well, one reason might be compliance: if you have sensitive data, it may have to be handled by US nationals.

Why New York based? Well, there is the value of being face-to-face and working side by side with teams. It may also ease the language barrier & communication, and avoid the timezone challenges that sometimes make collaboration difficult.

And lastly, ownership. With resources that are focused solely on you, and for whom you are a big customer, you're likely to get more personalized, focused attention.

6. What are some common antipatterns in the cloud?

Antipatterns are interesting, because you see them regularly, and yet they are the *wrong* way to solve a problem: either they're slower, or there is a better, more reliable way to solve it.

o Using EFS, Amazon's NFS solution, instead of putting assets in S3.

It might help you avoid rewriting code, but in the end S3 is definitely the way to go.

o Hardcoded IPs in security group rules instead of naming a group.

Yes, I've seen this. If you specify each webserver individually, what happens when you autoscale? Answer: the new nodes break! The solution is to put all the webservers in a group, and then add a security group rule allowing access from that group. Voila!
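The rule ends up looking something like this (group ids are placeholders), letting the web tier reach the database tier no matter how many nodes autoscaling adds:

# sg-0123456789abcdef0 is the db tier group; sg-0fedcba9876543210 is the webserver group
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 --source-group sg-0fedcba9876543210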

o Hardcoded credentials instead of IAM roles.

Credentials are the bane of applications. You hardcode them and things break later. Or they create a vulnerability that you forget about. That's why AWS invented roles. But did you know a server *itself* can have a role? That means that server, and any software running on it, has permissions to certain APIs within the Amazon universe. You can change a server's role, or its underlying policies, while the server is still running. No restart required!
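For example, attaching a role to a live instance looks something like this (the instance id & profile name are placeholders):

$ aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-app-role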

Implement CI/CD as a task item

Don’t forget culture & process are the big hurdles. Installing a tool is easy. Getting everyone using it everyday is the challenge!

Reducing and managing docker image bloat

As you change your docker images, you add layers. Even as you delete things, the total image size only grows! Seems counterintuitive. What's more, when you do all that work with yum or apt-get, those packages stay lying around. One way is to install packages and then clean up, all in one command. Another way is to export and import a finished image.

ssh-ing into servers or giving devs kubectl

Old habits die hard! I was watching Kelsey Hightower's keynote at KubeCon. He made some great points about Kubernetes. If you give all the devs kubectl, then it's kind of like allowing everybody to SSH into the boxes. It's not the way to do it!

1. A short history of application hosting

Just to give some context, I'll start with a quick walk through compute history. From the server cabinet in the back office, to the early managed hosting providers, and on to today's modern cloud offerings, I'll explain how we got here.

2. What the heck is serverless?

With that new context in mind, I'll take that evolution one step further, to managed functions. What's that, you ask? Just hand over your code to the cloud, and let them handle running the servers, provisioning load balancers, and reacting to your customers when they hit the endpoint.

3. Introducing a reference architecture

No presentation is complete without a proper diagram. My reference architecture makes use of Amazon's many cloud services, including API Gateway for the endpoint, Cognito for user authentication, Lambda for serverless functions, DynamoDB to store state information, S3 for storing objects, CloudFront for the edge caching network, and Route53 for the domain name.

4. Architecture walkthrough

Each of the components I mention above requires some explanation. I'll talk about how to set up a serverless project, and how to define and manage your API endpoint, which is where users first touch your application. I'll introduce user authentication with Amazon's own service or a third party like OneLogin or Auth0. From there you'll see how Amazon's NoSQL database DynamoDB works, and how you can store your original & edited images in S3. No site would be complete without an edge cache, and we'll have that set up too. Then store your domain name in Route53 and point it at your API.

I have to admit, though I'm not a management consultant, I do pick up the big ones from time to time. Good to Great, How to Win Friends & Influence People, The Lean Startup, The Innovator's Dilemma & Who Moved My Cheese are among my favorites.

I'd seen Ben Horowitz's book "The Hard Thing About Hard Things" in the news. I also really love the a16z podcast, and although I don't know a ton about the VC business, I thought it would be a good read.

Boy is that an understatement! The book is so readable & so accessible, there are nuggets of value in there for anyone in the startup world, or building their career, CEO or not!

On the efficient market hypothesis

I had always assumed Adam Smith's invisible hand was good theory, almost scripture. So it's cool to see a different perspective on this, one backed up by real experience.

“No, markets weren’t “efficient” at finding the truth; they were just very efficient at converging on a conclusion — often the wrong conclusion”. p52

What's more, for investors out there, it means good old-fashioned investigative work can still turn up gems that are worth investing in. Word to the wise.

On the Freaky Friday management technique

Freaky Friday was a movie from way back. In it, a mother & daughter are at each other's throats, frustrated with each other. They end up switching places, and quickly learn to sympathize with the other's plight in life.

Horowitz decided to put this method to use between two of his managers. Pretty ingenious.

“After just one week walking in the other's moccasins, both executives quickly diagnosed the core issues causing the conflict. They then swiftly acted to implement a simple set of processes that cleared up the combat and got the teams working harmoniously.” p253

On ancient wisdom & hard choices

“Every time you make the hard, correct decision you become a bit more courageous and every time you make the easy, wrong decision you become a bit more cowardly.” p213

Boy, can I relate to these bits of wisdom. Running my own business all these years hasn't been easy. There have been ups and downs, and times when people told me to take a different road. But that courage, when you find it, can be real fuel moving forward.

On instincts

“I realized that embracing the unusual parts of my background would be the key to making it through. It would be those things that would give me unique perspectives and approaches to the business.” p276

“When I work with entrepreneurs today, this is the main thing that I try to convey. Embrace your weirdness, your background, your instinct.” p276

I hadn't really thought of this, and it's an interesting point. It may be one of the reasons customers hire me that I hadn't realized: certainly I can give an original viewpoint. I think I will try to put this to work in the future.

On publicity in venture capital

This is a curious & fascinating point about the history of venture capital. Ripe for disruption indeed!

Marc discovered that the original venture capital firms in the late 1940s and early ’50s were modeled after the original investment banks such as J.P. Morgan and Rothschild. Those banks also did not do PR for a very specific reason: The banks funded wars—and sometimes both sides of the same war—so publicity was not a good idea. p271

Devops is in serious demand these days. At every meetup or tech event I attend, I hear a recruiter or startup founder talking about it. It seems everyone wants the benefits of talented operations brought to their business.

That said, the skill set is very broad, which explains why there aren't more devs picking up the baton.

By the way, I write an article like this every month, covering consulting lessons, tech trends, cloud and startup innovation. Get the next one via email.

I thought it would be helpful to put together a list of interview questions. There are certainly others, but here’s what I came up with.

1. Explain the gitflow release process

As a devops engineer you should have a good foundation in software delivery. With that, you should understand git very well, especially the standard workflow.

Although there are other methods to manage code, one solid & proven method is gitflow. In a nutshell you have two main branches, development & master. Developers check out a new branch to add a feature, then merge it back into the development branch. Your stage server can be built automatically off of this branch.

Periodically you will want to release a new version of the software. For this you merge development into master. UAT is then built automatically off of the master branch. When acceptance testing is done, you deploy master to production. Hence the saying, "always ship trunk."

Bonus points if you know that hotfixes are done directly off the master branch & pushed straight out that way.
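In day-to-day git terms, the flow looks roughly like this (branch & version names are placeholders):

$ git checkout development
$ git checkout -b feature/search-box       # new feature branch
# ...hack, commit, push...
$ git checkout development
$ git merge --no-ff feature/search-box     # back into development; stage builds here
$ git checkout master
$ git merge --no-ff development            # cut a release; UAT builds off master
$ git tag -a v1.4.0 -m "release 1.4.0"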

2. How do you provision resources?

There are a lot of tools in the devops toolbox these days. One that is great at provisioning resources is Terraform. With it you can specify, in declarative code, everything your application will need to run in the cloud: IAM users, roles & groups, DynamoDB tables, RDS instances, VPCs & subnets, security groups, EC2 instances, EBS volumes, S3 buckets and more.

You may also choose CloudFormation of course, but in my experience Terraform is more polished. What's more, it supports multi-cloud: the same language & workflow carry over to GCP or Azure, though you'll swap in that provider's resources.

It takes some time to get used to the new workflow of building things in Terraform rather than at the AWS CLI or dashboard, but once you do, you'll see benefits right away. You gain all the advantages of versioning code that we see with other software development. Want to roll back? No problem. Want to do unit tests against your infrastructure? You can do that too!
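A minimal sketch of that workflow (the region, AMI id & instance type are placeholders):

$ cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

# one tiny instance, just to show the declarative style
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}
EOF
$ terraform init    # download the aws provider plugin
$ terraform plan    # preview what would change
$ terraform apply   # build it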

3. How do you configure servers?

The four big choices for configuration management these days are Ansible, Salt, Chef & Puppet. For my money Ansible has some nice advantages.

First, it doesn't require an agent. As long as you have SSH access to your box, you can manage it with Ansible. Plus your existing shell scripts are pretty easy to port to playbooks. Ansible also does not require a server to house your playbooks. Simply keep them in your git repository, and check them out to your desktop. Then run ansible-playbook on the YAML file. Voila, server configuration!
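A minimal playbook sketch (the host group & package are placeholders):

$ cat > webserver.yml <<'EOF'
---
- hosts: webservers
  become: yes
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
    - name: make sure nginx is running
      service:
        name: nginx
        state: started
EOF
$ ansible-playbook -i inventory webserver.yml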

4. What does testing enable?

Unit testing & integration testing are super important parts of continuous integration. As you automate your tests, you formalize how your site & code should behave. That way when you automate the deployment, you can also automate the test process. Let the software do the drudgework of making sure a new feature hasn't broken anything on the site.

As you automate more tests, you accelerate the software development process, because you’re doing less and less manually. That means being more agile, and makes the business more nimble.

5. Explain a use case for Docker

Docker is a low-overhead way to run virtual machines on your local box or in the cloud. Although containers are not strictly distinct machines, nor do they need to boot an OS, they give you many of those benefits.

Docker can encapsulate legacy applications, allowing you to deploy them to servers that might not otherwise be easy to set up with older packages & software versions.

Docker can be used to build test boxes during your deploy process, to facilitate continuous integration testing.

Docker can be used to provision boxes in the cloud, and with Swarm you can orchestrate clusters too. Pretty cool!

6. How is communication relevant to devops?

Since devops brings a new process of continuous delivery to the organization, it involves some risk. Actually, doing things the old way involves more risk in the long term, because things can and will break. With automation, you can recover from failure more quickly.

But this new world requires a leap of faith. It's not right for every organization or in every case, and you'll likely strike a balance between what the devops holy book says and what your org can tolerate. Inevitably, communication becomes very important as you advocate for new ways of doing things.

My theory is that devops enables the business in a lot of profound ways. Sure it means one sysadmin can do much more, manage a fleet of servers, and support a large user base. But it goes much deeper than that.


Being able to stand up your entire dev, QA, or production environment at the click of a button transforms software delivery dramatically. It means it can happen more often, more easily, and with less risk to the business. It means you can do things like blue/green deployments, rolling out features without any risk to the production environment running in parallel.

What kind of chops does it take?

Strong generalist skills

For starters you'll need a pragmatist's mindset: not fanatical about one technology, but open to the many choices available. And as a generalist, you start with familiarity across a broad spectrum of skills, from coding, troubleshooting & debugging, to performance tuning & integration testing.

Stir into the mix good operating system fundamentals, top to bottom knowledge of Unix & Linux, networking, configuration and more. Maybe you’ve built kernels, compiled packages by hand, or better yet contributed to a few open source projects yourself.

Competent programmer

Although as a devops engineer you probably won't be doing frontend dev, you'll need some cursory understanding of it. You should be competent in Python and perhaps Node.js, maybe Ruby & bash scripts too. You'll need to understand JSON & YAML, plus CloudFormation & Terraform, if you want to deliver infrastructure as code.

Strong sysadmin with ops mindset

These are fundamental. But what does that mean? The ops mindset is born of necessity: having seen failures & outages, you prioritize around uptime. A simpler stack means fewer moving parts & less to manage. Do as Martin Weiner would suggest & use boring tech.

But you’ll also need to reason about all these components. That’ll come from dozens of debug & troubleshooting sessions you’ll do through years of practice.

Understand build systems & deployment models

Build systems like CircleCI, Jenkins or Gitlab offer a way to automate code delivery. And as their use becomes more widespread knowing them becomes de rigueur. But it doesn’t end there.

With deployments you'll have a lot to choose from: at the very simplest, a single-target deploy; then all-at-once, minimum-in-service, and rolling upgrades. And if you have completely automated your dev, QA & prod infrastructure buildout, you can dive into blue/green deployments, where you build a completely new infrastructure for each deploy, test it, then tear down the old.

Personality to communicate across organization

I think if you’ve made it this far you will agree that the technical know-how is a broad spectrum of modern computing expertise. But you’ll also need excellent people skills to put all this into practice.

That's because devops is also about organizational transformation. Yes, devs & ops have to get up to speed on the tech, but the organization has to get on board too. Many entrenched orgs pay lip service to devops, but still do a lot of things manually. This is as much out of fear as it is technical debt.

But getting past that requires evangelizing and advocating. For that, a leader in the devops department will need superb people skills, communicating concepts broadly across the organization to win hearts and minds.

She went on to say how much has changed in the last decade. We talked about how the database administrator, as a career role, wasn’t really being hired for much these days. Things had changed. Evolved a lot.

How do you keep up with all the new technology, she asked?

I went on to talk about Amazon RDS, EC2, Lambda & serverless as really exciting stuff. And let's not forget Terraform (I wrote a howto on terraform), Ansible, Jenkins and all the other deployment automation technologies.


We talked about Redshift too. It seems to be everywhere these days and starting to supplant hadoop as the warehouse of choice for analytics.

It was a great conversation, and afterward I decided to summarize my thoughts. Here’s how I think automation and the cloud are impacting the dba role.

My career pivots

Over the years I've poured all those computer science algorithms, coding & hardware skills into a lot of areas. Tools & popular languages change. Frameworks change. But solid deductive reasoning remains priceless.

o C++ Developer

Fresh out of college I was doing object-oriented programming on the Macintosh with CodeWarrior & PowerPlant. C++ development is no joke, and daily coding builds strength in a lot of areas. Turns out the application was a database application, so I was already getting my feet wet with databases.

o Jack of all trades developer & Unix admin

One type of job role that I highly recommend early on is generalist. At a small startup with less than ten employees, you become the primary technology solutions architect, getting your hands dirty with any project that comes along. I was able to land one of these roles. I got to work on Windows one day, Mac programming another, & Unix administration & Oracle yet another.

o Oracle DBA

The third pivot was to work primarily on Oracle. I attended Oracle conferences & my peers were Oracle admins. Interestingly, many of the Oracle “experts” came from more of a business background, not computer science. So to have a more technical foundation really made you stand out.

For the startups I worked with, I was a performance guru and scalability expert. Managers may have known they had Oracle in the mix, but ultimately the end goal was to speed up the website & make the business run. The technical nuts & bolts of the Oracle DBA role were almost incidental.

o MySQL & Postgres

As Linux matured, so did a lot of other open source projects. In particular the two big Open Source databases, MySQL & Postgres became viable.

Suddenly startups were willing to put their businesses on these technologies. They could avoid huge fees in Oracle licenses. Still there were not a lot of career database experts around, so this proved a good niche to focus on.

o RDS & Redshift on Amazon Cloud

Fast forward a few more years and it's my fifth career pivot. Amazon Web Services bursts onto the scene. Every startup is deploying their applications in the cloud, and they're using Amazon RDS, their managed database service, to do it. That meant the traditional DBA role was less crucial. Sure, the business still needed data expertise, but usually not as a dedicated role.

Time to shift gears and pour all of that Linux & server building experience into cloud deployments & migrating to the cloud.

o Devops, data, scalability & performance

Now of course the big sysadmin-type role is usually called an SRE or devops role, SRE being short for site reliability engineer. New name, but many of the same responsibilities.

Now, though, infrastructure as code is front & center. Tools like CloudFormation & Terraform, plus Ansible, Chef & Jenkins, are all quite mature, and being used everywhere.

Check out your infrastructure code from git and run terraform apply. Minutes later you have rebuilt your entire stack, from bare metal to a fully functioning & autoscaling application. Cool!

Cons of automation & databases

However, these days cloud, automation & microservices have brought a lot of madness too! Don't believe me? Check out this piece on microservice madness.

With microservices you have more databases across the enterprise, on more platforms. How do you restore all at the same time? How do you do point-in-time recovery? What if your managed service goes down?

Migration scripts have become a popular way to make DDL changes in the database. Going forward (adding columns or tables) is great. But should we be letting our deployment automation roll *BACK* DDL changes? Remember, that deletes data, right? 🙂

What about database drop & rebuild? Or throwing databases in a docker container? No bueno. But we're seeing this more and more, and new performance problems are cropping up because of it.

What about when your database upgrades automatically? Remember, when you use a managed service, it is built for 1000 users, not one. So if your use case is different, you may struggle.

In my experience upgrading RDS was a nightmare. Database as a service upgrades lack visibility. You don’t have OS or SSH access so you can’t keep track of things. You just simply wait.

No longer do we have "zero downtime". With Amazon RDS you have guaranteed-downtime upgrades. No, seriously.

As the field of databases fragments, we are wearing many more hats. If you like this challenge & enjoy being a generalist, you may feel at home here. But it is a long way from one platform one skill set career path.

Also, fragmented db platforms mean more complex recovery. I can't stress this enough. It becomes practically impossible to restore all microservices, all their underlying databases & all systems to one single point in time, if you ever need to.

2. Edit serverless.yml

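Here's a sketch of the kind of serverless.yml being walked through (the service & handler names are inferred from the stack name mentioned below; your runtime may differ):

service: myEndpoint

frameworkVersion: ">=1.1.0"

provider:
  name: aws
  runtime: nodejs6.10

functions:
  currentTime:
    handler: handler.endpoint
    events:
      - http:
          path: ping
          method: get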
Ok, what are we looking at here? frameworkVersion pins the version of the serverless framework. The provider is aws, because serverless is attempting to build cross-platform support; you may also use Azure, OpenWhisk, Google Cloud Functions, etc. runtime is your language.

Under functions, our main one is currentTime. The handler tells the serverless framework what code to match up with your function name. And finally, events tells serverless about the API endpoint to configure.

There's a lot of magic going on under the hood. The serverless framework is using CloudFormation to build things in the background for you. CloudFormation is like Latin: it is a foundational construct for the entire AWS world. You can formalize any object, from servers to SQS queues, DynamoDB tables, security groups, IAM users, S3 buckets, EBS volumes, etc. You get the idea.

Want to see what serverless did? Head over to your AWS dashboard and navigate to CloudFormation. You should see a new stack there called myEndpoint-dev. Scroll down and click the "Template" tab. You'll see the exact JSON code in all its gory detail!

4. Deploy!

Now the fun part. Let's deploy the code.

$ serverless deploy

Simple command, but it's doing a lot of work. The serverless framework packages up your Node.js code into a zip file and uploads it to AWS for you. You should see some output telling you what happened.
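Once it finishes, you can poke at the function straight from the CLI (assuming the currentTime function from the sketch above):

$ serverless invoke -f currentTime    # run the function once & print the result
$ serverless logs -f currentTime      # tail its recent CloudWatch logs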