AWS re:Invent Day 3 Recap

The highlight of my third day at AWS re:Invent in Las Vegas was starting the day with the first keynote. I'm always amazed at the sheer amount of innovation announced all at once. These were some of the highlights:

Serverless Aurora: A database where you don't have to think about the underlying server at all, which sounds odd at first for a relational database, but that's exactly the point.

Managed Kubernetes: Something I guessed would happen soon after AWS joined the Cloud Native Computing Foundation, the consortium that 'owns' Kubernetes.

Serverless containers: This finally allows you to just deal with running the container, and lets you ignore the node layer.

A managed graph database – Neptune: As a longstanding fan of Neo4j, I'm very excited about this. It follows the Apache TinkerPop spec rather than Neo4j's Cypher, but I do love graphs, so this is very welcome.

Massive expansion to bring machine learning to more offerings: I’m not deep into ML, but anything that brings these capabilities to more products is a good thing.

Translation, transcription, and natural language processing: Tying these services together could provide some amazing capabilities, especially if you layer Lex and/or Polly on top.

And much more: IoT, DynamoDB, video streaming and recognition, and even a hardware machine learning camera.

The AWS CEO, Andy Jassy, was the Energizer Bunny of new services. However, I fear that AWS may be getting a little too broad for many of its users – a retuning of its capabilities to better organize services could go a long way toward making the platform more approachable for new users, because right now it's pretty intimidating.

Sessions one and two – Security, DevSecOps, and CloudWatch

Today I focused a lot on security, starting with the concept of DevSecOps. Every developer knows that embedding security as you go is far more effective than shoe-horning it in at the end. With the ever-increasing presence of DevOps practices, where more checks and deployments operate as a code pipeline, embedding security checks and balances as you go is a natural fit. Another great 'aha' moment was that if you adopt the 'infrastructure as code' mindset, you should apply the same quality checks as you do to your other code. To do this effectively, consider forms of static code analysis, code review, unit testing, and integration tests. Now you may be asking, "What does an integration test look like for a CloudFormation template?" I don't have the answer to that, but I relish the opportunity to have the discussion.
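As a flavor of what static analysis for infrastructure-as-code could look like, here is a minimal sketch. The rule itself (flagging security groups that open SSH to the whole internet) and the template contents are hypothetical examples, not anything presented in the session:

```python
import json

# Hypothetical static check for a CloudFormation template: flag any
# security group whose ingress rules open SSH (port 22) to 0.0.0.0/0.
def find_open_ssh(template: dict) -> list:
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            if (rule.get("CidrIp") == "0.0.0.0/0"
                    and int(rule.get("FromPort", -1)) <= 22 <= int(rule.get("ToPort", -1))):
                findings.append(name)
    return findings

# Example template with a deliberately bad rule.
template = json.loads("""
{
  "Resources": {
    "WebSG": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "SecurityGroupIngress": [
          {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "0.0.0.0/0"}
        ]
      }
    }
  }
}
""")
print(find_open_ssh(template))  # ['WebSG']
```

A check like this can run as a pipeline step, failing the build before the template is ever deployed – the same place your unit tests already live.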

The next security discussion presented a very clever approach to ensuring that nefarious activities are not going on. By leveraging CloudWatch events, you can trigger a Lambda function when, for example, someone starts an instance. You can then do some processing to validate whether that should have happened – should that AMI be running in that subnet? Should that security group be attached? If not, terminate it immediately. This happens before the machine even comes close to being available. I would take it even further and keep track of which API key/IAM user attempted it, and disable that user/key or move them to a 'quarantine' group.
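A minimal sketch of what that Lambda validation could look like, assuming a simplified CloudWatch event for a RunInstances call. The allowlists are hypothetical, the event shape is abbreviated from the real CloudTrail detail, and a real handler would call `terminate_instances` via boto3 rather than just returning a decision:

```python
# Hypothetical allowlists for this account.
ALLOWED_AMIS = {"ami-0abc1234"}
ALLOWED_SUBNETS = {"subnet-0def5678"}

def handler(event, context=None):
    """Validate a RunInstances event; decide whether to terminate."""
    detail = event["detail"]
    params = detail["requestParameters"]
    ami = params["instancesSet"]["items"][0]["imageId"]
    subnet = params.get("subnetId")
    violations = []
    if ami not in ALLOWED_AMIS:
        violations.append(f"unapproved AMI {ami}")
    if subnet not in ALLOWED_SUBNETS:
        violations.append(f"unapproved subnet {subnet}")
    if violations:
        # In a real function: boto3.client("ec2").terminate_instances(...)
        # and quarantine the calling IAM user recorded below.
        return {"action": "terminate", "reasons": violations,
                "user": detail.get("userIdentity", {}).get("arn")}
    return {"action": "allow"}

# Simplified example event: unapproved AMI in an approved subnet.
event = {"detail": {
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
    "requestParameters": {
        "instancesSet": {"items": [{"imageId": "ami-0bad9999"}]},
        "subnetId": "subnet-0def5678"}}}
print(handler(event)["action"])  # terminate
```

The returned `user` ARN is what you would feed into the quarantine step mentioned above.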

This approach can be further extended by leveraging custom CloudWatch events and tying into existing syslogs on servers. In other words, that same set of rules could be triggered if someone remotely connected to the machine – maybe that fires a warning, but if they sudo to gain privileges, you stop the instance, isolate it via security groups, and alert network operations of the attempted breach.
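The escalation logic described above could be sketched as a simple rule table mapping syslog patterns to actions. The patterns and action names here are hypothetical illustrations of the idea, not a real detection ruleset:

```python
# Hypothetical escalation rules: a remote login raises a warning,
# a privilege escalation stops and isolates the instance.
RULES = [
    ("sudo:", "stop_and_isolate"),
    ("Accepted publickey", "warn"),
]

def classify(log_line: str) -> str:
    """Map a syslog line to an escalation action."""
    for needle, action in RULES:
        if needle in log_line:
            return action
    return "ignore"

print(classify("Nov 29 10:01:02 web1 sshd[812]: Accepted publickey for ec2-user"))
print(classify("Nov 29 10:02:11 web1 sudo: ec2-user : TTY=pts/0 ; COMMAND=/bin/bash"))
```

In practice the classifier would run behind a custom CloudWatch event, with the `stop_and_isolate` action carried out by the same Lambda machinery as the instance-launch checks.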

Session three – Netflix’s practices

Netflix then presented some of the practices they use within their platform. The one I found the most interesting is that while they cycle their servers – ensuring a fresh 'golden image' – they actively 'snoop' the state of their running machines by snapshotting the drive, making a new volume from the snapshot, attaching it to an ops EC2 instance, and scanning it to ensure it's valid. This happens automatically and without the knowledge of any user on the server itself. It's very much a 'trust but verify' approach to server security. They are working to move to fully immutable instances, which would then allow them to lock down any remote access to the server, further securing the system.
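The snapshot-and-scan loop can be sketched as a short sequence of EC2 API calls. The method names mirror boto3's EC2 client; the orchestration, device name, and fake client below are my own illustration (injected so the flow can be exercised without AWS credentials), not Netflix's actual tooling:

```python
def audit_volume(ec2, volume_id, ops_instance_id, az, device="/dev/sdf"):
    """Snapshot a running volume and attach a copy to an ops instance."""
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description="security audit")["SnapshotId"]
    vol = ec2.create_volume(SnapshotId=snap, AvailabilityZone=az)["VolumeId"]
    ec2.attach_volume(VolumeId=vol, InstanceId=ops_instance_id, Device=device)
    return vol  # the ops instance then mounts and scans this volume

# Fake client so the flow runs without AWS; records the call order.
class FakeEC2:
    def __init__(self):
        self.calls = []
    def create_snapshot(self, **kw):
        self.calls.append("create_snapshot"); return {"SnapshotId": "snap-1"}
    def create_volume(self, **kw):
        self.calls.append("create_volume"); return {"VolumeId": "vol-audit"}
    def attach_volume(self, **kw):
        self.calls.append("attach_volume")

fake = FakeEC2()
print(audit_volume(fake, "vol-prod", "i-ops", "us-east-1a"))  # vol-audit
```

The key property is that none of this touches the target instance itself – the snapshot is taken out-of-band, which is what makes the inspection invisible to anyone logged into the server.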

Sessions four and five – DynamoDB and ECS

My last two sessions were focused on two different AWS products – DynamoDB and ECS. The DynamoDB session highlighted the scalability and capacity of DynamoDB and the changes it will be going through, including autoscaling by default, which should lower costs significantly. Another big change announced was the ability to back up and restore tables, which happens near-instantly regardless of table size. Very impressive. The other big announcement was 'global tables.' DynamoDB can now provide multi-region, master-master replication – it can only be enabled on new, empty tables, but it's a game changer for leveraging DynamoDB globally. As a note, you will be charged for write units as rows are copied to other regions, so don't replicate to any regions you don't need.
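To make that cost note concrete, here is a back-of-the-envelope sketch: with global tables, every write is effectively re-billed as write capacity in each replica region, so replication cost scales linearly with region count. The price constant is an illustrative placeholder, not current AWS pricing:

```python
# Rough monthly cost of provisioned write capacity across replica regions.
# Assumes items <= 1 KB (one WCU per write) and a placeholder hourly price.
def replicated_wcu_cost(writes_per_sec, regions, price_per_wcu_hour=0.00013):
    return writes_per_sec * regions * price_per_wcu_hour * 24 * 30

two_regions = replicated_wcu_cost(100, regions=2)
three_regions = replicated_wcu_cost(100, regions=3)
print(round(three_regions / two_regions, 2))  # 1.5
```

The takeaway matches the session's advice: adding a third replica region multiplies write cost by 1.5x over two regions, so only replicate where you actually serve traffic.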

ECS – Elastic Container Service – had a lot of announcements in the keynote, but the advanced patterns session didn't touch much on those features. It went through how ECS works and what it's good for. The talk improved once BuzzFeed started discussing their usage of ECS – seeing real-world, practical usage is, as expected, always of more value. They discussed the 'rig' framework they put together, which looks like a great system for smoothing ECS-based developer and product flows. This is something I will definitely be researching post-re:Invent. With all of the impending changes to containers on AWS – serverless containers, Kubernetes, and ECS updates – I would recommend taking a few breaths and evaluating all the new paths available.

Looking to Day 4 – let’s meet!

In summary, Day 3 continued the great re:Invent experience – the keynote delivered more features to AWS, some expected, but many surprising. Tomorrow's keynote should announce even more capabilities. Still holding the largest share of the cloud market (44%), AWS is not letting up on innovation. This is good, but they risk spreading too wide. Some of the IoT, ML, and media capabilities announced this week are great, but I think deepening the features of existing products provides new functionality without significantly increasing the intimidation of getting started. See you tomorrow!

Are you here at AWS re:Invent and want to meet up? Connect with me on Twitter at @MrDanGreene to follow along with my live-tweet of my experience and to set up a time to meet. Or just keep an eye out for me in the sea of thousands upon thousands of fellow re:Invent-goers.

Dan Greene

Director of Cloud Services

Dan Greene is the Director of Cloud Services at 3Pillar Global. Dan has more than 20 years of software design and development experience, with software and product architecture experience in areas including eCommerce, B2B integration, geospatial analysis, SOA architecture, and Big Data; he has focused the last few years on cloud computing. He is an AWS Certified Solutions Architect who worked at Oracle, ChoicePoint, and Booz Allen Hamilton prior to 3Pillar. He is also a father, amateur carpenter, and runs obstacle races including Tough Mudder.