The Serverless Security Conundrum: Who Manages Security?

Written by Tal Melamed

August 13, 2018

In “‘Serverless’ is just a name. We could have called it ‘Jeff’,” Paul Johnston writes, “The idea of ‘serverless’ is NOT about removing the servers completely (or you couldn’t use the internet at all), but essentially paying for services that mean that someone else manages the servers for you, thereby reducing maintenance load…

“In tech, we always rely on others. Serverless is just an extension of that.”

Johnston goes on to say that not having access to the server can make things harder, but not impossible. And that’s not necessarily worse, just different.

And the same applies to securing serverless apps. In some ways, it’s easier than securing traditional web apps. In other ways, it’s harder. And in yet other ways, it’s merely different. In this post, we’ll summarize these key differences and provide actionable steps you can take today to secure serverless apps.

How Serverless Apps are More Secure

No Need to Handle OS & Runtime Security & Patching

Gone. With serverless, this entire burden simply disappears.

Generally, the cloud service providers will do a better job than you at maintaining servers, so handing off this responsibility not only allows you to focus on building great apps but likely results in increased security.

Stateless/Ephemeral = No Long-Term Resident Injections

Serverless functions run for a few seconds and then die, and their containers get recycled. Because functions, for the most part, come and go and keep no persistent state, long-term resident attacks, such as malware that quietly lives on a compromised server, become far harder to mount.

Improved Visibility

Due to the logs and monitoring tools provided by your cloud services provider, you have more visibility regarding which functions interact with which, what resources are being accessed, how frequently, etc. All that visibility can help substantially with security.

How Serverless Security is Different

More IAM Roles to Configure

This can be both an advantage (an opportunity to apply fine-grained control over permissions) and a disadvantage (it’s more work!). But it’s just different. More on this later.

Billing

With serverless, you only pay for what you use, and not paying for idle resources can save money. However, if a security tool adds processing time to each invocation, that overhead is multiplied across every request you serve each month. Additionally, not paying for idle resources can encourage sprawl and clutter, so you need to stay on top of your inventory and its status. While unused functions might not impact your bill, they still matter as additional attack surface.
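To make that multiplication concrete, here is a back-of-the-envelope sketch. The numbers are purely illustrative (the rate below resembles a Lambda-style GB-second price but is not a quote of current pricing):

```python
def added_monthly_cost(requests_per_month, added_seconds, memory_gb,
                       price_per_gb_second=0.0000166667):
    """Extra compute cost from a tool that adds latency to every invocation.

    price_per_gb_second is an illustrative serverless compute rate,
    not current pricing for any provider.
    """
    return requests_per_month * added_seconds * memory_gb * price_per_gb_second

# 50M requests/month, +100 ms per request, 512 MB functions:
extra = added_monthly_cost(50_000_000, 0.1, 0.5)  # roughly $41.67/month
```

A tenth of a second sounds negligible per request; the point is that at serverless scale it becomes a line item you can actually compute.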

Attacks

Serverless functions still execute code, even without provisioning or managing servers. If that code is written with poor security practices, it remains vulnerable to traditional attacks. The goals of the attacks don’t change with serverless, but the methods do, so you have to know what to look out for.
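For example, classic command injection applies just as well to a function whose input arrives via an event instead of an HTTP request. The handler and tool name below are hypothetical; the point is that attacker-influenced event data must never be spliced into a shell string unescaped:

```python
import shlex

def build_convert_command(filename):
    """Build a shell command for a hypothetical image-conversion tool.

    UNSAFE version (don't do this): "convert " + filename + " out.png"
    -- a filename like "x; rm -rf /" would smuggle in a second command.
    """
    # shlex.quote escapes shell metacharacters, so attacker-supplied
    # data can't break out of its argument position.
    return "convert {} out.png".format(shlex.quote(filename))

def handler(event, context=None):
    """Hypothetical Lambda-style handler; event shape is simplified."""
    return build_convert_command(event["filename"])
```

The same reasoning covers SQL injection, path traversal, and the rest of the traditional catalog: the entry point changed, the bug class didn’t.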

How Serverless Security is More Difficult

Security Visibility

Yes, yes, I just said that visibility is a security benefit. But the problem is seeing the proverbial forest for the trees: with so much information and so many resources, it can be difficult to make sense of it all. If you have ten containers, you can tell whether they’re running or not.

But when you have 1,000 functions, it’s more difficult to determine if everything is behaving the way it’s supposed to. With a billion events in your log every day, it’s difficult to know which are important. This is one way in which the machine learning of a serverless security solution can be particularly beneficial in detecting threats.

Loss of the Traditional Perimeter

When you want to secure your property, you can put a fence around it. To secure your traditional web app, you can put a WAF in front. But with serverless, you can’t rely on your WAF. Because application-layer firewalls are only capable of inspecting HTTP(S) traffic, a WAF will only protect functions triggered through an API gateway. Your WAF will provide no protection against any other event trigger type, such as storage events, queues, or stream processing.

So How Do You Secure Serverless Apps?

Traditionally, developers wrote code and packaged their workloads, which were then wrapped in security controls put in place by DevOps or security operations. But this process just won’t work for serverless. If developers need to wait on security to open a port or configure IAM roles and security groups, the speed advantages of serverless will be lost and the process will break down.

IAM Roles X 1,000?

You must ensure security is tightly wrapped around your application and applied correctly to each individual resource: every function, S3 bucket, and so on. Those IAM roles are critical to making your app as secure as possible.
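Consider a function that reads objects from an S3 bucket. A common shortcut is to grant it a blanket policy like the following sketch (standard AWS IAM policy JSON; reconstructed here as an illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```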

This allows all actions on all S3 buckets. It gets the job done: the code will run without failing or throwing errors for lack of permissions. However, a slightly better, more secure version might be:
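```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
```

Here `examplebucket` is a placeholder for your actual bucket name; this sketch grants read-only access to objects in that single bucket and nothing else.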

In the last case, an injection attack into this area of the code could only read files from this particular bucket. The function can’t read files from any other bucket, and can’t write to or delete the bucket. That will significantly shrink what an attacker can do.

Don’t Grant Functions All the Time in the World

Shrink not just what a function can do, but how long it can run; functions should have a tight runtime profile. If an attacker succeeds with a code injection, a generous timeout gives them more time to do damage. Shorter timeouts force them to repeat the attack more often, which makes the attack more visible.
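As a sketch (the function name and values here are placeholders), the Serverless Framework lets you set a tight timeout and memory limit per function in `serverless.yml`:

```yaml
functions:
  processUpload:                     # hypothetical function name
    handler: handler.process_upload
    timeout: 3                       # seconds; keep as low as the workload allows
    memorySize: 128                  # MB
```

Profile your function’s normal runtime first, then set the timeout just above it rather than accepting a generous default.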

And yes, you need to consider permissions even for internal functions! For example, an internal function that cannot be triggered through an API gateway might still have permissions to publish to SNS, write to S3 buckets, send email, and so on, and can still be invoked by an attacker who gains a foothold elsewhere.

Don’t Drink from a Poisoned Well

Finally, don’t forget application dependencies. Cloud-native applications tend to comprise many third-party modules and libraries, and attackers actively try to slip their malicious code into commonly used projects.

Securing application dependencies requires access to a good database and automated tools to continuously prevent new vulnerable packages from being used and alert you to newly disclosed issues.
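The core idea behind such tools can be sketched in a few lines: compare your pinned dependencies against a feed of known-bad versions. This is a toy illustration only (the packages, versions, and advisory feed below are made up; real scanners such as npm audit or OWASP Dependency-Check rely on curated vulnerability databases and version-range matching):

```python
# Hypothetical advisory feed: package name -> known-vulnerable versions.
ADVISORIES = {
    "left-padder": {"1.0.0", "1.0.1"},  # made-up package and versions
    "fast-json": {"2.3.0"},
}

def audit(dependencies):
    """Return (package, version) pairs that match a known advisory.

    `dependencies` maps package name -> pinned version, as you would
    read them out of a lockfile.
    """
    return sorted(
        (name, version)
        for name, version in dependencies.items()
        if version in ADVISORIES.get(name, set())
    )

flagged = audit({"left-padder": "1.0.1", "fast-json": "2.4.0"})
# flagged -> [("left-padder", "1.0.1")]
```

The value of a real tool is in the quality and freshness of its advisory database and in running this check continuously, not just at build time.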

Tal has 15 years’ experience in the information security field, specializing in security research and vulnerability assessment. Prior to becoming Head of Security Research at Protego, Tal was a tech leader at AppSec Labs, leading and executing a variety of security projects for serverless, IoT, mobile, web, and client applications, as well as working for leading security organizations such as Synack, CheckPoint, and RSA. Tal is also a keen speaker, training DevOps engineers and hackers around the world.