Rust Runtime for AWS Lambda

AWS Lambda, which makes it easy for developers to run code for virtually any type of application or backend service with zero administration, has just announced the Runtime APIs. The Runtime APIs define an HTTP-based specification of the Lambda programming model that can be implemented in any programming language. To accompany the API launch, we have open sourced a runtime for the Rust language. If you’re not familiar with Rust, it’s a programming language for writing and maintaining fast, reliable, and efficient code.

There’s one more setting to make in the Cargo.toml file. When configured to use a custom runtime with the Runtime APIs, AWS Lambda expects the deployment package to contain an executable file called bootstrap. We can configure Cargo to generate a file called bootstrap, regardless of the name of our crate. First, in the [package] section of the file, add an autobins = false setting. Then, at the bottom of the Cargo.toml, add a new [[bin]] section:
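Putting both changes together, the Cargo.toml additions might look like this (a sketch; the crate name is a placeholder, and the path assumes your source lives in the default src/main.rs location):

[package]
name = "my-rust-function"   # your crate name, can be anything
version = "0.1.0"
autobins = false            # stop Cargo from auto-discovering binaries

[[bin]]
name = "bootstrap"          # Lambda requires an executable with this exact name
path = "src/main.rs"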

Next, open the main.rs file that Cargo created in the src folder of your project. Copy the content from the basic example above and paste it into the file, replacing the stub main function that Cargo generated. With the new source in place, we are almost ready to build and deploy our Lambda function.

Before we launch our build, we need to make sure that the Rust compiler is targeting the correct platform. AWS Lambda executes your function in an Amazon Linux environment. Unless you are already running this tutorial on an x86-64 Linux environment, you’ll need to add a new target for the Rust compiler – the Rustup tool makes this easier. Follow the instructions below to compile our basic example on Mac OS X.
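With rustup installed, registering the Linux target used throughout this tutorial is a single command (the musl target produces a statically linked binary that can run in the Amazon Linux environment):

$ rustup target add x86_64-unknown-linux-musl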

Compiling on Mac OS X

Before we build the application, we’ll also need to install a linker for the target platform. Fortunately, the musl-cross tap from Homebrew provides a complete cross-compilation toolchain for Mac OS.

$ brew install filosottile/musl-cross/musl-cross

Now we need to tell Cargo to use the newly installed linker when building for the x86_64-unknown-linux-musl platform. Create a new directory called .cargo in your project folder and, inside it, a new file called config.
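The config file maps the target triple to the linker binary installed by the Homebrew tap above; a minimal version looks like this:

[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"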

The build process creates an executable file called bootstrap in the ./target/x86_64-unknown-linux-musl/release/ directory. Lambda expects the deployment package to be a zip file. Run the following commands to create a deployment zip file for AWS Lambda:
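Assuming the musl target set up earlier, the build and packaging steps look like this (zip’s -j flag junks the directory path so that bootstrap sits at the root of the archive, where Lambda expects it):

$ cargo build --release --target x86_64-unknown-linux-musl
$ zip -j rust.zip ./target/x86_64-unknown-linux-musl/release/bootstrap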

To simplify the development and build process, we’ll be adding a Cargo builder to the SAM CLI (Serverless Application Model). When that release of the SAM CLI is out, you’ll be able to simply run sam build with a SAM template.

Deploying the Function on AWS Lambda

We can now deploy this file to AWS Lambda. Navigate to the AWS Lambda console and create a new function.

Leave the Author from scratch option selected and give your function a name – I called mine test-rust. Next, from the Runtime dropdown, select Provided. Our sample function doesn’t require any special permissions. You can select an existing role if you already have a basic execution role, or ask the Lambda console to create a new one with basic permissions (you don’t have to pick a template). Finally, click Create function.

In the function screen, use the Upload button in the Function code section to upload the rust.zip file that we created in the build step of this tutorial. With the new file selected, Save the changes to the function. We do not need to make any other configuration changes.

Because our code is entirely contained within the bootstrap executable that Lambda will start, the Handler setting is not needed. 128 MB of memory and a 3-second execution timeout provide plenty of headroom for a “Hello, world.”

When throwing an error, the runtime can optionally include the full stack trace in the output of the function. To enable this, simply set the RUST_BACKTRACE environment variable to 1.

We can now test our function. In the Lambda console, click the Test button on the top right. Since this is the first time we are testing this function, the Lambda console asks us to define a test event. In the sample code above, you might have noticed that we expect a firstName property in the incoming event. Use the JSON below as the test event and give your test object a name.

{
    "firstName": "Rustacean"
}

Finally, click Create in the test event modal window. With the new test event saved, click Test again on the top right of the console to actually start the function. Expand the “execution result” section to take a look at the function output and the log.

Congratulations! You have now built and deployed your first AWS Lambda function written in Rust. Next, try to deploy this function using a Serverless Application Model (SAM) template.

Code Deep Dive

Now that we have a Rust Lambda function running, let’s break down the sample code into its most important components. Starting from the top, we import our crates:
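For reference, the import section of the sample looks roughly like this (written in the pre-2018 edition style discussed below):

#[macro_use]
extern crate lambda_runtime as lambda;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate log;
extern crate simple_logger;

use lambda::error::HandlerError;
use std::error::Error;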

The first crate we import is lambda_runtime. This is our new runtime, which we rename to lambda for conciseness. You might also notice the #[macro_use] attribute – it tells the Rust compiler that we’re importing macros from the crate. You won’t need to do this for much longer, as the Rust 2018 edition allows importing macros like normal functions or values. The runtime defines a lambda! macro that makes it easy to bootstrap the runtime.

The serde_derive crate also uses macros, in this case to generate marshallers for a given struct. You’ll notice that the structs in the sample code are annotated with #[derive(Serialize, Deserialize)]; the derived implementations handle serialization and deserialization, respectively.
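For example, the event and output types for the firstName test event used earlier could be declared like this (the struct names are illustrative):

#[derive(Deserialize, Clone)]
struct CustomEvent {
    // maps the JSON "firstName" property onto a snake_case field
    #[serde(rename = "firstName")]
    first_name: String,
}

#[derive(Serialize, Clone)]
struct CustomOutput {
    message: String,
}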

The library uses the macros defined by the log crate to produce log messages. The sample code includes the simple_logger crate to print messages to stdout. There are many crates that implement the log facade, and the runtime itself is not opinionated on which one you should pick.

After the extern and use statements, our sample code declares the main() function – the entry point of our bootstrap executable. This is the code that will run when Lambda starts our function.
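A minimal main() along these lines (the handler name matches the my_handler pointer used further down) would be:

fn main() -> Result<(), Box<dyn Error>> {
    // Initialize logging before the runtime starts polling for events
    simple_logger::init_with_level(log::Level::Info)?;
    // Hand control over to the custom runtime
    lambda!(my_handler);
    Ok(())
}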

The first thing we do is initialize the simple_logger and set the logging level to Info. You can change this to Debug or Trace to receive more information on what the library and its dependencies are doing behind the scenes. Be aware that the simple_logger crate takes a lock on stdout, so logging in debug or trace mode will slow down your function considerably.

Next, we use the lambda!() macro defined in the lambda_runtime crate to bootstrap our custom runtime. In its most basic form, the macro takes a pointer to the handler function defined in your code. The custom runtime uses the hyper library to make HTTP requests to the Lambda Runtime APIs. You can optionally pass your own Tokio runtime to the lambda!() macro:

let rt = tokio::runtime::Runtime::new()?;
lambda!(my_handler, rt);

You would want to create a custom Tokio runtime in cases where existing libraries would create their own runtime if not prompted. You can see a full, working example here.

With this, the custom runtime launches and begins polling the Lambda Runtime APIs for new events. The next section of the code defines the handler function. The handler function must respect the Handler type defined in the lambda_runtime crate.

The handler function receives an event object that implements the serde::Deserialize trait. The custom runtime also generates a Context object for each event and passes it to the handler. The Context object contains the same properties you’d find in the official runtimes.

The return value of the handler must be a Result with a custom output type that implements the serde::Serialize trait. Additionally, the custom runtime library defines a HandlerError type that you can use to wrap custom errors. You can use the new_error(msg: &str) method on the Context object to instantiate a new HandlerError with a backtrace. The custom runtime knows how to serialize a HandlerError to JSON, including the backtrace when the RUST_BACKTRACE environment variable says it should do so.
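Putting the signature, the Context object, and the error-handling pieces together, a handler that satisfies these constraints could look like this (the event and output type names are illustrative):

fn my_handler(e: CustomEvent, c: lambda::Context) -> Result<CustomOutput, HandlerError> {
    if e.first_name == "" {
        // new_error produces a HandlerError with a backtrace attached
        return Err(c.new_error("Empty first name"));
    }
    Ok(CustomOutput {
        message: format!("Hello, {}!", e.first_name),
    })
}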

Conclusion

This runtime is still in its early stages, and we’d love your feedback as it evolves. We’re also aware of existing Rust-on-Lambda libraries such as lando, rust-aws-lambda, and rust-crowbar, and we’d like to thank those projects’ respective authors for their work and inspiration.

Stefano Buliani

Stefano is a serverless specialist solutions architect at AWS. Stefano helps AWS customers develop, deploy, and scale their serverless applications. Stefano also maintains a number of open source projects such as Serverless Java Container, Lambda Go API, and Serverless SAM.