A user POSTs a JSON-encoded Elasticsearch Document to our API gateway. The API gateway validates the JSON against an authoritative schema; if the Document passes validation, the API gateway sends the Document to our document store over an encrypted channel.

The API gateway blocks all other access to our Elasticsearch service's indices, methods and endpoints.

The agenda for this HOWTO follows:

1. Create and configure your AWS development environment

2. Deploy and configure an AWS Elasticsearch endpoint using the AWS CLI

3. Create an app that receives, validates and submits a Document to your Elasticsearch endpoint

4. Use Chalice to deploy your Lambda function and create/attach an API gateway

5. Test drive your new Elasticsearch proxy

2. Deploy and configure an AWS Elasticsearch endpoint using the AWS CLI

I use the AWS CLI below to deploy Elasticsearch. Again, if you get lost or prefer to use the AWS Console GUI, simply refer to the first Lambda tutorial.

The following AWS CLI command deploys an Elasticsearch domain and attaches a security policy that only allows access from services owned by your AWS account. For debugging purposes, we also punch a hole in the policy to give you (the developer) access to the Kibana web GUI. For that reason, ensure that the policy below includes the IP address of the workstation you plan to use to access the GUI.

To make life easy, export your AWS account ID and IP address to the following environment variables. Once more, I use dummy examples below. Be sure to use your actual AWS account ID and IP address.
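The exports might look like the following (the values shown here are dummies, as are the variable names; substitute whatever names you use consistently in the policy below):

```shell
# Dummy values -- replace with your real AWS account ID and workstation IP.
export AWS_ACCOUNT_ID=123456789012
export MY_IP_ADDRESS=203.0.113.25
```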

The following (hideous) command uses the two environment variables above to create an Elasticsearch domain (named elastic) specifically tailored for your personal environment. If you run into any difficulties, you can just deploy the Elasticsearch service using the AWS Console GUI.
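A sketch of that command follows. The instance type, EBS settings and region are assumptions (a small single-node domain in us-east-1); the two policy statements implement the rules described above, with the second statement opening the Kibana hole for your workstation IP:

```shell
# Create a single-node domain named "elastic". Statement 1 grants the
# owning account access; statement 2 allows your workstation IP (Kibana).
aws es create-elasticsearch-domain \
  --domain-name elastic \
  --elasticsearch-cluster-config InstanceType=t2.small.elasticsearch,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10 \
  --access-policies '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::'"$AWS_ACCOUNT_ID"':root"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:'"$AWS_ACCOUNT_ID"':domain/elastic/*"
      },
      {
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:'"$AWS_ACCOUNT_ID"':domain/elastic/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "'"$MY_IP_ADDRESS"'"}}
      }
    ]
  }'
```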

app.py contains the main Lambda application. The application validates a JSON Document, creates a random Document ID and then chucks the Document to our Elasticsearch document store. The application's structure should look familiar to developers who have experience with Flask.

chalicelib/config.py includes the configuration for your development environment. The Chalice documentation instructs you to include any additional Python files in the chalicelib directory. Be sure to update ELASTICSEARCH_ENDPOINT with your Elasticsearch endpoint's URL.
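The module might look like the following sketch; the endpoint shown is a dummy, and the index name matches the bigsurvey example used later:

```python
# chalicelib/config.py -- sketch; substitute your own endpoint (dummy shown).
# Leave off https:// and the trailing slash.
ELASTICSEARCH_ENDPOINT = 'search-elastic-abc123xyz.us-east-1.es.amazonaws.com'
ELASTIC_INDEX_NAME = 'bigsurvey'
```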

Similar to Elastic Beanstalk, Chalice uses requirements.txt to ensure the Lambda function includes all the required packages. jsonschema, unfortunately, depends on functools32, for which pip cannot find a wheel file. If you attempt to deploy your Chalice package using just requirements.txt, you will get the following error: Could not install dependencies: functools32==3.2.3-2. You will have to build these yourself and vendor them in the chalice vendor folder. This brings us to the next bullet...

Pip will not find wheel files for functools32, a dependency of jsonschema. To solve this problem, I followed the instructions in the official Chalice documentation and built a wheel file appropriate for Amazon Linux, then unzipped it into the vendor directory. You're welcome.
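The steps look roughly like this (run under Python 2, the runtime this article targets; since functools32 is pure Python, a wheel built locally also works on Amazon Linux):

```shell
# Build a wheel from the functools32 sdist, then unpack it into the
# project's vendor directory so Chalice bundles it with the deployment.
pip wheel --wheel-dir /tmp/wheels functools32==3.2.3-2
unzip /tmp/wheels/functools32-*.whl -d eslambda/vendor/
```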

Before we move on, please make sure that you have edited config.py to reflect your Elasticsearch endpoint. You can find the endpoint in the Elasticsearch console, under the elastic domain. Leave off https:// and the trailing slash.

If you are ready, execute the following command in order to deploy the Lambda function and API gateway:

(working)[working]$ cd eslambda
(working)[eslambda]$ chalice deploy --no-autogen-policy
Creating role: eslambda-dev
The following execution policy will be used:
{"Version": "2012-10-17",
"Statement": [{"Action": ["logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"},
{"Action": ["es:ESHttpGet",
"es:ESHttpHead",
"es:ESHttpPost",
"es:ESHttpPut"],
"Resource": "*",
"Effect": "Allow"}]}
Would you like to continue? [Y/n]: Y
Creating deployment package.
Could not install dependencies:
functools32==3.2.3-2
You will have to build these yourself and vendor them in
the chalice vendor folder.
Your deployment will continue but may not work correctly
if missing dependencies are not present. For more information:
http://chalice.readthedocs.io/en/latest/topics/packaging.html
Creating lambda function: eslambda-dev
Initiating first time deployment.
Deploying to API Gateway stage: api
https://9z8cesjny0.execute-api.us-east-1.amazonaws.com/api/

Once you execute this command, Chalice should report an endpoint. If you would like, you can go to your AWS console and take a look at what Chalice deployed.

Chalice deploys a Lambda function:

Chalice deploys an API Gateway that reflects the logic you included in app.py:

Chalice also deploys an IAM Role and Policy for your Lambda Function:

5. Test drive your new Elasticsearch proxy

You can use httpie to test out your new API gateway. By default, httpie encodes POST data as Content-Type: application/json. The syntax below ensures we match the proper schema for bigsurvey. The colon-equals (:=) syntax for agree and anumber ensures httpie sends a boolean and a numeric value to the API gateway. If you experiment with changing these to strings, you will observe that the API gateway rejects the data.
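For example (using the endpoint URL Chalice reported above; the name field is an assumption to round out the bigsurvey schema):

```shell
# agree:=true and anumber:=42 send a JSON boolean and number, not strings.
http POST https://9z8cesjny0.execute-api.us-east-1.amazonaws.com/api/ \
    name="Jane Doe" agree:=true anumber:=42
```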

Type in bigsurvey for your index pattern and @timestamp for the Time Filter Name. If you modified ELASTIC_INDEX_NAME in chalicelib/config.py, then input that name for the index pattern instead.

Now go to the Discover tab. You will see the Document that httpie just posted to our API gateway.

Conclusion

Congrats! You used Chalice to deploy an Elasticsearch proxy that validates a JSON Document before posting it to a private Elasticsearch document store. You can easily extend my example to accommodate other user stories.