So I've been fiddling around with AWS and setting up serverless computing. This means you can host a website that does complex things without setting up a web server. It's pretty awesome. Why, you ask?

Pay per use: If you have a server, you have to pay a monthly fee depending on the type of server (size, bandwidth, storage, blah blah blah). If you go serverless, you pay only for what you use. So for each API call, or user getting a web page, you pay only for that. This is good for prototyping since prototypes don't get a lot of users, so you just need something cheap to demo a project.

Easy setup: You can whip up an HTML site without setting up Apache or Nginx web servers, for example. You can just focus on code.

Basic Components

Step 1, Build Front End: If you want a super simple, static HTML site, then all you need is an S3 Bucket with static web hosting.

Step 2, Build Back End: If you want some more complicated back end logic, then you also need a Lambda Function that your website can access via an API Gateway endpoint.

Step 3, Set Up Custom Domains: If you want to use your own domain name (and/or purchase a domain name), you should use Route 53 to register a domain name and/or set up DNS Zones. If you want your own domain to have SSL (use https instead of http), you can get a free certificate using Certificate Manager and hook that up with Cloudfront.

These are all components and options. You don't need all of these. But, since I've figured out how to set this up, I thought I'd put my notes on the interwebs in case you are interested in a similar set up.

Here's a story of how this all goes down:

Once upon a time, a user goes to a web address. The domain registrar looks up the name servers for this domain and finds a Route53 DNS Zone. Route53 then tells the user, "Hey, for this domain you should look at what Cloudfront is up to." Cloudfront is then like, "Yo, I got an https certificate for this domain so I can hook you up to the content on this domain through a secure pipe so jokers can't sniff your traffic and see what's up." Cloudfront also says "And BTW, you should check out the content on S3." S3 then gets a call from Cloudfront and says "Check it out, here's your awesome content."

Sometimes, S3 needs to call a server to generate even more awesomer content. So S3 is all "'Sup API Gateway, I need some processing on these parameters, the name is blah, and the password is blah." In this case, the name and password are both parameters. Then, API Gateway is like "Gotcha, Imma pass these parameters to Lambda Function." Lambda Function then says, "Nice, I can use these parameters to do crazy calculations and return an output. The output for this is: user is authenticated." API Gateway then replies, "Sweetness, Imma pass this back to the static webpage hosted on S3."

S3 then gets the content from API Gateway and renders it for the user. The end. Simple!

Step 1: Host a Static Site

(If you haven't already, you should get an AWS Developer Account. This enables you to use all the goodies I described above.)

Step 2: Hook Site to Backend

Ok, so that's cool. Now you want to do something more complicated. You want the user to tell you their name and give them a personalized Hello World message. Like "Hello World, Michael." (Yes, I know this can be done through front end Javascript, but this is just an example OK?)

Now you need to set up a backend. I like to use Node.js since then I can use the same language (Javascript) for both front end and back end. To do this, let's set up a Lambda Function and an API Gateway API.

You can test this by clicking the "Save and Test" or "Test" button. This is the blue button at the top. When testing, you can use the template "API Gateway AWS Proxy" to simulate an API Gateway request.

Now, we need to hook up this Lambda with an API Gateway. This creates an https://blahblah/blah endpoint that your web site on S3 can call to get content.

This enables the API Gateway to pass any parameters or inputs through to Lambda. Enabling CORS means other websites (such as the one hosted on S3) can access this API even though it's on a different domain.

Now publish the API by:

Click Actions > Deploy API and fill out:

Deployment stage: prod

Deployment description: whatever you want

After this is deployed, you can test out your API by visiting the API's public prod URL. You should see an Invoke URL in the blue box. Append /YourFunctionName to the end. Your API's URL should be something like: https://ab12cd34ef56.execute-api.us-east-1.amazonaws.com/prod/YourFunctionName. You can also find this by going to your AWS Console > Lambda > YourFunctionName > Triggers tab.

Now that you can call your API, you can update your static web page to call the API.
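The page's Javascript might call the endpoint something like this. A sketch, assuming the placeholder Invoke URL from above, a name query parameter, and an element with id="greeting" in your HTML (all of these are stand-ins, swap in your own):

```javascript
// Your Invoke URL from API Gateway (this one is a placeholder)
const API_URL = "https://ab12cd34ef56.execute-api.us-east-1.amazonaws.com/prod/YourFunctionName";

// Build the request URL with the user's name as a query parameter
function buildRequestUrl(name) {
  return API_URL + "?name=" + encodeURIComponent(name);
}

// Call the API and drop the greeting into the page
// (assumes an element with id="greeting" in your HTML)
function greet(name) {
  fetch(buildRequestUrl(name))
    .then((res) => res.json())
    .then((data) => {
      document.getElementById("greeting").textContent = data.message;
    });
}
```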

Click the top stream (each of these links is a stream, which is a log from a period of time)

I usually like to select Expand All: Text in the top right hand corner to see the logs

Remember the streams can stop, so periodically (every few minutes it seems) go back to the streams list and be sure you are reading the latest stream.

Step 2 Digression: Helpful Dev Tools

Before we go on, I'd like to talk about a few tools I find super helpful. The first is a web-based coding environment called Cloud9. Unlike all the other tools here, this is not part of AWS. Oh wait, Amazon did acquire Cloud9 already. With Cloud9, you can develop code through the browser, which means you can go between different computers with your data synced through the cloud.

I'd also like to talk about logs and uploading code! You can use AWS CLI (command line interface) to retrieve Lambda Logs, upload code to S3, and upload code to Lambda. This makes development much easier than always having to use AWS Console to upload files to S3 or write files in the Lambda inline code editor.

Like a notepad, but a lot more bad-ass, and fully browser based. This means I can go between my bedroom laptop and office computer and not miss a beat. I just fire up my web-browser and continue where I left off. It's where I coded up this blog post as well as the examples in this post. It's a recent AWS Acquisition so I'm hopeful of closer integration with AWS in the future.

Besides a code editor, it also has a terminal feature. This is useful because I can use it to see my Lambda Function's logs. So in your Lambda Function, when you write console.log("blah"), it actually pops into a service called Cloudwatch. The Cloudwatch console is not as helpful since they break down logs into streams and you constantly have to close one stream and open another stream. I prefer the command line interface much more, and this can be used in Cloud9's Terminal (or any other terminal).

In Cloud9, once you set up a workspace, open the terminal and do the following to set up awscli (lets you do things like sync S3 Buckets and upload code to Lambda) and awslogs (lets you stream Cloudwatch logs).
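The install commands look something like this. A sketch: it assumes Python's pip is available in the terminal, and awslogs here is the pip-installable Cloudwatch log viewer, not an official AWS tool.

```shell
# Install the AWS CLI and the awslogs log viewer (assumes pip is available)
pip install awscli awslogs

# Configure credentials -- paste in an Access Key ID and Secret Access Key
# from the IAM steps described below
aws configure
```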

Click Attach Existing Policies Directly. This lets you attach "policies," which are like permissions to do this or that through AWS.

To use the command line to sync S3 and Lambda functions, you'll want to find and check these policies: AWSLambdaFullAccess and AmazonS3FullAccess

Click Next

Confirm that you are attaching: AWSLambdaFullAccess and AmazonS3FullAccess

Now note down your Access Key ID and Secret Access Key. You can download the .csv file so you'll have this info stored. The Secret Access Key is only shown once (but you can always generate more access/secret pairs if you lose your secret)

AWS Commands

To stream Cloudwatch Logs for a particular Lambda Function named MyLambdaFunctionName

$ awslogs get /aws/lambda/MyLambdaFunctionName ALL --watch

Control+C to stop the stream

Sync S3

Assumes your local folder with all your S3 code is in path/to/local/s3/folder and you want to sync to bucket with name MyBucketName. You can remove the --delete part if you don't want things deleted on S3 just because it isn't present in the local folder.

$ aws s3 sync path/to/local/s3/folder s3://MyBucketName --delete

Sync Lambda

Assumes your local folder with all your Lambda code is in path/to/lambda/code (which can include node packages as well), and your lambda function name is MyLambdaFunctionName. We will first create a temporary zip file of your lambda codes called lambda.zip.

Your local lambda folder should include index.js, which corresponds to your Handler value (the default Handler value is index.handler). You can set your Handler value by going to AWS > Lambda > MyLambdaFunctionName > Configuration Tab > Handler text field. So if your Handler is something like foobar.handler, then your main Javascript file in path/to/lambda/code/ should be foobar.js.
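The zip-and-upload flow described above might look like this. A sketch with placeholder paths and function name; the upload uses the AWS CLI's update-function-code command.

```shell
# Zip up the Lambda code so that index.js sits at the root of the zip
cd path/to/lambda/code
zip -r ../lambda.zip .

# Push the zip to your Lambda Function (the name is a placeholder)
aws lambda update-function-code \
  --function-name MyLambdaFunctionName \
  --zip-file fileb://../lambda.zip
```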

Awesome! Now you have your handy dandy website. But wait, http://blahblahblah.s3-website-us-east-1.amazonaws.com is not a very memorable web address. You want your own domain you say? Well, lucky for you, Step 3 is next.

Step 3: Set up Domain Configs

If you want your website hosted on your own domain (e.g., https://mydomain.com), you can do this by using Route 53 (domain name registration, setting domain DNS) if you're fine with just an HTTP website. If you want an HTTPS website, you'll also need Cloudfront (content delivery network) and Certificate Manager (generates free SSL certificates).

Setting up an HTTP domain

To point a domain to S3, you'll need to create a DNS Zone (done for you if you registered through Route 53).

Route 53 > Hosted Zones > Create Hosted Zone

Name: mydomain.com (no need for the www part)

Now, let's configure the hosted zone to tell your website where to go.

Click Create Record Set for your domain (e.g., mydomain.com)

Name: leave this empty

Type: A -- IPv4 Address

Alias: Yes

Alias Target: select your S3 Bucket, or fill in the domain name for your s3 bucket, it will look something like s3-website-us-east-1.amazonaws.com. Note that this isn't the full URL to your S3 bucket; it doesn't include the actual bucket name.

Click Create Record Set for your www domain (e.g., www.mydomain.com)

Name: www

Type: A -- IPv4 Address

Alias: Yes

Alias Target: mydomain.com

Note the name server (or NS) values created for you by Route 53. There will be 4 values with names like ns-128.awsdns-13.com.

Domain name registrars have NS fields, which point to the name servers that answer DNS queries for your domain. So if you already registered your domain name somewhere like Namecheap.com or Google Domains, then all you need to do is point your NS fields to AWS.

Go to Route 53 > Registered Domains > Register Domain button at the top. After the registration goes through, you can find it under Registered Domains.

Route 53 > Registered Domains > Click your domain

Click Add or Edit Name Servers (top right hand side)

Congrats! You've hooked up your domain name to your hosted zone.

Ok, it will take a while for the website to show up. So grab a beer or soda. Watch some TV. Come back later.

Setting up an HTTPS domain

HTTPS connections mean the pipes between your users and your servers are secure. So if your user is visiting your web site at some public wifi hot spot, nosy jokers can't sniff the traffic and see what data is getting sent from your user to your servers and vice versa.

So, instead of pointing your A Records to S3, you will point them to Cloudfront, and you will also attach an SSL Certificate to Cloudfront.

Origin Domain Name: type in your S3 website name (without the http:// part). Do not select one of the pre-existing options. You should input something like mydomain.com.s3-website-us-east-1.amazonaws.com

Max TTL: 300 (you should change this later once you are comfortable your content is good)

Forward Cookies: All

Forward Querystrings: Forward all/cached based on all

Price Class: choose whatever you want

Alternate Domain Names: type www.mydomain.com and mydomain.com as well as any other subdomains to point to this bucket

SSL Certificate: Select Custom Cert, then the Certificate you just created for your domain

Click Create Distribution

This will take a while. Any change that happens in Cloudfront has to propagate through the content delivery network (CDN). This means when somebody in Alabama goes to your website, the request reaches a cached copy of your website somewhere close to Alabama. When somebody in California goes to your website, the request reaches a cached copy somewhere close to California. This means your customers will experience less latency.