Sunday, October 6, 2013

Disclaimer: Modifying security credentials could result in losing access to your server in case of problems. I strongly suggest you test the method described here in your development environment before using it in production.

Key pairs are the standard method to authenticate SSH access to our EC2 instances based on the AWS Linux AMI. We can easily create new key pairs for our team members using the ssh-keygen command and, with root access, manually add their public keys to the file /home/ec2-user/.ssh/authorized_keys.
Format:
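(One public key per line; the trailing comment usually identifies the owner. The key material below is made up and truncated.)

    ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated... alice@laptop
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated... bob@laptop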

But when the number of instances and team members grows, we need a centralized method for distributing this file.

Goal

- Store an authorized_keys file in S3 encrypted "at-rest".
- Transport this file from S3 to the instance securely.
- Give access to this file only to the right instances.
- Do not store any API Access Keys in the involved script.
- Store all the temporary files in RAM.

S3

- Create a bucket. In this example it is "tarro".

- Create an authorized_keys file locally and upload it to the new bucket.

- Create a file called authorized_keys.md5, copy the md5sum result into it (only the hexadecimal string of numbers and letters) and upload it to the same S3 bucket.
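A sketch of these last two steps from a workstation with the AWS CLI configured; the bucket name "tarro" is the one from this example, and --sse asks S3 for server-side (at-rest) encryption:

    md5sum authorized_keys | awk '{print $1}' > authorized_keys.md5
    aws s3 cp authorized_keys s3://tarro/authorized_keys --sse
    aws s3 cp authorized_keys.md5 s3://tarro/authorized_keys.md5 --sse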

IAM

We will use an EC2 IAM instance role. This way we don't need to store a copy of our API Access Keys on the instances that will be accessing the secured files. The AWS Command Line Interface (AWS CLI) will automatically query the EC2 instance metadata and retrieve the temporary security credentials needed to connect to S3. We will specify a role policy to grant read access to the bucket that contains those files.
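If you are curious, those temporary credentials can be seen from inside an instance that has a role attached (here using the role name "demo-role" from the example below):

    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/demo-role

The response is a small JSON document with a temporary AccessKeyId, SecretAccessKey, Token and their Expiration date; the AWS CLI reads and refreshes it for you automatically.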

- Create a role using the IAM Console. In my example it is "demo-role".
- Select Role Type = Amazon EC2.
- Select Custom Policy.
- Create a role policy to grant read access only to the "tarro" bucket. Example:
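A sketch of such a policy; only the bucket name "tarro" comes from this example, the action list is my choice (s3:GetObject is what the deploy script below needs, s3:ListBucket is a convenience):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::tarro/*"
        },
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::tarro"
        }
      ]
    }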

- Launch your instance as you usually do, but now select the IAM Role field and choose the appropriate one. In my example it is "demo-role", but you could have different roles for each application tier: web servers, databases, test, etc.

- As root, create the directory /root/bin/.
- In /root/bin/ create the file deploy-keys.sh with the following content:
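What follows is a minimal sketch of such a script, written to match the goals above: the AWS CLI picks up temporary credentials from the instance role (no API keys stored), temporary files live in /dev/shm (tmpfs, so in RAM), and the keys are installed only if the MD5 checksum matches. The bucket name "tarro" and the target path come from this post; everything else is an assumption to adapt.

    #!/bin/bash
    # deploy-keys.sh (sketch): fetch authorized_keys from S3, verify it, install it.

    BUCKET="tarro"
    TMP="/dev/shm/deploy-keys"              # tmpfs, so temporary files stay in RAM
    TARGET="/home/ec2-user/.ssh/authorized_keys"

    set -e
    mkdir -p "$TMP"
    trap 'rm -rf "$TMP"' EXIT

    # The AWS CLI obtains temporary credentials from the EC2 instance metadata
    # (IAM role), so no Access Keys are stored in this script.
    # Add --region <your-region> if no default region is configured.
    aws s3 cp "s3://$BUCKET/authorized_keys" "$TMP/authorized_keys"
    aws s3 cp "s3://$BUCKET/authorized_keys.md5" "$TMP/authorized_keys.md5"

    EXPECTED=$(cat "$TMP/authorized_keys.md5")
    ACTUAL=$(md5sum "$TMP/authorized_keys" | awk '{print $1}')

    if [ "$EXPECTED" != "$ACTUAL" ]; then
        echo "MD5 mismatch, keys not deployed" >&2
        exit 1
    fi

    # Install the new file with the right owner and permissions
    install -o ec2-user -g ec2-user -m 600 "$TMP/authorized_keys" "$TARGET"

You could then run it periodically from root's crontab at the interval that suits your team.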

Tuesday, September 24, 2013

Due to the importance of SEO and the relevance of the Google search engine, it is not uncommon to hear this question in a meeting: from where does the GoogleBot crawler crawl our site? Design and investment decisions are made based on the answer to that question. It is a popular belief that Google Inc.'s crawler, GoogleBot, resides in California, USA, but I'm afraid this is not accurate.

I've discovered this:

- GoogleBot is not just a bunch of servers (obviously). It is a very big distributed cluster with hundreds of machines. My site is indexed from more than 900 different Google IP addresses every day.
- I've identified 7 different GoogleBot crawling clusters.
- They seem to connect to my site from 6 different locations.
- Almost all of them are in the USA, but one location is in Europe.

Origin IP

With access to your web site log files you can "grep" the string "http://www.google.com/bot.html" in the user-agent field and find out which IPs GoogleBot is using when it pays you a visit. There are some other malicious crawlers that fake their user agent as GoogleBot, but they're easily spotted. Google Inc. owns the Autonomous System AS15169 and its connections come from there. In my case I got connections from the IP ranges below during the last six months:
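For example, with a standard combined-format access log, something like this lists the IPs presenting themselves as GoogleBot, sorted by number of requests (the log path is a placeholder):

    grep "http://www.google.com/bot.html" /var/log/httpd/access_log \
        | awk '{print $1}' | sort | uniq -c | sort -rn | head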

Nowadays it is tricky to know where an IP is located when it belongs to a big network. Anycast routing (like the one used by the popular Google Public DNS service at 8.8.8.8) makes it a challenge to be certain. Google Inc. IP addresses are administratively registered at Mountain View, California, and without any further analysis that is the conclusion you will reach.

But when I ping those networks from my server (Paris, France), write the obtained round-trip times in a table and take a look at the Google data centers map, one can guess an approximate geographic location for those GoogleBot clusters:

IPv4 Network    Ping Round Trip                  Location
66.249.72.2     92 ms                            USA East Coast?
66.249.73.2     114 ms                           USA Mid West?
66.249.74.2     152 ms                           USA West Coast?
66.249.75.2     96 ms                            USA East Coast?
66.249.76.2     (not active since 2013-05-29)    Unknown
66.249.77.2     274 ms                           Unknown (not USA nor Europe?)
66.249.78.2     13 ms                            Dublin, Ireland?

Round-trip milliseconds are not an accurate method to place a system on the map, but the question I'm trying to answer here is whether GoogleBot is in California or not. As you can see, there is no short answer, but at least we know that it is spread across different locations within the States and Europe.

The problem
When putting a Varnish cache in front of an AWS EC2 Elastic Load Balancer, weird things happen, like not getting any traffic to your instances or getting traffic to just one of them (in case of a Multi Availability Zone (AZ) deployment).

Why?
This has to do with how the ELB is designed and how Varnish is designed. It is not a flaw. Let's call it: incompatibility.
When you deploy an Elastic Load Balancer in EC2 you access it through a CNAME DNS address. When you deploy an ELB in front of multiple instances in multiple Availability Zones, that CNAME does not resolve to a single IP address, it resolves to many.
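A quick way to see this yourself is to resolve your ELB's DNS name (the hostname below is a made-up placeholder; use the one shown in your ELB console):

    dig +short my-load-balancer-1234567890.us-east-1.elb.amazonaws.com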

In my case, this CNAME DNS resolution for Netflix's ELB returned 3 different IP addresses. It is up to the application (usually your Internet web browser) to decide which one to use. Different clients will choose different IPs (they are not always sorted the same way) and this will balance the traffic among the different AZs.
The bottom line is that, in real life, your ELB is multiple load balancer instances in multiple AZs, and the CNAME mechanism is the method used to balance traffic among them.

But Varnish behaves differently
When you specify a CNAME as a Varnish backend (the destination server Varnish requests will be sent to), it translates it into only one IP. No matter how many IP addresses are associated with that CNAME, it will choose only one and use it for all its activity. Therefore Varnish and the AWS ELB are not compatible. (Would you like to suggest a change?)

The Solution
Put an NGINX web server between Varnish and the ELB, acting as a load balancer. I know, it is not elegant, but it works, and once it is in place no maintenance is needed and the processing overhead on the Varnish server is minimal.

Setup
- Varnish server listening on TCP port 80 and configured to send all its requests to 127.0.0.1:8080.
- NGINX server listening on 127.0.0.1:8080 and sending all its requests to our EC2 ELB (see the configuration sketch below).
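A sketch of both pieces; the ELB hostname and the resolver address are placeholders to replace with your own values. In the Varnish VCL, the backend simply points at the local NGINX:

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

And NGINX proxies everything to the ELB. Putting the hostname in a variable together with a resolver directive is what makes NGINX re-resolve the ELB CNAME regularly instead of caching a single IP:

    server {
        listen 127.0.0.1:8080;

        location / {
            resolver 8.8.8.8 valid=60s;   # or your own / VPC DNS resolver
            set $elb "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com";
            proxy_set_header Host $host;
            proxy_pass http://$elb;
        }
    }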

Thursday, April 18, 2013

Due to popular demand I've decided to release the collection of vector graphics objects I use to draw Amazon Web Services architecture diagrams. This is the first release and more are on the way. This is an Adobe Illustrator CS5 (.AI) file. I've obtained this artwork from the original AWS Architecture PDF files published at the AWS Architecture Center.
You can use Adobe Illustrator to open this file and create your diagrams, or you can export these objects to SVG format and use GNU software to work with them. The file has been saved in "PDF Compatibility Mode" so plenty of utilities can import it without the need for Adobe Illustrator (Inkscape, for instance).

Disclaimer:
- I provide this content as is. No further support of any kind can be provided. I'd love to receive your comments and suggestions, but I cannot help you draw diagrams.
- As far as I know this content is not copyrighted (1). Feel free to use it.
- Those designs have been created by a brilliant and extraordinary person who works at AWS. I'm just a channel of communication here. All credit should go to him.

Monday, January 14, 2013

Ubuntu includes a nice backup tool called Déjà Dup, based on Duplicity, that gives us just the options we need to handle our home backups. With just a couple of settings we can use Amazon Web Services S3 as the storage device for those backups.

S3 Bucket and Credentials

If none is specified, Déjà Dup will automatically create a bucket in S3 using our credentials. This will happen in the default AWS Region (North Virginia). If you need your backups placed elsewhere (a closer region, for example), you should manually create an S3 bucket for that purpose.
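If you have the AWS CLI at hand, creating that bucket in a specific region is one command (bucket name and region below are placeholders):

    aws s3 mb s3://my-laptop-backups --region eu-west-1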

You need to create an AWS IAM user with S3 privileges and export its credentials to be used with Deja-Dup.

Open Déjà Dup (Backup) and select the Storage menu. If the additional packages are correctly installed you should have "Amazon S3" as an available backup location. Select it and type your S3 Access Key and the folder where you would like to store your laptop backup.

You should see something like the capture above. Close the Backup utility.

Bucket Configuration

To tell Déjà Dup the bucket name we want to use, we need the dconf editor. Launch dconf-editor, or install it if needed (sudo apt-get install dconf-tools).

Navigate to the org / gnome / deja-dup / s3 folder:

Substitute the randomly generated Déjà Dup bucket name with yours and close the editor.
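The same change can be made from a terminal with the dconf command-line tool, assuming the key you saw in the editor is called bucket (the bucket name below is a placeholder):

    dconf write /org/gnome/deja-dup/s3/bucket "'my-laptop-backups'"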

Backup Launch

Start Déjà Dup again and launch your backup. A pop-up window will appear asking for the S3 Secret Access Key. My suggestion is to choose to remember those credentials to avoid typing them every time.

And the rest of the backup process is standard: Pop-up window asking you for a password to encrypt the backup files, scan progress window, etc.
After a successful backup, I suggest you check whether the duplicity files are in the expected S3 bucket or not. Also pay attention to the "Folders to ignore" backup setting to avoid copying unnecessary files. S3 is cheap, but it is not free.