So you decided that your managed hosting service does not suit your needs (anymore) and would like to move to a cloud service like Amazon Web Services (AWS)? Then you’ve come to the right place! In this article, I try to provide a short reference of the most important steps and considerations during the migration.
Note that this process requires a certain level of technical expertise, therefore some background knowledge on the topics covered is expected.

Also note that this is a writeup of my own experience of moving this very (static) website from managed hosting at 1&1 to S3: My website is fairly small in size (currently less than 1 MB), does not include any (server-side) dynamic features or any form of authentication. It does, however, make use of SSL certificates, and redirects from http://www.jonaslieb.com to https://www.jonaslieb.com as well as from jonaslieb.com to www.jonaslieb.com. This guide will be specific to this setup, your scenario may vary. Especially if your website relies on dynamic features or authentication, S3 is not the right choice and you won’t find this guide to be very helpful.
For those who are still undecided, I will first discuss the general pros and cons of the cloud vs. managed hosting, so you can decide for yourself. If you are impatient or have already made your decision, you may jump directly to the migration recipe.

Definition of managed hosting, according to Wikipedia (emphasis mine):

The user gets his or her own Web server but is not allowed full control over it (user is denied root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is disallowed full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server. The server is leased to the client.

Definition of cloud computing, according to Wikipedia (again, emphasis mine):

Cloud computing is an information technology (IT) paradigm, a model for enabling ubiquitous access to shared pools of configurable resources (such as computer networks, servers, storage, applications and services), which can be rapidly provisioned with minimal management effort, often over the Internet.

As you can see, both options are simple to manage (compared to operating server hardware yourself). However, there are huge differences between both approaches: While managed hosting provides few, tightly controlled software options, the cloud provides a multitude of services which have to be configured individually.

After this introduction, here are a couple of reasons why one might want to move from a managed host to a cloud service, and a few considerations to weigh before switching.

Pros:

Different software needs: Managed hosting contracts commonly offer domains, a web server with PHP, a database server (e.g. MySQL), email addresses and SSL certificates. You might notice that you do not use all of these or, worse, require different software that is not available within the given infrastructure. A cloud provider lets you choose from a lot more options on a per-project basis.

Scalability: Managed hosting contracts usually include a fixed amount of hard drive space that you may occupy, as well as a maximum amount of data transfer per month. If you have fewer visitors than the contract anticipates, you might end up paying much more than actually required. If you have more, visitors might end up not being able to use your website once the transfer limit has been reached. Cloud providers usually enable you to scale the infrastructure to your demands.

Pricing: This goes together with the previous point: you might not need the entire bundle, and with on-demand pricing you only pay for what you actually use.

Increased international availability: Some hosting contracts keep your data in a single country. This can make for a sluggish user experience for visitors from the other side of the planet. Use a content delivery network or deploy your website in a cloud spanning multiple continents to mitigate this problem.

Education: Moving to AWS has taught me a lot about the backbone of my website, so I can also recommend it if you just want to experiment! But be prepared to fail, and avoid experimenting on your production website.

Cons:

Greater technical expertise required: Despite Amazon’s great documentation, a lot more fine-tuning and manual editing of configurations is required. You should have at least a basic understanding of HTTP, DNS and SSL before undertaking this adventure.

Less customer support: Most hosting contracts include a support plan with a human contact. The Basic AWS Support plan only includes static documentation and account support. The cheapest plan that includes a (human) technical support contact currently starts at $29.

Privacy Concerns: As the cloud consists of a multitude of services running on servers all around the world, data in the cloud is not restricted to one jurisdiction. If your application processes your or other users’ private data, this might be of concern.

Security Concerns: This does not affect S3 as much as dedicated servers such as EC2 instances. Note however that for some services, the user is responsible for securely configuring their server and keeping software up-to-date.

Varying costs: Due to the nature of on-demand pricing, your bills may vary greatly from month to month, which makes costs harder to predict.

Moving to a cloud provider is not an all-or-nothing decision. You can always start by moving a couple of services to a cloud provider and leave e.g. the domain registration and DNS at your previous provider. Also note that other companies offer services similar to AWS. Two notable alternatives are the Google Cloud Platform and DigitalOcean, which I have not yet tried.

The website is going to be stored and served by Amazon S3 (Simple Storage Service), so start by creating a new S3 bucket: Log into the AWS console, and choose the “S3” service through the navigation bar up top.

Start the creation wizard by clicking on “Create Bucket”, choose a name and your preferred region. I strongly recommend naming your S3 bucket the same as your domain (e.g. www.example.com) such that you don’t rely on CloudFront for DNS aliasing (see below). Don’t change anything on the remaining pages of the wizard.
Next up, choose your newly created bucket from your list of buckets and navigate to the “Properties” tab. Enable “Static Website Hosting” and choose an index document, most likely index.html. Optionally, you can specify an error page which will be shown to visitors on error. Make a note of the “endpoint” URL (something like http://<BUCKETNAME>.s3-website.<REGION>.amazonaws.com), you will need it later.
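If you prefer the command line, both steps can also be sketched with the AWS CLI (introduced further below); the bucket name and region here are placeholders:

```shell
# Create the bucket (use your own domain name and preferred region)
aws s3 mb s3://www.example.com --region eu-west-1

# Enable static website hosting with index and error documents
aws s3 website s3://www.example.com/ --index-document index.html --error-document error.html
```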

After enabling “Static Website Hosting”, navigate to the “Permissions” tab and click the “Bucket Policy” button. Paste the following snippet, insert your bucket name for BUCKETNAME and don’t forget to save.
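A public-read bucket policy has the following standard shape; only the Resource ARN contains your bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
```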

Note that this snippet makes your entire bucket content world-readable! (Also note that you cannot choose the “Version” key; it has nothing to do with today’s date.)

The next step is to copy the contents from your old provider to S3. First, obtain a copy of all your static files from your old provider, for example via FTP. Now is also the time to remove any files that are no longer needed. Especially remove any .htaccess and .htpasswd files, as they may contain confidential information and are not understood by the S3 host. All files uploaded to your bucket will be publicly accessible and you will not be able to leverage any kind of server-side authentication (e.g. HTTP Basic) on S3.

For administering the content of your S3 buckets, you can either use the AWS console or the AWS Command Line Interface. I am going to describe usage of the latter. First, you have to download and install the package corresponding to your operating system. After installation, use aws configure to set up your credentials, as described in the official documentation.

You can then navigate into your content folder (which you just downloaded and cleaned) and use aws s3 sync ./ s3://<BUCKETNAME> --delete to sync the S3 bucket with your local file system. Note that the --delete option instructs the CLI to delete files from the bucket that are not present on your local file system.
Amazon will charge you depending on the storage used, so try to keep the storage footprint low.
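If you are unsure about the effect of --delete, the sync command supports a preview flag (bucket name is a placeholder):

```shell
# Preview the changes without modifying the bucket
aws s3 sync ./ s3://www.example.com --delete --dryrun

# Then perform the actual upload
aws s3 sync ./ s3://www.example.com --delete
```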

At this time, you should be able to successfully navigate to the URL that you noted earlier (e.g. http://<BUCKETNAME>.s3-website.<REGION>.amazonaws.com) in your web browser. Note that it is expected that some assets might not load properly, as their URLs might be specified absolutely, not pointing to the S3 bucket.

As a second, independent step, let’s move your DNS setup to Amazon Route 53. For this, we first replicate your current DNS setup at Route 53. Navigate to the Route 53 service through the AWS console and create a new hosted zone for your domain (charges apply!). As the hosted zone type, choose “Public Hosted Zone”.
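For reference, the hosted zone can also be created with the CLI; the caller reference is an arbitrary unique string, and the domain name is a placeholder:

```shell
# Create a public hosted zone for the domain (charges apply)
aws route53 create-hosted-zone --name example.com --caller-reference "migration-2018-01"
```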

Select your newly created zone from your list of hosted zones. By default, two record sets are created: the NS entry and the SOA entry (do not delete them!). We will now add the remaining record sets from your old provider. You can use the dig command to look up existing record sets. Call dig +nocmd <DOMAIN> any +noall +answer. The output will look similar to the following:
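For illustration, the answer section contains one line per record; the names and addresses below are placeholders, and your records will differ:

```
example.com.  3600  IN  A    192.0.2.1
example.com.  3600  IN  MX   10 mail.example.com.
example.com.  3600  IN  NS   ns1.oldprovider.example.
example.com.  3600  IN  SOA  ns1.oldprovider.example. hostmaster.example.com. 2017010101 28800 7200 604800 600
```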

For each record (except the SOA and NS records, which you should ignore), click on “Create Record Set” in your hosted zone view, enter the name (or leave it blank), choose the correct record type (4th column of the dig output), TTL (time-to-live; 2nd column) and value (last column), then save.
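If you would rather script this part, Route 53 also accepts change batches through the CLI; the hosted zone ID, record name and value below are placeholders:

```shell
# Create (or overwrite) a single record set via a change batch
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "TTL": 3600,
        "ResourceRecords": [{"Value": "192.0.2.1"}]
      }
    }]
  }'
```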

Next, point the DNS settings of your old hosting provider to the Amazon name servers: For 1&1, log into your control center, and navigate to the DNS settings for the domain. In the section “Name Server Settings”, choose “Other name servers” and enter the four name servers provided by Amazon. Their names should look like this: ns-123.awsdns-45.com (with different numbers).

If everything has been configured correctly, nothing should have changed about your website. You can use the dig command from earlier to verify that your record sets have correctly been transferred. Verify that the Amazon name servers show up for the NS records. It may take up to 48h for the changes to take effect.
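A quick way to check is to query the NS records directly (domain is a placeholder):

```shell
# The answer should list the four ns-*.awsdns-* servers from Route 53
dig +short NS example.com
```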

In the next step, we will set up the domain to point at your S3 bucket directly. This step is optional and is only possible if you do not require SSL and used your domain name as the bucket name earlier.
Navigate to your hosted zone within the Route 53 dashboard. Create a new A record set and set its name to the subdomain hosting your website (e.g. ‘www’). If you already have an A record set for that subdomain, you have to modify it. Choose “Alias” and use your website bucket URL without the bucket name (e.g. s3-website.<REGION>.amazonaws.com) as “Alias Target”.

Here’s where Route 53 trickery comes into play: If you named your bucket www.example.com, you can use it with the record set named www.example.com, which will automatically point to www.example.com.s3-website.<REGION>.amazonaws.com. Similarly, if you want to serve your website from the bare domain name, call the bucket example.com and name the record set example.com; the website will be served from example.com.s3-website.<REGION>.amazonaws.com. You can then verify that everything works correctly by navigating to your website with your web browser. Make sure to hit the http:// version though, as SSL has not yet been set up. If your browser automatically redirects you to the https:// version, this might be due to HSTS. In that case you might have to clear your browser’s HSTS settings for your domain.

Using CloudFront in front of an S3 website has several advantages: It enables you to use SSL on your domain (S3 only supports SSL on the <BUCKETNAME>.s3-website.<REGION>.amazonaws.com URL) and it gives you presence in a worldwide CDN. The disadvantages are cost and the fact that due to CloudFront’s caching, changes usually take around 24 hours to come into effect.

You can manage CloudFront from your AWS console: Choose “CloudFront” as a service in the navigation bar. Choose “Create Distribution” and then in the “Web” section “Get Started”. As “Origin Domain Name”, enter your S3 website URL: <BUCKETNAME>.s3-website.<REGION>.amazonaws.com. In the “Distribution Settings” section, enter your domain name in the “Alternate Domain Names” field. Leave everything else to the default settings (e.g. don’t specify a default root object).

After creation, the CloudFront distribution is assigned a domain name which you can see on the list of distributions. Note down the domain.
Navigate back into your hosted zone at Route 53 and create a new A record set (or modify the existing one). Choose “Alias” and enter the CloudFront domain as “Alias Target”.

Navigate to the “AWS Certificate Manager” on your AWS console. On the top right, choose “US East (N. Virginia)” as region, as CloudFront can only access certificates from that specific ACM region. Click “Request a Certificate” and enter your domain name. This is also a good moment to consider other (sub-)domains that you would like to use with your website. You can even set up a wildcard certificate by entering *.example.com in the “Domain name” field. Note that the wildcard certificate does not protect the bare domain (example.com), so go ahead and add both. After submission, you will have to verify ownership of the domain. For this purpose, emails will be sent to the administrative contact in the WHOIS record of your domain, as well as to several administrative contacts guessed via your DNS MX record. Receive the email and follow its instructions to verify the domain.

Then edit your CloudFront distribution, choose “Custom SSL Certificate” and select your newly created certificate. Again, changes to CloudFront distributions may take a while to come into effect, so you might have to wait for one day. You should then be able to navigate to https://www.example.com in your web browser.
Also try to navigate to http://www.example.com to check whether the redirection to HTTPS works.

So far, this guide only aimed to make the website accessible via one domain. However, for convenience and usability, you might want to make the website accessible via multiple (sub-)domains, such as www.example.com and example.com. As search engines do not like to see the same content available via two domains, one usually sets up a redirection. To set up a redirection that works in combination with HTTPS, create a second S3 bucket for the domain that you want to redirect from. In the “Properties” tab, again choose “Static Website Hosting”, but this time select the redirection option. Enter the entire target domain (the domain that you set up and tested in the last step) and ‘https’ as the protocol.
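The same redirect configuration can be sketched with the s3api subcommand of the AWS CLI; the bucket and target names below are placeholders:

```shell
# Configure the secondary bucket to redirect all requests
# to the primary domain over HTTPS
aws s3api put-bucket-website --bucket example.com \
  --website-configuration '{
    "RedirectAllRequestsTo": {
      "HostName": "www.example.com",
      "Protocol": "https"
    }
  }'
```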
Create a CloudFront distribution analogously to the previous section, enter the domain under “Alternate Domain Names” and choose a corresponding HTTPS certificate (e.g. the wildcard certificate that you created earlier). Again, note down the CloudFront domain name.
Lastly, create a DNS A record set for the new subdomain within your existing hosted zone and choose the CloudFront distribution as alias target.

Unfortunately, creating a second CloudFront distribution seems to be the only way to set up the redirection. For more information, also see this blog post by Simone Carletti on this topic.

There are several options for setting up email with your domain at Route 53. You can set up the MX record to point to any mail provider of your choice. Amazon also offers its own mail service (WorkMail), which I have not yet tried. Another cheap option is to set up a redirection using Amazon SES and AWS Lambda. There is a great writeup by bravokeyl on this topic.

In this somewhat lengthy post, I have first discussed the pros and cons of moving from a managed hosting service to a cloud service like AWS. In the form of a guide, I have subsequently written down my experience of moving from 1&1 to AWS and the minimal number of steps necessary to do so.

Concerning pricing and long term experience, I will update this article in a couple of months.

A couple of months ago, I lost my precious Google Chrome browser (running on Windows 7, 64-bit): All of a sudden (or not so sudden, as I would find out later), Chrome was not able to establish secure connections anymore.

The symptoms were a little unclear: When I started the browser, I was able to reach secure sites (with the https:// prefix, port 443) for about 30 seconds. Afterwards, new connections would always get stuck in the “Establishing secure connection” stage until ERR_CONNECTION_TIMED_OUT. The connections that were established in those first moments seemed to work for the rest of the session, and regular HTTP traffic (TCP port 80) was not affected.

Researching solutions on the internet yielded a lot of open questions and problems, including obvious suggestions such as checking firewall settings, reinstalling Chrome, clearing the SSL cache or disabling TLS 1.2 (which is no longer possible in current Chrome versions). None of these solutions did anything for me.

So, I monitored my network connections with packet capture software (Wireshark). First everything went as expected. But even when the secure connections started to fail, I could clearly see that the handshake completed (SSL Client Hello, Server Hello, Certificate, Client Key Exchange, New Session Ticket), but there was silence after the handshake until Chrome finished and reset the connection 20 seconds later (TCP FIN followed by a RST flag). Comparing that behavior to Firefox (which still worked; for comparison, I also enabled TLS 1.2 there), it was clear that it was Chrome’s turn to send data.

The fact that the connections which were established in the first seconds worked for an entire session indicated that some kind of data would be cached, probably certificates. Additionally, because the amount of websites that I could load was not limited by a certain number, but a certain time period, it was quite obvious that my connections were victims of a race condition.
So I fired up Sysinternals Process Monitor, filtering only events from the chrome.exe application.
After a few tries, the experiment was set up and ready to capture Chrome’s faulty behavior. I started Chrome and navigated to a lot of different web pages, pinpointing the exact time of failure. Using Wireshark and Process Monitor, I found out that at the time of the failure, Chrome had just finished reading a lot of certificate files (having to do with different certificate authorities and revocation lists). In particular, there were a lot of accesses to the certificate stores issuers.sst and subjects.sst (located at ... \AppData\LocalLow\COMODO\CertSentry).

Research on the internet showed that these files were remnants of an old COMODO Dragon installation that I had been tricked into earlier this year and that had not completed correctly. If one tries to delete them, they are recreated moments later by the DLL certsentry.dll (located at C:\Windows\system32 and C:\Windows\SysWOW64). Using regsvr32 /u certsentry.dll and Unlocker, I was able to unregister and delete the DLLs, making the deletion of the certificate stores permanent. After a couple of reboots, Chrome worked again as intended.
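For reference, a sketch of the cleanup steps described above, to be run from an elevated Command Prompt (the user-profile path is an assumption, and on 64-bit Windows the 32-bit DLL may need the SysWOW64 copy of regsvr32):

```bat
:: Unregister both copies of the COMODO CertSentry DLL
regsvr32 /u C:\Windows\system32\certsentry.dll
C:\Windows\SysWOW64\regsvr32.exe /u C:\Windows\SysWOW64\certsentry.dll

:: Delete the DLLs (a tool like Unlocker may be needed while they are loaded),
:: then remove the certificate stores so they are not recreated
del "%USERPROFILE%\AppData\LocalLow\COMODO\CertSentry\issuers.sst"
del "%USERPROFILE%\AppData\LocalLow\COMODO\CertSentry\subjects.sst"
```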

But I did not stop here. To further investigate the issue, I kept copies of the DLL files and the certificate lists. Using Dependency Walker, I was able to confirm that there were dependencies on crypt32.dll, cryptnet.dll and cryptui.dll. The certificate stores could be opened with certmgr and included certificates which I had used during my tests (e.g. Google, LastPass, AdBlockPlus, akamai [facebook]). They were still valid though.

Earlier this year, I had to manually cancel a COMODO setup procedure and remove all of its remnants. During that procedure, I missed the certsentry.dll files in my system folders, which were still being hooked by the Microsoft Windows cryptography modules, creating the files issuers.sst and subjects.sst in my user data folder.
When Google Chrome is started, the user can immediately start browsing while the certificate stores are still being loaded; this is the race condition. Once all files have finished loading, new SSL connections fail after completing the handshake.
There are a couple of things that Chrome and COMODO could learn from this situation: First of all, the asynchronous loading of security files could be dangerous, since the user does not seem to be protected by the COMODO mechanism during those first seconds. Secondly, there should be a reasonable error message informing the user of SSL errors instead of a timeout exception.

As for me, I am very satisfied with the result, especially since I have a game jam coming up at which I want to make use of HTML5 technology, which is faster and more stable in Chrome than in Firefox.