AWS SA Professional – Practice Question 3

Once again I’m back with another practice question I’ve stumbled across and wanted to share.

You are an architect for a news-sharing mobile application. Anywhere in the world, your users can see local news on topics they choose. They can post pictures and videos from inside the application. Since the application is being used on a mobile phone, connection stability is required for uploading content, and delivery should be quick. Content is accessed a lot in the first minutes after it has been posted, but is quickly replaced before disappearing. The local nature of the news means that 90 percent of the uploaded content is then read locally (less than 100 kilometers from where it was posted). What solution will optimize the user experience when users upload and view content (by minimizing page load times and minimizing upload times)? (Choose 1)

a. Upload and store the content in a central Amazon Simple Storage Service (S3) bucket, and use an Amazon CloudFront distribution for content delivery.

b. Upload and store the content in an Amazon Simple Storage Service (S3) bucket in the region closest to the user, and use multiple Amazon CloudFront distributions for content delivery.

c. Upload the content to an Amazon Elastic Compute Cloud (EC2) instance in the region closest to the user, send the content to a central Amazon Simple Storage Service (S3) bucket, and use an Amazon CloudFront distribution for content delivery.

d. Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

As always, I’m going to try to rule out any obvious answers that won’t meet the requirements for the desired outcome.

This question is basically testing your knowledge of Amazon CloudFront, which in its simplest form is a Content Delivery Network (CDN), but it can also do a lot more, such as:

Geographic Restriction – Lets you either whitelist or blacklist specific countries, controlling access to your content from those locations.
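As a rough illustration, here is the shape of the geo-restriction block inside a CloudFront distribution configuration, as the CloudFront API (and boto3) expect it. This is a minimal sketch; the country codes are placeholders I've chosen for the example.

```python
# Sketch of the GeoRestriction block inside a CloudFront DistributionConfig.
# The country codes below are illustrative, not a recommendation.
geo_restriction = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",  # or "blacklist", or "none"
        "Quantity": 2,
        "Items": ["IT", "DE"],  # ISO 3166-1 alpha-2 country codes
    }
}

# In a full DistributionConfig this fragment sits under the "Restrictions" key,
# which would then be submitted via boto3's update_distribution call.
distribution_config_fragment = {"Restrictions": geo_restriction}
```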

Support for GET, HEAD, POST, PUT, PATCH, DELETE and OPTIONS – POST, PUT, PATCH and DELETE requests are proxied straight through to the origin server.
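In the distribution configuration, this is controlled per cache behavior through the AllowedMethods setting. A sketch of that fragment, in the shape the CloudFront API uses:

```python
# Sketch of a cache behavior's AllowedMethods setting. Allowing all seven
# methods lets CloudFront proxy POST/PUT/PATCH/DELETE through to the origin;
# only the CachedMethods (GET, HEAD and optionally OPTIONS) are ever cached.
allowed_methods = {
    "Quantity": 7,
    "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
    "CachedMethods": {
        "Quantity": 3,
        "Items": ["GET", "HEAD", "OPTIONS"],
    },
}
```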

Server Name Indication (SNI) Custom SSL – Relies on the SNI extension of the Transport Layer Security (TLS) protocol, which allows multiple domains to serve SSL traffic over the same IP address because clients include the hostname they are trying to connect to in the TLS handshake.

Invalidation – You can either remove content from the Origin Server and wait for the cached copies at the edge to expire, or invalidate the content via the API to remove it from the edge locations immediately.
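The API route looks roughly like the following. This sketch builds the request body in the shape boto3's `create_invalidation` expects; the paths and distribution ID are placeholders, and the actual API call is shown only as a comment since it needs boto3 and AWS credentials.

```python
import time

# Sketch of a CloudFront invalidation request body (boto3 shape).
# The paths below are illustrative.
invalidation_batch = {
    "Paths": {
        "Quantity": 2,
        "Items": ["/images/photo.jpg", "/videos/*"],  # "*" wildcards are supported
    },
    # CallerReference must be unique per request so a retried request
    # isn't treated as a new invalidation.
    "CallerReference": str(time.time()),
}

# With boto3 installed and credentials configured, this would be submitted as:
#   boto3.client("cloudfront").create_invalidation(
#       DistributionId="EXXXXXXXXXXXXX",  # placeholder ID
#       InvalidationBatch=invalidation_batch)
```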

Dynamic Content Support – You specify whether you want Amazon CloudFront to forward some or all of your cookies to your custom origin server; Amazon CloudFront then considers the forwarded cookie values when identifying a unique object in its cache. End users get the benefit of content that is personalized just for them with a cookie, along with the performance benefits of Amazon CloudFront.
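A sketch of what that looks like in a cache behavior, using the legacy ForwardedValues shape from the CloudFront API. Forwarding only a whitelisted cookie keeps the cache key small while still letting the origin personalize responses; the cookie name here is illustrative.

```python
# Sketch of the cookie-forwarding part of a cache behavior
# (legacy ForwardedValues shape). The cookie name is a placeholder.
forwarded_values = {
    "QueryString": False,
    "Cookies": {
        "Forward": "whitelist",  # "none", "whitelist" or "all"
        "WhitelistedNames": {
            "Quantity": 1,
            "Items": ["session-id"],  # only this cookie varies the cache key
        },
    },
}
```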

“Answer C” makes the solution more expensive by requiring EC2 instances in the AWS Region nearest to each user, which then forward the data on to a central Amazon S3 bucket, potentially increasing latency. If the end user were in Italy, for example, the nearest AWS Region for the EC2 instance would be Frankfurt, so there would already be some latency between the end user’s device and that location for the upload. The data would then need to be sent on to whichever Region holds the centralized S3 bucket, which could be somewhere in the USA, adding further latency just for the upload. I would therefore rule this out for not meeting the requirement of optimizing the user experience for uploading data.

“Answer B”, whilst it uses the right technologies for the required solution, doesn’t fully resolve the issue of minimizing upload times. Having the bucket closer to the end users will help, but in my personal opinion it isn’t sufficient.

“Answer A”, whilst it also uses the right technologies, only optimizes the experience for users viewing the content. Because it suggests uploading and storing the content in a central Amazon S3 bucket, that bucket could be thousands of miles from the end users, so upload latency would be an issue.

That leaves “Answer D” as the correct choice: it meets all the requirements by utilizing Amazon CloudFront for both the distribution and the uploading of content, with Amazon S3 as the central store. To justify that choice, please continue reading below.

How CloudFront Delivers Content to Your Users

Once you configure CloudFront to deliver your content, here’s what happens when users request your objects:

1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve it, typically the nearest edge location in terms of latency.
3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:
a. CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for the HTML files.
b. The origin servers send the files back to the CloudFront edge location.
c. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
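The steps above can be sketched as a toy model: serve from the edge cache when possible, otherwise fetch from the origin mapped to that file type and cache the result for the next request. The origins and objects here are entirely made up for illustration.

```python
# Toy model of steps 1-3: an edge cache in front of per-file-type origins.
origins = {
    ".jpg": {"/photo.jpg": b"image-bytes"},       # e.g. an S3 bucket for images
    ".html": {"/index.html": b"<html></html>"},   # e.g. an HTTP server for pages
}

edge_cache = {}

def serve(path):
    if path in edge_cache:                # cache hit: return straight from the edge
        return edge_cache[path], "edge-hit"
    ext = path[path.rfind("."):]          # match the request to the right origin
    body = origins[ext][path]             # forward the request to that origin
    edge_cache[path] = body               # cache it for the next requester
    return body, "origin-fetch"

body, where = serve("/photo.jpg")     # first request is fetched from the origin
body2, where2 = serve("/photo.jpg")   # second request is served from the edge
```

This mirrors why content that is “accessed a lot in the first minutes” benefits so much: only the first local viewer pays the origin round trip.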

How CloudFront Uploads Data to an Origin

1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve it, typically the nearest edge location in terms of latency.
3. CloudFront sends the upload request back to the origin web server (such as an Amazon S3 bucket, an Amazon EC2 instance, an Elastic Load Balancer, or your own origin server) over an optimized route that uses persistent connections and TCP/IP network-path optimizations.
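From the client’s point of view, the upload is just an HTTP PUT (or POST) against the distribution’s domain name rather than against S3 directly. A minimal sketch, assuming a hypothetical distribution domain and object key; the request is constructed but deliberately not sent, and a real S3 origin would additionally require the request to be authenticated/signed, which is omitted here.

```python
from urllib.request import Request

# Sketch of an upload routed through CloudFront: a PUT against the
# distribution's domain, which the nearest edge location then carries to the
# origin over AWS's optimized network path. Domain and key are placeholders.
upload = Request(
    "https://d1234example.cloudfront.net/uploads/photo.jpg",
    data=b"...image bytes...",       # the content captured on the device
    method="PUT",
    headers={"Content-Type": "image/jpeg"},
)
# urllib.request.urlopen(upload) would actually send it (auth omitted here).
```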

“Answer D”, on the other hand, meets all of the requirements by using Amazon CloudFront for both the uploading and the delivery of content.