You have been asked to virtually extend two existing data centres into AWS to support a highly available application that depends on existing on-premises resources located in multiple data centres, as well as static content served from an Amazon Simple Storage Service (S3) bucket. Your design currently includes a dual-tunnel VPN connection between your customer gateway (CGW) and virtual private gateway (VGW). Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available? (Choose 1)

a. No changes are necessary: the network architecture is currently highly available.

b. Add another CGW in a different data centre and create another dual-tunnel VPN connection.

c. Add another VGW in a different availability zone and create another dual-tunnel VPN connection.

d. Add a second VGW in a different availability zone, and a CGW in a different data centre, and create another dual-tunnel VPN connection.

This question is testing your understanding of VPNs and the various elements involved in establishing a VPN connection with AWS.
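To make the reasoning concrete: a single AWS VGW already terminates each VPN connection on two tunnel endpoints in different Availability Zones, so the redundancy gap sits on the on-premises side, where a lone CGW is one physical device. Below is a toy sketch of that topology check; the function name and structure are my own, not part of any AWS API.

```python
# Illustrative sketch only: a toy model of the redundancy check,
# not an AWS API. Names and structure are my own assumptions.

def spof_candidates(cgw_count: int, vgw_count: int) -> list[str]:
    """Flag components lacking redundancy in a CGW-to-VGW VPN design.

    A single AWS VGW terminates each VPN connection on two tunnel
    endpoints in different Availability Zones, so it is not flagged
    here; a lone on-premises customer gateway, however, is one
    physical device in one data centre.
    """
    candidates = []
    if cgw_count < 2:
        candidates.append("customer gateway (on-premises)")
    return candidates
```

With one CGW and one VGW the check flags only the customer gateway; adding a second CGW in a different data centre, each with its own dual-tunnel VPN connection, clears it.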

A media production company wants to deliver high-definition raw video material for pre-production and dubbing to customers all around the world. They would like to use Amazon CloudFront for this scenario, and they require the ability to limit downloads per customer and video file to a configurable number. A CloudFront download distribution with TTL = 0 has already been set up to make sure all client HTTP requests hit an authentication backend on Amazon Elastic Compute Cloud (EC2)/Amazon Relational Database Service (RDS) first, which is responsible for restricting the number of downloads. Content is stored in Amazon Simple Storage Service (S3) and configured to be accessible only via CloudFront. What else needs to be done to achieve an architecture that meets the requirements? (Choose 2)

a. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in Amazon RDS, and invalidate the CloudFront distribution as soon as the download limit is reached.

b. Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in Amazon RDS, and then return a dynamically signed URL unless the download limit is reached.

c. Enable CloudFront logging into an Amazon S3 bucket, let the authentication backend determine the number of downloads per customer by parsing those logs, and return the content S3 URL unless the download limit is reached.

d. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in Amazon RDS, and return the content S3 URL unless the download limit is reached.

e. Enable CloudFront logging into an Amazon S3 bucket, leverage Amazon Elastic Map Reduce (EMR) to analyse CloudFront logs to determine the number of downloads per customer, and return the content S3 URL unless the download limit is reached.

This question is testing your understanding of CloudFront and how to secure the origin, which in this case is an S3 bucket. I would strongly recommend watching the AWS re:Invent video Introduction to Amazon CloudFront (CTD205), as it covers some of the fundamental concepts being asked about in this question.
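The trusted-signer approach in option (b) has the backend hand back a time-limited signed URL instead of the raw content URL. As a minimal sketch of two documented pieces of that mechanism, the snippet below builds a CloudFront canned policy and applies CloudFront's URL-safe base64 variant; the actual RSA-SHA1 signing of the policy with the trusted signer's private key is deliberately omitted.

```python
import base64
import json

def canned_policy(resource_url: str, expires_epoch: int) -> str:
    # Canned-policy JSON in the compact form CloudFront expects (no whitespace).
    policy = {"Statement": [{
        "Resource": resource_url,
        "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
    }]}
    return json.dumps(policy, separators=(",", ":"))

def cloudfront_safe_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'.
    encoded = base64.b64encode(data).decode("ascii")
    return encoded.replace("+", "-").replace("=", "_").replace("/", "~")

# A full signed URL would then carry Expires, Signature (the RSA-SHA1
# signature of the policy, encoded as above) and Key-Pair-Id query
# parameters appended to the distribution URL.
```

In practice the signing step would use an RSA library against the CloudFront key pair of a configured trusted signer; the backend only issues the URL while the customer's download count in RDS is under the limit.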

You are designing a file-sharing service. This service will have millions of files on it. Revenue for the service will come from fees based on how much storage the user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? (Choose 1)

a. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API.

b. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs of the associated metadata when objects are uploaded.

c. Create a striped set of 4,000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata.

d. Create a striped set of 4,000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs of the associated metadata when objects are uploaded.

This question is testing your understanding of S3 and EBS volumes and the differing use cases for each. Before going any further, I'd recommend reading and understanding the service limits:
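To illustrate the S3-plus-DynamoDB pattern from option (b): each upload writes one metadata item keyed on the object, and usage-based billing becomes a roll-up of stored bytes per user. The sketch below is a toy in-memory model; the attribute names are my own assumptions, not from the question or any AWS schema.

```python
# Toy sketch of the option (b) pattern: one metadata item per S3 object,
# plus the per-user storage roll-up that drives billing.
# Attribute names are my own assumptions.

def build_metadata_item(user_id: str, bucket: str, key: str,
                        title: str, description: str,
                        is_public: bool, size_bytes: int) -> dict:
    # Shape of the DynamoDB item written when the object is uploaded.
    return {
        "user_id": user_id,                # partition key
        "object_key": f"{bucket}/{key}",   # sort key: one item per object
        "title": title,
        "description": description,
        "is_public": is_public,
        "size_bytes": size_bytes,
    }

def billable_storage(items: list[dict], user_id: str) -> int:
    # Sum of stored bytes for one user; in DynamoDB this would be a
    # Query on the partition key rather than a scan of all items.
    return sum(i["size_bytes"] for i in items if i["user_id"] == user_id)
```

Keying the items on the user lets the billing query scale with one user's objects rather than with the millions of files in the whole service, which is what makes this design economical at scale compared with striped EBS volumes.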

I had been reminded by a number of work colleagues on Friday that the exam voucher packs we had bought whilst on our instructor-led Architecting Azure course back in May were due to expire at the end of this week.

Not wanting to waste the money I'd spent, and given that the deal included a free retake option, I decided to book the exam. By no means was I confident, as I've not been focusing on Azure at all, let alone actively using the platform, since I'm currently working and spending my time studying in preparation for my AWS Solutions Architect Professional exam.