Tuesday, August 30, 2016

My attempt at the AWS Solutions Architect Professional exam sample questions.

For this practice exam the correct answers are given in the Japanese version of the practice exam, which can be found here.  The English practice exam is available here.  In this post I would like to provide my reasoning as to why the given answers are the correct ones.  I have done the same for the DevOps Professional exam here.

Question 1: Best RTO for on-premises Content Management System

 - Answer A is the best of the provided options because a Storage Gateway is already in use and its volumes can be converted to EBS volumes (see the sketch after this list).  The RMAN backups in S3 also allow restoring the database onto EC2.
 - Answer B is not acceptable since Glacier recoveries take >= 3 hours, which is too slow for the required RTO
 - Answer C: There is no need to attach an AWS Storage Gateway to the EC2 instance; it is better to use an EBS volume
 - Answer D: AWS Storage Gateway-VTL is for tape backups, which are not needed here since you already have a Storage Gateway volume
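
To make the Storage Gateway point concrete, here is a minimal boto3 sketch, assuming hypothetical ARNs and IDs, of converting a gateway volume into an EBS volume that a recovery EC2 instance can use: snapshot the gateway volume (the snapshot lands in EBS), then create and attach a volume from it.

```python
import boto3

# Hypothetical identifiers -- substitute your own gateway volume and instance.
VOLUME_ARN = ("arn:aws:storagegateway:us-east-1:123456789012:"
              "gateway/sgw-12345678/volume/vol-12345678")

sgw = boto3.client("storagegateway")
ec2 = boto3.client("ec2")

# Snapshot the Storage Gateway volume; the snapshot is stored as an EBS snapshot.
snap = sgw.create_snapshot(
    VolumeARN=VOLUME_ARN,
    SnapshotDescription="CMS recovery point",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a native EBS volume from the snapshot and attach it to the recovery instance.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical EC2 instance running the CMS
    Device="/dev/sdf",
)
```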

Question 2: ERP application in multiple AZs

 - Answer C is valid and allows you to restore data to within 5 minutes of the issue, so the RPO of 15 minutes is met (see the restore sketch after this list).  Since you also have hourly backups in S3 you can quickly restore those, and you only need to replay at most 1 hour of transaction logs.  Furthermore, S3 provides excellent data durability.
 - Answer A is not acceptable since Glacier recoveries take too much time (> 3 hours)
 - Answer B is not good for this scenario, as it is unknown how the data corruption occurred.  The corruption was probably introduced by a logical error rather than an issue at the storage level.  Synchronous replication only ensures that changes are written to a second system as part of your transactions; it faithfully replicates the corruption too, and does not allow you to recover to an earlier point in time to protect against such errors.
 - Answer D is unacceptable because even though instance store volumes might allow you to take backups more quickly, they are volatile and should not be relied upon for database backups (they are also only accessible from a single instance, so the data lives in only one AZ)
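
As a rough illustration of the recovery flow behind answer C, here is a hedged Python sketch; the bucket name, the key layout ("hourly/" full backups, "txlogs/" 5-minute transaction-log dumps) and the replay step are all assumptions for illustration.

```python
from datetime import datetime, timezone
import boto3

# Hypothetical layout: hourly full backups under "hourly/" and
# 5-minute transaction-log dumps under "txlogs/" in one S3 bucket.
BUCKET = "erp-db-backups"
s3 = boto3.client("s3")

def objects(prefix):
    return s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix).get("Contents", [])

def restore_to(point_in_time):
    # 1. Newest hourly full backup taken before the corruption.
    fulls = [o for o in objects("hourly/") if o["LastModified"] < point_in_time]
    full = max(fulls, key=lambda o: o["LastModified"])
    s3.download_file(BUCKET, full["Key"], "/restore/full.bak")

    # 2. Replay at most one hour of 5-minute transaction logs, stopping
    #    just before the corruption -- this is what meets the 15-minute RPO.
    for log in sorted(objects("txlogs/"), key=lambda o: o["LastModified"]):
        if full["LastModified"] < log["LastModified"] < point_in_time:
            s3.download_file(BUCKET, log["Key"],
                             "/restore/" + log["Key"].split("/")[-1])
            # replay_log(...) would go here; it is database-engine specific

restore_to(datetime(2016, 8, 30, 9, 55, tzinfo=timezone.utc))
```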

Question 3: Random acts of kindness

 - Answer B is good, as it is a cheap approach that lets you operate without maintaining infrastructure (see the sketch after this list)
 - Answer A is not good, as IAM users should represent internal users of your organization.  You should not use these identities for 'web' users; one reason is that the number of IAM users per account is limited.  Even if you mapped all web users to a single 'application' user, it would not be good practice.
 - Answer C is not good, again because of the IAM user usage, and because it introduces additional unnecessary infrastructure (incurring costs)
 - Answer D introduces unneeded infrastructure and therefore unnecessary costs.
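
To make the IAM-user objection concrete: the scalable alternative for 'web' users is web identity federation, where the client trades a token from an external identity provider for temporary AWS credentials via STS.  A minimal boto3 sketch, with a hypothetical role ARN and a placeholder token:

```python
import boto3

sts = boto3.client("sts")

# The client signs in with an external identity provider (e.g. Login with
# Amazon, Facebook, Google) and receives a web identity token.
web_identity_token = "<token obtained from the identity provider>"

# Trade the token for temporary, scoped-down AWS credentials -- no IAM user
# is ever created for the end user.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/KindnessAppUser",  # hypothetical
    RoleSessionName="mobile-user-42",
    WebIdentityToken=web_identity_token,
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```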

Question 4: Protecting SSL

 - Answer D is the best.  CloudHSM is hardened to ensure SSL private keys cannot leave the device, and its design and external certifications guarantee that Amazon employees cannot access them either.  Since Amazon employees also have no access inside your EC2 instance, it is fine to store your logs on an ephemeral volume encrypted with a randomly generated AES key (see the encryption sketch after this list).  This means you will lose your logs on stop/start or on a hardware failure, but no retention requirements were mentioned for the log files.  Since the volume is mounted using this random key, you grant users access to the logs by granting them access to the instance.  The encryption ensures the data is encrypted at rest and that physical access does not compromise it.
 - Answer A is generally a good solution, but since security is the main concern here it is not the best one.  By offloading SSL at the load-balancing tier, traffic flows in plain text from the ELB to the web servers.
 - Answer B is not good, as there is no way to protect your private key in the Amazon S3 bucket.  Since your instances need access to the bucket to retrieve the key, employees could do the same and thereby compromise it.
 - Answer C is good but does not really protect your logs, as you cannot write them straight into S3: S3 is an object store and cannot be used reliably as a block device.
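
Here is a miniature version of answer D's log-encryption idea, assuming the third-party cryptography package; in practice you would more likely encrypt the whole ephemeral device (e.g. with dm-crypt) using a random key, but the principle is the same: the key lives only in memory on the instance.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Random key generated at instance start and held only in memory: once the
# instance stops or fails, both the key and the ephemeral volume are gone,
# which is acceptable because no log retention requirement was stated.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def write_log_record(path, record: bytes):
    nonce = os.urandom(12)  # unique nonce per record
    ciphertext = aead.encrypt(nonce, record, None)
    with open(path, "ab") as f:
        # Store nonce + length-prefixed ciphertext; only someone with access
        # to the running instance (and thus the in-memory key) can decrypt.
        f.write(nonce + len(ciphertext).to_bytes(4, "big") + ciphertext)

write_log_record("/mnt/ephemeral/app.log.enc", b"GET /checkout 200")
```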

Question 5: Fat client application 

 - Answer D is the best.  Using the SSL VPN client, users can securely connect to the VPC and gain access to the private subnets.  The fat client then connects over the VPN tunnel to the application servers, which sit safely in the private subnet (see the sketch after this list).
 - Answer A does not make sense: AWS Direct Connect provides a 'private' line from your own data center into AWS and therefore does not come into play in this scenario
 - Answer B is not valid, as you don't want to publish the application on the internet; an ELB by itself therefore won't help
 - Answer C is not valid, as it still places your application servers in the public subnet.  The IPsec VPN connection is meant to avoid exactly that.
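
In this 2016-era scenario the SSL VPN endpoint would typically be a VPN appliance (e.g. OpenVPN) running on an instance in the public subnet; that detail is my assumption, not stated in the answer.  A boto3 sketch of the two AWS-specific setup steps, with hypothetical IDs: open the VPN port, and disable the source/destination check so the appliance can forward traffic into the private subnets.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound SSL VPN connections from the internet (OpenVPN commonly uses
# UDP 1194; TCP 443 is another common choice).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical appliance security group
    IpPermissions=[{
        "IpProtocol": "udp",
        "FromPort": 1194,
        "ToPort": 1194,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# The appliance forwards packets between connected clients and the private
# subnets, so EC2 must not drop traffic that is not addressed to the instance
# itself: disable the source/destination check.
ec2.modify_instance_attribute(
    InstanceId="i-0abc1234def567890",  # hypothetical VPN appliance instance
    SourceDestCheck={"Value": False},
)
```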

Question 6: Legacy engineering application migration

 - Answer B is indeed the way to go: an initial sync followed by incremental syncs ensures you get all the data, in its latest state, within the time frame (see the rsync sketch after this list).  If needed you could perform multiple incremental syncs (note that these incur additional cost, as you consume more bandwidth)
 - Answer A is not valid, as it does not solve the problem of the time needed to transfer the 900 GB of data
 - Answer C is not valid, as AWS Import/Export involves shipping physical storage devices and is not meant for migrating data within 48 hours
 - Answer D is not valid because it only starts copying the data on Friday, which again does not leave enough time to transfer it all.
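
Answer B does not name a tool, but rsync is a natural fit for this pattern: a full copy ahead of the weekend, then cheap incremental passes that only ship the deltas.  A sketch with hypothetical paths and host:

```python
import subprocess

SRC = "/data/engineering/"               # hypothetical on-premises data set
DST = "ec2-user@app.example.com:/data/"  # hypothetical EC2 target

def sync():
    # -a preserves permissions/timestamps, -z compresses over the wire,
    # --delete keeps the target an exact mirror of the source.
    subprocess.run(["rsync", "-az", "--delete", SRC, DST], check=True)

# Days before the cutover: full copy of the ~900 GB data set.
sync()

# During the 48-hour window: one or more incremental passes, each
# transferring only what changed since the previous run, so they finish fast.
sync()
```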
