AWS Solutions Architect Associate 2020 exam guide and tips

The AWS Certified Solutions Architect – Associate exam was updated in 2020.
It now focuses more on building highly available architectures. Passing the exam requires two things: a lot of revision and hard work. An AWS certification indicates that you're familiar with AWS resources and can be helpful in your career development.
Production experience is even more valuable, so I highly recommend trying out almost everything in your own account. For example, building a highly available system yourself is very important!

I will cover the most important tips for the 2020 exam. Please bear in mind that the exam is not easy, and studying thoroughly is very important.

Sections covered in the exam and this guide

The exam covers five domains, all of which are essential to pass:

  1. Designing fault-tolerant and scalable architectures (34%)
  2. Improving architecture performance by utilising caching and selecting the right database and storage (24%)
  3. Securing applications and architectures (focuses a lot on VPCs and VPC endpoints) (26%)
  4. Reducing cost in AWS (such as selecting Reserved Instances) (10%)
  5. Selecting the correct design features in your architecture (6%)

AWS Solution Architect Associate Exam guides

1. Designing fault tolerant and scalable architecture

This is the most important section of the exam, as it carries the largest share of the total score (34%). Here are the top points you need to know:

AWS EC2, Load Balancer, and Storage Options:

In this section, the exam will test your knowledge of various topics such as payment plans, types of storage for instances, etc.

Key points to remember:

  1. Understand the difference between the pricing plans:
    • On-Demand Instances:
      • Pay per hour.
      • Use cases: development/testing environments.
      • If you're going to production but are not 100% confident of your resource utilisation, start with On-Demand, then move to Reserved.
    • Reserved Instances
      • 1- or 3-year agreement (up to 75% discount).
      • Capacity reservation is possible with Reserved Instances!
      • Use cases: production-ready applications, applications that will be live for a long time.
    • Spot Instances
      • You bid on the price.
      • Use cases: applications where termination of an instance won't cause an issue.
    • Savings Plans
      • A newer pricing plan, similar to Reserved Instances but more flexible: the discount is tied to a usage commitment rather than to an instance type. It also applies to Fargate.
    • Dedicated Hosts
      • You purchase the physical EC2 machine. Ideally, this is used for compliance purposes only.
  2. How Spot Instances work: if the Spot price rises above your bid price, or there is not enough capacity, the EC2 instance receives a termination notice and is terminated two minutes later.
  3. There are three tenancy options:
    1. Shared Tenancy: Multiple instances run on the same hardware (default option)
    2. Dedicated instance: A dedicated hardware that runs only a single customer instance
    3. Dedicated host: Physical server with full EC2 capacity dedicated to the user. It’s mostly used for Licensing reasons.
  4. Launch Templates: similar to Launch Configurations, but with versioning. A launch template can have multiple versions, each with different parameters. They are used in auto-scaling policies, and you can configure auto scaling to pick up the latest version of a template.

    Note: only 5,000 launch templates are allowed per Region, and a single launch template can have up to 10,000 versions.
    Launch templates are not validated on creation! Test them before using them in auto scaling.

  5. Instance store data is lost when an Amazon EC2 instance is stopped or terminated (it does survive a reboot). Treat it as temporary data!
  6. There are four sources of AMIs:
    1. AMIs published by AWS: these are maintained and managed by AWS and are very reliable.
    2. AWS Marketplace: you can purchase AMIs from providers such as Bitnami, with applications such as Drupal pre-installed, so you don't need to install them yourself.
    3. AMIs from an existing instance: created from a current EC2 instance.
    4. AMIs uploaded from virtual servers: imported via the AWS Import/Export service.
  7. An EFS volume can be mounted on multiple servers.
    EFS data can be backed up in two ways:
    1) The AWS Backup service.
    2) An EFS-to-EFS backup solution.
  8. An EBS volume can be mounted on a single server only.
  9. The public IP changes on a stop/start of an instance. To avoid the change of IP, associate an Elastic IP with your instance.
  10. An Elastic IP attached to a running EC2 instance does not incur any charge, but if it is left unassociated you will be charged.
  11. A newly launched Windows instance can be accessed via the random administrator password which AWS generates upon completion of the instance creation (decrypted using your key pair).
  12. Bootstrapping: allows you to execute a script when an instance boots. Usually this involves installing packages or configuring Chef/Puppet.
    Ideally you would write a cloud-init script that installs Chef, Puppet, or Ansible, and they would then handle the provisioning of the instance.
  13. Enhanced Networking: an EC2 feature that improves network performance. Note: only specific EC2 instance types support it, and it can be enabled in a VPC only.
  14. Termination Protection prevents accidental deletion of an EC2 instance.
  15. Placement Groups: let you place multiple EC2 instances in a group, which provides lower network latency between them.
  16. There are four types of EBS volumes:
    1. Cold HDD (sc1)
      • Use case: lowest storage cost.
      • Max IOPS/volume: 250.
      • Cannot be selected as the boot volume for an EC2 instance.
    2. Throughput Optimized HDD (st1)
      • Use case: big data, log processing.
      • Max IOPS/volume: 500.
      • Also cannot be used as a boot volume.
    3. General Purpose SSD (gp2)
      • Use case: OS boot volumes, databases.
      • Max IOPS/volume: 16,000.
    4. Provisioned IOPS SSD (io1)
      • Use case: critical applications, hosting ELK, applications that need very fast data access.
      • Max IOPS/volume: 64,000.
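As a quick sanity check on the On-Demand vs Reserved decision above, here is a small break-even sketch. The hourly rate and discount are made-up illustration values, not real AWS prices; real prices depend on instance type and region.

```python
# Hypothetical rates for illustration only -- real prices vary by
# instance type and region; always check the AWS pricing pages.
ON_DEMAND_HOURLY = 0.10     # assumed On-Demand $/hour
RESERVED_DISCOUNT = 0.40    # assumed discount for a 1-year Reserved term

HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate, utilisation=1.0):
    """Cost of running one instance for a year at the given utilisation."""
    return hourly_rate * HOURS_PER_YEAR * utilisation

on_demand = yearly_cost(ON_DEMAND_HOURLY)
reserved = yearly_cost(ON_DEMAND_HOURLY * (1 - RESERVED_DISCOUNT))

print(f"On-Demand: ${on_demand:,.0f}/year, Reserved: ${reserved:,.0f}/year")

# Reserved only pays off if the instance actually runs most of the time:
# at 50% utilisation, paying per hour On-Demand beats a full-year commitment.
print(yearly_cost(ON_DEMAND_HOURLY, 0.5) < reserved)
```

This is why the advice is to start On-Demand until you understand your utilisation, then commit to Reserved once the instance is known to run continuously.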

2. AWS Databases

AWS Aurora Serverless is a newer database option for which you don't need to manage scaling. It supports MySQL 5.6 and 5.7 compatibility only. Unlike RDS, where you pay for an instance type, with Aurora Serverless you pay for storage and capacity only. When there is no database usage it auto-pauses (this behaviour can be changed).

In my experience, these are the most common points you should know before taking the exam:

  1. Understand the difference between
    1. OLTP – Online Transaction Processing (database types: AWS RDS, AWS Aurora)
    2. OLAP – Online Analytic Processing (AWS Redshift)
  2. Understand the difference between
    • RPO (Recovery Point Objective) – the maximum acceptable amount of data loss, measured in time.
    • RTO (Recovery Time Objective) – the maximum acceptable time to restore service after a failure.
  3. Manual DB snapshots are not deleted automatically, unlike automated DB snapshots!
  4. To create a fault-tolerant and highly available database architecture, enable Multi-AZ. When the master database fails, the standby is promoted to master. There will be a short period of downtime while the failover takes place.
  5. Use the DNS endpoint name in your application to connect to the database. If the database fails, AWS updates the DNS records, so your application is not impacted. (Used in Multi-AZ.)
  6. Use Read Replicas on a read-heavy website to offload load from the master database.
  7. Use the Amazon Redshift COPY command for bulk imports; it is much more efficient than individual INSERT statements.
  8. Amazon DynamoDB is the AWS-managed NoSQL database.
  9. Increase the write efficiency of an Amazon DynamoDB table by using a high-cardinality partition key, for example by adding a random suffix to the key (write sharding).
  10. AWS Aurora is a database engine developed by AWS that offers better performance than standard MySQL/PostgreSQL on RDS and can be cheaper to run.

The exam will ask you which database is best suited for NoSQL – it's always Amazon DynamoDB!
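The DynamoDB write-sharding tip above can be sketched in a few lines. This is a minimal local illustration of the idea, not real DynamoDB code; the key names and shard count are arbitrary examples.

```python
import random

def sharded_key(base_key, shards=10):
    """Append a random suffix so writes spread across partitions.

    With a hot partition key like a date, all writes land on one
    DynamoDB partition; suffixing 'base.0' .. 'base.9' spreads them
    across up to `shards` partitions.
    """
    return f"{base_key}.{random.randrange(shards)}"

def all_shard_keys(base_key, shards=10):
    """Keys a reader must query and merge to reassemble one logical key."""
    return [f"{base_key}.{i}" for i in range(shards)]

print(sharded_key("2020-04-01"))        # e.g. '2020-04-01.7'
print(all_shard_keys("2020-04-01", 3))  # the three keys a reader would scatter-query
```

The trade-off is that reads for one logical key now fan out across all suffixes, so this pattern suits write-heavy, read-light workloads.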

3. AWS Storage, S3, Glacier

This section focuses on various storage options, securing stored data, and the implementation/deployment of storage options.

Key points:

  • Block level (EBS) and file storage (EFS) are covered here
  • AWS S3 is an object store: everything saved in S3 is stored as an object.
    • Each object consists of data (the content itself) and metadata (key-value pairs describing the object).
    • An object can be from 0 bytes to 5 TB in size.
    • S3 objects are replicated across multiple devices within a region!
    • S3 objects are saved in a container called a “bucket”; think of a bucket as the root folder.
    • To prevent accidental object deletion, enable versioning and MFA Delete.
    • S3 data can be replicated to other regions; this is usually done for compliance. Note: only new objects are replicated.
    • Bucket names are unique across all AWS accounts!
    • A bucket name must be between 3 and 63 characters and can contain lowercase letters, numbers, hyphens, and periods.
    • Consistency model of S3:
      • When you create a new object, a subsequent read returns the latest object (read-after-write consistency for PUTs of new objects).
      • When you PUT or DELETE an existing object, AWS provides eventual consistency: it might take a while for the change to be reflected.
    • The main S3 storage classes are:
      1. S3 Standard: 99.99% availability, ideal for frequently accessed data.
      2. S3 Intelligent-Tiering: think of it as automation that monitors access patterns and moves objects to the appropriate tier. For example, when an object has not been accessed for more than 30 days, it is moved to the infrequent-access tier. Intelligent-Tiering has no impact on performance. It is a great choice when you don't know how long objects will be stored or accessed and want to save cost in the long run.
      3. S3 Standard-IA (Infrequent Access): 99.9% availability, ideal for less frequently accessed data, cheaper than S3 Standard, minimum billable object size of 128 KB.
      4. S3 One Zone-IA: 99.5% availability, about 20% cheaper than S3 Standard-IA. Data is stored in a single Availability Zone. Ideal for objects that are accessed infrequently but need rapid access when required; if data resilience is not important and you want to reduce cost, this is the ideal class.
      5. S3 Glacier: used for data archiving, such as long-term compliance. There is no upfront cost; pricing is per GB stored. It provides three retrieval options:
        1) Expedited: retrieve data within 1–5 minutes.
        2) Standard: retrieve data within 3–5 hours.
        3) Bulk: retrieve data within 5–12 hours.
      6. S3 Glacier Deep Archive: the lowest-cost S3 storage class. Data can be restored within 12 hours.
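The bucket-naming rules mentioned above are easy to get wrong in the exam, so here is a small validator. It is a simplified sketch covering only the length and character rules listed here; the full AWS rule set has additional constraints (e.g. a name must not be formatted like an IP address).

```python
import re

# 3-63 chars: a first and last character of [a-z0-9], with 1-61
# characters of lowercase letters, digits, hyphens, or periods between.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    """True if `name` passes the simplified S3 bucket naming rules."""
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("my-exam-notes-2020"))  # True
print(is_valid_bucket_name("ab"))                  # False: shorter than 3 chars
print(is_valid_bucket_name("Uppercase-Bucket"))    # False: uppercase not allowed
```

Remember that names must also be globally unique across all AWS accounts, which no local check can verify.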


4. Elastic Load Balancer

Key points:

  • There are two types of load balancer:
    1. Internet-facing: has a public IP.
    2. Internal: routes traffic within a VPC.
  • There are three categories of Load Balancer:
    1. Application Load Balancer
      • Operates at Layer 7.
      • Supports up to 100 rules.
      • Because it operates at Layer 7, it has access to the request URI, so you can set rules to dispatch requests to different target groups based on the URL.
      • Supports WebSockets and secure WebSockets.
      • Supports SNI.
      • Supports IPv6.
    2. Network Load Balancer (the recommended load balancer at Layer 4)
      • Operates at Layer 4 (TCP and UDP).
      • Preserves the source IP.
      • Provides lower latency than the other load balancers.
      • Handles millions of requests per second.
    3. Classic Load Balancer
      1. Operates at Layer 4.
      2. Features cross-zone load balancing.
  • Idle Connection Timeout: the period of inactivity between the client and the EC2 instance after which the load balancer terminates the connection. By default it is 60 seconds.
  • Enable cross-zone load balancing to distribute traffic evenly across the pool of registered instances in all Availability Zones.
  • Connection Draining: lets in-flight requests complete before an instance is deregistered or taken out of service, rather than cutting them off.
  • Proxy Protocol: needs to be enabled in order to pass the client's source IP through to the backend (only on the Network Load Balancer and Classic Load Balancer).
  • Sticky Sessions: ensure that the load balancer sends future requests from the same user to the EC2 instance that received the initial request.
  • Auto Scaling Groups: let you increase or decrease the number of EC2 instances based on metrics such as CPU utilisation, or on a schedule (scheduled scaling is usually used when you can predict an increase in load at a given time, e.g. a marketing campaign).

5. Designing a secure VPC

In this section, the exam will test your knowledge of the security aspects of an architecture. Building a secure architecture can be a challenging job for an architect. AWS has many services and features that can help you implement one.

Key points to remember:

  • VPC (Virtual Private Cloud) is a virtual network dedicated to a customer on AWS.
  • A VPC cannot span multiple regions.
  • The IP address range of a VPC or its subnets cannot be altered once the VPC has been created.
  • A newly created VPC comes with the following resources:
    • A route table.
    • A DHCP option set.
    • A security group.
    • A network ACL.
    • (The default VPC also comes with subnets.)
  • A subnet is a segment of a VPC.
  • The smallest subnet is a /28 netmask (16 IP addresses) and the largest a /16 netmask (65,536 IP addresses).
  • AWS reserves 5 IP addresses in every subnet, so a subnet with 16 IP addresses has only 11 usable addresses (16 − 5 = 11).
  • There are three types of a subnet:
    • Public: Has internet access and traffic is routed to the IGW (Internet Gateway)
    • Private: Is not routed to the IGW.
    • VPN-only: Traffic is directed to a VPG (Virtual Private Gateway).
  • A subnet can be associated with a single Availability Zone only!
  • For a subnet to become public (have internet access), it needs to route all non-local traffic to the IGW.
  • VPC Endpoints let you connect your VPC to AWS services such as S3 over a private network, without the need for an internet connection, NAT gateway, etc.
  • VPC peering connections let you connect instances in different VPCs.
    • Connections are initiated through a request/accept handshake.
    • It is a one-to-one relationship.
    • Inter-region VPC peering is supported.
    • Transitive routing is not supported.
  • There are two types of firewalls:
    1. Security Group.
    2. Network Access Control Lists (ACLs)
  • Instances in a private subnet that need to access the internet must do so through a NAT instance or a NAT gateway.
  • NAT Instances are managed by the customer.
  • NAT Gateway is managed by AWS and scales automatically.
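The subnet-sizing rule above is a favourite exam calculation, and Python's standard `ipaddress` module makes it easy to check. The CIDR blocks below are arbitrary examples.

```python
import ipaddress

# AWS reserves 5 addresses per subnet: network address, VPC router,
# DNS, one for future use, and the broadcast address.
AWS_RESERVED_PER_SUBNET = 5

def usable_addresses(cidr):
    """Addresses actually assignable to instances in a VPC subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_addresses("10.0.0.0/28"))  # 11 -> the 16 - 5 example above
print(usable_addresses("10.0.0.0/24"))  # 251
print(usable_addresses("10.0.0.0/16"))  # 65531 -> largest subnet AWS allows
```

A /24 is therefore 251 usable addresses, not 254 as in a traditional network, which is exactly the kind of off-by-a-few detail the exam likes to probe.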


Download part 2 for just $5

If you have taken the exam, please share your experience by leaving a comment below.




If you’re interested in learning more about the cloud, I recommend becoming familiar with tools such as Chef and Ansible. In fact, I have written a blog post on how to create a VPC using Ansible.