How to pass your 2019 AWS Certified Solutions Architect Associate Exam first time

AWS has introduced new services in 2019, such as Aurora Serverless. To pass the exam on the first attempt, you need a solid understanding of the majority of the services and plenty of practice with exam-style questions. Passing the AWS Solutions Architect Associate exam requires two things: a lot of revision and hard work. I was delighted when I passed, and I have Cloud Guru to thank for their amazing training materials. An AWS certification is an indication that you’re familiar with AWS resources, and it can be helpful in your career development.

In this article, I will cover the tips that helped me prepare for the 2019 exam. By the end of this post you will have revised the majority of the key points!

Sections covered in the exam

The exam covers 5 different sections, each of which is essential to passing:

  1. Building a scalable, fault-tolerant and cost-efficient architecture. (34%)
  2. Utilizing caching services to improve application performance, and choosing an appropriate database design. (24%)
  3. Securing applications from attackers using services such as WAF, applying encryption mechanisms, and configuring VPCs. (26%)
  4. Reducing the cost of systems in both the storage and compute areas. (10%)
  5. Selecting the right design to provide a fault-tolerant architecture. (6%)

AWS Solutions Architect Associate exam tips and preparation points




1. Building a scalable, fault-tolerant and cost-efficient architecture

This is the most important section of the exam, as it accounts for the largest share of the total score (34%). Here are the top points you need to know:

AWS EC2, Load Balancer, and Storage Options:

In this section, the exam will test your knowledge of topics such as the payment plans and the storage types available for instances.

Key points to remember:

  1. Understand the difference between the three payment plans:
    • On-Demand instances:
      • Pay per hour.
      • Use cases: development/testing environments.
    • Reserved instances:
      • 1- or 3-year agreement (up to 60% discount).
      • Use cases: production-ready applications, applications that will be live for a long time.
    • Spot instances:
      • Bid on the price.
      • Use cases: applications where the termination of an instance won’t cause an issue.
  2. Spot instances: If the price of the spot instance rises above your bid price, or there is not enough capacity, the AWS EC2 instance receives a termination notice and is terminated two minutes later.
  3. There are three tenancy options:
    1. Shared tenancy: Multiple customers’ instances run on the same hardware (the default option).
    2. Dedicated instance: Hardware dedicated to a single customer runs only that customer’s instances.
    3. Dedicated host: A physical server with its full EC2 capacity dedicated to you. It’s mostly used for licensing reasons.
  4. Launch Templates: Consider them a template file in which you predefine the configuration of the resources you would like to spin up when the template is used. Launch templates are useful in Auto Scaling: when new instances need to be spun up, the group can launch them from the template (a minimal boto3 sketch appears after this list). Launch templates can have multiple versions, each with different parameters.
    Note: only 5,000 launch templates per Region are allowed, and up to 10,000 versions of a single launch template can exist.
    Launch templates are not validated when created! Please test them before using them in Auto Scaling.
  5. Instance store data is lost when an Amazon EC2 instance is stopped or terminated. It is temporary (ephemeral) data!
  6. There are four types of AMIs:
    1. AMIs published by AWS: These are maintained and managed by AWS and are very reliable.
    2. AWS Marketplace AMIs: You can purchase AMIs from other providers such as Bitnami or Drupal. The applications come pre-installed, so you don’t need to install them yourself.
    3. AMIs from existing instances: AMIs created from a current EC2 instance.
    4. AMIs uploaded from virtual servers: AMIs that have been imported via the VM Import/Export service.
  7. An EFS volume can be mounted on multiple servers.
    EFS data can be backed up in two ways:
    1) the AWS Backup service
    2) an EFS-to-EFS backup solution.
  8. An EBS volume can be mounted on a single server at a time.
  9. The public IP changes on a stop/start of an instance. To avoid the IP changing, associate an Elastic IP with your instance.
  10. An Elastic IP attached to a running EC2 instance will not incur any charge, but if it is not associated you will be charged.
  11. A newly launched Windows instance can be accessed via the random password which AWS generates upon completion of the instance creation.
  12. Bootstrapping: Allows you to execute a script when an instance is booted. Usually it involves installing certain packages or configuring Chef/Puppet.
    Ideally you would write a cloud-init (user data) script which installs Chef, Puppet or Ansible, and then let those tools handle the provisioning of the instances (the launch template sketch after this list includes a small user data script).
  13. Enhanced Networking: A feature of AWS EC2 which improves network performance. Note: only specific EC2 instance types support it, and it can be enabled in a VPC only.
  14. Termination Protection prevents accidental termination of an EC2 instance.
  15. Placement Group: Lets you place multiple AWS EC2 instances in a group, which provides lower network latency between them.
  16. There are four types of EBS volumes:
    1. Cold HDD
      • Use case: infrequently accessed data.
      • Max IOPS: 250.
    2. Throughput Optimized HDD
      • Use case: data warehouses, log processing.
      • Max IOPS: 500.
    3. General Purpose SSD
      • Use case: OS boot volumes, databases.
      • Max IOPS: 10,000.
    4. Provisioned IOPS SSD
      • Use case: applications which need very fast data access, large databases.
      • Max IOPS: 20,000.
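To make the launch template and bootstrapping points more concrete, here is a minimal boto3 sketch of creating a launch template whose user data installs Ansible at boot. The region, template name, AMI ID, key pair and security group ID are placeholder assumptions, not values from this article.

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Hypothetical bootstrapping (user data) script: installs Ansible at first boot.
user_data = """#!/bin/bash
yum update -y
amazon-linux-extras install -y ansible2
"""

# Create a launch template that an Auto Scaling group could reference.
response = ec2.create_launch_template(
    LaunchTemplateName="web-tier-template",            # placeholder name
    VersionDescription="v1 - base web server",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # placeholder AMI ID
        "InstanceType": "t3.micro",
        "KeyName": "my-key-pair",                      # placeholder key pair
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
print(response["LaunchTemplate"]["LaunchTemplateId"])
```

Remember that the template is not validated for you, so launch a test instance from it before wiring it into Auto Scaling.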



2. AWS Databases:

AWS Aurora Serverless is a new database option for which you don’t need to manage scaling. It currently supports the MySQL 5.6-compatible engine only. Unlike RDS, where you pay for the instance type, with Aurora Serverless you pay only for the storage and capacity used. When there is no database activity it auto-pauses, although this behaviour can be changed.
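As an illustration (not an official recipe), here is a rough boto3 sketch of creating an Aurora Serverless cluster with auto-pause enabled. The cluster identifier, credentials, region and capacity values are all placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # assumed region

# "aurora" is the MySQL 5.6-compatible engine; EngineMode "serverless"
# turns on Aurora Serverless capacity management.
rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",  # placeholder identifier
    Engine="aurora",
    EngineMode="serverless",
    MasterUsername="admin",                       # placeholder credentials
    MasterUserPassword="ChangeMe123!",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 4,
        "AutoPause": True,               # pause when idle (can be disabled)
        "SecondsUntilAutoPause": 300,
    },
)
```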

In my experience, these are the most common points that you should know before taking the exam:

  1. Understand the difference between:
    1. OLTP – Online Transaction Processing (database types: AWS RDS, AWS Aurora)
    2. OLAP – Online Analytical Processing (AWS Redshift)
  2. Understand the difference between:
    • RPO (Recovery Point Objective) – the maximum acceptable amount of data loss, measured in time.
    • RTO (Recovery Time Objective) – the maximum acceptable time before your application must be back online after a failure.
  3. Manual DB snapshots are not deleted automatically, unlike automated DB snapshots!
  4. To create a fault-tolerant and highly available database architecture, implement Multi-AZ. When the master database fails, the standby is promoted to master. There will be a short period of downtime while the failover takes place.
  5. Use the DNS endpoint name in your application to connect to the database. If the database fails, AWS will update the DNS record, so the failover won’t impact your application. (Used in Multi-AZ.)
  6. Use Read Replicas on a read-heavy website to offload read traffic from the master database.
  7. Use the AWS Redshift bulk COPY command to import data; it is much more efficient than raw SQL INSERT queries.
  8. Amazon DynamoDB is the AWS Managed NoSQL database.
  9. Increase the write efficiency of an Amazon DynamoDB table by randomizing the partition (primary) key values, so writes are spread across partitions (see the sketch after this list).
  10. AWS Aurora is the database engine developed by AWS which is faster and cheaper than AWS RDS.
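
To make point 9 concrete, here is a small sketch of “write sharding” with boto3: a random suffix is appended to the partition key so writes are spread across partitions. The table name, key attribute and helper function are hypothetical.

```python
import random
import boto3

dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")  # assumed region
table = dynamodb.Table("orders")  # hypothetical table with partition key "pk"

def put_order(order_id, payload, shards=10):
    # Append a random suffix so a hot key is spread across several partitions.
    shard = random.randint(0, shards - 1)
    table.put_item(
        Item={
            "pk": f"{order_id}#{shard}",  # sharded partition key value
            "order_id": order_id,
            "payload": payload,
        }
    )

put_order("12345", {"item": "book", "qty": 1})
```

The trade-off is that reads for a given key must query all shard suffixes, so this pattern suits write-heavy workloads.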

The exam will ask you which database is best suited for NoSQL – it’s always AWS DynamoDB!

3. AWS Storage, S3, Glacier

This section focuses on the various storage options, the security of stored data, and the implementation/deployment of those options.

Key points:

  • Block storage (EBS) and file storage (EFS) are covered here.
  • AWS S3 is an object store; everything saved in S3 is stored as a data object.
    • Each object consists of metadata (created by Amazon, plus any custom metadata you add) and the data itself.
    • An object’s size can be from 0 bytes to 5 TB.
    • S3 objects are replicated across multiple devices within a region!
    • S3 objects are saved in a container called a “Bucket”; consider the bucket the root folder.
    • To prevent accidental object deletion, enable versioning and MFA Delete.
    • S3 data can be replicated to other regions; this is usually done for compliance. Note: only new objects will be replicated.
    • Bucket names are unique across all AWS accounts!
    • Bucket names must be between 3 and 63 characters and can contain lowercase letters, numbers, hyphens and periods.
    • Consistency model of S3:
      • When you create a new object, you will receive the latest object on a read. (Read-after-write consistency for PUTs of new objects.)
      • When you PUT (overwrite) or DELETE an existing object, AWS provides eventual consistency, so it might take a while for the change to take effect.
    • The main S3 storage classes are (a minimal upload sketch specifying a storage class appears after this list):
      1. S3 Standard: 99.99% availability, ideal for frequently accessed data.
      2. S3 Intelligent-Tiering: Consider it an automated service which monitors the access patterns of objects in S3 and moves them to the appropriate class. For example, when an object has not been accessed for more than 30 days it is migrated to the Infrequent Access tier. S3 Intelligent-Tiering has no impact on performance. This is a great class when you don’t know how long objects will be kept or how often they will be accessed, and you would like to save cost in the long run.
      3. S3 Standard-IA (Infrequent Access): 99.9% availability, ideal for less frequently accessed data, cheaper than S3 Standard, 128 KB minimum billable object size.
      4. S3 One Zone-IA (Infrequent Access): 99.5% availability. It is roughly 20% cheaper than S3 Standard-IA. Data is stored in a single Availability Zone. Ideal for objects that are accessed less frequently but require rapid access when needed. If data resilience is not critical and you would like to reduce cost, this is the ideal class.
      5. S3 Glacier: Used for data archiving, such as long-term compliance. There is no upfront cost; pricing is based on per-GB storage. It provides three methods to retrieve data:
        1) Expedited: retrieve data within 1 to 5 minutes.
        2) Standard: retrieve data within 3 to 5 hours.
        3) Bulk: retrieve data within 5 to 12 hours.
      6. S3 Glacier Deep Archive: Lowest cost of all the S3 storage classes. Data can be restored within 12 hours.
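
As a quick illustration, here is a minimal boto3 sketch of uploading an object directly into a specific storage class and reading it back. The bucket name, key and region are made-up placeholders (remember bucket names are globally unique).

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")  # assumed region

# Upload an object straight into the Standard-IA storage class.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder bucket name
    Key="reports/2019/summary.csv",      # placeholder object key
    Body=b"date,total\n2019-01-01,42\n",
    StorageClass="STANDARD_IA",
)

# Read it back (read-after-write consistency applies to new objects).
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2019/summary.csv")
print(obj["Body"].read().decode())
```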

 

4. Elastic Load Balancer

Key points:

  • There are two types of load balancer:
    1. Internet-facing: has a public IP.
    2. Internal: routes traffic within a VPC.
  • There are three categories of load balancer:
    1. Application Load Balancer
      • Operates at Layer 7.
      • Because it operates at Layer 7, it has access to the request URI; you can set rules to dispatch requests to different target groups based on the URL (see the sketch after this list).
      • Supports WebSockets and Secure WebSockets.
      • Supports SNI.
      • Supports IPv6.
    2. Network Load Balancer (the recommended load balancer for Layer 4 traffic)
      • Operates at Layer 4.
      • Preserves the source IP.
      • Provides reduced latency compared to the other load balancers.
      • Handles millions of requests per second.
    3. Classic Load Balancer
      • Operates at Layer 4.
      • Features Cross-Zone Load Balancing.
  • Idle Connection Timeout: The period of inactivity between the client and the EC2 instance after which the load balancer terminates the connection. By default, it is 60 seconds.
  • Enable Cross Zone Load Balancing, to distribute traffic evenly across the pool of registered AWS instances.
  • Connection Draining stops the load balancer from sending new requests to de-registering or unhealthy instances, while allowing in-flight requests to complete.
  • Proxy Protocol: Must be enabled for your instances to receive the client’s source IP. (Only on the Network Load Balancer and Classic Load Balancer; the Application Load Balancer uses X-Forwarded-For headers instead.)
  • Sticky Sessions: Ensures that the load balancer sends future requests from a user to the same EC2 instance that received the initial request.
  • Auto Scaling Group: Lets you increase or decrease the number of EC2 instances based on metrics such as CPU and RAM, or on a schedule (scheduled scaling is usually used when you predict an increase in load at a certain time, e.g. a marketing campaign).
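
To illustrate the path-based routing mentioned under the Application Load Balancer, here is a hedged boto3 sketch that adds a listener rule forwarding /api/* requests to a separate target group. The listener and target group ARNs are placeholders, not real resources.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed region

# Route any request whose path matches /api/* to the API target group;
# everything else falls through to the listener's default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:eu-west-1:111111111111:listener/app/my-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/api/abc",  # placeholder
        },
    ],
)
```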

5. Designing a secure VPC

In this section, the exam will test your knowledge of the security aspects of an architecture. Building a secure architecture can be a challenging job for an architect, and AWS has many services and features which can help you implement one.

Key points to remember:

  • VPC (Virtual Private Cloud) is a virtual network dedicated to a customer on AWS.
  • A VPC cannot span multiple regions.
  • The IP address range of a VPC or its subnets cannot be altered once the VPC has been created.
  • A newly created VPC has the following resources:
    • Subnets.
    • Route tables.
    • A DHCP options set.
    • A default security group.
    • A network access control list (ACL).
  • A subnet is a segment of a VPC.
  • The smallest subnet can have 16 IP addresses (/28 netmask) and the largest 65,536 (/16 netmask).
  • 5 IP addresses in every subnet are reserved by AWS, so a subnet created with 16 IP addresses will have only 11 addresses available to use (16 - 5 = 11).
  • There are three types of a subnet:
    • Public: Has internet access and traffic is routed to the IGW (Internet Gateway)
    • Private: Is not routed to the IGW.
    • VPN-only: Traffic is directed to a VPG (Virtual Private Gateway).
  • A subnet can only be associated with a single Availability Zone!
  • For a subnet to become public (have internet access), it needs a route sending all non-local traffic to the IGW.
  • VPC Endpoints let you create a private connection from your VPC to AWS services such as S3, without the need for an internet connection, NAT gateway, etc. (see the sketch after this list).
  • A VPC peering connection lets you connect instances in different VPCs together.
    • Connections are initiated through a request/accept process.
    • It is a one-to-one relationship.
    • You cannot peer VPCs from different regions.
    • It does not support transitive routing.
  • There are two types of firewalls:
    1. Security Group.
    2. Network Access Control Lists (ACLs)
  • Instances in a private subnet that need to access the internet would need to connect via a NAT instance or a NAT Gateway.
  • NAT Instances are managed by the customer.
  • NAT Gateway is managed by AWS and scales automatically.
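
Here is a small sketch, under assumed IDs, of creating a gateway VPC endpoint for S3 with boto3, so instances in private subnets can reach S3 without an internet connection or NAT. The VPC ID, route table ID and region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# A gateway endpoint for S3 adds routes to the given route tables,
# keeping S3 traffic on the AWS private network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.eu-west-1.s3",   # region-specific service name
    RouteTableIds=["rtb-0123456789abcdef0"],    # placeholder route table
)
```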

 

Download part 2, just for $5




If you have taken the exam, please share your experience by leaving a comment below.


If you’re interested in learning more about the cloud, I recommend becoming familiar with tools such as Chef and Ansible. In fact, I have written a blog post on how to create a VPC using Ansible.