
AWS Certified Solutions Architect Associate Practice Test (SAA-C03)


AWS Certified Solutions Architect Associate SAA-C03 Information

AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on the design of cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. It can enhance a certified individual's career profile and earning potential, and increase their credibility and confidence in stakeholder and customer interactions.

The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.

The exam also validates a candidate’s ability to complete the following tasks:

  • Design solutions that incorporate AWS services to meet current business requirements and future projected needs
  • Design architectures that are secure, resilient, high-performing, and cost optimized
  • Review existing solutions and determine improvements

Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test


  • Questions: 15
  • Time: Unlimited
  • Included Topics:
    Design Secure Architectures
    Design Resilient Architectures
    Design High-Performing Architectures
    Design Cost-Optimized Architectures
Question 1 of 15

A data analytics company processes daily logs that amount to 400 GB per day. The logs need to be stored for 30 days for compliance purposes and must be readily accessible for querying and analysis. The processing and analysis are performed on Amazon EC2 instances. The company seeks a cost-effective storage solution that provides quick access and minimal management overhead. As a Solutions Architect, which storage solution would you recommend for storing the logs?

  • Use an Amazon EFS file system to store the logs.

  • Attach multiple EBS General Purpose SSD (gp3) volumes to the EC2 instances for log storage.

  • Set up an EBS Provisioned IOPS SSD (io2) volume for each EC2 instance to store the logs.

  • Store the logs in the Amazon S3 Standard storage class with a lifecycle policy that transitions the storage class or deletes the logs after 30 days.
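For context on the lifecycle-policy option above, here is a minimal sketch of an S3 lifecycle configuration that deletes objects 30 days after creation. The bucket prefix and rule ID are hypothetical placeholders, not part of the question:

```python
# Minimal S3 lifecycle configuration that expires (deletes) log objects
# 30 days after creation. The prefix and rule ID are illustrative only.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-daily-logs-after-30-days",
            "Filter": {"Prefix": "daily-logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }
    ]
}

# With boto3, this dict would be passed as the LifecycleConfiguration
# argument of put_bucket_lifecycle_configuration().
```

Because S3 applies the rule automatically, no scheduled cleanup jobs run on the EC2 instances, which is what "minimal management overhead" refers to.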

Question 2 of 15

A company is expanding its online retail application to accommodate an increasing number of users from different geographical locations. The application is hosted on an Amazon EC2 instance and uses an Application Load Balancer (ALB) to distribute incoming traffic. Which of the following network configurations would best scale the architecture for the anticipated global growth?

  • Increase the Amazon EC2 instance size to improve network and application performance for global users.

  • Configure Amazon Route 53 geoproximity routing to direct traffic based on the geographic location of the users.

  • Set up an Amazon CloudFront distribution to cache content at edge locations closer to the users.

  • Establish an AWS Direct Connect connection to improve the application's global network performance.

Question 3 of 15

A company needs to migrate a substantial volume of data to the cloud, but faces bandwidth limitations that prohibit efficient online transfer. Which service would best facilitate this large-scale, offline data migration while maintaining high security standards?

  • Amazon Kinesis Data Firehose

  • AWS Snowball

  • AWS DataSync

  • AWS Direct Connect

Question 4 of 15

An organization aims to maintain operational continuity of its critical workload even if an entire data center in its region encounters an outage. Its solution includes computing resources distributed across diverse physical locations within the same geographical area. To enhance the system's robustness, which strategy should be implemented for the data layer?

  • Introduce a Load Balancer to distribute traffic among database instances to minimize the impact of a location outage.

  • Configure an active-passive setup using a secondary region and enact health checks to direct traffic upon failure.

  • Implement a Multi-AZ configuration for the relational database to promote automatic failover and data redundancy.

  • Install a globally distributed database with read replicas in various regions for geographical data distribution.

Question 5 of 15

A company generates large amounts of data from their IoT devices and stores this data on Amazon S3 for real-time analysis. After 30 days, the data is rarely accessed but must be retained for one year for compliance reasons. What is the most cost-effective strategy to manage the lifecycle of this data?

  • Keep the data in S3 Standard for the entire year as it may be needed for unplanned analysis.

  • Transfer the data to EBS Cold HDD volumes after 30 days and delete it after one year.

  • Move the data to S3 Standard-Infrequent Access after 30 days and delete it after one year.

  • Move the data to S3 Glacier after 30 days and delete it after one year.

Question 6 of 15

A company requires a method to routinely create backups for their virtual servers hosted on the cloud, including the storage volumes attached to these servers. They seek an automated solution that is capable of not only protecting their resources but also managing the backup lifecycle with scalability and security in mind. Which option should they select to best fulfill their needs?

  • Deploy an on-premises Storage Gateway to synchronize and back up the server data.

  • Activate versioning on an object storage service for the servers' data archives.

  • Schedule a routine of manual snapshots for the server storage volumes.

  • Employ AWS Backup for centralized and automated backup across different services.

Question 7 of 15

A company utilizes a centralized system for user credentials and seeks to grant employees the ability to utilize these same credentials to perform job-specific tasks within their cloud environment. What is the recommended solution to link the company's current system with the cloud services, allowing role assignment based on existing job functions?

  • Deploy a connector that interfaces with the existing credentials directory and assign cloud user profiles to authenticate against it.

  • Amend the trust configurations in the centralized directory to directly accept authentication requests from the cloud directory service.

  • Enable a connectivity channel such as a VPN between the on-premises network and cloud network, controlling access through network routing and policies.

  • Implement a service like AWS IAM Identity Center to establish a trust relationship between the centralized credentials system and the cloud provider, permitting role mapping accordingly.

  • Construct individual user profiles in the cloud directory service and execute a periodic sync for credentials from the existing on-premises system.

Question 8 of 15

A company needs to store data that is infrequently accessed but requires millisecond retrieval when needed. The data must be stored cost-effectively. Which Amazon S3 storage class should the company use?

  • Amazon S3 Standard-Infrequent Access

  • Amazon S3 Glacier Deep Archive

  • Amazon S3 Standard

  • Amazon S3 Glacier Instant Retrieval
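The storage classes above differ mainly in retrieval speed and storage price. The sketch below summarizes their published retrieval characteristics; the cost column is a rough relative ranking, not actual pricing, which varies by region:

```python
# Rough retrieval characteristics of the S3 storage classes listed above.
# "relative_storage_cost" ranks the classes against each other only;
# consult the S3 pricing page for exact, region-specific figures.
storage_classes = {
    "S3 Standard":                  {"retrieval": "milliseconds", "relative_storage_cost": 4},
    "S3 Standard-IA":               {"retrieval": "milliseconds", "relative_storage_cost": 3},
    "S3 Glacier Instant Retrieval": {"retrieval": "milliseconds", "relative_storage_cost": 2},
    "S3 Glacier Deep Archive":      {"retrieval": "hours",        "relative_storage_cost": 1},
}

# Classes that can serve "infrequently accessed but millisecond retrieval":
millisecond_classes = [name for name, c in storage_classes.items()
                       if c["retrieval"] == "milliseconds"]
```

Note that the Glacier classes also carry minimum storage-duration charges, which matters when data is deleted or transitioned early.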

Question 9 of 15

A development team is seeking a solution to deploy a fleet of containers that will allow them to automatically adjust to traffic fluctuations without manually scaling or managing the host infrastructure. This solution should also facilitate the highest level of resource abstraction while ensuring the containers are orchestrated effectively. Which service should the team implement for optimal elasticity and ease of management?

  • Serverless architecture with provisioned concurrency for functions

  • Elastic Compute with Elastic Load Balancing

  • Managed service for big data processing on virtual server clusters

  • Container orchestration service with cluster management on virtual servers

  • Container service with on-demand, serverless compute engine

  • Job scheduling and execution service for batch processing

Question 10 of 15

An e-commerce company is expecting a significant spike in users accessing product images during an upcoming promotional event. They need a storage service that can serve these images with low latency at scale to enhance customer experience. Which of the following AWS services is the BEST choice to meet these requirements?

  • Amazon EFS with provisioned throughput configured to serve files directly to users

  • Amazon Elastic File System (EFS) mounted on high-memory EC2 instances

  • Amazon Elastic Block Store (EBS) with Provisioned IOPS SSD (io1) volumes attached to EC2 instances serving the images

  • Amazon Simple Storage Service (S3) with an Amazon CloudFront distribution
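To make the S3-plus-CloudFront option concrete, here is a simplified sketch of a CloudFront distribution configuration fronting an S3 origin. All names (bucket, origin ID, caller reference) are hypothetical, and a real create_distribution call requires additional fields omitted here:

```python
# Simplified CloudFront distribution config fronting an S3 origin.
# Names are illustrative placeholders; a real API call needs more fields.
distribution_config = {
    "CallerReference": "promo-event-images",
    "Comment": "Serve product images from edge locations",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "product-images-s3",
                "DomainName": "product-images.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "product-images-s3",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Long minimum TTL: product images rarely change mid-event,
        # so edge caches absorb most of the promotional traffic spike.
        "MinTTL": 86400,
    },
}
```

The design point is that the traffic spike is absorbed at edge locations, so the origin sees only cache misses rather than the full request volume.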

Question 11 of 15

An application deployed on a cloud virtual server requires interaction with object storage and a NoSQL database service. What is the recommended method to manage the application's service-specific permissions in accordance with best security practices that enforce minimal access rights?

  • Embed long-term security credentials in the source code of the application to authorize service interactions.

  • Create a role with the exact permissions required by the application and attach it to the virtual server.

  • Configure an account with administrative privileges and programmatically distribute its access keys across all server instances.

  • Utilize the cloud platform's root account to ensure uninterrupted access to necessary services.
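The least-privilege role option can be illustrated with an IAM policy sketch. The bucket name, table ARN, and account ID below are hypothetical; the policy would be attached to an IAM role that is associated with the EC2 instance through an instance profile, so no long-term keys appear in the application:

```python
# Least-privilege IAM policy sketch for an application that needs one
# S3 bucket and one DynamoDB table. ARNs and the account ID are
# hypothetical placeholders. Attached to a role (not a user), the role
# is associated with the instance via an instance profile.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::app-data-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-table",
        },
    ],
}
```

Each statement grants only the specific actions on the specific resources the application uses, which is the "minimal access rights" principle the question references.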

Question 12 of 15

A financial institution requires an archiving solution for critical data stored on local file servers. The data must be accessible with minimal delay when requested by on-premises users, yet older, less frequently accessed files should be economically archived in the cloud. However, after a specific period of inactivity, these older files should be transitioned to a less expensive storage class. Which solution should the architect recommend to meet these needs in a cost-efficient manner?

  • An online data transfer service

  • A fully managed file storage service for Windows files

  • A managed file transfer service

  • File gateway mode of a certain hybrid storage service

Question 13 of 15

An online media platform experiences slow content delivery when accessed by users located on a different continent from where the platform's servers are hosted. How can a Solutions Architect optimize content delivery for these international users?

  • Use Amazon ElastiCache to cache data within the application's current region to enhance content retrieval speeds.

  • Upscale the compute capacity of the origin servers to improve response times for global requests.

  • Implement Amazon CloudFront to cache and deliver content from edge locations closest to the international audience.

  • Deploy additional load balancers in strategic locations to better handle incoming traffic from overseas users.

Question 14 of 15

A company has an application that generates 50 GB of log files each month, which are analyzed quarterly. The current policy is to keep the logs available in hot storage for the first month after generation for immediate analysis if needed, and then move them to a cheaper storage class for the remainder of the quarter. After the analysis, the logs are archived. Which storage strategy is most cost-effective while fulfilling the company's access pattern requirements?

  • Store the logs on Amazon Elastic Block Store (EBS) volumes for the first month, then shift the data to Amazon EFS until the quarterly analysis is done, later archiving to Amazon S3 Standard.

  • Store the logs in Amazon S3 Standard initially, transition them to S3 Standard-Infrequent Access after one month, and move them to S3 Glacier or S3 Glacier Deep Archive for archival after the quarterly analysis.

  • Keep the logs in Amazon S3 Glacier during the initial month for cost savings, and then move to Amazon S3 for quick analysis, archiving them back to S3 Glacier after analysis.

  • Use Amazon S3 One Zone-Infrequent Access for the entire duration until quarterly analysis, then move the logs directly to S3 Glacier Deep Archive.
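The hot-then-infrequent-then-archive pattern described in this question maps onto a multi-stage S3 lifecycle rule. A sketch, with an illustrative prefix (objects stay in S3 Standard until the first transition fires):

```python
# Multi-stage lifecycle sketch matching the access pattern above:
# S3 Standard for the first month, Standard-IA until the quarterly
# analysis, then Glacier Deep Archive. The prefix is illustrative.
lifecycle_rules = {
    "Rules": [
        {
            "ID": "quarterly-log-tiering",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}
```

The 30-day first transition also satisfies Standard-IA's minimum storage-duration charge, so no early-transition fees apply.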

Question 15 of 15

Which service feature should you use to manage a large number of concurrent database connections that often experience unpredictable spikes in connection requests, while ensuring minimal changes to the existing applications?

  • Amazon ElastiCache

  • AWS Direct Connect

  • Elastic Load Balancing

  • Amazon RDS Proxy
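The phrase "minimal changes to the existing applications" in this question refers to swapping only the database endpoint while everything else in the connection settings stays the same. A sketch with hypothetical hostnames:

```python
# Illustrates why a connection proxy needs only "minimal changes":
# the application swaps the endpoint hostname, while the port, database
# name, user, and driver code are unchanged. Hostnames are hypothetical.
direct_connection = {
    "host": "mydb.abc123.us-east-1.rds.amazonaws.com",
    "port": 5432,
    "dbname": "orders",
    "user": "app",
}

# Pointing the same application at a proxy endpoint instead:
proxied_connection = {**direct_connection,
                      "host": "mydb-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"}

# Only the host differs between the two configurations.
changed_keys = [k for k in direct_connection
                if direct_connection[k] != proxied_connection[k]]
```

The proxy then pools and multiplexes the many application connections into a smaller number of database connections, smoothing out connection spikes.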