
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test

Prepare for the AWS Certified Solutions Architect Associate SAA-C03 exam with this free practice test. Questions are drawn at random, and the test is customizable: you can choose how many questions to answer.

  • Questions: 15
  • Time: 15 minutes (60 seconds per question)
  • Included Objectives:
    • Design Secure Architectures
    • Design Resilient Architectures
    • Design High-Performing Architectures
    • Design Cost-Optimized Architectures
Question 1 of 15

Your application requires a storage solution that provides low-latency read access to data for users in different geographical locations. Which AWS service feature would you use to meet this requirement?

  • Amazon S3 Cross-Region Replication (CRR)

  • Amazon S3 Transfer Acceleration

  • Amazon RDS Multi-AZ deployments

  • Amazon EBS snapshots copied to different Regions
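
For reference, a minimal boto3 sketch of enabling S3 Cross-Region Replication on a source bucket; the bucket names, account ID, and replication role ARN are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both the source and destination buckets
# before replication can be configured.
s3.put_bucket_versioning(
    Bucket="example-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate new objects to a bucket in another Region, using an IAM role
# that lets S3 read the source and write to the destination.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
            }
        ],
    },
)
```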

Question 2 of 15

Your client is deploying an application on AWS that includes a public-facing load balancer, web-server compute instances in a private subnet, and a managed relational database in a separate private subnet. The application is expected to receive substantial traffic from locations around the globe. To optimize the architecture for cost while maintaining performance, what should be the Solutions Architect’s primary consideration regarding the placement of the compute instances and the database?

  • Set up a NAT Gateway for each subnet, ensuring that data paths are routed through the most cost-effective network devices.

  • Implement an additional load balancer specifically for traffic between the compute instances and the database to optimize the data path.

  • Configure the compute instances and the database to communicate over a public internet gateway to use internet routing.

  • Position the database and compute instances in the same Availability Zone to avoid inter-AZ data transfer charges.

Question 3 of 15

A company is looking to establish a data lake on AWS to store and analyze disparate datasets that arrive at varying frequencies and sizes. The company wants the data to remain secure both at rest and in transit, and the solution to accommodate future growth in data volume. Which of the following options fulfills these requirements?

  • Deploy an Amazon RDS instance with a read replica to act as a central repository for the data lake.

  • Store data on Amazon EBS volumes attached to EC2 instances to benefit from volume encryption and direct control.

  • Use Amazon EFS with lifecycle management policies to store and secure data lake files.

  • Use Amazon S3 with encryption enabled and IAM policies for secure, scalable object storage.
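
As background for the object-storage option, a minimal boto3 sketch of turning on default encryption at rest for a bucket; the bucket name and KMS key alias are hypothetical. Encryption in transit is typically enforced separately, for example with a bucket policy that denies requests made without TLS.

```python
import boto3

s3 = boto3.client("s3")

# Default server-side encryption: every object written to the bucket is
# encrypted at rest, whether or not the caller asks for it.
s3.put_bucket_encryption(
    Bucket="example-data-lake",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-lake-key",
                }
            }
        ]
    },
)
```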

Question 4 of 15

An international financial organization must ensure that its highly transactional application can withstand the outage of a data center without any service interruption. The application should also incur minimal latency for users in Europe, North America, and Asia. Considering cost-effectiveness and operational complexity, which deployment approach BEST meets these requirements?

  • Establish the application in multiple AWS Regions each located near Europe, North America, and Asia, with an Amazon Route 53 latency-based routing policy.

  • Implement a global database cluster with cross-region read replicas so that the application’s relational data remains available with low-latency access.

  • Utilize one AWS Region to host the primary instance and establish cross-region read replicas in regions closest to Europe, North America, and Asia.

  • Deploy the application into a single AWS Region and distribute it across multiple Availability Zones, leveraging Amazon Route 53 health checks for failover.
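
For background, latency-based routing is expressed in the DNS records themselves. The boto3 sketch below creates one latency record per Region; the hosted zone ID, domain name, and IP addresses are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# One latency record per Region; Route 53 answers each query with the
# record whose Region has the lowest measured latency to the resolver.
for region, ip in [("eu-west-1", "198.51.100.10"),
                   ("us-east-1", "198.51.100.20"),
                   ("ap-southeast-1", "198.51.100.30")]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "A",
                        "SetIdentifier": region,
                        "Region": region,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }
            ]
        },
    )
```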

Question 5 of 15

A company has an application that requires a relational database with a highly available and fault-tolerant configuration, including failover capabilities across two Availability Zones within an AWS Region. Which AWS service should be implemented to meet these requirements?

  • Amazon Aurora Global Databases

  • Amazon RDS with Multi-AZ deployments

  • Amazon S3 with cross-Region replication

  • Amazon DynamoDB with global tables
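
As a point of reference, Multi-AZ is a single flag at instance-creation time. A minimal boto3 sketch; the identifier, instance class, and engine are hypothetical choices.

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby replica in a second
# Availability Zone, and RDS fails over to it automatically.
rds.create_db_instance(
    DBInstanceIdentifier="example-db",
    DBInstanceClass="db.m6g.large",
    Engine="postgres",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # let RDS keep the password in Secrets Manager
    AllocatedStorage=100,
    MultiAZ=True,
)
```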

Question 6 of 15

A company's architecture requires segregation between its internet-facing web servers and its backend databases, which must not be directly accessible from the internet. As the Solutions Architect, you must ensure that the databases remain protected while still allowing the web servers to communicate with them. Which of the following options achieves this objective while adhering to AWS security best practices?

  • Deploy both the web servers and databases in the same public subnet, using a network ACL to deny inbound traffic from the internet to the database servers' IP addresses.

  • Place the databases in a public subnet without assigning public IP addresses, and configure a route table that has no route to the internet gateway.

  • Place the databases in a private subnet and the web servers in a public subnet, and configure the security groups allowing specific traffic from the web servers to the databases.

  • Utilize a NAT gateway to translate traffic from the internet to the private subnet where the databases reside, ensuring internet traffic can only reach the databases through the NAT gateway.
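
For background, security-group references are the idiomatic way to build this tiering: the database tier admits traffic from a security group rather than from IP ranges. A minimal boto3 sketch, with hypothetical group IDs and PostgreSQL assumed as the engine:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow database traffic only from members of the web tier's security
# group; with no CIDR ranges in the rule, nothing on the internet matches.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dbexample0000001",  # database tier security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,  # PostgreSQL
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0webexample000001"}  # web tier security group
            ],
        }
    ],
)
```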

Question 7 of 15

A software company is aiming to improve the load time of their video streaming service, which caters to a diverse set of customers worldwide. Which service should they implement to enhance content delivery and reduce latency for their international audience?

  • Create multiple application load balancers in different continents

  • Implement a global database with read replicas in several geographical locations

  • Configure a series of VPN connections to facilitate faster video transfer rates

  • Utilize a network of distributed edge locations to cache and serve content

Question 8 of 15

What is the primary purpose of placing an Amazon EC2 instance in a private subnet within a VPC?

  • To enable the EC2 instance to serve web traffic directly to the internet

  • To automatically assign an Elastic IP address to the EC2 instance

  • To allow the EC2 instance to function as a NAT gateway for other instances

  • To prevent the EC2 instance from being directly accessible from the internet
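
As background, "private" is not a subnet attribute but a consequence of routing. The boto3 sketch below associates a subnet with a route table whose only default route points at a NAT gateway, so instances can reach out but cannot be reached from the internet; the VPC, subnet, and NAT gateway IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# No route to an internet gateway makes the subnet private; the default
# route through a NAT gateway still allows outbound-only internet access.
rt = ec2.create_route_table(VpcId="vpc-0example0000001")["RouteTable"]
ec2.associate_route_table(
    RouteTableId=rt["RouteTableId"],
    SubnetId="subnet-0private0example1",
)
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0example0000001",
)
```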

Question 9 of 15

Using AWS Lambda, the number of function instances responding to triggers is limited by the Lambda service's built-in concurrency model, which can be adjusted by setting a reserved concurrency limit for each function.

  • This statement is false

  • This statement is true
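
For reference, the per-function limit the statement refers to is set with a single API call. A minimal boto3 sketch, with a hypothetical function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 50 concurrent executions; invocations beyond the
# reserved limit are throttled rather than scaled out.
lambda_client.put_function_concurrency(
    FunctionName="example-function",
    ReservedConcurrentExecutions=50,
)
```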

Question 10 of 15

Your client hosts multimedia files on Amazon S3 and observes that the files are frequently accessed for up to 60 days after upload. After 60 days, access declines sharply, but the client requires the files to remain available for occasional access for at least one year. Which lifecycle policy should be applied to meet the client's need for cost optimization while maintaining file availability?

  • Transition objects to S3 One Zone-Infrequent Access after 60 days

  • Transition objects to S3 Standard-Infrequent Access after 60 days and to S3 Glacier after one year

  • Transition objects directly to S3 Glacier after 60 days

  • Keep the objects stored in S3 Standard without transitioning them to other storage classes
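
For background, S3 lifecycle transitions are expressed as rules on the bucket. The boto3 sketch below encodes a 60-day and one-year timeline like the one described above; the bucket name and rule ID are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Move objects to Standard-IA once the 60-day hot period ends, then to
# Glacier after a year of infrequent access.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cool-down-media",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```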

Question 11 of 15

A Solutions Architect must create a secure storage solution for confidential client documents at a law firm. The design needs to enforce strict permissions and ensure documents are only retained as long as legally necessary before being removed from storage. Which configuration would best meet the firm's operational and legal requirements?

  • Utilize a Glacier vault with a Vault Lock policy, scheduling the lock to meet the retention timeline and manually managing deletions.

  • Implement key management service policies to expire encryption on objects, effectively rendering them inaccessible post-retention.

  • Deploy an S3 bucket with appropriate Bucket Policies and IAM roles, setting lifecycle policies to remove documents after the predetermined retention duration.

  • Configure S3 Object Lock to enforce a strict WORM (Write Once Read Many) model until documents are manually purged post-retention.
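
As a complement to the lifecycle sketch after the previous question, expiration rules handle the automatic-deletion side of retention. A minimal boto3 sketch with a hypothetical bucket, prefix, and seven-year retention period:

```python
import boto3

s3 = boto3.client("s3")

# Delete documents automatically once the retention period ends; access
# control itself is handled by bucket policies and IAM roles.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-legal-documents",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": "client-docs/"},
                "Expiration": {"Days": 2555},  # ~7 years; a hypothetical retention period
            }
        ]
    },
)
```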

Question 12 of 15

Amazon Aurora is more cost-effective than Amazon Redshift for large-scale data warehousing and complex querying of columnar data.

  • True

  • False

Question 13 of 15

An organization is deploying a web application on a scalable cloud infrastructure. They need to ensure all communications between the web browsers of their clients and the servers are encrypted. Which approach would be the MOST appropriate to guarantee encryption of the data being transmitted over the internet?

  • Incorporate a key management service to generate encryption keys used to manually encrypt data before transmission over the internet.

  • Provision, manage, and automate the renewal of SSL/TLS certificates for deployment on load balancers and content delivery networks.

  • Implement custom encryption in the application code to handle encryption before sending data over the network.

  • Utilize automated server-side encryption in the storage layer to secure data before it leaves the cloud environment.
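
For reference, AWS Certificate Manager is the managed way to provision and automatically renew public TLS certificates for load balancers and CDNs. A minimal boto3 sketch, with hypothetical domain names:

```python
import boto3

acm = boto3.client("acm")

# Request a public TLS certificate; with DNS validation in a Route 53
# hosted zone, ACM renews it automatically. The returned ARN can be
# attached to a load balancer listener or a CloudFront distribution.
response = acm.request_certificate(
    DomainName="app.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["www.app.example.com"],
)
print(response["CertificateArn"])
```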

Question 14 of 15

A corporation needs to automate the identification and categorization of stored content in order to enforce varying preservation requirements. Which service should be utilized to facilitate the discovery and categorization process, enabling enforcement of the corresponding preservation policies?

  • A managed service for cryptographic keys

  • Amazon Macie

  • A cloud storage service's lifecycle management feature

  • A service for managing identities and permissions

Question 15 of 15

A financial services company uses cloud storage to retain transaction records. These records contain privileged client information that must be encrypted at rest. The company's security team must be able to manage encryption keys centrally, including periodic, automated key rotation. Which configuration should be implemented to meet these encryption management requirements?

  • Implement managed service keys with a policy for key rotation every three years.

  • Create customer managed keys with automatic rotation enabled on an annual schedule.

  • Create customer managed keys and use a scheduled script to rotate the key material manually.

  • Rely on developers to generate and replace keys on a regular basis through a manual update process.
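
As background, customer managed KMS keys give a security team central control over key policies, and automatic annual rotation is a single API call. A minimal boto3 sketch:

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key, then turn on automatic annual rotation
# of its key material.
key = kms.create_key(Description="Transaction-record encryption key")
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])
```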