Prepare for the AWS Certified Solutions Architect - Associate (SAA-C03) exam with this free practice test. Randomly generated and customizable, it lets you choose the number of questions.
Your application requires a storage solution that can provide read access to the data in different geographical locations to serve users with low latency. Which AWS service feature would you use to achieve this requirement?
Amazon S3 Cross-Region Replication (CRR)
Amazon S3 Transfer Acceleration
RDS Multi-AZ deployments
EBS Snapshots to different regions
Amazon S3 Cross-Region Replication (CRR) is the correct choice for this scenario as it allows automatic replication of S3 objects to a destination bucket located in a different AWS Region. This enhances data availability and can help serve users in different geographical locations with lower latency. S3 Transfer Acceleration is used to speed up the transfer of files into S3 but does not address the need for geographical read access. EBS snapshots are point-in-time backups of EBS volumes, but they do not provide real-time access across regions. RDS Multi-AZ deployments provide high availability within a region, but they do not fulfill the low-latency access requirement across geographical locations like CRR does.
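For illustration only, the boto3 sketch below shows roughly how CRR is configured; the bucket names, IAM role ARN, and rule ID are placeholders, and the destination bucket and replication role are assumed to already exist in another Region.

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "example-source-bucket"                      # hypothetical bucket
DEST_BUCKET_ARN = "arn:aws:s3:::example-dest-bucket-eu"      # hypothetical destination in another Region
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/example-s3-crr-role"  # hypothetical role

# CRR requires versioning on both the source and destination buckets.
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every object in the source bucket to the destination bucket in another Region.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-to-eu",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter applies the rule to all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)
```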
Your client is deploying an application on AWS which includes a public-facing load balancer, compute instances for web servers residing in a private subnet, and a separate managed relational database in another private subnet. The application is expected to receive substantial traffic from various global locations. To optimize the architecture for cost while maintaining performance, what should be the Solutions Architect’s primary consideration regarding the placement of the compute instances and the database?
Set up a NAT Gateway for each subnet, ensuring that data paths are routed through the most cost-effective network devices.
Implement an additional load balancer specifically for traffic between the compute instances and the database to optimize the data path.
Configure the compute instances and the database to communicate over a public internet gateway to use internet routing.
Position the database and compute instances in the same Availability Zone to avoid inter-AZ data transfer charges.
By placing the compute instances and the database within the same Availability Zone, data transfer costs between them can be eliminated, since AWS does not charge for traffic between resources in the same AZ that communicate over private IP addresses. Data transfer between Availability Zones, even within the same Region, incurs a per-GB charge in each direction. Sending the traffic through an internet gateway would route it over the public internet, adding data transfer costs and unnecessary exposure. Configuring a NAT Gateway for each subnet or adding a second load balancer between the compute instances and the database would only increase complexity and cost, since neither reduces the data transfer charges between the compute tier and the database.
A company is looking to establish a data lake on AWS in order to store and analyze their disparate datasets which come in at varying frequencies and sizes. They want to ensure that the data remains secure both at rest and in transit, and that the solution can accommodate future increases in data volume. Which of the following options fulfills these requirements?
Deploy an Amazon RDS instance with a read replica to act as a central repository for the data lake.
Store data on Amazon EBS volumes attached to EC2 instances to benefit from volume encryption and direct control.
Use Amazon EFS with lifecycle management policies to store and secure data lake files.
Use Amazon S3 with encryption enabled and IAM policies for secure, scalable object storage.
Amazon S3 is the correct choice because it is designed to store and retrieve any amount of data, at any time, from anywhere on the web, making it an excellent foundation for a secure and scalable data lake. Amazon S3 offers comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. For data at rest, S3 provides encryption options such as SSE-S3, SSE-KMS, and SSE-C, ensuring data is encrypted automatically when saved to the data lake. For data in transit, HTTPS secures the data as it travels over the internet. Its scalability ensures that as the company's data grows, S3 can absorb the increase in storage demand without manual intervention. The other options do not offer the same level of security and scalability, and they are not designed for data lake workloads.
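As a minimal sketch of the idea (the bucket name and policy statement ID are assumptions), default encryption at rest can be enabled on the bucket and encryption in transit enforced with a bucket policy that denies non-HTTPS requests:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake-bucket"  # hypothetical bucket name

# Encrypt every new object at rest by default (SSE-S3 shown; SSE-KMS also works).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Enforce encryption in transit by denying any request that is not sent over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```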
An international financial organization must ensure their highly transactional application's operations can withstand the outage of a data center without any service interruption. Furthermore, the application should incur minimal latency for users in Europe, North America, and Asia. Considering cost-effectiveness and operational complexity, what deployment approach adheres BEST to these requirements?
Establish the application in multiple AWS Regions each located near Europe, North America, and Asia, with an Amazon Route 53 latency-based routing policy.
Implement a global database cluster with cross-region read replicas to ensure the application’s relational data remains available and experiences low latency accesses.
Utilize one AWS Region to host the primary instance and establish cross-region read replicas in regions closest to Europe, North America, and Asia.
Deploy the application into a single AWS Region and distribute it across multiple Availability Zones, leveraging Amazon Route 53 health checks for failover.
Deploying the application in multiple AWS Regions close to each user population provides both high availability and low latency for users on different continents, and a Route 53 latency-based routing policy sends each user to the Region that responds fastest. This approach keeps the application running even if an entire data center, or a whole Region, becomes unavailable. It is also generally more cost-effective and operationally simpler than managing a global database cluster with cross-region read replicas, which adds resiliency but introduces unnecessary complexity and cost when regional deployments near the user populations already suffice. A single-Region deployment spread across multiple Availability Zones can survive the loss of one data center, but it cannot deliver low latency to users in Europe, North America, and Asia, and Route 53 health checks alone do not change that. Hosting the primary instance in one Region with cross-region read replicas likewise leaves write traffic, and the application tier, concentrated in a single Region.
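A rough boto3 sketch of the latency-based routing piece is shown below; the hosted zone ID, record name, and the per-Region load balancer DNS names and hosted zone IDs are all placeholders, and one alias record is upserted per Region.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z123EXAMPLE"  # hypothetical hosted zone
# Hypothetical per-Region load balancer endpoints (DNS name, ALB hosted zone ID).
REGIONAL_ENDPOINTS = {
    "eu-west-1":      ("eu-alb.example.elb.amazonaws.com", "Z1EXAMPLEEU"),
    "us-east-1":      ("us-alb.example.elb.amazonaws.com", "Z2EXAMPLEUS"),
    "ap-southeast-1": ("ap-alb.example.elb.amazonaws.com", "Z3EXAMPLEAP"),
}

changes = []
for region, (dns_name, alb_zone_id) in REGIONAL_ENDPOINTS.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": region,           # one record per Region
            "Region": region,                  # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": dns_name,
                "EvaluateTargetHealth": True,  # shift traffic away from unhealthy Regions
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```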
A company has an application that requires a relational database with a highly available and fault-tolerant configuration, including failover capabilities across two Availability Zones within an AWS Region. Which AWS service should be implemented to meet these requirements?
Amazon Aurora Global Databases
Amazon RDS with Multi-AZ deployments
Amazon S3 with cross-Region replication
Amazon DynamoDB with global tables
Amazon RDS with Multi-AZ deployments is designed to provide high availability and failover support for DB instances within a single AWS Region. It automatically provisions and maintains a synchronous standby replica in a different Availability Zone, and in the event of a planned or unplanned outage of the primary DB instance, RDS performs an automatic failover to the standby, minimizing disruption. The other services do not fit the scenario: DynamoDB is a NoSQL service and does not provide a relational database, Aurora Global Database is aimed at cross-Region disaster recovery and global reads rather than in-Region failover, and Amazon S3 is object storage and does not support relational database features.
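For illustration, a minimal boto3 sketch of provisioning such an instance might look like the following; the identifier, engine, sizing, and credentials are placeholder values, with MultiAZ=True being the setting that matters here.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers and sizing; MultiAZ=True provisions a synchronous
# standby replica in a second Availability Zone with automatic failover.
rds.create_db_instance(
    DBInstanceIdentifier="example-app-db",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="appadmin",
    MasterUserPassword="replace-with-a-secret",  # fetch from Secrets Manager in practice
    MultiAZ=True,
)
```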
A company's architecture requires segregation between its web servers that are accessible from the internet and its backend databases that should not be directly accessible from the internet. As the Solutions Architect, you have to ensure that the databases remain protected while allowing the web servers to communicate with them. Which of the following options achieves this objective while adhering to AWS security best practices?
Deploy both the web servers and databases in the same public subnet, using a network ACL to deny inbound traffic from the internet to the database servers' IP addresses.
Place the databases in a public subnet but do not assign a public IP, and configure a route table that has no routes to and from the internet gateway.
Place the databases in a private subnet and the web servers in a public subnet, and configure the security groups allowing specific traffic from the web servers to the databases.
Utilize a NAT gateway to translate traffic from the internet to the private subnet where the databases reside, ensuring internet traffic can only reach the databases through the NAT gateway.
Implementing public and private subnets in a VPC provides network segmentation and a secure environment for resources. The databases belong in a private subnet with no route to or from the internet gateway, while the web servers sit in a public subnet. Security groups on the database tier can then permit inbound traffic on the database port only from the web servers' security group. This lets the web servers communicate with the databases while keeping the databases unreachable directly from the internet. The other options either leave the databases exposed, by placing them in a public subnet or relying on network ACL rules alone, or misuse components: a NAT gateway provides outbound internet access for instances in private subnets and is not a mechanism for admitting inbound internet traffic.
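A small boto3 sketch of the security-group relationship is shown below; the security group IDs and the MySQL port are assumptions, and the point is that the database tier only admits traffic whose source is the web tier's security group.

```python
import boto3

ec2 = boto3.client("ec2")

WEB_SG_ID = "sg-0aaa1111bbbb22222"  # hypothetical security group of the web tier
DB_SG_ID = "sg-0ccc3333dddd44444"   # hypothetical security group of the database tier

# Allow only the web servers' security group to reach the database port (MySQL shown).
# No rule references 0.0.0.0/0, so the database is never reachable from the internet.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
        }
    ],
)
```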
A software company is aiming to improve the load time of their video streaming service, which caters to a diverse set of customers worldwide. Which service should they implement to enhance content delivery and reduce latency for their international audience?
Create multiple application load balancers in different continents
Implement a global database with read replicas in several geographical locations
Configure a series of VPN connections to facilitate faster video transfer rates
Utilize a network of distributed edge locations to cache and serve content
Amazon CloudFront is the optimal choice since it is a global content delivery network service that speeds up the distribution of static and dynamic web content such as .html, .css, .js, and video files to users. CloudFront delivers content through a worldwide network of data centers called edge locations. When a user requests content that is being served by CloudFront, the request is routed to the edge location that provides the lowest latency. Therefore, the service ensures that end-users experience faster page load times and a boost in overall performance. Other services mentioned, such as global database deployment or multi-region load balancing, are not primarily designed for the purpose of content caching and global distribution which is essential for the described scenario.
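As a rough sketch of the idea (the origin bucket name is a placeholder and the cache policy ID is assumed to be the AWS managed CachingOptimized policy), a CloudFront distribution in front of an S3 origin could be created roughly like this with boto3:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Assumed to be the AWS managed "CachingOptimized" cache policy ID.
CACHING_OPTIMIZED_POLICY_ID = "658327ea-f89d-4fab-a63d-7e88639e58f6"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Edge caching for video assets",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "video-origin",
                    "DomainName": "example-video-bucket.s3.amazonaws.com",  # hypothetical origin
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "video-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_OPTIMIZED_POLICY_ID,
        },
    }
)
```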
What is the primary purpose of placing an Amazon EC2 instance in a private subnet within a VPC?
To enable the EC2 instance to serve web traffic directly to the internet
To automatically assign an Elastic IP address to the EC2 instance
To allow the EC2 instance to function as a NAT gateway for other instances
To prevent the EC2 instance from being directly accessible from the internet
The main reason for placing an Amazon EC2 instance in a private subnet is to ensure the instance is not accessible directly from the internet, providing a higher level of security for sensitive applications or data. Instances in a private subnet can access the internet via a NAT gateway or instance, which is located in a public subnet, without exposing these instances directly to inbound internet traffic.
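For illustration, the boto3 sketch below (route table and NAT gateway IDs are placeholders) adds the typical default route of a private subnet, which points at a NAT gateway rather than an internet gateway:

```python
import boto3

ec2 = boto3.client("ec2")

PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # hypothetical route table of the private subnet
NAT_GATEWAY_ID = "nat-0123456789abcdef0"          # hypothetical NAT gateway in a public subnet

# The private subnet's route table has no route to an internet gateway, so its
# instances cannot be reached from the internet. Outbound-only access goes via NAT.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=NAT_GATEWAY_ID,
)
```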
With AWS Lambda, the number of function instances responding to triggers is limited by the Lambda service's built-in concurrency model, and this limit can be adjusted with a concurrency setting for each function.
This statement is false
This statement is true
AWS Lambda's concurrency model does limit the number of function instances that can run simultaneously. By default, AWS imposes an account-wide concurrency limit, which affects how many instances of all your Lambda functions can run at the same time. However, you can set reserved concurrency limits at the individual function level, which allocates a portion of the account-level limit to a particular function. This question tests understanding of Lambda's concurrency controls, rather than its ability to scale, which is often misunderstood. The statement is true because it is possible to adjust the concurrency limit for each function, which offers control over scaling and ensures that critical functions have the necessary resources.
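A short boto3 sketch of these controls follows; the function name and the value of 50 are arbitrary placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 50 concurrent executions for a hypothetical function. This both
# guarantees the function capacity out of the account-level concurrency pool
# and caps it so it can never scale beyond 50 simultaneous instances.
lambda_client.put_function_concurrency(
    FunctionName="example-order-processor",
    ReservedConcurrentExecutions=50,
)

# Inspect the per-function reservation and the account-wide limit.
print(lambda_client.get_function_concurrency(FunctionName="example-order-processor"))
print(lambda_client.get_account_settings()["AccountLimit"])
```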
Your client hosts their multimedia files on Amazon S3 and observes that these files are frequently accessed for up to 60 days after uploading. After 60 days, the access patterns decline sharply, but the client requires the files to be available for occasional access for at least one year. Which lifecycle policy should be applied to meet the client's need for cost optimization while maintaining file availability?
Transition objects to S3 One Zone-Infrequent Access after 60 days
Transition objects to S3 Standard-Infrequent Access after 60 days and to S3 Glacier after one year
Transition objects directly to S3 Glacier after 60 days
Keep the objects stored in S3 Standard without transitioning them to other storage classes
Setting up a lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 60 days takes advantage of the lower storage cost for infrequently accessed data while still providing millisecond access when needed. After one year, moving the objects to S3 Glacier provides a low-cost archival tier, with retrieval times ranging from minutes to hours, which is acceptable once the client's one-year access window has passed. Moving objects directly to S3 Glacier after 60 days is not appropriate because the files still need occasional access during the first year, and Glacier retrievals are slower and incur retrieval fees. S3 One Zone-IA is less resilient because it stores data in a single Availability Zone only, and keeping everything in S3 Standard forgoes the available cost savings.
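For illustration, such a lifecycle configuration could be applied roughly as follows with boto3 (the bucket name and rule ID are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket holding the multimedia files.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-media",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},  # after the frequent-access window
                    {"Days": 365, "StorageClass": "GLACIER"},     # archive after one year
                ],
            }
        ]
    },
)
```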
A Solutions Architect must create a secure storage solution for confidential client documents at a law firm. The design needs to enforce strict permissions and ensure documents are only retained as long as legally necessary before being removed from storage. Which configuration would best meet the firm's operational and legal requirements?
Utilize a Glacier Vault with Lock policies, scheduling vault lock-in to meet the retention timeline and manually manage deletions.
Implement key management service policies to expire encryption on objects, effectively rendering them inaccessible post-retention.
Deploy an S3 bucket with appropriate Bucket Policies and IAM roles, setting lifecycle policies to remove documents after the predetermined retention duration.
Configure S3 Object Lock to enforce a strict WORM (Write Once Read Many) model until documents are manually purged post-retention.
Using Amazon S3 lifecycle policies is an effective way to automatically manage the retention and deletion of documents in cloud storage: an expiration rule removes documents once they reach the end of the required retention period, and access is controlled with fine-grained permissions through bucket policies and IAM roles. The other options do not fulfill the requirement for automatic deletion after the retention period. S3 Object Lock enforces immutability and still requires manual purging, AWS KMS manages encryption keys rather than object lifecycles (and expiring a key renders data unreadable without actually removing it from storage), and a Glacier Vault Lock policy enforces archival retention controls rather than automated lifecycle-based deletion.
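As a minimal sketch (the bucket name, prefix, and seven-year retention period are assumptions), an expiration rule of this kind could be set with boto3 like so:

```python
import boto3

s3 = boto3.client("s3")

RETENTION_DAYS = 7 * 365  # hypothetical legally required retention period

# Documents under the "client-documents/" prefix are deleted automatically once
# the retention period has elapsed; access control is handled separately via
# bucket policies and IAM roles.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-legal-documents",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": "client-documents/"},
                "Expiration": {"Days": RETENTION_DAYS},
            }
        ]
    },
)
```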
Amazon Aurora is more cost-effective than Amazon Redshift for large-scale data warehousing and complex querying of columnar data
True
False
This statement is false because Amazon Aurora is optimized for online transaction processing (OLTP), while Amazon Redshift is designed specifically for data warehousing and analytics, which often involves complex querying of large volumes of columnar data. Amazon Redshift’s columnar storage and massively parallel processing (MPP) architecture make it better suited and more cost-effective for data warehousing tasks.
An organization is deploying a web application on a scalable cloud infrastructure. They need to ensure all communications between the web browsers of their clients and the servers are encrypted. Which approach would be the MOST appropriate to guarantee encryption of the data being transmitted over the internet?
Incorporate a key management service to generate encryption keys used to manually encrypt data before transmission over the internet.
Provision, manage, and automate the renewal of SSL/TLS certificates for deploying on load balancers and content delivery networks.
Implement custom encryption in the application code to handle encryption before sending data over the network.
Utilize automated server-side encryption in the storage layer to secure data before it leaves the cloud environment.
Utilizing a service that provisions, manages, and automatically renews public SSL/TLS certificates, such as AWS Certificate Manager, and deploying those certificates on the load balancers and content delivery network enables HTTPS, the standard protocol for encrypted communication over the internet. TLS encrypts the data in transit between the clients' browsers and the servers, which is exactly what the requirement calls for. The options involving a key management service, custom encryption in application code, or server-side encryption in the storage layer are concerned with protecting data at rest or with encryption tooling inside the application; they do not, on their own, establish a secure browser-to-server channel.
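A rough boto3 sketch of this flow is shown below; the domain name and the load balancer and target group ARNs are placeholders, and the certificate must finish DNS validation before it can be attached to the listener.

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a public certificate for a hypothetical domain; ACM renews it
# automatically once the DNS validation record is in place.
cert = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)

# After validation completes, attach the certificate to an HTTPS listener on the
# load balancer so browser-to-server traffic is encrypted with TLS.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/example/abc123",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/def456",  # placeholder
    }],
)
```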
A corporation is required to automate the identification and categorization of stored content to enforce varying preservation requirements. Which service should be utilized to facilitate the discovery and categorization process, enabling the enforcement of corresponding preservation policies?
A managed service for cryptographic keys
Amazon Macie
A cloud storage service's lifecycle management feature
A service for managing identities and permissions
The service best suited for automating the discovery and classification of sensitive content, allowing the enforcement of retention policies, is Amazon Macie. It leverages machine learning and pattern matching to assist in accurately and efficiently identifying and classifying different types of content, making it ideal for this requirement. While other services mentioned may be related to data management or protection, none offer the same specialization in automated data discovery and classification as Macie.
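For illustration only, a one-time Macie classification job over a bucket might be created roughly as follows with boto3 (the account ID, bucket, and job name are placeholders, and Macie is assumed to already be enabled for the account):

```python
import uuid
import boto3

macie = boto3.client("macie2")

# Run a one-time classification job against a hypothetical bucket of stored
# content; Macie reports its sensitive-data findings, which can then drive
# the corresponding preservation policies.
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="classify-stored-content",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["example-content-bucket"]}
        ]
    },
)
```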
A financial services company is leveraging cloud storage services to retain transaction records. These records contain privileged client information that needs to be encrypted when not in use. The company's security team must have the capability to manage encryption keys centrally, including the facilitation of periodic, automated key changes. Which configuration should be implemented to meet these encryption management requirements?
Implement managed service keys with a policy for key rotation every three years.
Create customer controlled keys with enabled automated rotation on an annual schedule.
Create customer controlled keys and use a scheduled script to change the key material manually.
Rely on developers to generate and replace keys on a regular basis through a manual update process.
By choosing customer managed keys with automatic rotation enabled, the security team retains central control over the encryption keys and meets the need for regular, automated updates to the key material; AWS KMS rotates customer managed keys automatically on an annual schedule once rotation is enabled. Keys that the cloud service manages on the customer's behalf rotate on a schedule the customer cannot configure, which does not satisfy the requirement for centrally managed rotation of keys protecting sensitive financial data. Using a scheduled script to change key material manually, or relying on developers to replace keys by hand, departs from the requirement for an automated process and introduces the potential for human error and inconsistent security practices.
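A brief boto3 sketch of this setup follows; the key description is a placeholder, and enable_key_rotation turns on automatic rotation of the customer managed key's material.

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key for encrypting the transaction records at rest.
key = kms.create_key(
    Description="Customer managed key for transaction record encryption",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

# Turn on automatic rotation of the key material.
kms.enable_key_rotation(KeyId=key_id)

# The security team can verify rotation is active at any time.
print(kms.get_key_rotation_status(KeyId=key_id))
```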