AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)
Use the form below to configure your AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified CloudOps Engineer Associate SOA-C03 Information
The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.
The exam covers the full lifecycle of cloud operations through five key domains: Monitoring and Performance, Reliability and Business Continuity, Deployment and Automation, Security and Compliance, and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You’ll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.
Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

Free AWS Certified CloudOps Engineer Associate SOA-C03 Practice Test
- 20 Questions
- Unlimited
- Monitoring, Logging, Analysis, Remediation, and Performance Optimization
- Reliability and Business Continuity
- Deployment, Provisioning, and Automation
- Security and Compliance
- Networking and Content Delivery
A company federates employee access to multiple AWS accounts by using a SAML 2.0 identity provider (IdP). Each account still has the default 1-hour maximum session duration for the IAM roles that employees assume through SAML. Compliance now requires 2-hour sessions, so the IdP was updated to include a DurationSeconds=7200 attribute. Since the change, users receive an AccessDenied error when attempting to sign in. Which action will allow successful federation while meeting the 2-hour session requirement?
Keep the DurationSeconds attribute but raise every role's Maximum session duration to 12 hours.
Rename the attribute in the SAML assertion to SessionDuration and increase each role's Maximum session duration setting to 7,200 seconds.
Add the parameter --duration-seconds 7200 to all AWS CLI profiles used by the employees.
Remove any duration attribute so the default 1-hour session length is applied automatically.
Answer Description
For SAML federation, the IdP must supply the attribute named SessionDuration, not DurationSeconds. The value in that attribute can be any duration up to, but not exceeding, the role's Maximum session duration setting. Because the roles still use the default 1-hour maximum, the 7,200-second request is rejected. Updating each role to allow at least a 2-hour session (7,200 seconds) and sending that value in the SessionDuration attribute lets users obtain the desired 2-hour temporary credentials without violating the compliance limit. Other options either keep the incorrect attribute name, rely on the 1-hour default, or attempt to set the duration from the client side, none of which resolves the authentication failure.
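For reference, a minimal boto3 sketch of the role-side change (the role name is a placeholder); the IdP must separately be updated to send the value in a SessionDuration attribute:

```python
import boto3

iam = boto3.client("iam")

# Raise the role's maximum session duration to 2 hours (7,200 seconds) so a
# SAML SessionDuration value of 7200 falls within the allowed limit.
iam.update_role(
    RoleName="FederatedEmployeeRole",  # placeholder role name
    MaxSessionDuration=7200,
)
```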
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a SAML 2.0 identity provider (IdP)?
What is the purpose of the SessionDuration attribute in SAML assertions?
How does Maximum session duration in an IAM role affect federated access?
An operations team uses AWS CDK to define infrastructure. A new stack creates an IAM policy that grants s3:PutObject to * and opens TCP 22 from 0.0.0.0/0 on a security group. Company policy requires that any CI/CD deployment containing permission-broadening or other security-sensitive changes must halt automatically so a security engineer can review the change set. Which CDK deployment configuration satisfies this requirement?
Bootstrap the target account with cdk bootstrap --trusted-accounts <pipeline_account> to block deployments that modify security settings.
Run cdk deploy --no-execute to always create but never execute the CloudFormation change set until it is approved manually.
Add the --force flag to cdk deploy so the pipeline prompts for confirmation before applying IAM or networking changes.
Run cdk deploy --require-approval broadening so the command fails in the pipeline whenever security-sensitive changes are detected.
Answer Description
The CDK CLI can detect security-sensitive changes, such as new IAM resources or rules that broaden network access, during cdk deploy. When the command is run with --require-approval broadening, it will prompt for confirmation only when such permission-broadening changes are present. In a non-interactive CI/CD pipeline the prompt cannot be answered, so the CLI exits with a non-zero code, automatically stopping the deployment and allowing a manual review.
Using --no-execute always halts execution, even for harmless updates, creating unnecessary manual work. Bootstrapping with --trusted-accounts only controls who can publish assets and does not enforce change approvals. The --force flag disables all approval prompts, the exact opposite of the requirement.
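To illustrate, a CDK (Python) stack containing the two security-sensitive changes from the scenario; construct IDs are invented for this sketch, and running cdk deploy --require-approval broadening against it in a pipeline would exit non-zero pending approval:

```python
from aws_cdk import App, Stack, aws_ec2 as ec2, aws_iam as iam
from constructs import Construct

class SensitiveStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Permission-broadening IAM policy: s3:PutObject on all resources.
        iam.ManagedPolicy(
            self, "WidePolicy",
            statements=[iam.PolicyStatement(actions=["s3:PutObject"], resources=["*"])],
        )

        # Security group rule that opens SSH (TCP 22) to the world.
        vpc = ec2.Vpc(self, "Vpc")
        sg = ec2.SecurityGroup(self, "Sg", vpc=vpc)
        sg.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(22))

app = App()
SensitiveStack(app, "SensitiveStack")
app.synth()
```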
Ask Bash
What does '--require-approval broadening' do in AWS CDK?
What is the purpose of 'cdk bootstrap --trusted-accounts'?
Why shouldn't '--no-execute' be used for this requirement?
A Linux-based EC2 instance in a production VPC hosts a MySQL OLTP database on a 500 GiB gp2 EBS volume. CloudWatch shows regular spikes above 100 ms volume latency, a VolumeQueueLength greater than 60, and average read/write IOPS near 8,000. The operations team must reduce latency immediately, avoid any downtime, and keep storage costs as low as possible. Which action meets these requirements?
Purchase additional I/O credit bundles to extend the gp2 burst duration during peak hours.
Use Elastic Volumes to convert the existing gp2 volume to gp3 and provision 12,000 IOPS with 500 MiB/s throughput.
Change the volume type to st1 throughput-optimized HDD to increase throughput at a lower price.
Modify the volume to io2 Block Express and provision 16,000 IOPS and 1,000 MiB/s throughput.
Answer Description
The gp3 volume type decouples capacity from performance and is priced about 20% lower than gp2 while offering a default 3,000 IOPS that can be provisioned up to 16,000 IOPS and 1,000 MiB/s. Using the Elastic Volumes feature, the team can modify the existing gp2 volume to gp3 and set higher IOPS and throughput online, so no instance stop, snapshot, or re-attach is required. Migrating to io2 or io2 Block Express would also reduce latency, but those volumes are significantly more expensive and therefore do not satisfy the cost constraint. Purchasing burst credits for gp2 is not possible; the volume automatically earns credits based on size. Converting to st1 lowers costs but is optimized for large sequential throughput and would increase latency for random OLTP workloads.
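A sketch of the online change with boto3 (the volume ID is a placeholder); Elastic Volumes applies it while the volume remains attached and in use:

```python
import boto3

ec2 = boto3.client("ec2")

# Convert the attached gp2 volume to gp3 and raise its performance in place;
# no instance stop, snapshot, or re-attach is required.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="gp3",
    Iops=12000,
    Throughput=500,  # MiB/s
)
```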
Ask Bash
What is the difference between gp2 and gp3 volumes in AWS?
What is Elastic Volumes in AWS, and how does it work?
Why is io2 Block Express not suitable for this scenario despite better performance?
An organization has a Direct Connect link between its on-premises data center and an AWS VPC. EC2 instances in the VPC must resolve host names in the on-premises corp.example.com domain by using the existing on-premises DNS server at 10.0.0.2. The operations team wants a scalable solution that requires no per-instance configuration changes or manual record maintenance. According to AWS best practices, which action will meet these requirements?
Enable DNS resolution and DNS hostnames in the VPC; the Amazon-provided DNS server will automatically forward corp.example.com queries across Direct Connect.
Create a Route 53 Resolver outbound endpoint in two private subnets. Add a rule that forwards queries for corp.example.com to 10.0.0.2 and associate the rule with the VPC.
Create a private hosted zone for corp.example.com in Route 53 and manually populate A and CNAME records for all on-premises hosts.
Update the VPC's DHCP options set to hand out 10.0.0.2 as the primary DNS server, then restart networking on every EC2 instance.
Answer Description
Route 53 Resolver can forward DNS queries that originate in a VPC to external DNS servers through an outbound endpoint. Creating the endpoint in at least two subnets provides high availability, and a forwarding rule that targets the on-premises DNS IP ensures that any query for corp.example.com leaves the VPC and is answered by the data-center resolver. No changes are needed on the EC2 instances because they continue to use the Amazon-provided .2 resolver, which automatically consults the forwarding rule.
Creating a private hosted zone would require manually adding and updating records for every on-premises host, which is operationally heavy and error-prone. Relying on the Amazon-provided DNS alone will not work because it never forwards queries to on-premises networks. Pointing instances directly to the on-premises DNS server through the VPC's DHCP options removes the benefit of the Amazon-provided resolver (for internal AWS zones) and introduces a single point of failure without providing Route 53-level visibility or logging.
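A boto3 sketch of the three steps (subnet, security group, and VPC IDs are placeholders):

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint in two subnets for high availability.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="corp-outbound-1",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaa111"}, {"SubnetId": "subnet-bbb222"}],
)["ResolverEndpoint"]

# Forward corp.example.com queries to the on-premises DNS server.
rule = r53r.create_resolver_rule(
    CreatorRequestId="corp-forward-1",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    ResolverEndpointId=endpoint["Id"],
    TargetIps=[{"Ip": "10.0.0.2", "Port": 53}],
)["ResolverRule"]

# Associate the rule with the VPC so the .2 resolver consults it.
r53r.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0123456789abcdef0")
```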
Ask Bash
What is a Route 53 Resolver outbound endpoint?
Why is creating a private hosted zone not the best solution in this scenario?
Why is relying on the Amazon-provided DNS resolver alone insufficient for external domains?
Your team operates a fleet of long-running EC2 instances that rarely exceed 20% CPU or memory utilization. You want data-driven recommendations for downsizing or moving to a different instance family while maintaining equal or better performance. Which AWS tool should you use first to obtain these instance-level rightsizing suggestions?
AWS Cost Explorer rightsizing recommendations
AWS Trusted Advisor
Amazon CloudWatch Metrics Explorer
AWS Compute Optimizer
Answer Description
AWS Compute Optimizer analyzes up to 14 days of CloudWatch metrics for each EC2 instance, models workload characteristics, and produces precise rightsizing recommendations that maintain or improve performance. Cost Explorer and Trusted Advisor also surface cost-saving ideas, but their recommendations are less granular and draw from billing data rather than detailed performance modeling. CloudWatch Metrics Explorer only visualizes resource metrics and does not generate optimization guidance.
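A sketch of retrieving the findings with boto3, assuming the account has already opted in to Compute Optimizer:

```python
import boto3

co = boto3.client("compute-optimizer")

# List rightsizing findings for all analyzed EC2 instances in the Region.
for rec in co.get_ec2_instance_recommendations()["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"])
    for option in rec["recommendationOptions"]:
        print("  candidate:", option["instanceType"])
```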
Ask Bash
What is AWS Compute Optimizer and how does it work?
How is AWS Compute Optimizer different from AWS Cost Explorer's rightsizing recommendations?
Why does AWS Trusted Advisor not provide the same level of insights as AWS Compute Optimizer?
A financial-services company with an AWS Organizations hierarchy must prevent creation of any resources outside us-east-1 and us-east-2 to meet regulatory requirements. The CloudOps team wants a solution that blocks non-compliant API calls across all existing and future member accounts with the least ongoing operational effort. Which approach satisfies these requirements?
Deploy the AWS Config managed rule that detects resources in unapproved Regions and use Systems Manager Automation to delete any that are found.
Attach a service control policy at the organization root that denies all actions when the aws:RequestedRegion condition is not us-east-1 or us-east-2.
Enable a multi-Region CloudTrail and configure Amazon EventBridge to invoke a Lambda function that stops or deletes resources launched in other Regions.
Create an IAM permission boundary in every account that allows actions only in the approved Regions and mandate its use for all roles.
Answer Description
A service control policy (SCP) applied to the organization root is evaluated before IAM permissions in every member account. By using a Deny statement with the aws:RequestedRegion condition key, the SCP blocks any API call targeting a Region other than us-east-1 or us-east-2, preventing resource creation proactively in both existing and newly added accounts without additional setup.
Permission boundaries must be attached to every principal in every account and do not automatically cover new accounts, increasing operational overhead. An AWS Config rule or an EventBridge-triggered Lambda provides only a detective or reactive control; either approach allows the non-compliant resource to be created before remediation and requires additional automation to remain effective. Therefore, the SCP is the simplest and most effective preventive control for Region enforcement.
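For illustration, a minimal sketch of the SCP and its attachment with boto3; the root ID is a placeholder, and real-world policies often add exemptions for global services, which this version omits:

```python
import boto3, json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-east-2"]}
        },
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="RegionGuardrail",
    Description="Deny API calls outside us-east-1 and us-east-2",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]

org.attach_policy(PolicyId=policy["Id"], TargetId="r-examplerootid")  # placeholder root ID
```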
Ask Bash
What is a Service Control Policy (SCP) in AWS Organizations?
How does the aws:RequestedRegion condition key work in SCPs?
Why are SCPs a better choice than IAM permission boundaries for Region enforcement?
An e-commerce application runs on EC2 instances in two Availability Zones, fronted by an Application Load Balancer (ALB). Some checkout requests take 3 to 4 minutes to complete, and users intermittently receive 504 Gateway Timeout responses. CloudWatch shows the targets are healthy and no Auto Scaling scale-in events occurred. Which change will most effectively prevent these timeouts without redesigning the application?
Increase the ALB idle timeout to a value higher than the longest expected request processing time.
Replace the ALB with a Network Load Balancer to remove all timeout limits.
Enable connection draining by setting the target group deregistration delay to 300 seconds.
Enable cross-zone load balancing on the ALB.
Answer Description
The ALB returns a 504 Gateway Timeout when it closes the connection because no response is received from the target before the load balancer's idle timeout elapses. The default idle timeout for an ALB is 60 seconds, which is shorter than the 3- to 4-minute checkout operations. Increasing the idle timeout to exceed the longest expected request duration allows the load balancer to keep the connection open until the target responds, eliminating the observed 504 errors.
Changing the deregistration delay only affects in-flight requests during scale-in, not long-running requests on healthy targets. Enabling cross-zone load balancing distributes traffic across zones and does not influence connection timeouts. A Network Load Balancer also has a connection idle timeout (with a default of 350 seconds for TCP listeners), so replacing the ALB with an NLB would not remove timeout limits. Simply adjusting the ALB's idle timeout is the most direct and cost-effective fix.
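A sketch of the attribute change with boto3 (the load balancer ARN is a placeholder); 300 seconds exceeds the longest observed 3- to 4-minute checkout request:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Raise the idle timeout from the 60-second default to 300 seconds.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/checkout/1234567890abcdef",  # placeholder
    Attributes=[{"Key": "idle_timeout.timeout_seconds", "Value": "300"}],
)
```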
Ask Bash
What is the ALB idle timeout, and why does it matter?
How does the deregistration delay impact in-flight requests?
Why is replacing the ALB with an NLB not effective for timeout issues?
A DevOps engineer maintains a CloudFormation stack that provisions an Amazon RDS DB instance plus hundreds of other resources. Management mandates that future stack updates must never delete or replace the existing database, while allowing normal updates to all other resources. The engineer wants a reusable, stack-level control that does not require changing the template for each release. Which approach meets these requirements?
Run drift detection before every update and cancel the deployment if the DB instance is listed.
Add the DeletionPolicy attribute set to Retain on the DB instance within the template.
Attach a stack policy that explicitly denies Update:Replace and Update:Delete actions on the DB instance's logical ID.
Enable termination protection on the stack so the DB instance cannot be modified.
Answer Description
A stack policy is evaluated during CloudFormation stack update operations. By attaching a policy that denies Update:Replace and Update:Delete actions on the logical ID representing the DB instance, the database is protected from replacement or deletion, yet the same stack can continue to update other resources. Termination protection blocks the entire stack from being deleted but does not stop an update from replacing a resource. The DeletionPolicy attribute only takes effect during stack deletion, not during updates. Drift detection is read-only; it reports configuration differences but cannot prevent changes.
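A sketch of such a stack policy applied with boto3; the stack name and the ProductionDatabase logical ID are placeholders:

```python
import boto3, json

# Allow normal updates everywhere, but deny replacement or deletion of the
# DB instance's logical ID during stack updates.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "Update:*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["Update:Replace", "Update:Delete"],
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
    ]
}

cfn = boto3.client("cloudformation")
cfn.set_stack_policy(StackName="prod-stack", StackPolicyBody=json.dumps(stack_policy))
```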
Ask Bash
What are stack policies in AWS CloudFormation?
How does the DeletionPolicy attribute work in CloudFormation templates?
What is the purpose of termination protection on a CloudFormation stack?
A company runs its e-commerce app in two AWS Regions. Each Region has an ALB fronting EC2 instances. The business wants active-passive failover: traffic must go to the standby Region only when the primary Region is unreachable. Operations require DNS health checks to query HTTPS /health on the app, not the ALB default check. Which solution provides this failover with minimal operational overhead?
Deploy AWS Global Accelerator with both ALBs as endpoints and assign all traffic weight to the primary Region; rely on the accelerator's built-in health checks for failover.
Create two latency-based alias A records that point to each Region's ALB and enable Evaluate Target Health on both records.
Create two CNAME records in Route 53 that use the failover routing policy, each pointing to the DNS name of its Region's ALB. Attach an HTTPS health check that calls /health to the primary record and leave the secondary record without a health check.
Create weighted DNS records (100 and 0) for the two ALBs and use a script to update the weights based on a periodic curl /health test.
Answer Description
Failover routing in Amazon Route 53 lets you create a primary and a secondary record. The primary record is associated with an HTTPS health check that can be configured to request a specific path such as /health. When that health check fails, Route 53 automatically stops returning the primary record and returns the secondary record instead, sending traffic to the standby Region. Because the records are CNAMEs that reference the ALB DNS names, you can attach the custom health check. Alias records pointing to an ALB cannot have custom Route 53 health checks associated with them; they can only use 'Evaluate Target Health,' which does not check a specific path. A scripted approach has higher operational overhead. While AWS Global Accelerator also provides health-based failover and allows custom health check paths, it is a more complex service designed for improving global application performance and is not the most direct or minimal solution for this specific DNS failover requirement. Latency-based routing would distribute traffic to both Regions instead of maintaining an active-passive posture.
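A boto3 sketch of the primary side (hosted zone ID, record name, and ALB DNS names are placeholders); a matching SECONDARY record without a health check points at the standby Region's ALB:

```python
import boto3

r53 = boto3.client("route53")

# HTTPS health check against the application's /health path on the primary ALB.
hc = r53.create_health_check(
    CallerReference="checkout-primary-hc-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-123.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)["HealthCheck"]

# PRIMARY failover CNAME tied to the health check.
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "shop.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": hc["Id"],
            "ResourceRecords": [{"Value": "primary-alb-123.us-east-1.elb.amazonaws.com"}],
        },
    }]},
)
```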
Ask Bash
What is failover routing in Route 53?
How do custom health checks in Route 53 work with ALBs?
Why is AWS Global Accelerator not ideal for this scenario?
A company stores department shared files on Amazon FSx for Windows File Server mapped as an SMB share to employee laptops. Compliance requires that users can independently restore earlier versions of files or folders several times each day without contacting administrators. The operations team wants the solution to keep storage overhead low and avoid provisioning additional file systems. Which FSx capability should the CloudOps engineer activate?
Configure shadow copies on the FSx volume with an hourly schedule.
Create daily automatic backups of the file system by using AWS Backup.
Enable cross-region data replication to a secondary Amazon FSx file system.
Turn on data deduplication for the file system to save space.
Answer Description
Enabling shadow copies on an Amazon FSx for Windows File Server volume uses the Windows Volume Shadow Copy Service to create point-in-time snapshots of the file system. Users can access these snapshots through the Previous Versions tab in Windows Explorer and restore files or folders without administrator assistance. Shadow copies are incremental, so they consume storage only for changed blocks, meeting the requirement for low overhead. Automatic backups, cross-region replication, and data deduplication do not give end users a self-service way to retrieve historical file versions multiple times per day.
Ask Bash
What are shadow copies and how do they work in Amazon FSx for Windows File Server?
How does enabling shadow copies differ from using AWS Backup?
What is the advantage of incremental storage consumption in shadow copies?
An operations team created an Amazon CloudWatch composite alarm that enters the ALARM state when any of three underlying metric alarms breach. The team attempted to attach an EC2 Auto Scaling policy to the composite alarm so that additional instances launch automatically, but the console prevented the configuration. Which approach will allow the alarm to trigger the scaling action while following AWS best practices?
Convert the composite alarm to an anomaly detection alarm and then attach the Auto Scaling policy.
Replace the composite alarm with a standard metric alarm that uses a metric math expression combining the three metrics, then attach the Auto Scaling policy.
Create an Amazon EventBridge rule that matches the composite alarm's state change to ALARM and set the Auto Scaling policy as the rule's target.
Enable action suppression on the composite alarm to allow EC2 Auto Scaling actions to be configured.
Answer Description
Composite alarms can send Amazon SNS notifications but cannot invoke Auto Scaling or EC2 actions directly. However, every state change on a CloudWatch alarm generates an Amazon EventBridge event. By creating a rule that matches the composite alarm's state change to ALARM and targeting the Auto Scaling policy, the team can launch instances when the composite alarm fires. Replacing the composite alarm with metric math would still require creating a standard metric alarm; converting to an anomaly detection alarm or enabling action suppression does not make Auto Scaling actions possible.
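A sketch of the EventBridge rule with boto3; the alarm name is a placeholder, and the target ARN is left abstract because it depends on how the scaling action is wired up:

```python
import boto3, json

events = boto3.client("events")

# Match the composite alarm's transition into the ALARM state.
events.put_rule(
    Name="composite-alarm-scale-out",
    EventPattern=json.dumps({
        "source": ["aws.cloudwatch"],
        "detail-type": ["CloudWatch Alarm State Change"],
        "detail": {
            "alarmName": ["FleetHealthComposite"],  # placeholder alarm name
            "state": {"value": ["ALARM"]},
        },
    }),
)

# Route the event to the configured scaling target (ARN is a placeholder).
events.put_targets(
    Rule="composite-alarm-scale-out",
    Targets=[{"Id": "scale-out", "Arn": "arn:aws:..."}],
)
```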
Ask Bash
What is a composite alarm in Amazon CloudWatch?
How does Amazon EventBridge work with CloudWatch alarms?
Why can't a composite alarm directly trigger EC2 Auto Scaling actions?
A company created a VPC with two private subnets that have only IPv6 CIDR blocks. EC2 instances in these subnets must download operating-system updates from public repositories on the internet, but company policy forbids any unsolicited inbound connections from the internet to those instances. Which solution satisfies the requirements in the most cost-effective way?
Create an interface VPC endpoint for AWS Systems Manager and block all other outbound IPv6 traffic with network ACLs.
Create a NAT gateway in a public subnet, enable DNS64 for the private subnets, and add a 64:ff9b::/96 route in each subnet's route table that targets the NAT gateway.
Attach a standard internet gateway to the VPC and rely on outbound-only rules in each subnet's security group to block inbound traffic.
Create an egress-only internet gateway, attach it to the VPC, and add a ::/0 route in each subnet's route table that targets the gateway.
Answer Description
An egress-only internet gateway (EIGW) is purpose-built for outbound-only IPv6 connectivity. It is stateful, so return traffic is automatically allowed while unsolicited inbound IPv6 packets are dropped, meeting the security requirement without extra rules. EIGWs have no hourly charge, so they are cheaper than alternatives. A NAT gateway could also work by using NAT64 together with DNS64, but it adds hourly and data-processing costs. A standard internet gateway would expose the instances to inbound IPv6 traffic unless every subnet or instance is locked down with security groups; the policy prefers the gateway itself to block such traffic. Interface VPC endpoints provide private access to specific AWS services only and cannot reach public package mirrors.
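A boto3 sketch of the two steps (VPC and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the egress-only internet gateway for outbound-only IPv6.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Route all IPv6 traffic from each private subnet through the EIGW.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```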
Ask Bash
What is an egress-only internet gateway (EIGW)?
Why is a NAT gateway not the cost-effective solution here?
What is the difference between standard internet gateways and egress-only internet gateways?
A CloudOps engineer configured a CloudWatch alarm to invoke a Lambda function directly for automated remediation. The alarm is correctly transitioning to the ALARM state, but the Lambda function is not being invoked. Logs show no invocation attempts. What is the MOST likely cause of this issue?
The alarm action must first send a notification to an SNS topic, which then triggers the Lambda function.
An Amazon EventBridge rule must be created to route the alarm state change to the Lambda function.
The Lambda function's IAM execution role does not grant permission to be invoked by CloudWatch.
The Lambda function is missing a resource-based policy granting invoke permissions to the CloudWatch Alarms service principal.
Answer Description
As of late 2023, CloudWatch alarms can invoke Lambda functions directly. For this to work, the Lambda function must have a resource-based policy that grants the CloudWatch Alarms service principal (lambda.alarms.cloudwatch.amazonaws.com) permission to invoke it. Without this permission, CloudWatch cannot trigger the function, even if the alarm action is configured correctly. The Lambda function's execution role defines what the function can do, not who can invoke it. The old method of using an SNS topic is no longer required for this direct integration. Finally, using EventBridge is an alternative integration pattern, not a solution for a failing direct invocation permission.
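A sketch of adding the missing permission with boto3; the function name and alarm ARN are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Grant the CloudWatch Alarms service principal permission to invoke the
# function; scoping to the alarm's ARN follows least privilege.
lam.add_permission(
    FunctionName="remediation-fn",  # placeholder function name
    StatementId="AllowCloudWatchAlarmInvoke",
    Action="lambda:InvokeFunction",
    Principal="lambda.alarms.cloudwatch.amazonaws.com",
    SourceArn="arn:aws:cloudwatch:us-east-1:123456789012:alarm:HighCpu",  # placeholder
)
```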
Ask Bash
What is a resource-based policy in AWS?
Why does the CloudWatch Alarms service principal need permissions to invoke a Lambda function?
How is the integration between CloudWatch and Lambda different from using SNS or EventBridge?
A company runs an Amazon ECS service on the EC2 launch type across two Availability Zones. Sudden traffic bursts increase the number of messages in an Amazon SQS queue that the tasks process, causing 5xx errors before additional tasks start. The DevOps team wants the service to scale proactively based on the queue length while minimizing code maintenance and operational effort. Which solution should they implement?
Configure an Application Auto Scaling target-tracking policy for the ECS service that uses the SQS ApproximateNumberOfMessagesVisible CloudWatch metric.
Deploy an AWS Lambda function that polls the queue and calls the ECS UpdateService API to adjust the desired count.
Increase the CPU reservation for each task so that existing tasks can handle the additional workload during bursts.
Move the workload to AWS Fargate and rely on the Fargate launch type's capacity management to handle bursts automatically.
Answer Description
Amazon ECS integrates with Application Auto Scaling, which can attach a target-tracking or step scaling policy to an ECS service. The policy can reference any Amazon CloudWatch metric, including the SQS queue's standard ApproximateNumberOfMessagesVisible metric. When the metric deviates from the target (or breaches a step threshold), Application Auto Scaling automatically increases or decreases the desired task count with no custom code. Creating and scheduling a Lambda function adds operational overhead, migrating to Fargate does not in itself provide proactive scaling, and changing CPU reservations does not address the need to scale on queue depth.
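A boto3 sketch of the scaling setup; cluster, service, and queue names plus the capacity and target values are illustrative placeholders:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the ECS service's desired count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/worker-svc",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target-track on queue depth: keep roughly 10 visible messages on average.
aas.put_scaling_policy(
    PolicyName="sqs-depth-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/worker-svc",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Average",
        },
    },
)
```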
Ask Bash
What does the ApproximateNumberOfMessagesVisible metric indicate in Amazon SQS?
What is the difference between target-tracking and step scaling in Application Auto Scaling?
How does Amazon ECS integrate with Application Auto Scaling?
An operations team runs an Auto Scaling group of Linux EC2 instances in two private subnets (one in each Availability Zone) of a VPC. The instances must occasionally download patches from public YUM repositories and read data from an S3 bucket. Each subnet currently uses its own NAT gateway, and the hourly NAT gateway charges are higher than all data-processing fees combined. The team must lower network costs while ensuring that outbound connectivity continues if either Availability Zone becomes unavailable. Which solution meets these requirements while following AWS best practices?
Create a gateway VPC endpoint for Amazon S3 and replace each NAT gateway with a small NAT instance in the corresponding Availability Zone. Disable source/destination checks on the instances and update the private route tables to use the new NAT instances.
Replace both NAT gateways with a single NAT gateway in one Availability Zone and point the default route of both private subnets to that gateway.
Attach an egress-only internet gateway to the VPC and add a default route from each private subnet to the gateway.
Remove the NAT gateways and create an interface VPC endpoint for AWS Systems Manager; configure Patch Manager to download updates through the endpoint.
Answer Description
A gateway VPC endpoint lets instances access Amazon S3 without traversing a NAT device, removing that portion of the traffic from any hourly or data-processing charge. Replacing each managed NAT gateway with a small NAT instance in the same Availability Zone eliminates the NAT gateway hourly fee yet preserves zonal redundancy: if one AZ fails, instances in the surviving AZ still have a local NAT instance for internet-bound traffic. NAT instances cost less per hour than NAT gateways and support the required outbound access when sized for the workload.
A single shared NAT gateway lowers hourly cost but introduces cross-AZ data charges and creates a single point of failure, violating the availability requirement. An egress-only internet gateway only supports IPv6 traffic, so IPv4 YUM repository access would fail. An interface endpoint for Systems Manager does not provide general internet access and cannot reach public YUM repositories, so patching would be interrupted.
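A boto3 sketch of the endpoint and the NAT-instance attribute change; the VPC, route table, and instance IDs plus the Region in the service name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint so S3 traffic bypasses the NAT path entirely.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-aaa111", "rtb-bbb222"],
)

# NAT instances must forward traffic they did not originate, so disable
# the source/destination check on each one.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},
)
```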
Ask Bash
What is a gateway VPC endpoint, and how does it help reduce costs?
What is the difference between a NAT gateway and a NAT instance?
Why is using a single NAT gateway across two Availability Zones considered a bad practice?
An enterprise uses AWS Organizations with a single root and two organizational units (OUs) named Prod and Dev. The security team must guarantee that Dev accounts cannot launch Amazon EC2 instances that receive a public IPv4 address, while Prod accounts retain full functionality. The solution must be centrally enforced and impossible for Dev account administrators to bypass. Which approach meets these requirements MOST effectively?
Attach an SCP to the Dev OU that explicitly denies ec2:RunInstances when the request parameter AssociatePublicIpAddress is true.
In every Dev account, attach an IAM customer managed policy that denies launching EC2 instances with public IP addresses to all users and roles.
Enable Amazon GuardDuty in the management account and configure an organization-wide detector to block Dev accounts from launching instances with public IP addresses.
Enable AWS Config across the organization and add a rule that terminates any instance in the Dev OU that is launched with a public IP address.
Answer Description
A service control policy (SCP) attached to the Dev OU establishes an organization-wide guardrail. By adding an explicit Deny on ec2:RunInstances when the ec2:AssociatePublicIpAddress condition key evaluates to true, no principal in any Dev account can grant itself permission to launch instances with public IPs. Because SCPs are evaluated before IAM policies and cannot be overridden by account administrators, the restriction is centrally enforced.
Creating IAM policies in each Dev account would work only until an administrator with sufficient privileges changes or detaches the policy. AWS Config and GuardDuty can detect or alert on non-compliant resources but cannot block the API call in real time, so they do not satisfy the requirement to prevent the action.
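For illustration, the key SCP statement as a Python dict (a minimal sketch; attach it to the Dev OU the same way as the Region guardrail shown earlier):

```python
import json

# Deny RunInstances whenever the launch request would assign a public IPv4
# address; the ec2:AssociatePublicIpAddress condition key carries that flag
# on the network-interface resource.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicIpLaunches",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:network-interface/*",
        "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
    }],
}
print(json.dumps(scp, indent=2))
```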
Ask Bash
What is an SCP in AWS?
Why do SCPs take precedence over IAM policies in AWS Organizations?
What are the limitations of IAM policies compared to SCPs?
A company runs a production MySQL database on a single-AZ Amazon RDS instance in us-east-1a. Compliance now requires that the database experience no more than 2 minutes of unavailability if the Availability Zone hosting the primary instance fails. Operations staff must not perform any manual actions during a failover, and the solution should follow AWS best practices while minimizing operational overhead. Which change will meet these requirements?
Create an in-region MySQL read replica in another Availability Zone and configure Amazon RDS to promote it if the primary instance fails.
Migrate the database to two self-managed MySQL EC2 instances in separate Availability Zones behind Amazon RDS Proxy to handle automatic failover.
Schedule frequent automated snapshots and restore the latest snapshot into another Availability Zone when a failure is detected.
Modify the DB instance to enable Multi-AZ deployment so Amazon RDS creates a synchronous standby in a different Availability Zone that can automatically assume the primary role on failure.
Answer Description
Modifying the existing RDS instance to a Multi-AZ deployment provisions a synchronous standby in another Availability Zone. Amazon RDS continuously monitors both nodes and, when the primary becomes unavailable, automatically updates the instance's DNS to point to the standby. Typical failover completes in about 60-120 seconds, meeting the 2-minute RTO, and requires no administrator intervention. Read replicas use asynchronous replication and require manual promotion, while snapshots and EC2-hosted databases behind RDS Proxy demand manual steps or are unsupported for automatic AZ-level failover, so they do not satisfy the stated RTO or operational constraints.
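A sketch of the modification with boto3 (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Convert the existing instance to Multi-AZ; RDS provisions a synchronous
# standby in another AZ and fails over automatically on failure.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```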
Ask Bash
What is Multi-AZ deployment in Amazon RDS?
Why are read replicas not suitable for automatic failover in Amazon RDS?
What is the difference between Multi-AZ RDS instances and automated snapshots for disaster recovery?
Your company runs an API behind an Application Load Balancer that is protected by an AWS WAFv2 web ACL. Security engineers must audit every request that AWS WAF blocks, keep the detailed records for at least 30 days, and let analysts run ad-hoc SQL queries on this data with minimal operations effort and cost. Which solution meets these requirements?
Enable AWS CloudTrail data events for the load balancer and stream the logs to Amazon OpenSearch Service for querying.
Publish AWS WAF metrics to Amazon CloudWatch, retain the metrics for 30 days, and analyze them with CloudWatch Logs Insights.
Turn on Application Load Balancer access logging to S3 and have analysts use Amazon Athena to search for HTTP 403 responses.
Enable AWS WAF logging and configure a Kinesis Data Firehose delivery stream that sends the logs to an S3 bucket with a 30-day lifecycle policy; analysts query the data with Amazon Athena.
Answer Description
AWS WAF can stream detailed JSON logs of every evaluated request to Amazon Kinesis Data Firehose. Firehose can then deliver the records directly to an S3 bucket, where an S3 Lifecycle rule can transition or expire objects after 30 days to control storage costs. Because the data is stored in S3, analysts can create an Athena table and run ad-hoc SQL queries without additional infrastructure.
Application Load Balancer access logs do not include the specific WAF rule that caused a block action, so they cannot satisfy the audit requirement. CloudWatch metrics expose only aggregated counts, not request-level details, and CloudWatch Logs Insights cannot query data that is never written to Logs. CloudTrail records control-plane API calls, not the individual HTTP requests processed by the load balancer or WAF. Therefore, enabling AWS WAF logging through Kinesis Data Firehose to S3 with Athena querying is the only option that meets all stated needs.
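A boto3 sketch of enabling the logging (both ARNs are placeholders); note that the Firehose delivery stream name must begin with aws-waf-logs-:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Stream full request logs from the web ACL to the Firehose delivery stream;
# Firehose then writes to an S3 bucket with a 30-day lifecycle rule, and
# Athena queries the delivered objects.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/api-acl/abc123",
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-api"
        ],
    },
)
```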
Ask Bash
What is AWS WAFv2, and how does it work?
How does Amazon Kinesis Data Firehose deliver logs to S3?
What is Amazon Athena, and how does it query data in S3?
Your company's AWS Organization contains Dev and Prod organizational units (OUs) spanning us-east-1 and us-west-2. Operations must deploy the same CloudWatch alarm and metric filter stack to every account in those OUs and automatically roll it out to any new accounts that are added. The solution should minimize ongoing administration and support automatic rollback on failure. Which approach meets these requirements?
Publish the stack as an AWS Service Catalog product and instruct administrators in each account to launch the product in the required Regions.
Store the template in an S3 bucket and configure an EventBridge rule that triggers a Lambda function on every CreateAccount event to assume a cross-account role and deploy the stack.
Use AWS Resource Access Manager to share the existing CloudWatch alarm and metric filter from a central account with the Dev and Prod OUs.
Create a CloudFormation StackSet that uses service-managed permissions, targets the Dev and Prod OUs, and specifies us-east-1 and us-west-2 as deployment Regions so that new accounts automatically receive the stack.
Answer Description
CloudFormation StackSets with service-managed permissions are designed for centrally deploying stacks across multiple AWS accounts and Regions that are members of an AWS Organization. When you target one or more OUs, CloudFormation automatically creates or updates stacks in every existing account in the specified Regions. If a new account later joins the OU, the StackSet automatically deploys the stack to that account as well, and built-in stack rollback handles failed deployments. AWS RAM cannot share CloudWatch alarms because CloudWatch resources are not a supported shareable type. Service Catalog would require each account owner to launch the product manually, and a custom Lambda triggered by CreateAccount events adds more operational code to build and maintain. Therefore, using a CloudFormation StackSet with service-managed permissions and OU targets is the lowest-effort, fully automatic solution.
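A boto3 sketch of the StackSet (the template file and OU IDs are placeholders):

```python
import boto3

cfn = boto3.client("cloudformation")

# Service-managed StackSet that auto-deploys to accounts joining the OUs.
cfn.create_stack_set(
    StackSetName="ops-alarms",
    TemplateBody=open("alarms.yaml").read(),  # placeholder template file
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy to every existing account in both OUs, in both Regions.
cfn.create_stack_instances(
    StackSetName="ops-alarms",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-dev-example1", "ou-prod-example2"]},
    Regions=["us-east-1", "us-west-2"],
)
```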
Ask Bash
What are CloudFormation StackSets?
What are service-managed permissions in CloudFormation StackSets?
How do StackSets handle new accounts added to Organizational Units (OUs)?
A company runs a web application that stores user session data in an Amazon DynamoDB table configured with provisioned capacity. Traffic is normally low but occasionally experiences unpredictable spikes that exceed the table's read capacity, resulting in throttling. The operations team must eliminate throttling during spikes while keeping costs low during normal traffic and without changing any application code. Which solution meets these requirements?
Create a new table with higher provisioned capacity and replicate data into it by using DynamoDB Streams.
Enable DynamoDB auto scaling for the table's read capacity and set an appropriate minimum and maximum range.
Change the table from provisioned to on-demand capacity mode.
Manually increase the table's provisioned read capacity to the highest observed traffic peak.
Answer Description
Changing the table to on-demand capacity mode lets DynamoDB automatically accommodate sudden increases in traffic up to double the previous peak, while charging only for the actual read and write requests. This removes the need to forecast capacity and prevents throttling when unpredictable spikes occur.
Enabling auto scaling on a provisioned table can still allow throttling because scaling is incremental and may lag behind sudden bursts. Manually raising provisioned capacity to the highest observed peak would stop throttling but wastes money when traffic is low. Replicating data to another table adds complexity and does not address the root cause of capacity shortages.
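A sketch of the mode change with boto3 (the table name is a placeholder); no application code changes are needed:

```python
import boto3

ddb = boto3.client("dynamodb")

# Switch the table to on-demand; DynamoDB then absorbs traffic spikes
# without capacity planning, billed per request.
ddb.update_table(
    TableName="user-sessions",  # placeholder table name
    BillingMode="PAY_PER_REQUEST",
)
```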
Ask Bash
What is the difference between provisioned and on-demand capacity modes in DynamoDB?
What causes throttling in DynamoDB tables configured with provisioned capacity mode?
What are the benefits of using DynamoDB Streams for data replication?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.