AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)
Use the form below to configure your AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose from 5 to 100 questions and set a time limit.

AWS Certified CloudOps Engineer Associate SOA-C03 Information
The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.
The exam covers the full lifecycle of cloud operations through five key domains: Monitoring and Performance, Reliability and Business Continuity, Deployment and Automation, Security and Compliance, and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You’ll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.
Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

Free AWS Certified CloudOps Engineer Associate SOA-C03 Practice Test
- 20 Questions
- Unlimited time
- Monitoring, Logging, Analysis, Remediation, and Performance Optimization
- Reliability and Business Continuity
- Deployment, Provisioning, and Automation
- Security and Compliance
- Networking and Content Delivery
An IAM administrator must create a managed policy that lets members of the DevOps group call dynamodb:DeleteItem on tables in the development account, but only when the users are authenticated with multi-factor authentication (MFA) for the current session. Which IAM policy condition will correctly enforce this requirement?
Add a Bool condition that requires the key aws:MultiFactorAuthPresent to be set to "true".
Add a StringEquals condition that checks whether aws:MultiFactorAuthAge equals "0".
Add a StringEqualsIgnoreCase condition that checks whether sts:AuthenticationType equals "mfa".
Add a Bool condition that requires the key aws:SecureTransport to be set to "true".
Answer Description
The context key aws:MultiFactorAuthPresent is set automatically by AWS to true when the principal's credentials were obtained using MFA. Because the key has a Boolean value, the correct way to test it in an IAM policy is with the Bool condition operator. If the key evaluates to true, the action is allowed; otherwise, it is implicitly denied. Using StringEquals on aws:MultiFactorAuthAge is not sufficient because the key returns the age of the MFA authentication, not whether MFA was used. aws:SecureTransport enforces HTTPS, not MFA, and sts:AuthenticationType is not a valid IAM context key, so those conditions do not meet the requirement.
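For reference, a minimal boto3 sketch of a managed policy that uses the Bool condition on aws:MultiFactorAuthPresent; the account ID, table ARN, and policy name are placeholders, not values from the question.

```python
import json
import boto3

# Hypothetical managed policy: allow dynamodb:DeleteItem only when the
# current session was authenticated with MFA. ARN and names are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:DeleteItem",
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/*",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DevOpsDeleteItemWithMFA",
    PolicyDocument=json.dumps(policy_document),
)
```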
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the aws:MultiFactorAuthPresent context key?
How does the Bool condition operate in IAM policies?
Why is aws:MultiFactorAuthAge not sufficient to enforce MFA-only actions?
An application assumes an IAM role in your AWS account to upload objects to an Amazon S3 bucket. After your company enabled AWS Organizations and attached new service control policies (SCPs), the uploads now fail with an AccessDenied error. You must determine, without making any changes in production, whether the denial originates from the role's identity-based policy, the bucket policy, the role's permissions boundary, or the SCP. Which AWS tool lets you simulate the s3:PutObject call and pinpoint the specific policy that blocks the request?
IAM Access Analyzer
AWS Config advanced queries
IAM policy simulator
AWS CloudTrail event history
Answer Description
The IAM policy simulator can model a request made by any IAM principal against a chosen resource and action. During a simulation you can include and evaluate identity-based policies, resource-based policies such as bucket policies, permissions boundaries, and AWS Organizations SCPs. The simulator produces a detailed evaluation log that identifies exactly which policy statement allowed or denied the action. IAM Access Analyzer focuses on external access, while CloudTrail and AWS Config record or evaluate events after they occur and do not perform pre-change simulations.
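As a rough sketch, the same simulation can be scripted with boto3; the role ARN and bucket name are placeholders, and the response's EvaluationResults report the decision, matched statements, and any Organizations-level influence.

```python
import boto3

iam = boto3.client("iam")

# Simulate s3:PutObject for the application role against the bucket objects.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/app-upload-role",
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/*"],
)

for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
    # Organizations (SCP) influence, when present, is reported separately.
    print(result.get("OrganizationsDecisionDetail"))
    for stmt in result.get("MatchedStatements", []):
        print("matched statement from:", stmt.get("SourcePolicyId"))
```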
Ask Bash
What is an IAM policy simulator?
What are service control policies (SCPs) in AWS?
What is the difference between identity-based and resource-based policies?
A company operates dozens of AWS accounts in AWS Organizations. Security requires that any new security group rule that permits 0.0.0.0/0 on TCP port 22 be removed within seconds of creation. The CloudOps engineer must build an agent-less, event-driven solution that can be maintained centrally in a shared services account while minimizing custom code and ongoing operations. Which approach meets these requirements?
Create an Amazon EventBridge rule in each workload account that matches the AWS API call "AuthorizeSecurityGroupIngress" and sends the event to a centrally shared event bus. In the shared services account, invoke an AWS Lambda function that deletes the non-compliant rule.
Enable AWS CloudTrail Lake in every account and schedule a daily SQL query with Amazon EventBridge Scheduler that invokes an AWS Lambda function to remove any discovered non-compliant rules.
Configure the AWS Config managed rule for unrestricted SSH in every account and attach an AWS Systems Manager Automation document that revokes the offending rule when the evaluation is non-compliant.
Launch a small, always-running EC2 instance in each account that polls DescribeSecurityGroups every minute with a script and removes any rule that allows 0.0.0.0/0 on port 22.
Answer Description
An EventBridge rule can match the CloudTrail API event "AuthorizeSecurityGroupIngress", delivering it to a central event bus almost immediately. A resource-based policy on that bus allows events from all member accounts. The centrally managed Lambda function runs only when triggered, needs no agents, and can revoke the offending rule within seconds. AWS Config evaluations are periodic and may take several minutes; CloudTrail Lake queries run on a schedule, not in real time; a polling EC2 instance adds unnecessary cost, maintenance, and single-point-of-failure risk.
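A minimal sketch of the remediation Lambda handler in the shared services account. It adds a cross-account assume-role step that the description implies but does not spell out; the remediation role name is hypothetical, and the field names under requestParameters follow the typical AuthorizeSecurityGroupIngress CloudTrail event shape and should be verified against a real event.

```python
import boto3

def handler(event, context):
    # Event delivered to the central bus for the AuthorizeSecurityGroupIngress
    # CloudTrail API call.
    params = event["detail"]["requestParameters"]
    group_id = params["groupId"]

    # Assume a remediation role in the workload account that emitted the event
    # (role name is a placeholder that would need to exist in every account).
    creds = boto3.client("sts").assume_role(
        RoleArn=f"arn:aws:iam::{event['account']}:role/sg-remediation",
        RoleSessionName="revoke-open-ssh",
    )["Credentials"]
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    for perm in params.get("ipPermissions", {}).get("items", []):
        is_ssh = perm.get("fromPort") == 22 and perm.get("toPort") == 22
        open_to_world = any(
            r.get("cidrIp") == "0.0.0.0/0"
            for r in perm.get("ipRanges", {}).get("items", [])
        )
        if is_ssh and open_to_world:
            # Revoke only the offending rule, leaving other rules intact.
            ec2.revoke_security_group_ingress(
                GroupId=group_id, IpProtocol="tcp",
                FromPort=22, ToPort=22, CidrIp="0.0.0.0/0",
            )
```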
Ask Bash
What is Amazon EventBridge?
How does a Lambda function help in managing security groups?
What is the purpose of a resource-based policy on an EventBridge event bus?
A startup runs 50 Amazon Linux 2 instances across two VPCs. Operations must publish memory utilization and disk I/O metrics to Amazon CloudWatch and stream application logs, without opening SSH access or logging in to each host. Every instance already assumes an IAM role that includes AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy. Which approach meets the requirements with the least operational effort?
Deploy the older CloudWatch Logs agent with an IAM instance profile and generate custom metrics later by querying logs with CloudWatch Logs Insights.
Add a user-data script to each instance that runs the CloudWatch agent configuration wizard at boot and publishes metrics with PutMetricData.
Enable detailed monitoring on the EC2 instances and create a CloudWatch Logs subscription filter to ingest application logs.
Use AWS Systems Manager Run Command to install the CloudWatch unified agent on all instances, store a common agent configuration in Systems Manager Parameter Store, and start the agent fleet-wide.
Answer Description
The CloudWatch unified agent can collect memory, disk, and custom log data. It can be deployed at scale with Systems Manager Run Command, which lets administrators remotely install the agent, reference a single JSON configuration stored in Systems Manager Parameter Store, and start the agent on all managed instances, with no SSH or per-instance scripting required. Enabling detailed monitoring only increases the frequency of the existing hypervisor-level metrics; it does not add memory metrics or logs. Running the agent wizard through user-data or deploying the legacy Logs agent would require per-instance interaction and would not automatically provide the required system metrics, making those options less efficient and harder to maintain.
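A minimal boto3 sketch of the fleet-wide rollout, assuming the commonly used SSM documents for agent management; the target tag and the Parameter Store configuration name are placeholders.

```python
import boto3

ssm = boto3.client("ssm")
targets = [{"Key": "tag:Environment", "Values": ["production"]}]  # placeholder tag

# Install the unified CloudWatch agent on all matching managed instances.
ssm.send_command(
    Targets=targets,
    DocumentName="AWS-ConfigureAWSPackage",
    Parameters={"action": ["Install"], "name": ["AmazonCloudWatchAgent"]},
)

# Configure and start the agent from a shared Parameter Store configuration.
ssm.send_command(
    Targets=targets,
    DocumentName="AmazonCloudWatch-ManageAgent",
    Parameters={
        "action": ["configure"],
        "mode": ["ec2"],
        "optionalConfigurationSource": ["ssm"],
        "optionalConfigurationLocation": ["AmazonCloudWatch-linux"],
        "optionalRestart": ["yes"],
    },
)
```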
Ask Bash
What is AWS Systems Manager Run Command?
What is the CloudWatch unified agent?
What is the role of Systems Manager Parameter Store in this setup?
A company runs 50 Linux EC2 instances whose application data resides on attached EBS volumes. Security policy mandates encrypted, daily backups that must be retained for 35 days and automatically copied to a secondary AWS Region. The operations team wants a fully managed, scalable solution with minimal custom code or scripts. Which approach satisfies the requirements with the LEAST operational effort?
Use EC2 Image Builder to create daily AMIs for the instances, share the AMIs to the secondary Region, and configure lifecycle policies to delete images after 35 days.
Create an AWS Backup plan that selects the EC2 instances by tag, enables default KMS encryption, sets a 35-day retention rule, and configures cross-Region copy to a backup vault in the secondary Region.
Attach a Data Lifecycle Manager policy to each EBS volume to create encrypted daily snapshots, retain them for 35 days, and enable cross-Region copy.
Schedule an AWS Lambda function with EventBridge that calls the CreateSnapshot API for each EBS volume, encrypts the snapshot, copies it to the target Region, and deletes snapshots older than 35 days.
Answer Description
AWS Backup provides a central, fully managed service that natively supports EC2 instance and EBS volume backups. A backup plan can be targeted to resources by tag, apply default KMS encryption, set a 35-day retention rule, and create automatic cross-Region copy jobs without any custom code. Data Lifecycle Manager can also automate snapshots, but it requires a separate policy per Region and does not manage cross-account backups or other resource types from one place, which adds overhead. A custom Lambda solution or an EC2 Image Builder pipeline could achieve the goals but would require writing and maintaining code and schedules, increasing operational burden. Using AWS Backup is therefore the lowest-effort, fully managed option that meets all requirements.
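As a sketch of that plan in boto3: the vault names, IAM role, schedule, and tag key are placeholders, and the destination vault in the secondary Region is assumed to already exist.

```python
import boto3

backup = boto3.client("backup")

# Daily rule with 35-day retention and an automatic cross-Region copy.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ec2-ebs",
        "Rules": [{
            "RuleName": "daily-35d",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [{
                "DestinationBackupVaultArn": (
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
                ),
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }],
    }
)

# Select resources by tag so newly launched instances are included automatically.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "Backup",
            "ConditionValue": "daily",
        }],
    },
)
```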
Ask Bash
What is AWS Backup and how does it help manage EBS volume backups?
How does cross-Region backup copy work in AWS Backup?
Why is AWS Backup preferred over Data Lifecycle Manager for this scenario?
A company is running a business-critical on-premises PostgreSQL database. The team plans to migrate it to AWS and must meet the following requirements:
- Automatic failover must complete in less than 35 seconds if an Availability Zone becomes unavailable.
- The application must continue to use a single writer endpoint with no DNS or connection-string changes during failover.
- The solution must add read capacity with minimal application changes and keep operational costs as low as possible.
Which migration strategy will best meet these requirements?
Create an Amazon RDS PostgreSQL Single-AZ instance and add two read replicas in different AZs, then enable automatic promotion on failure.
Launch a standard Amazon RDS PostgreSQL instance-based Multi-AZ deployment and add an external read replica for read scaling.
Lift-and-shift the PostgreSQL database to two self-managed Amazon EC2 instances in separate AZs using EBS Multi-Attach for shared storage.
Migrate to an Amazon RDS PostgreSQL Multi-AZ DB cluster with one writer and two readable standbys across three AZs.
Answer Description
Amazon RDS Multi-AZ DB clusters (supported for MySQL 8.0 and PostgreSQL 13 or later) create one writer and two readable standby DB instances in three separate AZs. The cluster exposes a single writer endpoint that automatically points to the promoted standby after a failure, typically in under 35 seconds. Because the two standbys are already provisioned and can accept read traffic, the same cluster adds read capacity without needing to create separate read replicas or change the application's write endpoint. Using fully managed Amazon RDS keeps administrative overhead and licensing costs low compared with self-managed databases on Amazon EC2. Standard instance-based Multi-AZ deployments meet the endpoint requirement but have slower failover and no built-in readers. Read-replica architectures do not provide automatic failover within 35 seconds. Self-managed EC2 deployments require the application to handle failover and incur higher operational cost.
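A thumbnail boto3 sketch of creating such a cluster. The engine version, instance class, and storage settings shown are placeholders only; supported combinations for Multi-AZ DB clusters vary and should be confirmed in the RDS documentation before use.

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ DB cluster: one writer plus two readable standbys behind a single
# writer endpoint. Values below are illustrative placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="payments-pg",
    Engine="postgres",
    EngineVersion="15.4",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,   # store the password in Secrets Manager
    DBClusterInstanceClass="db.m6gd.large",
    StorageType="io1",
    Iops=3000,
    AllocatedStorage=400,
)
```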
Ask Bash
What is an Amazon RDS Multi-AZ DB cluster?
How does failover work in an RDS Multi-AZ DB cluster?
Why is an RDS Multi-AZ DB cluster better suited for high availability than instance-based Multi-AZ deployments?
A company's Dev account runs an application on Amazon EC2 that must read an encrypted parameter stored in AWS Systems Manager Parameter Store in the company's SharedServices account. Storing static credentials on the instance is prohibited. Which solution provides secure, least-privilege cross-account access while removing the need for long-lived credentials?
Enable resource-based policies for Parameter Store and add the Dev account's root as a principal with ssm:GetParameter permission; continue using the existing EC2 role without changes.
Attach an inline policy to the EC2 instance role in Dev that grants ssm:GetParameter on the parameter's ARN; no other configuration is needed.
Create access keys for a new IAM user in SharedServices that has ssm:GetParameter permission and store the keys as environment variables on the EC2 instance.
Create an IAM role in SharedServices that allows ssm:GetParameter on the required parameter and trust principals from the Dev account. Update the EC2 instance's role to call sts:AssumeRole for that role and use the returned temporary credentials.
Answer Description
The secure way to delegate access across AWS accounts is to create an IAM role in the owning account and configure a trust policy that allows a principal in the other account to assume that role. The role grants only the permissions required, in this case ssm:GetParameter on the specific parameter. The EC2 instance in the Dev account is associated with an instance-profile role that has permission to call sts:AssumeRole on the target role's ARN. The temporary credentials returned by STS let the application read the parameter without storing long-lived keys. Creating users or sharing access keys violates best practices, and adding permissions to the Dev role alone will not satisfy the cross-account trust requirement because the Parameter Store resource lives in a different account.
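A minimal sketch of the application-side flow, assuming the SharedServices role and parameter name shown here as placeholders:

```python
import boto3

# Assume the SharedServices role using the Dev-account instance role's
# temporary credentials (provided automatically via the instance profile).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/SharedParameterReader",
    RoleSessionName="dev-app",
)["Credentials"]

# Use the returned short-lived credentials to read the encrypted parameter.
ssm = boto3.client(
    "ssm",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
value = ssm.get_parameter(Name="/shared/db/password", WithDecryption=True)
print(value["Parameter"]["Value"])
```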
Ask Bash
What is the purpose of sts:AssumeRole in cross-account access?
Why is storing static credentials on an EC2 instance considered a bad practice?
How does AWS Systems Manager Parameter Store support secure access to parameters across accounts?
Your team must log all DNS queries from a VPC. You create an Amazon Route 53 Resolver query logging configuration, select a new CloudWatch Logs log group as the destination, and attempt to associate the configuration with the VPC. The console shows "AccessDeniedException - unable to create log stream". Which action enables Route 53 Resolver to deliver query logs to CloudWatch Logs and adheres to AWS best practices?
Change the log group's KMS CMK to the AWS-managed /aws/logs key so the service can encrypt incoming data.
Enable VPC Flow Logs for the VPC and point the flow logs to the same log group.
Re-create the query logging configuration but choose an S3 bucket destination instead of CloudWatch Logs.
Attach a resource policy to the CloudWatch Logs log group that allows the route53.amazonaws.com service to run logs:CreateLogStream and logs:PutLogEvents on that log group.
Answer Description
Route 53 Resolver cannot create log streams in a CloudWatch Logs log group unless a resource-based policy on that log group grants the service permission. Attaching a policy that lists the log group ARN, specifies the route53.amazonaws.com service principal, and allows the logs:CreateLogStream and logs:PutLogEvents actions lets Resolver write the logs. Enabling VPC Flow Logs, changing the KMS key, or switching to an S3 destination does not correct the CloudWatch permission failure.
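For illustration, a boto3 sketch of attaching such a resource policy; the log group name and account ID are placeholders.

```python
import json
import boto3

logs = boto3.client("logs")

# Resource policy letting Route 53 Resolver create streams and put events
# in the query-log group.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/route53/query-logs:*",
    }],
}

logs.put_resource_policy(
    policyName="route53-query-logging",
    policyDocument=json.dumps(policy),
)
```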
Ask Bash
What is the purpose of a resource policy in AWS?
What are the logs:CreateLogStream and logs:PutLogEvents actions?
Why doesn't enabling VPC Flow Logs or switching to an S3 destination work in this scenario?
An ecommerce company runs its web tier in an Auto Scaling group behind an Application Load Balancer. CPU utilization rises above 80% for two hours starting at 12:00 PM each weekday, causing slow pages. The team wants CPU to stay under 60% during the surge without over-provisioning at other times. Which solution meets these requirements?
Add two scheduled actions to the Auto Scaling group: raise desired capacity a few minutes before noon on weekdays and reduce capacity shortly after the peak ends.
Enable predictive scaling on the Auto Scaling group and keep the target tracking policy at 60 percent CPU utilization.
Lower the CPU utilization target in the existing target-tracking policy from 60 percent to 40 percent so instances launch sooner.
Purchase Standard Reserved Instances equal to the peak capacity and disable scale-in on the Auto Scaling group.
Answer Description
Scheduled actions let you scale proactively at known times. Creating one action that raises the Auto Scaling group's desired (and optionally minimum) capacity a few minutes before the lunchtime spike, and a second action that lowers capacity after the spike, ensures instances are running when demand arrives and are terminated when they are no longer needed. This keeps CPU utilization under the 60 percent target and avoids paying for extra instances the rest of the day.
Predictive scaling uses machine-learning forecasts but needs historical data and may scale earlier or later than desired; it adds complexity without clear benefit when the spike occurs at a fixed time. Lowering the target in the existing policy still reacts after the spike begins, so pages will continue to slow. Buying Reserved Instances and disabling scale-in provides capacity but wastes money outside the two-hour window and does not adjust to future changes in load.
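A minimal boto3 sketch of the two scheduled actions; the group name, capacities, and recurrence times (expressed in UTC cron) are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the weekday lunchtime spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="weekday-noon-scale-out",
    Recurrence="45 11 * * 1-5",
    MinSize=8,
    DesiredCapacity=8,
)

# Scale back in after the peak ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="weekday-noon-scale-in",
    Recurrence="15 14 * * 1-5",
    MinSize=2,
    DesiredCapacity=2,
)
```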
Ask Bash
What are Auto Scaling scheduled actions?
How is predictive scaling different from scheduled actions?
Why is lowering the target utilization threshold in a tracking policy not effective in this case?
An Amazon RDS for PostgreSQL database running on a db.t3.medium instance shows sustained high DB load. Performance Insights issues a proactive recommendation stating that the CPU wait dimension is saturated. Which modification best follows the recommendation to increase performance efficiency?
Create a read replica in another Availability Zone for analytic traffic.
Enable storage autoscaling and double the gp2 volume size.
Scale the instance to a larger class such as db.m6g.large.
Turn on automatic minor version upgrades to apply the latest patch.
Answer Description
When CPU is identified as the dominant wait dimension, Performance Insights proactive recommendations advise adding compute capacity. Scaling the DB instance class to a larger size (for example, moving from a burstable db.t3.medium to a compute-optimized or general-purpose db.m6g.large) directly increases available vCPUs and memory, reducing CPU saturation. Enlarging storage, creating replicas, or applying patches can help other bottlenecks but do not address immediate CPU exhaustion flagged by the recommendation.
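The scale-up itself is a single API call; a boto3 sketch (instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Move to a larger instance class; ApplyImmediately avoids waiting for the
# next maintenance window (a brief failover/restart still occurs).
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    DBInstanceClass="db.m6g.large",
    ApplyImmediately=True,
)
```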
Ask Bash
What is CPU wait dimension in RDS Performance Insights?
How do instance classes like db.t3.medium or db.m6g.large differ?
Why does scaling the instance improve performance for CPU saturation?
Your team's CodeBuild project executes terraform apply for multiple feature branches, using an S3 backend that stores a single shared state file. When two builds run at the same time, the state becomes corrupted. You must prevent concurrent writes while continuing to use the S3 backend and keep cost low. Which modification addresses this requirement?
Move the Terraform state to AWS Systems Manager Parameter Store by using the secureString type.
Enable versioning on the S3 bucket to recover previous versions of the state file.
Change the backend to local so each CodeBuild job writes its own state file in the build container.
Configure a DynamoDB table for state locking and reference it with the dynamodb_table argument in the S3 backend.
Answer Description
The S3 backend can use a DynamoDB table for state locking. When terraform plan or apply commands run, Terraform writes a lock item to the table. If another process already holds the lock, the second process cannot acquire it and stops (or retries until a configured lock timeout expires), preventing simultaneous writes to the same state file. Enabling S3 versioning (distractor) protects previous versions but does not stop concurrent updates. Switching to the local backend isolates state but breaks collaboration and does not meet the requirement to keep using S3. Parameter Store is not a supported Terraform backend, so it cannot manage state at all.
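The lock table itself only needs a string partition key named LockID. A boto3 sketch of creating it (table name is a placeholder); the Terraform backend block would then reference this table through its dynamodb_table argument.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Terraform's S3 backend locks on an item keyed by the string attribute
# "LockID". On-demand billing keeps cost negligible for lock traffic.
dynamodb.create_table(
    TableName="terraform-locks",
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```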
Ask Bash
How does DynamoDB state locking prevent state corruption in Terraform?
Why can't enabling S3 versioning address concurrent update issues?
Why is AWS Systems Manager Parameter Store not suitable for Terraform state management?
Your operations team has been streaming AWS WAF web ACL logs through an Amazon Kinesis Data Firehose delivery stream to CloudWatch Logs. A recent AWS update now lets one network-protection service send its full rule-match logs straight to a CloudWatch Logs log group, allowing you to retire the Firehose stream. Which service gained this direct logging capability?
Route 53 Resolver DNS Firewall
AWS WAF web ACL
AWS Shield Advanced
AWS Network Firewall
Answer Description
As of December 6, 2021, AWS WAF can deliver web ACL traffic logs directly to a CloudWatch Logs log group or an Amazon S3 bucket. Before this release, AWS WAF required an Amazon Kinesis Data Firehose stream as an intermediary. Network Firewall and Route 53 Resolver DNS Firewall already supported direct delivery to CloudWatch Logs, so they do not represent a new capability. Therefore, AWS WAF is the service that now lets you eliminate the extra Firehose component.
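A boto3 sketch of pointing web ACL logging straight at CloudWatch Logs; both ARNs are placeholders, and AWS WAF expects the destination log group name to begin with "aws-waf-logs-".

```python
import boto3

wafv2 = boto3.client("wafv2")

# Direct logging to CloudWatch Logs, replacing the Firehose intermediary.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": ("arn:aws:wafv2:us-east-1:111122223333:"
                        "regional/webacl/shop-acl/a1b2c3d4"),
        "LogDestinationConfigs": [
            "arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs-shop"
        ],
    }
)
```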
Ask Bash
What is AWS WAF, and what does it do?
How does AWS WAF deliver logs to CloudWatch Logs?
What are web ACLs in AWS WAF?
An operations engineer just created an Amazon S3 bucket in a new AWS account. Minutes later security tooling reports that a principal from another AWS account can read objects in the bucket, even though the engineer believes cross-account access is blocked. The engineer must quickly identify which policy grants this external access without adding logging or extra scanning services. Which solution meets these requirements?
Create an account-level IAM Access Analyzer in the Region and review its findings for the bucket to see the policy statement permitting external access.
Use AWS CloudTrail Lake to run a query on recent GetObject events and trace the IAM policies attached to the calling principal.
Enable Amazon S3 server access logging on the bucket and manually inspect the log files to determine which policy was evaluated.
Run an Amazon Macie bucket assessment and use the generated Policy Findings report to locate the offending statement.
Answer Description
IAM Access Analyzer continuously evaluates resource-based policies and produces a finding whenever a resource is shared outside the account. The finding shows the bucket ARN, the external principal, and the exact policy statement that allows the access, enabling rapid remediation with no additional instrumentation. S3 server access logs, CloudTrail Lake queries, and Amazon Macie assessments can reveal who accessed the bucket, but they do not automatically analyze policies or pinpoint the statement that granted the permission, and they require extra configuration or cost.
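A short boto3 sketch of creating the account-level analyzer and filtering its findings to the bucket; the analyzer name and bucket ARN are placeholders.

```python
import boto3

analyzer = boto3.client("accessanalyzer")

# Create an account-scoped analyzer (zone of trust = this account).
created = analyzer.create_analyzer(analyzerName="account-analyzer", type="ACCOUNT")

# Pull findings for the specific bucket.
findings = analyzer.list_findings(
    analyzerArn=created["arn"],
    filter={"resource": {"eq": ["arn:aws:s3:::example-bucket"]}},
)
for finding in findings["findings"]:
    # Each finding names the external principal, the allowed actions, and any
    # conditions from the policy statement granting the access.
    print(finding.get("principal"), finding.get("action"), finding.get("condition"))
```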
Ask Bash
What is IAM Access Analyzer and how does it work?
Why is CloudTrail Lake not recommended for this scenario?
How does IAM Access Analyzer compare to Amazon Macie in this scenario?
During migration of a genomics analysis pipeline, a research team launches hundreds of Amazon EC2 Linux Spot instances for several hours. The workload reads large datasets from an Amazon S3 bucket, needs a POSIX-compliant shared file system that provides over 100 GB/s aggregate throughput with sub-millisecond latency, and must write results back to the same bucket. Which shared storage solution meets these performance requirements at the lowest cost?
Stripe multiple Amazon EBS gp3 volumes in RAID 0 on one EC2 instance and export the file system over NFS to the fleet.
Deploy an Amazon FSx for Windows File Server Multi-AZ file system with SSD storage and mount it on the EC2 instances.
Create an Amazon FSx for Lustre file system linked to the S3 bucket and delete the file system after each pipeline execution.
Provision an Amazon EFS file system in Provisioned Throughput mode and enable Lifecycle Management to reduce storage cost.
Answer Description
Amazon FSx for Lustre is purpose-built for high-performance compute workloads. A file system linked to an S3 bucket can deliver hundreds of gigabytes per second of throughput and sub-millisecond latencies, while transparently synchronizing data to and from S3. Because the pipeline is short-lived, the team can delete the file system after each run and pay only for the hours used.
Amazon EFS, even in its latest throughput modes, maxes out at tens of gigabytes per second and cannot reach the required 100 GB/s. Amazon FSx for Windows File Server provides SMB rather than POSIX semantics and is optimized for Windows workloads, not large-scale HPC throughput. Sharing RAID-0 EBS volumes over NFS from a single instance creates a performance bottleneck, lacks the required aggregate throughput, and introduces a single point of failure.
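A hedged boto3 sketch of the short-lived, S3-linked file system; the bucket, subnet, storage capacity (which drives aggregate throughput), and deployment type are placeholders to size against the actual workload.

```python
import boto3

fsx = boto3.client("fsx")

# Scratch FSx for Lustre file system linked to the dataset bucket.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,                      # GiB; scale up for more throughput
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://genomics-datasets",          # read inputs from S3
        "ExportPath": "s3://genomics-datasets/results",  # write results back
    },
)

# After the pipeline run completes, delete the file system to stop charges.
fsx.delete_file_system(FileSystemId=fs["FileSystem"]["FileSystemId"])
```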
Ask Bash
What is Amazon FSx for Lustre and why is it suitable for high-performance compute workloads?
How does linking an FSx for Lustre file system with an Amazon S3 bucket work, and what are the benefits?
Why is Amazon EFS or other listed storage solutions not suitable for this workload?
An e-commerce company runs a stateful payment service on an Auto Scaling group of Amazon EC2 instances. The CloudWatch agent publishes the mem_used_percent metric from each instance. When the mem_used_percent metric exceeds 90% for 2 consecutive minutes, the company wants the affected instance to be gracefully rebooted through the AWS-RestartEC2Instance Systems Manager Automation runbook. Which approach requires the LEAST operational overhead?
Install a cron script on every instance that checks the mem_used_percent metric each minute and calls Systems Manager Run Command to reboot the instance when the threshold is reached.
Create a CloudWatch alarm for the mem_used_percent metric on each instance. Configure an EventBridge rule to be triggered by the alarm state change, invoking the AWS-RestartEC2Instance Systems Manager Automation runbook with the alarmed instance ID as a target.
Attach a step-scaling policy that uses the mem_used_percent metric to increase desired capacity by one, allowing Auto Scaling to terminate and replace the over-utilized instance.
Create an SSM State Manager association that runs the AWS-RunShellScript document every minute to evaluate memory usage and reboot the instance if the threshold is breached.
Answer Description
A CloudWatch alarm can evaluate the per-instance mem_used_percent metric and change to the ALARM state when the threshold is breached. CloudWatch automatically emits an Alarm State Change event to EventBridge, where a rule can be configured to target the AWS-RestartEC2Instance runbook and pass the alarmed instance ID as input. This serverless, event-driven workflow requires no custom code or continuous polling and is the most efficient solution.
The cron-based script and the State Manager association are polling-based solutions that demand agent-side maintenance and custom logic on every instance, resulting in higher operational overhead. A step-scaling policy replaces, rather than reboots, the instance and cannot guarantee the preservation of in-memory state, so it does not meet the reboot requirement.
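As a rough sketch of the EventBridge wiring only: the automation-definition ARN format, the IAM role, and especially the input-transformer path used to extract the instance ID from the alarm event are assumptions to validate against a sample event and the EventBridge target documentation.

```python
import json
import boto3

events = boto3.client("events")

# Rule that matches the memory alarm entering the ALARM state.
events.put_rule(
    Name="restart-on-high-memory",
    EventPattern=json.dumps({
        "source": ["aws.cloudwatch"],
        "detail-type": ["CloudWatch Alarm State Change"],
        "detail": {"state": {"value": ["ALARM"]}},
    }),
)

# Target the AWS-RestartEC2Instance runbook, passing the alarmed instance ID.
events.put_targets(
    Rule="restart-on-high-memory",
    Targets=[{
        "Id": "restart-runbook",
        "Arn": ("arn:aws:ssm:us-east-1:111122223333:"
                "automation-definition/AWS-RestartEC2Instance:$DEFAULT"),
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-start-automation",
        "InputTransformer": {
            # JSONPath into the alarm event is an assumption; verify it.
            "InputPathsMap": {
                "instance": "$.detail.configuration.metrics[0].metricStat.metric.dimensions.InstanceId"
            },
            "InputTemplate": '{"InstanceId": ["<instance>"]}',
        },
    }],
)
```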
Ask Bash
What is an EventBridge rule and how does it work with a CloudWatch alarm?
What is the AWS-RestartEC2Instance Systems Manager Automation runbook?
Why is a serverless, event-driven approach preferred over polling-based solutions?
A company uses AWS Organizations and has a dedicated shared-services account operated by the network team. The team must deploy the same VPC CloudFormation template to all existing and future member accounts in us-east-1 and us-west-2. Operations leadership requires that:
- The network team manages the deployments from the shared-services account only.
- Stacks are automatically created in any new account that joins the organization.
Which approach meets these requirements while following AWS best practices?
Create a CloudFormation StackSet in the management account using service-managed permissions, designate the shared-services account as a delegated administrator, target the appropriate OU, and enable automatic deployments to us-east-1 and us-west-2.
Implement AWS CDK pipelines configured in each member account that trigger on AWS Control Tower lifecycle events to deploy the VPC stack to both Regions.
In the shared-services account, deploy individual CloudFormation stacks in each Region and share the VPC subnets to member accounts with AWS Resource Access Manager.
Create a CloudFormation StackSet with self-managed permissions, manually create the required IAM roles in every member account, and run a scheduled script to add new accounts to the StackSet when they appear.
Answer Description
CloudFormation StackSets with service-managed permissions integrate directly with AWS Organizations. By registering the shared-services account as a delegated administrator, the network team can create and manage StackSets without access to the management or member accounts. Targeting an OU and enabling automatic deployments causes stacks to be created in every existing account in the OU and in any future accounts as they are added. The StackSet automatically handles deployment to the specified Regions (us-east-1 and us-west-2).
The other options fall short:
- Self-managed StackSets require an IAM administration role in every account and do not automatically include new accounts, so manual scripting would be needed.
- Creating individual stacks and sharing resources through AWS RAM does not satisfy the requirement for automatic deployment to future accounts and adds operational overhead.
- Separate CDK pipelines in each account introduce unnecessary complexity and still require onboarding for future accounts; they also fail to centralize management in the shared-services account.
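Following the service-managed approach described above, a minimal boto3 sketch run from the delegated-administrator (shared-services) account; the StackSet name, template URL, and OU ID are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")

# Service-managed StackSet with automatic deployment to new OU accounts.
cfn.create_stack_set(
    StackSetName="org-baseline-vpc",
    TemplateURL="https://example-bucket.s3.amazonaws.com/vpc.yaml",
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    CallAs="DELEGATED_ADMIN",
)

# Deploy to every current and future account in the OU, in both Regions.
cfn.create_stack_instances(
    StackSetName="org-baseline-vpc",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11112222"]},
    Regions=["us-east-1", "us-west-2"],
    CallAs="DELEGATED_ADMIN",
)
```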
Ask Bash
What is a CloudFormation StackSet with service-managed permissions?
What does it mean to designate a delegated administrator in AWS Organizations?
What is the purpose of targeting an Organizational Unit (OU) in a CloudFormation StackSet deployment?
A company must deliver an updated, hardened Docker image for a Java microservice every month. The solution must automatically start from the latest public Amazon Corretto base image, install OS patches and application libraries, run functional tests, perform a vulnerability scan, and then push the approved image to an existing Amazon ECR repository. Operations wants an AWS-managed solution that requires the least ongoing maintenance. Which approach meets these requirements?
Create an EC2 Image Builder container pipeline with a container recipe that extends the Amazon Corretto base image, adds the application layers and tests, sets the ECR repository as the distribution target, and enable Amazon Inspector scanning on the registry.
Run a Jenkins server on Amazon EC2 that executes a pipeline to build, test, scan, and push the image to ECR on a cron schedule.
Use AWS App2Container to repackage the Java application and rely on Amazon ECS to pull the latest image at deployment time.
Configure a weekly Amazon EventBridge rule to trigger an AWS CodeBuild project that executes docker build and docker push commands and runs an open-source vulnerability scanner inside the buildspec.
Answer Description
EC2 Image Builder natively supports container pipelines. A container recipe can extend a public Amazon Corretto base image, run build and test components that apply updates, and then distribute the resulting image directly to an Amazon ECR repository. When Amazon Inspector container scanning is enabled on the registry, every pushed image is automatically scanned for CVEs, satisfying the vulnerability-assessment requirement without additional custom code. The entire workflow is managed and scheduled by Image Builder, so no servers or bespoke build scripts need to be maintained. The other options rely on self-managed tooling (Jenkins), custom buildspecs, or repurposing App2Container, all of which introduce additional operational overhead and complexity.
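A thumbnail boto3 sketch of scheduling the monthly pipeline, assuming the container recipe, infrastructure configuration, and ECR-targeting distribution configuration were created separately; all ARNs are placeholders.

```python
import boto3

imagebuilder = boto3.client("imagebuilder")

# Monthly run (1st of each month); the recipe extends the Corretto base image
# and the distribution configuration pushes the approved image to ECR.
imagebuilder.create_image_pipeline(
    name="corretto-service-monthly",
    containerRecipeArn=("arn:aws:imagebuilder:us-east-1:111122223333:"
                        "container-recipe/corretto-service/1.0.0"),
    infrastructureConfigurationArn=("arn:aws:imagebuilder:us-east-1:111122223333:"
                                    "infrastructure-configuration/build-infra"),
    distributionConfigurationArn=("arn:aws:imagebuilder:us-east-1:111122223333:"
                                  "distribution-configuration/push-to-ecr"),
    schedule={
        "scheduleExpression": "cron(0 4 1 * ? *)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY",
    },
    clientToken="corretto-service-monthly-001",
)
```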
Ask Bash
What is EC2 Image Builder, and how does it support container pipelines?
How does Amazon Inspector integrate with Amazon ECR to perform vulnerability scans?
Why are the other proposed solutions less optimal compared to EC2 Image Builder?
A Linux-based EC2 instance in a production VPC hosts a MySQL OLTP database on a 500 GiB gp2 EBS volume. CloudWatch shows regular spikes above 100 ms volume latency, a VolumeQueueLength greater than 60, and average read/write IOPS near 8,000. The operations team must reduce latency immediately, avoid any downtime, and keep storage costs as low as possible. Which action meets these requirements?
Use Elastic Volumes to convert the existing gp2 volume to gp3 and provision 12,000 IOPS with 500 MiB/s throughput.
Modify the volume to io2 Block Express and provision 16,000 IOPS and 1,000 MiB/s throughput.
Purchase additional I/O credit bundles to extend the gp2 burst duration during peak hours.
Change the volume type to st1 throughput-optimized HDD to increase throughput at a lower price.
Answer Description
The gp3 volume type decouples capacity from performance and is priced about 20% lower than gp2 while offering a default 3,000 IOPS that can be provisioned up to 16,000 IOPS and 1,000 MiB/s. Using the Elastic Volumes feature, the team can modify the existing gp2 volume to gp3 and set higher IOPS and throughput online, so no instance stop, snapshot, or re-attach is required. Migrating to io2 or io2 Block Express would also reduce latency, but those volumes are significantly more expensive and therefore do not satisfy the cost constraint. Purchasing burst credits for gp2 is not possible; the volume automatically earns credits based on size. Converting to st1 lowers costs but is optimized for large sequential throughput and would increase latency for random OLTP workloads.
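The online modification is a single Elastic Volumes call; a boto3 sketch (volume ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Convert gp2 to gp3 in place and provision the required performance; the
# volume stays attached and in use while the modification proceeds.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp3",
    Iops=12000,
    Throughput=500,
)
```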
Ask Bash
What is the difference between gp2 and gp3 volumes in AWS?
What is Elastic Volumes in AWS, and how does it work?
Why is io2 Block Express not suitable for this scenario despite better performance?
A company runs a single Amazon EC2 instance that periodically spikes to 90% CPUUtilization for more than 5 minutes, causing slow response times. The operations team wants an automated, auditable remediation that upgrades the instance to the next larger size whenever this threshold is reached, without writing or maintaining custom code. Which solution will meet these requirements?
Define an Auto Scaling scheduled action that replaces the instance with a larger type during the expected peak periods.
Subscribe the CloudWatch alarm to an SNS topic that triggers a Lambda function. Have the function call the ModifyInstanceAttribute API to change the instance type.
Enable EC2 Auto Recovery on the instance so that hardware replacement automatically launches a larger instance when CPU thresholds are exceeded.
Create a CloudWatch alarm for CPUUtilization > 85% for 5 minutes. Add an EventBridge rule for the alarm's ALARM state that invokes a Systems Manager Automation runbook which stops the instance, changes it to the next larger type, and starts it.
Answer Description
Creating a CloudWatch alarm provides the performance signal. Routing the alarm state-change event through EventBridge lets other AWS services react to that condition. An EventBridge rule can directly invoke a Systems Manager Automation runbook. The runbook can stop the instance, modify its instance type to the next larger family size, and restart it, while Systems Manager records every step for auditing. Scheduled actions do not react to real-time metrics, Auto Recovery only recovers the same instance type, and implementing the logic in Lambda requires additional custom code that the team wants to avoid.
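For the alarm side of that workflow, a boto3 sketch matching the 85% / 5-minute condition; the instance ID and alarm name are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm that supplies the performance signal; the EventBridge rule and the
# Systems Manager Automation runbook react to its ALARM state.
cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-85",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,             # 1-minute periods
    EvaluationPeriods=5,   # sustained for 5 minutes
    Threshold=85,
    ComparisonOperator="GreaterThanThreshold",
)
```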
Ask Bash
What is a CloudWatch alarm, and how does it help monitor instances?
What is EventBridge, and how does it help automate operations in AWS?
What does a Systems Manager Automation runbook do in this solution?
A company's VPC has a dual-stack private subnet containing EC2 instances. These instances need to initiate outbound connections to the IPv6 internet to download software updates. However, they must not be directly reachable from the internet over IPv6. The subnet's route table already directs some traffic to an on-premises network. Which single route should an administrator add to the subnet's route table to meet these requirements?
Add route 0.0.0.0/0 that targets the VPC's internet gateway
Add route 0.0.0.0/0 that targets an egress-only internet gateway
Add route ::/0 that targets the VPC's internet gateway
Add route ::/0 that targets an egress-only internet gateway attached to the VPC
Answer Description
Instances in a private subnet need a path to the public IPv6 internet for tasks like downloading updates. An egress-only internet gateway provides outbound-only IPv6 connectivity and is stateful, allowing return traffic for initiated connections but blocking unsolicited inbound traffic. To enable this, a route for the IPv6 default prefix (::/0) must be added to the subnet's route table, with the egress-only internet gateway as its target. Pointing ::/0 at a regular internet gateway would allow inbound IPv6 traffic, violating the requirements. Routes for 0.0.0.0/0 only affect IPv4 traffic and would not solve the IPv6 connectivity issue.
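A boto3 sketch of creating the gateway and the route; the VPC and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the egress-only internet gateway for the VPC.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")

# Add a default IPv6 route (::/0) through it in the subnet's route table.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"],
)
```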
Ask Bash
What is an egress-only internet gateway in AWS?
Why use ::/0 for IPv6 routing instead of 0.0.0.0/0?
How does an egress-only internet gateway differ from a regular internet gateway?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.