
AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)

Use the form below to configure your AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited access
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified CloudOps Engineer Associate SOA-C03 Information

The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.

The exam covers the full lifecycle of cloud operations through five key domains: Monitoring and Performance, Reliability and Business Continuity, Deployment and Automation, Security and Compliance, and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You’ll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.

Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

  • Free AWS Certified CloudOps Engineer Associate SOA-C03 Practice Test
  • 20 Questions
  • Unlimited
  • Monitoring, Logging, Analysis, Remediation, and Performance Optimization
  • Reliability and Business Continuity
  • Deployment, Provisioning, and Automation
  • Security and Compliance
  • Networking and Content Delivery

Free Preview

This test is a free preview; no account is required.

Question 1 of 20

An EC2 m6i.large instance copies a 2 TB tar file to an S3 bucket with the command aws s3 cp /data/archive.tar s3://corp-logs/. CloudWatch network metrics show the instance can sustain 8 Gbps, but the transfer stalls around 500 Mbps and uses only one TCP connection. Without changing the instance type or writing custom code, which AWS CLI adjustment will MOST increase upload throughput?

  • Add an S3 transfer configuration in ~/.aws/config such as:

    [default]
    s3 =
      multipart_threshold = 64MB
      max_concurrent_requests = 50
    
  • Set multipart_chunksize = 5MB to create many smaller parts during the upload.

  • Turn off enhanced networking on the instance to eliminate driver overhead.

  • Use --storage-class GLACIER in the cp command so the object uploads into the GLACIER storage class.
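
For context, the transfer tuning described in the first option can be extended with a larger part size. A minimal sketch of the documented form in ~/.aws/config, with illustrative values rather than tuned recommendations:

    [default]
    s3 =
      multipart_threshold = 64MB
      multipart_chunksize = 256MB
      max_concurrent_requests = 50

S3 multipart uploads are capped at 10,000 parts, so a 2 TB object needs parts of at least roughly 205 MiB; a very small chunk size such as 5 MB would exceed the part limit long before the upload completes.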

Question 2 of 20

An operations team reviews CloudWatch metrics for a 4 TiB io1 volume provisioned with 20,000 IOPS that backs a bursty analytics workload. During peak hours, VolumeReadOps stays below 500 IOPS and VolumeQueueLength remains under 1. Management wants to cut storage costs without impacting current performance. What is the MOST cost-effective change?

  • Replace the io1 volume with a 4 TiB gp3 volume using the default 3,000 IOPS and 125 MiB/s throughput.

  • Lower the provisioned IOPS on the io1 volume from 20,000 to 2,000.

  • Convert the volume to an st1 throughput-optimized HDD volume.

  • Convert the volume to a gp2 general-purpose SSD volume of the same size.
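
For reference, an io1 volume can be migrated to gp3 in place, without detaching it, using modify-volume. A sketch with a hypothetical volume ID; gp3 defaults to 3,000 IOPS and 125 MiB/s when --iops and --throughput are omitted:

    aws ec2 modify-volume \
        --volume-id vol-0123456789abcdef0 \
        --volume-type gp3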

Question 3 of 20

A company runs a long-running scientific simulation on a single Amazon EC2 instance. The CloudWatch agent publishes a custom MemoryUtilization metric. If memory usage stays above 90 percent for 5 consecutive minutes, an existing Systems Manager Automation runbook must clear application caches on that same instance automatically, without manual intervention. Which approach meets these requirements with the least operational overhead?

  • Place the instance in an Auto Scaling group with a step-scaling policy based on MemoryUtilization and use a lifecycle hook to run the cache-clearing runbook when the group scales out.

  • Create a CloudWatch alarm and an EventBridge rule that invokes an AWS Lambda function. The function reads the InstanceId from the event and calls StartAutomationExecution to run the cache-clearing runbook on that instance.

  • Create a CloudWatch alarm for MemoryUtilization > 90 percent for 5 datapoints. Add an EventBridge rule that filters for the alarm's ALARM state and sets the existing Systems Manager Automation runbook as the target. Use an input transformer to pass the InstanceId from the event to the runbook.

  • Define a Systems Manager Maintenance Window that executes the cache-clearing runbook every 10 minutes, with a pre-task script that exits if memory usage is below 90 percent.
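
For context, CloudWatch alarm state changes are published to EventBridge as events with the detail type "CloudWatch Alarm State Change". A sketch of the event pattern such a rule might use (the alarm name is hypothetical):

    {
      "source": ["aws.cloudwatch"],
      "detail-type": ["CloudWatch Alarm State Change"],
      "detail": {
        "alarmName": ["HighMemoryUtilization"],
        "state": { "value": ["ALARM"] }
      }
    }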

Question 4 of 20

EC2 instances in a private subnet are unable to connect to a public API over HTTPS. The private subnet's route table directs 0.0.0.0/0 traffic to a NAT gateway. The instances' security group allows outbound TCP port 443. VPC flow logs on the instances' network interfaces show 'REJECT' entries for inbound traffic on destination ports 1024-65535. Which action will restore connectivity without making the instances publicly accessible?

  • Attach an internet gateway to the private subnet and add a 0.0.0.0/0 route to it.

  • Update the private subnet's network ACL to allow inbound TCP traffic on ports 1024-65535 from 0.0.0.0/0.

  • Add an inbound rule for TCP port 443 to the EC2 instances' security group.

  • Disable source/destination checking on the NAT gateway's elastic network interface.
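
As background, network ACLs are stateless, so return traffic arrives on ephemeral destination ports and must be explicitly allowed inbound. A sketch of such a rule change, with a hypothetical ACL ID and rule number:

    aws ec2 create-network-acl-entry \
        --network-acl-id acl-0123456789abcdef0 \
        --ingress \
        --rule-number 120 \
        --protocol tcp \
        --port-range From=1024,To=65535 \
        --cidr-block 0.0.0.0/0 \
        --rule-action allow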

Question 5 of 20

After a custom network ACL is applied to a private subnet that hosts EC2 instances calling external SaaS APIs through a NAT gateway, outbound HTTPS traffic fails. The ACL allows outbound TCP 443 to 0.0.0.0/0 and denies all other outbound traffic. Inbound rules allow TCP 22 from 10.0.0.0/16 and TCP 443 from 0.0.0.0/0, then deny all. Which modification will restore connectivity with least privilege?

  • Change the existing outbound rule to allow all protocols to 0.0.0.0/0.

  • Add an outbound allow rule for TCP ports 1024-65535 to 0.0.0.0/0.

  • Add an inbound allow rule for TCP ports 1024-65535 from 0.0.0.0/0.

  • Replace the outbound rule with UDP port 443 to 0.0.0.0/0.

Question 6 of 20

Your company maintains a central monitoring account (us-east-1) with CloudWatch dashboards. You must add widgets that show the CPUUtilization metric of EC2 instances in two production accounts (prod-01, prod-02) in us-west-2. Developers in the monitoring account must be able to view dashboards via console or CLI but must not create, modify, or delete them. No extra infrastructure may be deployed. Which approach meets these needs with minimal operational effort?

  • Generate a CloudWatch dashboard snapshot for each production account and embed the PNG URLs in a new dashboard in the monitoring account. Restrict developers to Amazon S3 read-only access so they cannot update dashboards.

  • Export the CPUUtilization metrics to Amazon S3 with an EventBridge rule, load the data into Amazon QuickSight, and build a cross-account analysis dashboard. Assign developers to a QuickSight reader group.

  • Create a CloudWatch dashboard in each production account and share them with the monitoring account by using AWS Resource Access Manager. Give developers the ReadOnlyAccess AWS-managed policy.

  • Enable CloudWatch cross-account observability to link the two production accounts as source accounts, then create the widgets using the account and Region qualifier (for example, accountId=prod-01). Attach an IAM policy to the developers' role that permits GetDashboard, ListDashboards, GetMetricData, and ListMetrics but not PutDashboard.
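
For reference, a minimal sketch of a view-only dashboard policy along the lines the last option describes; because it omits PutDashboard and DeleteDashboards, developers can read but not change dashboards:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "cloudwatch:GetDashboard",
            "cloudwatch:ListDashboards",
            "cloudwatch:GetMetricData",
            "cloudwatch:ListMetrics"
          ],
          "Resource": "*"
        }
      ]
    }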

Question 7 of 20

An operations engineer installed the CloudWatch agent on several Amazon Linux 2 EC2 instances by using the Systems Manager document AWS-ConfigureAWSPackage. A custom JSON file (shown below) was deployed to each instance and the agent was restarted.

{
  "agent": {"metrics_collection_interval": 60},
  "metrics": {
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
    "aggregation_dimensions": [["InstanceId"]]
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/app/server.log",
            "log_group_name": "app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}

Application logs are now visible in CloudWatch Logs, but no memory or disk space metrics appear in CloudWatch Metrics. What is the simplest way to collect these missing metrics on every instance?

  • Insert mem and disk sections under metrics_collected in the agent JSON file, then restart the CloudWatch agent on each instance.

  • Turn on detailed monitoring for the instances in the EC2 console.

  • Attach the managed policy CloudWatchAgentAdminPolicy to the instance profile role.

  • Edit the AWS-ConfigureAWSPackage document to run the agent in collectd compatibility mode.
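
For context, the metrics block in the agent JSON above contains no metrics_collected section, which is why only the default EC2 metrics appear. A sketch of the stanza that could be added alongside append_dimensions inside the existing metrics block (the measurements shown are common choices, not the only valid ones):

    "metrics_collected": {
      "mem": {"measurement": ["mem_used_percent"]},
      "disk": {"measurement": ["used_percent"], "resources": ["*"]}
    }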

Question 8 of 20

Your company manages infrastructure for multiple AWS accounts using Terraform. You must build a CI/CD pipeline that: validates plans on every commit, stores Terraform state centrally with locking to prevent simultaneous writes, and avoids long-lived credentials in the pipeline environment. Which approach meets these requirements while following AWS and Terraform best practices?

  • Configure an encrypted, versioned S3 bucket with a DynamoDB table for state locking; have CodeBuild assume an environment-specific IAM role via STS and run Terraform with the S3 backend.

  • Store the state file in a CodeCommit repository and enable repository versioning; store each account's access keys in Secrets Manager and inject them into the build environment.

  • Use the local backend on the CodeBuild container and rely on CodePipeline artifact versioning; create a single IAM user with AdministratorAccess and embed its access keys in the buildspec file.

  • Wrap Terraform modules in CloudFormation StackSets and use CloudFormation as the remote backend; pass cross-account role ARNs to CodePipeline through environment variables.
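
For reference, a minimal sketch of the Terraform backend configuration the first option describes, with hypothetical bucket and table names; the DynamoDB table needs a LockID string partition key for state locking:

    terraform {
      backend "s3" {
        bucket         = "corp-terraform-state"
        key            = "prod/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"
        encrypt        = true
      }
    }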

Question 9 of 20

A company runs a fleet of 20 Amazon EC2 m5.4xlarge instances in an Auto Scaling group across two Availability Zones. CloudWatch shows that for the last 14 days, average CPU utilization has been 18 percent and network throughput is consistently low. Memory usage is below 35 percent on all instances. Management asks the CloudOps engineer to reduce EC2 costs while keeping the same two-AZ architecture and leaving application code unchanged. Which action is the MOST cost-effective and requires the LEAST operational effort?

  • Enable AWS Compute Optimizer for the account and apply its rightsizing recommendation to move the Auto Scaling group to smaller burstable performance instances that still meet the observed workload.

  • Configure the Auto Scaling group to launch Spot Instances of the same size in one Availability Zone and On-Demand instances in the other.

  • Purchase one-year Standard Reserved Instances for the existing m5.4xlarge instance type to obtain a discounted hourly rate.

  • Create a target tracking scaling policy to double the desired capacity when CPU exceeds 50 percent and halve it when CPU drops below 20 percent.

Question 10 of 20

Your company runs Linux and Windows EC2 instances spread across three AWS accounts. Operations must collect the instances' memory utilization and a set of custom application log files in Amazon CloudWatch without manually copying configuration files to every server. The team also wants to be able to update the agent configuration from a central location. Which approach satisfies these requirements with the least operational overhead?

  • Install the CloudWatch Logs agent on Linux servers and the unified CloudWatch agent on Windows servers; configure memory metrics later with CloudWatch Metrics Insights queries.

  • Use Systems Manager Run Command with the AmazonCloudWatch-ManageAgent document to install the unified CloudWatch agent on every instance and have each agent load its JSON configuration from an SSM Parameter Store key that the operations team manages.

  • Manually copy the CloudWatch agent configuration file into /opt/aws/amazon-cloudwatch-agent on each instance during user data, then start the agent with the local file path.

  • Enable AWS Config across all accounts to stream operating-system metrics, including memory, into CloudWatch and configure delivery of log files through the same service.
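
For context, a sketch of the Run Command invocation the second option describes, assuming a hypothetical tag target and an agent configuration stored in an SSM parameter named AmazonCloudWatch-linux:

    aws ssm send-command \
        --document-name "AmazonCloudWatch-ManageAgent" \
        --targets "Key=tag:Environment,Values=prod" \
        --parameters '{"action":["configure"],"mode":["ec2"],"optionalConfigurationSource":["ssm"],"optionalConfigurationLocation":["AmazonCloudWatch-linux"],"optionalRestart":["yes"]}'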

Question 11 of 20

An Auto Scaling group runs Linux workloads on c5.9xlarge instances that are evenly distributed across three Availability Zones. Operations reports show that short-lived analytics jobs occasionally saturate the instance's 10 Gbps network bandwidth, causing retries and delays. The team needs a quick, low-risk change that provides at least 20 Gbps per instance without redesigning the architecture. Which action meets these requirements?

  • Move the Auto Scaling group into a cluster placement group that spans the three Availability Zones.

  • Enable jumbo frames (MTU 9001) on every instance network interface.

  • Attach an Elastic Fabric Adapter (EFA) to each instance.

  • Update the launch template to use c5n.9xlarge instances.

Question 12 of 20

A company runs a production Amazon RDS for PostgreSQL db.r5.large instance with 2 vCPUs. After enabling Performance Insights, the operations team notices that query latency rises when the database load exceeds the number of vCPUs. They need an automated Systems Manager runbook to execute whenever this situation persists for 5 minutes, while keeping operational overhead low. Which solution meets the requirement?

  • Create a CloudWatch alarm in the AWS/RDS namespace for the DBLoad metric (statistic: Average, period: 60 seconds, evaluation periods = 5, threshold = 2) and set the alarm action to run the Systems Manager Automation document.

  • Configure a CloudWatch alarm on the instance's CPUUtilization metric with an 80% threshold for 5 minutes and target the Systems Manager runbook.

  • Enable Enhanced Monitoring at 1-second granularity and deploy a Lambda function that polls CPU metrics every minute; if CPUUtilization > 80% for 5 checks, invoke the runbook.

  • Create an RDS event subscription for source type 'db-instance' and event category 'failure'; subscribe an SNS topic that triggers the Systems Manager runbook.
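
For reference, a sketch of a DBLoad alarm matching the thresholds in the first option; the instance identifier is hypothetical, and the alarm action is omitted here:

    aws cloudwatch put-metric-alarm \
        --alarm-name rds-dbload-exceeds-vcpus \
        --namespace AWS/RDS \
        --metric-name DBLoad \
        --dimensions Name=DBInstanceIdentifier,Value=prod-postgres \
        --statistic Average \
        --period 60 \
        --evaluation-periods 5 \
        --threshold 2 \
        --comparison-operator GreaterThanThreshold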

Question 13 of 20

Your team operates a production Amazon EKS cluster that uses managed node groups. You must begin streaming application container logs and granular CPU, memory, disk, and network metrics for every pod to Amazon CloudWatch with minimal ongoing maintenance. You prefer an AWS-managed solution rather than hand-built agents. Which approach meets these requirements?

  • Enable Container Insights for the cluster by using the CloudWatch console or eksctl, which installs the AWS CloudWatch agent and Fluent Bit DaemonSets that forward metrics and logs to CloudWatch.

  • Deploy the Prometheus Operator and Grafana inside the cluster, then configure a community exporter to push scraped metrics to CloudWatch.

  • Add a user-data script to each node group that installs and starts the CloudWatch agent as a systemd service to collect host metrics and the /var/log/containers directory.

  • Turn on control-plane logging for the cluster so that the API server automatically emits all pod metrics and container log streams to CloudWatch Logs.
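
For context, one low-maintenance route to Container Insights on an existing cluster is the Amazon CloudWatch Observability EKS add-on (cluster name hypothetical); the node role also needs CloudWatch permissions such as the CloudWatchAgentServerPolicy managed policy:

    aws eks create-addon \
        --cluster-name prod-cluster \
        --addon-name amazon-cloudwatch-observability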

Question 14 of 20

A CloudOps engineer manages an Auto Scaling group of t3.small instances running a latency-sensitive REST API. The p95 request latency occasionally increases to several seconds even though the CloudWatch CPUUtilization metric never rises above 20%. During the same periods, the CPUCreditBalance metric falls to 0 for every instance. What is the most cost-effective change that resolves the performance issue?

  • Convert the Auto Scaling group to run Spot Instances of the same t3.small instance type.

  • Add a scaling policy that doubles the desired capacity when CPUUtilization exceeds 60%.

  • Replace the t3.small instances with m6i.large instances in the launch template.

  • Modify the launch template so the Auto Scaling group uses T3 Unlimited mode.
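
For reference, switching to T3 Unlimited is a credit-specification change in the launch template. A sketch that creates a new template version with hypothetical names; the Auto Scaling group must then be pointed at the new version:

    aws ec2 create-launch-template-version \
        --launch-template-name api-fleet \
        --source-version '$Latest' \
        --launch-template-data '{"CreditSpecification":{"CpuCredits":"unlimited"}}'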

Question 15 of 20

A company uses a single AWS CloudFormation template to deploy a three-tier application that includes Auto Scaling groups and a production Amazon RDS instance. During routine maintenance, an operations engineer must update the stack to patch the application servers. Company policy states that the update must never replace or delete the existing RDS instance. If the template change would cause a replacement, the operation must immediately fail before any resources are modified so the engineer can investigate. Which approach meets these requirements with the least operational effort?

  • Manually create an RDS snapshot and proceed with the stack update; restore from the snapshot if the database is replaced.

  • Add the DeletionPolicy and UpdateReplacePolicy attributes with a value of Retain to the RDS resource before updating the stack.

  • Attach a stack policy that denies all Update:* actions on the RDS resource and then update the stack.

  • Generate a change set, review it for replacement actions on the RDS resource, and execute the change set only if none are found.
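
For context, a change set can be inspected for replacements before anything is executed; describe-change-set reports a Replacement value of True, False, or Conditional for each resource change. A sketch with hypothetical stack and file names:

    aws cloudformation create-change-set \
        --stack-name three-tier-app \
        --change-set-name patch-app-servers \
        --template-body file://updated-template.yaml

    aws cloudformation describe-change-set \
        --stack-name three-tier-app \
        --change-set-name patch-app-servers \
        --query "Changes[?ResourceChange.Replacement=='True'].ResourceChange.LogicalResourceId"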

Question 16 of 20

A company streams AWS CloudTrail management events from its production account to an existing CloudWatch Logs log group named ProdTrail. Security engineers need a solution that triggers an alert within 1 minute whenever a DeleteBucket API call is written to the log group. The alert must appear as a CloudWatch alarm and send an email through Amazon SNS. Which set of actions meets these requirements with the least operational overhead?

  • Create an EventBridge rule that matches DeleteBucket events from aws.s3 and sends them to an SNS topic; rely on EventBridge metrics for monitoring.

  • Create a metric filter on the ProdTrail log group with pattern { $.eventName = "DeleteBucket" }, publish it to a custom CloudWatch metric, and add a 1-minute CloudWatch alarm that notifies an SNS topic.

  • Configure an S3 event notification on the log bucket that invokes a Lambda function; have the function scan each log file for DeleteBucket events and publish a message to SNS.

  • Enable CloudTrail Insights on the trail and configure the trail to deliver Insight events to an SNS topic subscribed by the security team.
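
For reference, a sketch of the metric filter and alarm pair the second option describes; the metric namespace, metric name, and SNS topic ARN are hypothetical:

    aws logs put-metric-filter \
        --log-group-name ProdTrail \
        --filter-name DeleteBucketCalls \
        --filter-pattern '{ $.eventName = "DeleteBucket" }' \
        --metric-transformations metricName=DeleteBucketCount,metricNamespace=CloudTrailMetrics,metricValue=1

    aws cloudwatch put-metric-alarm \
        --alarm-name delete-bucket-detected \
        --namespace CloudTrailMetrics \
        --metric-name DeleteBucketCount \
        --statistic Sum \
        --period 60 \
        --evaluation-periods 1 \
        --threshold 1 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --treat-missing-data notBreaching \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:security-alerts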

Question 17 of 20

A company uses two private subnets, one in each of two Availability Zones (AZ-A and AZ-B). All outbound internet traffic is routed through a single NAT gateway that is deployed in a public subnet in AZ-A. After an unplanned AZ-A outage, instances in AZ-B lost internet connectivity. The operations team must improve fault tolerance and reduce inter-AZ data processing charges while keeping administration effort low. What should the team do?

  • Create a second NAT gateway in a public subnet in AZ-B and update the AZ-B private subnet's route table to use that gateway.

  • Move the existing NAT gateway to a shared services VPC in AZ-A and route both private subnets to it through VPC peering connections.

  • Attach an internet gateway directly to each private subnet and add a 0.0.0.0/0 route pointing to it.

  • Replace the NAT gateway with auto-scaled NAT instances placed in each AZ and manage failover with a Network Load Balancer.
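
For context, a per-AZ NAT layout is built by creating a gateway in each AZ's public subnet and pointing that AZ's private route table at the local gateway, which also keeps traffic from crossing AZs. A sketch with hypothetical IDs:

    aws ec2 create-nat-gateway \
        --subnet-id subnet-0aaaabbbbccccdddd \
        --allocation-id eipalloc-0123456789abcdef0

    aws ec2 replace-route \
        --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 0.0.0.0/0 \
        --nat-gateway-id nat-0123456789abcdef0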

Question 18 of 20

A CloudOps engineer is configuring Amazon CloudWatch to scale an Auto Scaling group. The group must launch one additional EC2 instance whenever the average CPUUtilization metric stays above 70 percent for 5 consecutive minutes. The solution must work without relying on EventBridge rules, Lambda functions, or other custom code. Which type of action should the engineer attach to the CloudWatch alarm to meet these requirements?

  • Attach the ARN of a scaling policy associated with the Auto Scaling group.

  • Publish the alarm state to an Amazon SNS topic that triggers a Lambda function to add capacity.

  • Use an EC2 recovery action ARN so the instance restarts when the threshold is breached.

  • Specify a Systems Manager Automation document that uses the EC2:Run command to start a new instance.
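
For reference, a simple scaling policy returns an ARN that can be attached directly as an alarm action, with no EventBridge rule or Lambda function involved. A sketch with hypothetical names:

    POLICY_ARN=$(aws autoscaling put-scaling-policy \
        --auto-scaling-group-name web-asg \
        --policy-name add-one-instance \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 1 \
        --query PolicyARN --output text)

    aws cloudwatch put-metric-alarm \
        --alarm-name asg-cpu-high \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=AutoScalingGroupName,Value=web-asg \
        --statistic Average \
        --period 60 \
        --evaluation-periods 5 \
        --threshold 70 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions "$POLICY_ARN"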

Question 19 of 20

A CloudOps engineer must automate backups of an Amazon S3 bucket that stores compliance records. The solution must run a backup at midnight every day, retain each recovery point for 7 years, and allow one-click point-in-time restore. Backups must be immutable and managed centrally with the least operational overhead. Which approach meets these requirements by following AWS best practices?

  • Configure same-Region cross-account replication to a backup account and apply a lifecycle expiration rule on the destination bucket after 7 years.

  • Use AWS Backup to assign the bucket to a daily backup plan that stores recovery points in a vault with a 7-year retention period and Vault Lock enabled.

  • Create a CloudWatch Events rule that invokes an AWS Lambda function daily to copy objects to another bucket configured with S3 Object Lock in compliance mode for 7 years.

  • Enable S3 versioning and add a lifecycle rule that moves noncurrent object versions to Amazon S3 Glacier Deep Archive after 24 hours.
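
For context, a sketch of the backup plan document such an AWS Backup setup might use; the plan and vault names are hypothetical, 2,555 days is roughly 7 years, and Vault Lock is configured separately on the vault:

    {
      "BackupPlanName": "s3-compliance-daily",
      "Rules": [
        {
          "RuleName": "daily-midnight",
          "TargetBackupVaultName": "compliance-vault",
          "ScheduleExpression": "cron(0 0 * * ? *)",
          "Lifecycle": {"DeleteAfterDays": 2555}
        }
      ]
    }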

Question 20 of 20

A company runs a multi-tenant application on three Auto Scaling groups behind an Application Load Balancer (ALB). The ALB's DNS name is the primary record in a Route 53 failover routing policy that redirects traffic to a standby stack in another Region. The operations team must ensure that Route 53 fails over only when all three microservices in the primary Region are unavailable. Which approach provides the required behavior with the least operational overhead?

  • Create three HTTP health checks, one for each microservice, and create a calculated health check that is healthy only when all child health checks are healthy. Associate the calculated check with the primary record.

  • Create three HTTP health checks, one for each microservice's /status endpoint, and then create a calculated health check that is healthy when at least one child health check is healthy. Attach the calculated health check to the primary failover record.

  • Define a CloudWatch alarm that monitors the ALB's UnHealthyHostCount metric and link that alarm to a Route 53 metric-based health check attached to the primary record.

  • Configure one HTTP health check that monitors the ALB DNS name and set the failure threshold to three consecutive failed checks.
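
For reference, a calculated health check's HealthThreshold sets how many child checks must pass for the parent to report healthy; a threshold of 1 keeps the parent healthy while any child passes. A sketch with hypothetical child health check IDs:

    aws route53 create-health-check \
        --caller-reference svc-combined-001 \
        --health-check-config '{"Type":"CALCULATED","ChildHealthChecks":["id-svc-a","id-svc-b","id-svc-c"],"HealthThreshold":1}'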