
AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)

Use the form below to configure your AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.


AWS Certified CloudOps Engineer Associate SOA-C03 Information

The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.

The exam covers the full lifecycle of cloud operations through five key domains: Monitoring and Performance, Reliability and Business Continuity, Deployment and Automation, Security and Compliance, and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You’ll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.

Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

The practice test draws from the exam's five domains:

  • Monitoring, Logging, Analysis, Remediation, and Performance Optimization
  • Reliability and Business Continuity
  • Deployment, Provisioning, and Automation
  • Security and Compliance
  • Networking and Content Delivery
Question 1 of 20

An operations team must migrate 500 TB of data from an on-premises NFS file server to an Amazon S3 bucket in us-east-1 within seven days. The site has a dedicated, mostly idle 10 Gbps AWS Direct Connect link. The team wants the simplest AWS DataSync configuration that can saturate the link during the bulk copy and then be reused for scheduled incremental syncs after the cut-over. What should the team do?

  • Deploy a single DataSync agent on-premises, create one DataSync task that copies the entire share to the bucket, and leave the task bandwidth setting at "Use available".

  • Deploy two DataSync agents and assign both agents to the same task so the task can reach 20 Gbps.

  • Deploy two DataSync agents and create two separate tasks, each copying half of the directory tree to different prefixes in the bucket.

  • Use the AWS CLI with aws s3 sync for the full copy, and reserve DataSync only for incremental syncs.
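For reference, the single-agent, single-task configuration from the first option can be sketched with boto3 as follows. The location ARNs and the schedule expression are placeholders, not values from the question.

```python
# Sketch of a single DataSync task that copies the whole share with
# "use available" bandwidth and is later reused for incremental syncs.
def build_task_params(source_arn, dest_arn):
    return {
        "SourceLocationArn": source_arn,
        "DestinationLocationArn": dest_arn,
        # BytesPerSecond = -1 tells DataSync to use all available bandwidth.
        "Options": {"BytesPerSecond": -1},
        # DataSync copies only changed files on later executions, so the
        # same task can run on a schedule after the cut-over.
        "Schedule": {"ScheduleExpression": "cron(0 2 * * ? *)"},
    }

params = build_task_params(
    "arn:aws:datasync:us-east-1:123456789012:location/loc-source",
    "arn:aws:datasync:us-east-1:123456789012:location/loc-dest",
)
# import boto3; boto3.client("datasync").create_task(**params)
```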

Question 2 of 20

A DevOps engineer installed the unified CloudWatch agent on dozens of Amazon EC2 instances in two Regions. The agent publishes the mem_used_percent metric in the CWAgent namespace with the InstanceId dimension. The engineer must receive a single SNS notification whenever any instance's memory usage is above 80% for three consecutive 1-minute periods while minimizing management effort and CloudWatch costs. Which approach satisfies these requirements?

  • Create an EventBridge rule that matches every PutMetricData API call from the CWAgent namespace, route the events to an SNS topic, and use SNS message filtering to detect values above 80%.

  • Create a standard CloudWatch alarm for each EC2 instance that monitors mem_used_percent, then create a composite alarm that aggregates these alarms and publishes a notification to SNS.

  • Stream the mem_used_percent metrics to CloudWatch Logs, configure a metric filter that counts occurrences above 80% in a 1-minute window, and create a CloudWatch alarm on that filter to send an SNS notification.

  • Create a CloudWatch alarm on the Metrics Insights query SELECT MAX(mem_used_percent) FROM "CWAgent" with a 60-second period, threshold 80, and three evaluation periods, and configure the alarm to publish to Amazon SNS.
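As a sketch, the Metrics Insights alarm from the last option could be defined like this with boto3; the alarm name and SNS topic ARN are placeholders. Because a single query aggregates across all instances reporting mem_used_percent, no per-instance alarms are needed.

```python
# One alarm on a Metrics Insights query covers every instance in the
# CWAgent namespace; MAX() breaches whenever any instance exceeds 80%.
def build_alarm_params(sns_topic_arn):
    return {
        "AlarmName": "any-instance-memory-high",
        "Metrics": [{
            "Id": "q1",
            "Expression": 'SELECT MAX(mem_used_percent) FROM "CWAgent"',
            "Period": 60,        # evaluate the query over 60-second windows
            "ReturnData": True,
        }],
        "ComparisonOperator": "GreaterThanThreshold",
        "Threshold": 80,
        "EvaluationPeriods": 3,  # three consecutive 1-minute breaches
        "AlarmActions": [sns_topic_arn],
    }

params = build_alarm_params("arn:aws:sns:us-east-1:123456789012:ops-alerts")
# import boto3; boto3.client("cloudwatch").put_metric_alarm(**params)
```

Note that CloudWatch alarms are Regional, so one such alarm would be created per Region.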

Question 3 of 20

An operations engineer is troubleshooting a Java application running on an EC2 instance in a private subnet that suddenly fails to connect to an Amazon RDS for MySQL database in the same VPC. The instance is attached to security group sg-app, whose only outbound rules allow TCP ports 80 and 443 to 0.0.0.0/0. The database is attached to sg-db, whose inbound rules allow TCP 3306 from sg-app. Network ACLs and route tables already permit all traffic between the subnets. Which change will most effectively restore connectivity while adhering to the principle of least privilege?

  • Add an outbound rule to sg-app that allows TCP 3306 with sg-db as the destination.

  • Associate both the EC2 instance and the database with the default security group.

  • Add an inbound rule to sg-app that allows TCP 3306 from sg-db.

  • Broaden sg-db's inbound rule to allow TCP 3306 from 0.0.0.0/0.
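For illustration, the least-privilege egress rule from the first option maps to an authorize_security_group_egress call like this; the security group IDs are placeholders.

```python
# Allow sg-app to initiate TCP 3306 only to members of sg-db.
def build_egress_params(app_sg_id, db_sg_id):
    return {
        "GroupId": app_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Referencing the database security group instead of a CIDR
            # keeps the rule scoped to the DB tier.
            "UserIdGroupPairs": [{"GroupId": db_sg_id}],
        }],
    }

params = build_egress_params("sg-0appexample", "sg-0dbexample")
# import boto3; boto3.client("ec2").authorize_security_group_egress(**params)
```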

Question 4 of 20

An Ops team will launch a new VPC (10.0.0.0/16) spanning two Availability Zones. Each AZ will host one public and one private subnet. Resources in private subnets must initiate outbound internet connections even if one AZ becomes unavailable, and networking costs should be kept as low as AWS best practices allow. Which subnet and NAT configuration meets these requirements?

  • Create one NAT gateway in a public subnet of Availability Zone A and associate both private subnet route tables with this gateway.

  • Launch a single NAT instance in one public subnet and update both private subnet route tables to forward 0.0.0.0/0 traffic to that instance.

  • Deploy a NAT gateway in each public subnet and configure each private subnet's route table to use the NAT gateway located in the same Availability Zone.

  • Provision two NAT gateways in a dedicated services subnet located in Availability Zone A and point all private subnets to those gateways for internet access.

Question 5 of 20

A company runs an application on Amazon EC2 Linux instances that are launched by an Auto Scaling group. Operations must collect Apache access logs and memory utilization from every instance, send the data to Amazon CloudWatch, and ensure that any update to the collection settings is applied automatically to new and running instances without storing credentials on the servers. Which solution meets these requirements with the LEAST operational overhead?

  • Bake the CloudWatch Logs agent and a cron-based script that runs the aws cloudwatch put-metric-data CLI command into the AMI, passing long-lived access keys to the instances with user data. Rebuild the AMI whenever the configuration changes.

  • Turn on AWS CloudTrail management and data events for the account, enable CloudTrail Insights, and create a CloudWatch Logs subscription filter to capture Apache logs and memory metrics.

  • Enable detailed monitoring on the Auto Scaling group and write a shell script that copies Apache logs to an S3 bucket every five minutes; configure an S3 event to import the logs into CloudWatch Logs.

  • Store a CloudWatch agent JSON configuration in Systems Manager Parameter Store. Attach an IAM instance profile that includes AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy in the launch template. Use Systems Manager Run Command with AWS-ConfigureAWSPackage to install the CloudWatch agent and AmazonCloudWatch-ManageAgent to start it, so the agent automatically downloads the configuration and publishes Apache logs and memory metrics.

Question 6 of 20

An operations team runs a Lambda function named ValidateTag in AWS account 111111111111. A custom AWS Config rule that resides in account 222222222222 must invoke this function. Security policy states that all permissions must be managed from account 111111111111 and that no IAM roles may be created or modified in account 222222222222. Which approach meets these requirements while following the principle of least privilege?

  • Share the ValidateTag function with account 222222222222 by using AWS Resource Access Manager; the share automatically grants invoke permissions to AWS Config.

  • Add a permission to the Config rule that allows it to assume a role in account 111111111111 which has lambda:* permissions on the function.

  • Create an IAM role in account 222222222222 that trusts account 111111111111, attach the lambda:InvokeFunction permission to it, and reference the role ARN in the Config rule.

  • From account 111111111111 run aws lambda add-permission --function-name ValidateTag --statement-id AllowConfigCrossAccount --action lambda:InvokeFunction --principal config.amazonaws.com --source-account 222222222222, which adds a resource-based policy to the function.
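The add-permission command in the last option maps directly to boto3; the parameter values below are taken from the question itself.

```python
# boto3 equivalent of the aws lambda add-permission command shown above.
params = {
    "FunctionName": "ValidateTag",
    "StatementId": "AllowConfigCrossAccount",
    "Action": "lambda:InvokeFunction",
    "Principal": "config.amazonaws.com",  # the AWS Config service principal
    "SourceAccount": "222222222222",      # limits which account's Config rule may invoke
}
# import boto3; boto3.client("lambda").add_permission(**params)
```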

Question 7 of 20

A DevOps engineer updates a networking CloudFormation stack that currently exports its VPC ID as DevVpcId. The revised template exports a different VPC ID but retains the same export name. On update CloudFormation fails with "Export DevVpcId cannot be updated as it is in use by stack AppStack." AppStack must stay running and unchanged. Which action enables deployment of the new VPC without triggering the export error?

  • Add the CAPABILITY_NAMED_IAM flag to the update command so CloudFormation can overwrite the existing export.

  • Enable termination protection on AppStack before updating the networking stack to suppress the export conflict.

  • Rename the new VPC ID output to a unique export name (for example DevVpcIdV2) and then update the networking stack.

  • Grant the deployment role cloudformation:UpdateExport permission and retry the stack update.
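For illustration, the renamed-export option changes only the template's Outputs section; the resource name below is an example, not from the question.

```json
{
  "Outputs": {
    "VpcId": {
      "Description": "ID of the replacement VPC",
      "Value": { "Ref": "NewVpc" },
      "Export": { "Name": "DevVpcIdV2" }
    }
  }
}
```

The existing DevVpcId export must remain in the template for as long as AppStack imports it; CloudFormation refuses to delete or change an export that another stack consumes.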

Question 8 of 20

Your company manages infrastructure for multiple AWS accounts using Terraform. You must build a CI/CD pipeline that: validates plans on every commit, stores Terraform state centrally with locking to prevent simultaneous writes, and avoids long-lived credentials in the pipeline environment. Which approach meets these requirements while following AWS and Terraform best practices?

  • Store the state file in a CodeCommit repository and enable repository versioning; store each account's access keys in Secrets Manager and inject them into the build environment.

  • Wrap Terraform modules in CloudFormation StackSets and use CloudFormation as the remote backend; pass cross-account role ARNs to CodePipeline through environment variables.

  • Configure an encrypted, versioned S3 bucket with a DynamoDB table for state locking; have CodeBuild assume an environment-specific IAM role via STS and run Terraform with the S3 backend.

  • Use the local backend on the CodeBuild container and rely on CodePipeline artifact versioning; create a single IAM user with AdministratorAccess and embed its access keys in the buildspec file.
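As a sketch, the S3-plus-DynamoDB backend from the third option can be declared like this (shown in Terraform's JSON syntax to match this page's code style; bucket, key, and table names are placeholders):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "example-terraform-state",
        "key": "accounts/prod/terraform.tfstate",
        "region": "us-east-1",
        "encrypt": true,
        "dynamodb_table": "terraform-state-lock"
      }
    }
  }
}
```

In that design the CodeBuild project assumes an environment-specific role via STS before running terraform init and plan, so no long-lived keys ever reach the build environment.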

Question 9 of 20

Your company has three AWS accounts (A, B, and C) that belong to the same organization. Operations wants a single CloudWatch dashboard in account A (us-east-1) that shows EC2 CPUUtilization metrics from accounts B and C in both us-east-1 and eu-west-1. They need the simplest solution that avoids copying data between Regions or running additional agents. Which set of steps will meet these requirements according to AWS best practices?

  • Enable cross-Region replication for CloudWatch in accounts B and C so that their eu-west-1 metrics are copied to us-east-1, then create a single-Region dashboard in account A.

  • Install the CloudWatch agent on all instances in accounts B and C with a configuration that publishes the metrics directly into the log group of account A.

  • Share each Region's metrics from accounts B and C to account A by using AWS Resource Access Manager, then add the shared metrics to a dashboard in account A.

  • In accounts B and C, create an IAM role that trusts account A and grants CloudWatch read-only access; from account A, build the dashboard widgets using metric identifiers that include the source account IDs and Regions.

Question 10 of 20

An auto-scaling script sometimes goes out of control and issues a flood of RunInstances API requests, quickly exhausting the AWS account's service quotas. You need an AWS-native mechanism that detects the abnormal surge in RunInstances call rate and immediately invokes a Lambda function that disables the script's IAM role. Which solution provides the required automation with the least ongoing operational overhead?

  • Enable AWS Config and write a custom rule that counts RunInstances API calls; have the rule invoke the Lambda function when the count exceeds the allowed limit.

  • Enable CloudTrail Insights for management events and create an EventBridge rule that matches "AWS Insight via CloudTrail" events where insightType is ApiCallRateInsight; set the rule's target to the Lambda function that disables the IAM role.

  • Turn on VPC Flow Logs and use CloudWatch Contributor Insights to detect traffic spikes; create an EventBridge rule that triggers the Lambda function when flow-log entries exceed a threshold.

  • Send CloudTrail logs to CloudWatch Logs, build a metric filter to count RunInstances calls per minute, add a CloudWatch alarm on the metric, and configure the alarm to invoke the Lambda function through SNS.
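The EventBridge rule in the CloudTrail Insights option would use an event pattern along these lines (a sketch following the Insights event format):

```json
{
  "detail-type": ["AWS Insight via CloudTrail"],
  "detail": {
    "insightType": ["ApiCallRateInsight"]
  }
}
```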

Question 11 of 20

A company hosts an internal REST API on Amazon EC2 instances in a "service VPC" that resides in Account A. Several developer teams in other AWS accounts need to consume this API from private subnets in their own VPCs. Security policy states that traffic must stay on the AWS network, the service VPC must not accept any inbound connections over VPC peering, and each consumer VPC must be able to use its own CIDR range without overlap constraints. Which approach satisfies the requirements with the least operational effort?

  • Expose the API through an internet-facing Application Load Balancer and require each consumer subnet to use a NAT gateway for outbound calls.

  • Attach all VPCs to an AWS Transit Gateway and advertise the service VPC subnet routes to the consumer VPCs through Transit Gateway route tables.

  • Establish VPC peering connections between the service VPC and every consumer VPC, then update route tables to point traffic to the peering links.

  • Place the API behind a Network Load Balancer, create a VPC endpoint service, and let each consumer VPC connect through an interface VPC endpoint (AWS PrivateLink).

Question 12 of 20

An operations engineer installed the CloudWatch agent on several Amazon Linux 2 EC2 instances by using the Systems Manager document AWS-ConfigureAWSPackage. A custom JSON file (shown below) was deployed to each instance and the agent was restarted.

{
  "agent": {"metrics_collection_interval": 60},
  "metrics": {
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
    "aggregation_dimensions": [["InstanceId"]]
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/app/server.log",
            "log_group_name": "app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}

Application logs are now visible in CloudWatch Logs, but no memory or disk space metrics appear in CloudWatch Metrics. What is the simplest way to collect these missing metrics on every instance?

  • Insert mem and disk sections under metrics_collected in the agent JSON file, then restart the CloudWatch agent on each instance.

  • Edit the AWS-ConfigureAWSPackage document to run the agent in collectd compatibility mode.

  • Turn on detailed monitoring for the instances in the EC2 console.

  • Attach the managed policy CloudWatchAgentAdminPolicy to the instance profile role.
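Under the first option, the metrics section of the agent configuration would gain a metrics_collected block similar to this (measurement names follow the unified agent's conventions):

```json
"metrics": {
  "append_dimensions": { "InstanceId": "${aws:InstanceId}" },
  "aggregation_dimensions": [["InstanceId"]],
  "metrics_collected": {
    "mem": { "measurement": ["mem_used_percent"] },
    "disk": { "measurement": ["disk_used_percent"], "resources": ["/"] }
  }
}
```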

Question 13 of 20

An organization stores sensitive logs in the prod-private-logs S3 bucket in its production AWS account. To run periodic queries, an analytics account currently accesses the bucket through a bucket policy that grants s3:GetObject to an IAM role in that account. Security policy now mandates that every cross-account access path uses an external ID. What is the most secure way to comply without breaking the analytics workflow?

  • Attach a service control policy (SCP) to the analytics account that denies s3:GetObject unless the request includes the required external ID header.

  • Add a Condition element with sts:ExternalId to the existing S3 bucket policy so that the analytics role must present the correct external ID when calling GetObject.

  • Enable S3 Object Lock in compliance mode for the bucket and require callers to specify the external ID through object version IDs when fetching objects.

  • Create an IAM role in the production account that trusts the analytics account, includes a Condition requiring a specific sts:ExternalId value, attaches a policy allowing s3:GetObject on the bucket, and remove the direct bucket policy statement. Have the analytics workflow assume this role before accessing S3.
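The trust policy for the role described in the last option might look like this; the account ID and external ID values are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ANALYTICS_ACCOUNT_ID:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}
```

The sts:ExternalId condition key is evaluated only during AssumeRole calls, which is why it belongs in a role trust policy rather than in an S3 bucket policy.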

Question 14 of 20

An Amazon RDS for PostgreSQL database running on a db.t3.medium instance shows sustained high DB load. Performance Insights issues a proactive recommendation stating that the CPU wait dimension is saturated. Which modification best follows the recommendation to improve performance efficiency?

  • Scale the instance to a larger class such as db.m6g.large.

  • Turn on automatic minor version upgrades to apply the latest patch.

  • Enable storage autoscaling and double the gp2 volume size.

  • Create a read replica in another Availability Zone for analytic traffic.

Question 15 of 20

An operations team manages an Amazon ECS cluster that uses the EC2 launch type. They need to collect host-level CPU, memory, disk, and network metrics and forward all container application logs to Amazon CloudWatch Logs. The solution must start automatically on every new container instance without requiring changes to existing application task definitions. Which approach meets these requirements with the least operational effort?

  • Add the CloudWatch agent as a sidecar container to every existing and future application task definition so it starts alongside each application task.

  • Edit the configuration file of the Amazon ECS agent on every container instance so that it emits host metrics and container logs directly to CloudWatch.

  • Create a task definition that runs the unified CloudWatch agent (with Fluent Bit) and deploy it as an ECS service that uses the DAEMON scheduling strategy. Store the agent configuration in Parameter Store and grant the task IAM permissions to write to CloudWatch.

  • Use EC2 user data to install and start the CloudWatch agent on each container instance when it boots.

Question 16 of 20

A company has associated a Route 53 Resolver DNS Firewall rule group with several production VPCs to block known malware domains. An auditor requires proof that the blocking rules are enforced and insists that the DNS log records be retained for at least 5 years at the lowest possible cost. Which solution meets these requirements with the least operational overhead?

  • Enable AWS CloudTrail Lake and periodically join CloudTrail management events with VPC Flow Logs to infer blocked DNS requests.

  • Turn on Route 53 Resolver query logging to CloudWatch Logs and create a subscription filter that forwards the logs to an S3 bucket.

  • Enable Route 53 Resolver query logging for the production VPCs and write the logs directly to an Amazon S3 bucket that has a lifecycle policy to transition objects to the S3 Glacier Flexible Retrieval storage class after 30 days.

  • Configure Amazon GuardDuty DNS Malware Protection and export its findings to AWS Security Hub for long-term retention.

Question 17 of 20

An e-commerce application runs on EC2 instances in two Availability Zones, fronted by an Application Load Balancer (ALB). Some checkout requests take 3 to 4 minutes to complete, and users intermittently receive 504 Gateway Timeout responses. CloudWatch shows the targets are healthy and no Auto Scaling scale-in events occurred. Which change will most effectively prevent these timeouts without redesigning the application?

  • Enable connection draining by setting the target group deregistration delay to 300 seconds.

  • Increase the ALB idle timeout to a value higher than the longest expected request processing time.

  • Replace the ALB with a Network Load Balancer to remove all timeout limits.

  • Enable cross-zone load balancing on the ALB.
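For reference, raising the idle timeout (the second option) is a single attribute change; the load balancer ARN below is a placeholder, and 300 seconds comfortably exceeds the 3-to-4-minute checkout requests.

```python
# Raise the ALB idle timeout above the longest expected request duration.
def build_attr_params(alb_arn, timeout_seconds):
    return {
        "LoadBalancerArn": alb_arn,
        "Attributes": [{
            "Key": "idle_timeout.timeout_seconds",
            "Value": str(timeout_seconds),  # ELB attribute values are strings
        }],
    }

params = build_attr_params(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/abc123",
    300,
)
# import boto3; boto3.client("elbv2").modify_load_balancer_attributes(**params)
```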

Question 18 of 20

A company runs a production MySQL database on a single-AZ Amazon RDS instance. Backups are generated once each night by an AWS Backup plan. After a recent incident, 18 hours of data were lost. The operations team must achieve a maximum RPO of 5 minutes while minimizing cost and operational effort. Which solution meets these requirements?

  • Create a read replica in the same Region and promote it to primary after an incident to recover the latest data.

  • Configure AWS Backup to create manual DB snapshots every 5 minutes and delete snapshots older than one day.

  • Convert the DB instance to a Multi-AZ deployment so a failover to the standby can be triggered during an outage.

  • Enable automated backups on the DB instance with a 7-day retention period and use point-in-time restore for recovery.

Question 19 of 20

A company runs a production Amazon RDS for PostgreSQL db.r5.large instance with 2 vCPUs. After enabling Performance Insights, the operations team notices that query latency rises when the database load exceeds the number of vCPUs. They need an automated Systems Manager runbook to execute whenever this situation persists for 5 minutes, while keeping operational overhead low. Which solution meets the requirement?

  • Configure a CloudWatch alarm on the instance's CPUUtilization metric with an 80% threshold for 5 minutes and target the Systems Manager runbook.

  • Create a CloudWatch alarm in the AWS/RDS namespace for the DBLoad metric (statistic: Average, period: 60 seconds, evaluation periods = 5, threshold = 2) and set the alarm action to run the Systems Manager Automation document.

  • Enable Enhanced Monitoring at 1-second granularity and deploy a Lambda function that polls CPU metrics every minute; if CPUUtilization > 80% for 5 checks, invoke the runbook.

  • Create an RDS event subscription for source type 'db-instance' and event category 'failure'; subscribe an SNS topic that triggers the Systems Manager runbook.

Question 20 of 20

A company has multiple production AWS accounts. For every account, critical CloudWatch alarms already publish state-change events to the account's default event bus. Operations engineers sign in only to the management account and must see a pop-up notification in the AWS Management Console whenever any of those alarms enters the ALARM state. Using AWS User Notifications, what is the MOST efficient way to meet this requirement?

  • Create a cross-account Amazon EventBridge rule in each production account that forwards CloudWatch AlarmStateChange events to the management account event bus, then configure an AWS User Notifications rule in the management account that targets the AWS Console channel.

  • Share the CloudWatch alarms through a cross-account dashboard and rely on the dashboard icons to indicate alarm state when engineers open it.

  • Create an AWS Systems Manager Incident Manager response plan that watches the alarms across accounts and selects console notifications as the engagement channel.

  • Subscribe each alarm's SNS topic to an AWS Chatbot Slack channel and enable Slack as the preferred channel in AWS User Notifications.
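In the cross-account EventBridge option, both the forwarding rules in the production accounts and the User Notifications configuration in the management account would match alarm state changes with a pattern like this:

```json
{
  "source": ["aws.cloudwatch"],
  "detail-type": ["CloudWatch Alarm State Change"],
  "detail": {
    "state": { "value": ["ALARM"] }
  }
}
```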