
AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)

Use the form below to configure your AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited questions
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified CloudOps Engineer Associate SOA-C03 Information

The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.

The exam covers the full lifecycle of cloud operations through five domains: Monitoring, Logging, Analysis, Remediation, and Performance Optimization; Reliability and Business Continuity; Deployment, Provisioning, and Automation; Security and Compliance; and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You'll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.

Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

  • Free AWS Certified CloudOps Engineer Associate SOA-C03 Practice Test

  • 20 Questions
  • Unlimited
  • Monitoring, Logging, Analysis, Remediation, and Performance Optimization
  • Reliability and Business Continuity
  • Deployment, Provisioning, and Automation
  • Security and Compliance
  • Networking and Content Delivery
Question 1 of 20

A company has identical web applications running in two AWS Regions: us-east-1 (primary) and us-west-2 (standby). All internet traffic must resolve to the primary endpoint, but if a health check on the primary fails, traffic must immediately shift completely to the standby endpoint without requiring users to change the DNS name. Which Amazon Route 53 routing policy should be used for the two A records?

  • Geoproximity routing policy configured by Region bias

  • Latency-based routing policy across the two Regions

  • Weighted routing policy with a weight of 100 for the primary and 0 for the standby

  • Failover routing policy with PRIMARY and SECONDARY records
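As background on how Route 53 routing policies are expressed programmatically, here is a sketch of a PRIMARY/SECONDARY failover record pair in the shape of a boto3 `change_resource_record_sets` change batch. The hosted zone IDs, ALB DNS names, and health check ID below are hypothetical.

```python
# Sketch: a PRIMARY/SECONDARY failover record pair for Route 53,
# expressed as a change_resource_record_sets change batch.
# Zone IDs, DNS names, and the health-check ID are hypothetical.

def failover_change_batch(domain, primary_alb, secondary_alb, health_check_id):
    """Build a change batch with two alias A records using failover routing."""
    def record(role, alb, extra=None):
        rr = {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": f"{role.lower()}-record",
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "AliasTarget": {
                "HostedZoneId": alb["zone_id"],
                "DNSName": alb["dns_name"],
                # Alias targets can also evaluate the ALB's own health.
                "EvaluateTargetHealth": True,
            },
        }
        if extra:
            rr.update(extra)
        return {"Action": "UPSERT", "ResourceRecordSet": rr}

    return {
        "Changes": [
            # The primary record carries an explicit health check; when it
            # fails, Route 53 answers queries with the SECONDARY record.
            record("PRIMARY", primary_alb, {"HealthCheckId": health_check_id}),
            record("SECONDARY", secondary_alb),
        ]
    }
```

Because both records share the same name, clients keep using a single DNS name regardless of which endpoint currently answers.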

Question 2 of 20

An ecommerce company runs an Amazon RDS for MySQL 8.0 DB instance in a single Availability Zone. During flash sales, Amazon CloudWatch shows that the DatabaseConnections metric spikes close to the instance limit and that FreeableMemory falls sharply. Hundreds of short-lived Lambda invocations open new database sessions. The team wants to relieve memory pressure without changing application logic and would like to use IAM database authentication. Which solution meets these requirements with the least effort?

  • Create an Amazon RDS Proxy for the DB instance, enable IAM authentication, and update the Lambda functions to connect to the proxy endpoint.

  • Launch a larger DB instance class and increase the max_connections parameter in a custom parameter group.

  • Deploy an Amazon ElastiCache for Memcached cluster and cache connection objects for reuse across function invocations.

  • Add a read replica and configure the Lambda functions to spread writes and reads using Amazon Route 53 weighted records.
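For reference, an RDS Proxy with IAM authentication is defined by a small set of request parameters. The sketch below shows them in the shape of boto3's `rds.create_db_proxy`; the proxy name, ARNs, and subnet IDs are hypothetical.

```python
# Sketch: request parameters for an RDS Proxy with IAM authentication,
# in the shape of rds.create_db_proxy. Names, ARNs, and subnet IDs
# are hypothetical.

def db_proxy_params(secret_arn, role_arn, subnet_ids):
    """Parameters for a connection-pooling proxy in front of a MySQL DB."""
    return {
        "DBProxyName": "orders-proxy",
        "EngineFamily": "MYSQL",
        "Auth": [{
            "AuthScheme": "SECRETS",
            "SecretArn": secret_arn,   # DB credentials the proxy itself uses
            "IAMAuth": "REQUIRED",     # clients connect with IAM auth tokens
        }],
        "RoleArn": role_arn,           # lets the proxy read the secret
        "VpcSubnetIds": subnet_ids,
        "RequireTLS": True,            # IAM authentication requires TLS
    }
```

The functions then connect to the proxy endpoint instead of the DB endpoint; the proxy multiplexes their short-lived sessions over a pooled set of database connections.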

Question 3 of 20

A company deploys an Auto Scaling group of Amazon EC2 instances in two Availability Zones. Each instance runs a Node.js application listening on port 8080. A SysOps administrator creates an Application Load Balancer with an HTTP listener that forwards to a target group created with all default settings. Within minutes, CloudWatch shows HealthyHostCount=0 for the target group and users receive HTTP 503 errors. Which configuration issue is the MOST likely cause?

  • Cross-zone load balancing is disabled, preventing the load balancer from probing targets in the second Availability Zone.

  • The target group is configured to use port 80, so the load balancer sends health checks to a port on which the application is not listening.

  • The deregistration delay is longer than the health check interval, causing every instance to be removed before it can pass the health check.

  • The listener should use the TCP protocol instead of HTTP, otherwise the health check probe is blocked.

Question 4 of 20

An application runs in three isolated subnets (private, with no internet route) of a VPC. The instances reach Amazon S3 through a NAT gateway in a public subnet, generating high data-processing charges. You must ensure the instances continue to reach S3 without traversing the NAT gateway and without exposing them to the internet. Which change meets these requirements?

  • Replace the NAT gateway with an egress-only internet gateway and add a ::/0 IPv6 default route in the existing route tables.

  • Create an interface VPC endpoint for Amazon S3 and update the instances' hosts files to resolve the endpoint's DNS name.

  • Associate the isolated subnets with the public route table that already has a 0.0.0.0/0 route to the internet gateway.

  • Add a route with the S3 prefix list destination that targets a newly created S3 gateway endpoint in the route table associated with the isolated subnets.
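As context, a gateway endpoint for S3 is created with a short parameter set that associates it with specific route tables, which is what inserts the S3 prefix-list route. A sketch in the shape of boto3's `ec2.create_vpc_endpoint`, with hypothetical IDs:

```python
# Sketch: request parameters for an S3 gateway endpoint, in the shape
# of ec2.create_vpc_endpoint. The VPC and route table IDs are hypothetical.

def s3_gateway_endpoint_params(vpc_id, region, route_table_ids):
    """Gateway endpoints add a prefix-list route to the listed route tables,
    so traffic to S3 stays on the AWS network instead of a NAT gateway."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        # Service name is Region-specific for gateway endpoints.
        "ServiceName": f"com.amazonaws.{region}.s3",
        # The endpoint manages the prefix-list route in each table listed here.
        "RouteTableIds": route_table_ids,
    }
```

Unlike interface endpoints, gateway endpoints carry no hourly or data-processing charge, which is why they are a common fix for NAT-gateway cost on S3 traffic.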

Question 5 of 20

An operations team collects JVM heap usage as a custom CloudWatch metric from two Amazon EC2 instances that run a critical application. They must automatically restart the application service on the affected instance whenever average heap usage exceeds 80% for 5 consecutive minutes. The solution must require the least operational overhead and allow administrators to view execution output in AWS. Which approach meets these requirements?

  • Create a CloudWatch alarm on the custom metric. Add an EventBridge rule that matches the alarm when it enters the ALARM state and targets a Systems Manager Automation runbook that restarts the service on the indicated EC2 instance.

  • Configure the alarm to initiate the EC2 recover action when the threshold is breached and review activity through CloudTrail logs.

  • Publish the alarm to an SNS topic that invokes a Lambda function. The function connects to the instance over SSH to restart the service and writes logs to CloudWatch Logs.

  • Attach a step scaling policy to the Auto Scaling group to launch an additional instance when the metric exceeds the threshold, relying on load balancer health checks to remove the busy instance.
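For background, EventBridge emits a "CloudWatch Alarm State Change" event whenever an alarm transitions state, and a rule pattern can be scoped to one alarm entering the ALARM state. A sketch of such a pattern (the alarm name is hypothetical):

```python
# Sketch: an EventBridge event pattern that matches a specific CloudWatch
# alarm entering the ALARM state. The alarm name is hypothetical.

import json

def alarm_state_pattern(alarm_name):
    """Pattern for the 'CloudWatch Alarm State Change' event that
    EventBridge emits when an alarm transitions; a rule with this
    pattern can target, e.g., an SSM Automation runbook."""
    return {
        "source": ["aws.cloudwatch"],
        "detail-type": ["CloudWatch Alarm State Change"],
        "detail": {
            "alarmName": [alarm_name],
            "state": {"value": ["ALARM"]},  # ignore OK / INSUFFICIENT_DATA
        },
    }

# EventBridge's put_rule API takes the pattern as a JSON string.
pattern_json = json.dumps(alarm_state_pattern("jvm-heap-high"))
```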

Question 6 of 20

A security engineer receives an audit request to identify which IAM principal terminated an Amazon EC2 instance 6 months ago. The company has an organization trail that stores all management events in an S3 bucket in a security account. No additional analytics services have been configured so far. What is the MOST straightforward, cost-effective way to find the required information?

  • Run an Amazon Athena SQL query against the CloudTrail log files stored in the S3 bucket.

  • Enable CloudTrail Lake and run a query for TerminateInstances after an event data store is created.

  • Create a CloudWatch Logs group, configure the trail to stream logs to it, and use CloudWatch Logs Insights to search for the event.

  • Open CloudTrail Event History in each AWS Region and filter for the TerminateInstances API call.

Question 7 of 20

An operations team must choose shared storage for an on-demand genomics workflow running on Amazon EKS. Input data is in Amazon S3. The job needs sub-millisecond latency and hundreds of GB/s throughput, then pushes results back to S3 and discards the file system. Which solution best meets these needs with minimal administration?

  • Deploy Amazon FSx for Windows File Server in Multi-AZ mode and provide SMB shares to the containers.

  • Use Amazon Elastic File System in Provisioned Throughput mode and mount it in the pods.

  • Provision Amazon FSx for NetApp ONTAP with FlexCache volumes accessed through NFS.

  • Create an Amazon FSx for Lustre scratch file system linked to the S3 bucket and mount it on the EKS worker nodes.

Question 8 of 20

An operations team manages a DynamoDB table in provisioned mode with application auto scaling. During a viral marketing event, request traffic surges to roughly 30 times the baseline within a few minutes. The table begins to throttle even though auto scaling is configured with a generous maximum capacity. The team wants to stop throttling during sudden spikes without permanently over-provisioning. Which solution satisfies these requirements with the least operational effort?

  • Enable Amazon DynamoDB Accelerator (DAX) in front of the table.

  • Increase the maximum read and write capacity units in auto scaling to ten times the expected peak.

  • Switch the table to DynamoDB on-demand capacity mode.

  • Create a global secondary index with identical partition and sort keys to distribute load.

Question 9 of 20

An operations team receives about 50,000 new objects per minute in an Amazon S3 bucket. A Lambda function must inspect each object's metadata and write an entry to DynamoDB. Some objects occasionally cause processing errors, but the team must ensure that every object is eventually processed without loss, even during traffic surges. Which architecture satisfies these requirements while minimizing operational overhead?

  • Configure an S3 ObjectCreated notification that directly invokes the Lambda function and increase the function's reserved concurrency.

  • Enable an S3 ObjectCreated notification to an SNS topic and subscribe the Lambda function to the topic.

  • Send S3 ObjectCreated events to Amazon EventBridge and set the Lambda function as a rule target with a dead-letter queue.

  • Send S3 ObjectCreated events to an Amazon SQS queue that has a dead-letter queue, and configure the Lambda function to poll the queue with a batch size of 10.
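As a reference for the queue-buffered pattern, the sketch below shows an SQS redrive policy and a Lambda event source mapping in boto3 request shape; the ARNs, visibility timeout, and `maxReceiveCount` are hypothetical values.

```python
# Sketch: SQS redrive policy plus a Lambda event source mapping, in the
# shapes of sqs.set_queue_attributes and lambda.create_event_source_mapping.
# ARNs, timeout, and maxReceiveCount are hypothetical values.

import json

def queue_attributes(dlq_arn, max_receives=5):
    """After max_receives failed receives, SQS moves the message to the
    dead-letter queue rather than dropping it, so no object is lost."""
    return {
        # SQS attribute values are strings; RedrivePolicy is a JSON string.
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receives,
        }),
        # Visibility timeout should exceed the Lambda function's timeout.
        "VisibilityTimeout": "120",
    }

def event_source_mapping(queue_arn, function_name):
    """Lambda polls the queue and invokes the function with message batches."""
    return {
        "EventSourceArn": queue_arn,
        "FunctionName": function_name,
        "BatchSize": 10,
    }
```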

Question 10 of 20

A company runs a production MySQL 8.0 database on Amazon RDS. Performance Insights is already enabled with the default 7-day retention. Operations wants to receive an automated notification whenever the database load (average active sessions) stays above 8 for at least 5 consecutive minutes. The solution must use only managed AWS features and require the least administration. Which action will meet these requirements?

  • Create a CloudWatch alarm on the Performance Insights metric DBLoad with a 1-minute period, threshold of 8, and 5 evaluation periods, and configure the alarm to publish to an Amazon SNS topic.

  • Set up an RDS event subscription for a high-load Performance Insights event and have Amazon RDS send the event to an SNS topic.

  • Schedule an EventBridge rule that triggers an AWS Lambda function every minute to query the Performance Insights API, and have the function publish to SNS if DBLoad is greater than 8.

  • Enable Enhanced Monitoring and configure a CloudWatch alarm that notifies SNS when CPUUtilization exceeds 80 percent for 5 minutes.

Question 11 of 20

An organization has three AWS accounts (Dev, Test, Prod), each containing a VPC with non-overlapping CIDR ranges. The teams need full bidirectional private connectivity between all VPCs and expect to add more VPCs and an on-premises data center next quarter. They want to minimize the number of connections and simplify route management while avoiding exposure to the internet. Which solution best meets these requirements?

  • Expose required services through AWS PrivateLink by creating an endpoint service in each VPC and configuring interface endpoints in the other VPCs.

  • Establish full-mesh VPC peering connections among the existing VPCs and create additional peering links as new VPCs are created.

  • Create an AWS Transit Gateway in one account, share it with the other accounts using AWS RAM, and attach each VPC; later attach the on-premises network with Direct Connect.

  • Deploy virtual private gateways in every VPC and build site-to-site VPN tunnels between each pair of VPCs and to the on-premises network.

Question 12 of 20

A company uses a single AWS CloudFormation template to deploy a three-tier application that includes Auto Scaling groups and a production Amazon RDS instance. During routine maintenance, an operations engineer must update the stack to patch the application servers. Company policy states that the update must never replace or delete the existing RDS instance. If the template change would cause a replacement, the operation must immediately fail before any resources are modified so the engineer can investigate. Which approach meets these requirements with the least operational effort?

  • Add the DeletionPolicy and UpdateReplacePolicy attributes with a value of Retain to the RDS resource before updating the stack.

  • Manually create an RDS snapshot and proceed with the stack update; restore from the snapshot if the database is replaced.

  • Attach a stack policy that denies all Update:* actions on the RDS resource and then update the stack.

  • Generate a change set, review it for replacement actions on the RDS resource, and execute the change set only if none are found.
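For context, `describe_change_set` output flags each modification with a `Replacement` field whose value is the string `"True"`, `"False"`, or `"Conditional"`. A sketch of scanning those changes for replacement actions (the sample shapes in the test mirror the API response; resource names are hypothetical):

```python
# Sketch: scanning CloudFormation change-set output for replacement
# actions before executing it. The change dicts mirror the shape of
# cloudformation.describe_change_set's "Changes" list.

def replacement_resources(changes, logical_id=None):
    """Return logical IDs whose Replacement is "True" or "Conditional",
    optionally filtered to one resource such as a DB instance."""
    flagged = []
    for change in changes:
        rc = change.get("ResourceChange", {})
        if logical_id and rc.get("LogicalResourceId") != logical_id:
            continue
        # Replacement is reported as a string, not a boolean.
        if rc.get("Action") == "Modify" and rc.get("Replacement") in ("True", "Conditional"):
            flagged.append(rc["LogicalResourceId"])
    return flagged
```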

Question 13 of 20

A development team runs an application on Amazon EC2 instances in Account A. The application must upload daily log files to a private Amazon S3 bucket that is owned by Account B. Security mandates removal of all long-term credentials on the instances and wants access restricted only to writing objects to that specific bucket. Which solution meets these requirements while following AWS IAM best practices?

  • Create an IAM user in Account B with programmatic access, store the user's access keys in AWS Systems Manager Parameter Store, and have the EC2 instances read the keys at runtime.

  • In Account B, create an IAM role that allows s3:PutObject only on the log bucket and trusts Account A. Allow the EC2 instance profile in Account A to assume this role with STS, and have the application use the temporary credentials to upload logs.

  • Attach the AmazonS3FullAccess managed policy to the existing EC2 instance profile in Account A and add a bucket policy in Account B that grants the role permission to write objects.

  • Enable S3 cross-region replication from a new bucket in Account A to the target bucket in Account B so logs are copied automatically without additional IAM configuration.
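As background, cross-account access with temporary credentials hinges on two policy documents: a trust policy on the role in the bucket-owning account and a least-privilege permission policy. A sketch with a hypothetical account ID and bucket name:

```python
# Sketch: the two IAM policy documents for a cross-account log-upload
# role. The account ID and bucket name are hypothetical.

def trust_policy(trusted_account_id):
    """Trust policy on the Account B role: allow principals in
    Account A to assume it via STS."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

def permission_policy(bucket):
    """Least privilege: write objects only, and only into this bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
```

The EC2 instance profile in Account A needs only `sts:AssumeRole` on this role's ARN; the application then uploads with the short-lived credentials STS returns.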

Question 14 of 20

A financial startup stores critical transactional data in a single-Region DynamoDB table using provisioned capacity. Compliance now requires the team to restore the table to any minute in the last 24 hours with an RTO under 15 minutes while minimizing cost and effort. They currently run one nightly on-demand backup through AWS Backup. Which change meets the new requirements most cost-effectively?

  • Enable DynamoDB Point-in-Time Recovery (PITR) on the table and discontinue the nightly on-demand backups.

  • Convert the table to a global table spanning two Regions and use the replica table for point-in-time restores.

  • Keep the existing backup plan but schedule on-demand backups every hour instead of once per day.

  • Enable DynamoDB Streams, deliver updates to Amazon S3 through AWS DMS, and restore the table by replaying the change logs when needed.

Question 15 of 20

An application is deployed in two AWS Regions. Each Region has an identical stack of Amazon EC2 instances behind an Application Load Balancer (ALB). The public zone in Amazon Route 53 contains two A (alias) failover records that point to the Regional ALB DNS names. No health checks are associated with either record. During a complete outage in the primary Region (us-east-1), users still receive time-out errors instead of being routed to the secondary Region (us-west-2). Which configuration change will enable Route 53 to automatically direct traffic to the secondary Region when the primary ALB becomes unavailable?

  • Lower the TTL of both failover records to 0 seconds to force clients to re-query DNS more frequently.

  • Enable cross-zone load balancing on both ALBs to allow Route 53 to fail over when one AZ goes down.

  • Edit the primary alias record and set Evaluate target health to Yes so Route 53 can monitor the ALB's status.

  • Attach a CloudWatch alarm on the ALB's UnHealthyHostCount metric and associate that alarm with the primary record.

Question 16 of 20

An on-premises data center connects to a VPC by a Site-to-Site VPN with two IPsec tunnels. After a firewall firmware upgrade, users can reach the VPC only when Tunnel 2 is active; CloudWatch metrics show TunnelState=Down for Tunnel 1. The VPN logs show repeated Phase 1 failures with the error NO_PROPOSAL_CHOSEN. Which firewall change will MOST likely restore stable connectivity through Tunnel 1?

  • Enable NAT Traversal (UDP 4500) for both tunnels on the firewall.

  • Set the firewall's IKE Phase 1 policy to use AES-256 encryption, SHA-256 integrity, and Diffie-Hellman group 14.

  • Change Tunnel 1's inside tunnel CIDR to 169.254.100.0/30 so it differs from Tunnel 2.

  • Lower the Dead Peer Detection (DPD) interval on the firewall from 30 seconds to 10 seconds.

Question 17 of 20

An Auto Scaling group launches Amazon Linux 2 instances that run a Java application. Operations needs to collect memory utilization and the application's /var/log/app.log file, and they want to be able to change the collection settings without baking a new AMI or manually connecting to instances. What is the MOST maintainable way to deploy and manage the CloudWatch agent across all current and future instances?

  • Add the agent's configuration file to user data and run amazon-cloudwatch-agent-ctl in the Auto Scaling group launch template.

  • Bake the agent and its configuration into a custom AMI that the Auto Scaling group uses for all launches.

  • Store the agent's JSON configuration as a Systems Manager Parameter and use a State Manager association with the AmazonCloudWatch-ManageAgent document to install and start the agent on all instances.

  • Rely on default EC2 metrics and create a CloudWatch Logs subscription filter that streams /var/log/app.log to CloudWatch Logs.
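For reference, the CloudWatch agent reads a JSON document that can declare both metrics (such as memory) and log files to collect; when that document lives in Parameter Store, settings can be changed fleet-wide without a new AMI. The sketch below builds such a document in Python; the log group name is hypothetical.

```python
# Sketch: a CloudWatch agent configuration covering a memory metric and
# one log file, in the JSON shape the agent reads. The log group name
# is hypothetical.

import json

agent_config = {
    "metrics": {
        "metrics_collected": {
            # Memory is not a default EC2 metric; the agent must report it.
            "mem": {"measurement": ["mem_used_percent"]}
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [{
                    "file_path": "/var/log/app.log",
                    "log_group_name": "app-log-group",
                }]
            }
        }
    },
}

# Serialized form, suitable as the value of an SSM Parameter that a
# State Manager association passes to the agent on every instance.
parameter_value = json.dumps(agent_config)
```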

Question 18 of 20

An Auto Scaling group runs in private subnet 10.0.20.0/24 behind an Application Load Balancer in 10.0.10.0/24. The subnet's network ACL has only the following rules: inbound, allow TCP 80 from 10.0.10.0/24 and TCP 22 from 203.0.113.0/24; outbound, allow TCP 0-1023 to 0.0.0.0/0. Users receive timeouts from the ALB. Which NACL update fixes connectivity with least privilege?

  • Add an inbound rule that allows TCP 1024-65535 from 0.0.0.0/0.

  • Replace the outbound rule with one that allows all traffic to 0.0.0.0/0.

  • Change the inbound rule to allow TCP 443 from 10.0.10.0/24 instead of TCP 80.

  • Add an outbound rule that allows TCP 1024-65535 to 10.0.10.0/24.
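As context, network ACLs are stateless, so request and response traffic are evaluated by separate rules, and responses return to the caller's ephemeral port range. The sketch below shows an NACL entry in the shape of boto3's `ec2.create_network_acl_entry`, with a hypothetical ACL ID:

```python
# Sketch: parameters for adding a NACL entry, in the shape of
# ec2.create_network_acl_entry. The ACL ID is hypothetical. NACLs are
# stateless, so return traffic to a caller's ephemeral ports needs its
# own rule in the opposite direction.

def ephemeral_return_rule(nacl_id, cidr, rule_number=110):
    """Allow TCP responses to the ephemeral port range of the given CIDR."""
    return {
        "NetworkAclId": nacl_id,
        "RuleNumber": rule_number,
        "Protocol": "6",           # protocol number 6 = TCP
        "RuleAction": "allow",
        "Egress": True,            # outbound from the instance subnet
        "CidrBlock": cidr,         # scoping to one subnet keeps least privilege
        "PortRange": {"From": 1024, "To": 65535},
    }
```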

Question 19 of 20

A company runs a production Amazon RDS for PostgreSQL instance that handles both read and write traffic. At month end, CPU utilization exceeds 90% for several hours, degrading performance. The operations team wants a solution that automatically scales compute capacity up and down within minutes, maintains a single writer endpoint, and charges only for actual usage. Which approach meets these requirements with the least operational overhead?

  • Create AWS Lambda functions triggered by CloudWatch alarms to modify the DB instance class and revert it after the spike.

  • Add multiple Amazon RDS read replicas and distribute traffic through an Application Load Balancer to absorb the extra load.

  • Enable Amazon RDS Storage Auto Scaling on the existing instance to increase compute capacity when CPU exceeds a threshold.

  • Migrate the database to an Amazon Aurora Serverless v2 PostgreSQL-compatible cluster and use its built-in automatic scaling.

Question 20 of 20

An organization with AWS Organizations wants every existing and future member account in two Regions to run a standard set of IAM roles and AWS Lambda functions. The CloudOps engineer must implement this once and have the stacks automatically appear in any new accounts that join the same organizational unit (OU). Which approach meets these requirements with the least ongoing operational effort?

  • Create a self-managed CloudFormation StackSet in a delegated administrator account and manually add every member account ID to the target list when the account is created.

  • Build an AWS CDK pipeline that deploys the stack to each account and Region, triggered by an EventBridge rule that detects new AWS Organizations accounts.

  • Instruct each member account to run an AWS Systems Manager Automation runbook that invokes cloudformation deploy for the required templates in both Regions.

  • Create a service-managed CloudFormation StackSet in the management account, specify the two target Regions and the OU, and enable auto-deployment so that new accounts receive the stack automatically.
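For background, a service-managed StackSet separates the stack set definition from its deployment targets: one call defines the template and auto-deployment behavior, another targets OUs and Regions. The sketch below shows both request shapes in boto3 form; the names, template URL, OU ID, and Regions are hypothetical.

```python
# Sketch: the two request shapes behind a service-managed StackSet with
# auto-deployment, mirroring cloudformation.create_stack_set and
# create_stack_instances. Names, URL, OU ID, and Regions are hypothetical.

def stack_set_params(name, template_url):
    """Define the stack set itself: service-managed permissions let
    CloudFormation deploy into member accounts via AWS Organizations."""
    return {
        "StackSetName": name,
        "TemplateURL": template_url,
        "PermissionModel": "SERVICE_MANAGED",
        "AutoDeployment": {
            "Enabled": True,                      # deploy to accounts joining the OU
            "RetainStacksOnAccountRemoval": False,
        },
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # template creates IAM roles
    }

def stack_instance_params(name, ou_id, regions):
    """Target an OU (not individual account IDs); that is what lets
    auto-deployment cover accounts that join the OU later."""
    return {
        "StackSetName": name,
        "DeploymentTargets": {"OrganizationalUnitIds": [ou_id]},
        "Regions": regions,
    }
```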