
AWS Certified CloudOps Engineer Associate Practice Test (SOA-C03)


AWS Certified CloudOps Engineer Associate SOA-C03 Information

The AWS Certified CloudOps Engineer – Associate certification validates your ability to deploy, operate, and manage cloud workloads on AWS. It’s designed for professionals who maintain and optimize cloud systems while ensuring they remain reliable, secure, and cost-efficient. This certification focuses on modern cloud operations and engineering practices, emphasizing automation, monitoring, troubleshooting, and compliance across distributed AWS environments. You’ll be expected to understand how to manage and optimize infrastructure using services like CloudWatch, CloudTrail, EC2, Lambda, ECS, EKS, IAM, and VPC.

The exam covers the full lifecycle of cloud operations through five key domains: Monitoring and Performance, Reliability and Business Continuity, Deployment and Automation, Security and Compliance, and Networking and Content Delivery. Candidates are tested on their ability to configure alerting and observability, apply best practices for fault tolerance and high availability, implement infrastructure as code, and enforce security policies across AWS accounts. You’ll also demonstrate proficiency in automating common operational tasks and handling incident response scenarios using AWS tools and services.

Earning this certification shows employers that you have the technical expertise to manage AWS workloads efficiently at scale. It’s ideal for CloudOps Engineers, Cloud Support Engineers, and Systems Administrators who want to prove their ability to keep AWS environments running smoothly in production. By earning this credential, you demonstrate the hands-on skills needed to ensure operational excellence and reliability in today’s fast-moving cloud environments.

  • Free AWS Certified CloudOps Engineer Associate SOA-C03 Practice Test
  • 20 Questions
  • Unlimited time
  • Covered domains:
    • Monitoring, Logging, Analysis, Remediation, and Performance Optimization
    • Reliability and Business Continuity
    • Deployment, Provisioning, and Automation
    • Security and Compliance
    • Networking and Content Delivery
Question 1 of 20

An IAM administrator must create a managed policy that lets members of the DevOps group call dynamodb:DeleteItem on tables in the development account, but only when the users are authenticated with multi-factor authentication (MFA) for the current session. Which IAM policy condition will correctly enforce this requirement?

  • Add a Bool condition that requires the key aws:MultiFactorAuthPresent to be set to "true".

  • Add a StringEquals condition that checks whether aws:MultiFactorAuthAge equals "0".

  • Add a StringEqualsIgnoreCase condition that checks whether sts:AuthenticationType equals "mfa".

  • Add a Bool condition that requires the key aws:SecureTransport to be set to "true".
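The MFA-gated statement described above can be sketched as a policy document. This is a minimal illustration, not the exam's official answer text; the account ID, Region, and wildcard table ARN are placeholders.

```python
import json

# Sketch of a managed policy allowing DeleteItem only for MFA-authenticated
# sessions. The Bool condition on aws:MultiFactorAuthPresent is what gates
# the action; account ID and Region below are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:DeleteItem",
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/*",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}
print(json.dumps(policy, indent=2))
```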

Question 2 of 20

An application assumes an IAM role in your AWS account to upload objects to an Amazon S3 bucket. After your company enabled AWS Organizations and attached new service control policies (SCPs), the uploads now fail with an AccessDenied error. You must determine, without making any changes in production, whether the denial originates from the role's identity-based policy, the bucket policy, the role's permissions boundary, or the SCP. Which AWS tool lets you simulate the s3:PutObject call and pinpoint the specific policy that blocks the request?

  • IAM Access Analyzer

  • AWS Config advanced queries

  • IAM policy simulator

  • AWS CloudTrail event history
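A simulation of this kind could be driven through boto3's `simulate_principal_policy` call; the sketch below only assembles the request parameters (role and bucket ARNs are hypothetical) rather than calling AWS.

```python
# Request parameters for iam_client.simulate_principal_policy (a sketch).
# The simulator evaluates the identity policy, permissions boundary, and
# organizational policies together; EvalDecisionDetails in the response
# indicate which policy produced the deny.
simulate_request = {
    "PolicySourceArn": "arn:aws:iam::111122223333:role/UploaderRole",  # hypothetical role
    "ActionNames": ["s3:PutObject"],
    "ResourceArns": ["arn:aws:s3:::example-upload-bucket/*"],          # hypothetical bucket
}
print(simulate_request["ActionNames"])
```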

Question 3 of 20

A company operates dozens of AWS accounts in AWS Organizations. Security requires that any new security group rule that permits 0.0.0.0/0 on TCP port 22 be removed within seconds of creation. The CloudOps engineer must build an agent-less, event-driven solution that can be maintained centrally in a shared services account while minimizing custom code and ongoing operations. Which approach meets these requirements?

  • Create an Amazon EventBridge rule in each workload account that matches the AWS API call "AuthorizeSecurityGroupIngress" and sends the event to a centrally shared event bus. In the shared services account, invoke an AWS Lambda function that deletes the non-compliant rule.

  • Enable AWS CloudTrail Lake in every account and schedule a daily SQL query with Amazon EventBridge Scheduler that invokes an AWS Lambda function to remove any discovered non-compliant rules.

  • Configure the AWS Config managed rule for unrestricted SSH in every account and attach an AWS Systems Manager Automation document that revokes the offending rule when the evaluation is non-compliant.

  • Launch a small, always-running EC2 instance in each account that polls DescribeSecurityGroups every minute with a script and removes any rule that allows 0.0.0.0/0 on port 22.
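The event-driven option above hinges on an EventBridge rule that matches the CloudTrail record of the API call. A minimal event pattern for such a rule might look like this (a sketch; the rule's target would forward the event to the shared bus):

```python
import json

# EventBridge event pattern matching AuthorizeSecurityGroupIngress calls
# recorded by CloudTrail; a Lambda target could then inspect the rule and
# revoke it if it opens 0.0.0.0/0 on port 22.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress"],
    },
}
print(json.dumps(pattern, indent=2))
```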

Question 4 of 20

A startup runs 50 Amazon Linux 2 instances across two VPCs. Operations must publish memory utilization and disk I/O metrics to Amazon CloudWatch and stream application logs, without opening SSH access or logging in to each host. Every instance already assumes an IAM role that includes AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy. Which approach meets the requirements with the least operational effort?

  • Deploy the older CloudWatch Logs agent with an IAM instance profile and generate custom metrics later by querying logs with CloudWatch Logs Insights.

  • Add a user-data script to each instance that runs the CloudWatch agent configuration wizard at boot and publishes metrics with PutMetricData.

  • Enable detailed monitoring on the EC2 instances and create a CloudWatch Logs subscription filter to ingest application logs.

  • Use AWS Systems Manager Run Command to install the CloudWatch unified agent on all instances, store a common agent configuration in Systems Manager Parameter Store, and start the agent fleet-wide.
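A shared agent configuration stored in Parameter Store, as the last option describes, could resemble the following minimal sketch (the log file path and log group name are assumptions):

```python
import json

# Minimal unified CloudWatch agent configuration: publishes memory and
# disk I/O metrics and streams one application log file. Stored once in
# SSM Parameter Store and referenced by every instance.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
            "diskio": {"measurement": ["read_bytes", "write_bytes"]},
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [{
                    "file_path": "/var/log/app/app.log",  # hypothetical path
                    "log_group_name": "app-logs",         # hypothetical group
                }]
            }
        }
    },
}
print(json.dumps(agent_config))
```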

Question 5 of 20

A company runs 50 Linux EC2 instances whose application data resides on attached EBS volumes. Security policy mandates encrypted, daily backups that must be retained for 35 days and automatically copied to a secondary AWS Region. The operations team wants a fully managed, scalable solution with minimal custom code or scripts. Which approach satisfies the requirements with the LEAST operational effort?

  • Use EC2 Image Builder to create daily AMIs for the instances, share the AMIs to the secondary Region, and configure lifecycle policies to delete images after 35 days.

  • Create an AWS Backup plan that selects the EC2 instances by tag, enables default KMS encryption, sets a 35-day retention rule, and configures cross-Region copy to a backup vault in the secondary Region.

  • Attach a Data Lifecycle Manager policy to each EBS volume to create encrypted daily snapshots, retain them for 35 days, and enable cross-Region copy.

  • Schedule an AWS Lambda function with EventBridge that calls the CreateSnapshot API for each EBS volume, encrypts the snapshot, copies it to the target Region, and deletes snapshots older than 35 days.
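The backup-plan option can be sketched as the parameters one might pass to boto3's `backup.create_backup_plan`; vault names, the schedule, and the destination Region are assumptions.

```python
# Sketch of a backup plan: daily schedule, 35-day retention, and a
# cross-Region copy action to a vault in the secondary Region.
backup_plan = {
    "BackupPlanName": "daily-ebs-backup",
    "Rules": [{
        "RuleName": "daily-35-day-retention",
        "TargetBackupVaultName": "primary-vault",      # hypothetical vault
        "ScheduleExpression": "cron(0 5 * * ? *)",     # daily at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 35},
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }],
}
print(backup_plan["Rules"][0]["Lifecycle"])
```

A tag-based selection (`backup.create_backup_selection`) would then attach the fleet of instances to this plan without enumerating volumes.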

Question 6 of 20

A company is running a business-critical on-premises PostgreSQL database. The team plans to migrate it to AWS and must meet the following requirements:

  • Automatic failover must complete in less than 35 seconds if an Availability Zone becomes unavailable.
  • The application must continue to use a single writer endpoint with no DNS or connection-string changes during failover.
  • The solution must add read capacity with minimal application changes and keep operational costs as low as possible.

Which migration strategy will best meet these requirements?

  • Create an Amazon RDS PostgreSQL Single-AZ instance and add two read replicas in different AZs, then enable automatic promotion on failure.

  • Launch a standard Amazon RDS for PostgreSQL Multi-AZ DB instance deployment and add a separate read replica for read scaling.

  • Lift-and-shift the PostgreSQL database to two self-managed Amazon EC2 instances in separate AZs using EBS Multi-Attach for shared storage.

  • Migrate to an Amazon RDS PostgreSQL Multi-AZ DB cluster with one writer and two readable standbys across three AZs.

Question 7 of 20

A company's Dev account runs an application on Amazon EC2 that must read an encrypted parameter stored in AWS Systems Manager Parameter Store in the company's SharedServices account. Storing static credentials on the instance is prohibited. Which solution provides secure, least-privilege cross-account access while removing the need for long-lived credentials?

  • Enable resource-based policies for Parameter Store and add the Dev account's root as a principal with ssm:GetParameter permission; continue using the existing EC2 role without changes.

  • Attach an inline policy to the EC2 instance role in Dev that grants ssm:GetParameter on the parameter's ARN; no other configuration is needed.

  • Create access keys for a new IAM user in SharedServices that has ssm:GetParameter permission and store the keys as environment variables on the EC2 instance.

  • Create an IAM role in SharedServices that allows ssm:GetParameter on the required parameter and trust principals from the Dev account. Update the EC2 instance's role to call sts:AssumeRole for that role and use the returned temporary credentials.
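The cross-account role pattern in the last option involves two pieces: a trust policy on the SharedServices role, and an `sts:AssumeRole` call from the Dev instance. Both are sketched below with hypothetical account IDs and role names.

```python
import json

# Trust policy attached to the SharedServices role: it allows the Dev
# account's instance role to assume it (account IDs are placeholders).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:role/DevAppRole"},
        "Action": "sts:AssumeRole",
    }],
}

# Parameters the Dev instance would pass to sts_client.assume_role; the
# returned temporary credentials can then call ssm:GetParameter.
assume_role_request = {
    "RoleArn": "arn:aws:iam::111122223333:role/ParameterReaderRole",
    "RoleSessionName": "dev-app-session",
}
print(json.dumps(trust_policy))
```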

Question 8 of 20

Your team must log all DNS queries from a VPC. You create an Amazon Route 53 Resolver query logging configuration, select a new CloudWatch Logs log group as the destination, and attempt to associate the configuration with the VPC. The console shows "AccessDeniedException - unable to create log stream". Which action enables Route 53 Resolver to deliver query logs to CloudWatch Logs and adheres to AWS best practices?

  • Change the log group's KMS CMK to the AWS-managed /aws/logs key so the service can encrypt incoming data.

  • Enable VPC Flow Logs for the VPC and point the flow logs to the same log group.

  • Re-create the query logging configuration but choose an S3 bucket destination instead of CloudWatch Logs.

  • Attach a resource policy to the CloudWatch Logs log group that allows the route53.amazonaws.com service to run logs:CreateLogStream and logs:PutLogEvents on that log group.
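The resource policy described in the last option might look like the following sketch (log group name, Region, and account ID are assumptions):

```python
import json

# CloudWatch Logs resource policy granting Route 53 Resolver permission
# to create log streams and deliver query-log events to the log group.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/dns/queries:*",
    }],
}
print(json.dumps(resource_policy))
```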

Question 9 of 20

An ecommerce company runs its web tier in an Auto Scaling group behind an Application Load Balancer. CPU utilization rises above 80% for two hours starting at 12:00 PM each weekday, causing slow pages. The team wants CPU to stay under 60% during the surge without over-provisioning at other times. Which solution meets these requirements?

  • Add two scheduled actions to the Auto Scaling group: raise desired capacity a few minutes before noon on weekdays and reduce capacity shortly after the peak ends.

  • Enable predictive scaling on the Auto Scaling group and keep the target tracking policy at 60 percent CPU utilization.

  • Lower the CPU utilization target in the existing target-tracking policy from 60 percent to 40 percent so instances launch sooner.

  • Purchase Standard Reserved Instances equal to the peak capacity and disable scale-in on the Auto Scaling group.

Question 10 of 20

An Amazon RDS for PostgreSQL database running on a db.t3.medium instance shows sustained high DB load. Performance Insights issues a proactive recommendation stating that the CPU wait dimension is saturated. Which modification best follows the recommendation to improve performance efficiency?

  • Create a read replica in another Availability Zone for analytic traffic.

  • Enable storage autoscaling and double the gp2 volume size.

  • Scale the instance to a larger class such as db.m6g.large.

  • Turn on automatic minor version upgrades to apply the latest patch.

Question 11 of 20

Your team's CodeBuild project executes terraform apply for multiple feature branches, using an S3 backend that stores a single shared state file. When two builds run at the same time, the state becomes corrupted. You must prevent concurrent writes while continuing to use the S3 backend and keep cost low. Which modification addresses this requirement?

  • Move the Terraform state to AWS Systems Manager Parameter Store by using the secureString type.

  • Enable versioning on the S3 bucket to recover previous versions of the state file.

  • Change the backend to local so each CodeBuild job writes its own state file in the build container.

  • Configure a DynamoDB table for state locking and reference it with the dynamodb_table argument in the S3 backend.
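The locking option amounts to adding a `dynamodb_table` argument to the existing S3 backend block. The HCL fragment is shown here as a string (bucket and table names are placeholders); Terraform acquires a lock item in the table before any state write, so concurrent applies fail fast instead of corrupting state.

```python
# S3 backend with DynamoDB state locking, as the HCL fragment the team
# would add. The lock table needs a string partition key named "LockID".
backend_hcl = """
terraform {
  backend "s3" {
    bucket         = "example-tf-state"      # hypothetical bucket
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # hypothetical lock table
    encrypt        = true
  }
}
"""
print("dynamodb_table" in backend_hcl)
```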

Question 12 of 20

Your operations team has been streaming AWS WAF web ACL logs through an Amazon Kinesis Data Firehose delivery stream to CloudWatch Logs. A recent AWS update now lets one network-protection service send its full rule-match logs straight to a CloudWatch Logs log group, allowing you to retire the Firehose stream. Which service gained this direct logging capability?

  • Route 53 Resolver DNS Firewall

  • AWS WAF web ACL

  • AWS Shield Advanced

  • AWS Network Firewall

Question 13 of 20

An operations engineer just created an Amazon S3 bucket in a new AWS account. Minutes later security tooling reports that a principal from another AWS account can read objects in the bucket, even though the engineer believes cross-account access is blocked. The engineer must quickly identify which policy grants this external access without adding logging or extra scanning services. Which solution meets these requirements?

  • Create an account-level IAM Access Analyzer in the Region and review its findings for the bucket to see the policy statement permitting external access.

  • Use AWS CloudTrail Lake to run a query on recent GetObject events and trace the IAM policies attached to the calling principal.

  • Enable Amazon S3 server access logging on the bucket and manually inspect the log files to determine which policy was evaluated.

  • Run an Amazon Macie bucket assessment and use the generated Policy Findings report to locate the offending statement.

Question 14 of 20

During migration of a genomics analysis pipeline, a research team launches hundreds of Amazon EC2 Linux Spot instances for several hours. The workload reads large datasets from an Amazon S3 bucket, needs a POSIX-compliant shared file system that provides over 100 GB/s aggregate throughput with sub-millisecond latency, and must write results back to the same bucket. Which shared storage solution meets these performance requirements at the lowest cost?

  • Stripe multiple Amazon EBS gp3 volumes in RAID 0 on one EC2 instance and export the file system over NFS to the fleet.

  • Deploy an Amazon FSx for Windows File Server Multi-AZ file system with SSD storage and mount it on the EC2 instances.

  • Create an Amazon FSx for Lustre file system linked to the S3 bucket and delete the file system after each pipeline execution.

  • Provision an Amazon EFS file system in Provisioned Throughput mode and enable Lifecycle Management to reduce storage cost.

Question 15 of 20

An e-commerce company runs a stateful payment service on an Auto Scaling group of Amazon EC2 instances. The CloudWatch agent publishes the mem_used_percent metric from each instance. When the mem_used_percent metric exceeds 90% for 2 consecutive minutes, the company wants the affected instance to be gracefully rebooted through the AWS-RestartEC2Instance Systems Manager Automation runbook. Which approach requires the LEAST operational overhead?

  • Install a cron script on every instance that checks the mem_used_percent metric each minute and calls Systems Manager Run Command to reboot the instance when the threshold is reached.

  • Create a CloudWatch alarm for the mem_used_percent metric on each instance. Configure an EventBridge rule to be triggered by the alarm state change, invoking the AWS-RestartEC2Instance Systems Manager Automation runbook with the alarmed instance ID as a target.

  • Attach a step-scaling policy that uses the mem_used_percent metric to increase desired capacity by one, allowing Auto Scaling to terminate and replace the over-utilized instance.

  • Create an SSM State Manager association that runs the AWS-RunShellScript document every minute to evaluate memory usage and reboot the instance if the threshold is breached.
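The alarm-driven option can be sketched as the EventBridge event pattern that fires on the alarm's transition to ALARM; the rule's target would be the AWS-RestartEC2Instance runbook, with the instance ID taken from the alarm's dimensions.

```python
import json

# Event pattern matching a CloudWatch alarm's change into the ALARM state.
# A rule with this pattern can target an SSM Automation runbook directly.
pattern = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {"state": {"value": ["ALARM"]}},
}
print(json.dumps(pattern))
```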

Question 16 of 20

A company uses AWS Organizations and has a dedicated shared-services account operated by the network team. The team must deploy the same VPC CloudFormation template to all existing and future member accounts in us-east-1 and us-west-2. Operations leadership requires that:

  • The network team manages the deployments from the shared-services account only.
  • Stacks are automatically created in any new account that joins the organization.

Which approach meets these requirements while following AWS best practices?

  • Create a CloudFormation StackSet in the management account using service-managed permissions, designate the shared-services account as a delegated administrator, target the appropriate OU, and enable automatic deployments to us-east-1 and us-west-2.

  • Implement AWS CDK pipelines configured in each member account that trigger on AWS Control Tower lifecycle events to deploy the VPC stack to both Regions.

  • In the shared-services account, deploy individual CloudFormation stacks in each Region and share the VPC subnets to member accounts with AWS Resource Access Manager.

  • Create a CloudFormation StackSet with self-managed permissions, manually create the required IAM roles in every member account, and run a scheduled script to add new accounts to the StackSet when they appear.

Question 17 of 20

A company must deliver an updated, hardened Docker image for a Java microservice every month. The solution must automatically start from the latest public Amazon Corretto base image, install OS patches and application libraries, run functional tests, perform a vulnerability scan, and then push the approved image to an existing Amazon ECR repository. Operations wants an AWS-managed solution that requires the least ongoing maintenance. Which approach meets these requirements?

  • Create an EC2 Image Builder container pipeline with a container recipe that extends the Amazon Corretto base image, adds the application layers and tests, sets the ECR repository as the distribution target, and enables Amazon Inspector scanning on the registry.

  • Run a Jenkins server on Amazon EC2 that executes a pipeline to build, test, scan, and push the image to ECR on a cron schedule.

  • Use AWS App2Container to repackage the Java application and rely on Amazon ECS to pull the latest image at deployment time.

  • Configure a weekly Amazon EventBridge rule to trigger an AWS CodeBuild project that executes docker build and docker push commands and runs an open-source vulnerability scanner inside the buildspec.

Question 18 of 20

A Linux-based EC2 instance in a production VPC hosts a MySQL OLTP database on a 500 GiB gp2 EBS volume. CloudWatch shows regular spikes above 100 ms volume latency, a VolumeQueueLength greater than 60, and average read/write IOPS near 8,000. The operations team must reduce latency immediately, avoid any downtime, and keep storage costs as low as possible. Which action meets these requirements?

  • Use Elastic Volumes to convert the existing gp2 volume to gp3 and provision 12,000 IOPS with 500 MiB/s throughput.

  • Modify the volume to io2 Block Express and provision 16,000 IOPS and 1,000 MiB/s throughput.

  • Purchase additional I/O credit bundles to extend the gp2 burst duration during peak hours.

  • Change the volume type to st1 throughput-optimized HDD to increase throughput at a lower price.
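An online Elastic Volumes change like the one in the first option can be sketched as the parameters for boto3's `ec2.modify_volume`; the volume ID is a placeholder. Because the modification applies while the volume stays attached, the database incurs no downtime.

```python
# Parameters for ec2_client.modify_volume (a sketch): convert gp2 to gp3
# and provision IOPS and throughput above the observed peak load.
modify_request = {
    "VolumeId": "vol-0123456789abcdef0",  # hypothetical volume
    "VolumeType": "gp3",
    "Iops": 12000,
    "Throughput": 500,  # MiB/s
}
print(modify_request["VolumeType"])
```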

Question 19 of 20

A company runs a single Amazon EC2 instance that periodically spikes to 90% CPUUtilization for more than 5 minutes, causing slow response times. The operations team wants an automated, auditable remediation that upgrades the instance to the next larger size whenever this threshold is reached, without writing or maintaining custom code. Which solution will meet these requirements?

  • Define an Auto Scaling scheduled action that replaces the instance with a larger type during the expected peak periods.

  • Subscribe the CloudWatch alarm to an SNS topic that triggers a Lambda function. Have the function call the ModifyInstanceAttribute API to change the instance type.

  • Enable EC2 Auto Recovery on the instance so that hardware replacement automatically launches a larger instance when CPU thresholds are exceeded.

  • Create a CloudWatch alarm for CPUUtilization > 85% for 5 minutes. Add an EventBridge rule for the alarm's ALARM state that invokes a Systems Manager Automation runbook which stops the instance, changes it to the next larger type, and starts it.

Question 20 of 20

A company's VPC has a dual-stack private subnet containing EC2 instances. These instances need to initiate outbound connections to the IPv6 internet to download software updates. However, they must not be directly reachable from the internet over IPv6. The subnet's route table already directs some traffic to an on-premises network. Which single route should an administrator add to the subnet's route table to meet these requirements?

  • Add route 0.0.0.0/0 that targets the VPC's internet gateway

  • Add route 0.0.0.0/0 that targets an egress-only internet gateway

  • Add route ::/0 that targets the VPC's internet gateway

  • Add route ::/0 that targets an egress-only internet gateway attached to the VPC
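An egress-only internet gateway route like the one in the last option can be sketched as the parameters for boto3's `ec2.create_route`; the route table and gateway IDs are placeholders. The gateway is stateful for IPv6: it permits flows the instances initiate outbound but drops inbound-initiated connections.

```python
# Parameters for ec2_client.create_route (a sketch): send all IPv6 traffic
# from the private subnet to an egress-only internet gateway.
route_request = {
    "RouteTableId": "rtb-0123456789abcdef0",            # hypothetical route table
    "DestinationIpv6CidrBlock": "::/0",
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",  # hypothetical EIGW
}
print(route_request["DestinationIpv6CidrBlock"])
```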