
AWS Certified Solutions Architect Professional Practice Test (SAP-C02)

Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified Solutions Architect Professional SAP-C02 Information

The AWS Certified Solutions Architect – Professional (SAP-C02) exam is for people who want to demonstrate advanced skills in cloud design using Amazon Web Services. It validates that you can handle large, complex systems and design solutions that are secure, reliable, and aligned with business needs. Passing this exam demonstrates a deeper level of knowledge than the associate-level test and is often expected for senior cloud roles.

This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.

AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test

Press start when you are ready, or press Change to modify any settings for the practice test.

  • Questions: 15
  • Time: Unlimited
  • Included Topics:
    Design Solutions for Organizational Complexity
    Design for New Solutions
    Continuous Improvement for Existing Solutions
    Accelerate Workload Migration and Modernization
Question 1 of 15

A large enterprise is managing a hybrid environment with thousands of Amazon EC2 instances and on-premises servers. The company needs to enforce a standard baseline configuration across the entire fleet, which includes specific software versions and security settings. A key requirement is to automatically apply this baseline to newly launched instances and periodically scan the entire fleet for any configuration drift. If an instance is found to be non-compliant, it should be reported to a central dashboard. The solution must minimize manual intervention and use AWS native services for configuration management.

Which AWS Systems Manager capability should a solutions architect recommend to meet these requirements most effectively?

  • AWS Systems Manager State Manager

  • AWS Systems Manager Distributor

  • AWS Systems Manager Patch Manager

  • AWS Systems Manager Run Command
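As background for the scenario above, a Systems Manager association is the building block that applies a configuration document on a recurring schedule across a tagged fleet. A minimal sketch of the parameter shape, assuming an AWS-managed document and a hypothetical tag key:

```python
# Illustrative State Manager association parameters. The document name is an
# AWS-managed SSM document; the tag key/value and schedule are assumptions
# for this sketch, not values taken from the question.
association = {
    "Name": "AWS-RunPatchBaseline",          # SSM document to apply
    "Targets": [{"Key": "tag:Environment", "Values": ["Production"]}],
    "ScheduleExpression": "rate(1 day)",     # periodic drift re-check
    "ComplianceSeverity": "HIGH",            # surfaces non-compliance centrally
}

for key, value in association.items():
    print(f"{key}: {value}")
```

In practice this dictionary would be passed to an API such as `ssm.create_association(**association)`; the recurring schedule is what turns a one-time run into continuous drift detection.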

Question 2 of 15

A financial-services company must migrate a 50-TB on-premises Oracle database to an Amazon RDS for Oracle instance in its primary AWS Region. The company already has one dedicated 10 Gbps AWS Direct Connect connection that terminates at a single colocation facility and uses a private virtual interface (VIF) to the company's VPC.

To meet a strict migration deadline and ensure long-term operational resilience, the solutions architect must design connectivity that

  • increases total bandwidth,
  • survives the loss of an entire Direct Connect location, and
  • provides link-level encryption without adding significant throughput overhead.

Which networking solution best meets these requirements?

  • Order a second 10 Gbps cross-connect at the existing facility and create a Link Aggregation Group (LAG) with the two ports. Configure a Site-to-Site VPN over the internet as a backup path.

  • Provision a second 10 Gbps dedicated Direct Connect connection at a different colocation facility. Configure a private VIF on each connection and attach them directly to the VPC's virtual private gateway. Enable MACsec on both connections for Layer-2 encryption.

  • Upgrade the current Direct Connect port to 100 Gbps at the same location and create both public and private VIFs. Rely on application-layer encryption for data in transit.

  • Add a second 10 Gbps Direct Connect connection in a new facility. Create an AWS Transit Gateway, attach the VPC, and build IPsec Site-to-Site VPN tunnels over each Direct Connect to encrypt traffic.

Question 3 of 15

A financial services company operates a multi-account AWS environment with a dedicated 'Developer Tools' account (ID: 111122223333) and a 'Production' account (ID: 999988887777). A CI/CD pipeline, running on an EC2 instance in the Developer Tools account, needs to deploy updates to a specific Lambda function named 'TradeProcessor' (ARN: arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor). A solutions architect has been tasked with designing an IAM configuration that provides the necessary cross-account access while adhering strictly to the principle of least privilege. Which of the following configurations is the most secure and meets the requirements?

  • In the Production account, create an IAM role named LambdaUpdateRole with an IAM policy that allows the lambda:UpdateFunctionCode action on the resource arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor. Configure the role's trust policy to allow sts:AssumeRole actions from the specific IAM role ARN associated with the EC2 instance in the Developer Tools account.

  • In the Production account, create an IAM user with programmatic access. Attach a policy to the user that allows the lambda:UpdateFunctionCode action on the TradeProcessor function ARN. Store the user's access key and secret key in AWS Secrets Manager in the Developer Tools account and grant the EC2 instance's IAM role permission to retrieve them.

  • In the Production account, create an IAM role named LambdaUpdateRole with an IAM policy that allows the lambda:* action on all resources ("Resource": "*"). Configure the role's trust policy to allow sts:AssumeRole actions from the IAM role associated with the EC2 instance in the Developer Tools account.

  • In the Production account, create an IAM role named LambdaUpdateRole with a policy allowing lambda:UpdateFunctionCode on the TradeProcessor function ARN. Configure the role's trust policy to allow sts:AssumeRole actions from the root of the Developer Tools account ("Principal": {"AWS": "arn:aws:iam::111122223333:root"}).
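To make the trade-offs above concrete, least-privilege cross-account access always involves two documents: a trust policy (who may assume the role) and a permissions policy (what the role may do). A sketch using the ARNs from the question; the pipeline role name `PipelineInstanceRole` is a hypothetical placeholder:

```python
import json

# Trust policy on a role in the Production account: only one specific IAM
# role from the Developer Tools account (role name is an assumption) may
# assume it, rather than the whole account root.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/PipelineInstanceRole"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: a single action scoped to a single function ARN,
# instead of lambda:* on "Resource": "*".
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:UpdateFunctionCode",
        "Resource": "arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor",
    }],
}

print(json.dumps(permissions_policy, indent=2))
```

Note how narrowing the `Principal` to a role ARN, rather than the account root, is what distinguishes the tightest option from a merely workable one.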

Question 4 of 15

A financial services company is migrating its on-premises data center to AWS. The migration includes a 500 TB on-premises NAS that stores critical financial analytics data. The company has an existing 1 Gbps AWS Direct Connect connection, which is currently utilized at 40% capacity for other business operations. The project timeline requires the initial 500 TB data transfer to be completed within 30 days. After the initial transfer, a subset of the data, approximately 50 TB, will continue to be updated on-premises and requires ongoing synchronization with the target Amazon S3 bucket until the final application cutover in three months.

The company's security policy mandates end-to-end encryption for all data in transit. A solutions architect needs to design the most efficient and cost-effective migration strategy that meets these requirements.

Which approach should the architect recommend?

  • Use AWS DataSync to transfer the entire 500 TB dataset over the Direct Connect connection. Schedule the DataSync task to run continuously until the migration is complete.

  • Use AWS Snowball Edge Storage Optimized devices for the initial bulk transfer. For ongoing synchronization, configure an AWS Transfer Family SFTP endpoint and use a scheduled script to sync changes.

  • Deploy an AWS Storage Gateway in File Gateway mode on-premises. Use AWS DataSync to migrate the entire 500 TB of data from the NAS to the File Gateway to be uploaded to Amazon S3.

  • Use AWS Snowball Edge Storage Optimized devices for the initial 500 TB transfer. Then, use an AWS DataSync agent on-premises to perform ongoing synchronization over the Direct Connect link.

Question 5 of 15

Your company operates 30 AWS accounts that are organized with AWS Organizations. Finance wants an interactive Amazon QuickSight dashboard that shows charge-back information by linked account, cost category, and cost-allocation tag. The reporting solution must do the following:

  • Include resource-ID-level cost details.
  • Refresh automatically at least once every 24 hours.
  • Retain all historical cost data for multi-year trend analysis.
  • Minimize operational overhead and avoid third-party tools.

Which approach will meet these requirements?

  • Export cost-optimization check results from AWS Trusted Advisor for every account to Amazon S3 each day and use AWS Glue and QuickSight to create cost dashboards from the exported reports.

  • Create individual AWS Budgets for each linked account, have the budgets send daily Amazon SNS notifications, store the notifications in Amazon S3 through an AWS Lambda subscriber, and build QuickSight visuals from the stored messages.

  • Enable an AWS Cost and Usage Report with resource-ID and hourly granularity in the management account. Deliver the report to an Amazon S3 bucket and turn on the Cost and usage dashboard powered by QuickSight to visualize the data.

  • Enable hourly granularity in AWS Cost Explorer and schedule a daily AWS Lambda function to call the Cost Explorer API, store the CSV output in Amazon S3, and query it with Amazon Athena for QuickSight dashboards.

Question 6 of 15

A global enterprise is architecting a multi-account AWS environment. A central 'Shared Services' VPC hosts centralized tools. Numerous 'Application' VPCs, each in a separate AWS account, host business applications. The EC2 instances in these Application VPCs require frequent access to Amazon S3 and Amazon DynamoDB. The networking team has raised concerns about IP address exhaustion in the Application VPCs. Security requirements mandate that all traffic to S3 and DynamoDB must remain within the AWS network and be restricted to a specific list of approved resources. Which network design should a solutions architect recommend to meet these requirements in the most scalable and resource-efficient manner?

  • In each Application VPC, configure a NAT Gateway in a public subnet and update the route tables for the private subnets to direct S3 and DynamoDB traffic through the NAT Gateway.

  • Create VPC Interface Endpoints for S3 and DynamoDB in the central Shared Services VPC. Use AWS Transit Gateway to connect all Application VPCs to the Shared Services VPC and route all AWS service traffic through the centralized endpoints.

  • In each Application VPC, create VPC Interface Endpoints for both Amazon S3 and Amazon DynamoDB. Attach an endpoint policy to each endpoint to restrict access to the approved resources.

  • In each Application VPC, create VPC Gateway Endpoints for both Amazon S3 and Amazon DynamoDB. Attach an endpoint policy to each endpoint that explicitly allows access only to the approved S3 buckets and DynamoDB tables.
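The endpoint policies mentioned in these options are ordinary IAM-style JSON documents attached to the endpoint itself. A minimal sketch of an S3 endpoint policy restricting traffic to one approved bucket; the bucket name is hypothetical:

```python
import json

# Illustrative VPC endpoint policy: traffic through the endpoint is allowed
# only to an approved bucket (bucket name is an assumption for this sketch).
# Both the bucket ARN and the object ARN ("/*") are needed, because bucket-
# level and object-level actions resolve against different resources.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::approved-analytics-bucket",
            "arn:aws:s3:::approved-analytics-bucket/*",
        ],
    }],
}

print(json.dumps(endpoint_policy, indent=2))
```

Gateway endpoints are route-table entries rather than elastic network interfaces, which is why they consume no VPC IP addresses, a point worth keeping in mind given the scenario's exhaustion concern.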

Question 7 of 15

A solutions architect is designing a large multi-tenant SaaS application on AWS. The application uses a fleet of EC2 instances in an Auto Scaling group to process asynchronous jobs from an Amazon SQS queue. A single job from one tenant, known as a 'poison pill', could potentially cause a worker instance to crash repeatedly. This could lead to a rapid succession of instance terminations and launches, consuming resources and impacting the job processing capability for all tenants sharing the fleet. The architect needs to design a solution that minimizes the blast radius of such a failure, ensuring a problem caused by a single tenant affects the fewest other tenants possible. Which approach provides the most effective failure isolation for this scenario?

  • Configure the Auto Scaling group to span multiple Availability Zones and place an Application Load Balancer in front of the EC2 instances to distribute jobs.

  • Implement shuffle sharding by creating multiple target groups (virtual shards) from the total worker fleet and mapping each tenant to a unique combination of target groups.

  • Configure a dead-letter queue (DLQ) on the main SQS queue to automatically isolate messages that fail processing multiple times.

  • Implement a strict bulkhead pattern by provisioning a dedicated Auto Scaling group and SQS queue for each tenant.

Question 8 of 15

Your organization is migrating several on-premises Kubernetes microservices to AWS. Each microservice team will receive its own AWS account. A central networking account already owns a VPC and an Amazon EFS file system that must remain in place. Security and platform teams have issued these requirements:

  • Cluster capacity (patching, scaling, operating system updates) must incur the least possible manual effort.
  • Pods must never use node instance credentials; each microservice must receive only the AWS permissions it needs.
  • The existing Amazon EFS file system must be available to the microservices as persistent, POSIX-compatible shared storage.
  • Network administrators must retain ownership of subnets and route tables, but application teams must be able to deploy workloads from their own accounts.

Which architecture best meets all of these requirements?

  • Create a centralized Amazon EKS cluster in the networking account. Configure an AWS Fargate profile for each microservice namespace, share the VPC subnets with workload accounts by using AWS Resource Access Manager, mount the existing Amazon EFS file system with the Amazon EFS CSI driver, and map every Kubernetes service account to its own IAM role by using IAM roles for service accounts.

  • Implement a self-managed Kubernetes cluster on EC2 instances launched in a shared subnet of the networking account. Configure cross-account SSH access for each team, mount the EFS file system directly on the hosts, and use security groups on the nodes to isolate traffic.

  • Deploy an Amazon EKS cluster in every workload account with self-managed EC2 nodes, peer each cluster's VPC to the networking account, mount the EFS file system by exporting it over NFS, and store static AWS access keys in Kubernetes secrets for applications that call AWS services.

  • Provision a single Amazon EKS cluster in the networking account with managed EC2 node groups. Disable IAM roles for service accounts so that pods use the node instance profile, and attach the EFS file system by installing the NFS client on every node.

Question 9 of 15

A financial services company runs a critical monolithic application on a fleet of Amazon EC2 instances behind an Application Load Balancer. The current deployment process involves manually stopping the application, deploying the new version on all instances simultaneously, and then restarting the application. This 'all-at-once' method results in significant downtime during each release and makes rollbacks a complex, time-consuming manual effort. The company wants to improve its operational excellence by adopting a deployment strategy that eliminates downtime and minimizes risk. As a solutions architect, which strategy should you recommend to meet these requirements?

  • Implement an in-place rolling update by configuring the Auto Scaling group to replace instances one by one with a new launch template version.

  • Implement a blue/green deployment strategy using AWS CodeDeploy, configuring it to shift traffic between two environments via the Application Load Balancer.

  • Automate the existing all-at-once deployment process using AWS Systems Manager Run Command to execute the deployment scripts simultaneously across all instances.

  • Re-platform the application onto AWS Elastic Beanstalk and configure its environment to use a managed rolling update deployment policy.

Question 10 of 15

An e-commerce company is refactoring a legacy order-processing application into several microservices that run in separate AWS accounts. The monolith currently writes every order event to an Amazon SQS queue. A Lambda function examines each message's JSON payload and forwards it to three downstream SQS queues, one per microservice, based on the value of the eventType field (ORDER_CREATED, PAYMENT_CAPTURED, or ORDER_CANCELLED).

The development team wants to retire the Lambda router to reduce operational overhead, keep costs low, and continue using SQS for downstream processing. Exactly-once delivery and strict ordering are not required.

Which solution will meet these requirements with the least custom code?

  • Publish every order event to a single Amazon SNS standard topic. Create a dedicated Amazon SQS queue for each microservice and subscribe each queue to the topic. Attach a payload-based filter policy that matches only the required eventType values for that microservice.

  • Configure an Amazon EventBridge custom event bus. Publish each order event to the bus and create one rule per eventType that routes matching events to the appropriate SQS queue.

  • Replace the Lambda router with an Amazon SNS FIFO topic. Set the eventType value as the message-group ID and subscribe each microservice's SQS queue to the topic so that only matching messages are delivered.

  • Create three separate Amazon SNS topics, one for each eventType. Modify the order-processing service so that it publishes every event to all three topics, and have each microservice subscribe to its dedicated topic.
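The filter policies referenced in these options are small JSON documents evaluated by SNS before delivery. A sketch of one microservice's filter policy, with a tiny matcher that simulates the delivery decision for flat, string-valued payloads (real SNS matching supports many more operators):

```python
import json

# Filter policy for the subscription of one microservice's queue; the
# eventType values come from the question scenario.
filter_policy = {"eventType": ["ORDER_CREATED"]}

def matches(policy, message_body):
    """Minimal simulation of SNS payload-based filtering: every policy key
    must be present in the payload with one of the allowed values."""
    payload = json.loads(message_body)
    return all(payload.get(key) in allowed for key, allowed in policy.items())

created = json.dumps({"eventType": "ORDER_CREATED", "orderId": "A123"})
cancelled = json.dumps({"eventType": "ORDER_CANCELLED", "orderId": "A124"})

print(matches(filter_policy, created))    # delivered to this queue
print(matches(filter_policy, cancelled))  # filtered out before delivery
```

Because filtering happens inside SNS, the publisher sends each event once and no routing code runs on the consumer side, which is what lets the Lambda router be retired.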

Question 11 of 15

A financial services company is deploying a new, computationally intensive workload on AWS for market simulation. The application is tightly coupled and requires the lowest possible inter-node latency for optimal performance. The workload runs for several hours at a time and is fault tolerant: it can be safely interrupted, so the company wants to run it as cost-effectively as possible. The company also wants to maximize the availability of compute capacity by allowing for flexibility in the specific EC2 instance types used, mitigating the risk of capacity unavailability for any single instance type.

Which approach meets all of these requirements MOST effectively?

  • Configure an EC2 Fleet with a Spot allocation strategy. Specify multiple instance types that meet the performance requirements and launch them into a single Cluster Placement Group within a single Availability Zone.

  • Create an Auto Scaling group with On-Demand Instances launched into a Spread Placement Group across multiple Availability Zones. Use multiple instance types in the launch template's overrides.

  • Use an EC2 Auto Scaling group with a mixed instances policy to launch instances into a Cluster Placement Group that spans multiple Availability Zones.

  • Launch EC2 Spot Instances using an Auto Scaling group configured with a launch template. Configure the Auto Scaling group to launch instances into a Partition Placement Group spread across multiple Availability Zones.

Question 12 of 15

Acme Group has merged its healthcare business (subject to HIPAA) and its payment-processing subsidiary (subject to PCI-DSS). The company already uses AWS Organizations with all features enabled and operates centralized log-archive and security-tooling accounts in a dedicated Security OU. Leadership wants to 1) apply and audit guardrails for HIPAA and PCI workloads independently, 2) continue sharing the existing security services, 3) receive a single consolidated bill for the entire conglomerate, and 4) avoid additional operational overhead. Which multi-account and OU strategy best satisfies these requirements?

  • Create a separate AWS Organization for the payment subsidiary, enable consolidated billing in each organization, and share the log-archive account between the two organizations by using AWS Resource Access Manager.

  • Place all healthcare and payment workloads in separate VPCs inside a single shared AWS account, enable AWS Control Tower detective guardrails, and use an AWS Cost Category to allocate each subsidiary's spend.

  • Keep all workload accounts in the current Workloads OU, attach both HIPAA and PCI-DSS SCP sets to that OU, and rely on cost-allocation tags to distinguish the two subsidiaries.

  • Expand the current organization by creating two top-level workload OUs (Healthcare and Payments), move the respective workload accounts into each OU, retain the Security OU with the shared log-archive and security-tooling accounts, and attach HIPAA-specific SCPs to the Healthcare OU and PCI-DSS SCPs to the Payments OU while using the existing management account for consolidated billing.

Question 13 of 15

An investment-banking firm is re-architecting its proprietary trade-execution platform from on-premises VMs to AWS.
The Java microservice is stateless and scales horizontally from 10 to more than 500 vCPUs during U.S. trading hours.
Technical requirements for the new compute layer are:

  • Sub-millisecond node-to-node network latency inside the Availability Zone.
  • Isolation of the service that handles client-side TLS private keys so that even root on the EC2 host cannot read the keys.
  • A phased migration to AWS Graviton-based instances to reduce cost while still supporting the current x86_64 build.
  • Automatic horizontal scaling and zero-downtime rolling updates.

Which architecture meets all of the requirements with the LEAST operational overhead?

  • Launch the microservice on EC2 Dedicated Hosts running only M6i instances across two Availability Zones. Use an AWS CloudHSM cluster for key storage and distribute traffic with an Application Load Balancer.

  • Create an Amazon EC2 Auto Scaling group that uses a mixed-instances policy with separate launch templates for M6i (x86_64) and M6g (Arm64) instances. Enable Nitro Enclaves in each template, place the group in a cluster placement group, configure weighted capacity and a capacity-optimized allocation strategy, and use Instance Refresh for rolling updates.

  • Rewrite the application as AWS Lambda functions invoked through Amazon API Gateway. Use AWS KMS customer-managed keys for signing and configure Provisioned Concurrency to meet peak load.

  • Containerize the service and deploy it on AWS Fargate with Amazon ECS. Store the TLS private keys in AWS Secrets Manager and use Service Auto Scaling to add or remove tasks during trading hours.

Question 14 of 15

A global e-commerce company hosts its single-page application on EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The application serves static assets from the path /static and makes personalized API calls at /api. Customers outside North America report first-page load times above 3 seconds, and analysis shows that 70 percent of the requests for /static originate outside the United States, accounting for most of the ALB's peak throughput. The architecture team must reduce end-to-end latency for worldwide users, decrease the load on the origin, keep TLS termination as close to viewers as possible, and ensure that user-specific API responses are never cached. No code or DNS changes to existing URLs are allowed. Which strategy best meets these requirements?

  • Create an Amazon CloudFront distribution in front of the ALB, add a cache behavior for /static/* that uses an optimized cache policy with compression, add a cache behavior for /api/* that uses the CachingDisabled managed policy and forwards all headers, and enable Origin Shield for the ALB origin.

  • Provision AWS Global Accelerator with the ALB as the only endpoint and enable HTTP/2 to improve global TCP performance.

  • Deploy identical EC2 application stacks behind ALBs in multiple Regions and use Amazon Route 53 latency-based routing to direct users to the nearest Region.

  • Enable S3 Transfer Acceleration on a new S3 bucket, migrate all static assets to the bucket, and update the application to reference the new bucket while continuing to access /api through the ALB.

Question 15 of 15

A company operates hundreds of Amazon EC2 instances in private subnets across three production VPCs in the us-east-1 Region. The instances must receive Run Command instructions and software patches by using AWS Systems Manager and must also upload command output logs to an Amazon S3 bucket in the same Region. A new security policy forbids any traffic from these subnets from traversing a NAT gateway, internet gateway, or public IP address. The networking team also wants every AWS SDK call that the instances make to resolve to private IP addresses inside the VPCs and to minimize ongoing data-processing charges.

Which solution meets these requirements while providing the lowest operational cost?

  • In each VPC create gateway VPC endpoints for Amazon S3, AWS Systems Manager, and Amazon EC2. Update the private subnet route tables to point traffic for these services to the gateway endpoints and delete the NAT gateways.

  • In each VPC create interface VPC endpoints for SSM, SSMMessages, and EC2Messages, enable private DNS for the endpoints, and attach an endpoint policy that allows only the required Systems Manager actions. Create a gateway VPC endpoint for Amazon S3 and add it to the route tables used by the private subnets. Remove the NAT gateway routes.

  • Create an endpoint service (AWS PrivateLink) for Systems Manager and S3 in a shared-services VPC, share the service with the other VPCs by using AWS RAM, and create Route 53 private hosted zone records that map the public service domains to the endpoint's private IP addresses. Remove the NAT gateways.

  • Keep the NAT gateways in place but attach an S3 gateway endpoint to each route table. Add an IAM policy to every instance profile that denies access to public IP addresses.
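For reference on the scenario above, Systems Manager agent connectivity rides on three distinct service endpoints, while Amazon S3 supports a gateway endpoint that carries no hourly or per-GB data-processing charge. A sketch of the endpoint set, using the Region from the question (the service names follow AWS's standard naming):

```python
# Illustrative inventory of the VPC endpoints a private-subnet Systems
# Manager setup typically needs in one Region. The three interface
# endpoints carry the SSM agent channels; S3 uses a gateway endpoint.
REGION = "us-east-1"

endpoints = [
    {"service": f"com.amazonaws.{REGION}.ssm",         "type": "Interface", "private_dns": True},
    {"service": f"com.amazonaws.{REGION}.ssmmessages", "type": "Interface", "private_dns": True},
    {"service": f"com.amazonaws.{REGION}.ec2messages", "type": "Interface", "private_dns": True},
    {"service": f"com.amazonaws.{REGION}.s3",          "type": "Gateway",   "private_dns": False},
]

for e in endpoints:
    print(e["service"], e["type"])
```

Enabling private DNS on the interface endpoints is what makes the standard SDK service hostnames resolve to private IPs inside the VPC, with no application changes.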