AWS Certified Solutions Architect Professional Practice Test (SAP-C02)

Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited access.
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified Solutions Architect Professional SAP-C02 Information

The AWS Certified Solutions Architect – Professional (SAP-C02) exam is for people who want to demonstrate advanced cloud-design skills on Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam demonstrates a higher level of knowledge than the associate-level test and is often expected for senior cloud roles.

This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.

AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam lasts 180 minutes, includes 75 questions, and requires a scaled score of 750 out of 1,000 to pass. Preparation usually involves extensive hands-on practice with AWS services, study guides, and practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test

Press Start when you are ready, or press Change to modify any settings for the practice test.

  • Questions: 15
  • Time: Unlimited
  • Included Topics:
    Design Solutions for Organizational Complexity
    Design for New Solutions
    Continuous Improvement for Existing Solutions
    Accelerate Workload Migration and Modernization

Free Preview

This test is a free preview; no account is required.
Subscribe to unlock all content, keep track of your scores, and access AI features!

Question 1 of 15

A company is implementing a centralized logging solution within its multi-account AWS environment, which is governed by AWS Organizations. A dedicated Security account (ID 111122223333) hosts an Amazon S3 bucket that receives AWS CloudTrail logs from all member accounts. Compliance rules require every log object in the bucket to be encrypted at rest with a single customer-managed AWS KMS key that also resides in the Security account.

Security analysts, using a specific IAM role in the Security account, must be able to decrypt and analyze the logs. The design must follow the principle of least privilege.

Which configuration correctly enables cross-account encryption of the logs and decryption by the analysts?

  • Create an IAM role in the Security account that member accounts can assume and give that role kms:GenerateDataKey* permission. Configure each trail to use this assumed role for log delivery. Update the KMS key policy to allow the security-analyst IAM role kms:Decrypt permission.

  • In the Security account, create KMS grants that allow the cloudtrail.amazonaws.com service principal to perform the kms:Encrypt action for each member account. Create a separate grant that allows the security-analyst IAM role kms:Decrypt permission.

  • Modify the KMS key policy in the Security account. Add a statement that allows the cloudtrail.amazonaws.com service principal the kms:GenerateDataKey*, kms:Decrypt, and kms:DescribeKey actions, using a condition to limit access to requests from the organization's member accounts. Add another statement that grants the security-analyst IAM role the kms:Decrypt action.

  • Attach an IAM policy to the CloudTrail service-linked role in each member account that grants the kms:Encrypt action on the central KMS key's ARN. In the Security account's KMS key policy, add each member account's root ARN to the principal list to allow access.
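
The key-policy mechanics these options revolve around can be illustrated with a short sketch. This is not a confirmed answer, just the shape of a CloudTrail-oriented KMS key policy in the Security account; the organization ID and the role name are placeholder values.

```python
import json

# Sketch of a KMS key policy in the Security account that lets CloudTrail
# encrypt logs delivered from organization member accounts and lets a
# security-analyst role decrypt them. "o-exampleorgid" and the role name
# are illustrative placeholders, not values from the question.
SECURITY_ACCOUNT = "111122223333"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudTrailUseOfKey",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*", "kms:DescribeKey"],
            "Resource": "*",
            # Scope the service principal to requests made on behalf of
            # this organization only.
            "Condition": {
                "StringEquals": {"aws:SourceOrgID": "o-exampleorgid"}
            },
        },
        {
            "Sid": "AllowAnalystDecrypt",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{SECURITY_ACCOUNT}:role/security-analyst"
            },
            "Action": "kms:Decrypt",
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

The policy document would be attached to the customer-managed key itself; key policies, unlike IAM policies, are the primary access-control mechanism for cross-account KMS use.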

Question 2 of 15

A financial services company uses AWS Organizations to manage a multi-account environment. A central 'SharedServices' account hosts a customer-managed KMS key for encrypting sensitive data. A separate 'Security' account is used for centralized logging and auditing. The company's security policy mandates that all new S3 objects in member accounts must be encrypted at rest using Server-Side Encryption with the specific KMS key (SSE-KMS) from the SharedServices account. Any attempts to upload objects without this specific encryption, including using SSE-S3 or other KMS keys, must be denied. Additionally, all cryptographic operations using the shared KMS key must be logged to an S3 bucket in the Security account.

Which combination of actions provides the most effective and scalable solution to enforce these requirements?

  • Deploy an AWS Config rule in each member account to detect S3 objects that are not encrypted with the specified shared KMS key. Configure the rule to trigger a remediation action via an AWS Lambda function that deletes non-compliant objects. In the SharedServices account, grant the Lambda execution roles in each member account access to the KMS key. Use an AWS Config aggregator in the Security account to view compliance status.

  • In each member account, create an IAM identity-based policy that denies s3:PutObject unless the request headers specify SSE-KMS with the correct key ARN, and attach this policy to all relevant IAM roles. In the SharedServices account, update the KMS key policy to allow access from all member account roles. In each member account, configure a CloudTrail trail to send logs to a central S3 bucket in the Security account.

  • In the Organizations management account, create a Service Control Policy (SCP) that denies the s3:PutObject action if the s3:x-amz-server-side-encryption-aws-kms-key-id condition key in the request does not match the ARN of the shared KMS key. In the SharedServices account, modify the KMS key policy to grant kms:GenerateDataKey and kms:Decrypt permissions to the necessary service roles in the member accounts. Create an organization-wide CloudTrail trail in the management account to deliver logs to an S3 bucket in the Security account.

  • In the SharedServices account, modify the KMS key policy to grant the s3.amazonaws.com service principal access from all accounts in the organization. In each member account, create an S3 bucket policy that mandates SSE-KMS encryption using the shared key's ARN. Configure an Amazon EventBridge rule in the default event bus of each member account to forward all S3 and KMS API calls to a central event bus in the Security account for auditing.
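
The SCP-style deny that several options mention can be sketched as follows. The key ARN is a placeholder, and this is only the enforcement half of the pattern; the shared key's own policy must still grant the member-account roles permission to use it.

```python
# Sketch of a Service Control Policy that denies s3:PutObject unless the
# request names the shared KMS key. The ARN is an illustrative placeholder.
# Note: if the condition key is absent from the request (e.g. an SSE-S3
# upload), StringNotEquals still matches, so the deny applies.
SHARED_KEY_ARN = "arn:aws:kms:us-east-1:444455556666:key/EXAMPLE-KEY-ID"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonSharedKeyUploads",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": SHARED_KEY_ARN
                }
            },
        }
    ],
}
```

Because SCPs attach at the organization, OU, or account level, one policy covers every member account without per-account IAM changes.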

Question 3 of 15

Your organization operates a primary data center and must replicate 8 TB of daily database changes to more than 50 Amazon VPCs that are spread across three AWS Regions. Each replication stream must sustain at least 8 Gbps throughput with consistently low latency. The security team mandates encryption of all traffic that traverses the link between the data center and AWS. The network team wants to avoid public-internet paths, minimize the number of physical circuits and virtual interfaces that must be managed, and be able to add additional VPCs or Regions without ordering new circuits. Which connectivity option meets these requirements MOST cost-effectively?

  • Provision a 10 Gbps dedicated AWS Direct Connect connection; create separate private virtual interfaces to each VPC; rely on security groups and network ACLs for traffic protection.

  • Order a 10 Gbps dedicated AWS Direct Connect connection that supports MACsec, create one transit virtual interface to an AWS Direct Connect gateway, and associate the gateway with AWS Transit Gateways in each Region.

  • Implement AWS VPN CloudHub with BGP-based Site-to-Site VPN tunnels from the data center to every VPC and use route propagation for connectivity.

  • Establish multiple AWS Site-to-Site VPN connections over the internet to AWS Transit Gateways in each Region, use equal-cost multipath routing across the tunnels, and accelerate traffic with AWS Global Accelerator.

Question 4 of 15

A financial-services company is building a hybrid-cloud architecture that connects its on-premises data center to multiple AWS VPCs over AWS Direct Connect. The company requires seamless, bidirectional DNS resolution: on-premises applications must resolve private hostnames for Amazon EC2 instances in the VPCs (for example, app-server.prod.vpc.example.com), and EC2 instances must resolve hostnames that live only in the on-premises namespace (for example, db.corp.internal). The solution must be highly available, scalable, and centrally manageable, and it must not require custom DNS server software on EC2 instances.

Which solution meets these requirements most effectively?

  • Deploy a pair of highly available EC2 instances running BIND in a central VPC. Configure on-premises DNS servers to forward queries to these instances, and configure the BIND servers to forward queries for the on-premises domain back to the on-premises DNS servers.

  • Create Route 53 Resolver inbound and outbound endpoints. Configure conditional forwarding on the on-premises DNS servers to send queries for the VPC domain to the inbound endpoint. Create Resolver rules to forward queries for the on-premises domain to the on-premises DNS servers via the outbound endpoint.

  • Create a private hosted zone for the on-premises domain (corp.internal) and associate it with all VPCs. Create a Route 53 outbound endpoint and a rule to forward all queries from the VPCs to the on-premises DNS servers.

  • Create a Route 53 inbound endpoint in each VPC. Configure the on-premises DNS servers with conditional forwarders that send all AWS-related DNS queries to the IP addresses of the inbound endpoints.
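
The Resolver-endpoint pattern described above maps to two API calls. A sketch of the request shapes, assuming placeholder subnet IDs, security group IDs, and on-premises DNS server addresses:

```python
# Illustrative boto3 request parameters for the Route 53 Resolver pattern:
# an outbound endpoint plus a forwarding rule for the on-premises domain.
# All IDs and IP addresses are placeholders.

# route53resolver.create_resolver_endpoint(**outbound_endpoint_request)
outbound_endpoint_request = {
    "CreatorRequestId": "outbound-endpoint-2024",
    "Name": "to-on-prem",
    "Direction": "OUTBOUND",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    # Two subnets in different AZs for high availability.
    "IpAddresses": [
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},
    ],
}

# route53resolver.create_resolver_rule(**forward_rule_request)
forward_rule_request = {
    "CreatorRequestId": "corp-internal-rule-2024",
    "Name": "forward-corp-internal",
    "RuleType": "FORWARD",
    "DomainName": "corp.internal",
    "TargetIps": [{"Ip": "192.168.10.5", "Port": 53}],
    "ResolverEndpointId": "rslvr-out-exampleid",
}
```

The resulting rule can then be shared to other VPCs (and accounts, via AWS RAM) and associated with each VPC, which is what makes this approach centrally manageable.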

Question 5 of 15

You operate latency-sensitive trading workloads on bare-metal servers in an Equinix colocation facility that is also an AWS Direct Connect location. Several microservices run in multiple Amazon VPCs that belong to three different AWS accounts in the us-east-1 and us-east-2 Regions. Network engineering requires a single, private 10-Gbps link that avoids internet hops, delivers predictable latency, and allows additional VPCs to be connected later without ordering new physical circuits. Which connectivity strategy best meets these requirements?

  • Install an AWS Outposts rack in the colocation facility and rely on the Outposts service link over the public internet to exchange traffic with the VPCs.

  • Attach each VPC to an AWS Transit Gateway, create two Site-to-Site VPN tunnels from the Transit Gateway to the on-premises router in the colocation facility, and enable equal-cost multi-path routing.

  • Ask an AWS Direct Connect Delivery Partner at a different Direct Connect location to provision a 10-Gbps hosted connection and extend the circuit to the colocation data center over MPLS.

  • Request a 10-Gbps dedicated AWS Direct Connect cross-connect in the colocation facility. Create one private virtual interface that terminates on a Direct Connect gateway, and associate the gateway with the virtual private gateways of each VPC.

Question 6 of 15

Your company deployed its first workload in a new VPC that uses the IPv4 CIDR block 10.2.0.0/20. Three months later, security and operations teams redefine the network-segmentation standard. The VPC must now contain three public and three private subnets in each of three Availability Zones (18 subnets total). Every subnet must provide at least 400 usable IPv4 addresses to accommodate horizontally scaling container tasks. Existing resources in the current address range must keep running without an IP-address change.

Which action will satisfy the new requirements with the least operational effort?

  • Create a new VPC with a /16 CIDR block, migrate all workloads into it, and delete the original VPC.

  • Resize each required subnet to /25 so that all 18 subnets fit inside the existing 10.2.0.0/20 range.

  • Associate a non-overlapping secondary IPv4 CIDR block such as 10.2.64.0/18 with the VPC and create the new subnets from that range.

  • Enlarge the VPC's primary CIDR block from /20 to /18, then recreate all subnets so they meet the new size requirement.
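
The sizing arithmetic behind this question is worth making explicit. A short sketch (AWS reserves 5 addresses in every subnet, which is why 400 usable addresses push each subnet past the 256-address /24 boundary):

```python
import math

# Arithmetic behind the question: 18 subnets, each needing at least 400
# usable IPv4 addresses. AWS reserves 5 addresses per subnet.
AWS_RESERVED = 5
needed = 400 + AWS_RESERVED                      # 405 addresses per subnet
host_bits = math.ceil(math.log2(needed))         # 9 host bits
subnet_prefix = 32 - host_bits                   # -> a /23 per subnet
total = 18 * 2 ** host_bits                      # 9,216 addresses overall
vpc_prefix = 32 - math.ceil(math.log2(total))    # -> at least a /18 overall

# The existing 10.2.0.0/20 offers only 4,096 addresses, which cannot hold
# 18 subnets of this size, so a non-overlapping secondary CIDR block of at
# least /18 must be associated with the VPC.
```

This also explains why resizing within the existing /20 cannot work and why a VPC's primary CIDR cannot simply be enlarged in place.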

Question 7 of 15

A global corporation is adopting a multi-VPC architecture on AWS, with numerous VPCs spread across several AWS Regions. They also maintain a significant on-premises data center connected to AWS via AWS Direct Connect. The key requirements are to enable seamless, transitive communication between all VPCs (inter-VPC) and between the on-premises network and all VPCs. The solution must be highly scalable, centrally managed, and minimize operational overhead. A solutions architect needs to design the optimal network topology. Which approach best meets these requirements?

  • Create a full mesh of VPC peering connections between all VPCs. Establish a separate AWS Direct Connect private virtual interface (VIF) from the on-premises network to each individual VPC.

  • Deploy an AWS Transit Gateway in each region. Peer the Transit Gateways across regions and create attachments for each VPC. Connect the on-premises data center to a Transit Gateway via a Direct Connect Gateway attachment.

  • Use an AWS Direct Connect Gateway and associate it with a Virtual Private Gateway (VGW) in each VPC. This will provide connectivity from on-premises to all VPCs and enable inter-VPC communication through the Direct Connect Gateway.

  • Designate one VPC as a 'transit hub'. Use VPC peering to connect all other 'spoke' VPCs to this hub VPC. Establish a Direct Connect connection to the hub VPC and configure routing instances within it to forward traffic.

Question 8 of 15

A global enterprise is designing its AWS network architecture using a multi-account strategy with AWS Organizations. The design includes a central "Network" account that hosts an AWS Transit Gateway (TGW). Multiple "Application" accounts, each with a VPC, are attached to this TGW. A key security requirement is that all traffic between the Application VPCs must be inspected by a fleet of next-generation firewall (NGFW) appliances. These appliances are deployed in a dedicated "Inspection" VPC, also owned by the Network account. The Application VPCs have been deployed with overlapping CIDR blocks.

Which solution should a solutions architect recommend to meet these requirements in the most scalable and resilient way?

  • Create a VPC endpoint service using AWS PrivateLink in the Inspection VPC, fronting the NGFW appliances. Create interface endpoints for this service in each Application VPC. Update the route tables in all Application VPCs to route traffic through the local interface endpoints for inspection.

  • Deploy the NGFW appliances behind a Network Load Balancer (NLB) in the Inspection VPC. Configure Transit Gateway route tables to forward traffic to the NLB. The firewall appliances will perform Source NAT (SNAT) on the traffic before routing it back to the Transit Gateway for delivery.

  • Create a full mesh of VPC Peering connections between all Application VPCs and the Inspection VPC. Configure route tables in each Application VPC to forward traffic to the Inspection VPC, where the NGFW appliances are deployed on EC2 instances behind a Network Load Balancer.

  • Deploy the NGFW appliances as targets for a Gateway Load Balancer (GWLB) in the Inspection VPC. Configure the Transit Gateway to route traffic between Application VPCs to the Inspection VPC attachment. In the Inspection VPC, create GWLB endpoints and configure routing to direct traffic from the TGW through the GWLB for inspection before it is returned to the TGW.
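
One operational detail of centralized-inspection designs like the one described above: the Inspection VPC's Transit Gateway attachment is typically set to appliance mode, so that both directions of a flow are sent to the same appliance. An illustrative request shape, with a placeholder attachment ID:

```python
# Illustrative boto3 parameters for enabling appliance mode on the
# Inspection VPC's Transit Gateway attachment, so forward and return
# traffic of a flow hash to the same AZ (and thus the same appliance).
# The attachment ID is a placeholder.

# ec2.modify_transit_gateway_vpc_attachment(**appliance_mode_request)
appliance_mode_request = {
    "TransitGatewayAttachmentId": "tgw-attach-0123456789abcdef0",
    "Options": {"ApplianceModeSupport": "enable"},
}
```

Without appliance mode, asymmetric routing across Availability Zones can bypass the stateful firewall on the return path.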

Question 9 of 15

A solutions architect is troubleshooting a connectivity issue in a hybrid environment. An application running on an EC2 instance in a spoke VPC (10.20.0.0/16) cannot connect to an on-premises database server (192.168.10.50) on port 1433. The spoke VPC is connected to a central inspection VPC via an AWS Transit Gateway. The inspection VPC is connected to the on-premises data center via an AWS Direct Connect connection. All traffic from the spoke VPC to on-premises is routed through firewall appliances in the inspection VPC. On-premises network engineers have confirmed that their firewalls are not blocking the traffic. The architect needs to identify the component in the AWS network path that is blocking the connection. What is the MOST efficient first step to diagnose this issue?

  • Configure Route 53 Resolver Query Logging for the spoke VPC. Analyze the logs to ensure the on-premises database's hostname is correctly resolving to the IP address 192.168.10.50.

  • Enable VPC Flow Logs on the network interfaces for the application instance, the Transit Gateway attachment, and the inspection VPC firewall instances. Query the logs using Amazon Athena to find REJECT entries for traffic destined for 192.168.10.50 on port 1433.

  • Use VPC Reachability Analyzer to create and run an analysis with the application's EC2 instance network interface as the source and the on-premises database IP address (192.168.10.50) as the destination, specifying port 1433.

  • Use the Route Analyzer feature in Transit Gateway Network Manager to analyze the path from the spoke VPC attachment to the Direct Connect gateway attachment, verifying that routes are correctly propagated.

Question 10 of 15

Your company is deploying a two-tier web application in a single Amazon VPC. An Application Load Balancer (ALB) in the public subnets terminates TLS on port 443 and forwards traffic to application servers in private subnets that listen on TCP port 9000. You must meet several compliance requirements: only the ALB may initiate traffic to the application servers on port 9000, the application servers must not be reachable from any other source, return-path traffic must be allowed automatically, and the solution must incur the least ongoing rule maintenance as the environment scales. Which design meets these requirements?

  • In the application-server security group, allow TCP 9000 from 0.0.0.0/0. Attach a custom network ACL that denies all other ports inbound and outbound; update the ACL whenever new instances or ports are needed.

  • Associate a custom network ACL with the private subnets that allows inbound TCP 9000 only from the ALB subnet CIDR blocks and outbound ephemeral ports. Leave a security group on the servers that allows all traffic.

  • Replace the private-subnet route tables with routes that send all VPC-internal traffic to a firewall appliance in a dedicated subnet. Configure the appliance to permit TCP 9000 from the ALB to the application servers; keep the default security group and network ACL.

  • Create one security group for the ALB and another for the application servers. In the application-server security group, add an inbound rule that allows TCP 9000 from the ALB's security-group ID and remove all other inbound rules. Keep the default network ACL for all subnets.
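
The security-group-to-security-group reference that the last option describes has a simple API shape. A sketch with placeholder group IDs:

```python
# Illustrative boto3 parameters for allowing TCP 9000 to the application
# servers only from the ALB's security group. Referencing the ALB's
# security-group ID (rather than a CIDR) means new ALB nodes are covered
# automatically, and security groups are stateful, so return traffic is
# allowed without extra rules. Group IDs are placeholders.

# ec2.authorize_security_group_ingress(**ingress_request)
ingress_request = {
    "GroupId": "sg-appservers-example",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 9000,
            "ToPort": 9000,
            "UserIdGroupPairs": [{"GroupId": "sg-alb-example"}],
        }
    ],
}
```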

Question 11 of 15

A global enterprise now operates more than 150 AWS accounts that are divided into four business-unit OUs. The cloud center of excellence (CCOE) mandates that every account must:

  • Prevent the creation of unencrypted EBS volumes and block uploads to Amazon S3 that are not encrypted with AWS KMS.
  • Enforce the CostCenter and Environment tags with allowed values on every supported AWS resource.
  • Deliver all CloudTrail records from every account to a single immutable log-archive account.
  • Provide each business unit with a consolidated cost view while keeping organization-wide billing.
  • Let developers self-provision new sandbox accounts without opening CCOE tickets.

Which approach best meets all of these requirements while minimizing continuing operational effort?

  • Implement the open-source AWS Landing Zone solution, copy logs into each business-unit account with S3 replication, enforce encryption through bucket policies, require CCOE ticketing for new sandbox accounts, and generate cost visibility from CUR data in Athena.

  • Deploy AWS Control Tower with an OU for each business unit, enable preventive encryption guardrails and an enforced tag policy, allow developers to create sandbox accounts through Account Factory and IAM Identity Center, use the built-in log-archive account for organization-wide CloudTrail, and use consolidated billing with cost-allocation tags for chargeback.

  • Keep all workloads in a single shared AWS account segmented by VPC and IAM, ask developers to tag resources manually, enable default encryption on S3 and EBS, centralize CloudTrail in the same account, and filter costs with Cost Explorer.

  • Use AWS Organizations alone with SCPs that deny unencrypted resource creation and missing tags, store an organization CloudTrail in the management account, create new accounts through Service Catalog and CloudFormation StackSets, and rely on Cost Explorer reports for chargeback.
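
The tag-enforcement requirement in this question is expressed through an Organizations tag policy. A minimal sketch of the policy document syntax, with illustrative allowed values and resource types:

```python
# Sketch of an AWS Organizations tag policy enforcing the CostCenter tag.
# The allowed values and enforced resource types are illustrative; a real
# policy would also cover the Environment tag.
tag_policy = {
    "tags": {
        "CostCenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["1001", "1002", "1003"]},
            # enforced_for turns the policy from report-only into a
            # preventive control for the listed resource types.
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}
```

Tag policies report non-compliant tags organization-wide; only the resource types listed under `enforced_for` are actually blocked from receiving non-compliant tags.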

Question 12 of 15

A global enterprise is designing its multi-region AWS network. The company has a large, existing on-premises IP address space and owns a public /24 IPv4 block. They plan to create hundreds of VPCs across multiple AWS accounts within an AWS Organization. A key requirement is to prevent overlapping IP address ranges between on-premises networks and all new VPCs. Additionally, they want to centrally manage and automate the allocation of VPC CIDR blocks to different business units and enforce specific tagging policies on VPC creation. Which approach provides the most scalable and manageable solution for this IP addressing strategy?

  • Design all VPCs with a small primary CIDR from the 10.0.0.0/8 range. As IP space is depleted, add secondary CIDR blocks to each VPC from the on-premises IP address space.

  • Manually track all AWS-provided private CIDR allocations in a shared spreadsheet. Use AWS Resource Access Manager (RAM) to share subnets from a central VPC to spoke accounts.

  • Implement Amazon VPC IP Address Manager (IPAM) within the AWS Organization. Create IPAM pools from the company's on-premises IP space and use the Bring Your Own IP (BYOIP) feature for their public /24 block. Enforce allocation rules for VPC creation.

  • For all new VPCs, exclusively allocate CIDR blocks from the 100.64.0.0/10 range to ensure no overlap with the existing on-premises network. Use AWS Budgets to monitor IP address consumption.

Question 13 of 15

A global enterprise is designing a multi-account AWS architecture that will host hundreds of applications, each within its own VPC, across multiple AWS Regions. The security team mandates that all east-west (inter-VPC) traffic and north-south (egress to the internet) traffic must be routed through a central point of inspection for deep packet inspection and logging. The solution must be highly scalable, minimize network management overhead, and support transitive routing to on-premises data centers via AWS Direct Connect. Which connectivity strategy best fulfills these requirements?

  • In each region, deploy an AWS Transit Gateway and peer them using inter-region peering. Create a central inspection VPC with a Gateway Load Balancer that fronts a fleet of security appliances. Configure Transit Gateway route tables to forward all traffic to the inspection VPC.

  • Implement a legacy 'Transit VPC' pattern in each region using EC2 instances running third-party routing software. Establish IPsec VPN connections from all spoke VPCs to the Transit VPC to enable transitive routing and inspection.

  • Establish a full-mesh VPC peering configuration for all VPCs within each region. For inter-region traffic, create additional peering connections. Implement traffic inspection by deploying security appliances in every VPC.

  • Use AWS PrivateLink to create VPC endpoints in each spoke VPC for every shared service. For general inter-VPC traffic, establish a limited mesh of VPC peering connections and manage route tables manually.

Question 14 of 15

A global travel-booking company runs a latency-sensitive REST API on Amazon EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The data tier is an Amazon Aurora MySQL cluster. The architects have already extended the database by adding an Aurora Global Database secondary cluster in us-west-2.

Business continuity targets state that, if the primary Region fails, the API must recover in under one minute (RTO < 60 s) and lose at most 1 second of data (RPO < 1 s). Operations teams want to avoid manual DNS updates or lengthy runbook procedures during a Regional outage and prefer a solution that incurs the least ongoing operational overhead.

Which combination of actions will BEST meet these requirements?

  • Create weighted Amazon Route 53 records with health checks for each ALB, set the record TTL to 60 seconds, and trigger an AWS Lambda function from CloudWatch alarms to adjust the weights. Manually promote the Aurora secondary cluster during an outage.

  • Use AWS Elastic Disaster Recovery to replicate the EC2 instances and the Aurora database to us-west-2, keep the target resources stopped, and start them when a Regional failure is declared.

  • Front both Regional ALBs with AWS Global Accelerator, enabling endpoint health checks for automatic traffic failover, and configure Aurora Global Database managed cross-Region failover to promote the secondary cluster when the primary Region is unavailable.

  • Refactor the application into an active/active design that stores data in Amazon DynamoDB global tables and implements bidirectional replication logic between EC2 instances in both Regions.

Question 15 of 15

A central security account manages encryption for three production workload accounts in the us-east-1 Region. The workloads store sensitive data in Amazon S3 and Amazon DynamoDB. Compliance requires:

  • Encryption keys must stay inside AWS-managed FIPS 140-3 HSMs and never leave the service in plaintext.
  • Keys must rotate automatically every 365 days, and earlier key versions must remain available for at least 7 years so archived data can still be decrypted.
  • The disaster-recovery plan mandates that encrypted data be fully readable in us-west-2 within 15 minutes of a regional outage, without application changes.
  • Operations must minimize the number of keys administrators manage and avoid writing custom code for key rotation or cross-Region replication.

Which solution meets all of these requirements with the LEAST operational overhead?

  • Deploy AWS CloudHSM clusters in us-east-1 and us-west-2, create custom key stores, manually replicate key material between clusters, and schedule annual Lambda jobs to rotate the keys.

  • Create separate customer managed KMS keys in both Regions for each workload account. Turn on automatic rotation for every key and rely on AWS Backup cross-Region copy jobs to move encrypted snapshots to us-west-2.

  • Import customer-generated key material into a KMS key in us-east-1, export the plaintext key, import it into a new KMS key in us-west-2, and use an annual Lambda function to re-import fresh key material into both keys.

  • Create one symmetric multi-Region customer managed KMS key in the security account in us-east-1. Enable automatic rotation and use ReplicateKey to create a replica in us-west-2. Add key-policy statements that allow IAM roles in each workload account to perform cryptographic operations, and point all applications to the key ARN.
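
The multi-Region-key workflow named in the last option maps to two KMS API calls: creating the primary key with `MultiRegion` enabled, then replicating it. A sketch of the request parameters, with placeholder IDs:

```python
# Illustrative boto3 request parameters for a multi-Region KMS key and its
# cross-Region replica. The key ID is a placeholder; real multi-Region key
# IDs begin with "mrk-". The primary and the replica share key material,
# so ciphertext written in one Region decrypts in the other.

# kms.create_key(**create_key_request)   # in us-east-1
create_key_request = {
    "Description": "Shared workload data key",
    "KeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "MultiRegion": True,
}

# kms.replicate_key(**replicate_request)  # creates the us-west-2 replica
replicate_request = {
    "KeyId": "mrk-examplekeyid1234567890",
    "ReplicaRegion": "us-west-2",
}
```

Automatic rotation is configured once on the primary and applies to the replica as well, and rotated key material remains available for decrypting older ciphertext, which is what keeps the operational overhead low.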