AWS Certified Solutions Architect Professional Practice Test (SAP-C02)
Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Professional SAP-C02 Information
The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a test for people who want to show advanced skills in cloud design using Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam shows a higher level of knowledge than the associate-level test and is often needed for senior cloud roles.
This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.
AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes 75 questions, and requires a scaled score of 750 (on a scale of 100-1000) to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.
Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Solutions for Organizational Complexity, Design for New Solutions, Continuous Improvement for Existing Solutions, Accelerate Workload Migration and Modernization
Free Preview
This test is a free preview, no account required.
A company is implementing a centralized logging solution within its multi-account AWS environment, which is governed by AWS Organizations. A dedicated Security account (ID 111122223333) hosts an Amazon S3 bucket that receives AWS CloudTrail logs from all member accounts. Compliance rules require every log object in the bucket to be encrypted at rest with a single customer-managed AWS KMS key that also resides in the Security account.
Security analysts, using a specific IAM role in the Security account, must be able to decrypt and analyze the logs. The design must follow the principle of least privilege.
Which configuration correctly enables cross-account encryption of the logs and decryption by the analysts?
Create an IAM role in the Security account that member accounts can assume and give that role kms:GenerateDataKey* permission. Configure each trail to use this assumed role for log delivery. Update the KMS key policy to allow the security-analyst IAM role kms:Decrypt permission.
In the Security account, create KMS grants that allow the cloudtrail.amazonaws.com service principal to perform the kms:Encrypt action for each member account. Create a separate grant that allows the security-analyst IAM role kms:Decrypt permission.
Modify the KMS key policy in the Security account. Add a statement that allows the cloudtrail.amazonaws.com service principal the kms:GenerateDataKey*, kms:Decrypt, and kms:DescribeKey actions, using a condition to limit access to requests from the organization's member accounts. Add another statement that grants the security-analyst IAM role the kms:Decrypt action.
Attach an IAM policy to the CloudTrail service-linked role in each member account that grants the kms:Encrypt action on the central KMS key's ARN. In the Security account's KMS key policy, add each member account's root ARN to the principal list to allow access.
Answer Description
A KMS key policy is the authoritative access-control mechanism for the key, so cross-account permissions should be granted there. For CloudTrail to write SSE-KMS encrypted objects to the bucket it needs kms:GenerateDataKey*; to create or update the trail with SSE-KMS enabled it also needs kms:Decrypt; and it must be able to describe the key. A second statement grants the analysts' IAM role kms:Decrypt so they can read the encrypted logs. Scoping the service principal's access with an aws:SourceArn or kms:EncryptionContext condition limits use of the key to the organization's trails, satisfying least-privilege requirements.
The other options are incorrect:
- Granting only kms:Encrypt, or listing each member account's root ARN as a principal, is overly permissive and still omits the kms:GenerateDataKey* action that CloudTrail actually uses for log delivery.
- CloudTrail delivers logs under its own service principal and cannot assume a customer-created IAM role for delivery, so routing encryption through an assumed role is unsupported.
- Relying on long-lived KMS grants with kms:Encrypt likewise fails to cover kms:GenerateDataKey* and adds operational complexity compared with a single key-policy update.
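To make the correct option concrete, here is a minimal sketch in Python (boto3) of a key policy along these lines. The key ID and Sid names are hypothetical placeholders; only the Security account ID (111122223333) and the security-analyst role name come from the scenario, and a real policy would be scoped to your actual trail ARNs.

```python
import json

import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keep the Security account root able to administer the key
            # (required to pass the KMS policy lockout safety check).
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Let CloudTrail use the key, scoped by encryption context
            # so only trail log delivery can use it.
            "Sid": "AllowCloudTrailLogEncryption",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*", "kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:*:trail/*"
                }
            },
        },
        {
            # Analysts in the Security account may only decrypt.
            "Sid": "AllowAnalystDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/security-analyst"},
            "Action": "kms:Decrypt",
            "Resource": "*",
        },
    ],
}

# "default" is the only policy name KMS accepts; the key ID is a placeholder.
kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",
    Policy=json.dumps(key_policy),
)
```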
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does the KMS key policy need a condition using aws:SourceArn or kms:EncryptionContext?
What actions does the cloudtrail.amazonaws.com service principal require for SSE-KMS encryption?
How does granting kms:Decrypt to the security-analyst IAM role follow the principle of least privilege?
A financial services company uses AWS Organizations to manage a multi-account environment. A central 'SharedServices' account hosts a customer-managed KMS key for encrypting sensitive data. A separate 'Security' account is used for centralized logging and auditing. The company's security policy mandates that all new S3 objects in member accounts must be encrypted at rest using Server-Side Encryption with the specific KMS key (SSE-KMS) from the SharedServices account. Any attempts to upload objects without this specific encryption, including using SSE-S3 or other KMS keys, must be denied. Additionally, all cryptographic operations using the shared KMS key must be logged to an S3 bucket in the Security account.
Which combination of actions provides the most effective and scalable solution to enforce these requirements?
Deploy an AWS Config rule in each member account to detect S3 objects that are not encrypted with the specified shared KMS key. Configure the rule to trigger a remediation action via an AWS Lambda function that deletes non-compliant objects. In the SharedServices account, grant the Lambda execution roles in each member account access to the KMS key. Use an AWS Config aggregator in the Security account to view compliance status.
In each member account, create an IAM identity-based policy that denies s3:PutObject unless the request headers specify SSE-KMS with the correct key ARN, and attach this policy to all relevant IAM roles. In the SharedServices account, update the KMS key policy to allow access from all member account roles. In each member account, configure a CloudTrail trail to send logs to a central S3 bucket in the Security account.
In the Organizations management account, create a Service Control Policy (SCP) that denies the s3:PutObject action if the s3:x-amz-server-side-encryption-aws-kms-key-id condition key in the request does not match the ARN of the shared KMS key. In the SharedServices account, modify the KMS key policy to grant kms:GenerateDataKey and kms:Decrypt permissions to the necessary service roles in the member accounts. Create an organization-wide CloudTrail trail in the management account to deliver logs to an S3 bucket in the Security account.
In the SharedServices account, modify the KMS key policy to grant the s3.amazonaws.com service principal access from all accounts in the organization. In each member account, create an S3 bucket policy that mandates SSE-KMS encryption using the shared key's ARN. Configure an Amazon EventBridge rule in the default event bus of each member account to forward all S3 and KMS API calls to a central event bus in the Security account for auditing.
Answer Description
The correct answer provides the most effective and scalable solution by using a combination of AWS Organizations features. A Service Control Policy (SCP) acts as a preventative guardrail, denying any s3:PutObject API call that does not meet the specified encryption requirements before it can be processed. This is more effective than reactive methods and more scalable than managing IAM policies in each account. The KMS key policy in the central SharedServices account must explicitly grant cross-account permissions to the IAM principals (roles) in the member accounts that need to use the key for encryption and decryption. Finally, creating a single organization-wide CloudTrail trail is the standard, most efficient method for centralizing audit logs from all accounts into a designated S3 bucket in the Security account.
The option to use IAM policies in each member account is incorrect because it is not scalable. It requires manual configuration and ongoing management in every account within the organization, increasing operational overhead and the risk of misconfiguration. SCPs provide a centralized enforcement mechanism.
The option to use AWS Config rules and Lambda for remediation is incorrect because it is a reactive, not preventative, approach. Non-compliant objects would be created before being detected and deleted, which may not meet the strict security requirement to deny the action outright. SCPs prevent the creation from happening in the first place.
The option to grant access only to the S3 service principal and use EventBridge is incorrect for two reasons. First, for cross-account SSE-KMS, the calling IAM principal requires permissions in the KMS key policy, not just the S3 service principal. Second, while EventBridge can be used for eventing, AWS CloudTrail is the purpose-built service for comprehensive, centralized API call auditing and logging for security and compliance purposes.
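As an illustration, a minimal sketch of such an SCP created with Python (boto3) follows; the key ARN, account ID, policy name, and target root ID are hypothetical placeholders, not values from the scenario.

```python
import json

import boto3

orgs = boto3.client("organizations")

# Hypothetical ARN of the shared key in the SharedServices account.
SHARED_KEY_ARN = (
    "arn:aws:kms:us-east-1:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab"
)

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPutObjectWithoutSharedKey",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            # StringNotEquals also matches when the header is absent entirely,
            # so unencrypted and SSE-S3 uploads are denied as well.
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": SHARED_KEY_ARN
                }
            },
        }
    ],
}

policy = orgs.create_policy(
    Name="require-shared-kms-key",
    Description="Deny S3 uploads not encrypted with the shared KMS key",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the organization root or an OU; the target ID is a placeholder.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```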
Ask Bash
Why is an SCP preferred over IAM policies in this scenario?
What role does the KMS key policy play in enforcing encryption requirements?
Why is a centralized CloudTrail trail better than other logging methods?
Your organization operates a primary data center and must replicate 8 TB of daily database changes to more than 50 Amazon VPCs that are spread across three AWS Regions. Each replication stream must sustain at least 8 Gbps throughput with consistently low latency. The security team mandates encryption of all traffic that traverses the link between the data center and AWS. The network team wants to avoid public-internet paths, minimize the number of physical circuits and virtual interfaces that must be managed, and be able to add additional VPCs or Regions without ordering new circuits. Which connectivity option meets these requirements MOST cost-effectively?
Provision a 10 Gbps dedicated AWS Direct Connect connection; create separate private virtual interfaces to each VPC; rely on security groups and network ACLs for traffic protection.
Order a 10 Gbps dedicated AWS Direct Connect connection that supports MACsec, create one transit virtual interface to an AWS Direct Connect gateway, and associate the gateway with AWS Transit Gateways in each Region.
Implement AWS VPN CloudHub with BGP-based Site-to-Site VPN tunnels from the data center to every VPC and use route propagation for connectivity.
Establish multiple AWS Site-to-Site VPN connections over the internet to AWS Transit Gateways in each Region, use equal-cost multipath routing across the tunnels, and accelerate traffic with AWS Global Accelerator.
Answer Description
A single 10 Gbps dedicated AWS Direct Connect (DX) connection that supports MACsec meets the performance requirement while keeping traffic off the public internet. Creating one transit virtual interface (VIF) to an AWS Direct Connect gateway and associating that gateway with Regional AWS Transit Gateways allows the same encrypted DX circuit to reach dozens of VPCs in any Region without adding more VIFs or physical links. MACsec provides line-rate encryption on the DX circuit, satisfying the in-transit-encryption mandate without having to overlay IPsec tunnels. Site-to-Site VPN-only solutions ride the public internet, introduce variable latency, and would need at least seven tunnels to reach 8 Gbps, increasing operational complexity. Using private VIFs to every VPC over DX removes internet dependence but does not provide encryption and requires many additional VIFs to scale. VPN CloudHub also depends on internet paths and is limited to 1.25 Gbps per tunnel. Therefore, a MACsec-enabled DX connection with a transit VIF and Direct Connect gateway is the most operationally efficient and cost-effective choice.
Ask Bash
What is AWS Direct Connect and how does it differ from a Site-to-Site VPN?
What is MACsec and why is it required in this solution?
How does AWS Direct Connect Gateway and Transit Gateway work together to scale VPC connectivity across Regions?
A financial-services company is building a hybrid-cloud architecture that connects its on-premises data center to multiple AWS VPCs over AWS Direct Connect. The company requires seamless, bidirectional DNS resolution: on-premises applications must resolve private hostnames for Amazon EC2 instances in the VPCs (for example, app-server.prod.vpc.example.com), and EC2 instances must resolve hostnames that live only in the on-premises namespace (for example, db.corp.internal). The solution must be highly available, scalable, and centrally manageable, and it must not require custom DNS server software on EC2 instances.
Which solution meets these requirements most effectively?
Deploy a pair of highly available EC2 instances running BIND in a central VPC. Configure on-premises DNS servers to forward queries to these instances, and configure the BIND servers to forward queries for the on-premises domain back to the on-premises DNS servers.
Create Route 53 Resolver inbound and outbound endpoints. Configure conditional forwarding on the on-premises DNS servers to send queries for the VPC domain to the inbound endpoint. Create Resolver rules to forward queries for the on-premises domain to the on-premises DNS servers via the outbound endpoint.
Create a private hosted zone for the on-premises domain (corp.internal) and associate it with all VPCs. Create a Route 53 outbound endpoint and a rule to forward all queries from the VPCs to the on-premises DNS servers.
Create a Route 53 inbound endpoint in each VPC. Configure the on-premises DNS servers with conditional forwarders that send all AWS-related DNS queries to the IP addresses of the inbound endpoints.
Answer Description
The most effective design is to use Amazon Route 53 Resolver endpoints and conditional-forwarding rules:
Create an inbound endpoint in a shared or hub VPC. This endpoint exposes two or more IP addresses (in different Availability Zones) that on-premises DNS servers forward queries to, allowing on-premises hosts to resolve records stored in Route 53 private hosted zones.
Create an outbound endpoint in the same VPC and configure Resolver rules (for example, *.corp.internal) that forward VPC-originated queries to the on-premises DNS servers. The outbound endpoint sends the traffic across Direct Connect or the Site-to-Site VPN link.
Because each endpoint requires at least two IP addresses in different AZs, the solution is highly available by design and fully managed; no EC2-hosted DNS servers need to be deployed or patched.
Self-managed BIND servers on EC2 can work but introduce operational overhead for scaling, patching, and failure handling, so they do not best satisfy the requirements.
Deploying only inbound endpoints enables on-premises-to-AWS lookups but provides no path for VPC-originated queries to reach on-premises DNS, so bidirectional resolution is not achieved.
Creating a private hosted zone for corp.internal plus an outbound endpoint still lacks an inbound endpoint, so on-premises resolvers cannot query AWS records. In addition, the private hosted zone would be redundant because a forwarding rule for the same domain would take precedence and send the queries to the on-premises DNS servers anyway.
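A minimal sketch of the correct design in Python (boto3) is shown below; the subnet IDs, security group, VPC ID, and on-premises DNS server address are all hypothetical placeholders.

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Two subnets in different AZs satisfy the endpoint HA requirement.
SUBNETS = ["subnet-0aaaa1111bbbb2222", "subnet-0cccc3333dddd4444"]
SG = "sg-0123456789abcdef0"

# Inbound endpoint: on-premises DNS servers forward VPC-domain queries here.
inbound = r53r.create_resolver_endpoint(
    CreatorRequestId="inbound-hub-001",
    Name="onprem-to-aws",
    Direction="INBOUND",
    SecurityGroupIds=[SG],
    IpAddresses=[{"SubnetId": s} for s in SUBNETS],
)

# Outbound endpoint: carries VPC-originated queries toward on-premises.
outbound = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-hub-001",
    Name="aws-to-onprem",
    Direction="OUTBOUND",
    SecurityGroupIds=[SG],
    IpAddresses=[{"SubnetId": s} for s in SUBNETS],
)

# Forward corp.internal queries to the on-premises DNS servers.
rule = r53r.create_resolver_rule(
    CreatorRequestId="corp-internal-rule-001",
    Name="forward-corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "192.168.1.10", "Port": 53}],
    ResolverEndpointId=outbound["ResolverEndpoint"]["Id"],
)

# The rule only takes effect in VPCs it is associated with.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```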
Ask Bash
What is an Amazon Route 53 Resolver endpoint, and how does it differ for inbound and outbound traffic?
What are conditional forwarding and Resolver rules in Route 53, and how are they configured?
Why are self-managed BIND DNS servers on EC2 not recommended for this use case?
You operate latency-sensitive trading workloads on bare-metal servers in an Equinix colocation facility that is also an AWS Direct Connect location. Several microservices run in multiple Amazon VPCs that belong to three different AWS accounts in the us-east-1 and us-east-2 Regions. Network engineering requires a single, private 10-Gbps link that avoids internet hops, delivers predictable latency, and allows additional VPCs to be connected later without ordering new physical circuits. Which connectivity strategy best meets these requirements?
Install an AWS Outposts rack in the colocation facility and rely on the Outposts service link over the public internet to exchange traffic with the VPCs.
Attach each VPC to an AWS Transit Gateway, create two Site-to-Site VPN tunnels from the Transit Gateway to the on-premises router in the colocation facility, and enable equal-cost multi-path routing.
Ask an AWS Direct Connect Delivery Partner at a different Direct Connect location to provision a 10-Gbps hosted connection and extend the circuit to the colocation data center over MPLS.
Request a 10-Gbps dedicated AWS Direct Connect cross-connect in the colocation facility. Create one private virtual interface that terminates on a Direct Connect gateway, and associate the gateway with the virtual private gateways of each VPC.
Answer Description
A dedicated AWS Direct Connect cross-connect placed in the same colocation facility provides a private fiber link that bypasses the internet. Terminating a single private virtual interface on a Direct Connect gateway lets you attach the virtual private gateways of up to 20 VPCs (even across multiple AWS accounts and Regions) over that one 10-Gbps circuit, so future VPCs can be added with only configuration changes. The VPN-based design relies on internet transport and therefore cannot guarantee low or consistent latency. Extending a hosted connection from another site adds an additional carrier circuit and extra hops, undermining the latency goal and adding operational complexity. An Outposts rack still relies on a service-link tunnel (public or Direct Connect public VIF) to reach its parent Region and does not provide direct, high-bandwidth connectivity to multiple VPCs; it also introduces unnecessary hardware and cost.
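For illustration, a minimal boto3 sketch of the gateway and association steps follows; the gateway name, ASN, and virtual-private-gateway ID are hypothetical placeholders.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Direct Connect gateway that the single private VIF terminates on.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="colo-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Associate one VPC's virtual private gateway; repeat per VPC.
# (Gateways owned by other accounts are associated through
# create_direct_connect_gateway_association_proposal instead.)
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="vgw-0123456789abcdef0",
)
```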
Ask Bash
What is an AWS Direct Connect dedicated cross-connect, and why is it suitable for low-latency connections?
What role does a Direct Connect gateway play in connecting multiple VPCs?
Why is the VPN and Outposts approach less ideal for low-latency workloads?
Your company deployed its first workload in a new VPC that uses the IPv4 CIDR block 10.2.0.0/20. Three months later, security and operations teams redefine the network-segmentation standard. The VPC must now contain three public and three private subnets in each of three Availability Zones (18 subnets total). Every subnet must provide at least 400 usable IPv4 addresses to accommodate horizontally-scaling container tasks. Existing resources in the current address range must keep running without an IP-address change.
Which action will satisfy the new requirements with the least operational effort?
Create a new VPC with a /16 CIDR block, migrate all workloads into it, and delete the original VPC.
Resize each required subnet to /25 so that all 18 subnets fit inside the existing 10.2.0.0/20 range.
Associate a non-overlapping secondary IPv4 CIDR block such as 10.2.64.0/18 with the VPC and create the new subnets from that range.
Enlarge the VPC's primary CIDR block from /20 to /18, then recreate all subnets so they meet the new size requirement.
Answer Description
A /20 VPC has 4,096 IPv4 addresses, which is insufficient for 18 subnets that each need at least 400 usable addresses. The smallest subnet that meets the usable-address target is a /23 (512 total, 507 usable after AWS reserves 5). 18 × 512 = 9,216 addresses, so additional space is needed.
AWS does not allow resizing a VPC's primary CIDR, but you can associate up to five secondary IPv4 CIDR blocks of size /28-/16. Adding a non-overlapping block (for example, 10.2.64.0/18, which supplies 16,384 addresses) immediately expands the address pool without disturbing existing workloads. New /23 public and private subnets can then be carved from the secondary range and distributed across the three AZs.
Changing the primary CIDR is impossible, creating an entirely new VPC requires a full migration, and shrinking each subnet to /25 would leave only 123 usable addresses, well below the requirement. Therefore, adding a sufficiently large secondary CIDR to the existing VPC is the simplest, lowest-risk solution.
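A minimal boto3 sketch of the expansion, assuming a hypothetical VPC ID, looks like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"

# Expand the VPC without touching the primary 10.2.0.0/20 range.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, CidrBlock="10.2.64.0/18")

# Carve /23 subnets (507 usable addresses each) from the new range,
# one call per subnet/AZ combination.
ec2.create_subnet(
    VpcId=VPC_ID,
    CidrBlock="10.2.64.0/23",
    AvailabilityZone="us-east-1a",
)
```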
Ask Bash
What is a CIDR block, and how does it define IP address ranges in a network?
Why does AWS reserve five IP addresses in every subnet, and how does that affect the number of usable addresses?
What are the benefits of associating a secondary CIDR block with a VPC, and why is it the least disruptive option?
A global corporation is adopting a multi-VPC architecture on AWS, with numerous VPCs spread across several AWS Regions. They also maintain a significant on-premises data center connected to AWS via AWS Direct Connect. The key requirements are to enable seamless, transitive communication between all VPCs (inter-VPC) and between the on-premises network and all VPCs. The solution must be highly scalable, centrally managed, and minimize operational overhead. A solutions architect needs to design the optimal network topology. Which approach best meets these requirements?
Create a full mesh of VPC peering connections between all VPCs. Establish a separate AWS Direct Connect private virtual interface (VIF) from the on-premises network to each individual VPC.
Deploy an AWS Transit Gateway in each region. Peer the Transit Gateways across regions and create attachments for each VPC. Connect the on-premises data center to a Transit Gateway via a Direct Connect Gateway attachment.
Use an AWS Direct Connect Gateway and associate it with a Virtual Private Gateway (VGW) in each VPC. This will provide connectivity from on-premises to all VPCs and enable inter-VPC communication through the Direct Connect Gateway.
Designate one VPC as a 'transit hub'. Use VPC peering to connect all other 'spoke' VPCs to this hub VPC. Establish a Direct Connect connection to the hub VPC and configure routing instances within it to forward traffic.
Answer Description
The correct answer is to use AWS Transit Gateway. AWS Transit Gateway acts as a cloud router and is specifically designed to simplify network connectivity at scale. By creating a Transit Gateway in each region, attaching all the VPCs in that region, and then peering the Transit Gateways, you create a global network that allows for transitive routing. This means a resource in any connected network (VPC or on-premises) can communicate with a resource in any other connected network through the Transit Gateway hub-and-spoke model. Connecting the on-premises network via a Direct Connect Gateway to a Transit Gateway integrates the hybrid connectivity seamlessly into this architecture. This solution is scalable to thousands of VPCs, centralizes network management, and reduces the operational overhead of managing complex peering relationships.
Creating a full mesh of VPC peering connections is incorrect because it is not scalable. The number of peering connections grows quadratically with the number of VPCs, leading to significant management complexity and being limited to 125 peers per VPC. This approach is not centrally managed.
Using a designated 'transit hub' VPC with routing instances is an outdated pattern known as a 'Transit VPC'. While it can provide transitive routing, it relies on self-managed EC2 instances, which introduces bottlenecks, single points of failure, and high operational overhead for maintenance and scaling compared to the fully managed Transit Gateway service.
Using a Direct Connect Gateway associated with a Virtual Private Gateway (VGW) in each VPC is incorrect. Although a Direct Connect Gateway connects an on-premises site to multiple VPCs, it does not support transitive routing between those VPCs. Traffic cannot flow from one VPC to another through the Direct Connect Gateway, failing a key requirement.
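For illustration, a minimal boto3 sketch of the regional hubs and inter-region peering follows; all IDs and the peer account number are hypothetical placeholders, and the peering attachment must still be accepted on the peer side.

```python
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
usw2 = boto3.client("ec2", region_name="us-west-2")

tgw_east = use1.create_transit_gateway(Description="hub-us-east-1")["TransitGateway"]
tgw_west = usw2.create_transit_gateway(Description="hub-us-west-2")["TransitGateway"]

# Attach a VPC in us-east-1 (one attachment per VPC).
use1.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_east["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaaa1111bbbb2222", "subnet-0cccc3333dddd4444"],
)

# Peer the two regional hubs for transitive cross-Region routing.
use1.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_east["TransitGatewayId"],
    PeerTransitGatewayId=tgw_west["TransitGatewayId"],
    PeerAccountId="123456789012",
    PeerRegion="us-west-2",
)
```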
Ask Bash
What is AWS Transit Gateway and how does it simplify network connectivity?
What is the difference between a Direct Connect Gateway and a Transit Gateway?
Why is a full mesh of VPC peering connections not scalable?
A global enterprise is designing its AWS network architecture using a multi-account strategy with AWS Organizations. The design includes a central "Network" account that hosts an AWS Transit Gateway (TGW). Multiple "Application" accounts, each with a VPC, are attached to this TGW. A key security requirement is that all traffic between the Application VPCs must be inspected by a fleet of next-generation firewall (NGFW) appliances. These appliances are deployed in a dedicated "Inspection" VPC, also owned by the Network account. The Application VPCs have been deployed with overlapping CIDR blocks.
Which solution should a solutions architect recommend to meet these requirements in the most scalable and resilient way?
Create a VPC endpoint service using AWS PrivateLink in the Inspection VPC, fronting the NGFW appliances. Create interface endpoints for this service in each Application VPC. Update the route tables in all Application VPCs to route traffic through the local interface endpoints for inspection.
Deploy the NGFW appliances behind a Network Load Balancer (NLB) in the Inspection VPC. Configure Transit Gateway route tables to forward traffic to the NLB. The firewall appliances will perform Source NAT (SNAT) on the traffic before routing it back to the Transit Gateway for delivery.
Create a full mesh of VPC Peering connections between all Application VPCs and the Inspection VPC. Configure route tables in each Application VPC to forward traffic to the Inspection VPC, where the NGFW appliances are deployed on EC2 instances behind a Network Load Balancer.
Deploy the NGFW appliances as targets for a Gateway Load Balancer (GWLB) in the Inspection VPC. Configure the Transit Gateway to route traffic between Application VPCs to the Inspection VPC attachment. In the Inspection VPC, create GWLB endpoints and configure routing to direct traffic from the TGW through the GWLB for inspection before it is returned to the TGW.
Answer Description
The correct solution is to use a combination of AWS Transit Gateway (TGW) and Gateway Load Balancer (GWLB). The TGW acts as a central hub, which is necessary because the Application VPCs have overlapping CIDRs, making VPC Peering impossible. TGW route tables can be configured to direct all inter-VPC traffic to the Inspection VPC. Inside the Inspection VPC, a Gateway Load Balancer is used to deploy, scale, and manage the fleet of NGFW appliances transparently. It operates at Layer 3 and uses GENEVE encapsulation to preserve the original source and destination of the traffic. Routing is configured to send traffic from the TGW to the GWLB Endpoints, through the appliances for inspection, and then back to the TGW to be forwarded to its final destination.
Using VPC Peering is incorrect because it does not support connections between VPCs with overlapping CIDR blocks. It also does not scale well for this hub-and-spoke inspection model, as it would require a complex mesh of connections. Using a Network Load Balancer (NLB) instead of a GWLB is suboptimal because NLBs are designed for Layer 4 load balancing and are not transparent. This would require Source NAT (SNAT) on the firewall appliances, which complicates routing and causes loss of the original source IP address. AWS PrivateLink is designed to provide private, unidirectional access to specific services and is not the appropriate tool for transparently inspecting all network traffic between VPCs.
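A minimal boto3 sketch of the inspection-VPC plumbing for the correct GWLB-based design follows; all IDs and names are hypothetical placeholders, and the TGW route-table configuration is omitted for brevity.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

INSPECTION_VPC = "vpc-0123456789abcdef0"

# Gateway Load Balancer in front of the NGFW fleet.
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-0aaaa1111bbbb2222", "subnet-0cccc3333dddd4444"],
)["LoadBalancers"][0]

# GWLB target groups always use GENEVE on port 6081.
tg = elbv2.create_target_group(
    Name="ngfw-fleet",
    Protocol="GENEVE",
    Port=6081,
    VpcId=INSPECTION_VPC,
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=gwlb["LoadBalancerArn"],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Expose the GWLB as an endpoint service, then create the GWLB endpoint
# that the TGW-facing route tables point to.
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId=INSPECTION_VPC,
    ServiceName=svc["ServiceName"],
    SubnetIds=["subnet-0eeee5555ffff6666"],
)
```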
Ask Bash
What is an AWS Transit Gateway (TGW) and why is it used in this architecture?
What is a Gateway Load Balancer (GWLB) and how does it differ from a Network Load Balancer (NLB)?
Why is PrivateLink or VPC Peering not suitable for this use case?
A solutions architect is troubleshooting a connectivity issue in a hybrid environment. An application running on an EC2 instance in a spoke VPC (10.20.0.0/16) cannot connect to an on-premises database server (192.168.10.50) on port 1433. The spoke VPC is connected to a central inspection VPC via an AWS Transit Gateway. The inspection VPC is connected to the on-premises data center via an AWS Direct Connect connection. All traffic from the spoke VPC to on-premises is routed through firewall appliances in the inspection VPC. On-premises network engineers have confirmed that their firewalls are not blocking the traffic. The architect needs to identify the component in the AWS network path that is blocking the connection. What is the MOST efficient first step to diagnose this issue?
Configure Route 53 Resolver Query Logging for the spoke VPC. Analyze the logs to ensure the on-premises database's hostname is correctly resolving to the IP address 192.168.10.50.
Enable VPC Flow Logs on the network interfaces for the application instance, the Transit Gateway attachment, and the inspection VPC firewall instances. Query the logs using Amazon Athena to find REJECT entries for traffic destined for 192.168.10.50 on port 1433.
Use VPC Reachability Analyzer to create and run an analysis with the application's EC2 instance network interface as the source and the on-premises database IP address (192.168.10.50) as the destination, specifying port 1433.
Use the Route Analyzer feature in Transit Gateway Network Manager to analyze the path from the spoke VPC attachment to the Direct Connect gateway attachment, verifying that routes are correctly propagated.
Answer Description
The correct answer is to use VPC Reachability Analyzer. This tool is specifically designed to perform static analysis of network paths between a source and a destination. It checks the configurations of route tables, security groups, network ACLs, and Transit Gateways without sending any live packets. This allows it to quickly identify the specific component that is blocking connectivity, making it the most efficient first step for this scenario.
- Using VPC Flow Logs and Amazon Athena is a valid troubleshooting method, but it is less efficient. It requires enabling logs, waiting for traffic to be captured, and then performing complex queries on potentially large datasets to find the problem. This is more time-consuming than using the purpose-built Reachability Analyzer.
- The Route Analyzer feature in Transit Gateway Network Manager is not the best tool for this task because it only analyzes routes within the Transit Gateway route tables. It does not analyze VPC route tables, security group rules, or network ACLs, which are common sources of connectivity problems.
- Configuring Route 53 Resolver Query Logging would be appropriate if the problem were related to DNS name resolution. However, the scenario describes a failure to connect to a specific IP address, which points to a network path issue, not a DNS issue.
Ask Bash
How does the VPC Reachability Analyzer work?
What is the difference between VPC Reachability Analyzer and VPC Flow Logs?
Why doesn’t Route Analyzer in Transit Gateway Network Manager identify all connectivity issues?
Your company is deploying a two-tier web application in a single Amazon VPC. An Application Load Balancer (ALB) in the public subnets terminates TLS on port 443 and forwards traffic to application servers in private subnets that listen on TCP port 9000. You must meet several compliance requirements: only the ALB may initiate traffic to the application servers on port 9000, the application servers must not be reachable from any other source, return-path traffic must be allowed automatically, and the solution must incur the least ongoing rule maintenance as the environment scales. Which design meets these requirements?
In the application-server security group, allow TCP 9000 from 0.0.0.0/0. Attach a custom network ACL that denies all other ports inbound and outbound; update the ACL whenever new instances or ports are needed.
Associate a custom network ACL with the private subnets that allows inbound TCP 9000 only from the ALB subnet CIDR blocks and outbound ephemeral ports. Leave a security group on the servers that allows all traffic.
Replace the private-subnet route tables with routes that send all VPC-internal traffic to a firewall appliance in a dedicated subnet. Configure the appliance to permit TCP 9000 from the ALB to the application servers; keep the default security group and network ACL.
Create one security group for the ALB and another for the application servers. In the application-server security group, add an inbound rule that allows TCP 9000 from the ALB's security-group ID and remove all other inbound rules. Keep the default network ACL for all subnets.
Answer Description
Security groups operate at the instance level and are stateful, so response traffic is automatically allowed without extra rules. Referencing the ALB's security-group ID in the application-server security group limits inbound traffic to only the ALB while blocking all other sources in the VPC. Changes to either security group automatically apply to any new load-balancer nodes or Auto Scaling instances, eliminating manual updates as the environment grows.
Network ACLs are stateless and would require matching inbound and outbound rules for the return (ephemeral) ports, creating ongoing maintenance overhead. The default network ACL already allows all traffic, so leaving it unchanged keeps administration simple and still enforces least-privilege access through the security groups.
Using a third-party firewall or opening the application-server security group to 0.0.0.0/0 either adds unnecessary complexity or violates the requirement to restrict access solely to the ALB.
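A minimal boto3 sketch of the correct rule, with hypothetical security-group IDs, looks like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ALB_SG = "sg-0aaaabbbbccccdddd"   # ALB security group
APP_SG = "sg-0eeeeffff00001111"   # application-server security group

# Allow TCP 9000 only from the ALB's security group; because security
# groups are stateful, return traffic needs no additional rule.
ec2.authorize_security_group_ingress(
    GroupId=APP_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 9000,
            "ToPort": 9000,
            "UserIdGroupPairs": [{"GroupId": ALB_SG}],
        }
    ],
)
```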
Ask Bash
Why are security groups considered stateful in AWS?
What is the difference between a security group and a network ACL in AWS?
Why is referencing the ALB's security-group ID the best way to limit traffic to the application servers?
A global enterprise now operates more than 150 AWS accounts that are divided into four business-unit OUs. The cloud center of excellence (CCOE) mandates that every account must:
- Prevent the creation of unencrypted EBS volumes and block uploads to Amazon S3 that are not encrypted with AWS KMS.
- Enforce the CostCenter and Environment tags with allowed values on every supported AWS resource.
- Deliver all CloudTrail records from every account to a single immutable log-archive account.
- Provide each business unit with a consolidated cost view while keeping organization-wide billing.
- Let developers self-provision new sandbox accounts without opening CCOE tickets.
Which approach best meets all of these requirements while minimizing continuing operational effort?
Implement the open-source AWS Landing Zone solution, copy logs into each business-unit account with S3 replication, enforce encryption through bucket policies, require CCOE ticketing for new sandbox accounts, and generate cost visibility from CUR data in Athena.
Deploy AWS Control Tower with an OU for each business unit, enable preventive encryption guardrails and an enforced tag policy, allow developers to create sandbox accounts through Account Factory and IAM Identity Center, use the built-in log-archive account for organization-wide CloudTrail, and use consolidated billing with cost-allocation tags for chargeback.
Keep all workloads in a single shared AWS account segmented by VPC and IAM, ask developers to tag resources manually, enable default encryption on S3 and EBS, centralize CloudTrail in the same account, and filter costs with Cost Explorer.
Use AWS Organizations alone with SCPs that deny unencrypted resource creation and missing tags, store an organization CloudTrail in the management account, create new accounts through Service Catalog and CloudFormation StackSets, and rely on Cost Explorer reports for chargeback.
Answer Description
AWS Control Tower sets up a landing-zone architecture that automatically creates a dedicated log-archive account, enables an organization-wide CloudTrail, and provides preventive guardrails such as CT.EC2.PV.2 and CT.S3.PV.6 to block unencrypted EBS volumes and S3 uploads. Account Factory, integrated with IAM Identity Center, gives developers governed self-service account provisioning. An enforced AWS Organizations tag policy applied at the OU level standardizes the CostCenter and Environment tags across resources, and consolidated billing combined with activated cost-allocation tags supplies the required per-business-unit cost view. The alternative options either rely on manual or ticket-based account creation, lack enforced encryption or tagging controls, or spread logging and cost management across multiple locations, resulting in higher operational overhead and incomplete compliance.
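For illustration, a minimal sketch of such a tag policy created with Python (boto3) follows; the allowed tag values, enforced resource types, and OU ID are hypothetical placeholders.

```python
import json

import boto3

orgs = boto3.client("organizations")

tag_policy = {
    "tags": {
        "CostCenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["1001", "1002"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        },
        "Environment": {
            "tag_key": {"@@assign": "Environment"},
            "tag_value": {"@@assign": ["dev", "test", "prod"]},
            "enforced_for": {"@@assign": ["ec2:instance"]},
        },
    }
}

policy = orgs.create_policy(
    Name="ccoe-required-tags",
    Description="Enforce CostCenter and Environment tags",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach at the OU level so every business-unit account inherits it.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```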
Ask Bash
What is AWS Control Tower, and why is it used in this solution?
What are preventive guardrails in AWS Control Tower?
What does Account Factory do in AWS Control Tower?
A global enterprise is designing its multi-region AWS network. The company has a large, existing on-premises IP address space and owns a public /24 IPv4 block. They plan to create hundreds of VPCs across multiple AWS accounts within an AWS Organization. A key requirement is to prevent overlapping IP address ranges between on-premises networks and all new VPCs. Additionally, they want to centrally manage and automate the allocation of VPC CIDR blocks to different business units and enforce specific tagging policies on VPC creation. Which approach provides the most scalable and manageable solution for this IP addressing strategy?
Design all VPCs with a small primary CIDR from the 10.0.0.0/8 range. As IP space is depleted, add secondary CIDR blocks to each VPC from the on-premises IP address space.
Manually track all AWS-provided private CIDR allocations in a shared spreadsheet. Use AWS Resource Access Manager (RAM) to share subnets from a central VPC to spoke accounts.
Implement Amazon VPC IP Address Manager (IPAM) within the AWS Organization. Create IPAM pools from the company's on-premises IP space and use the Bring Your Own IP (BYOIP) feature for their public /24 block. Enforce allocation rules for VPC creation.
For all new VPCs, exclusively allocate CIDR blocks from the 100.64.0.0/10 range to ensure no overlap with the existing on-premises network. Use AWS Budgets to monitor IP address consumption.
Answer Description
The correct solution is to use Amazon VPC IP Address Manager (IPAM) integrated with AWS Organizations. IPAM allows for central planning, tracking, and monitoring of IP address space across multiple accounts and Regions. By creating a top-level pool in IPAM with the company's private IP space, the architect can then create smaller, delegated pools for different business units or environments, preventing overlaps. IPAM's rule-based allocation can enforce that new VPCs are created with non-overlapping CIDRs and meet compliance requirements, such as mandatory tagging. Furthermore, by using the Bring Your Own IP (BYOIP) feature, the company can import its public /24 block into IPAM's public scope, allowing them to manage and allocate their own public IP addresses to resources like NAT Gateways and Load Balancers centrally.
Using a spreadsheet is a manual, error-prone process that does not scale for hundreds of VPCs and is contrary to the automation requirement.
While secondary CIDR blocks allow a VPC to be expanded, they do not provide a centralized, proactive mechanism for managing IP allocation across an entire organization. This approach is reactive and does not prevent initial CIDR overlaps between VPCs.
Using the 100.64.0.0/10 range is incorrect for general VPC workloads. This range is reserved by RFC 6598 for Carrier-Grade NAT (CGN) and should not be used for private enterprise networking, although it is supported for specific use cases like Amazon EKS custom networking. Using it for general VPCs could lead to unpredictable connectivity issues.
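A minimal boto3 sketch of the IPAM setup follows; the Regions, netmask limits, CIDR, and tag values are hypothetical placeholders, and in practice VPCs allocate from Regional child pools that have a Locale set.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Organization-wide IPAM monitoring the Regions in use.
ipam = ec2.create_ipam(
    Description="enterprise IPAM",
    OperatingRegions=[{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}],
)["Ipam"]

# Top-level private pool provisioned from the corporate address plan;
# allocation rules constrain CIDR sizes and require tags on allocations.
pool = ec2.create_ipam_pool(
    IpamScopeId=ipam["PrivateDefaultScopeId"],
    AddressFamily="ipv4",
    Description="corporate private space",
    AllocationMinNetmaskLength=16,
    AllocationMaxNetmaskLength=24,
    AllocationResourceTags=[{"Key": "CostCenter", "Value": "1001"}],
)["IpamPool"]

ec2.provision_ipam_pool_cidr(IpamPoolId=pool["IpamPoolId"], Cidr="10.128.0.0/10")

# VPCs then draw non-overlapping CIDRs straight from a pool
# (via a Regional child pool with a Locale), for example:
# ec2.create_vpc(Ipv4IpamPoolId=pool["IpamPoolId"], Ipv4NetmaskLength=20)
```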
Ask Bash
What is Amazon VPC IP Address Manager (IPAM)?
What is the Bring Your Own IP (BYOIP) feature in AWS?
How does IPAM enforce allocation rules and prevent CIDR overlap?
A global enterprise is designing a multi-account AWS architecture that will host hundreds of applications, each within its own VPC, across multiple AWS Regions. The security team mandates that all east-west (inter-VPC) traffic and north-south (egress to the internet) traffic must be routed through a central point of inspection for deep packet inspection and logging. The solution must be highly scalable, minimize network management overhead, and support transitive routing to on-premises data centers via AWS Direct Connect. Which connectivity strategy best fulfills these requirements?
In each region, deploy an AWS Transit Gateway and peer them using inter-region peering. Create a central inspection VPC with a Gateway Load Balancer that fronts a fleet of security appliances. Configure Transit Gateway route tables to forward all traffic to the inspection VPC.
Implement a legacy 'Transit VPC' pattern in each region using EC2 instances running third-party routing software. Establish IPsec VPN connections from all spoke VPCs to the Transit VPC to enable transitive routing and inspection.
Establish a full-mesh VPC peering configuration for all VPCs within each region. For inter-region traffic, create additional peering connections. Implement traffic inspection by deploying security appliances in every VPC.
Use AWS PrivateLink to create VPC endpoints in each spoke VPC for every shared service. For general inter-VPC traffic, establish a limited mesh of VPC peering connections and manage route tables manually.
Answer Description
The optimal solution is to deploy an AWS Transit Gateway in each region and use inter-region peering to connect them. A central inspection VPC should be created in each region, containing a Gateway Load Balancer (GWLB) with a fleet of security appliances behind it. Transit Gateway route tables will be configured to direct all inter-VPC and egress traffic to the GWLB endpoint in the inspection VPC. This design creates a scalable, manageable hub-and-spoke network. The Transit Gateway acts as a cloud router, simplifying connectivity and eliminating the need for complex VPC peering meshes. Using a Gateway Load Balancer is the correct approach for deploying, scaling, and managing third-party virtual security appliances transparently within the network traffic path. This architecture centralizes traffic inspection without requiring security appliances to be deployed in each spoke VPC.
A full-mesh VPC peering configuration is incorrect because it is not scalable. Managing peering connections for hundreds of VPCs (which would require thousands of connections) is operationally complex and error-prone. Furthermore, VPC peering does not support transitive routing, so it cannot be used to route traffic from a spoke VPC through a central VPC to an on-premises network.
The legacy 'Transit VPC' model using EC2-based VPN appliances is also incorrect. While it provides transitive routing, it is a self-managed solution that has been largely superseded by the fully managed, more scalable, and highly available AWS Transit Gateway service.
Using AWS PrivateLink is not suitable for this scenario. PrivateLink is designed to provide secure, private connectivity from a VPC to specific services, not for routing all network traffic. It cannot be used to inspect general inter-VPC or egress traffic.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Transit Gateway, and why is it important in this architecture?
What is a Gateway Load Balancer, and how does it enable traffic inspection?
Why is the full-mesh VPC peering strategy not scalable for this scenario?
A global travel-booking company runs a latency-sensitive REST API on Amazon EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The data tier is an Amazon Aurora MySQL cluster. The architects have already extended the database by adding an Aurora Global Database secondary cluster in us-west-2.
Business continuity targets state that, if the primary Region fails, the API must recover in under one minute (RTO < 60 s) and lose at most 1 second of data (RPO < 1 s). Operations teams want to avoid manual DNS updates or lengthy runbook procedures during a Regional outage and prefer a solution that incurs the least ongoing operational overhead.
Which combination of actions will BEST meet these requirements?
Create weighted Amazon Route 53 records with health checks for each ALB, set the record TTL to 60 seconds, and trigger an AWS Lambda function from CloudWatch alarms to adjust the weights. Manually promote the Aurora secondary cluster during an outage.
Use AWS Elastic Disaster Recovery to replicate the EC2 instances and the Aurora database to us-west-2, keep the target resources stopped, and start them when a Regional failure is declared.
Front both Regional ALBs with AWS Global Accelerator, enabling endpoint health checks for automatic traffic failover, and configure Aurora Global Database managed cross-Region failover to promote the secondary cluster when the primary Region is unavailable.
Refactor the application into an active/active design that stores data in Amazon DynamoDB global tables and implements bidirectional replication logic between EC2 instances in both Regions.
Answer Description
AWS Global Accelerator provides two static anycast IP addresses and continuously probes the health of each regional endpoint. If the ALB in us-east-1 becomes unhealthy, Global Accelerator removes it from service and starts directing new connections to the healthy ALB in us-west-2 in approximately 30 seconds, eliminating dependence on DNS TTL expiration. Aurora Global Database replicates changes across Regions with typical latency below 1 second and, with the managed cross-Region failover feature, can promote the secondary cluster to the writer role in under a minute. Together, these services meet the RPO of less than 1 second and an RTO of under 60 seconds, satisfying the business continuity targets while requiring only minimal configuration.
The weighted Route 53 design still depends on DNS caching and manual weight adjustments, so traffic might continue to flow to an unhealthy Region for longer than the RTO. AWS Elastic Disaster Recovery must boot new instances, which takes minutes and breaches the RTO. Re-architecting around DynamoDB global tables would meet cross-Region data durability but would not preserve the existing Aurora workload and would add significant development and operational effort.
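For illustration, a minimal boto3 sketch of the accelerator and the managed database failover follows; all names, account IDs, and ARNs are hypothetical placeholders, and the failover call would run only after a Regional outage is declared.

```python
import boto3

# The Global Accelerator API is served from us-west-2 regardless of workload Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="bookings-api", IpAddressType="IPV4", Enabled=True)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region; endpoint health checks drive automatic failover.
for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/primary/0123456789abcdef"),
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/standby/0123456789abcdef"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
        HealthCheckIntervalSeconds=10,
    )

# During a Regional outage, managed failover promotes the secondary cluster.
rds = boto3.client("rds", region_name="us-west-2")
rds.failover_global_cluster(
    GlobalClusterIdentifier="bookings-global",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-2:123456789012:cluster:bookings-secondary",
)
```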
Ask Bash
What is AWS Global Accelerator, and how does it ensure low RTO for regional failovers?
How does Aurora Global Database achieve a replication latency of less than 1 second?
Why is relying on Route 53 weighted records with health checks less effective for failovers?
A central security account manages encryption for three production workload accounts in the us-east-1 Region. The workloads store sensitive data in Amazon S3 and Amazon DynamoDB. Compliance requires:
- Encryption keys must stay inside AWS-managed FIPS 140-3 HSMs and never leave the service in plaintext.
- Keys must rotate automatically every 365 days, and earlier key versions must remain available for at least 7 years so archived data can still be decrypted.
- The disaster-recovery plan mandates that encrypted data be fully readable in us-west-2 within 15 minutes of a regional outage, without application changes.
- Operations must minimize the number of keys administrators manage and avoid writing custom code for key rotation or cross-Region replication.
Which solution meets all of these requirements with the LEAST operational overhead?
Deploy AWS CloudHSM clusters in us-east-1 and us-west-2, create custom key stores, manually replicate key material between clusters, and schedule annual Lambda jobs to rotate the keys.
Create separate customer managed KMS keys in both Regions for each workload account. Turn on automatic rotation for every key and rely on AWS Backup cross-Region copy jobs to move encrypted snapshots to us-west-2.
Import customer-generated key material into a KMS key in us-east-1, export the plaintext key, import it into a new KMS key in us-west-2, and use an annual Lambda function to re-import fresh key material into both keys.
Create one symmetric multi-Region customer managed KMS key in the security account in us-east-1. Enable automatic rotation and use ReplicateKey to create a replica in us-west-2. Add key-policy statements that allow IAM roles in each workload account to perform cryptographic operations, and point all applications to the key ARN.
Answer Description
Creating a multi-Region customer managed AWS KMS key in the security account satisfies every control:
- Multi-Region keys are generated, stored, and used only inside AWS-managed FIPS 140-3 HSMs, so key material never leaves KMS in plaintext.
- Enabling automatic rotation on the primary key rotates the key material every 365 days and KMS retains older key versions indefinitely, allowing decryption of data encrypted up to (and beyond) the required 7-year window.
- Replicating the key to us-west-2 produces a replica with an identical key ID and key material. Applications can decrypt data in either Region with no code changes, meeting the 15-minute DR objective without manual replication scripts.
- A single key set (one primary and one replica) is managed centrally. Only the key policy needs to grant cryptographic permissions to workload-account roles, so administrators avoid maintaining separate keys per account or Region.
The other options introduce extra keys, manual key movement, or expose plaintext key material, failing one or more stated requirements.
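A minimal boto3 sketch of the key lifecycle follows; the description is a hypothetical placeholder, and the cross-account key-policy statements described above are omitted for brevity.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Primary multi-Region symmetric key in the security account.
key = kms.create_key(
    Description="org data key",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
    MultiRegion=True,
)["KeyMetadata"]

# Automatic rotation defaults to every 365 days; prior key material
# is retained so older ciphertexts stay decryptable.
kms.enable_key_rotation(KeyId=key["KeyId"])

# Replica in us-west-2 shares the same key ID and key material,
# so applications fail over with no code changes.
kms.replicate_key(KeyId=key["Arn"], ReplicaRegion="us-west-2")
```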
Ask Bash
What is a multi-Region customer managed KMS key?
How does automatic key rotation work in AWS KMS?
What is AWS ReplicateKey and how is it used?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.