AWS Certified Solutions Architect Professional Practice Test (SAP-C02)
Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Professional SAP-C02 Information
The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a test for people who want to show advanced skills in cloud design using Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam shows a higher level of knowledge than the associate-level test and is often needed for senior cloud roles.
This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.
AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test
- 20 Questions
- Unlimited
- Design Solutions for Organizational Complexity, Design for New Solutions, Continuous Improvement for Existing Solutions, Accelerate Workload Migration and Modernization
Free Preview
This test is a free preview, no account required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
A global enterprise is designing its multi-region AWS network. The company has a large, existing on-premises IP address space and owns a public /24 IPv4 block. They plan to create hundreds of VPCs across multiple AWS accounts within an AWS Organization. A key requirement is to prevent overlapping IP address ranges between on-premises networks and all new VPCs. Additionally, they want to centrally manage and automate the allocation of VPC CIDR blocks to different business units and enforce specific tagging policies on VPC creation. Which approach provides the most scalable and manageable solution for this IP addressing strategy?
Implement Amazon VPC IP Address Manager (IPAM) within the AWS Organization. Create IPAM pools from the company's on-premises IP space and use the Bring Your Own IP (BYOIP) feature for their public /24 block. Enforce allocation rules for VPC creation.
Design all VPCs with a small primary CIDR from the 10.0.0.0/8 range. As IP space is depleted, add secondary CIDR blocks to each VPC from the on-premises IP address space.
Manually track all AWS-provided private CIDR allocations in a shared spreadsheet. Use AWS Resource Access Manager (RAM) to share subnets from a central VPC to spoke accounts.
For all new VPCs, exclusively allocate CIDR blocks from the 100.64.0.0/10 range to ensure no overlap with the existing on-premises network. Use AWS Budgets to monitor IP address consumption.
Answer Description
The correct solution is to use Amazon VPC IP Address Manager (IPAM) integrated with AWS Organizations. IPAM allows for central planning, tracking, and monitoring of IP address space across multiple accounts and Regions. By creating a top-level pool in IPAM with the company's private IP space, the architect can then create smaller, delegated pools for different business units or environments, preventing overlaps. IPAM's rule-based allocation can enforce that new VPCs are created with non-overlapping CIDRs and meet compliance requirements, such as mandatory tagging. Furthermore, by using the Bring Your Own IP (BYOIP) feature, the company can import its public /24 block into IPAM's public scope, allowing them to manage and allocate their own public IP addresses to resources like NAT Gateways and Load Balancers centrally.
Using a spreadsheet is a manual, error-prone process that does not scale for hundreds of VPCs and is contrary to the automation requirement.
While secondary CIDR blocks allow a VPC to be expanded, they do not provide a centralized, proactive mechanism for managing IP allocation across an entire organization. This approach is reactive and does not prevent initial CIDR overlaps between VPCs.
Using the 100.64.0.0/10 range is incorrect for general VPC workloads. This range is reserved by RFC 6598 for Carrier-Grade NAT (CGN) and should not be used for private enterprise networking, although it is supported for specific use cases like Amazon EKS custom networking. Using it for general VPCs could lead to unpredictable connectivity issues.
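To make the IPAM workflow concrete, the following is a minimal boto3 sketch of carving a business-unit pool from a top-level pool and enforcing allocation rules. The scope ID, pool IDs, CIDRs, and tag values are hypothetical placeholders, not values from the scenario.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve a business-unit pool out of an existing top-level IPAM pool.
pool = ec2.create_ipam_pool(
    IpamScopeId="ipam-scope-0example",        # org IPAM's private scope
    SourceIpamPoolId="ipam-pool-0toplevel",   # pool holding the on-premises-derived space
    Locale="us-east-1",
    AddressFamily="ipv4",
    AllocationMinNetmaskLength=16,
    AllocationMaxNetmaskLength=24,            # VPC CIDR requests must be /16-/24
    AllocationResourceTags=[                  # resources without this tag are flagged noncompliant
        {"Key": "BusinessUnit", "Value": "payments"}
    ],
)

# Provision address space into the new pool from the parent pool's range.
ec2.provision_ipam_pool_cidr(
    IpamPoolId=pool["IpamPool"]["IpamPoolId"],
    Cidr="10.64.0.0/14",
)
```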
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon VPC IP Address Manager (IPAM)?
What is the Bring Your Own IP (BYOIP) feature in AWS?
How does IPAM enforce allocation rules and prevent CIDR overlap?
A financial services company runs a critical trade-processing application on AWS. The application uses a fleet of Amazon EC2 instances and an Amazon Aurora PostgreSQL database. Due to the critical nature of the application, the business has mandated a Recovery Time Objective (RTO) of less than 1 minute and a Recovery Point Objective (RPO) of less than 1 second. The disaster recovery (DR) plan must account for a full AWS Region failure.
Which DR strategy should a solutions architect recommend to meet these requirements?
Use AWS Elastic Disaster Recovery (DRS) to continuously replicate the EC2 instances and the attached database volumes to a staging area in a secondary region.
Deploy the application and a scaled-down version of the EC2 fleet in a secondary region as a Warm Standby. Use Amazon Aurora Global Database, with the secondary region hosting a read replica.
Configure a Pilot Light architecture by replicating the Aurora database to a secondary region. Provision the application tier infrastructure only upon a failover event.
Use AWS Backup with Cross-Region Replication to copy Aurora snapshots and AMIs to a secondary region. In a disaster, restore the environment using the replicated backups.
Answer Description
The correct answer is to use Amazon Aurora Global Database and a Warm Standby application tier. Amazon Aurora Global Database is specifically designed for cross-region disaster recovery, providing a typical RPO of under one second and an RTO of under one minute, which precisely meets the stated requirements. The Warm Standby approach for the application tier ensures that a scaled-down but fully functional version of the environment is always running in the DR region, allowing for a rapid failover and scaling to full capacity within the one-minute RTO.
Using a Pilot Light strategy would not meet the RTO. While the data layer might be replicated, the application infrastructure in the DR region would need to be provisioned and scaled from a minimal state, a process that typically takes several minutes, exceeding the one-minute RTO.
Implementing a Backup and Restore strategy would fail to meet either the RTO or the RPO. Restoring from backups, even with Cross-Region Replication, is a process that takes hours, not minutes. The RPO would also be, at best, the time between snapshots, which is far greater than one second.
Using AWS Elastic Disaster Recovery (DRS) is a strong but incorrect option. While DRS provides an excellent RPO of seconds by using continuous block-level replication, its RTO is typically in the range of minutes (e.g., 5-20 minutes) because it needs to launch new recovery instances from replicated data. This would not meet the stringent sub-one-minute RTO requirement.
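As an illustration, a cross-Region failover of an Aurora Global Database is a single API call. The cluster identifiers below are hypothetical.

```python
import boto3

# Call the API in the Region that will become the new primary.
rds = boto3.client("rds", region_name="us-west-2")

# For an unplanned Regional outage, allow promotion even if the last
# fraction of replicated data has not arrived (typical RPO < 1 s).
rds.failover_global_cluster(
    GlobalClusterIdentifier="trades-global",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:trades-dr",
    AllowDataLoss=True,
)
```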
Ask Bash
What is an Amazon Aurora Global Database, and why is it suitable for cross-region disaster recovery?
How does a Warm Standby strategy work, and why is it effective for meeting critical RTO requirements?
Why don't Pilot Light and Backup/Restore strategies meet stringent RTO and RPO requirements?
A global enterprise is designing a multi-account AWS architecture that will host hundreds of applications, each within its own VPC, across multiple AWS Regions. The security team mandates that all east-west (inter-VPC) traffic and north-south (egress to the internet) traffic must be routed through a central point of inspection for deep packet inspection and logging. The solution must be highly scalable, minimize network management overhead, and support transitive routing to on-premises data centers via AWS Direct Connect. Which connectivity strategy best fulfills these requirements?
Use AWS PrivateLink to create VPC endpoints in each spoke VPC for every shared service. For general inter-VPC traffic, establish a limited mesh of VPC peering connections and manage route tables manually.
Implement a legacy 'Transit VPC' pattern in each region using EC2 instances running third-party routing software. Establish IPsec VPN connections from all spoke VPCs to the Transit VPC to enable transitive routing and inspection.
Establish a full-mesh VPC peering configuration for all VPCs within each region. For inter-region traffic, create additional peering connections. Implement traffic inspection by deploying security appliances in every VPC.
In each region, deploy an AWS Transit Gateway and peer them using inter-region peering. Create a central inspection VPC with a Gateway Load Balancer that fronts a fleet of security appliances. Configure Transit Gateway route tables to forward all traffic to the inspection VPC.
Answer Description
The optimal solution is to deploy an AWS Transit Gateway in each region and use inter-region peering to connect them. A central inspection VPC should be created in each region, containing a Gateway Load Balancer (GWLB) with a fleet of security appliances behind it. Transit Gateway route tables will be configured to direct all inter-VPC and egress traffic to the GWLB endpoint in the inspection VPC. This design creates a scalable, manageable hub-and-spoke network. The Transit Gateway acts as a cloud router, simplifying connectivity and eliminating the need for complex VPC peering meshes. Using a Gateway Load Balancer is the correct approach for deploying, scaling, and managing third-party virtual security appliances transparently within the network traffic path. This architecture centralizes traffic inspection without requiring security appliances to be deployed in each spoke VPC.
A full-mesh VPC peering configuration is incorrect because it is not scalable. Managing peering connections for hundreds of VPCs (which would require thousands of connections) is operationally complex and error-prone. Furthermore, VPC peering does not support transitive routing, so it cannot be used to route traffic from a spoke VPC through a central VPC to an on-premises network.
The legacy 'Transit VPC' model using EC2-based VPN appliances is also incorrect. While it provides transitive routing, it is a self-managed solution that has been largely superseded by the fully managed, more scalable, and highly available AWS Transit Gateway service.
Using AWS PrivateLink is not suitable for this scenario. PrivateLink is designed to provide secure, private connectivity from a VPC to specific services, not for routing all network traffic. It cannot be used to inspect general inter-VPC or egress traffic.
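A sketch of the route-table plumbing, assuming hypothetical Transit Gateway route table and attachment IDs: every spoke route table gets a default route toward the inspection VPC attachment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# In the route table associated with the spoke attachments, send all
# traffic to the inspection VPC attachment for firewall inspection.
ec2.create_transit_gateway_route(
    TransitGatewayRouteTableId="tgw-rtb-0spokes",
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayAttachmentId="tgw-attach-0inspection",
)
```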
Ask Bash
What is AWS Transit Gateway, and why is it important in this architecture?
What is a Gateway Load Balancer, and how does it enable traffic inspection?
Why is the full-mesh VPC peering strategy not scalable for this scenario?
You operate latency-sensitive trading workloads on bare-metal servers in an Equinix colocation facility that is also an AWS Direct Connect location. Several microservices run in multiple Amazon VPCs that belong to three different AWS accounts in the us-east-1 and us-east-2 Regions. Network engineering requires a single, private 10-Gbps link that avoids internet hops, delivers predictable latency, and allows additional VPCs to be connected later without ordering new physical circuits. Which connectivity strategy best meets these requirements?
Install an AWS Outposts rack in the colocation facility and rely on the Outposts service link over the public internet to exchange traffic with the VPCs.
Attach each VPC to an AWS Transit Gateway, create two Site-to-Site VPN tunnels from the Transit Gateway to the on-premises router in the colocation facility, and enable equal-cost multi-path routing.
Request a 10-Gbps dedicated AWS Direct Connect cross-connect in the colocation facility. Create one private virtual interface that terminates on a Direct Connect gateway, and associate the gateway with the virtual private gateways of each VPC.
Ask an AWS Direct Connect Delivery Partner at a different Direct Connect location to provision a 10-Gbps hosted connection and extend the circuit to the colocation data center over MPLS.
Answer Description
A dedicated AWS Direct Connect cross-connect placed in the same colocation facility provides a private fiber link that bypasses the internet. Terminating a single private virtual interface on a Direct Connect gateway lets you attach the virtual private gateways of up to 20 VPCs, even across multiple AWS accounts and Regions, over that one 10-Gbps circuit, so future VPCs can be added with only configuration changes. The VPN-based design relies on internet transport and therefore cannot guarantee low or consistent latency. Extending a hosted connection from another site adds an additional carrier circuit and extra hops, undermining the latency goal and adding operational complexity. An Outposts rack still relies on a service-link tunnel (over the public internet or a Direct Connect public VIF) to reach its parent Region and does not provide direct, high-bandwidth connectivity to multiple VPCs; it also introduces unnecessary hardware and cost.
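A minimal boto3 sketch of that wiring, with placeholder names and IDs:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# One Direct Connect gateway can front VPCs in multiple accounts and Regions.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="colo-dxgw",
    amazonSideAsn=64512,
)

# Associate each VPC's virtual private gateway; repeat per VPC (up to 20).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGateway"]["directConnectGatewayId"],
    gatewayId="vgw-0example",
)
```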
Ask Bash
What is an AWS Direct Connect dedicated cross-connect, and why is it suitable for low-latency connections?
What role does a Direct Connect gateway play in connecting multiple VPCs?
Why is the VPN and Outposts approach less ideal for low-latency workloads?
A financial-services company is building a hybrid-cloud architecture that connects its on-premises data center to multiple AWS VPCs over AWS Direct Connect. The company requires seamless, bidirectional DNS resolution: on-premises applications must resolve private hostnames for Amazon EC2 instances in the VPCs (for example, app-server.prod.vpc.example.com), and EC2 instances must resolve hostnames that live only in the on-premises namespace (for example, db.corp.internal). The solution must be highly available, scalable, and centrally manageable, and it must not require custom DNS server software on EC2 instances.
Which solution meets these requirements most effectively?
Create Route 53 Resolver inbound and outbound endpoints. Configure conditional forwarding on the on-premises DNS servers to send queries for the VPC domain to the inbound endpoint. Create Resolver rules to forward queries for the on-premises domain to the on-premises DNS servers via the outbound endpoint.
Create a private hosted zone for the on-premises domain (corp.internal) and associate it with all VPCs. Create a Route 53 outbound endpoint and a rule to forward all queries from the VPCs to the on-premises DNS servers.
Create a Route 53 inbound endpoint in each VPC. Configure the on-premises DNS servers with conditional forwarders that send all AWS-related DNS queries to the IP addresses of the inbound endpoints.
Deploy a pair of highly available EC2 instances running BIND in a central VPC. Configure on-premises DNS servers to forward queries to these instances, and configure the BIND servers to forward queries for the on-premises domain back to the on-premises DNS servers.
Answer Description
The most effective design is to use Amazon Route 53 Resolver endpoints and conditional-forwarding rules:
Create an inbound endpoint in a shared or hub VPC. This endpoint exposes two or more IP addresses (in different Availability Zones) that on-premises DNS servers forward queries to, allowing on-premises hosts to resolve records stored in Route 53 private hosted zones.
Create an outbound endpoint in the same VPC and configure Resolver rules (for example, *.corp.internal) that forward VPC-originated queries to the on-premises DNS servers. The outbound endpoint sends the traffic across Direct Connect or the site-to-site VPN link.
Because each endpoint requires at least two IP addresses in different AZs, the solution is highly available by design and fully managed; no EC2-hosted DNS servers need to be deployed or patched.
Self-managed BIND servers on EC2 can work but introduce operational overhead for scaling, patching, and failure handling, so they do not best satisfy the requirements.
Deploying only inbound endpoints enables on-premises-to-AWS lookups but provides no path for VPC-originated queries to reach on-premises DNS, so bidirectional resolution is not achieved.
Creating a private hosted zone for corp.internal plus an outbound endpoint still lacks an inbound endpoint, so on-premises resolvers cannot query AWS records. In addition, the private hosted zone would be redundant because a forwarding rule for the same domain would take precedence and send the queries to the on-premises DNS servers anyway.
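The outbound half of the design, sketched with boto3; the subnet, security-group, VPC IDs, and on-premises resolver IP are placeholders.

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Outbound endpoint in the hub VPC (two subnets in different AZs required).
out_ep = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-0001",
    Name="hub-outbound",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0dnsexample"],
    IpAddresses=[{"SubnetId": "subnet-0a"}, {"SubnetId": "subnet-0b"}],
)

# Forward corp.internal queries to the on-premises DNS servers.
rule = r53r.create_resolver_rule(
    CreatorRequestId="corp-internal-0001",
    Name="to-onprem",
    RuleType="FORWARD",
    DomainName="corp.internal",
    ResolverEndpointId=out_ep["ResolverEndpoint"]["Id"],
    TargetIps=[{"Ip": "192.168.1.10", "Port": 53}],
)

# Associate the rule with each workload VPC.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0workload",
)
```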
Ask Bash
What is an Amazon Route 53 Resolver endpoint, and how does it differ for inbound and outbound traffic?
What are conditional forwarding and Resolver rules in Route 53, and how are they configured?
Why are self-managed BIND DNS servers on EC2 not recommended for this use case?
A financial services company operates a large number of applications across a multi-account AWS Organization. The security team needs a comprehensive, centrally managed security solution. The solution must provide proactive and intelligent threat detection for workloads and data, including identifying unusual API activity or potential instance compromises. It must also offer protection for public-facing web applications against common web exploits and DDoS attacks. A key requirement is to aggregate security findings from all accounts and services into a single, designated security tooling account for unified visibility, posture management, and prioritized remediation. Which combination of AWS services should a solutions architect recommend to meet all these requirements most effectively?
Use AWS Config with conformance packs to enforce security best practices and Amazon Macie to discover and protect sensitive data in Amazon S3.
Implement Amazon GuardDuty for threat detection, AWS WAF for web application protection, AWS Shield Advanced for DDoS mitigation, and AWS Security Hub for centralized findings management.
Enable Amazon Inspector in all accounts to scan for vulnerabilities, and use AWS Systems Manager Patch Manager to automate patching.
Deploy AWS Network Firewall in each VPC, use VPC Flow Logs for traffic analysis, and stream logs to a central Amazon S3 bucket for manual review.
Answer Description
The correct answer proposes a combination of AWS Security Hub, Amazon GuardDuty, AWS WAF, and AWS Shield Advanced. This solution is the most comprehensive for the described scenario. Amazon GuardDuty provides intelligent threat detection by monitoring for malicious activity and unauthorized behavior. AWS WAF protects web applications from common exploits like SQL injection and cross-site scripting. AWS Shield Advanced offers enhanced protection against sophisticated DDoS attacks. Finally, AWS Security Hub aggregates, organizes, and prioritizes security findings from GuardDuty, WAF, and other services across all accounts in an AWS Organization, providing a centralized view for posture management. This combination directly addresses all requirements: intelligent threat detection (GuardDuty), web application protection (WAF, Shield), and centralized findings aggregation (Security Hub).
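Centralizing findings starts with delegating administration to the security tooling account. A sketch, run from the Organizations management account (the account ID is hypothetical):

```python
import boto3

ADMIN_ACCOUNT = "111122223333"  # hypothetical security tooling account

# Delegate Security Hub and GuardDuty administration so the security
# account can aggregate findings and manage detectors org-wide.
boto3.client("securityhub", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=ADMIN_ACCOUNT
)
boto3.client("guardduty", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=ADMIN_ACCOUNT
)
```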
Ask Bash
What is Amazon GuardDuty and how does it provide threat detection?
How do AWS WAF and AWS Shield Advanced protect against web exploits and DDoS attacks?
What is AWS Security Hub, and how does it centralize findings across accounts?
Your company deployed its first workload in a new VPC that uses the IPv4 CIDR block 10.2.0.0/20. Three months later, security and operations teams redefine the network-segmentation standard. The VPC must now contain three public and three private subnets in each of three Availability Zones (18 subnets total). Every subnet must provide at least 400 usable IPv4 addresses to accommodate horizontally-scaling container tasks. Existing resources in the current address range must keep running without an IP-address change.
Which action will satisfy the new requirements with the least operational effort?
Associate a non-overlapping secondary IPv4 CIDR block such as 10.2.64.0/18 with the VPC and create the new subnets from that range.
Create a new VPC with a /16 CIDR block, migrate all workloads into it, and delete the original VPC.
Resize each required subnet to /25 so that all 18 subnets fit inside the existing 10.2.0.0/20 range.
Enlarge the VPC's primary CIDR block from /20 to /18, then recreate all subnets so they meet the new size requirement.
Answer Description
A /20 VPC has 4,096 IPv4 addresses, which is insufficient for 18 subnets that each need at least 400 usable addresses. The smallest subnet that meets the usable-address target is a /23 (512 total, 507 usable after AWS reserves 5). 18 × 512 = 9,216 addresses, so additional space is needed.
AWS does not allow resizing a VPC's primary CIDR, but you can associate up to five secondary IPv4 CIDR blocks of sizes /28 to /16. Adding a non-overlapping block (for example, 10.2.64.0/18, which supplies 16,384 addresses) immediately expands the address pool without disturbing existing workloads. New /23 public and private subnets can then be carved from the secondary range and distributed across the three AZs.
Changing the primary CIDR is impossible, creating an entirely new VPC requires a full migration, and shrinking each subnet to /25 would leave only 123 usable addresses, well below the requirement. Therefore, adding a sufficiently large secondary CIDR to the existing VPC is the simplest, lowest-risk solution.
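The two-step change, sketched with boto3 (the VPC ID and Availability Zone are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 1: attach a non-overlapping secondary range; 10.2.0.0/20 is untouched.
ec2.associate_vpc_cidr_block(VpcId="vpc-0example", CidrBlock="10.2.64.0/18")

# Step 2: carve /23 subnets (507 usable addresses each) from the new range.
ec2.create_subnet(
    VpcId="vpc-0example",
    CidrBlock="10.2.64.0/23",
    AvailabilityZone="us-east-1a",
)
```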
Ask Bash
What is a CIDR block, and how does it define IP address ranges in a network?
Why does AWS reserve five IP addresses in every subnet, and how does that affect the number of usable addresses?
What are the benefits of associating a secondary CIDR block with a VPC, and why is it the least disruptive option?
A financial services company is designing a global, multi-account AWS environment to host a critical three-tier application. The architecture requires separate AWS accounts for development, staging, and production to ensure strict workload isolation. Each account will have its own VPC and connect to a central Transit Gateway for shared services and to an on-premises network via AWS Direct Connect. The on-premises network uses the 10.0.0.0/8 address space. The architects have allocated the 172.16.0.0/16 block for all AWS VPCs. A primary requirement is to maintain clear network segmentation between application tiers (web, application, database) within each VPC, while ensuring that routing between the VPCs and the on-premises network is scalable and avoids IP address conflicts. Which network segmentation strategy is the MOST effective and scalable for this scenario?
Use the same 172.16.0.0/16 CIDR block for the VPC in each of the development, staging, and production accounts. Rely on the Transit Gateway to manage routing between the identical address spaces.
Create a single, large VPC in a shared services account with the 172.16.0.0/16 CIDR. Create separate sets of subnets within this single VPC for the development, staging, and production environments, using security groups to enforce isolation.
Assign a unique, non-overlapping CIDR block to each account's VPC (e.g., 172.16.10.0/24 for dev, 172.16.20.0/24 for staging, 172.16.30.0/24 for prod). Within each VPC, create separate subnets for the web, application, and database tiers across multiple Availability Zones.
Assign the primary CIDR block 172.16.0.0/16 to the production VPC. For the development and staging VPCs, use the same primary CIDR and then add unique secondary CIDR blocks to each to differentiate them for routing purposes.
Answer Description
The correct strategy is to create a unique, non-overlapping CIDR block for each account's VPC, derived from the larger allocated address space. This approach prevents IP address conflicts, which is crucial for routing via AWS Transit Gateway and AWS Direct Connect. Using the same CIDR block for all VPCs would create overlapping IP ranges, making inter-VPC and hybrid routing impossible without complex and inefficient NAT solutions. Within each VPC, creating separate subnets for each application tier (web, application, database) across multiple Availability Zones provides the required network segmentation and high availability. Using a single VPC for all environments violates the principle of account-level isolation. Relying on secondary CIDR blocks is a reactive measure for VPCs that have run out of IP addresses and is not a best practice for initial network design.
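A quick way to sanity-check such an addressing plan is Python's ipaddress module; the blocks below mirror the example CIDRs in the correct option.

```python
import ipaddress

onprem = ipaddress.ip_network("10.0.0.0/8")
vpcs = {
    "dev": ipaddress.ip_network("172.16.10.0/24"),
    "staging": ipaddress.ip_network("172.16.20.0/24"),
    "prod": ipaddress.ip_network("172.16.30.0/24"),
}

# No VPC may overlap the on-premises space or any other VPC.
blocks = list(vpcs.values())
assert not any(b.overlaps(onprem) for b in blocks)
assert not any(
    a.overlaps(b) for i, a in enumerate(blocks) for b in blocks[i + 1:]
)
print("addressing plan is conflict-free")
```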
Ask Bash
Why is it important to use unique, non-overlapping CIDR blocks for each VPC?
What role does the AWS Transit Gateway play in this architecture?
How does segmenting the application tiers into separate subnets within a VPC improve security and scalability?
A global travel-booking company runs a latency-sensitive REST API on Amazon EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The data tier is an Amazon Aurora MySQL cluster. The architects have already extended the database by adding an Aurora Global Database secondary cluster in us-west-2.
Business continuity targets state that, if the primary Region fails, the API must recover in under one minute (RTO < 60 s) and lose at most 1 second of data (RPO < 1 s). Operations teams want to avoid manual DNS updates or lengthy runbook procedures during a Regional outage and prefer a solution that incurs the least ongoing operational overhead.
Which combination of actions will BEST meet these requirements?
Front both Regional ALBs with AWS Global Accelerator, enabling endpoint health checks for automatic traffic failover, and configure Aurora Global Database managed cross-Region failover to promote the secondary cluster when the primary Region is unavailable.
Create weighted Amazon Route 53 records with health checks for each ALB, set the record TTL to 60 seconds, and trigger an AWS Lambda function from CloudWatch alarms to adjust the weights. Manually promote the Aurora secondary cluster during an outage.
Refactor the application into an active/active design that stores data in Amazon DynamoDB global tables and implements bidirectional replication logic between EC2 instances in both Regions.
Use AWS Elastic Disaster Recovery to replicate the EC2 instances and the Aurora database to us-west-2, keep the target resources stopped, and start them when a Regional failure is declared.
Answer Description
AWS Global Accelerator provides two static anycast IP addresses and continuously probes the health of each regional endpoint. If the ALB in us-east-1 becomes unhealthy, Global Accelerator removes it from service and starts directing new connections to the healthy ALB in us-west-2 in approximately 30 seconds, eliminating dependence on DNS TTL expiration. Aurora Global Database replicates changes across Regions with typical latency below 1 second and, with the managed cross-Region failover feature, can promote the secondary cluster to the writer role in under a minute. Together, these services meet the RPO of less than 1 second and an RTO of under 60 seconds, satisfying the business continuity targets while requiring only minimal configuration.
The weighted Route 53 design still depends on DNS caching and manual weight adjustments, so traffic might continue to flow to an unhealthy Region for longer than the RTO. AWS Elastic Disaster Recovery must boot new instances, which takes minutes and breaches the RTO. Re-architecting around DynamoDB global tables would meet cross-Region data durability but would not preserve the existing Aurora workload and would add significant development and operational effort.
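For illustration, a condensed boto3 sketch of the accelerator setup. The ALB ARNs and names are placeholders, and note that the Global Accelerator API is served from the us-west-2 endpoint.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="bookings-api", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; endpoint health checks drive automatic failover.
for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api/abc"),
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/api/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
        HealthCheckIntervalSeconds=10,
    )
```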
Ask Bash
What is AWS Global Accelerator, and how does it ensure low RTO for regional failovers?
How does Aurora Global Database achieve a replication latency of less than 1 second?
Why is relying on Route 53 weighted records with health checks less effective for failovers?
An organization has multiple AWS accounts that are part of AWS Organizations. A production workload in us-east-1 uses an Amazon FSx for Windows File Server file system and a mission-critical Amazon DynamoDB table. Container images are stored in a private Amazon ECR repository.
Compliance requirements state that:
- Backups must be immutable and retained off-site for 35 days.
- Backup configuration must be centrally managed across accounts.
- A recovery site must be available in us-west-2 with an RTO of 60 minutes and an RPO of ≤ 1 hour.
Which approach meets these requirements in the most cost-effective way?
Convert the FSx file system to a multi-AZ deployment and configure Distributed File System Replication (DFSR) between Regions. Convert the DynamoDB table to a global table spanning us-east-1 and us-west-2, disable all backups, and enable an ECR pull-through cache in us-west-2.
Create an AWS Backup policy in the delegated administrator account that assigns the FSx file system and DynamoDB table to a backup plan with hourly snapshots (FSx) and continuous backups (DynamoDB), a 35-day retention rule, and an automatic copy to a backup vault locked in Compliance mode in us-west-2. Enable Amazon ECR private-registry cross-Region replication from us-east-1 to us-west-2.
Export the DynamoDB table to Amazon S3 every hour, turn on S3 Object Lock for 35 days, and enable S3 Cross-Region Replication to us-west-2. Use AWS DataSync to copy daily Shadow Copies from the FSx file system to the same bucket, and manually push container images to an ECR repository in us-west-2.
Enable AWS Elastic Disaster Recovery on the FSx file system and DynamoDB table to replicate data continuously to us-west-2. Use AWS Backup only for ECR to create nightly snapshots and copy them to a locked vault.
Answer Description
An organization-level AWS Backup policy lets a delegated administrator centrally apply a backup plan to resources in member accounts. The plan can schedule hourly snapshot backups for the FSx file system and enable continuous (point-in-time) backups for DynamoDB, then automatically copy each recovery point to a backup vault in us-west-2. Locking the destination vault in Compliance mode with AWS Backup Vault Lock makes the backups write-once-read-many and prevents even privileged users from deleting or shortening the 35-day retention period. Amazon ECR provides native cross-Region private-registry replication, so new images pushed in us-east-1 are automatically replicated to us-west-2 without extra backup charges. During a regional outage, administrators can restore the FSx recovery point, perform a point-in-time restore of the DynamoDB table (within the 35-day window), and pull the replicated container images, all well within the 60-minute RTO and 1-hour RPO.
The other options either rely on services that do not support the listed resources, require custom scripting and higher data-transfer costs, or fail to provide immutable 35-day retention across accounts and Regions.
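For the immutability piece specifically, here is a sketch of locking the destination vault; the vault name and cooling-off window are hypothetical.

```python
import boto3

backup = boto3.client("backup", region_name="us-west-2")

# After ChangeableForDays elapses, the lock becomes immutable (Compliance
# mode): no principal, including root, can shorten the 35-day retention.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="dr-vault-usw2",
    MinRetentionDays=35,
    ChangeableForDays=3,
)
```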
Ask Bash
What is AWS Backup and how does it help with compliance requirements?
How does Amazon ECR's cross-Region replication work?
What is the role of DynamoDB continuous backups in meeting RPO requirements?
A global enterprise is designing its AWS network architecture using a multi-account strategy with AWS Organizations. The design includes a central "Network" account that hosts an AWS Transit Gateway (TGW). Multiple "Application" accounts, each with a VPC, are attached to this TGW. A key security requirement is that all traffic between the Application VPCs must be inspected by a fleet of next-generation firewall (NGFW) appliances. These appliances are deployed in a dedicated "Inspection" VPC, also owned by the Network account. The Application VPCs have been deployed with overlapping CIDR blocks.
Which solution should a solutions architect recommend to meet these requirements in the most scalable and resilient way?
Create a VPC endpoint service using AWS PrivateLink in the Inspection VPC, fronting the NGFW appliances. Create interface endpoints for this service in each Application VPC. Update the route tables in all Application VPCs to route traffic through the local interface endpoints for inspection.
Deploy the NGFW appliances behind a Network Load Balancer (NLB) in the Inspection VPC. Configure Transit Gateway route tables to forward traffic to the NLB. The firewall appliances will perform Source NAT (SNAT) on the traffic before routing it back to the Transit Gateway for delivery.
Deploy the NGFW appliances as targets for a Gateway Load Balancer (GWLB) in the Inspection VPC. Configure the Transit Gateway to route traffic between Application VPCs to the Inspection VPC attachment. In the Inspection VPC, create GWLB endpoints and configure routing to direct traffic from the TGW through the GWLB for inspection before it is returned to the TGW.
Create a full mesh of VPC Peering connections between all Application VPCs and the Inspection VPC. Configure route tables in each Application VPC to forward traffic to the Inspection VPC, where the NGFW appliances are deployed on EC2 instances behind a Network Load Balancer.
Answer Description
The correct solution is to use a combination of AWS Transit Gateway (TGW) and Gateway Load Balancer (GWLB). The TGW acts as a central hub, which is necessary because the Application VPCs have overlapping CIDRs, making VPC Peering impossible. TGW route tables can be configured to direct all inter-VPC traffic to the Inspection VPC. Inside the Inspection VPC, a Gateway Load Balancer is used to deploy, scale, and manage the fleet of NGFW appliances transparently. It operates at Layer 3 and uses GENEVE encapsulation to preserve the original source and destination of the traffic. Routing is configured to send traffic from the TGW to the GWLB Endpoints, through the appliances for inspection, and then back to the TGW to be forwarded to its final destination.
Using VPC Peering is incorrect because it does not support connections between VPCs with overlapping CIDR blocks. It also does not scale well for this hub-and-spoke inspection model, as it would require a complex mesh of connections. Using a Network Load Balancer (NLB) instead of a GWLB is suboptimal because NLBs are designed for Layer 4 load balancing and are not transparent. This would require Source NAT (SNAT) on the firewall appliances, which complicates routing and causes loss of the original source IP address. AWS PrivateLink is designed to provide private, unidirectional access to specific services and is not the appropriate tool for transparently inspecting all network traffic between VPCs.
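One detail worth noting for this pattern: enabling appliance mode on the inspection VPC attachment keeps both directions of a flow on the same Availability Zone path, which stateful firewalls behind a GWLB require. A sketch with a placeholder attachment ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Without appliance mode, return traffic may hash to a different AZ and
# bypass the stateful appliance that saw the forward direction.
ec2.modify_transit_gateway_vpc_attachment(
    TransitGatewayAttachmentId="tgw-attach-0inspection",
    Options={"ApplianceModeSupport": "enable"},
)
```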
Ask Bash
What is an AWS Transit Gateway (TGW) and why is it used in this architecture?
What is a Gateway Load Balancer (GWLB) and how does it differ from a Network Load Balancer (NLB)?
Why is PrivateLink or VPC Peering not suitable for this use case?
Your company is deploying a two-tier web application in a single Amazon VPC. An Application Load Balancer (ALB) in the public subnets terminates TLS on port 443 and forwards traffic to application servers in private subnets that listen on TCP port 9000. You must meet several compliance requirements: only the ALB may initiate traffic to the application servers on port 9000, the application servers must not be reachable from any other source, return-path traffic must be allowed automatically, and the solution must incur the least ongoing rule maintenance as the environment scales. Which configuration meets these requirements?
Replace the private-subnet route tables with routes that send all VPC-internal traffic to a firewall appliance in a dedicated subnet. Configure the appliance to permit TCP 9000 from the ALB to the application servers; keep the default security group and network ACL.
Create one security group for the ALB and another for the application servers. In the application-server security group, add an inbound rule that allows TCP 9000 from the ALB's security-group ID and remove all other inbound rules. Keep the default network ACL for all subnets.
Associate a custom network ACL with the private subnets that allows inbound TCP 9000 only from the ALB subnet CIDR blocks and outbound ephemeral ports. Leave a security group on the servers that allows all traffic.
In the application-server security group, allow TCP 9000 from 0.0.0.0/0. Attach a custom network ACL that denies all other ports inbound and outbound; update the ACL whenever new instances or ports are needed.
Answer Description
Security groups operate at the instance level and are stateful, so response traffic is automatically allowed without extra rules. Referencing the ALB's security-group ID in the application-server security group limits inbound traffic to only the ALB while blocking all other sources in the VPC. Changes to either security group automatically apply to any new load-balancer nodes or Auto Scaling instances, eliminating manual updates as the environment grows.
Network ACLs are stateless and would require matching inbound and outbound rules for the return (ephemeral) ports, creating ongoing maintenance overhead. The default network ACL already allows all traffic, so leaving it unchanged keeps administration simple and still enforces least-privilege access through the security groups.
Using a third-party firewall or opening the application-server security group to 0.0.0.0/0 either adds unnecessary complexity or violates the requirement to restrict access solely to the ALB.
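The key rule, expressed with boto3 (the security-group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow TCP 9000 only from members of the ALB's security group.
# Because security groups are stateful, return traffic is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId="sg-0appservers",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 9000,
        "ToPort": 9000,
        "UserIdGroupPairs": [{"GroupId": "sg-0alb"}],
    }],
)
```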
Ask Bash
Why are security groups considered stateful in AWS?
What is the difference between a security group and a network ACL in AWS?
Why is referencing the ALB's security-group ID the best way to limit traffic to the application servers?
A financial-services company exchanges personally identifiable information (PII) with an AWS workload that runs in a private VPC. The company currently uses a single 10 Gbps dedicated AWS Direct Connect private virtual interface that terminates on its on-premises core router. New regulatory requirements mandate that all PII in transit across the hybrid link must be encrypted. The solution must preserve at least 8 Gbps of throughput, add as little operational overhead as possible, and avoid any application-level changes.
Which approach meets these requirements?
Configure an AWS Site-to-Site VPN connection with two IPsec tunnels over the Direct Connect link and route all traffic through the VPN.
Enable MAC Security (MACsec) on the existing 10 Gbps dedicated Direct Connect port and configure matching MACsec parameters on the on-premises router.
Order a second 10 Gbps dedicated Direct Connect at a different location and enable BGP MD5 authentication on both connections.
Implement TLS encryption at the application layer for every service that exchanges PII over the Direct Connect link.
Answer Description
MAC Security (MACsec) is a native option for 10-Gbps, 100-Gbps, and 400-Gbps dedicated Direct Connect ports. When enabled on the existing 10-Gbps connection and configured on the on-premises router, MACsec provides IEEE 802.1AE layer-2 encryption for all traffic between the data center and the Direct Connect location. Because encryption happens in hardware on the link, there is no reduction in available bandwidth or increase in latency, and no additional tunnels or devices to manage.
Using Site-to-Site VPN over Direct Connect would require multiple IPsec tunnels, each limited to 1.25 Gbps, to reach 8 Gbps, adding complexity and management overhead. Encrypting at the application layer with TLS would force code and configuration changes across every workload and still leave unmanaged protocols unprotected. Adding a second Direct Connect connection and relying on BGP MD5 only authenticates the BGP session; it does not encrypt user data, so the compliance requirement is not satisfied.
Therefore, enabling MACsec on the existing dedicated Direct Connect link is the simplest solution that satisfies both the encryption and performance requirements.
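Associating the pre-shared key with the connection is a single call; the connection ID and key material below are hypothetical.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# The same CKN/CAK pair must be configured on the on-premises router.
dx.associate_mac_sec_key(
    connectionId="dxcon-0example",
    ckn="0123" * 16,  # hypothetical 64-hex-character connectivity key name
    cak="fedc" * 16,  # hypothetical 64-hex-character connectivity association key
)
```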
Ask Bash
What is MACsec and why is it suitable for this scenario?
Why wouldn’t a Site-to-Site VPN with IPsec over Direct Connect meet the requirements?
What are the differences between MACsec and TLS encryption in this context?
A global enterprise now operates more than 150 AWS accounts that are divided into four business-unit OUs. The cloud center of excellence (CCOE) mandates that every account must:
- Prevent the creation of unencrypted EBS volumes and block uploads to Amazon S3 that are not encrypted with AWS KMS.
- Enforce the CostCenter and Environment tags with allowed values on every supported AWS resource.
- Deliver all CloudTrail records from every account to a single immutable log-archive account.
- Provide each business unit with a consolidated cost view while keeping organization-wide billing.
- Let developers self-provision new sandbox accounts without opening CCOE tickets.
Which approach best meets all of these requirements while minimizing continuing operational effort?
Keep all workloads in a single shared AWS account segmented by VPC and IAM, ask developers to tag resources manually, enable default encryption on S3 and EBS, centralize CloudTrail in the same account, and filter costs with Cost Explorer.
Use AWS Organizations alone with SCPs that deny unencrypted resource creation and missing tags, store an organization CloudTrail in the management account, create new accounts through Service Catalog and CloudFormation StackSets, and rely on Cost Explorer reports for chargeback.
Deploy AWS Control Tower with an OU for each business unit, enable preventive encryption guardrails and an enforced tag policy, allow developers to create sandbox accounts through Account Factory and IAM Identity Center, use the built-in log-archive account for organization-wide CloudTrail, and use consolidated billing with cost-allocation tags for chargeback.
Implement the open-source AWS Landing Zone solution, copy logs into each business-unit account with S3 replication, enforce encryption through bucket policies, require CCOE ticketing for new sandbox accounts, and generate cost visibility from CUR data in Athena.
Answer Description
AWS Control Tower sets up a landing-zone architecture that automatically creates a dedicated log-archive account, enables an organization-wide CloudTrail, and provides preventive guardrails that block the creation of unencrypted EBS volumes and unencrypted S3 uploads. Account Factory, integrated with IAM Identity Center, gives developers governed self-service account provisioning. An enforced AWS Organizations tag policy applied at the OU level standardizes the CostCenter and Environment tags across resources, and consolidated billing combined with activated cost-allocation tags supplies the required per-business-unit cost view. The alternative options either rely on manual or ticket-based account creation, lack enforced encryption or tagging controls, or spread logging and cost management across multiple locations, resulting in higher operational overhead and incomplete compliance.
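As one concrete piece of this setup, an enforced tag policy can be created and attached at the OU level. The allowed values and OU ID below are hypothetical.

```python
import json

import boto3

org = boto3.client("organizations")

# Hypothetical tag policy: standardize the CostCenter key, restrict its
# values, and enforce compliance for selected resource types.
tag_policy = {
    "tags": {
        "CostCenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["1001", "1002", "1003"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}

resp = org.create_policy(
    Name="costcenter-tagging",
    Description="Standardize and enforce CostCenter tags",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
org.attach_policy(
    PolicyId=resp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-ple1234",  # hypothetical business-unit OU
)
```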
Ask Bash
What is AWS Control Tower, and why is it used in this solution?
What are preventive guardrails in AWS Control Tower?
What does Account Factory do in AWS Control Tower?
A global corporation is adopting a multi-VPC architecture on AWS, with numerous VPCs spread across several AWS Regions. They also maintain a significant on-premises data center connected to AWS via AWS Direct Connect. The key requirements are to enable seamless, transitive communication between all VPCs (inter-VPC) and between the on-premises network and all VPCs. The solution must be highly scalable, centrally managed, and minimize operational overhead. A solutions architect needs to design the optimal network topology. Which approach best meets these requirements?
Deploy an AWS Transit Gateway in each region. Peer the Transit Gateways across regions and create attachments for each VPC. Connect the on-premises data center to a Transit Gateway via a Direct Connect Gateway attachment.
Create a full mesh of VPC peering connections between all VPCs. Establish a separate AWS Direct Connect private virtual interface (VIF) from the on-premises network to each individual VPC.
Use an AWS Direct Connect Gateway and associate it with a Virtual Private Gateway (VGW) in each VPC. This will provide connectivity from on-premises to all VPCs and enable inter-VPC communication through the Direct Connect Gateway.
Designate one VPC as a 'transit hub'. Use VPC peering to connect all other 'spoke' VPCs to this hub VPC. Establish a Direct Connect connection to the hub VPC and configure routing instances within it to forward traffic.
Answer Description
The correct answer is to use AWS Transit Gateway. AWS Transit Gateway acts as a cloud router and is specifically designed to simplify network connectivity at scale. By creating a Transit Gateway in each region, attaching all the VPCs in that region, and then peering the Transit Gateways, you create a global network that allows for transitive routing. This means a resource in any connected network (VPC or on-premises) can communicate with a resource in any other connected network through the Transit Gateway hub-and-spoke model. Connecting the on-premises network via a Direct Connect Gateway to a Transit Gateway integrates the hybrid connectivity seamlessly into this architecture. This solution is scalable to thousands of VPCs, centralizes network management, and reduces the operational overhead of managing complex peering relationships.
Creating a full mesh of VPC peering connections is incorrect because it is not scalable. The number of peering connections grows quadratically with the number of VPCs, leading to significant management complexity and being limited to 125 peers per VPC. This approach is not centrally managed.
Using a designated 'transit hub' VPC with routing instances is an outdated pattern known as a 'Transit VPC'. While it can provide transitive routing, it relies on self-managed EC2 instances, which introduces bottlenecks, single points of failure, and high operational overhead for maintenance and scaling compared to the fully managed Transit Gateway service.
Using a Direct Connect Gateway associated with a Virtual Private Gateway (VGW) in each VPC is incorrect. Although a Direct Connect Gateway connects an on-premises site to multiple VPCs, it does not support transitive routing between those VPCs. Traffic cannot flow from one VPC to another through the Direct Connect Gateway, failing a key requirement.
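A sketch of the inter-Region peering step (TGW IDs and the account number are placeholders); note that peering attachments use static routes rather than route propagation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Peer the us-east-1 TGW with its eu-west-1 counterpart; the peer side
# must accept the attachment, then static routes are added on both sides.
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0useast1",
    PeerTransitGatewayId="tgw-0euwest1",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
```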
Ask Bash
What is AWS Transit Gateway and how does it simplify network connectivity?
What is the difference between a Direct Connect Gateway and a Transit Gateway?
Why is a full mesh of VPC peering connections not scalable?
A financial services company uses AWS Organizations to manage a multi-account environment. A central 'SharedServices' account hosts a customer-managed KMS key for encrypting sensitive data. A separate 'Security' account is used for centralized logging and auditing. The company's security policy mandates that all new S3 objects in member accounts must be encrypted at rest using Server-Side Encryption with the specific KMS key (SSE-KMS) from the SharedServices account. Any attempts to upload objects without this specific encryption, including using SSE-S3 or other KMS keys, must be denied. Additionally, all cryptographic operations using the shared KMS key must be logged to an S3 bucket in the Security account.
Which combination of actions provides the most effective and scalable solution to enforce these requirements?
In the SharedServices account, modify the KMS key policy to grant the s3.amazonaws.com service principal access from all accounts in the organization. In each member account, create an S3 bucket policy that mandates SSE-KMS encryption using the shared key's ARN. Configure an Amazon EventBridge rule in the default event bus of each member account to forward all S3 and KMS API calls to a central event bus in the Security account for auditing.
In each member account, create an IAM identity-based policy that denies s3:PutObject unless the request headers specify SSE-KMS with the correct key ARN, and attach this policy to all relevant IAM roles. In the SharedServices account, update the KMS key policy to allow access from all member account roles. In each member account, configure a CloudTrail trail to send logs to a central S3 bucket in the Security account.
In the Organizations management account, create a Service Control Policy (SCP) that denies the s3:PutObject action if the s3:x-amz-server-side-encryption-aws-kms-key-id condition key in the request does not match the ARN of the shared KMS key. In the SharedServices account, modify the KMS key policy to grant kms:GenerateDataKey and kms:Decrypt permissions to the necessary service roles in the member accounts. Create an organization-wide CloudTrail trail in the management account to deliver logs to an S3 bucket in the Security account.
Deploy an AWS Config rule in each member account to detect S3 objects that are not encrypted with the specified shared KMS key. Configure the rule to trigger a remediation action via an AWS Lambda function that deletes non-compliant objects. In the SharedServices account, grant the Lambda execution roles in each member account access to the KMS key. Use an AWS Config aggregator in the Security account to view compliance status.
Answer Description
The correct answer provides the most effective and scalable solution by using a combination of AWS Organizations features. A Service Control Policy (SCP) acts as a preventative guardrail, denying any s3:PutObject API call that does not meet the specified encryption requirements before it can be processed. This is more effective than reactive methods and more scalable than managing IAM policies in each account. The KMS key policy in the central SharedServices account must explicitly grant cross-account permissions to the IAM principals (roles) in the member accounts that need to use the key for encryption and decryption. Finally, creating a single organization-wide CloudTrail trail is the standard, most efficient method for centralizing audit logs from all accounts into a designated S3 bucket in the Security account.
The option to use IAM policies in each member account is incorrect because it is not scalable. It requires manual configuration and ongoing management in every account within the organization, increasing operational overhead and the risk of misconfiguration. SCPs provide a centralized enforcement mechanism.
The option to use AWS Config rules and Lambda for remediation is incorrect because it is a reactive, not preventative, approach. Non-compliant objects would be created before being detected and deleted, which may not meet the strict security requirement to deny the action outright. SCPs prevent the creation from happening in the first place.
The option to grant access only to the S3 service principal and use EventBridge is incorrect for two reasons. First, for cross-account SSE-KMS, the calling IAM principal requires permissions in the KMS key policy, not just the S3 service principal. Second, while EventBridge can be used for eventing, AWS CloudTrail is the purpose-built service for comprehensive, centralized API call auditing and logging for security and compliance purposes.
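A minimal sketch of such an SCP as a Python dict; the key ARN is a placeholder.

```python
import json

# Deny s3:PutObject unless the request specifies SSE-KMS with the shared
# key. With a negated operator, a request that omits the header entirely
# (for example, SSE-S3 or no encryption) also matches and is denied.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireSharedKmsKey",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption-aws-kms-key-id":
                    "arn:aws:kms:us-east-1:444455556666:key/1234abcd-EXAMPLE"
            }
        },
    }],
}
print(json.dumps(scp, indent=2))
```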
Ask Bash
Why is an SCP preferred over IAM policies in this scenario?
What role does the KMS key policy play in enforcing encryption requirements?
Why is a centralized CloudTrail trail better than other logging methods?
A solutions architect is troubleshooting a connectivity issue in a hybrid environment. An application running on an EC2 instance in a spoke VPC (10.20.0.0/16) cannot connect to an on-premises database server (192.168.10.50) on port 1433. The spoke VPC is connected to a central inspection VPC via an AWS Transit Gateway. The inspection VPC is connected to the on-premises data center via an AWS Direct Connect connection. All traffic from the spoke VPC to on-premises is routed through firewall appliances in the inspection VPC. On-premises network engineers have confirmed that their firewalls are not blocking the traffic. The architect needs to identify the component in the AWS network path that is blocking the connection. What is the MOST efficient first step to diagnose this issue?
Configure Route 53 Resolver Query Logging for the spoke VPC. Analyze the logs to ensure the on-premises database's hostname is correctly resolving to the IP address 192.168.10.50.
Use the Route Analyzer feature in Transit Gateway Network Manager to analyze the path from the spoke VPC attachment to the Direct Connect gateway attachment, verifying that routes are correctly propagated.
Enable VPC Flow Logs on the network interfaces for the application instance, the Transit Gateway attachment, and the inspection VPC firewall instances. Query the logs using Amazon Athena to find REJECT entries for traffic destined for 192.168.10.50 on port 1433.
Use VPC Reachability Analyzer to create and run an analysis with the application's EC2 instance network interface as the source and the on-premises database IP address (192.168.10.50) as the destination, specifying port 1433.
Answer Description
The correct answer is to use VPC Reachability Analyzer. This tool is specifically designed to perform static analysis of network paths between a source and a destination. It checks the configurations of route tables, security groups, network ACLs, and Transit Gateways without sending any live packets. This allows it to quickly identify the specific component that is blocking connectivity, making it the most efficient first step for this scenario.
- Using VPC Flow Logs and Amazon Athena is a valid troubleshooting method, but it is less efficient. It requires enabling logs, waiting for traffic to be captured, and then performing complex queries on potentially large datasets to find the problem. This is more time-consuming than using the purpose-built Reachability Analyzer.
- The Route Analyzer feature in Transit Gateway Network Manager is not the best tool for this task because it only analyzes routes within the Transit Gateway route tables. It does not analyze VPC route tables, security group rules, or network ACLs, which are common sources of connectivity problems.
- Configuring Route 53 Resolver Query Logging would be appropriate if the problem were related to DNS name resolution. However, the scenario describes a failure to connect to a specific IP address, which points to a network path issue, not a DNS issue.
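As an illustration of the recommended first step, the following boto3 (Python) sketch runs a Reachability Analyzer analysis for this scenario. The ENI ID is a placeholder, and the use of FilterAtSource to target a destination IP that is not itself an AWS resource is an assumption about how the path would be modeled.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder: replace with the application instance's network interface.
SOURCE_ENI = "eni-0123456789abcdef0"

# Define the path: from the instance's ENI toward the on-premises
# database IP on TCP/1433.
path = ec2.create_network_insights_path(
    Source=SOURCE_ENI,
    Protocol="tcp",
    FilterAtSource={
        "DestinationAddress": "192.168.10.50",
        "DestinationPortRange": {"FromPort": 1433, "ToPort": 1433},
    },
)
path_id = path["NetworkInsightsPath"]["NetworkInsightsPathId"]

# Run the static analysis; no live packets are sent.
analysis = ec2.start_network_insights_analysis(NetworkInsightsPathId=path_id)
analysis_id = analysis["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"]

# Fetch the result (in practice, poll until Status is "succeeded").
# When the path is unreachable, the Explanations field names the blocking
# component, e.g. a route table, security group, or network ACL.
result = ec2.describe_network_insights_analyses(
    NetworkInsightsAnalysisIds=[analysis_id]
)["NetworkInsightsAnalyses"][0]
print(result["Status"], result.get("NetworkPathFound"))
```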
A company is implementing a centralized logging solution within its multi-account AWS environment, which is governed by AWS Organizations. A dedicated Security account (ID 111122223333) hosts an Amazon S3 bucket that receives AWS CloudTrail logs from all member accounts. Compliance rules require every log object in the bucket to be encrypted at rest with a single customer-managed AWS KMS key that also resides in the Security account.
Security analysts, using a specific IAM role in the Security account, must be able to decrypt and analyze the logs. The design must follow the principle of least privilege.
Which configuration correctly enables cross-account encryption of the logs and decryption by the analysts?
Modify the KMS key policy in the Security account. Add a statement that allows the cloudtrail.amazonaws.com service principal the kms:GenerateDataKey*, kms:Decrypt, and kms:DescribeKey actions, using a condition to limit access to requests from the organization's member accounts. Add another statement that grants the security-analyst IAM role the kms:Decrypt action.
In the Security account, create KMS grants that allow the cloudtrail.amazonaws.com service principal to perform the kms:Encrypt action for each member account. Create a separate grant that allows the security-analyst IAM role kms:Decrypt permission.
Attach an IAM policy to the CloudTrail service-linked role in each member account that grants the kms:Encrypt action on the central KMS key's ARN. In the Security account's KMS key policy, add each member account's root ARN to the principal list to allow access.
Create an IAM role in the Security account that member accounts can assume and give that role kms:GenerateDataKey* permission. Configure each trail to use this assumed role for log delivery. Update the KMS key policy to allow the security-analyst IAM role kms:Decrypt permission.
Answer Description
A KMS key policy is the authoritative access-control mechanism for the key, so cross-account permissions should be granted there. For CloudTrail to write SSE-KMS encrypted objects to the bucket it needs kms:GenerateDataKey*; to create or update the trail with SSE-KMS enabled it also needs kms:Decrypt; and it must be able to describe the key. A second statement grants the analysts' IAM role kms:Decrypt so they can read the encrypted logs. Scoping the service principal's access with an aws:SourceArn or kms:EncryptionContext condition limits use of the key to the organization's trails, satisfying least-privilege requirements.
The other options are incorrect:
- Granting only kms:Encrypt omits the kms:GenerateDataKey* action that CloudTrail actually uses for log delivery, and adding each account's root ARN to the key policy is overly permissive.
- Having member accounts assume a separate role is unsupported: CloudTrail delivers logs under its own service principal and cannot be configured to assume a customer-provided role for delivery.
- Relying on long-lived KMS grants and kms:Encrypt does not meet the documented requirements and adds operational complexity.
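For illustration, here is a minimal key-policy sketch (expressed in Python for use with boto3) of the correct option's two statements. The key ARN, role name, and the kms:EncryptionContext condition pattern are assumptions chosen to show the shape of the policy, not values from the question.

```python
import json
import boto3

# Hypothetical ARNs for illustration only.
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
ANALYST_ROLE = "arn:aws:iam::111122223333:role/security-analyst"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Retain administrative control so the account is never locked out.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # CloudTrail needs to generate data keys to encrypt log files.
            "Sid": "AllowCloudTrailUse",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*", "kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
            "Condition": {
                # Scope key use to CloudTrail trails via the encryption context.
                "StringLike": {
                    "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:*:trail/*"
                }
            },
        },
        {
            # Analysts may only decrypt, not administer the key.
            "Sid": "AllowAnalystDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": ANALYST_ROLE},
            "Action": "kms:Decrypt",
            "Resource": "*",
        },
    ],
}

kms = boto3.client("kms", region_name="us-east-1")
kms.put_key_policy(KeyId=KEY_ID, PolicyName="default", Policy=json.dumps(key_policy))
```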
A central security account manages encryption for three production workload accounts in the us-east-1 Region. The workloads store sensitive data in Amazon S3 and Amazon DynamoDB. Compliance requires:
- Encryption keys must stay inside AWS-managed FIPS 140-3 HSMs and never leave the service in plaintext.
- Keys must rotate automatically every 365 days, and earlier key versions must remain available for at least 7 years so archived data can still be decrypted.
- The disaster-recovery plan mandates that encrypted data be fully readable in us-west-2 within 15 minutes of a regional outage, without application changes.
- Operations must minimize the number of keys administrators manage and avoid writing custom code for key rotation or cross-Region replication.
Which solution meets all of these requirements with the LEAST operational overhead?
Create one symmetric multi-Region customer managed KMS key in the security account in us-east-1. Enable automatic rotation and use ReplicateKey to create a replica in us-west-2. Add key-policy statements that allow IAM roles in each workload account to perform cryptographic operations, and point all applications to the key ARN.
Import customer-generated key material into a KMS key in us-east-1, export the plaintext key, import it into a new KMS key in us-west-2, and use an annual Lambda function to re-import fresh key material into both keys.
Deploy AWS CloudHSM clusters in us-east-1 and us-west-2, create custom key stores, manually replicate key material between clusters, and schedule annual Lambda jobs to rotate the keys.
Create separate customer managed KMS keys in both Regions for each workload account. Turn on automatic rotation for every key and rely on AWS Backup cross-Region copy jobs to move encrypted snapshots to us-west-2.
Answer Description
Creating a multi-Region customer managed AWS KMS key in the security account satisfies every control:
- Multi-Region keys are generated, stored, and used only inside AWS-managed FIPS 140-3 HSMs, so key material never leaves KMS in plaintext.
- Enabling automatic rotation on the primary key rotates the key material every 365 days and KMS retains older key versions indefinitely, allowing decryption of data encrypted up to (and beyond) the required 7-year window.
- Replicating the key to us-west-2 produces a replica with an identical key ID and key material. Applications can decrypt data in either Region with no code changes, meeting the 15-minute DR objective without manual replication scripts.
- A single key set (one primary and one replica) is managed centrally. Only the key policy needs to grant cryptographic permissions to workload-account roles, so administrators avoid maintaining separate keys per account or Region.
The other options introduce extra keys, manual key movement, or plaintext exposure of key material, failing one or more of the stated requirements.
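A minimal boto3 (Python) sketch of the correct option's three steps, assuming it runs in the security account; the key-policy statements that grant workload-account roles cryptographic permissions are omitted for brevity.

```python
import boto3

# Create the primary multi-Region key in the security account (us-east-1).
kms_east = boto3.client("kms", region_name="us-east-1")
primary = kms_east.create_key(
    Description="Centralized multi-Region key for workload accounts",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
    MultiRegion=True,
)
key_id = primary["KeyMetadata"]["KeyId"]

# Rotate automatically every 365 days (also the default period); KMS keeps
# all prior key material, so data encrypted under earlier versions stays
# decryptable.
kms_east.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=365)

# Replicate to us-west-2. The replica shares the same key ID and key
# material, so applications need no code changes during a failover.
kms_east.replicate_key(
    KeyId=key_id,
    ReplicaRegion="us-west-2",
    Description="DR replica of the centralized multi-Region key",
)
```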
Your organization operates a primary data center and must replicate 8 TB of daily database changes to more than 50 Amazon VPCs that are spread across three AWS Regions. Each replication stream must sustain at least 8 Gbps throughput with consistently low latency. The security team mandates encryption of all traffic that traverses the link between the data center and AWS. The network team wants to avoid public-internet paths, minimize the number of physical circuits and virtual interfaces that must be managed, and be able to add additional VPCs or Regions without ordering new circuits. Which connectivity option meets these requirements MOST cost-effectively?
Establish multiple AWS Site-to-Site VPN connections over the internet to AWS Transit Gateways in each Region, use equal-cost multipath routing across the tunnels, and accelerate traffic with AWS Global Accelerator.
Implement AWS VPN CloudHub with BGP-based Site-to-Site VPN tunnels from the data center to every VPC and use route propagation for connectivity.
Order a 10 Gbps dedicated AWS Direct Connect connection that supports MACsec, create one transit virtual interface to an AWS Direct Connect gateway, and associate the gateway with AWS Transit Gateways in each Region.
Provision a 10 Gbps dedicated AWS Direct Connect connection; create separate private virtual interfaces to each VPC; rely on security groups and network ACLs for traffic protection.
Answer Description
A single 10 Gbps dedicated AWS Direct Connect (DX) connection that supports MACsec meets the performance requirement while keeping traffic off the public internet. Creating one transit virtual interface (VIF) to an AWS Direct Connect gateway and associating that gateway with Regional AWS Transit Gateways allows the same encrypted DX circuit to reach dozens of VPCs in any Region without adding more VIFs or physical links. MACsec provides line-rate encryption on the DX circuit, satisfying the in-transit-encryption mandate without overlaying IPsec tunnels.
The other options are incorrect:
- Site-to-Site VPN-only solutions ride the public internet, introduce variable latency, and would need at least seven tunnels (at roughly 1.25 Gbps each) to reach 8 Gbps, increasing operational complexity.
- Using private VIFs to every VPC over DX removes the internet dependency but does not provide encryption and requires many additional VIFs to scale.
- VPN CloudHub also depends on internet paths and is limited to about 1.25 Gbps per tunnel.
Therefore, a MACsec-enabled DX connection with a transit VIF and a Direct Connect gateway is the most operationally efficient and cost-effective choice.
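For illustration, here is a boto3 (Python) sketch of wiring up the Direct Connect gateway and transit VIF described above. All IDs, names, and ASNs are placeholders, and enabling MACsec on the connection itself (for example with associate_mac_sec_key) is assumed to be handled separately.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Placeholders: the DX connection must already be provisioned on a
# MACsec-capable 10 Gbps dedicated port.
CONNECTION_ID = "dxcon-EXAMPLE"
TGW_ID_US_EAST_1 = "tgw-0123456789abcdef0"

# One Direct Connect gateway fronts Transit Gateways in every Region.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dx-gateway",
    amazonSideAsn=64512,
)
dxgw_id = dxgw["directConnectGateway"]["directConnectGatewayId"]

# A single transit VIF carries traffic for all associated Transit Gateways,
# so no per-VPC virtual interfaces are needed.
dx.create_transit_virtual_interface(
    connectionId=CONNECTION_ID,
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif-1",
        "vlan": 101,
        "asn": 65000,  # on-premises BGP ASN
        "directConnectGatewayId": dxgw_id,
    },
)

# Associate the gateway with each Region's Transit Gateway; repeat this
# call per Region to add Regions without ordering new circuits.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw_id,
    gatewayId=TGW_ID_US_EAST_1,
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.20.0.0/16"}],
)
```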