
AWS Certified Solutions Architect Professional Practice Test (SAP-C02)

Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.


AWS Certified Solutions Architect Professional SAP-C02 Information

The AWS Certified Solutions Architect – Professional (SAP-C02) exam is for people who want to demonstrate advanced skills in designing cloud solutions on Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam demonstrates a deeper level of knowledge than the associate-level test and is often expected for senior cloud roles.

This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.

AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

The exam covers four domains:

  • Design Solutions for Organizational Complexity
  • Design for New Solutions
  • Continuous Improvement for Existing Solutions
  • Accelerate Workload Migration and Modernization


Question 1 of 20

A global enterprise is designing its multi-region AWS network. The company has a large, existing on-premises IP address space and owns a public /24 IPv4 block. They plan to create hundreds of VPCs across multiple AWS accounts within an AWS Organization. A key requirement is to prevent overlapping IP address ranges between on-premises networks and all new VPCs. Additionally, they want to centrally manage and automate the allocation of VPC CIDR blocks to different business units and enforce specific tagging policies on VPC creation. Which approach provides the most scalable and manageable solution for this IP addressing strategy?

  • Implement Amazon VPC IP Address Manager (IPAM) within the AWS Organization. Create IPAM pools from the company's on-premises IP space and use the Bring Your Own IP (BYOIP) feature for their public /24 block. Enforce allocation rules for VPC creation.

  • Design all VPCs with a small primary CIDR from the 10.0.0.0/8 range. As IP space is depleted, add secondary CIDR blocks to each VPC from the on-premises IP address space.

  • Manually track all AWS-provided private CIDR allocations in a shared spreadsheet. Use AWS Resource Access Manager (RAM) to share subnets from a central VPC to spoke accounts.

  • For all new VPCs, exclusively allocate CIDR blocks from the 100.64.0.0/10 range to ensure no overlap with the existing on-premises network. Use AWS Budgets to monitor IP address consumption.
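
For readers who have not used Amazon VPC IPAM before, here is a minimal boto3 (Python) sketch of the kind of setup the first option describes: a top-level pool provisioned from the corporate plan and a regional pool with allocation rules. All IDs, CIDRs, regions, and tag values are placeholders, and BYOIP provisioning of the public /24 (a separate public-scope pool) is not shown.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ipam = ec2.create_ipam(
    Description="Org-wide IPAM",
    OperatingRegions=[{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}],
)["Ipam"]

top = ec2.create_ipam_pool(
    IpamScopeId=ipam["PrivateDefaultScopeId"],
    AddressFamily="ipv4",
    Description="Top-level private pool",
)["IpamPool"]

# Carve the range reserved for AWS out of the corporate addressing plan
# (placeholder CIDR) so nothing overlaps with on-premises networks.
ec2.provision_ipam_pool_cidr(IpamPoolId=top["IpamPoolId"], Cidr="10.128.0.0/9")

# Regional pool with allocation rules; AllocationResourceTags lets IPAM
# enforce tagging on VPCs that draw CIDRs from this pool.
regional = ec2.create_ipam_pool(
    IpamScopeId=ipam["PrivateDefaultScopeId"],
    SourceIpamPoolId=top["IpamPoolId"],
    Locale="us-east-1",
    AddressFamily="ipv4",
    AllocationMinNetmaskLength=16,
    AllocationMaxNetmaskLength=24,
    AllocationResourceTags=[{"Key": "BusinessUnit", "Value": "payments"}],
)["IpamPool"]
```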

Question 2 of 20

A financial services company runs a critical trade-processing application on AWS. The application uses a fleet of Amazon EC2 instances and an Amazon Aurora PostgreSQL database. Due to the critical nature of the application, the business has mandated a Recovery Time Objective (RTO) of less than 1 minute and a Recovery Point Objective (RPO) of less than 1 second. The disaster recovery (DR) plan must account for a full AWS Region failure.

Which DR strategy should a solutions architect recommend to meet these requirements?

  • Use AWS Elastic Disaster Recovery (DRS) to continuously replicate the EC2 instances and the attached database volumes to a staging area in a secondary region.

  • Deploy the application and a scaled-down version of the EC2 fleet in a secondary region as a Warm Standby. Use Amazon Aurora Global Database, with the secondary region hosting a read replica.

  • Configure a Pilot Light architecture by replicating the Aurora database to a secondary region. Provision the application tier infrastructure only upon a failover event.

  • Use AWS Backup with Cross-Region Replication to copy Aurora snapshots and AMIs to a secondary region. In a disaster, restore the environment using the replicated backups.
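
As background on the Aurora Global Database pattern mentioned above, this is a minimal boto3 sketch of attaching an existing cluster to a global database and promoting the secondary during a regional outage. The identifiers are placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_global_cluster(
    GlobalClusterIdentifier="trades-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111111111111:cluster:trades",
)

# During a regional failure, promote the secondary cluster from the DR Region.
# AllowDataLoss=True performs an unplanned failover that tolerates up to the
# current replication lag (typically under a second for Aurora Global Database).
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.failover_global_cluster(
    GlobalClusterIdentifier="trades-global",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-2:111111111111:cluster:trades-dr",
    AllowDataLoss=True,
)
```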

Question 3 of 20

A global enterprise is designing a multi-account AWS architecture that will host hundreds of applications, each within its own VPC, across multiple AWS Regions. The security team mandates that all east-west (inter-VPC) traffic and north-south (egress to the internet) traffic must be routed through a central point of inspection for deep packet inspection and logging. The solution must be highly scalable, minimize network management overhead, and support transitive routing to on-premises data centers via AWS Direct Connect. Which connectivity strategy best fulfills these requirements?

  • Use AWS PrivateLink to create VPC endpoints in each spoke VPC for every shared service. For general inter-VPC traffic, establish a limited mesh of VPC peering connections and manage route tables manually.

  • Implement a legacy 'Transit VPC' pattern in each region using EC2 instances running third-party routing software. Establish IPsec VPN connections from all spoke VPCs to the Transit VPC to enable transitive routing and inspection.

  • Establish a full-mesh VPC peering configuration for all VPCs within each region. For inter-region traffic, create additional peering connections. Implement traffic inspection by deploying security appliances in every VPC.

  • In each region, deploy an AWS Transit Gateway and peer them using inter-region peering. Create a central inspection VPC with a Gateway Load Balancer that fronts a fleet of security appliances. Configure Transit Gateway route tables to forward all traffic to the inspection VPC.
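
For context on the hub-and-spoke inspection routing referenced above, here is a minimal boto3 sketch: a route table associated with the spoke VPC attachments sends all traffic to the inspection VPC attachment. The Transit Gateway and attachment IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Route table shared by all spoke (application) VPC attachments.
spoke_rt = ec2.create_transit_gateway_route_table(
    TransitGatewayId="tgw-0123456789abcdef0"
)["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Send everything (east-west and internet egress) to the inspection VPC
# attachment, where a Gateway Load Balancer fronts the appliance fleet.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId=spoke_rt,
    TransitGatewayAttachmentId="tgw-attach-0inspection00000",
)

ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=spoke_rt,
    TransitGatewayAttachmentId="tgw-attach-0spokevpc000000",
)
```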

Question 4 of 20

You operate latency-sensitive trading workloads on bare-metal servers in an Equinix colocation facility that is also an AWS Direct Connect location. Several microservices run in multiple Amazon VPCs that belong to three different AWS accounts in the us-east-1 and us-east-2 Regions. Network engineering requires a single, private 10-Gbps link that avoids internet hops, delivers predictable latency, and allows additional VPCs to be connected later without ordering new physical circuits. Which connectivity strategy best meets these requirements?

  • Install an AWS Outposts rack in the colocation facility and rely on the Outposts service link over the public internet to exchange traffic with the VPCs.

  • Attach each VPC to an AWS Transit Gateway, create two Site-to-Site VPN tunnels from the Transit Gateway to the on-premises router in the colocation facility, and enable equal-cost multi-path routing.

  • Request a 10-Gbps dedicated AWS Direct Connect cross-connect in the colocation facility. Create one private virtual interface that terminates on a Direct Connect gateway, and associate the gateway with the virtual private gateways of each VPC.

  • Ask an AWS Direct Connect Delivery Partner at a different Direct Connect location to provision a 10-Gbps hosted connection and extend the circuit to the colocation data center over MPLS.
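
For readers unfamiliar with Direct Connect gateways, this is a minimal boto3 sketch of the pattern in the third option: one private VIF on the dedicated cross-connect terminates on a Direct Connect gateway, which is then associated with each VPC's virtual private gateway (cross-account associations use proposals, not shown). IDs, the VLAN, and the ASNs are placeholders.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="colo-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

dx.create_private_virtual_interface(
    connectionId="dxcon-fg1234ab",  # the 10-Gbps dedicated cross-connect
    newPrivateVirtualInterface={
        "virtualInterfaceName": "colo-private-vif",
        "vlan": 101,
        "asn": 65001,  # on-premises BGP ASN
        "directConnectGatewayId": gw["directConnectGatewayId"],
    },
)

# Associate one VPC's virtual private gateway; repeat per VPC, no new circuits.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw["directConnectGatewayId"],
    gatewayId="vgw-0abc1234def567890",
)
```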

Question 5 of 20

A financial-services company is building a hybrid-cloud architecture that connects its on-premises data center to multiple AWS VPCs over AWS Direct Connect. The company requires seamless, bidirectional DNS resolution: on-premises applications must resolve private hostnames for Amazon EC2 instances in the VPCs (for example, app-server.prod.vpc.example.com), and EC2 instances must resolve hostnames that live only in the on-premises namespace (for example, db.corp.internal). The solution must be highly available, scalable, and centrally manageable, and it must not require custom DNS server software on EC2 instances.

Which solution meets these requirements most effectively?

  • Create Route 53 Resolver inbound and outbound endpoints. Configure conditional forwarding on the on-premises DNS servers to send queries for the VPC domain to the inbound endpoint. Create Resolver rules to forward queries for the on-premises domain to the on-premises DNS servers via the outbound endpoint.

  • Create a private hosted zone for the on-premises domain (corp.internal) and associate it with all VPCs. Create a Route 53 outbound endpoint and a rule to forward all queries from the VPCs to the on-premises DNS servers.

  • Create a Route 53 inbound endpoint in each VPC. Configure the on-premises DNS servers with conditional forwarders that send all AWS-related DNS queries to the IP addresses of the inbound endpoints.

  • Deploy a pair of highly available EC2 instances running BIND in a central VPC. Configure on-premises DNS servers to forward queries to these instances, and configure the BIND servers to forward queries for the on-premises domain back to the on-premises DNS servers.
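
As a reference for the Route 53 Resolver pattern described above, here is a minimal boto3 sketch of the outbound half: an outbound endpoint plus a forwarding rule for the on-premises zone. The inbound endpoint (which on-premises conditional forwarders target) is created the same way with Direction="INBOUND". Subnet and security-group IDs and the DNS server IP are placeholders.

```python
import boto3, uuid

r53r = boto3.client("route53resolver", region_name="us-east-1")

outbound = r53r.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="to-onprem",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0dns0000000000000"],
    IpAddresses=[{"SubnetId": "subnet-0aaa"}, {"SubnetId": "subnet-0bbb"}],
)["ResolverEndpoint"]

rule = r53r.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "192.168.1.10", "Port": 53}],
    ResolverEndpointId=outbound["Id"],
)["ResolverRule"]

# Associate the rule with each VPC (or share it org-wide with AWS RAM).
r53r.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0c0ffee000000000")
```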

Question 6 of 20

A financial services company operates a large number of applications across a multi-account AWS Organization. The security team needs a comprehensive, centrally managed security solution. The solution must provide proactive and intelligent threat detection for workloads and data, including identifying unusual API activity or potential instance compromises. It must also offer protection for public-facing web applications against common web exploits and DDoS attacks. A key requirement is to aggregate security findings from all accounts and services into a single, designated security tooling account for unified visibility, posture management, and prioritized remediation. Which combination of AWS services should a solutions architect recommend to meet all these requirements most effectively?

  • Use AWS Config with conformance packs to enforce security best practices and Amazon Macie to discover and protect sensitive data in Amazon S3.

  • Implement Amazon GuardDuty for threat detection, AWS WAF for web application protection, AWS Shield Advanced for DDoS mitigation, and AWS Security Hub for centralized findings management.

  • Enable Amazon Inspector in all accounts to scan for vulnerabilities, and use AWS Systems Manager Patch Manager to automate patching.

  • Deploy AWS Network Firewall in each VPC, use VPC Flow Logs for traffic analysis, and stream logs to a central Amazon S3 bucket for manual review.
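
For background on how findings get centralized in one account, this is a minimal boto3 sketch run from the Organizations management account; the security tooling account ID is a placeholder. WAF web ACLs and Shield Advanced protections are attached to the public-facing resources separately.

```python
import boto3

SECURITY_TOOLING_ACCOUNT = "222222222222"

boto3.client("guardduty", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=SECURITY_TOOLING_ACCOUNT
)
boto3.client("securityhub", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=SECURITY_TOOLING_ACCOUNT
)
# From the delegated account, Security Hub can then auto-enable new member
# accounts and aggregate findings (GuardDuty, Inspector, Macie, etc.) centrally.
```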

Question 7 of 20

Your company deployed its first workload in a new VPC that uses the IPv4 CIDR block 10.2.0.0/20. Three months later, the security and operations teams redefine the network-segmentation standard. The VPC must now contain three public and three private subnets in each of three Availability Zones (18 subnets total). Every subnet must provide at least 400 usable IPv4 addresses to accommodate horizontally scaling container tasks. Existing resources in the current address range must keep running without an IP-address change.

Which action will satisfy the new requirements with the least operational effort?

  • Associate a non-overlapping secondary IPv4 CIDR block such as 10.2.64.0/18 with the VPC and create the new subnets from that range.

  • Create a new VPC with a /16 CIDR block, migrate all workloads into it, and delete the original VPC.

  • Resize each required subnet to /25 so that all 18 subnets fit inside the existing 10.2.0.0/20 range.

  • Enlarge the VPC's primary CIDR block from /20 to /18, then recreate all subnets so they meet the new size requirement.
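
To make the arithmetic concrete, here is a minimal boto3 sketch of the secondary-CIDR approach: a /23 subnet holds 512 addresses, of which 507 are usable after the 5 that AWS reserves, comfortably above 400, and a /18 holds 32 such /23s. The VPC ID and AZ names are placeholders.

```python
import boto3
from ipaddress import ip_network

ec2 = boto3.client("ec2", region_name="us-east-1")
VPC_ID = "vpc-0123456789abcdef0"

# Non-overlapping with the existing 10.2.0.0/20; existing resources keep their IPs.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, CidrBlock="10.2.64.0/18")

subnets = list(ip_network("10.2.64.0/18").subnets(new_prefix=23))  # 32 x /23

for i, az in enumerate(["us-east-1a", "us-east-1b", "us-east-1c"]):
    for j in range(6):  # 3 public + 3 private per AZ -> 18 subnets total
        ec2.create_subnet(
            VpcId=VPC_ID,
            AvailabilityZone=az,
            CidrBlock=str(subnets[i * 6 + j]),
        )
```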

Question 8 of 20

A financial services company is designing a global, multi-account AWS environment to host a critical three-tier application. The architecture requires separate AWS accounts for development, staging, and production to ensure strict workload isolation. Each account will have its own VPC and connect to a central Transit Gateway for shared services and to an on-premises network via AWS Direct Connect. The on-premises network uses the 10.0.0.0/8 address space. The architects have allocated the 172.16.0.0/16 block for all AWS VPCs. A primary requirement is to maintain clear network segmentation between application tiers (web, application, database) within each VPC, while ensuring that routing between the VPCs and the on-premises network is scalable and avoids IP address conflicts. Which network segmentation strategy is the MOST effective and scalable for this scenario?

  • Use the same 172.16.0.0/16 CIDR block for the VPC in each of the development, staging, and production accounts. Rely on the Transit Gateway to manage routing between the identical address spaces.

  • Create a single, large VPC in a shared services account with the 172.16.0.0/16 CIDR. Create separate sets of subnets within this single VPC for the development, staging, and production environments, using security groups to enforce isolation.

  • Assign a unique, non-overlapping CIDR block to each account's VPC (e.g., 172.16.10.0/24 for dev, 172.16.20.0/24 for staging, 172.16.30.0/24 for prod). Within each VPC, create separate subnets for the web, application, and database tiers across multiple Availability Zones.

  • Assign the primary CIDR block 172.16.0.0/16 to the production VPC. For the development and staging VPCs, use the same primary CIDR and then add unique secondary CIDR blocks to each to differentiate them for routing purposes.
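
As a quick illustration of non-overlapping per-environment addressing, this pure-stdlib Python sketch carves each environment's block into web/app/db tiers across two AZs. The blocks mirror the per-account option above and are illustrative only; none overlap with the on-premises 10.0.0.0/8 space.

```python
from ipaddress import ip_network

envs = {
    "dev": ip_network("172.16.10.0/24"),
    "staging": ip_network("172.16.20.0/24"),
    "prod": ip_network("172.16.30.0/24"),
}

for env, cidr in envs.items():
    slices = list(cidr.subnets(new_prefix=27))  # 8 x /27 per environment
    for t, tier in enumerate(["web", "app", "db"]):
        for a, az in enumerate(["az-a", "az-b"]):
            # Each tier gets its own subnet in each AZ; two /27s remain spare.
            print(env, tier, az, slices[t * 2 + a])
```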

Question 9 of 20

A global travel-booking company runs a latency-sensitive REST API on Amazon EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The data tier is an Amazon Aurora MySQL cluster. The architects have already extended the database by adding an Aurora Global Database secondary cluster in us-west-2.

Business continuity targets state that, if the primary Region fails, the API must recover in under one minute (RTO < 60 s) and lose at most 1 second of data (RPO < 1 s). Operations teams want to avoid manual DNS updates or lengthy runbook procedures during a Regional outage and prefer a solution that incurs the least ongoing operational overhead.

Which combination of actions will BEST meet these requirements?

  • Front both Regional ALBs with AWS Global Accelerator, enabling endpoint health checks for automatic traffic failover, and configure Aurora Global Database managed cross-Region failover to promote the secondary cluster when the primary Region is unavailable.

  • Create weighted Amazon Route 53 records with health checks for each ALB, set the record TTL to 60 seconds, and trigger an AWS Lambda function from CloudWatch alarms to adjust the weights. Manually promote the Aurora secondary cluster during an outage.

  • Refactor the application into an active/active design that stores data in Amazon DynamoDB global tables and implements bidirectional replication logic between EC2 instances in both Regions.

  • Use AWS Elastic Disaster Recovery to replicate the EC2 instances and the Aurora database to us-west-2, keep the target resources stopped, and start them when a Regional failure is declared.
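
For readers new to AWS Global Accelerator, here is a minimal boto3 sketch of the front end described above: static anycast IPs, a TCP listener, and one health-checked endpoint group per Region so traffic shifts automatically if us-east-1 fails. The ALB ARNs are placeholders. (The Global Accelerator control-plane API is served from us-west-2.)

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="bookings-api", Enabled=True)["Accelerator"]
listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

regional_albs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/api/abc",
    "us-west-2": "arn:aws:elasticloadbalancing:us-west-2:111111111111:loadbalancer/app/api/def",
}
for region, alb_arn in regional_albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```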

Question 10 of 20

An organization has multiple AWS accounts that are part of AWS Organizations. A production workload in us-east-1 uses an Amazon FSx for Windows File Server file system and a mission-critical Amazon DynamoDB table. Container images are stored in a private Amazon ECR repository.
Compliance requirements state that:

  • Backups must be immutable and retained off-site for 35 days.
  • Backup configuration must be centrally managed across accounts.
  • A recovery site must be available in us-west-2 with an RTO of 60 minutes and an RPO of ≤ 1 hour.

Which approach meets these requirements in the most cost-effective way?

  • Convert the FSx file system to a multi-AZ deployment and configure Distributed File System Replication (DFSR) between Regions. Convert the DynamoDB table to a global table spanning us-east-1 and us-west-2, disable all backups, and enable an ECR pull-through cache in us-west-2.

  • Create an AWS Backup policy in the delegated administrator account that assigns the FSx file system and DynamoDB table to a backup plan with hourly snapshots (FSx) and continuous backups (DynamoDB), a 35-day retention rule, and an automatic copy to a backup vault locked in Compliance mode in us-west-2. Enable Amazon ECR private-registry cross-Region replication from us-east-1 to us-west-2.

  • Export the DynamoDB table to Amazon S3 every hour, turn on S3 Object Lock for 35 days, and enable S3 Cross-Region Replication to us-west-2. Use AWS DataSync to copy daily Shadow Copies from the FSx file system to the same bucket, and manually push container images to an ECR repository in us-west-2.

  • Enable AWS Elastic Disaster Recovery on the FSx file system and DynamoDB table to replicate data continuously to us-west-2. Use AWS Backup only for ECR to create nightly snapshots and copy them to a locked vault.
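
As context for the AWS Backup option, this is a minimal boto3 sketch of its two key pieces: a locked (compliance-mode) vault in us-west-2 and a plan whose hourly rule copies recovery points there with 35-day retention. In practice this would be rolled out as an Organizations backup policy from the delegated administrator account; vault names and ARNs are placeholders.

```python
import boto3

# Destination vault in us-west-2, locked so nothing can shorten retention.
backup_dr = boto3.client("backup", region_name="us-west-2")
backup_dr.create_backup_vault(BackupVaultName="dr-vault")
backup_dr.put_backup_vault_lock_configuration(
    BackupVaultName="dr-vault",
    MinRetentionDays=35,
    ChangeableForDays=3,  # lock becomes immutable (compliance mode) after 3 days
)

backup = boto3.client("backup", region_name="us-east-1")
backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "prod-hourly",
    "Rules": [{
        "RuleName": "hourly-with-cross-region-copy",
        "TargetBackupVaultName": "primary-vault",
        "ScheduleExpression": "cron(0 * * * ? *)",  # hourly => RPO <= 1 hour
        "Lifecycle": {"DeleteAfterDays": 35},
        "CopyActions": [{
            "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:111111111111:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }],
})
```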

Question 11 of 20

A global enterprise is designing its AWS network architecture using a multi-account strategy with AWS Organizations. The design includes a central "Network" account that hosts an AWS Transit Gateway (TGW). Multiple "Application" accounts, each with a VPC, are attached to this TGW. A key security requirement is that all traffic between the Application VPCs must be inspected by a fleet of next-generation firewall (NGFW) appliances. These appliances are deployed in a dedicated "Inspection" VPC, also owned by the Network account. The Application VPCs have been deployed with overlapping CIDR blocks.

Which solution should a solutions architect recommend to meet these requirements in the most scalable and resilient way?

  • Create a VPC endpoint service using AWS PrivateLink in the Inspection VPC, fronting the NGFW appliances. Create interface endpoints for this service in each Application VPC. Update the route tables in all Application VPCs to route traffic through the local interface endpoints for inspection.

  • Deploy the NGFW appliances behind a Network Load Balancer (NLB) in the Inspection VPC. Configure Transit Gateway route tables to forward traffic to the NLB. The firewall appliances will perform Source NAT (SNAT) on the traffic before routing it back to the Transit Gateway for delivery.

  • Deploy the NGFW appliances as targets for a Gateway Load Balancer (GWLB) in the Inspection VPC. Configure the Transit Gateway to route traffic between Application VPCs to the Inspection VPC attachment. In the Inspection VPC, create GWLB endpoints and configure routing to direct traffic from the TGW through the GWLB for inspection before it is returned to the TGW.

  • Create a full mesh of VPC Peering connections between all Application VPCs and the Inspection VPC. Configure route tables in each Application VPC to forward traffic to the Inspection VPC, where the NGFW appliances are deployed on EC2 instances behind a Network Load Balancer.
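
For readers new to Gateway Load Balancer, here is a minimal boto3 sketch of the insertion point inside the Inspection VPC: a GWLB fronting the appliance fleet, exposed via an endpoint service, with GWLB endpoints that the routing can steer traffic through. Because GENEVE encapsulation preserves the original packets, overlapping spoke CIDRs can be tolerated. All IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

gwlb = elbv2.create_load_balancer(
    Name="ngfw-gwlb", Type="gateway", Subnets=["subnet-0insp00a", "subnet-0insp00b"]
)["LoadBalancers"][0]

svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

# One GWLB endpoint per inspection-VPC AZ; route table entries send traffic
# arriving from the TGW through these endpoints and back after inspection.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0inspection000000",
    ServiceName=svc["ServiceName"],
    SubnetIds=["subnet-0insp00a"],
)
```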

Question 12 of 20

Your company is deploying a two-tier web application in a single Amazon VPC. An Application Load Balancer (ALB) in the public subnets terminates TLS on port 443 and forwards traffic to application servers in private subnets that listen on TCP port 9000. You must meet several compliance requirements: only the ALB may initiate traffic to the application servers on port 9000, the application servers must not be reachable from any other source, return-path traffic must be allowed automatically, and the solution must incur the least ongoing rule maintenance as the environment scales. Which design meets these requirements?

  • Replace the private-subnet route tables with routes that send all VPC-internal traffic to a firewall appliance in a dedicated subnet. Configure the appliance to permit TCP 9000 from the ALB to the application servers; keep the default security group and network ACL.

  • Create one security group for the ALB and another for the application servers. In the application-server security group, add an inbound rule that allows TCP 9000 from the ALB's security-group ID and remove all other inbound rules. Keep the default network ACL for all subnets.

  • Associate a custom network ACL with the private subnets that allows inbound TCP 9000 only from the ALB subnet CIDR blocks and outbound ephemeral ports. Leave a security group on the servers that allows all traffic.

  • In the application-server security group, allow TCP 9000 from 0.0.0.0/0. Attach a custom network ACL that denies all other ports inbound and outbound; update the ACL whenever new instances or ports are needed.
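
As background on security-group chaining, here is a minimal boto3 sketch: the app-tier group admits TCP 9000 only from the ALB's group ID, so group membership (not IP addresses) defines reachability, and return traffic is allowed automatically because security groups are stateful. The group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ALB_SG, APP_SG = "sg-0alb0000000000000", "sg-0app0000000000000"

ec2.authorize_security_group_ingress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 9000,
        "ToPort": 9000,
        # Source is the ALB's security group, not a CIDR: new ALB nodes and new
        # app instances are covered automatically with no rule churn.
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)
```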

Question 13 of 20

A financial-services company exchanges personally identifiable information (PII) with an AWS workload that runs in a private VPC. The company currently uses a single 10 Gbps dedicated AWS Direct Connect private virtual interface that terminates on its on-premises core router. New regulatory requirements mandate that all PII in transit across the hybrid link must be encrypted. The solution must preserve at least 8 Gbps of throughput, add as little operational overhead as possible, and avoid any application-level changes.

Which approach meets these requirements?

  • Configure an AWS Site-to-Site VPN connection with two IPsec tunnels over the Direct Connect link and route all traffic through the VPN.

  • Enable MAC Security (MACsec) on the existing 10 Gbps dedicated Direct Connect port and configure matching MACsec parameters on the on-premises router.

  • Order a second 10 Gbps dedicated Direct Connect at a different location and enable BGP MD5 authentication on both connections.

  • Implement TLS encryption at the application layer for every service that exchanges PII over the Direct Connect link.
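
For context on the MACsec option, this is a minimal boto3 sketch: associate a CKN/CAK pair (stored in AWS Secrets Manager) with the dedicated connection and require encryption on the port. Because MACsec encrypts at layer 2 in hardware, the link keeps close to line rate, unlike an IPsec VPN overlay. The connection ID and secret ARN are placeholders.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

dx.associate_mac_sec_key(
    connectionId="dxcon-fgabc123",
    secretARN="arn:aws:secretsmanager:us-east-1:111111111111:secret:macsec-ckn-cak",
)

# Drop any traffic that is not MACsec-encrypted once the session is up.
dx.update_connection(connectionId="dxcon-fgabc123", encryptionMode="must_encrypt")
```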

Question 14 of 20

A global enterprise now operates more than 150 AWS accounts that are divided into four business-unit OUs. The cloud center of excellence (CCOE) mandates that every account must:

  • Prevent the creation of unencrypted EBS volumes and block uploads to Amazon S3 that are not encrypted with AWS KMS.
  • Enforce the CostCenter and Environment tags with allowed values on every supported AWS resource.
  • Deliver all CloudTrail records from every account to a single immutable log-archive account.
  • Provide each business unit with a consolidated cost view while keeping organization-wide billing.
  • Let developers self-provision new sandbox accounts without opening CCOE tickets.

Which approach best meets all of these requirements while minimizing continuing operational effort?

  • Keep all workloads in a single shared AWS account segmented by VPC and IAM, ask developers to tag resources manually, enable default encryption on S3 and EBS, centralize CloudTrail in the same account, and filter costs with Cost Explorer.

  • Use AWS Organizations alone with SCPs that deny unencrypted resource creation and missing tags, store an organization CloudTrail in the management account, create new accounts through Service Catalog and CloudFormation StackSets, and rely on Cost Explorer reports for chargeback.

  • Deploy AWS Control Tower with an OU for each business unit, enable preventive encryption guardrails and an enforced tag policy, allow developers to create sandbox accounts through Account Factory and IAM Identity Center, use the built-in log-archive account for organization-wide CloudTrail, and use consolidated billing with cost-allocation tags for chargeback.

  • Implement the open-source AWS Landing Zone solution, copy logs into each business-unit account with S3 replication, enforce encryption through bucket policies, require CCOE ticketing for new sandbox accounts, and generate cost visibility from CUR data in Athena.
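
As an illustration of one governance piece mentioned above, here is a minimal boto3 sketch of an enforced Organizations tag policy; Control Tower guardrails and Account Factory are configured separately. The tag values, enforced resource types, and target OU ID are placeholder assumptions.

```python
import boto3, json

orgs = boto3.client("organizations")

tag_policy = {
    "tags": {
        "CostCenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["1001", "1002", "1003"]},
            # Enforcement is supported only for certain resource types.
            "enforced_for": {"@@assign": ["ec2:instance", "ec2:volume"]},
        },
        "Environment": {
            "tag_key": {"@@assign": "Environment"},
            "tag_value": {"@@assign": ["dev", "test", "prod"]},
        },
    }
}

policy = orgs.create_policy(
    Name="required-tags",
    Description="CostCenter and Environment with allowed values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)["Policy"]["PolicySummary"]

orgs.attach_policy(PolicyId=policy["Id"], TargetId="ou-abcd-11111111")
```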

Question 15 of 20

A global corporation is adopting a multi-VPC architecture on AWS, with numerous VPCs spread across several AWS Regions. They also maintain a significant on-premises data center connected to AWS via AWS Direct Connect. The key requirements are to enable seamless, transitive communication between all VPCs (inter-VPC) and between the on-premises network and all VPCs. The solution must be highly scalable, centrally managed, and minimize operational overhead. A solutions architect needs to design the optimal network topology. Which approach best meets these requirements?

  • Deploy an AWS Transit Gateway in each region. Peer the Transit Gateways across regions and create attachments for each VPC. Connect the on-premises data center to a Transit Gateway via a Direct Connect Gateway attachment.

  • Create a full mesh of VPC peering connections between all VPCs. Establish a separate AWS Direct Connect private virtual interface (VIF) from the on-premises network to each individual VPC.

  • Use an AWS Direct Connect Gateway and associate it with a Virtual Private Gateway (VGW) in each VPC. This will provide connectivity from on-premises to all VPCs and enable inter-VPC communication through the Direct Connect Gateway.

  • Designate one VPC as a 'transit hub'. Use VPC peering to connect all other 'spoke' VPCs to this hub VPC. Establish a Direct Connect connection to the hub VPC and configure routing instances within it to forward traffic.
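
For readers new to inter-region Transit Gateway peering, here is a minimal boto3 sketch of peering two Regional hubs; VPC attachments and the Direct Connect gateway association complete the topology. IDs, the account number, and Regions are placeholders, and note that routes across a TGW peering must be added statically.

```python
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
usw2 = boto3.client("ec2", region_name="us-west-2")

peering = use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0east000000000000",
    PeerTransitGatewayId="tgw-0west000000000000",
    PeerAccountId="111111111111",
    PeerRegion="us-west-2",
)["TransitGatewayPeeringAttachment"]

# Accept on the far side (same account here; cross-account works the same way).
usw2.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=peering["TransitGatewayAttachmentId"]
)
```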

Question 16 of 20

A financial services company uses AWS Organizations to manage a multi-account environment. A central 'SharedServices' account hosts a customer-managed KMS key for encrypting sensitive data. A separate 'Security' account is used for centralized logging and auditing. The company's security policy mandates that all new S3 objects in member accounts must be encrypted at rest using Server-Side Encryption with the specific KMS key (SSE-KMS) from the SharedServices account. Any attempts to upload objects without this specific encryption, including using SSE-S3 or other KMS keys, must be denied. Additionally, all cryptographic operations using the shared KMS key must be logged to an S3 bucket in the Security account.

Which combination of actions provides the most effective and scalable solution to enforce these requirements?

  • In the SharedServices account, modify the KMS key policy to grant the s3.amazonaws.com service principal access from all accounts in the organization. In each member account, create an S3 bucket policy that mandates SSE-KMS encryption using the shared key's ARN. Configure an Amazon EventBridge rule in the default event bus of each member account to forward all S3 and KMS API calls to a central event bus in the Security account for auditing.

  • In each member account, create an IAM identity-based policy that denies s3:PutObject unless the request headers specify SSE-KMS with the correct key ARN, and attach this policy to all relevant IAM roles. In the SharedServices account, update the KMS key policy to allow access from all member account roles. In each member account, configure a CloudTrail trail to send logs to a central S3 bucket in the Security account.

  • In the Organizations management account, create a Service Control Policy (SCP) that denies the s3:PutObject action if the s3:x-amz-server-side-encryption-aws-kms-key-id condition key in the request does not match the ARN of the shared KMS key. In the SharedServices account, modify the KMS key policy to grant kms:GenerateDataKey and kms:Decrypt permissions to the necessary service roles in the member accounts. Create an organization-wide CloudTrail trail in the management account to deliver logs to an S3 bucket in the Security account.

  • Deploy an AWS Config rule in each member account to detect S3 objects that are not encrypted with the specified shared KMS key. Configure the rule to trigger a remediation action via an AWS Lambda function that deletes non-compliant objects. In the SharedServices account, grant the Lambda execution roles in each member account access to the KMS key. Use an AWS Config aggregator in the Security account to view compliance status.
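
As background on SCP-based encryption enforcement, this is a minimal boto3 sketch of a deny policy keyed on the S3 encryption request headers. The key ARN and attachment target are placeholders, and real policies usually need additional statements (for example, for multipart uploads or bucket-level default-encryption nuances).

```python
import boto3, json

SHARED_KEY_ARN = "arn:aws:kms:us-east-1:333333333333:key/aaaa-bbbb-cccc-dddd"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny uploads that name any key other than the shared one.
            "Sid": "DenyWrongKey",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {"StringNotEqualsIfExists": {
                "s3:x-amz-server-side-encryption-aws-kms-key-id": SHARED_KEY_ARN
            }},
        },
        {   # Deny uploads that are not SSE-KMS at all (including SSE-S3).
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption": "aws:kms"
            }},
        },
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="require-shared-sse-kms",
    Description="All S3 uploads must use the SharedServices KMS key",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]
orgs.attach_policy(PolicyId=policy["Id"], TargetId="r-abcd")  # root or an OU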

Question 17 of 20

A solutions architect is troubleshooting a connectivity issue in a hybrid environment. An application running on an EC2 instance in a spoke VPC (10.20.0.0/16) cannot connect to an on-premises database server (192.168.10.50) on port 1433. The spoke VPC is connected to a central inspection VPC via an AWS Transit Gateway. The inspection VPC is connected to the on-premises data center via an AWS Direct Connect connection. All traffic from the spoke VPC to on-premises is routed through firewall appliances in the inspection VPC. On-premises network engineers have confirmed that their firewalls are not blocking the traffic. The architect needs to identify the component in the AWS network path that is blocking the connection. What is the MOST efficient first step to diagnose this issue?

  • Configure Route 53 Resolver Query Logging for the spoke VPC. Analyze the logs to ensure the on-premises database's hostname is correctly resolving to the IP address 192.168.10.50.

  • Use the Route Analyzer feature in Transit Gateway Network Manager to analyze the path from the spoke VPC attachment to the Direct Connect gateway attachment, verifying that routes are correctly propagated.

  • Enable VPC Flow Logs on the network interfaces for the application instance, the Transit Gateway attachment, and the inspection VPC firewall instances. Query the logs using Amazon Athena to find REJECT entries for traffic destined for 192.168.10.50 on port 1433.

  • Use VPC Reachability Analyzer to create and run an analysis with the application's EC2 instance network interface as the source and the on-premises database IP address (192.168.10.50) as the destination, specifying port 1433.
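
For readers who have not used VPC Reachability Analyzer, here is a minimal boto3 sketch of the kind of analysis described above: define the path from the application's ENI toward the on-premises address and run it; a blocked hop is reported along with the component (route table, NACL, security group, TGW route table) that dropped it. The ENI ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

path = ec2.create_network_insights_path(
    Source="eni-0app0000000000000",  # application instance's network interface
    DestinationIp="192.168.10.50",
    Protocol="tcp",
    DestinationPort=1433,
)["NetworkInsightsPath"]

analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPathId"]
)["NetworkInsightsAnalysis"]
# Poll describe_network_insights_analyses until Status is "succeeded", then
# inspect NetworkPathFound and Explanations for the blocking component.
```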

Question 18 of 20

A company is implementing a centralized logging solution within its multi-account AWS environment, which is governed by AWS Organizations. A dedicated Security account (ID 111122223333) hosts an Amazon S3 bucket that receives AWS CloudTrail logs from all member accounts. Compliance rules require every log object in the bucket to be encrypted at rest with a single customer-managed AWS KMS key that also resides in the Security account.

Security analysts, using a specific IAM role in the Security account, must be able to decrypt and analyze the logs. The design must follow the principle of least privilege.

Which configuration correctly enables cross-account encryption of the logs and decryption by the analysts?

  • Modify the KMS key policy in the Security account. Add a statement that allows the cloudtrail.amazonaws.com service principal the kms:GenerateDataKey*, kms:Decrypt, and kms:DescribeKey actions, using a condition to limit access to requests from the organization's member accounts. Add another statement that grants the security-analyst IAM role the kms:Decrypt action.

  • In the Security account, create KMS grants that allow the cloudtrail.amazonaws.com service principal to perform the kms:Encrypt action for each member account. Create a separate grant that allows the security-analyst IAM role kms:Decrypt permission.

  • Attach an IAM policy to the CloudTrail service-linked role in each member account that grants the kms:Encrypt action on the central KMS key's ARN. In the Security account's KMS key policy, add each member account's root ARN to the principal list to allow access.

  • Create an IAM role in the Security account that member accounts can assume and give that role kms:GenerateDataKey* permission. Configure each trail to use this assumed role for log delivery. Update the KMS key policy to allow the security-analyst IAM role kms:Decrypt permission.
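
As an illustration of the key-policy shape the first option describes, here is a minimal boto3 sketch. The org ID, account ID, key ID, and role name are placeholders, and the statements are abbreviated; the aws:SourceOrgID condition scopes the CloudTrail service principal to trails in this organization.

```python
import boto3, json

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Key administration stays with the Security account.
            "Sid": "AllowAccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # CloudTrail (from this org only) can encrypt log objects.
            "Sid": "AllowCloudTrailEncrypt",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*", "kms:DescribeKey"],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:SourceOrgID": "o-example12345"}},
        },
        {   # Analysts may only decrypt: least privilege.
            "Sid": "AllowAnalystDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/security-analyst"},
            "Action": "kms:Decrypt",
            "Resource": "*",
        },
    ],
}

kms = boto3.client("kms", region_name="us-east-1")
kms.put_key_policy(
    KeyId="aaaa-bbbb-cccc-dddd", PolicyName="default", Policy=json.dumps(key_policy)
)
```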

Question 19 of 20

A central security account manages encryption for three production workload accounts in the us-east-1 Region. The workloads store sensitive data in Amazon S3 and Amazon DynamoDB. Compliance requires:

  • Encryption keys must stay inside AWS-managed FIPS 140-3 HSMs and never leave the service in plaintext.
  • Keys must rotate automatically every 365 days, and earlier key versions must remain available for at least 7 years so archived data can still be decrypted.
  • The disaster-recovery plan mandates that encrypted data be fully readable in us-west-2 within 15 minutes of a regional outage, without application changes.
  • Operations must minimize the number of keys administrators manage and avoid writing custom code for key rotation or cross-Region replication.

Which solution meets all of these requirements with the LEAST operational overhead?

  • Create one symmetric multi-Region customer managed KMS key in the security account in us-east-1. Enable automatic rotation and use ReplicateKey to create a replica in us-west-2. Add key-policy statements that allow IAM roles in each workload account to perform cryptographic operations, and point all applications to the key ARN.

  • Import customer-generated key material into a KMS key in us-east-1, export the plaintext key, import it into a new KMS key in us-west-2, and use an annual Lambda function to re-import fresh key material into both keys.

  • Deploy AWS CloudHSM clusters in us-east-1 and us-west-2, create custom key stores, manually replicate key material between clusters, and schedule annual Lambda jobs to rotate the keys.

  • Create separate customer managed KMS keys in both Regions for each workload account. Turn on automatic rotation for every key and rely on AWS Backup cross-Region copy jobs to move encrypted snapshots to us-west-2.
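
For context on multi-Region keys, this is a minimal boto3 sketch: one primary key with automatic rotation, replicated to us-west-2. KMS retains rotated key material, so ciphertexts encrypted under earlier versions stay decryptable; the cross-account key policy is omitted here.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key = kms.create_key(
    Description="Shared data key for workload accounts",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
    MultiRegion=True,
)["KeyMetadata"]

kms.enable_key_rotation(KeyId=key["KeyId"])  # rotates every 365 days by default

# The replica shares the key ID and key material, so data replicated to
# us-west-2 is decryptable there without application changes.
kms.replicate_key(KeyId=key["KeyId"], ReplicaRegion="us-west-2")
```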

Question 20 of 20

Your organization operates a primary data center and must replicate 8 TB of daily database changes to more than 50 Amazon VPCs that are spread across three AWS Regions. Each replication stream must sustain at least 8 Gbps throughput with consistently low latency. The security team mandates encryption of all traffic that traverses the link between the data center and AWS. The network team wants to avoid public-internet paths, minimize the number of physical circuits and virtual interfaces that must be managed, and be able to add additional VPCs or Regions without ordering new circuits. Which connectivity option meets these requirements MOST cost-effectively?

  • Establish multiple AWS Site-to-Site VPN connections over the internet to AWS Transit Gateways in each Region, use equal-cost multipath routing across the tunnels, and accelerate traffic with AWS Global Accelerator.

  • Implement AWS VPN CloudHub with BGP-based Site-to-Site VPN tunnels from the data center to every VPC and use route propagation for connectivity.

  • Order a 10 Gbps dedicated AWS Direct Connect connection that supports MACsec, create one transit virtual interface to an AWS Direct Connect gateway, and associate the gateway with AWS Transit Gateways in each Region.

  • Provision a 10 Gbps dedicated AWS Direct Connect connection; create separate private virtual interfaces to each VPC; rely on security groups and network ACLs for traffic protection.
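
To close out, here is a minimal boto3 sketch of the transit-VIF pattern referenced above: a single transit virtual interface to a Direct Connect gateway, which is then associated with the transit gateway in each Region (allowed prefixes control what is advertised). The connection ID, VLAN, ASNs, TGW ID, and prefix are placeholders; MACsec on the dedicated port is enabled separately.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

gw_id = dx.create_direct_connect_gateway(
    directConnectGatewayName="dc-core", amazonSideAsn=64512
)["directConnectGateway"]["directConnectGatewayId"]

dx.create_transit_virtual_interface(
    connectionId="dxcon-macsec10g",  # the MACsec-capable dedicated connection
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif",
        "vlan": 200,
        "asn": 65010,  # on-premises BGP ASN
        "directConnectGatewayId": gw_id,
    },
)

# Repeat per Region's transit gateway; adding VPCs later only means new TGW
# attachments, not new circuits or virtual interfaces.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw_id,
    gatewayId="tgw-0east000000000000",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.200.0.0/16"}],
)
```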