AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance your career profile and earnings and increase your credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements

- Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- 20 Questions
- Unlimited
- Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Free Preview
This test is a free preview; no account is required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
Which of the following statements about enabling multi-factor authentication (MFA) in AWS is correct?
Select ONE answer.
- An IAM user can enable an MFA device for their own user if their IAM policy permits the required actions. 
- Only the AWS account root user can enable MFA, and IAM users cannot enable it even if granted permissions. 
- MFA can only be enabled through the AWS CLI; the AWS Management Console does not support enabling MFA devices. 
- MFA can be enabled for an IAM role in the same way that it is enabled for a user. 
Answer Description
An IAM user can enable an MFA device for their own user when their IAM policy grants the required permissions (such as iam:CreateVirtualMFADevice and iam:EnableMFADevice). The root user can enable MFA for the root user credentials, but it is not the only principal that can enable MFA. IAM roles cannot have MFA devices attached, and MFA can be configured through both the AWS Management Console and the AWS CLI.
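The permissions described above can be sketched as an identity-based policy document, shown here as a Python dict. This is an illustrative example, not an official AWS managed policy; the account ID is a placeholder, and the `${aws:username}` policy variable scopes the resource to the calling user.

```python
# Illustrative IAM policy letting each user manage an MFA device for their
# own identity only. Account ID 123456789012 is a placeholder.
self_service_mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowManageOwnMFADevice",
            "Effect": "Allow",
            "Action": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ResyncMFADevice",
                "iam:ListMFADevices",
            ],
            "Resource": [
                # ${aws:username} resolves to the requesting user's name,
                # so users can touch only their own MFA device and user.
                "arn:aws:iam::123456789012:mfa/${aws:username}",
                "arn:aws:iam::123456789012:user/${aws:username}",
            ],
        }
    ],
}
```

Attached to a user or group, this policy allows MFA self-enrollment without granting any broader IAM access.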
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What permissions are required for an IAM user to enable MFA for themselves?
Why can't MFA be enabled for an IAM role?
Can MFA be enabled through both the AWS Management Console and AWS CLI?
What is the purpose of using multiple Availability Zones for deploying applications on AWS?
- To provide high availability and fault tolerance for applications by distributing resources within a region across physically separated data centers. 
- To serve as a single point of contact and management for global resources in multiple regions. 
- To increase the overall performance of compute instances by equally distributing the workload. 
- To cache static content closer to users and reduce latency. 
Answer Description
Using multiple Availability Zones provides high availability and fault tolerance for applications by distributing resources within a region across physically separated data centers. This ensures that if one Availability Zone becomes unavailable or compromised, others can continue to operate, reducing the potential impact on the application's users. The incorrect answers describe workload distribution for performance and edge caching to reduce latency, which are benefits of load balancing and edge locations respectively, not the primary purpose of Availability Zones.
Ask Bash
What are Availability Zones in AWS?
How do Availability Zones ensure fault tolerance?
What is the difference between Availability Zones and Edge Locations?
A healthcare company stores patient information that includes sensitive records in Amazon S3. They are subject to strict compliance regulations and need an automated way to classify their data at scale and be alerted of any potential exposure risks. Which service should they implement for continuous analysis of their stored content and to receive automated security alerts in case of unsecured sensitive data?
- Use Amazon Cognito to manage patient identity verification and to secure sensitive records. 
- Configure AWS Secrets Manager for rotating credentials and alerting on data exposure. 
- Adopt Amazon Macie for content analysis and automated alerts on insecure data storage. 
- Implement Amazon GuardDuty for continuous threat detection and data classification in S3. 
Answer Description
Amazon Macie is the AWS service specifically crafted for the purpose of analyzing and securing content that resides within Amazon S3. It uses machine learning and pattern matching to automatically recognize sensitive information such as healthcare records. When it detects unsecured data or abnormal data access patterns, it triggers alerts. This fits the requirement of the healthcare company to keep its patient records secure according to compliance regulations. Amazon GuardDuty is a threat detection service that monitors malicious activities rather than classifying content. While AWS Secrets Manager secures and rotates secrets such as database credentials and API keys, it does not classify or monitor object content within S3. Lastly, Amazon Cognito focuses on user identity management and would not assist with the data classification or monitoring needs of the healthcare company.
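As a rough sketch of how such a scan is requested, the parameters below follow the shape of the Macie `CreateClassificationJob` API (shown as a Python dict rather than a live call). The job name, bucket name, and account ID are placeholders invented for this example.

```python
# Parameters for a one-time Macie classification job over an S3 bucket.
# Values are placeholders; a real request needs valid account and bucket names.
classification_job_request = {
    "jobType": "ONE_TIME",                 # run once; "SCHEDULED" is also possible
    "name": "patient-records-scan",
    "s3JobDefinition": {
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["patient-records-bucket"]}
        ]
    },
}

# A client call would look like this (not executed here, requires credentials):
# boto3.client("macie2").create_classification_job(**classification_job_request)
```

Findings from the job (for example, unencrypted objects containing personal health information) can then be routed to Amazon EventBridge for automated alerting.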
Ask Bash
How does Amazon Macie classify sensitive data in S3?
How is Amazon Macie different from Amazon GuardDuty?
What kind of alerts does Amazon Macie provide if it detects unsecured sensitive data?
An application is experiencing significant load on its database tier, particularly with read-heavy query operations that are impacting performance. Which service would best alleviate this issue by caching query results to enhance the performance and scalability of the application?
- Amazon RDS Read Replicas 
- Amazon RDS Multi-AZ deployment 
- Amazon ElastiCache 
- Amazon Simple Storage Service (S3) 
Answer Description
Amazon ElastiCache effectively addresses the challenge of reducing database load by caching query results. It supports caching strategies for both the Redis and Memcached engines, which store frequently accessed data in memory, improving application performance through low-latency access and reducing the load on the database tier. This aligns with the criteria specified for enhancing performance and scalability for read-heavy operations, making it the best option given the scenario described.
Amazon RDS Read Replicas are not the ideal choice because, while they can reduce the load by serving read requests, they are not a caching layer and do not offer the same low latency as an in-memory cache. Using RDS Multi-AZ would provide high availability but would not address the performance issue related to read-heavy queries as effectively as a caching layer. Amazon S3 is primarily used for object storage and is not suitable for caching database query results.
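The usual way an application uses ElastiCache for query results is the cache-aside pattern. The sketch below uses a plain dict as a stand-in for Redis/Memcached, and `query_db` is a hypothetical placeholder for the expensive database call.

```python
# Cache-aside sketch: check the cache first, fall back to the database on a
# miss, then populate the cache so later reads skip the database entirely.
cache = {}  # stand-in for an ElastiCache node

def query_db(key):
    # Pretend this is an expensive SQL query against the primary database.
    return f"row-for-{key}"

def get_with_cache(key):
    if key in cache:          # cache hit: no database round trip
        return cache[key]
    value = query_db(key)     # cache miss: hit the database once
    cache[key] = value        # populate the cache for subsequent readers
    return value

first = get_with_cache("user:42")   # miss -> goes to the database
second = get_with_cache("user:42")  # hit -> served from memory
```

In production the only change is swapping the dict for a Redis or Memcached client and adding a TTL so cached rows eventually expire.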
Ask Bash
What is Amazon ElastiCache and how does it work?
How does ElastiCache differ from using RDS Read Replicas?
What are the main use cases for Amazon ElastiCache?
A multinational corporation seeks to fortify the security of the top-level user credentials across its numerous cloud accounts, where each account functions under its own operational domain. They intend to put into effect a two-step verification process for all top-level user logins and establish an automatic mechanism for monitoring any top-level credential usage in API calls. Which service should they utilize to automate the monitoring of such activities throughout all operational domains?
- Amazon GuardDuty 
- AWS Config 
- AWS Identity and Access Management (IAM) 
- Amazon CloudTrail 
Answer Description
Amazon CloudTrail is the correct answer because it records API activity within an account, including calls made with root user credentials, and can be combined with Amazon EventBridge or CloudWatch alarms to generate alerts when such activity is detected. AWS Config tracks resource configuration changes and is not suited to monitoring account activity directly. AWS IAM manages identities and permissions but does not offer automated detection or alerting for specific user actions. Amazon GuardDuty monitors for threats and unusual activity but is not specifically designed for tracking the usage of root user credentials.
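One common way to automate the alerting is an EventBridge rule that matches CloudTrail-recorded root activity and targets an SNS topic. The event pattern below (shown as a Python dict) is a minimal sketch of that rule; the SNS wiring is omitted.

```python
# EventBridge event pattern matching API calls and console sign-ins made
# with root credentials, as recorded by CloudTrail. Attached to a rule,
# this pattern can trigger an SNS notification for every match.
root_activity_pattern = {
    "detail-type": [
        "AWS API Call via CloudTrail",
        "AWS Console Sign In via CloudTrail",
    ],
    "detail": {"userIdentity": {"type": ["Root"]}},
}
```

With AWS Organizations, an organization trail can deliver events from every member account, so one rule covers all operational domains.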
Ask Bash
What does Amazon CloudTrail do in AWS?
How can CloudTrail alert administrators about specific activities?
What is the difference between CloudTrail and GuardDuty?
A company operates under a multi-account strategy where one account is managed by the security engineers and another is operated by a separate team responsible for network administration. The security team needs to allow the network administration team's account access to a specific Amazon S3 bucket without broadening the access to other accounts. Which of the following is the MOST secure way to grant the required access?
- Edit the S3 bucket's Access Control List (ACL) to include the user identifiers from the team handling network administration. 
- Attach a resource-based policy directly to the S3 bucket identifying the network administration team's account as the principal with the specified permissions. 
- Implement a policy for individual users in the security engineers' account that grants permissions to the network administration team. 
- Set up a bucket policy that limits access to the S3 bucket based on the source IP range of the network administration team's office location. 
Answer Description
Attach a resource-based policy (bucket policy) to the S3 bucket that identifies the network administration team's AWS account as the principal and grants only the required permissions. A bucket policy is evaluated in the account that owns the resource and explicitly supports specifying an entire account in the Principal element, which cleanly limits access to that account.
IAM identity-based policies in the security engineers' account cannot by themselves grant principals from another account access to the bucket; a resource-based policy in the bucket owner's account is still required for cross-account access. Although legacy S3 ACLs can grant permissions to another AWS account via that account's canonical user ID, AWS now recommends disabling ACLs and using bucket policies for simpler management and finer-grained control. Restricting access by source IP address does not satisfy the requirement because any principal from any account could still reach the bucket if it originates from the allowed network range.
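A minimal sketch of such a bucket policy follows, shown as a Python dict. The bucket name and the network administration team's account ID are placeholders; specifying the account root ARN as the principal delegates access to that account, whose administrators then grant it to specific identities.

```python
# Bucket policy granting read access to one specific AWS account only.
# Account ID 222233334444 and bucket name "shared-bucket" are placeholders.
cross_account_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowNetworkAdminAccount",
            "Effect": "Allow",
            # The account root ARN means "any principal in account 222233334444
            # that its own IAM policies allow" -- no other account matches.
            "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::shared-bucket",    # for s3:ListBucket
                "arn:aws:s3:::shared-bucket/*",  # for s3:GetObject
            ],
        }
    ],
}
```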
Ask Bash
What is a resource-based policy in AWS?
Why are bucket policies preferred over S3 ACLs (Access Control Lists)?
How does the Principal element work in an S3 bucket policy?
A SaaS provider currently runs its entire stack in the us-east-1 Region. Customers are located in North America, Europe, and Asia-Pacific. The product team adds two new requirements:
- Decrease round-trip latency for all users and maintain service availability if an AWS Region becomes unavailable.
- Comply with regional regulations that require all customer data created in the European Union (EU) to remain in EU infrastructure.
As the solutions architect, which approach best meets both requirements while minimizing ongoing operational overhead?
- Deploy the application stack in eu-central-1 and ap-southeast-2 in addition to us-east-1. Use Amazon Route 53 latency-based routing with health checks to direct users to the nearest healthy Region. Store EU customer data only in eu-central-1 and disable cross-Region replication for those buckets and databases. 
- Retain a single-Region deployment in us-east-1 but add AWS Global Accelerator to improve network paths for TCP and UDP traffic worldwide. 
- Keep the workload in us-east-1 and place Amazon CloudFront in front of the application to cache static and dynamic content at global edge locations. 
- Move all compute instances into a cluster placement group in us-east-1 and purchase a 100 Gbps AWS Direct Connect to enhance throughput and latency for every user. 
Answer Description
Deploying identical stacks in multiple AWS Regions and using Amazon Route 53 latency-based (or geolocation) routing with health checks fulfills both goals. Each Region serves traffic from the closest users, reducing latency, and EU data can be stored exclusively in the EU Region (for example, eu-central-1) with cross-Region replication disabled, satisfying residency rules. CloudFront or Global Accelerator alone improve network performance but still route dynamic writes to the original Region, violating data residency. Cluster placement groups and dedicated network links likewise leave the workload in a single Region and do not address compliance.
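The routing layer can be sketched as two Route 53 latency records for the same name, shown here as record-set dicts in the shape used by the `ChangeResourceRecordSets` API. The domain, IP addresses, and health check IDs are placeholders.

```python
# Two latency-based records for the same DNS name. SetIdentifier
# distinguishes them, Region drives latency routing, and HealthCheckId lets
# Route 53 stop answering with a Region that fails its health check.
latency_records = [
    {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "198.51.100.10"}],
        "HealthCheckId": "hc-us-east-1",   # placeholder health check ID
    },
    {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "eu-central-1",
        "Region": "eu-central-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "198.51.100.20"}],
        "HealthCheckId": "hc-eu-central-1",
    },
]
```

A European user resolving `app.example.com` is answered with the eu-central-1 endpoint while it is healthy, keeping both latency low and EU traffic on EU infrastructure.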
Ask Bash
What is Amazon Route 53 latency-based routing?
Why is cross-Region replication disabled for EU data in this scenario?
How does deploying in multiple Regions improve availability?
An enterprise with distinct departments needs to ensure managed, independent access to their cloud resources within a shared environment. The configuration should enable department-specific resource management and enforce the least privilege access principle. As a solutions architect, which option would you recommend to achieve this goal?
- Utilize a central governance mechanism to broadly restrict services accessible by each department without individualized access controls. 
- Implement role-switching for different teams to grant them temporary access to other departments' resources when required. 
- Set up groups corresponding to the enterprise's internal structure with attached permissions, ensuring each group's access is limited to resources necessary for their operations. 
- Create separate user accounts with individualized permissions tailored to each member's role in the enterprise to manage resource access manually. 
Answer Description
Establishing groups for each department and assigning the corresponding policies necessary for their access needs would offer the best solution. This strategy enables clear role definition and privilege segregation. Users are allocated to their respective groups, simplifying the administration and modification of permissions in adherence to their roles. The incorrect choices involve strategies that either increase management complexity without providing additional value, mismatch the scenario's requirements, or fail to offer the precise control needed for departmental resource access within a common environment.
Ask Bash
What is the principle of least privilege in cloud access management?
How do IAM groups in AWS simplify resource access management?
Why is role-switching not ideal for managing independent departmental access?
An e-commerce platform built with microservices experiences sudden traffic spikes during flash-sale campaigns. The order-ingestion service must hand off each order message for downstream processing with these requirements:
- Every order message must be processed at least once; duplicate processing is acceptable.
- Producers and consumers must scale independently to handle unpredictable surges without message loss.
- The solution should minimize operational overhead and keep services loosely coupled.
Which AWS service best meets these requirements?
- Amazon Simple Queue Service (SQS) 
- AWS Step Functions 
- Amazon EventBridge event bus 
- Amazon Kinesis Data Streams 
Answer Description
Amazon Simple Queue Service (SQS) is designed for decoupling producers and consumers with a fully managed message queue. Standard queues provide at-least-once delivery and automatically scale to virtually any throughput, allowing independent scaling of microservices.
Amazon Kinesis Data Streams is optimized for real-time analytics of large, ordered data streams and requires shard management; it is more complex than needed for simple message hand-off, and records can expire if consumers fall behind the stream's retention period.
Amazon EventBridge offers at-least-once event delivery but is optimized for routing events to multiple targets and has soft throughput quotas that can throttle extreme burst traffic.
AWS Step Functions orchestrates stateful workflows rather than providing a high-throughput message buffer between microservices.
Therefore, SQS is the most appropriate choice.
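Because standard queues can redeliver a message (for example, when a consumer crashes after processing but before deleting it), consumers are typically made idempotent. The local sketch below shows that defense with a list standing in for the queue; no AWS calls are made.

```python
# Idempotent-consumer sketch for at-least-once delivery: track message IDs
# that have already been handled so a redelivered duplicate is a no-op.
processed_ids = set()
results = []

def handle(message):
    if message["id"] in processed_ids:   # duplicate redelivery: skip
        return
    results.append(message["body"])      # the real order processing goes here
    processed_ids.add(message["id"])     # record completion last

# Simulated deliveries, including one duplicate as SQS standard queues allow.
deliveries = [
    {"id": "m-1", "body": "order-1001"},
    {"id": "m-2", "body": "order-1002"},
    {"id": "m-1", "body": "order-1001"},  # duplicate of the first message
]
for msg in deliveries:
    handle(msg)
```

With real SQS, `message["id"]` maps naturally to the message ID or an order ID carried in the body, and the `processed_ids` set lives in a shared store such as DynamoDB.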
Ask Bash
What is Amazon SQS, and why is it suited for decoupling microservices?
What makes Amazon Kinesis Data Streams unsuitable for this scenario?
How does SQS achieve at-least-once delivery, and why is it important?
Which service should be utilized to manage user sign-up and sign-in functionalities, along with federated authentication, for a mobile application that requires integration with social login providers?
- AWS Identity and Access Management (IAM) 
- Amazon Cognito 
- AWS Control Tower 
- Amazon GuardDuty 
Answer Description
The correct answer is Amazon Cognito, which allows developers to add user sign-up, sign-in, and access control to their web and mobile applications quickly and easily. It also supports federated authentication with social identity providers, such as Facebook, Google, and Amazon, which is the functionality described in the question. The other services listed have different primary uses: AWS IAM is designed for secure AWS resource management, AWS Control Tower is for governance across multiple AWS accounts, and Amazon GuardDuty specializes in security threat detection and continuous monitoring.
Ask Bash
What is Amazon Cognito, and how does it work?
What is federated authentication, and why is it useful?
How does Amazon Cognito differ from AWS IAM?
Your company is deploying a web application on AWS using Amazon RDS for database storage, and the Security Officer is drafting a security strategy. What responsibility does AWS directly take care of as part of the shared responsibility model for Amazon RDS?
- Managing user permissions within the database 
- Designing secure logical database schemas 
- Configuring database encryption at rest 
- Patching the underlying database software 
Answer Description
AWS is responsible for protecting the infrastructure that runs AWS services, which includes the physical security of data centers, the network infrastructure, and patching of managed services such as Amazon RDS. Customers are responsible for managing security inside the database, such as the data itself, controlling who can access the data, and using encryption to protect the data. Because AWS patches the database engine while the logical schema, permissions, and encryption configuration remain under the customer's control, the correct answer is 'Patching the underlying database software'.
Ask Bash
What is the Shared Responsibility Model in AWS?
How does AWS handle patching in Amazon RDS?
What security tasks are customers responsible for in Amazon RDS?
A company is decomposing a monolithic web application into microservices on AWS. The engineering team wants each new microservice to scale out easily when traffic spikes, without requiring complex session-handling logic. Which design approach BEST satisfies this requirement?
- Write the microservice to read and write all application data to the local file system. 
- Enable sticky sessions on the Application Load Balancer so each user is routed to the same instance. 
- Design each microservice to be stateless and persist required data in a shared store such as Amazon DynamoDB. 
- Store user session data in the microservice's in-memory cache for fast access. 
Answer Description
Designing a microservice to be stateless allows any instance of the service to process any incoming request. All state information (for example, user sessions or shopping-cart data) is stored in an external, shared data store such as Amazon DynamoDB or Amazon ElastiCache. Because instances do not rely on locally held state, Auto Scaling groups or container orchestrators can freely add or remove instances, and an Application Load Balancer can route requests to any healthy instance. Approaches that keep state on the instance (sticky sessions, local file storage, or in-memory caches within the service) couple requests to specific instances and hinder horizontal scaling.
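The stateless idea can be sketched in a few lines: every request reads and writes a shared store rather than instance memory. A dict stands in for a DynamoDB table here, so any instance (represented by any call) can serve any user.

```python
# Stateless-service sketch: no request handler keeps state between calls;
# all session data lives in an external shared store.
session_store = {}  # stand-in for a DynamoDB table keyed by session ID

def handle_request(session_id, item):
    cart = session_store.get(session_id, [])  # fetch state from shared store
    cart = cart + [item]                      # do the work for this request
    session_store[session_id] = cart          # write state back; keep nothing local
    return cart

# Two requests from the same user can land on two different instances; each
# call below could be a different EC2 instance behind the load balancer.
handle_request("sess-1", "book")
cart = handle_request("sess-1", "pen")
```

Because nothing is held on the instance, Auto Scaling can add or remove instances at will and the load balancer needs no stickiness.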
Ask Bash
Why are stateless microservices preferred for scaling in AWS?
What is the role of Amazon DynamoDB in stateless microservices?
What issues can arise from using sticky sessions for scaling microservices?
An application running on Amazon EC2 instances needs to read log files that are stored only in the S3 bucket named app-logs. No other S3 actions or buckets are required.
Which IAM policy best implements the principle of least privilege for the application's IAM role?
- Allow s3:GetObject and s3:PutObject on all S3 buckets in the account. 
- Attach the AWS managed policy AmazonS3ReadOnlyAccess to the role. 
- Allow s3:* on the resource arn:aws:s3:::app-logs/*. 
- Allow the action s3:GetObject on the resource arn:aws:s3:::app-logs/*. 
Answer Description
Granting s3:GetObject on the specific bucket path arn:aws:s3:::app-logs/* limits both the actions and the resource scope to exactly what the application needs, satisfying the least-privilege principle. The other options either grant broader actions (write or list), apply to every bucket in the account, or use a managed policy that provides read access to all buckets, all of which exceed the stated requirement.
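The correct policy is short enough to write out in full. The bucket name `app-logs` comes from the question; everything else is the standard policy-document structure, shown here as a Python dict.

```python
# Least-privilege policy for the EC2 instance role: exactly one action
# (read objects) on exactly one bucket's objects, and nothing else.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadAppLogsOnly",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::app-logs/*",
        }
    ],
}
```

If the application also needed to list the bucket's contents, a second statement allowing `s3:ListBucket` on `arn:aws:s3:::app-logs` would be added, and no more.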
Ask Bash
What is the principle of least privilege in IAM policies?
How does `s3:GetObject` differ from other S3 actions like `s3:PutObject`?
Why is using an AWS managed policy like `AmazonS3ReadOnlyAccess` not ideal here?
A multinational enterprise has separate accounts for development and production environments to enhance security and operational efficiency. Developers need to access cloud resources in the production environment sporadically to perform troubleshooting. As a solutions architect, what approach would you suggest to facilitate these occasional access requirements while maintaining stringent security controls?
- Provide distinct user credentials for each developer that grant access to the necessary services in the separate environment, with a scheduled monthly rotation policy. 
- Implement trust relationships between the organization's accounts using roles with permissions to access necessary services, allowing for temporary credential assumption through a trusted federation. 
- Adjust the policies attached to resources in the separate environment to directly authorize access for identities from the development environment. 
- Create identically named roles with necessary permissions in both the development and separate environment accounts. 
Answer Description
The correct approach involves setting up trust relationships between accounts by creating roles that can be assumed as needed. The idea is to provide temporary credentials that can be used within certain security parameters defined by the role’s permission policy. This enables the enterprise to control access precisely without having to manage permanent credentials for each developer for each environment, adhering to the principle of least privilege. Generating dedicated user credentials or creating shared roles in both accounts doesn't follow the best practice of using temporary credentials for cross-account access and may not meet security and audit requirements. Enabling direct access by modifying resource policies could compromise security by making the role too permissive and is not aligned with recommended security practices.
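The trust relationship lives in the production-account role's trust policy. The sketch below shows its shape as a Python dict; the development account ID is a placeholder, and the MFA condition is an optional hardening step rather than a requirement from the question.

```python
# Trust policy for a troubleshooting role in the production account.
# Principals from the development account (111122223333, a placeholder) may
# call sts:AssumeRole to obtain temporary credentials for this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            # Optional extra control: require MFA on the assuming identity.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```

The role's separate permissions policy then grants only the troubleshooting actions developers need, and every assumption is logged by CloudTrail for auditing.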
Ask Bash
What are trust relationships in AWS and how do they work?
Why are temporary credentials preferred over permanent credentials for cross-account access?
What is the principle of least privilege, and why is it important in this context?
Your company plans to host a set of web applications in the AWS Cloud. Each application should be accessible over the internet but must be isolated from one another to prevent potential security issues. As the Solutions Architect, you need to design a strategy that enforces the isolation while allowing HTTPS traffic to each application. Which approach satisfies these requirements?
- Create a VPC with multiple public subnets and deploy each application in a separate security group that allows inbound traffic only on TCP port 443. 
- Deploy all applications to a single EC2 instance and control access using the instance's security group to allow inbound traffic only on port 443. 
- Create a VPC with a single public subnet and apply a network ACL that allows inbound traffic on port 22 to ensure secure communication. 
- Configure a single public subnet within a VPC and associate all applications to one security group that allows all inbound traffic. 
Answer Description
Create a VPC with multiple public subnets (for example, in different Availability Zones for high availability). Launch the compute resources for each application (such as EC2 instances or Application Load Balancers) in the appropriate subnet and attach a dedicated security group to each application's network interfaces. Configure the security-group rules to allow inbound TCP 443 (HTTPS) from 0.0.0.0/0 and to deny all other inbound traffic (no rules permitting traffic from the other applications' security groups). Because a security group is evaluated at the instance or ENI level, this prevents the applications from initiating unsolicited traffic to one another while still allowing internet users to reach each application over HTTPS.
The other options either expose additional ports, place all applications behind a single overly permissive security group, or rely on opening SSH (port 22) rather than HTTPS, so they do not meet both the isolation and HTTPS-only requirements.
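The per-application rule sets are identical in shape and deliberately minimal. The dicts below follow the `IpPermissions` structure used when authorizing security-group ingress; the applications themselves are hypothetical.

```python
# Inbound rules for each application's dedicated security group: HTTPS from
# anywhere, nothing else. Security groups deny all other inbound traffic by
# default, so isolation needs no explicit deny rules.
app_a_ingress = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
app_b_ingress = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
```

Crucially, neither group contains a rule referencing the other group's ID, so the applications cannot open unsolicited connections to each other even though both accept HTTPS from the internet.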
Ask Bash
What is a security group in AWS?
What is the difference between a security group and a network ACL in AWS?
Why is HTTPS traffic typically configured on TCP port 443?
An organization aims to maintain operational continuity of its critical workload even if an entire data center servicing their region encounters an outage. Their solution includes computing resources distributed across diverse physical locations within the same geographical area. To enhance the system's robustness, which strategy should be implemented for the data layer?
- Install a globally distributed database with read replicas in various regions for geographical data distribution. 
- Introduce a Load Balancer to distribute traffic among database instances to minimize the impact of a location outage. 
- Implement a Multi-AZ configuration for the relational database to promote automatic failover and data redundancy. 
- Configure an active-passive setup using a secondary region and enact health checks to direct traffic upon failure. 
Answer Description
The question asks what you can do to maintain operational continuity if one data center in a region has an outage. Keep in mind that with AWS, a region is made up of multiple data centers grouped into Availability Zones. Therefore, a Multi-AZ setup helps mitigate a data center outage.
Choosing a Multi-AZ deployment for an RDS instance provides high availability by automatically maintaining a synchronous standby replica in a different data center, or Availability Zone. In case of an infrastructure failure, the database will fail over to the standby so that database operations can resume quickly without manual intervention. This choice is the most aligned with the requirement for operational continuity within a single region in the face of a data center outage. The other answers either describe strategies that introduce geographical redundancy, which goes beyond the scope of the question, or load balancing, which does not address the need for automatic failover at the data layer.
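Enabling this is a single parameter at creation time. The dict below follows the shape of the RDS `CreateDBInstance` request; the identifier, engine, and sizes are placeholder values for illustration.

```python
# Parameters for a Multi-AZ RDS instance. MultiAZ=True is the setting that
# provisions a synchronous standby in another Availability Zone and enables
# automatic failover; all other values here are placeholders.
db_params = {
    "DBInstanceIdentifier": "app-db",
    "Engine": "postgres",
    "DBInstanceClass": "db.m6g.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,  # let Secrets Manager hold the password
    "MultiAZ": True,                   # adds the standby and automatic failover
}

# The actual call would be (not executed here, requires credentials):
# boto3.client("rds").create_db_instance(**db_params)
```

During a failover RDS repoints the instance's DNS endpoint at the standby, so the application reconnects without configuration changes.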
Ask Bash
What is Multi-AZ in AWS RDS?
How does Multi-AZ ensure high availability?
How does Multi-AZ differ from Read Replicas?
A company is deploying a three-tier web application consisting of a web server tier, application server tier, and a database tier. How should the organization restrict each tier to only the permissions necessary for their specific operations?
- Distribute administrative credentials to instances in all tiers, ensuring they have sufficient permissions for any action they might need to perform. 
- Employ root user credentials for all instances to maintain simplicity in permissions management and ensure full access to resources. 
- Assign tailored IAM roles to each EC2 instance in the respective tiers with only the permissions necessary for their functions. 
- Remove all permissions from instances in each tier to maximize security and prevent potential security incidents. 
Answer Description
Implementing fine-grained access controls by assigning tailored IAM roles to each tier's respective EC2 instances ensures that each tier operates with only the permissions necessary for its duties. This strict adherence to the principle of least privilege prevents excessive permissions that could be exploited in case of a security breach. Providing overarching administrative credentials to all tiers, using root account access, or stripping all permissions contradict the security best practice of granting least privilege to perform required functions and are, therefore, incorrect.
You have been tasked with designing a solution for your company that allows existing corporate network users to obtain temporary credentials to interact with console and programmatic interfaces, streamlining the sign-on process and avoiding separate user management. Which method would you employ to facilitate this?
- Integrate the corporate directory with identity federation to assign permissions through temporary security credentials. 
- Distribute long-term security credentials to users for manual configuration of access to the necessary interfaces. 
- Implement a proprietary authentication solution specific to the company's internal systems for granting access. 
- Create individual IAM users corresponding to each member of the workforce and manage permissions directly. 
Answer Description
The process of federation involves integrating an external directory service with IAM roles. By setting up federation, users authenticate with their existing credentials against the corporate directory and then receive temporary security credentials to operate in the console or interact with services via the APIs or CLI. This method honors the principle of least privilege and simplifies credential management while providing secure access. Manually creating IAM users is redundant and insecure for large enterprise environments. Distributing long-term credentials goes against security best practices, making them a poor choice. Lastly, developing a bespoke authentication solution specific to the corporation's internal systems does not use AWS's built-in mechanisms for secure access management and is less efficient and potentially less secure.
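As a sketch of the exchange, a federated application passes a SAML assertion from the corporate identity provider to the real STS `AssumeRoleWithSAML` API and receives short-lived credentials back. The ARNs below are placeholders, and no API call is made here.

```python
def saml_federation_params(role_arn: str, provider_arn: str,
                           assertion_b64: str) -> dict:
    """Build the request an application would send to STS
    AssumeRoleWithSAML to trade a corporate-directory login for
    temporary AWS credentials. ARN values are hypothetical."""
    return {
        "RoleArn": role_arn,              # IAM role the user assumes
        "PrincipalArn": provider_arn,     # IAM SAML identity provider
        "SAMLAssertion": assertion_b64,   # issued by the corporate IdP
        # Temporary credentials expire automatically, unlike
        # long-term access keys handed out to individual users.
        "DurationSeconds": 3600,
    }

req = saml_federation_params(
    "arn:aws:iam::123456789012:role/FederatedUser",
    "arn:aws:iam::123456789012:saml-provider/CorpIdP",
    "base64-assertion...",
)
```

With boto3 this would be `sts.assume_role_with_saml(**req)`, whose response contains an access key, secret key, session token, and expiration.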
Your enterprise is scaling and plans to create separate environments for various departments. To ensure centralized management, consistent application of compliance requirements, and an automated setup process for these environments, which service should you leverage?
- AWS Organizations 
- AWS Config 
- AWS Control Tower 
- Amazon Inspector 
Answer Description
Using AWS Control Tower, enterprises can manage multiple environments by setting up a well-architected baseline (a landing zone), automating the provisioning of new accounts, and uniformly applying guardrails across all environments for security and compliance. The other options each cover only part of the requirement: AWS Organizations groups accounts and applies policies but does not automate well-architected environment setup, AWS Config tracks and audits resource configurations, and Amazon Inspector scans for vulnerabilities. None of them offers the comprehensive solution needed for centralized governance and automated environment provisioning.
Your client wishes to build a system where their web and mobile platforms can securely request information from a variety of upstream services. This system must support managing developer access, accommodate changes in the structure of requests, and offer mechanisms to limit the number of incoming requests per user. Which Amazon service should they implement to meet these requirements?
- Amazon API Gateway 
- AWS Lambda 
- Amazon Simple Storage Service (S3) 
- AWS Step Functions 
- AWS Direct Connect 
- Amazon Cognito 
Answer Description
The correct answer is Amazon API Gateway because it securely manages API requests, supports developer access control, handles request transformations, and enforces rate limiting. It integrates with AWS services like Lambda and Cognito, making it ideal for managing web and mobile API traffic. The incorrect options lack full API management capabilities: AWS Direct Connect is for private networking, S3 is for storage, and Cognito only handles authentication. Step Functions is for workflow automation, and Lambda executes backend logic but lacks API request management. While some of these services complement API Gateway, none provides a complete solution on its own.
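To illustrate the per-user rate limiting the question asks about, API Gateway expresses it as a usage plan tied to an API key: a steady-state request rate, a burst ceiling, and a quota over a period. The plan name and limits below are hypothetical.

```python
def usage_plan(name: str, rate: float, burst: int, quota: int) -> dict:
    """Sketch of the settings API Gateway uses in a usage plan to
    throttle requests per API key. All values are illustrative."""
    return {
        "name": name,
        # Steady-state requests/second plus a short burst allowance.
        "throttle": {"rateLimit": rate, "burstLimit": burst},
        # Hard cap on total requests over the period.
        "quota": {"limit": quota, "period": "DAY"},
    }

plan = usage_plan("mobile-tier", rate=10.0, burst=20, quota=10_000)
print(plan["throttle"])  # {'rateLimit': 10.0, 'burstLimit': 20}
```

With boto3 this shape maps onto `apigateway.create_usage_plan(...)`; developers are then issued API keys associated with the plan, which is how per-user limits are enforced.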