Microsoft Azure Developer Associate Practice Test (AZ-204)
Use the form below to configure your Microsoft Azure Developer Associate Practice Test (AZ-204). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Developer Associate AZ-204 Information
Navigating the AZ-204 Azure Developer Associate Exam
The Microsoft Azure Developer Associate (AZ-204) certification is a crucial credential for cloud developers specializing in the Microsoft Azure ecosystem. This exam is designed for professionals who are responsible for all phases of the development lifecycle, including gathering requirements, design, development, deployment, security, maintenance, performance tuning, and monitoring. Candidates should have 1-2 years of professional development experience, including hands-on experience with Microsoft Azure. The exam validates a developer's proficiency in leveraging Azure's tools, SDKs, and APIs to build and maintain cloud applications and services.
The AZ-204 exam assesses a broad set of skills across five primary domains. These areas include developing Azure compute solutions (25-30%), developing for Azure storage (15-20%), implementing Azure security (15-20%), monitoring, troubleshooting, and optimizing Azure solutions (5-10%), and connecting to and consuming Azure services and third-party services (20-25%). The exam itself consists of 40-60 questions and has a duration of about 100 minutes. The question formats can vary, including multiple-choice, scenario-based questions, and drag-and-drop tasks.
The Value of Practice Exams in Preparation
A critical component of a successful study plan for the AZ-204 exam is the use of practice tests. Taking practice exams offers several key benefits that go beyond simply memorizing facts. They help you become familiar with the style, wording, and difficulty of the questions you are likely to encounter on the actual exam. This familiarity can help reduce anxiety and improve time management skills during the test.
Furthermore, practice exams are an excellent tool for self-assessment. They allow you to gauge your readiness, identify areas of weakness in your knowledge, and focus your study efforts accordingly. By reviewing your answers, especially the incorrect ones, you can gain a deeper understanding of how different Azure services work together to solve real-world problems. Many candidates find that simulating exam conditions with timed practice tests helps build the confidence needed to think clearly and methodically under pressure. Microsoft itself provides a practice assessment to help candidates prepare and fill knowledge gaps, increasing the likelihood of passing the exam.

Free Microsoft Azure Developer Associate AZ-204 Practice Test
- 20 Questions
- Unlimited time
- Develop Azure compute solutions
- Develop for Azure storage
- Implement Azure security
- Monitor and troubleshoot Azure solutions
- Connect to and consume Azure services and third-party services
You are importing an OpenAPI 3.0 document to create a new API in Azure API Management. The specification contains operation-level examples, tags, summary text, and an externalDocs object. After the API is created, you will publish it through the built-in developer portal. Which statement accurately describes how the information from the specification is used in the portal?
externalDocs links are automatically transformed into policy snippets rendered in the portal's test console.
Operation examples defined in the specification are automatically displayed as sample requests and responses for each operation in the developer portal.
Tags on each operation are converted into separate API versions visible in the portal's version selector.
The summary and description elements are ignored during import and must be added manually in API Management before they appear in the portal.
Answer Description
Azure API Management preserves the metadata contained in an OpenAPI specification. Any examples defined for an operation are imported and automatically surface in the developer portal as sample requests and responses that developers can inspect or replay. Summary and description fields are also preserved, tags are used only for grouping operations (not for versioning), and externalDocs links are kept as external references rather than being converted to policies. Therefore, the statement about examples being shown as samples is the only accurate one.
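As a concrete illustration, a minimal OpenAPI 3.0 fragment carrying the metadata discussed above might look like the following sketch (the API path, titles, and values are hypothetical):

```yaml
openapi: 3.0.1
info:
  title: Orders API
  version: "1.0"
externalDocs:
  description: Full API reference      # kept as an external link, not turned into a policy
  url: https://example.com/docs
paths:
  /orders/{id}:
    get:
      summary: Get an order by ID      # preserved and shown for the operation in the portal
      tags: [Orders]                   # groups operations; does not create API versions
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              example:                 # surfaces as a sample response in the developer portal
                id: "123"
                status: shipped
```

On import, API Management maps each of these elements as the answer describes: the `example` becomes a sample response, the `summary` becomes the operation description, and the `tags` only group operations in the portal's navigation.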
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is the role of an OpenAPI 3.0 document in Azure API Management?
How are operation examples displayed in the Azure API Management developer portal?
What happens to tags and externalDocs in an OpenAPI 3.0 import?
Your team added a URL ping availability test to Application Insights for an Azure web app. Whenever the metric Failed locations crosses the threshold of 0, the on-call engineer must get an SMS and the help-desk distribution list must get an email. Which configuration should you implement to satisfy this requirement without changing application code?
Configure an Azure Event Grid subscription on the Failed locations metric and trigger a Logic App that sends the SMS and email notifications.
In the availability test settings, list the engineer's phone number and the help-desk email address in the Alert recipients section.
Add the engineer and help-desk distribution list as owners of the Application Insights resource so they receive automatic alert emails.
Create an action group containing two Email/SMS/Push/Voice receivers (one SMS, one email) and associate that action group with a metric alert on the Failed locations metric.
Answer Description
Azure Monitor alerts use action groups to define who is notified and how. In the portal, you first create a metric alert rule that targets the availability test metric Failed locations and sets the threshold to greater than 0. During alert creation you select (or create) an action group. Inside the action group you add two Email/SMS/Push/Voice receivers: one configured for the engineer's phone number (SMS) and another for the help-desk distribution list (email). When the alert fires, Azure Monitor automatically delivers the notifications through the channels defined in that single action group. Adding owners to the resource only grants access and does not send notifications. The availability test blade itself has no place to list recipients; it relies on Azure Monitor alerts. Using Event Grid and a Logic App could work but adds unnecessary overhead when the built-in action group can deliver both SMS and email directly.
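For teams that automate this setup, the action group can also be declared in an ARM template. The sketch below uses placeholder names, a placeholder phone number, and a placeholder email address:

```json
{
  "type": "microsoft.insights/actionGroups",
  "apiVersion": "2023-01-01",
  "name": "oncall-actiongroup",
  "location": "Global",
  "properties": {
    "groupShortName": "oncall",
    "enabled": true,
    "smsReceivers": [
      {
        "name": "oncall-engineer",
        "countryCode": "1",
        "phoneNumber": "5550100"
      }
    ],
    "emailReceivers": [
      {
        "name": "helpdesk-list",
        "emailAddress": "helpdesk@contoso.com",
        "useCommonAlertSchema": true
      }
    ]
  }
}
```

A metric alert rule on Failed locations would then reference this action group by resource ID in its `actions` array.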
Ask Bash
What is an Azure Monitor Action Group?
Why is using Event Grid and Logic Apps unnecessary in this scenario?
How do Availability Tests in Application Insights work with Azure Monitor alerts?
You are deploying a new Azure API Management (APIM) instance by using an ARM template. The template contains the following snippet:
"sku": {
"name": "Consumption",
"capacity": 2
}
After running the deployment, the template validation fails with the error "Property capacity is not allowed".
To complete the deployment successfully while keeping the billing model that charges per execution and scales automatically, what should you do?
Remove the capacity property (or set it to 0) and redeploy the Consumption tier.
Change the sku.name value to Developer and keep capacity set to 1.
Keep the sku settings and add the property "autoScale": "enabled" to the template.
Change the sku.name value to Basic and keep capacity set to 2.
Answer Description
The Consumption tier is designed for serverless, per-execution billing and always scales automatically. Because there are no dedicated units, the capacity property must be omitted (or set to 0) in the ARM template. Any positive value is rejected. Other tiers (Developer, Basic, Standard, Premium) require capacity to specify the number of units (or a fixed value of 1 for Developer). Therefore, removing the capacity property (or setting it to 0) is the only way to deploy an APIM instance that keeps the Consumption pricing model.
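Under that rule, the corrected snippet simply drops the capacity property (setting it to 0 is also accepted):

```json
"sku": {
  "name": "Consumption",
  "capacity": 0
}
```

Any positive capacity value causes the "Property capacity is not allowed" validation error for this tier.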
Ask Bash
What is the purpose of the Consumption tier in Azure API Management?
Why is the 'capacity' property not allowed in the Consumption tier?
What are the differences between the Consumption and Basic tiers in Azure API Management?
You run an on-demand data-processing task in Azure Container Instances. The task completes in roughly 15 minutes. After a successful run (exit code 0), the container should stop so that billing ends. If the task fails (non-zero exit code), the container must restart automatically. Which restart policy should you set when creating the container group?
UnlessStopped
Always
OnFailure
Never
Answer Description
The OnFailure restart policy instructs Azure Container Instances to monitor the main process running inside the container. If the process exits successfully (exit code 0), the container stops and remains stopped, so billing for CPU and memory ceases. If the process exits with a non-zero code, ACI interprets this as a failure and restarts the container automatically. The Always policy keeps restarting the container even after successful completion, incurring unnecessary cost. The Never policy prevents any automatic restart, so a failure would leave the container stopped. UnlessStopped is not a valid ACI restart policy.
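In a YAML container-group definition, this policy is a single property; the image name and sizing values below are placeholders:

```yaml
apiVersion: '2021-10-01'
location: eastus
name: data-task-group
properties:
  osType: Linux
  restartPolicy: OnFailure   # stop for good on exit code 0, restart on non-zero exits
  containers:
  - name: data-task
    properties:
      image: myregistry.azurecr.io/data-task:latest
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
type: Microsoft.ContainerInstance/containerGroups
```

The same setting is available on the command line as `--restart-policy OnFailure` when running `az container create`.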
Ask Bash
What is the role of exit codes in Azure Container Instances?
How does the OnFailure restart policy benefit cost management in Azure Container Instances?
What happens if I choose Always or Never restart policies in Azure Container Instances?
You develop an Azure Function that uploads telemetry files to the incoming container of an Azure Storage account. The function runs daily between 02:00 and 03:00 UTC. Security policy mandates a service SAS that lets the function create or overwrite blobs without listing the container and is valid only during the execution window. Which SAS token meets these requirements?
sp=rl&st=2024-10-15T00:00Z&se=2024-10-15T23:59Z
sp=cw&st=2024-10-15T02:00Z&se=2024-10-15T03:00Z
sp=cwdl&st=2024-10-15T02:00Z&se=2024-10-15T03:00Z
sp=wl&st=2024-10-15T02:00Z&se=2024-10-15T03:00Z
Answer Description
To satisfy least-privilege policy, the token must grant only the permissions needed to upload files. Write (w) permits creating and overwriting blobs, and Create (c) is optional because Write already covers creation. Listing the container would require the List (l) permission, which must be excluded. The token sp=cw&st=2024-10-15T02:00Z&se=2024-10-15T03:00Z grants only Create and Write and is valid solely during the 02:00-03:00 UTC window. The other tokens either include List (l), grant unnecessary permissions such as Delete (d), or have a much broader validity period, violating the stated security requirements.
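The permission logic above can be checked mechanically. The small stdlib-only sketch below parses a service SAS query string and reports what it allows; the helper name and return shape are illustrative, not part of any Azure SDK:

```python
from urllib.parse import parse_qs

def check_sas(token: str) -> dict:
    """Parse a service SAS query string and summarize its permissions and window.

    Permission letters in sp: r=read, w=write, c=create, d=delete, l=list.
    """
    params = {k: v[0] for k, v in parse_qs(token).items()}
    perms = set(params.get("sp", ""))
    return {
        "can_upload": "w" in perms,      # Write covers creating and overwriting blobs
        "can_list": "l" in perms,
        "can_delete": "d" in perms,
        "start": params.get("st"),
        "expiry": params.get("se"),
    }

# The correct token grants only Create and Write within the run window:
info = check_sas("sp=cw&st=2024-10-15T02:00Z&se=2024-10-15T03:00Z")
assert info["can_upload"] and not info["can_list"] and not info["can_delete"]
```

Running the same check against the other candidate tokens flags the extra List or Delete permissions that violate the least-privilege requirement.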
Ask Bash
What is a Service SAS in Azure Storage?
Why is the Write (w) permission sufficient for uploading files?
What does 'least-privilege policy' mean in the context of Azure security?
You are preparing to deploy a new Azure API Management instance for a production workload. The instance must join an Azure virtual network in internal mode so it can reach on-premises systems, allow you to add additional capacity units later without redeployment, and use custom hostnames for the gateway and developer portal. Which pricing tier should you choose when you create the instance?
Standard tier
Developer tier
Basic tier
Premium tier
Answer Description
Only the Premium tier supports all three stated capabilities simultaneously. Internal virtual-network connectivity is available in both the Developer and Premium tiers, but the Developer tier is limited to a single capacity unit and is intended only for non-production use. The Basic and Standard tiers allow scaling but do not support internal VNet integration. Custom hostnames are available in several tiers, but the Premium tier is the only production tier that combines custom hostnames, internal VNet support, and horizontal scalability.
Ask Bash
What does 'internal mode' mean in Azure API Management?
Why is VNet integration important for production workloads in Azure?
What are capacity units in Azure API Management, and why is scalability important?
You plan to create a new Azure API Management (APIM) instance in the Azure portal. During the first page of the creation wizard, you must enter several values. After the resource is provisioned, which setting cannot be modified without recreating the APIM service?
The organization (publisher) name shown in the developer portal
The pricing tier (SKU) assigned to the instance
The service name that forms the *.azure-api.net public endpoint
The administrator email address used for notifications
Answer Description
The service name becomes part of the public endpoint (<service-name>.azure-api.net) and is fixed for the lifetime of the resource; choosing a different name requires creating a new APIM instance. By contrast, the organization (publisher) name and the administrator email can be edited at any time after provisioning, and the pricing tier can be changed later (with some restrictions, such as migrating to or from the Consumption tier).
Ask Bash
Why is the service name locked after an Azure APIM instance is created?
Can the organization name or administrator email in Azure APIM be updated after creation?
What happens if I need to change the pricing tier for an existing Azure APIM instance?
You need to provision a new Azure App Service Web App named "contoso-api" for a .NET 7.0 application. The web app must run on Linux and be placed in the existing resource group "RG1" and the existing App Service plan "asp-linux" (located in the same resource group). Which Azure CLI command meets the requirements?
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNET:7.0" --os-type Linux
az webapp create --resource-group RG1 --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Linux
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Linux
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Windows
Answer Description
The az webapp create command provisions the Web App. The --plan parameter places the app in an existing App Service plan, and --os-type Linux enforces the operating system. For a .NET 7 workload on App Service for Linux, the runtime identifier must be "DOTNETCORE|7.0". Omitting --plan fails to target the existing asp-linux plan, using a Windows OS type would violate the Linux requirement, and the value "DOTNET:7.0" is not a valid runtime string.
Ask Bash
What is an App Service Plan in Azure?
What is the significance of the runtime 'DOTNETCORE|7.0' in Azure App Service?
Why is '--os-type' important when provisioning an Azure Web App?
You are building a .NET application that uses the Azure.Storage.Blobs v12 SDK. Before processing an existing block blob you must determine its size (ContentLength) and read the value of a custom metadata key named "department." You must accomplish this without downloading the blob's content and while making as few service calls as possible. Which SDK action should you take?
Call BlobClient.GetTagsAsync and read the Tags dictionary.
Call BlobClient.GetPropertiesAsync and read the returned BlobProperties values.
Call BlobClient.DownloadContentAsync and inspect the BlobDownloadResult.
Call BlobClient.SetMetadataAsync with an empty dictionary, then read the response headers.
Answer Description
BlobClient.GetPropertiesAsync issues a single HEAD request to the blob and returns a BlobProperties object. The object contains both system properties such as ContentLength and the complete Metadata dictionary, allowing you to read the "department" value without downloading content.
DownloadContentAsync retrieves the full blob data, which is unnecessary and incurs additional network cost.
GetTagsAsync returns only tags, not system properties or metadata.
SetMetadataAsync is used to write metadata; calling it with an empty dictionary would overwrite existing metadata and still would not return the blob's size.
Ask Bash
What is BlobClient.GetPropertiesAsync?
What’s the difference between metadata and tags in Azure blobs?
Why is using BlobClient.DownloadContentAsync inefficient in this scenario?
You manage an Azure API Management instance that exposes several APIs to tenants who authenticate with a subscription key. The solution must block a tenant that makes more than 50 requests within any 60-second window, but never throttle requests originating from the corporate VNet (10.0.0.0/16). Which policy configuration meets the requirement?
Configure a validate-subscription policy in the outbound section and use a header transform rule to return HTTP 429 after 50 calls.
Apply a rate-limit policy with calls set to 50 and renewal-period set to 60 seconds in the backend section of the API.
Add an ip-filter policy that allows 10.0.0.0/16, followed by a rate-limit-by-key policy with calls set to 50, renewal-period set to 60 seconds, and counter-key set to the subscription key.
Add an ip-filter policy that allows 10.0.0.0/16, followed by a quota-by-key policy with calls set to 50, renewal-period set to 60 seconds, and counter-key set to the subscription key.
Answer Description
The ip-filter policy must run first to exempt traffic from the internal address space. The rate-limit-by-key policy is designed for short-term rolling windows and evaluates each request against a moving counter keyed to a value such as the caller's subscription key. When calls exceed the configured limit (50) inside the renewal period (60 seconds), API Management automatically returns HTTP 429 to the offending caller. The quota-by-key policy is intended for longer-term usage quotas, a simple rate-limit policy cannot distinguish tenants, and header manipulation alone does not enforce throttling.
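One way to sketch this behavior in an inbound policy section is shown below. The address check and counter key are illustrative (a production policy might parse the CIDR range properly rather than matching a string prefix), and `context.Subscription.Id` stands in for the caller's subscription key:

```xml
<inbound>
    <base />
    <choose>
        <!-- Skip throttling for requests originating in the corporate 10.0.0.0/16 space -->
        <when condition="@(!context.Request.IpAddress.StartsWith(&quot;10.0.&quot;))">
            <!-- Rolling 60-second window of 50 calls per subscription key -->
            <rate-limit-by-key calls="50" renewal-period="60"
                counter-key="@(context.Subscription.Id)" />
        </when>
    </choose>
</inbound>
```

When a tenant outside the corporate range exceeds 50 calls inside the 60-second window, API Management returns HTTP 429 for that tenant's key only.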
Ask Bash
What is the purpose of the ip-filter policy in the context of Azure API Management?
How does the rate-limit-by-key policy work in Azure API Management?
Why is quota-by-key policy not suitable for short-term limits?
A container named Transactions already contains several years of data. A background .NET service must forward only new inserts and updates to Azure Service Bus. The solution must 1) ignore existing items, 2) resume from the last processed change after restarts, and 3) allow multiple service instances without duplicate processing. Which implementation meets the requirements?
Use the Azure Cosmos DB Change Feed Processor SDK, set ChangeFeedStartFrom.Now, and store leases in a dedicated lease container.
Create a change-feed iterator with ChangeFeedStartFrom.Beginning and keep continuation tokens in memory.
Configure a 1-second TTL on the container and process deleted documents from the change feed.
Enable full-fidelity change feed mode and poll the container every minute for the latest _lsn values.
Answer Description
The Azure Cosmos DB Change Feed Processor library automatically distributes change-feed ranges by using leases, guaranteeing at-least-once processing without duplicates across scaled-out workers. It persists its position in a separate lease container, so after a restart it continues from the last saved checkpoint. When you keep the default StartFromBeginning value (false) or explicitly set ChangeFeedStartFrom.Now, the processor begins reading only future changes, so historic items are ignored. The other options either re-read existing data, rely on manual and volatile token tracking, or misuse TTL/deletion semantics, and therefore cannot satisfy all three requirements.
Ask Bash
What is the Azure Cosmos DB Change Feed?
What does the ChangeFeedStartFrom.Now setting do?
What is a lease container and why is it important?
When using the Azure.Storage.Blobs v12 .NET SDK, you need to read the custom metadata you previously added to a block blob without downloading any of the blob data. Which BlobClient method should you call to obtain the metadata in the response?
GetPropertiesAsync
DownloadContentAsync
SetMetadataAsync
GetTagsAsync
Answer Description
The BlobClient.GetPropertiesAsync method issues a properties call to the service that returns the blob's system properties and any user-defined metadata without transferring the blob's content. DownloadContentAsync streams the data itself; GetTagsAsync returns blob index tags rather than metadata; SetMetadataAsync is used to write, not read, metadata.
Ask Bash
What is the difference between metadata and blob content in Azure Blob Storage?
How does the GetPropertiesAsync method differ from the SetMetadataAsync method?
What are blob index tags, and how do they differ from metadata?
You are developing a background service in C# that must react to inserts and updates in an Azure Cosmos DB container named Orders. The service must guarantee that each document is processed exactly once, even after the process restarts, and it must automatically redistribute the workload when you scale the service out to multiple instances. Which implementation should you use to meet both requirements?
Create a ChangeFeedProcessor instance, assign a separate lease container, and register the processor to handle changes.
Add a SQL API pre-trigger that calls an Azure Function on every insert and update to Orders.
Enable analytical store on the Orders container and periodically query items where the _ts value is greater than the last processed timestamp.
Use GetChangeFeedIterator starting from the beginning of time and keep the continuation token in application memory.
Answer Description
The Azure Cosmos DB Change Feed Processor library creates a dedicated lease container that stores checkpoints for each logical partition. Because the processor persists its state in the lease container, it can resume from the exact position where it stopped, ensuring at-least-once (effectively exactly-once) processing across restarts. The same lease mechanism also handles automatic partition ownership and load balancing among multiple concurrent hosts, so the workload is evenly distributed without extra code. Reading the change feed manually with an iterator would require custom persistence logic, while analytical store queries or SQL triggers do not provide ordered, scalable change-feed semantics.
Ask Bash
What is the Azure Cosmos DB Change Feed?
What is a lease container in Azure Cosmos DB Change Feed Processor?
How does Change Feed ensure scalability and guaranteed processing?
An ASP.NET Core web app is hosted in Azure App Service. The app reads its settings from Azure App Configuration. Several settings are stored as Key Vault references, for example:
ConnectionStrings--Sql = @Microsoft.KeyVault(SecretUri=https://contosokv.vault.azure.net/secrets/SqlConnection)
When the application starts, it throws a Forbidden (403) error while trying to resolve the reference. You must fix the issue without storing any credentials in code or in App Configuration.
What should you do?
Enable diagnostic logging for the Key Vault and send the logs to Application Insights.
Enable a system-assigned managed identity for the App Service resource and assign it the Key Vault Secrets User role on the vault.
Grant the App Service resource the App Configuration Data Reader role on the Key Vault.
Store an Azure AD application client secret in App Configuration and have the code read that secret to access Key Vault.
Answer Description
Key Vault references are evaluated by the client application. The identity that the application presents to Azure Key Vault must have permission to read secrets. Enabling a system-assigned managed identity on the App Service instance allows the runtime to obtain an Azure AD token automatically. Granting that identity the Key Vault Secrets User (or Secret Reader via an access policy) role gives it "Get" permission on the secrets, so the reference can be resolved. The other options either do not provide authentication to Key Vault or grant an unrelated permission.
Ask Bash
What is a system-assigned managed identity in Azure?
What is the Key Vault Secrets User role and how does it work?
How does Azure App Configuration handle Key Vault references?
An Azure Storage account named contosoimages hosts a container for customer uploads. You are developing an ASP.NET Core Web API that must return a six-hour SAS URL so clients can upload one blob. Security requires Azure AD authentication only, no storage account keys, and the token must be revocable by disabling the API's identity. Which approach should you implement?
Generate a user delegation SAS by calling GetUserDelegationKey and building the SAS with BlobSasBuilder.
Set the container access level to Blob and return the blob URL without a SAS.
Generate a service SAS by signing BlobSasBuilder with the storage account key.
Generate an account SAS in the Azure portal and store it in Azure Key Vault.
Answer Description
A user delegation SAS is created by first obtaining a user delegation key through an Azure AD authenticated call to GetUserDelegationKey, then constructing the signature with BlobSasBuilder (or equivalent). Because the SAS is tied to the Azure AD principal, disabling that identity or revoking the delegation key immediately invalidates the token. A service or account SAS is signed with a storage account key; revoking it later would require regenerating the account keys, which the scenario forbids. Making the container public would not meet the security constraints.
Ask Bash
What is a User Delegation SAS in Azure Storage?
How does BlobSasBuilder help create a SAS token?
Why is Azure AD authentication preferred over storage account keys for SAS tokens?
You need to deploy a Linux-based Azure Container Instance (ACI) that will pull the image api-backend:1.0 from the private Azure Container Registry myacr.azurecr.io. Company policy forbids storing registry usernames or passwords in deployment scripts, so the ACI must authenticate to the registry by using a previously created user-assigned managed identity. When you run az container create, which additional parameter satisfies this requirement?
--assign-identity /subscriptions/<subscription-id>/resourceGroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/aci-pull-id
--use-acr-credential
--registry-username $(ACR_USER) --registry-password $(ACR_PASS)
--secure-context-type acr
Answer Description
The Azure CLI authenticates an Azure Container Instance to a private Azure Container Registry in one of two ways: explicit registry credentials or a managed identity attached to the container group. To use a user-assigned managed identity you add the --assign-identity parameter and supply the identity's resource ID or name. The CLI then sets the identity property on the container group, allowing the identity (once granted AcrPull permission) to obtain an access token for the registry. None of the other parameters are valid for enabling managed-identity based registry authentication; they either refer to username/password authentication or are not recognized by az container create.
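The same identity wiring can be expressed in a YAML container-group deployment. In the sketch below, the subscription ID is a placeholder and the `identity` field on the registry credential tells ACI which managed identity to use when pulling the image:

```yaml
apiVersion: '2021-10-01'
location: eastus
name: api-backend-group
identity:
  type: UserAssigned
  userAssignedIdentities:
    '/subscriptions/<subscription-id>/resourceGroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/aci-pull-id': {}
properties:
  osType: Linux
  imageRegistryCredentials:
  - server: myacr.azurecr.io
    identity: '/subscriptions/<subscription-id>/resourceGroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/aci-pull-id'
  containers:
  - name: api-backend
    properties:
      image: myacr.azurecr.io/api-backend:1.0
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
type: Microsoft.ContainerInstance/containerGroups
```

The identity still needs the AcrPull role on myacr.azurecr.io before the pull will succeed.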
Ask Bash
What is a user-assigned managed identity in Azure?
What are AcrPull permissions in Azure Container Registry?
How does the --assign-identity parameter in az container create work?
You are designing a microservice that stores user shopping carts in an Azure Cosmos DB account configured with multiple read regions. Each user must always see their most recent updates immediately after they are written, but global read latency must be as low as possible and cross-user strong consistency is unnecessary. Which consistency level meets these requirements?
Bounded staleness
Consistent prefix
Strong
Session
Answer Description
Session consistency provides read-your-writes, monotonic reads, monotonic writes, and write-follows-reads guarantees within a single client session, so a user always sees their own latest cart changes. Because the guarantee is scoped to the session, the database can serve reads from the nearest replica, keeping latency low. Strong consistency would satisfy all users but incurs the highest latency across regions. Bounded staleness limits lag to a configured window but does not guarantee that a read immediately reflects the caller's last write. Consistent prefix preserves the order of writes yet can still return data that is missing the most recent update, violating the requirement.
Ask Bash
What are the different consistency levels in Azure Cosmos DB?
Why is session consistency better suited than strong consistency in this scenario?
What factors influence read latency in Azure Cosmos DB?
You manage an Azure API Management instance that protects its operations with Azure AD-issued JWT bearer tokens. Compliance requires that every tenant, identified by the tenantId claim inside each token, be limited to at most 1 000 calls per one-hour period across the entire API. Other tenants must not be affected by a busy tenant's traffic. Which inbound policy should you implement, and how should you configure it to meet the requirement?
Insert a quota-by-key policy with calls="1000", renewal-period="3600", counter-key="@(context.Request.Headers["tenantId"])", applied at the API scope.
Insert a rate-limit-by-key policy with calls="1000", renewal-period="3600", counter-key="@(context.Principal.Claims["tenantId"].Value)", applied at the API scope.
Declare a set-variable policy that stores tenantId, followed by a quota policy referencing that variable to cap requests at 1 000 per hour.
Insert a rate-limit policy with calls="1000", renewal-period="3600" at the product scope; no key is needed because the policy counts per caller automatically.
Answer Description
The rate-limit-by-key policy is designed to throttle traffic for each distinct value of a caller-supplied key. By setting calls="1000" and renewal-period="3600" you define a sliding counter of 1 000 calls per 3 600 seconds (one hour). Supplying counter-key="@(context.Principal.Claims["tenantId"]?.Value)" uses the tenantId claim from the already validated JWT as the discriminator, so each tenant gets an independent counter. The simpler rate-limit policy cannot separate traffic by tenant, quota policies accumulate over a subscription's lifetime, and adding an extra variable with a separate quota step is unnecessary because rate-limit-by-key performs the task in a single policy statement.
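Expressed as an inbound policy fragment, the configuration might look like the sketch below. The claim-extraction expression mirrors the one given in the answer and assumes the JWT has already been validated earlier in the pipeline:

```xml
<inbound>
    <base />
    <!-- validate-jwt (configured elsewhere) has already checked the bearer token -->
    <rate-limit-by-key calls="1000" renewal-period="3600"
        counter-key="@(context.Principal.Claims[&quot;tenantId&quot;]?.Value)" />
</inbound>
```

Because the counter key is the tenantId claim, each tenant gets its own 1 000-call counter per hour, so one busy tenant cannot exhaust another tenant's allowance.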
Ask Bash
What is a JWT bearer token?
How does the rate-limit-by-key policy function in Azure API Management?
Why is rate-limit-by-key better suited for tenant-specific limits than quota or rate-limit policies?
You register a single-tenant web API named ContosoApi in Microsoft Entra ID. A separate daemon application will call the API by using the client-credentials grant. The API must authorize calls only when the incoming access token contains the role Orders.ReadWrite and there is no user context. Which configuration should you perform for ContosoApi in the Azure portal?
Create an Azure RBAC role assignment granting the client application Contributor access to the ContosoApi App Service.
Create an application role named Orders.ReadWrite in ContosoApi and assign that role to the client application's service principal.
Define a delegated permission scope named Orders.ReadWrite in ContosoApi and require admin consent for the client application.
Add optional JWT claims for roles in ContosoApi and mark the claim as essential.
Answer Description
Because the client application authenticates with the client-credentials flow, the access token will not contain user delegated scopes. Instead, the token can carry application roles that are granted to the calling service principal. Defining an application role in ContosoApi's app registration and assigning that role to the client application causes Entra ID to issue a roles claim (Orders.ReadWrite) in every token that the client obtains, regardless of user context. Defining delegated scopes would only work in flows that involve a user, optional claims do not create or control role issuance, and Azure RBAC on the App Service is unrelated to token claims used by a custom API.
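A sketch of the appRoles entry you would add to ContosoApi's app registration manifest. The GUID shown is a placeholder you generate yourself, and the description text is illustrative:

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "Application" ],
      "description": "Allows the calling application to read and write orders without a signed-in user.",
      "displayName": "Orders.ReadWrite",
      "id": "00000000-0000-0000-0000-000000000000",
      "isEnabled": true,
      "value": "Orders.ReadWrite"
    }
  ]
}
```

Setting allowedMemberTypes to "Application" makes this an app-only role. After saving the manifest, you grant the role to the daemon through its app registration's API permissions blade (Application permissions, followed by admin consent), after which tokens acquired via the client-credentials flow carry "roles": ["Orders.ReadWrite"].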
Ask Bash
What is a client-credentials grant in Microsoft Entra ID?
What is the difference between application roles and delegated permissions in Azure?
How do roles in the JWT claim affect authorization for a web API in Azure?
An Azure App Service Web App you manage must comply with security standards that prohibit TLS 1.0 and TLS 1.1. You have already uploaded a valid TLS certificate and bound it to your custom domain. Which portal configuration will ensure the app only accepts connections that negotiate TLS 1.2 or later?
Add the sslFlags attribute to the web.config file to require SSL negotiation.
Create a new HTTPS listener in Azure Application Gateway that allows only the TLS1_2 protocol.
Set Minimum TLS Version to 1.2 in the TLS/SSL settings blade of the App Service.
Add an application setting named WEBSITE_DISABLE_TLS12 with a value of true.
Answer Description
Azure App Service exposes a setting called Minimum TLS Version in the TLS/SSL settings blade. Setting this value to 1.2 instructs the platform to refuse handshakes that attempt to use TLS 1.0 or 1.1, thereby satisfying the compliance requirement. Changing web.config SSL flags only affects IIS-level requirements inside the sandbox and does not block older TLS versions at the front door. Application settings such as WEBSITE_DISABLE_TLS12 do not exist, and enforcing a protocol version through an external Application Gateway listener would protect only traffic routed through that gateway, not direct requests that resolve to the app's default endpoint.
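The same setting can be applied outside the portal with the Azure CLI, which is useful for scripting the change across many apps. The resource group and app names below are placeholders:

```shell
# Enforce TLS 1.2 as the minimum accepted protocol version for the web app
az webapp config set \
    --resource-group rg-contoso \
    --name contoso-webapp \
    --min-tls-version 1.2
```

This updates the same minTlsVersion site property that the portal's TLS/SSL settings blade controls, so the effect is identical.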
Ask Bash
What is TLS and why is it important for web applications?
What is the Minimum TLS Version setting in Azure App Service?
Why does configuring an external Azure Application Gateway not solve the issue completely?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.