APPLIES TO: Developer | Basic | Basic v2 | Standard | Standard v2 | Premium | Premium v2
The llm-token-limit policy prevents large language model (LLM) API usage spikes on a per-key basis by limiting consumption of language model tokens to a specified rate (number per minute), a quota over a specified period, or both. When a specified token rate limit is exceeded, the caller receives a 429 Too Many Requests response status code. When a specified quota is exceeded, the caller receives a 403 Forbidden response status code.
Note
Set the policy's elements and child elements in the order provided in the policy statement. Learn more about how to set or edit API Management policies.
Supported model APIs
This policy works with LLM APIs added to API Management that conform to one of the following API schemas:
- OpenAI Chat Completions or Responses API
- Anthropic Messages API (currently supported in API Management v2 tiers)
Policy statement
```xml
<llm-token-limit counter-key="key value"
    tokens-per-minute="number"
    token-quota="number"
    token-quota-period="Hourly | Daily | Weekly | Monthly | Yearly"
    estimate-prompt-tokens="true | false"
    retry-after-header-name="custom header name, replaces default 'Retry-After'"
    retry-after-variable-name="policy expression variable name"
    remaining-quota-tokens-header-name="header name"
    remaining-quota-tokens-variable-name="policy expression variable name"
    remaining-tokens-header-name="header name"
    remaining-tokens-variable-name="policy expression variable name"
    tokens-consumed-header-name="header name"
    tokens-consumed-variable-name="policy expression variable name" />
```
Attributes
| Attribute | Description | Required | Default |
|---|---|---|---|
| `counter-key` | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
| `tokens-per-minute` | The maximum number of tokens consumed by prompt and completion per minute. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| `token-quota` | The maximum number of tokens allowed during the time interval specified in the `token-quota-period`. Policy expressions are allowed. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| `token-quota-period` | The length of the fixed window after which the `token-quota` resets. The value must be one of the following: `Hourly`, `Daily`, `Weekly`, `Monthly`, `Yearly`. The start time of a quota period is calculated as the UTC timestamp truncated to the unit (hour, day, and so on) used for the period. Policy expressions are allowed. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| `estimate-prompt-tokens` | Boolean value that determines whether to estimate the number of tokens required for a prompt: `true` to estimate prompt tokens in advance based on the prompt schema in the API; `false` to use actual token usage from the model response without estimating. For token counting and estimation behavior, see Considerations for token counts and estimation. | Yes | N/A |
| `retry-after-header-name` | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
| `retry-after-variable-name` | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | N/A |
| `remaining-quota-tokens-header-name` | The name of a response header whose value after each policy execution is the estimated number of remaining tokens corresponding to the `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
| `remaining-quota-tokens-variable-name` | The name of a variable that after each policy execution stores the estimated number of remaining tokens corresponding to the `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
| `remaining-tokens-header-name` | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to the `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
| `remaining-tokens-variable-name` | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to the `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
| `tokens-consumed-header-name` | The name of a response header whose value is the number of tokens consumed by both prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
| `tokens-consumed-variable-name` | The name of a variable initialized to the estimated prompt token count in the `backend` section (or zero if `estimate-prompt-tokens` is `false`), then updated with the actual reported count in the `outbound` section. Policy expressions aren't allowed. | No | N/A |
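
As the table notes, a rate limit and a quota can be specified together in one policy instance. The following sketch is illustrative only (the limit values and the `x-retry-after-seconds` header name are assumptions, not recommendations); it enforces both a per-minute rate and a monthly quota for each subscription:

```xml
<!-- Illustrative sketch: per-minute rate limit plus monthly quota per subscription -->
<llm-token-limit
    counter-key="@(context.Subscription.Id)"
    tokens-per-minute="5000"
    token-quota="1000000"
    token-quota-period="Monthly"
    estimate-prompt-tokens="false"
    retry-after-header-name="x-retry-after-seconds" />
```

With this configuration, exceeding the per-minute rate returns 429 and exceeding the monthly quota returns 403, with the recommended retry interval in the custom header in either case.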
Usage
- Policy sections: inbound
- Policy scopes: global, workspace, product, API, operation
- Gateways: classic, v2, self-hosted, workspace
Usage notes
- This policy can be used multiple times per policy definition.
- This policy can optionally be configured when adding an LLM API using the portal.
- The value of `remaining-quota-tokens-variable-name` or `remaining-quota-tokens-header-name` is an estimate and may be larger than expected based on actual token consumption. For more information, see Considerations for token counts and estimation.
- API Management uses a single counter for each `counter-key` value that you specify in the policy. The counter is updated at all scopes at which the policy is configured with that key value. If you want to configure separate counters at different scopes (for example, a specific API or product), specify different key values at the different scopes. For example, append a string that identifies the scope to the value of an expression.
- The v2 tiers use a token bucket algorithm for rate limiting, which differs from the sliding window algorithm in classic tiers. Because of this implementation difference, when you configure token limits in the v2 tiers at multiple scopes by using the same `counter-key`, make sure that the `tokens-per-minute` value is consistent across all policy instances. Inconsistent values can cause unpredictable behavior. For more information, see Advanced request throttling with Azure API Management.
- This policy tracks token usage independently at each gateway where it's applied, including workspace gateways and regional gateways in a multi-region deployment. It doesn't aggregate token counts across the entire instance.
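
The scope-separation note above can be put into practice by building the `counter-key` from an expression. In this sketch, the `llm-api:` prefix is an arbitrary illustrative string; an API-scoped policy configured this way keeps its own counter even if a product-scoped policy keys on the same subscription ID:

```xml
<!-- API-scope policy: the prefix keeps this counter distinct from
     a product-scope counter keyed on the bare subscription ID -->
<llm-token-limit
    counter-key="@("llm-api:" + context.Subscription.Id)"
    tokens-per-minute="2000"
    estimate-prompt-tokens="false" />
```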
Considerations for token counts and estimation
The policy monitors and enforces token limits using actual token usage data returned from the LLM endpoint. You can optionally enable prompt token estimation to reduce unnecessary backend requests. The following considerations apply.
- Token types: The policy currently counts prompt and completion tokens only.
- Without prompt token estimation (`estimate-prompt-tokens="false"`): The policy uses actual token usage values from the `usage` section of the LLM API response. Prompts may be sent to the backend even when the limit is exceeded; this is detected from the response, after which subsequent requests are blocked until the limit resets.
- With prompt token estimation (`estimate-prompt-tokens="true"`): The policy estimates prompt tokens from the prompt schema in the API definition before sending the request. This can reduce unnecessary backend requests when the limit is already exceeded, but may reduce performance.
- Streaming: When streaming is enabled in the API request (`stream: true`), prompt tokens are always estimated regardless of the `estimate-prompt-tokens` setting. Completion tokens are also estimated when responses are streamed.
- Image input: For models that accept image input, image tokens are generally counted by the backend LLM and included in limit and quota calculations. However, when streaming is enabled or `estimate-prompt-tokens` is set to `true`, the policy overcounts each image as a maximum of 1,200 tokens.
- Concurrency: Because the exact number of tokens consumed can't be determined until responses are received from the backend, concurrent or near-concurrent requests can temporarily exceed the configured token limit. Once responses are processed and the limit is exceeded, subsequent requests are blocked until the limit resets.
- Remaining quota accuracy: The estimated remaining token quota returned in `remaining-quota-tokens-variable-name` or `remaining-quota-tokens-header-name` may be larger than expected based on actual token consumption, and becomes more accurate as the quota is approached.
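
To get the early-rejection behavior described above, enable estimation so that a prompt projected to exceed the limit is blocked before it reaches the backend. A minimal sketch (the limit value is illustrative):

```xml
<!-- Illustrative sketch: estimate prompt tokens up front so over-limit
     requests are rejected with 429 without calling the backend -->
<llm-token-limit
    counter-key="@(context.Request.IpAddress)"
    tokens-per-minute="5000"
    estimate-prompt-tokens="true" />
```

The trade-off, per the considerations above, is some added processing per request and possible overcounting (for example, for image inputs).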
Examples
Token rate limit
In the following example, the token rate limit of 5,000 per minute is keyed by the caller IP address. The policy doesn't estimate the number of tokens required for a prompt. After each policy execution, the remaining tokens allowed for that caller IP address in the time period are stored in the variable `remainingTokens`.
```xml
<policies>
    <inbound>
        <base />
        <llm-token-limit
            counter-key="@(context.Request.IpAddress)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="false"
            remaining-tokens-variable-name="remainingTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```
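
The `remainingTokens` variable set by this example can then be read elsewhere in the policy definition, for instance to return the value to the caller. The following sketch (the `x-remaining-tokens` header name is an illustrative choice, not part of the policy) adds a set-header policy to the outbound section:

```xml
<outbound>
    <base />
    <!-- Return the remaining tokens-per-minute allowance to the caller;
         fall back to "0" if the variable wasn't set -->
    <set-header name="x-remaining-tokens" exists-action="override">
        <value>@(context.Variables.ContainsKey("remainingTokens") ? context.Variables["remainingTokens"].ToString() : "0")</value>
    </set-header>
</outbound>
```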
Token quota
In the following example, the token quota of 100,000 is keyed by the subscription ID and resets monthly. After each policy execution, the number of remaining tokens allowed for that subscription ID in the time period is stored in the variable `remainingQuotaTokens`.
```xml
<policies>
    <inbound>
        <base />
        <llm-token-limit
            counter-key="@(context.Subscription.Id)"
            token-quota="100000"
            token-quota-period="Monthly"
            remaining-quota-tokens-variable-name="remainingQuotaTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```
Related content
For more information about working with policies, see:
- Tutorial: Transform and protect your API
- Policy reference for a full list of policy statements and their settings
- Policy expressions
- Set or edit policies
- Reuse policy configurations
- Policy snippets repo
- Policy samples repo
- Azure API Management policy toolkit
- Get Copilot assistance to create, explain, and troubleshoot policies