502 Bad Gateway error when uploading large files (~5GB) through Azure API Management BasicV2 tier

Diane Bloodworth 0 Reputation points
2026-03-11T16:26:04.1366667+00:00

I'm experiencing a consistent 502 Bad Gateway error when uploading large files (~5 GB) through the Azure API Management BasicV2 tier. I need help confirming whether this is a known limitation of the BasicV2 tier and, if so, whether there is a workaround.

Environment:

  • Production: API Management BasicV2 tier
  • QA: API Management Basic tier, platform version stv2.1 — works correctly

Current backend policy (same in both environments):


<backend>
    <forward-request timeout="1800" buffer-request-body="false" />
</backend>

Observed behavior:

  • Large file uploads via POST fail with HTTP 502 after exactly 4 minutes
  • The exact same policy works correctly in QA (Basic/stv2.1)
  • No entry is recorded in ApiManagementGatewayLogs for the failed request — this strongly suggests the 502 is being generated by the underlying infrastructure before reaching the APIM gateway layer
  • The 4-minute failure matches the Azure Load Balancer idle timeout
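
For reference, this is roughly how we reproduce the failure from the client side. A sketch only: the APIM URL and subscription key below are placeholders for our real values, and `requests` is a third-party dependency (imported lazily so the chunk helper stays stdlib-only).

```python
# Repro sketch: stream a generated body so nothing is buffered client-side,
# then report how long the connection survived before the 502 / reset.
import time

CHUNK = 4 * 1024 * 1024  # send the body in 4 MiB pieces

def body_chunks(total_bytes, chunk=CHUNK):
    """Yield exactly `total_bytes` of zero-filled data in `chunk`-sized pieces."""
    sent = 0
    while sent < total_bytes:
        n = min(chunk, total_bytes - sent)
        yield b"\x00" * n
        sent += n

def upload(url, total_bytes, key):
    import requests  # third-party: pip install requests
    start = time.monotonic()
    try:
        r = requests.post(
            url,
            data=body_chunks(total_bytes),  # generator => streamed request body
            headers={"Ocp-Apim-Subscription-Key": key,
                     "Content-Length": str(total_bytes)},
            timeout=(10, 3600),             # generous read timeout; failure comes sooner
        )
        status = r.status_code
    except requests.RequestException as exc:
        status = f"connection error: {exc}"
    print(f"result={status} after {time.monotonic() - start:.0f}s")

# upload("https://<apim-name>.azure-api.net/files/upload", 5 * 1024**3, "<key>")
```

Against production (BasicV2) this reports a failure at ~240 s; against QA (Basic/stv2.1) the same call completes.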

What we've already investigated:

  • The buffer-request-body="false" attribute is intended to stream the request body directly to the backend without buffering, keeping the TCP connection active and avoiding the idle timeout. This works in Basic/stv2.1 but not in BasicV2.
  • Setting buffer-request-body="true" is not viable because the file is ~5GB and would exhaust the gateway memory.
  • The Azure portal's "Diagnose and solve problems" tool pointed to SNAT port exhaustion documentation, but the failure pattern (exactly 4 minutes, no GatewayLogs entry) points to an infrastructure-level idle timeout rather than SNAT exhaustion.
  • Increasing the timeout value has no effect, since the connection is dropped at the infrastructure level before the policy timeout is reached.

Key question: Does BasicV2 handle buffer-request-body="false" differently than Basic/stv2.1? Is there a known limitation in BasicV2 that prevents large streaming uploads, and is there a supported configuration to work around it?

Fallback plan: We are aware of the SAS token architecture pattern (client uploads directly to Blob Storage, bypassing APIM entirely) and are considering implementing it. However, we would prefer to confirm first whether this is a fixable configuration issue or a hard platform limitation before committing to an architectural change.
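
If we do fall back to the SAS pattern, the direct-to-Blob upload can itself be chunked with the Put Block / Put Block List REST operations, so no single HTTP request has to stay open for the whole 5 GB. A rough stdlib-only sketch of what we have in mind (the base URL and SAS token are placeholders issued server-side with write+create permission; the REST calls are untested against a live storage account):

```python
# Fallback sketch: upload one block per request, then commit the block list.
import base64
import urllib.request

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MiB per block, well under the service limit

def block_id(index):
    """Block IDs must be base64 and the same length for every block in the blob.
    6 ASCII digits encode to 8 base64 chars with no '=' padding to URL-escape."""
    return base64.b64encode(f"{index:06d}".encode("ascii")).decode("ascii")

def block_list_xml(block_ids):
    """Body for Put Block List: commit the uploaded blocks in order."""
    latest = "".join(f"<Latest>{bid}</Latest>" for bid in block_ids)
    return f'<?xml version="1.0" encoding="utf-8"?><BlockList>{latest}</BlockList>'

def upload_blob(base_url, sas_token, stream):
    """base_url: https://<account>.blob.core.windows.net/<container>/<blob>"""
    ids = []
    index = 0
    while True:
        chunk = stream.read(BLOCK_SIZE)
        if not chunk:
            break
        bid = block_id(index)
        req = urllib.request.Request(
            f"{base_url}?comp=block&blockid={bid}&{sas_token}",
            data=chunk, method="PUT")
        urllib.request.urlopen(req)
        ids.append(bid)
        index += 1
    commit = urllib.request.Request(
        f"{base_url}?comp=blocklist&{sas_token}",
        data=block_list_xml(ids).encode("utf-8"), method="PUT")
    urllib.request.urlopen(commit)
```

Each block is a short-lived request, so the 4-minute idle timeout never comes into play; failed blocks can also be retried individually.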


Azure API Management

An Azure service that provides a hybrid, multi-cloud management platform for APIs.
