Queue Backpressure Control Pattern
Designing limits, buffers, and retries to protect the system from producer-consumer rate imbalances.

Introduction
The real problem with queue-based asynchronous processing is not throughput but backpressure. When consumers slow down or fail, the queue backlog grows quickly, and the delay eventually propagates into system-wide latency and timeouts. This article organizes practical patterns for handling backpressure at the design level.

Problem definition
Pipelines designed without backpressure in mind fail unpredictably under peak traffic.
- There is no limit on the producer's send rate, so queue length grows without bound.
- The consumer retry policy is aggressive, so duplicate messages multiply.
- There is no priority policy, so important tasks wait behind everything else.
The key is to control input and isolate failures: rate limits, retry budgets, and dead-letter separation must be designed together.
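As one illustration of input control, a producer-side token bucket can cap the send rate before messages ever reach the broker. This is a minimal sketch; `TokenBucket` and `tryAcquire` are illustrative names, not from any specific library, and capacity and refill rate are tuning parameters.

```typescript
// Minimal token-bucket limiter for producer-side input control.
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSecond: number,
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  // Returns true if a publish is allowed right now, false if the
  // producer should back off (or shed the message).
  tryAcquire(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A producer would call `tryAcquire()` before each publish and either delay or drop the message when it returns false, which directly bounds the queue backlog growth rate.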
Key concepts
| Perspective | Design criterion | Verification point |
|---|---|---|
| Input control | Producer rate limit | Queue backlog growth rate |
| Consumption control | Consumer concurrency limit | Job success rate |
| Retry | Exponential backoff + budget | Duplicate processing rate |
| Isolation | DLQ + priority queue | Critical-task latency |
Backpressure is an application policy, not an infrastructure option. You must decide in advance which tasks to give up and what to keep in case of processing failure.
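That "decide in advance" step can be made explicit as a small shedding policy. The sketch below assumes hypothetical names (`JobClass`, `BackpressurePolicy`, `shouldAccept`); the point is that the drop order is declared as data, not improvised during an incident.

```typescript
// Illustrative backpressure policy: decides, per job class, whether to
// accept new work or shed it once the backlog crosses a threshold.
export type JobClass = "critical" | "standard" | "bulk";

export interface BackpressurePolicy {
  maxBacklog: number;    // backlog size that triggers shedding
  shedOrder: JobClass[]; // classes that are dropped under pressure
}

export function shouldAccept(
  jobClass: JobClass,
  backlog: number,
  policy: BackpressurePolicy,
): boolean {
  if (backlog < policy.maxBacklog) return true;
  // Under pressure, only classes NOT listed for shedding are accepted.
  return !policy.shedOrder.includes(jobClass);
}
```

Keeping the policy as plain data also makes it easy to tune per environment without touching consumer code.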
Code example 1: Retry budget control
```typescript
// Exponential backoff with jitter, capped at 30s.
// Returns null once the retry budget (maxAttempts) is exhausted.
export function nextRetryDelay(
  attempt: number,
  maxAttempts: number,
): number | null {
  if (attempt >= maxAttempts) return null;
  const base = 500; // initial delay in ms
  const delay = Math.min(base * 2 ** attempt, 30_000);
  const jitter = Math.floor(Math.random() * 300); // spreads out retry storms
  return delay + jitter;
}

// A message whose budget is spent moves to the dead-letter queue.
export function shouldMoveToDlq(attempt: number, maxAttempts: number): boolean {
  return attempt >= maxAttempts;
}
```
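Plugging concrete numbers into `nextRetryDelay` shows what the budget actually bounds: the worst-case total time a message spends retrying before it is dead-lettered. The helper is reproduced here (and `totalRetryBudgetMs` is an illustrative addition) so the sketch runs on its own.

```typescript
// Reproduces nextRetryDelay from Code example 1 so this sketch is
// self-contained.
function nextRetryDelay(attempt: number, maxAttempts: number): number | null {
  if (attempt >= maxAttempts) return null;
  const delay = Math.min(500 * 2 ** attempt, 30_000);
  return delay + Math.floor(Math.random() * 300);
}

// Worst-case time a message can spend in the retry path before the DLQ.
export function totalRetryBudgetMs(maxAttempts: number): number {
  let total = 0;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    total += nextRetryDelay(attempt, maxAttempts) ?? 0;
  }
  return total;
}
```

With `maxAttempts = 5`, the schedule is roughly 0.5s, 1s, 2s, 4s, 8s plus jitter, so a message occupies the retry path for at most about 17 seconds before moving to the DLQ.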
Code example 2: Queue consumer concurrency limits
```typescript
// Simple in-process concurrency gate. Messages above the limit are
// nacked with a short delay so the broker redelivers them later.
// QueueMessage and processMessage are assumed to come from the
// surrounding application.
const MAX_IN_FLIGHT = 20;
let inFlight = 0;

export async function consume(message: QueueMessage) {
  if (inFlight >= MAX_IN_FLIGHT) {
    await message.nack({ requeue: true, delayMs: 300 });
    return;
  }
  inFlight += 1;
  try {
    await processMessage(message);
    await message.ack();
  } catch {
    // Processing failed: hand the message back for redelivery instead
    // of losing it; the retry budget decides when it moves to the DLQ.
    await message.nack({ requeue: true, delayMs: 300 });
  } finally {
    inFlight -= 1;
  }
}
```
Architecture flow
Clean Architecture Layer Structure
Managing the backpressure policy as an application-layer use case, and pushing the message-broker implementation into the infrastructure layer, makes both testing and operational tuning easier.
| Layer | Responsibility | Example |
|---|---|---|
| Entities | Task state model | Job, RetryPolicy |
| Use Cases | Processing/retry policy | ProcessQueueJob |
| Interface Adapters | Queue consumer/producer | KafkaAdapter |
| Frameworks | Broker, metrics | Kafka, Prometheus |
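A minimal sketch of this separation, assuming hypothetical port and type names (`Job`, `QueuePort`): the use case owns the retry policy, while ack/retry/DLQ mechanics stay behind an interface that a Kafka (or any broker) adapter implements.

```typescript
export interface Job {
  id: string;
  attempt: number;
  payload: unknown;
}

// Port implemented by the infrastructure layer (e.g. a KafkaAdapter).
export interface QueuePort {
  ack(job: Job): Promise<void>;
  retryLater(job: Job, delayMs: number): Promise<void>;
  moveToDlq(job: Job): Promise<void>;
}

// Application-layer use case: owns the backoff and DLQ decision,
// knows nothing about the concrete broker.
export class ProcessQueueJob {
  constructor(
    private readonly queue: QueuePort,
    private readonly handler: (job: Job) => Promise<void>,
    private readonly maxAttempts = 5,
  ) {}

  async execute(job: Job): Promise<void> {
    try {
      await this.handler(job);
      await this.queue.ack(job);
    } catch {
      if (job.attempt + 1 >= this.maxAttempts) {
        await this.queue.moveToDlq(job);
      } else {
        const delay = Math.min(500 * 2 ** job.attempt, 30_000);
        await this.queue.retryLater(
          { ...job, attempt: job.attempt + 1 },
          delay,
        );
      }
    }
  }
}
```

Because `QueuePort` is a plain interface, the use case can be unit-tested with an in-memory fake, and the broker can be swapped or retuned without touching the policy.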
Infrastructure diagram (image not included)
Tradeoffs
- Strong input limits increase system stability but sacrifice instantaneous throughput.
- Setting a retry budget prevents overflow, but some tasks may be abandoned early.
- Priority queues protect critical tasks, but increase operational complexity.
Conclusion
The purpose of backpressure control is to prevent system collapse, not to immediately process all tasks. By designing input limits, retry budgets, and isolation strategies together, stability can be maintained even during peak periods.
Image source
- Cover: source link
- License: CC BY-SA 2.0 / Author: Derek Harper
- Note: The freely licensed image was downloaded from Wikimedia Commons and optimized as a 1600px JPG.