
Queue Backpressure Control Pattern

Designing limits, buffers, and retries to protect the system from producer-consumer rate imbalances.


Introduction

The real problem with queue-based asynchronous processing is not throughput but backpressure. When consumers slow down or fail, the queue backlog grows quickly, and the delay eventually spreads into system-wide latency and timeouts. This article organizes practical patterns for handling backpressure at the design level.

Queue Backpressure Control Pattern cover
Free image based on Wikimedia Commons

Problem definition

Pipelines that do not account for backpressure collapse unpredictably under peak traffic.

  • There is no producer rate limit, so the queue length grows without bound.
  • The consumer retry policy is aggressive, so duplicates of the same message multiply explosively.
  • There is no priority policy, so important tasks wait for a long time.

The key is to control input and isolate failures: rate limits, retry budgets, and dead-letter separation must be designed together.
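Input control usually starts on the producer side. A minimal token-bucket rate limiter can cap the transmission rate before messages ever reach the queue; this is an illustrative sketch, and the class and method names are assumptions, not part of any specific broker API:

```typescript
// Minimal token-bucket rate limiter for producers (illustrative sketch).
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,       // burst size
    private readonly refillPerSecond: number, // sustained rate
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the producer may send one message right now.
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

When `tryAcquire` returns false, the producer can drop, buffer locally, or apply backpressure upstream; which of the three is acceptable is exactly the policy decision this article argues must be made in advance.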

Key concepts

| Perspective | Design criterion | Verification point |
| --- | --- | --- |
| Input control | Producer rate limit | Queue backlog growth rate |
| Consumption control | Consumer concurrency limit | Job success rate |
| Retry | Exponential backoff + budget | Duplicate processing rate |
| Quarantine | DLQ + priority queue | Latency of critical tasks |

Backpressure is an application policy, not an infrastructure option. You must decide in advance which tasks to give up and what to keep in case of processing failure.
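That give-up decision can be made explicit as an admission policy. The sketch below is one possible shape; the priority names, thresholds, and `admit` function are assumptions for illustration, not a standard API:

```typescript
type Priority = "critical" | "normal" | "bulk";

// Hypothetical load-shedding policy: as backlog pressure rises,
// shed bulk work first and keep critical jobs until the end.
export function admit(priority: Priority, backlogRatio: number): boolean {
  if (backlogRatio < 0.8) return true;      // healthy: accept everything
  if (priority === "critical") return true; // always keep critical jobs
  if (priority === "normal") return backlogRatio < 0.95;
  return false;                             // bulk is shed first
}
```

Encoding the policy as a pure function like this keeps it testable and makes the "what do we drop under pressure" decision reviewable in code rather than implicit in queue behavior.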

Code example 1: Retry budget control

export function nextRetryDelay(attempt: number, maxAttempts: number) {
  // Retry budget exhausted: signal the caller to stop retrying.
  if (attempt >= maxAttempts) return null;

  const base = 500;
  // Exponential backoff capped at 30s, plus jitter to avoid retry storms.
  const delay = Math.min(base * 2 ** attempt, 30_000);
  const jitter = Math.floor(Math.random() * 300);
  return delay + jitter;
}

// Once the budget is spent, the message is quarantined instead of retried.
export function shouldMoveToDlq(attempt: number, maxAttempts: number) {
  return attempt >= maxAttempts;
}
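A retry driver built on such a budget might look like the following sketch. To keep it self-contained, the delay policy is injected as a parameter (pair it with `nextRetryDelay` above); `runWithRetryBudget` and its signature are illustrative, not a library API:

```typescript
// Generic retry driver: retries until the delay policy returns null,
// then reports failure so the caller can move the job to a DLQ.
export async function runWithRetryBudget(
  job: () => Promise<void>,
  nextDelay: (attempt: number) => number | null,
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise((resolve) => setTimeout(resolve, ms)),
): Promise<{ ok: boolean; attempts: number }> {
  for (let attempt = 0; ; attempt += 1) {
    try {
      await job();
      return { ok: true, attempts: attempt + 1 };
    } catch {
      const delay = nextDelay(attempt + 1);
      // Budget exhausted: stop retrying and let the caller quarantine the job.
      if (delay === null) return { ok: false, attempts: attempt + 1 };
      await sleep(delay);
    }
  }
}
```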

Code example 2: Queue consumer concurrency limits

const MAX_IN_FLIGHT = 20;
let inFlight = 0;

export async function consume(message: QueueMessage) {
  // Shed load instead of buffering: requeue with a short delay when saturated.
  if (inFlight >= MAX_IN_FLIGHT) {
    await message.nack({ requeue: true, delayMs: 300 });
    return;
  }

  inFlight += 1;
  try {
    await processMessage(message);
    await message.ack();
  } catch {
    // Failed jobs go back to the queue; the retry budget decides when to stop.
    await message.nack({ requeue: true, delayMs: 300 });
  } finally {
    inFlight -= 1;
  }
}

Architecture flow


Clean Architecture Layer Structure

By managing the backpressure policy as an application layer use case and separating the message broker implementation into the infrastructure layer, testing and operational tuning become easier.

| Layer | Responsibility | How applied |
| --- | --- | --- |
| Entities | Task state model | Job, RetryPolicy |
| Use Cases | Processing/retry policy | ProcessQueueJob |
| Interface Adapters | Queue consumer/producer | KafkaAdapter |
| Frameworks | Broker, metrics | Kafka, Prometheus |
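The layering above can be sketched as a use case that depends only on a port interface; a `KafkaAdapter` in the infrastructure layer would then implement that port. The `QueuePort` interface, `Job` shape, and backoff constant here are illustrative assumptions:

```typescript
interface Job {
  id: string;
  attempt: number;
}

// Port owned by the application layer; implemented in infrastructure
// (e.g. by a hypothetical KafkaAdapter).
interface QueuePort {
  ack(job: Job): Promise<void>;
  requeue(job: Job, delayMs: number): Promise<void>;
  toDlq(job: Job): Promise<void>;
}

export class ProcessQueueJob {
  constructor(
    private readonly queue: QueuePort,
    private readonly handler: (job: Job) => Promise<void>,
    private readonly maxAttempts: number,
  ) {}

  async execute(job: Job): Promise<"acked" | "requeued" | "dlq"> {
    try {
      await this.handler(job);
      await this.queue.ack(job);
      return "acked";
    } catch {
      // Retry budget lives here, in the use case, not in the broker config.
      if (job.attempt + 1 >= this.maxAttempts) {
        await this.queue.toDlq(job);
        return "dlq";
      }
      await this.queue.requeue(job, 500 * 2 ** job.attempt);
      return "requeued";
    }
  }
}
```

Because the use case only sees `QueuePort`, the retry and DLQ policy can be unit-tested with an in-memory fake, and the broker can be tuned or swapped without touching the policy.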

Infrastructure diagram


Tradeoffs

  • Strong input limits increase system stability but sacrifice instantaneous throughput.
  • Setting a retry budget prevents overflow, but some tasks may be abandoned early.
  • Priority queues protect critical tasks, but increase operational complexity.

Summary

The purpose of backpressure control is to prevent system collapse, not to immediately process all tasks. By designing input limits, retry budgets, and isolation strategies together, stability can be maintained even during peak periods.

Image source

  • Cover: source link
  • License: CC BY-SA 2.0 / Author: Derek Harper
  • Note: The image was downloaded from Wikimedia Commons under its free license and optimized as a 1600px JPG.
