nmdra/outbox-pattern-go
Outbox Pattern with Go, PostgreSQL WAL, and Kafka

This repository demonstrates the Transactional Outbox Pattern using:

  • Go
  • PostgreSQL
  • Logical replication (WAL / pgoutput)
  • Kafka

The app creates an order and an outbox event in one DB transaction, then a WAL listener publishes outbox events to Kafka.

Why this exists

In distributed systems, writing business data to a database and publishing an event to a broker is a classic dual-write problem.

The outbox pattern solves this by writing both the business data and an event record in the same transaction in the service's database.

Conceptual Overview

(Diagram: outbox pattern overview)

The Problem: Dual-Write Dilemma

In an event-driven e-commerce system, when a user places an order, two actions must occur:

  1. Save the new order in the database
  2. Send an "OrderCreated" event to a message broker (e.g., Kafka) so the Inventory Service can reserve items

However, this introduces hidden risks:

  • Database succeeds, message fails: The order is saved, but a network issue prevents the event from reaching the broker. As a result, inventory is never updated.

  • Message succeeds, database fails: The event is sent, but the database transaction fails. This leads to inventory being reserved for a non-existent (ghost) order.

Because the database and the message broker are separate systems, they cannot be wrapped in a single atomic transaction.

The Solution: Transactional Outbox Pattern

The Outbox Pattern solves this problem by using the service’s own database as a temporary staging area for messages.

How It Works

  1. Start a Local Transaction: The service receives a request and opens a database transaction.

  2. Update Business Data: Save the core business data (e.g., the new order).

  3. Write to the Outbox Table: Within the same transaction, write the event payload (e.g., the "OrderCreated" message) to a dedicated Outbox table in the same database.

  4. Commit the Transaction: The transaction is committed. Since both operations occur in the same database, this step is atomic: either both succeed or both fail.

  5. Message Relay Process: A separate background process monitors the Outbox table. When it finds new entries, it:

    • Sends them to the message broker
    • Marks them as processed (or deletes them)

Message Relay Strategies

1. Polling Publisher

A scheduled job (e.g., running every few seconds) checks the Outbox table for unsent messages, publishes them, and updates their status.

Pros:

  • Simple to implement
  • No additional infrastructure required

Cons:

  • Constant polling can impact database performance
  • Introduces slight delays in message delivery
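A polling publisher fits in a few lines of Go. Note that the processed_at column, the batch size, and the publish callback are all hypothetical here; this repo's schema has no status column because it relays via CDC instead:

```go
package main

import (
	"context"
	"database/sql"
	"time"
)

const pollInterval = 2 * time.Second // assumed polling period
const batchSize = 100                // assumed max rows claimed per tick

// pollOutbox periodically reads unsent outbox rows, publishes them, and
// marks them processed. processed_at is a hypothetical status column.
func pollOutbox(ctx context.Context, db *sql.DB,
	publish func(topic string, key, value []byte) error) error {
	ticker := time.NewTicker(pollInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			rows, err := db.QueryContext(ctx, `
				SELECT id, topic, aggregate_id, payload
				FROM outbox_events
				WHERE processed_at IS NULL
				ORDER BY id
				LIMIT $1`, batchSize)
			if err != nil {
				return err
			}
			type outboxRow struct {
				id           int64
				topic, key   string
				payload      []byte
			}
			var batch []outboxRow
			for rows.Next() {
				var r outboxRow
				if err := rows.Scan(&r.id, &r.topic, &r.key, &r.payload); err != nil {
					rows.Close()
					return err
				}
				batch = append(batch, r)
			}
			rows.Close()
			for _, r := range batch {
				if err := publish(r.topic, []byte(r.key), r.payload); err != nil {
					return err // row stays unmarked; retried next tick
				}
				if _, err := db.ExecContext(ctx,
					`UPDATE outbox_events SET processed_at = now() WHERE id = $1`,
					r.id); err != nil {
					return err
				}
			}
		}
	}
}

func main() {}
```

Marking a row processed only after a successful publish gives at-least-once delivery: a crash between publish and update causes a redelivery, so consumers should be idempotent.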

2. Change Data Capture (CDC)

Tools like Debezium read the database’s transaction logs (e.g., PostgreSQL WAL or MySQL binlog). When a new row appears in the Outbox table, the event is streamed to the message broker in near real time.

Pros:

  • Near real-time event delivery
  • No polling load on the database

Cons:

  • Requires additional infrastructure and setup
  • More complex to operate and maintain
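With Debezium, the CDC route is mostly configuration. A connector registration for this demo's table might look like the fragment below; the property names come from Debezium's PostgreSQL connector and its outbox EventRouter transform, while the hostnames, credentials, and database name are placeholders (this repo uses its own Go WAL listener rather than Debezium):

```json
{
  "name": "orders-outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "orders",
    "topic.prefix": "app",
    "table.include.list": "public.outbox_events",
    "transforms": "outbox",
    "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
    "transforms.outbox.table.field.event.key": "aggregate_id"
  }
}
```

The EventRouter transform reads each captured outbox row and routes its payload to a topic, keyed by aggregate_id, without any polling queries against the database.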

How this project works

  1. POST /create-order is called.
  2. The app inserts, in a single PostgreSQL transaction:
    • a row in orders
    • a row in outbox_events
  3. A background WAL listener streams logical replication events from PostgreSQL.
  4. When a row is inserted into outbox_events, the listener publishes the event payload to Kafka (the orders.events topic).
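Once the listener has decoded an outbox_events insert, turning the row into a Kafka record is mechanical. A minimal sketch of that step, assuming the row arrives as a column-name-to-value map (the real listener in internal/cdc/wal_listener.go works on pgoutput tuples, so its types differ):

```go
package main

import (
	"errors"
	"fmt"
)

// Record mirrors what the producer hands to Kafka: keying by aggregate_id
// keeps all events for one order in the same partition, preserving order.
type Record struct {
	Topic string
	Key   []byte
	Value []byte
}

// recordFromRow maps a decoded outbox_events row (column name -> text value)
// to a Kafka record. The map shape is assumed for illustration.
func recordFromRow(row map[string]string) (Record, error) {
	topic, key, payload := row["topic"], row["aggregate_id"], row["payload"]
	if topic == "" || key == "" {
		return Record{}, errors.New("outbox row missing topic or aggregate_id")
	}
	return Record{Topic: topic, Key: []byte(key), Value: []byte(payload)}, nil
}

func main() {
	rec, err := recordFromRow(map[string]string{
		"topic":        "orders.events",
		"aggregate_id": "o-1",
		"payload":      `{"order_id":"o-1","quantity":2}`,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s %s\n", rec.Topic, rec.Key, rec.Value)
	// prints orders.events o-1 {"order_id":"o-1","quantity":2}
}
```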
```mermaid
flowchart TD
    %% Client Layer
    subgraph Client
        A[Client]
    end

    %% Application Layer
    subgraph App["Go Application"]
        B[API Handler]
        C["CreateOrderTx()"]
    end

    %% Database Layer
    subgraph DB["Postgres (Single Transaction)"]
        D1[INSERT orders]
        D2[INSERT outbox_events]
        D3[COMMIT]
    end

    %% CDC Layer
    subgraph CDC["CDC WAL Listener"]
        E1["Read WAL (pgoutput)"]
        E2[Parse Logical Replication Message]
        E3[Build OutboxEvent]
    end

    %% Messaging Layer
    subgraph Kafka["Kafka Producer"]
        F1["Publish Event (topic=orders.events)"]
    end

    G[Kafka Broker]

    %% Flow
    A -->|POST /create-order| B
    B --> C
    C --> D1
    C --> D2
    D1 --> D3
    D2 --> D3

    D3 -->|WAL Entry| E1
    E1 --> E2
    E2 --> E3

    E3 --> F1
    F1 -->|key=aggregate_id value=payload| G
```

Architecture

  • cmd/relay/main.go: app entrypoint, HTTP server, dependency wiring.
  • internal/postgres/db.go: transactional order + outbox write.
  • internal/cdc/wal_listener.go: PostgreSQL WAL logical replication listener.
  • internal/kafka/producer.go: Kafka producer wrapper.
  • sql/migrations/000001_init_schema.up.sql: schema for orders and outbox_events.

Prerequisites

  • Docker + Docker Compose
  • Go (for local build/test)
  • Optional: make

Quick start

From repo root:

make up

Then create an order:

make trigger-order

Watch logs:

make logs

Stop and clean everything:

make down

Useful development commands

make help         # list available targets
make tidy         # go mod tidy + go fmt
make test         # run go tests
make build        # build cmd/relay into ./bin
make sqlc         # regenerate sqlc code
make watch        # docker compose watch (live rebuild)

Schema

Initial migration creates:

  • orders(id, product_name, quantity)
  • outbox_events(id, aggregate_type, aggregate_id, topic, payload, created_at)

Notes

  • PostgreSQL is configured with wal_level=logical for CDC.
  • Kafka topic used by this demo is orders.events.
  • This example focuses on pattern mechanics, not production hardening.
