This repository demonstrates the Transactional Outbox Pattern using:
- Go
- PostgreSQL
- Logical replication (WAL / pgoutput)
- Kafka
The app creates an order and an outbox event in one DB transaction, then a WAL listener publishes outbox events to Kafka.
In distributed systems, writing business data to a database and publishing an event to a broker is a classic dual-write problem.
The outbox pattern solves this by writing the business data and an event record to the same database within a single transaction.
In an event-driven e-commerce system, when a user places an order, two actions must occur:
- Save the new order in the database
- Send an "OrderCreated" event to a message broker (e.g., Kafka) so the Inventory Service can reserve items
However, this introduces hidden risks:
- **Database succeeds, message fails:** The order is saved, but a network issue prevents the event from reaching the broker. As a result, inventory is never updated.
- **Message succeeds, database fails:** The event is sent, but the database transaction fails. This leads to inventory being reserved for a non-existent (ghost) order.
Because the database and the message broker are separate systems, they cannot be wrapped in a single atomic transaction.
The Outbox Pattern solves this problem by using the service’s own database as a temporary staging area for messages.
1. **Start a Local Transaction.** The service receives a request and opens a database transaction.
2. **Update Business Data.** Save the core business data (e.g., the new order).
3. **Write to the Outbox Table.** Within the same transaction, write the event payload (e.g., the "OrderCreated" message) to a dedicated Outbox table in the same database.
4. **Commit the Transaction.** Since both operations occur in the same database, this step is atomic: either both succeed or both fail.
5. **Message Relay Process.** A separate background process monitors the Outbox table. When it finds new entries, it:
   - Sends them to the message broker
   - Marks them as processed (or deletes them)
The simplest relay is a polling publisher: a scheduled job (e.g., running every few seconds) checks the Outbox table for unsent messages, publishes them, and updates their status.
Pros:
- Simple to implement
- No additional infrastructure required
Cons:
- Constant polling can impact database performance
- Introduces slight delays in message delivery
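The polling relay's core loop can be sketched in Go. The `Store` interface, the in-memory stand-in, and the `processed_at` column mentioned in the comments are assumptions for illustration (this demo's own schema has no processed flag, since it relies on CDC instead):

```go
package main

import "fmt"

// OutboxRow mirrors the outbox_events columns a relay needs.
type OutboxRow struct {
	ID      int
	Topic   string
	Key     string
	Payload string
}

// Store abstracts the database side of the relay. A real implementation
// would SELECT rows WHERE processed_at IS NULL with FOR UPDATE SKIP LOCKED
// (so several relay instances can run safely) and UPDATE processed_at on
// success; the processed_at column is an assumption for this sketch.
type Store interface {
	FetchUnprocessed(limit int) []OutboxRow
	MarkProcessed(id int)
}

// relayOnce publishes pending rows, marking each one only after a successful
// send. A failed publish stops the batch, so the row is retried on the next
// tick: at-least-once delivery, never silent loss.
func relayOnce(s Store, publish func(topic, key, payload string) error) int {
	sent := 0
	for _, row := range s.FetchUnprocessed(100) {
		if err := publish(row.Topic, row.Key, row.Payload); err != nil {
			break
		}
		s.MarkProcessed(row.ID)
		sent++
	}
	return sent
}

// memStore is an in-memory stand-in so the sketch runs without Postgres.
type memStore struct{ pending []OutboxRow }

func (m *memStore) FetchUnprocessed(limit int) []OutboxRow {
	if limit > len(m.pending) {
		limit = len(m.pending)
	}
	out := make([]OutboxRow, limit)
	copy(out, m.pending)
	return out
}

func (m *memStore) MarkProcessed(id int) {
	for i, r := range m.pending {
		if r.ID == id {
			m.pending = append(m.pending[:i], m.pending[i+1:]...)
			return
		}
	}
}

func main() {
	s := &memStore{pending: []OutboxRow{
		{ID: 1, Topic: "orders.events", Key: "o-1", Payload: `{"order_id":"o-1"}`},
		{ID: 2, Topic: "orders.events", Key: "o-2", Payload: `{"order_id":"o-2"}`},
	}}
	sent := relayOnce(s, func(topic, key, payload string) error {
		fmt.Printf("publish topic=%s key=%s\n", topic, key)
		return nil
	})
	fmt.Printf("sent=%d pending=%d\n", sent, len(s.pending))
	// In a real relay, relayOnce would run on a time.Ticker every few seconds.
}
```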
The alternative is change data capture (CDC). Tools like Debezium read the database’s transaction logs (e.g., PostgreSQL WAL or MySQL binlog). When a new row appears in the Outbox table, the event is streamed to the message broker in near real time.
Pros:
- Near real-time event delivery
- No polling load on the database
Cons:
- Requires additional infrastructure and setup
- More complex to operate and maintain
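For the Postgres-native route this demo takes (`wal_level=logical` with `pgoutput`), the server-side setup amounts to a publication and a logical replication slot. The names below are illustrative, not necessarily what the repo uses:

```sql
-- Names are illustrative; the repo may use different ones.
CREATE PUBLICATION outbox_pub FOR TABLE outbox_events;
SELECT pg_create_logical_replication_slot('outbox_slot', 'pgoutput');
```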
- `POST /create-order` is called.
- The app inserts, in a single PostgreSQL transaction:
  - a row in `orders`
  - a row in `outbox_events`
- A background WAL listener streams logical replication events from PostgreSQL.
- On `outbox_events` inserts, the listener publishes event payloads to Kafka (the `orders.events` topic).
```mermaid
flowchart TD
    %% Client Layer
    subgraph Client
        A[Client]
    end

    %% Application Layer
    subgraph App["Go Application"]
        B[API Handler]
        C["CreateOrderTx()"]
    end

    %% Database Layer
    subgraph DB["Postgres (Single Transaction)"]
        D1[INSERT orders]
        D2[INSERT outbox_events]
        D3[COMMIT]
    end

    %% CDC Layer
    subgraph CDC["CDC WAL Listener"]
        E1["Read WAL (pgoutput)"]
        E2[Parse Logical Replication Message]
        E3[Build OutboxEvent]
    end

    %% Messaging Layer
    subgraph Kafka["Kafka Producer"]
        F1["Publish Event (topic=orders.events)"]
    end

    G[Kafka Broker]

    %% Flow
    A -->|POST /create-order| B
    B --> C
    C --> D1
    C --> D2
    D1 --> D3
    D2 --> D3
    D3 -->|WAL Entry| E1
    E1 --> E2
    E2 --> E3
    E3 --> F1
    F1 -->|key=aggregate_id value=payload| G
```
- `cmd/relay/main.go`: app entrypoint, HTTP server, dependency wiring.
- `internal/postgres/db.go`: transactional order + outbox write.
- `internal/cdc/wal_listener.go`: PostgreSQL WAL logical replication listener.
- `internal/kafka/producer.go`: Kafka producer wrapper.
- `sql/migrations/000001_init_schema.up.sql`: schema for `orders` and `outbox_events`.
- Docker + Docker Compose
- Go (for local build/test)
- Optional: `make`
From repo root:
```sh
make up
```

Then create an order:

```sh
make trigger-order
```

Watch logs:

```sh
make logs
```

Stop and clean everything:

```sh
make down
```

Other useful targets:

```sh
make help   # list available targets
make tidy   # go mod tidy + go fmt
make test   # run go tests
make build  # build cmd/relay into ./bin
make sqlc   # regenerate sqlc code
make watch  # docker compose watch (live rebuild)
```

The initial migration creates:
- `orders` (id, product_name, quantity)
- `outbox_events` (id, aggregate_type, aggregate_id, topic, payload, created_at)
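Based on those column lists, the migration is along these lines; the concrete types are assumptions, so see `sql/migrations/000001_init_schema.up.sql` for the real definitions:

```sql
-- Sketch of the initial migration; column types are assumptions
-- inferred from the column names, not copied from the repo.
CREATE TABLE orders (
    id           UUID PRIMARY KEY,
    product_name TEXT NOT NULL,
    quantity     INT  NOT NULL
);

CREATE TABLE outbox_events (
    id             BIGSERIAL PRIMARY KEY,
    aggregate_type TEXT  NOT NULL,
    aggregate_id   TEXT  NOT NULL,
    topic          TEXT  NOT NULL,
    payload        JSONB NOT NULL,
    created_at     TIMESTAMPTZ NOT NULL DEFAULT now()
);
```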
- PostgreSQL is configured with `wal_level=logical` for CDC.
- The Kafka topic used by this demo is `orders.events`.
- This example focuses on pattern mechanics, not production hardening.
