Taycode/distributed-socket-app
Distributed WebSocket Chat Application

A high-performance, scalable real-time chat application demonstrating how to scale WebSockets horizontally using a load balancer, a message broker (RabbitMQ), and shared state management.

πŸš€ Purpose

The primary goal of this project is to solve the "Socket Scaling Problem". In a traditional monolithic WebSocket app, all clients connect to a single server, so that server knows about every connected client and can deliver messages to any of them directly. However, when you scale to multiple server instances (to handle millions of users), Client A might be on Server 1 while Client B is on Server 2, and Server 1 has no direct way to send a message to Client B.

This architecture solves that by using:

  1. Nginx for Load Balancing (with sticky sessions).
  2. RabbitMQ as a Message Bus with Topic Exchanges to efficiently route messages across servers.
  3. Redis as a distributed "Source of Truth" for presence tracking (who is online).

πŸ›  Tech Stack

  • Frontend: Next.js (React, TypeScript, TailwindCSS) - A modern, responsive chat UI.
  • Backend: FastAPI (Python) - Async Python server handling WebSockets.
  • Message Broker: RabbitMQ - Handles inter-service communication.
  • State Management: Redis - Stores ephemeral state like "Online User Counts" accessible by all backend instances.
  • Infrastructure: Docker Compose - Orchestrates the cluster.

πŸ— Architecture

  1. Connection: A client connects to ws://localhost:8080.
  2. Load Balancing: Nginx routes the connection to one of the backend instances using ip_hash, so a given client always reaches the same server (sticky session).
  3. Messaging:
    • User sends "Hello" to Room A.
    • Backend Instance publishes message to RabbitMQ Topic Exchange chat_events with routing key room.A.
  4. Optimized Routing:
    • Each Backend Instance has a unique Queue.
    • When a user joins Room A on Server 1, Server 1 binds its queue to the exchange with key room.A.
    • Result: Only servers with active users in Room A receive the message. Zero wasted processing on servers with no users in that room.
  5. Presence:
    • On Connect/Disconnect, backends update a Set in Redis.
    • Frontend polls /stats to show the global online count.
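The routing behavior in steps 3–4 can be modeled in a few lines of plain Python. This is an illustrative in-process sketch, not the app's actual RabbitMQ client code: each server binds a routing key to its own queue only when a local user joins that room, and a published message is delivered only to queues holding a matching binding.

```python
from collections import defaultdict

class TopicExchange:
    """Toy model of a topic exchange. Real RabbitMQ topic exchanges also
    support wildcard keys (* and #); exact-match bindings suffice here."""

    def __init__(self):
        # routing key -> list of server queues bound to it
        self.bindings = defaultdict(list)

    def bind(self, routing_key, queue):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        # Deliver only to queues bound with a matching key: servers with
        # no users in the room never see the message.
        for queue in self.bindings[routing_key]:
            queue.append(message)

# Three backend instances, each with its own queue.
exchange = TopicExchange()
server1, server2, server3 = [], [], []

# A user joins Room A on Server 1, so Server 1 binds "room.A".
exchange.bind("room.A", server1)
# Server 2 only has users in Room B.
exchange.bind("room.B", server2)

exchange.publish("room.A", "Hello")

print(server1)  # ['Hello'] -- bound to room.A
print(server2)  # []        -- bound to room.B only
print(server3)  # []        -- no bindings, zero wasted processing
```

In the real deployment, `server1`, `server2`, and `server3` correspond to each backend instance's unique RabbitMQ queue, and `publish` corresponds to publishing to the `chat_events` exchange with routing key `room.A`.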

🏁 How to Run

Prerequisites

  • Docker & Docker Compose
  • Node.js & npm (for local frontend dev)

1. Start the Infrastructure (Backend Cluster)

This spins up RabbitMQ, Redis, Nginx, and 3 replicas of the FastAPI Backend.

docker-compose up --build
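For reference, the sticky-session WebSocket load balancing described above looks roughly like the following in Nginx (an illustrative sketch, not necessarily the repository's exact nginx.conf; the upstream and server names are assumptions):

```nginx
upstream backend_cluster {
    ip_hash;  # same client IP -> same backend (sticky session)
    server backend1:8000;
    server backend2:8000;
    server backend3:8000;
}

server {
    listen 8080;

    location / {
        proxy_pass http://backend_cluster;
        # Required for the WebSocket upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```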

2. Start the Frontend

Open a new terminal window:

cd frontend
npm install
npm run dev

3. Usage

  1. Open http://localhost:3002 in your browser.
  2. Open a second (and third) tab or a new Incognito window.
  3. Join Room: Enter a room name (e.g., general) and click Join.
  4. Chat: Send messages. You will see the Server ID in the message bubble.
  5. Observe Presence: Watch the "Online" count in the header update across all tabs.

πŸ”§ commands

  • View Logs: docker-compose logs -f backend
  • RabbitMQ Dashboard: http://localhost:15672 (User: guest, Pass: guest)
  • Check Redis: docker exec -it distributed-socket-app_redis_1 redis-cli
