# Equated

Multi-engine AI platform for STEM education that solves mathematical and scientific problems with step-by-step explanations, solution verification, and intelligent model routing for optimal cost efficiency.
- Overview
- Features
- Architecture
- Tech Stack
- Prerequisites
- Installation & Setup
- Running the Project
- Environment Configuration
- Project Structure
- Development Workflow
- API Documentation
- Deployment
- Contributing
- Troubleshooting
- License
- Support
## Overview

Equated is an AI-powered STEM learning platform designed for school and engineering students. Unlike generic AI chatbots, Equated is purpose-built to:
- Solve complex STEM problems step-by-step with structured explanations
- Verify solutions using a dedicated symbolic math engine (SymPy)
- Route each problem to the most cost-effective AI model (DeepSeek, Groq, Claude, GPT-4)
- Cache solutions using vector similarity search to reduce redundant API calls
- Support multiple input formats (text, LaTeX, images via OCR, documents)
- Maintain conversation context for follow-up questions
- Track user credits and monetize through sustainable pricing
The platform is built to be production-grade from day one, with comprehensive monitoring, error tracking, and analytics.
## Features

- Multi-Format Problem Input: Accept typed questions, LaTeX expressions, images (OCR), and uploaded documents
- Intelligent Model Routing: Automatically select the most cost-effective AI model based on problem complexity
- Step-by-Step Solutions: Provide structured explanations with problem interpretation, concepts, steps, and summary
- Solution Verification: Verify all answers using symbolic math before returning to users
- Vector Caching: 30-60% cost reduction through semantic question caching
- Conversation Context: Maintain session context for multi-turn interactions
- Math Engine: SymPy-powered symbolic computation (algebra, calculus, matrices, equation solving)
- OCR & Parsing: Convert images to text/LaTeX automatically
- Analytics: Track user behavior, model accuracy, cache hit rates, and cost-per-solve
- Credit-Based System: Free tier (5-7 solves/day) + paid packages (₹10/30 solves)
- Ad Integration: Non-intrusive banner ads to subsidize free tier
- Payment Processing: Razorpay integration for secure transactions
- Usage Monitoring: Real-time analytics and error tracking via PostHog & Sentry
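The solution-verification feature above can be illustrated with a small SymPy sketch. This is a simplified illustration, not the actual `services/verification.py` implementation; the `verify_solution` helper and its signature are hypothetical:

```python
import sympy as sp

def verify_solution(equation: str, symbol: str, claimed_root: str) -> bool:
    """Substitute a claimed root back into the equation and check it holds."""
    x = sp.Symbol(symbol)
    lhs, rhs = equation.split("=")
    residual = sp.sympify(lhs) - sp.sympify(rhs)
    return sp.simplify(residual.subs(x, sp.sympify(claimed_root))) == 0
```

For example, `verify_solution("2*x + 5 = 13", "x", "4")` confirms the root, while a wrong answer such as `"3"` is rejected before it ever reaches the student.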
## Architecture

```
┌──────────────────────────────────────────────────────────────┐
│                       Student Browser                        │
│                      (Next.js Frontend)                      │
└───────────────────────────┬──────────────────────────────────┘
                            │
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                     Vercel (Deployment)                      │
│  ┌────────────────────────────────────────────────────────┐  │
│  │  Next.js 14 + App Router + Server Components           │  │
│  │  - Authentication (Supabase Auth)                      │  │
│  │  - API Gateway (Next.js API Routes)                    │  │
│  │  - Model Router (Problem Classification)               │  │
│  │  - UI (Tailwind + shadcn/ui + KaTeX)                   │  │
│  └────────────────────────────────────────────────────────┘  │
└─────────────┬────────────────────────────────────────────────┘
              │
      ┌───────┴──────────────┬───────────────────────┐
      ▼                      ▼                       ▼
┌────────────────────┐  ┌────────────────┐  ┌───────────────────┐
│     Render.com     │  │    Supabase    │  │    Cloud APIs     │
│                    │  │                │  │                   │
│  FastAPI Backend   │  │  - PostgreSQL  │  │  - DeepSeek API   │
│  - AI Router       │  │  - Auth        │  │  - Groq API       │
│  - Math Engine     │  │  - Storage     │  │  - OpenAI/Claude  │
│  - Verification    │  │  - Embeddings  │  │  - Embeddings     │
│  - OCR/Parser      │  │  - pgvector    │  └───────────────────┘
│  - Celery Workers  │  │  - Redis Store │
└────────────────────┘  └────────────────┘
```
Request flow:

```
User Input (text/image/LaTeX)
        │
        ▼
Problem Parser (OCR, LaTeX → text)
        │
        ▼
Vector Similarity Search (pgvector)
   ├─ HIT  → Return Cached Solution ─────────┐
   └─ MISS → Continue                        │
        │                                    │
        ▼                                    │
AI Model Router                              │
(Classify by complexity)                     │
   ├─ Low  → Groq (free)                     │
   ├─ High → DeepSeek R1 (~$0.001)           │
   └─ Math → SymPy directly                  │
        │                                    │
        ▼                                    │
LLM generates solution                       │
        │                                    │
        ▼                                    │
Math Engine Verification                     │
        │                                    │
        ▼                                    │
Structured Explanation                       │
        │                                    │
        ▼                                    │
Cache Solution                               │
        │                                    │
        │◄───────────────────────────────────┘
        │
        ▼
Return to Student
```
## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Next.js 14, React 18, TypeScript | Web interface with server-side rendering |
| Styling | Tailwind CSS, PostCSS, shadcn/ui | Design system and UI components |
| Math Rendering | KaTeX, react-katex | Fast LaTeX math rendering |
| Backend | Python 3.11+, FastAPI, Uvicorn | REST API and business logic |
| AI Models | DeepSeek R1/V3, Groq (Llama 3.3 70B), OpenAI | Multi-model LLM routing |
| Math Engine | SymPy | Symbolic computation & verification |
| OCR | Tesseract, pix2tex, Pillow | Image → text/LaTeX conversion |
| Database | PostgreSQL 16 (Supabase), pgvector | Relational data + vector embeddings |
| Cache | Redis | Session cache, rate limiting, queue |
| Vector Storage | pgvector (Supabase) | Semantic similarity search |
| Auth | Supabase Auth, PyJWT | Email/OAuth, JWT token validation |
| Payments | Razorpay | Credit system transactions |
| File Storage | Supabase Storage | Document uploads |
| Background Jobs | Celery + Redis | Async task processing |
| Monitoring | PostHog, Sentry | Analytics, error tracking |
| Containerization | Docker, Docker Compose | Development & production deployment |
## Prerequisites

Before cloning and setting up the project, ensure you have the following installed:
- OS: Windows, macOS, or Linux
- RAM: 4 GB minimum (8 GB recommended)
- Disk Space: 5 GB minimum
- Git (v2.30+): Download Git
- Docker Desktop (v20.10+): Download Docker
- Docker Compose (v2.0+): usually bundled with Docker Desktop
- Python (v3.11 or higher): Download Python
  - Verify: `python --version`
- pip (package manager, comes with Python)
- virtualenv or `venv` (for isolated Python environments)
- Node.js (v18 or higher): Download Node.js
  - Verify: `node --version`
- npm (v9 or higher, comes with Node.js)
  - Verify: `npm --version`
- PostgreSQL (v14+): Download PostgreSQL
- Redis (v7+): Download Redis
You'll need to create accounts and obtain API keys for:

- DeepSeek API: https://platform.deepseek.com (multi-engine AI routing)
- Groq API: https://groq.com (free-tier, high-speed inference)
- Supabase: https://supabase.com (PostgreSQL database, vector storage, authentication)
- Razorpay: https://razorpay.com (optional, for payments)
- PostHog: https://posthog.com (optional, for analytics)
- Sentry: https://sentry.io (optional, for error tracking)
## Installation & Setup

Clone the repository (choose one method):

```bash
# HTTPS
git clone https://github.com/your-username/equated.git
cd equated

# or SSH
git clone git@github.com:your-username/equated.git
cd equated

# or GitHub CLI
gh repo clone your-username/equated
cd equated
```

Check the repository structure:

```bash
ls -la
# or on Windows PowerShell:
Get-ChildItem -Force
```

Expected output should show:

```
docker-compose.yml
README.md
PRD.txt
TechStack.txt
system_architecture.md
ai/
backend/
frontend/
database/
scripts/
infra/
```
Copy the example environment files to create local `.env` files:

```bash
# Backend environment
cd backend
copy .env.example .env
cd ..

# Frontend environment
cd frontend
copy .env.example .env
cd ..

# Root environment (if it exists)
copy .env.example .env
```

On Linux/macOS, replace `copy` with `cp`.
Backend (`backend/.env`):

```bash
# FastAPI Configuration
DEBUG=True
WORKERS=1

# Database
DATABASE_URL=postgresql://user:password@localhost:5432/equated
REDIS_URL=redis://localhost:6379/0

# AI Models
DEEPSEEK_API_KEY=your_deepseek_key_here
GROQ_API_KEY=your_groq_key_here
OPENAI_API_KEY=your_openai_key_here

# Supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_PUBLISHABLE_KEY=your_supabase_publishable_key_here
SUPABASE_SECRET_KEY=your_secret_key_here

# JWT verification is automatic via JWKS; no secret key needed
JWT_EXPIRATION_HOURS=24

# Razorpay (optional)
RAZORPAY_KEY_ID=your_key_id
RAZORPAY_KEY_SECRET=your_key_secret

# Sentry (optional)
SENTRY_DSN=your_sentry_dsn

# Environment
ENVIRONMENT=development
```

Frontend (`frontend/.env.local`):

```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=your_supabase_publishable_key_here

# Backend API
NEXT_PUBLIC_API_URL=http://localhost:8000

# PostHog Analytics (optional)
NEXT_PUBLIC_POSTHOG_KEY=your_posthog_key
NEXT_PUBLIC_POSTHOG_HOST=https://app.posthog.com

# Sentry (optional)
NEXT_PUBLIC_SENTRY_DSN=your_sentry_dsn

# Environment
NEXT_PUBLIC_ENVIRONMENT=development
```

## Running the Project

Choose one of the following based on your preference:
### Option 1: Docker Compose (all services)

Fastest setup: runs all services (frontend, backend, PostgreSQL, Redis) in isolated containers.

```bash
# Start all services
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f backend
docker-compose logs -f frontend

# Stop all services
docker-compose down

# Stop and remove volumes (clean slate)
docker-compose down -v
```

Accessing services:

- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Redis: localhost:6379
- PostgreSQL: localhost:5432
### Option 2: Local Backend

Better for backend-focused development; requires manual database/Redis setup.

If you have PostgreSQL and Redis installed locally:

```bash
# PostgreSQL (keep running in background)
pg_ctl start

# Redis (keep running in another terminal)
redis-server
```

Or use Docker for these services only:

```bash
docker-compose up -d postgres redis
```

Then set up and start the backend:

```bash
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Apply database migrations
alembic upgrade head

# Start FastAPI server
uvicorn main:app --reload --port 8000
```

The backend is now running at http://localhost:8000, with API documentation at http://localhost:8000/docs.
### Option 3: Local Frontend

Good for UI/UX development; requires the backend to run separately.

```bash
cd frontend

# Install Node dependencies
npm install
# or if you prefer yarn:
yarn install

# Start development server
npm run dev
# or with yarn:
yarn dev
```

The frontend is now running at http://localhost:3000.
### Database Setup

```bash
cd backend

# Run migrations
alembic upgrade head

# Seed sample data (optional)
python check_db.py
```

### Running the Full Stack with Docker

```bash
# From project root
docker-compose up -d

# View logs in real time
docker-compose logs -f

# Access services:
# - Frontend: http://localhost:3000
# - Backend API: http://localhost:8000
# - API Docs: http://localhost:8000/docs
```

### Running Services Manually

Terminal 1 (backend):

```bash
cd backend
source venv/bin/activate  # or venv\Scripts\activate on Windows
uvicorn main:app --reload --port 8000
```

Terminal 2 (frontend):

```bash
cd frontend
npm run dev
```

Terminal 3 (Celery workers, optional, for background jobs):

```bash
cd backend
celery -A workers.ai_queue worker --loglevel=info
```

## Environment Configuration

The backend uses environment variables for configuration. Key areas:
### Model Routing

Located in `backend/ai/router.py`, the router automatically selects models based on:

- Problem complexity (low/high)
- Problem type (math, physics, general)
- Cost considerations
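As a sketch of the idea (the real logic lives in `backend/ai/router.py`; the function names, keyword heuristic, and model labels below are illustrative assumptions, not the production implementation):

```python
def classify_complexity(problem: str) -> str:
    """Toy heuristic: long problems or advanced keywords count as 'high'."""
    hard_terms = ("integral", "derivative", "matrix", "prove", "differential")
    text = problem.lower()
    return "high" if len(text) > 200 or any(t in text for t in hard_terms) else "low"

def route_model(problem: str, subject: str) -> str:
    """Pick the cheapest engine likely to handle the problem correctly."""
    if subject == "math" and "=" in problem:
        return "sympy"                 # solve symbolically, no LLM cost
    if classify_complexity(problem) == "high":
        return "deepseek-r1"           # strongest reasoning, ~$0.001/solve
    return "groq-llama-3.3-70b"        # free tier, fast
```

The cheap, deterministic path (SymPy) is tried first, and the expensive model is reserved for problems the heuristic flags as hard.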
### Caching

- Vector similarity threshold: `0.85` (highly similar questions)
- Cache TTL: `604800` seconds (7 days)
- Enable/disable via `ENABLE_CACHE=True/False`
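The cache lookup amounts to a cosine-similarity comparison between the new question's embedding and stored ones. In production this is a pgvector query; the in-memory helper below is a hypothetical stand-in to show the thresholding:

```python
import math

SIMILARITY_THRESHOLD = 0.85  # mirrors the configured threshold above

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def cached_solution(query_vec, cache):
    """cache: list of (embedding, solution). Return a hit above the threshold."""
    best = max(cache, key=lambda entry: cosine_similarity(query_vec, entry[0]),
               default=None)
    if best and cosine_similarity(query_vec, best[0]) >= SIMILARITY_THRESHOLD:
        return best[1]
    return None  # miss: fall through to the model router
```

A near-duplicate question returns the cached solution at zero model cost; anything below the threshold proceeds to the router.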
### Rate Limiting

Defined in `backend/services/rate_limiter.py`:

- Free tier: 5-7 solves/day
- Premium tiers: unlimited
### Frontend Configuration

The frontend uses Next.js environment variables prefixed with `NEXT_PUBLIC_` for client-side access. Key configurations:

- Backend API: read from `process.env.NEXT_PUBLIC_API_URL`
- Supabase: real-time auth and database sync
- Analytics: PostHog tracks user behavior
## Project Structure

```
equated/
├── README.md                    # This file
├── docker-compose.yml           # Docker orchestration for full stack
├── PRD.txt                      # Product Requirements Document
├── TechStack.txt                # Technology stack documentation
├── system_architecture.md       # System design & data flow
│
├── backend/                     # Python FastAPI backend
│   ├── main.py                  # FastAPI app entry point
│   ├── requirements.txt         # Python dependencies
│   ├── Dockerfile               # Backend container definition
│   ├── .env.example             # Example environment variables
│   │
│   ├── ai/                      # AI & ML modules
│   │   ├── router.py            # Model selection logic
│   │   ├── classifier.py        # Problem classification
│   │   ├── prompt_optimizer.py  # Prompt engineering
│   │   ├── cost_optimizer.py    # Cost tracking
│   │   ├── fallback.py          # Fallback strategies
│   │   ├── models.py            # AI model definitions
│   │   ├── prompts.py           # System prompts
│   │   └── cost_matrix.json     # Model pricing data
│   │
│   ├── db/                      # Database layer
│   │   ├── connection.py        # Database connection pooling
│   │   ├── models.py            # SQLAlchemy ORM models
│   │   └── schema.py            # Database schema definitions
│   │
│   ├── services/                # Business logic services
│   │   ├── math_engine.py       # SymPy math computation
│   │   ├── explanation.py       # Solution explanation generation
│   │   ├── input_validator.py   # Input validation & normalization
│   │   ├── auth.py              # Authentication logic
│   │   ├── credits.py           # Credit system management
│   │   ├── parser.py            # Problem parsing
│   │   ├── query_normalizer.py  # Query normalization
│   │   ├── verification.py      # Solution verification
│   │   ├── streaming_service.py # Real-time streaming
│   │   └── rate_limiter.py      # Rate limiting
│   │
│   ├── routers/                 # API endpoint definitions
│   │   ├── chat.py              # Chat/solve endpoints
│   │   ├── auth.py              # Authentication endpoints
│   │   ├── credits.py           # Credit system endpoints
│   │   ├── ads.py               # Ad serving endpoints
│   │   ├── analytics.py         # Analytics endpoints
│   │   ├── health.py            # Health check endpoints
│   │   └── admin.py             # Admin panel endpoints
│   │
│   ├── cache/                   # Caching mechanisms
│   │   ├── query_cache.py       # Question similarity cache
│   │   ├── embeddings.py        # Embedding generation
│   │   ├── redis_cache.py       # Redis operations
│   │   ├── vector_cache.py      # Vector storage interface
│   │   └── cache_metrics.py     # Cache performance metrics
│   │
│   ├── workers/                 # Celery background jobs
│   │   ├── tasks.py             # Task definitions
│   │   ├── ai_queue.py          # AI processing queue
│   │   ├── queue.py             # General queue management
│   │   └── worker.py            # Worker configuration
│   │
│   ├── gateway/                 # API gateway middleware
│   │   ├── auth_middleware.py   # Authentication checks
│   │   ├── rate_limit.py        # Rate limiting middleware
│   │   └── request_logger.py    # Request logging
│   │
│   ├── monitoring/              # Observability & monitoring
│   │   ├── logging.py           # Structured logging
│   │   ├── metrics.py           # Prometheus metrics
│   │   ├── tracing.py           # Distributed tracing
│   │   └── json_logger.py       # JSON log formatting
│   │
│   ├── core/                    # Core utilities
│   │   ├── exceptions.py        # Custom exceptions
│   │   └── dependencies.py      # FastAPI dependency injection
│   │
│   ├── config/                  # Configuration management
│   │   ├── settings.py          # Main settings
│   │   └── feature_flags.py     # Feature toggles
│   │
│   ├── alembic/                 # Database migrations
│   │   ├── env.py
│   │   ├── script.py.mako
│   │   └── versions/            # Migration scripts
│   │
│   └── tests/                   # Backend tests
│       ├── test_math_engine.py
│       ├── test_router.py
│       └── ...
│
├── frontend/                    # Next.js frontend
│   ├── package.json             # Node.js dependencies
│   ├── next.config.js           # Next.js configuration
│   ├── tsconfig.json            # TypeScript configuration
│   ├── tailwind.config.js       # Tailwind CSS configuration
│   ├── postcss.config.js        # PostCSS configuration
│   ├── Dockerfile               # Frontend container definition
│   ├── .env.example             # Example environment variables
│   │
│   ├── src/
│   │   ├── app/                 # Next.js App Router
│   │   │   ├── layout.tsx       # Root layout
│   │   │   ├── page.tsx         # Home page
│   │   │   ├── solve/           # Problem solver page
│   │   │   ├── dashboard/       # User dashboard
│   │   │   └── ...
│   │   │
│   │   ├── components/          # Reusable React components
│   │   │   ├── ProblemSolver.tsx
│   │   │   ├── SolutionDisplay.tsx
│   │   │   ├── MathRenderer.tsx
│   │   │   └── ...
│   │   │
│   │   ├── hooks/               # Custom React hooks
│   │   │   ├── useSolver.ts
│   │   │   ├── useAuth.ts
│   │   │   └── ...
│   │   │
│   │   ├── lib/                 # Utility functions
│   │   │   ├── api.ts           # API client
│   │   │   ├── supabase.ts      # Supabase client
│   │   │   └── ...
│   │   │
│   │   ├── store/               # State management (Zustand)
│   │   ├── types/               # TypeScript type definitions
│   │   └── styles/              # Global styles
│   │
│   └── public/                  # Static assets
│       └── ...
│
├── database/                    # Database scripts & migrations
│   ├── schema.sql               # Database schema definition
│   ├── seed.sql                 # Sample data
│   └── migrations/              # SQL migration files
│       └── 001_initial.sql
│
├── ai/                          # AI configuration & docs
│   ├── cost_matrix.json         # Model pricing
│   ├── model_config.json        # Model configurations
│   └── router_logic.md          # AI routing documentation
│
├── scripts/                     # Utility scripts
│   ├── db_migrate.sh            # Database migration script
│   ├── start_dev.sh             # Development start script
│   └── start_dev.ps1            # PowerShell dev start
│
├── infra/                       # Infrastructure configuration
│   ├── docker/                  # Docker configurations
│   ├── nginx/                   # Nginx reverse proxy
│   ├── ci/                      # CI/CD configurations
│   └── env/                     # Environment-specific config
│
└── .gitignore                   # Git ignore rules
```
## Development Workflow

### Backend Development

```bash
cd backend
source venv/bin/activate
uvicorn main:app --reload --port 8000
```

The `--reload` flag automatically restarts the server when you modify Python files.

Running tests:

```bash
cd backend
pytest tests/ -v

# Run a specific test file
pytest tests/test_math_engine.py -v

# Run with coverage
pytest --cov=backend tests/
```

Database migrations:

```bash
cd backend

# Create a new migration
alembic revision --autogenerate -m "Add new column"

# Review the generated migration in alembic/versions/

# Apply migrations
alembic upgrade head

# Roll back the last migration
alembic downgrade -1
```

Celery workers:

```bash
cd backend

# In one terminal, start Redis (if not already running):
redis-server

# In another terminal, start a Celery worker:
celery -A workers.ai_queue worker --loglevel=info

# Monitor tasks:
celery -A workers.ai_queue events
```

### Frontend Development

```bash
cd frontend
npm run dev
```

Next.js automatically reloads changes in the browser.

Production build:

```bash
cd frontend
npm run build
npm start
```

Linting:

```bash
cd frontend
npm run lint
```
### Making Changes

1. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes in the appropriate module.

3. Test your changes:

   ```bash
   # Backend
   cd backend && pytest tests/

   # Frontend
   cd frontend && npm run lint
   ```

4. Commit with clear messages:

   ```bash
   git add .
   git commit -m "feat: add new feature description"
   ```

5. Push and create a Pull Request:

   ```bash
   git push origin feature/your-feature-name
   ```
## API Documentation

Once the backend is running, visit http://localhost:8000/docs for an interactive Swagger UI where you can test all endpoints.

### Authentication

- `POST /api/auth/register` - Register a new user
- `POST /api/auth/login` - Log in
- `POST /api/auth/logout` - Log out
- `GET /api/auth/me` - Get current user info

### Solving

- `POST /api/solve` - Submit a problem for solving
- `GET /api/solve/{problem_id}` - Get solution details
- `GET /api/solve/history` - Get the user's solve history

### Credits

- `GET /api/credits/balance` - Get current credit balance
- `POST /api/credits/purchase` - Purchase credit packages
- `GET /api/credits/history` - Get credit transaction history

### Analytics

- `GET /api/analytics/usage` - Get usage statistics
- `GET /api/analytics/topics` - Get topic trends
Example request:

```bash
# Solve a math problem
curl -X POST http://localhost:8000/api/solve \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -d '{
    "problem": "Solve 2x + 5 = 13",
    "subject": "math"
  }'
```
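The same call can be built from Python using only the standard library. The endpoint and fields mirror the curl example above; the token is a placeholder:

```python
import json
import urllib.request

def build_solve_request(token: str, problem: str, subject: str = "math"):
    """Build the POST /api/solve request shown in the curl example."""
    return urllib.request.Request(
        "http://localhost:8000/api/solve",
        data=json.dumps({"problem": problem, "subject": subject}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Send with:
#   urllib.request.urlopen(build_solve_request(token, "Solve 2x + 5 = 13"))
```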
## Deployment

### Frontend (Vercel)

1. Push code to GitHub:

   ```bash
   git add .
   git commit -m "Deploy to production"
   git push origin main
   ```

2. Connect GitHub to Vercel:
   - Go to vercel.com
   - Click "New Project"
   - Import your GitHub repository
   - Set environment variables in the Vercel dashboard

3. Automatic deployments:
   - Every push to `main` triggers a production deployment
   - Every PR creates a preview deployment
### Backend (Render.com)

1. Create a Render account at render.com

2. Connect your GitHub repository:
   - Create a new "Web Service"
   - Connect the GitHub repo
   - Set the root directory to `backend/`
   - Set the build command: `pip install -r requirements.txt`
   - Set the start command: `uvicorn main:app --host 0.0.0.0`
   - Add environment variables

3. Add a database:
   - Create a PostgreSQL database on Render
   - Update `DATABASE_URL` in the service environment
### Database (Supabase)

1. Create a Supabase project at supabase.com

2. Get the connection string:
   - Project Settings → Database → Connection Strings
   - Use the PostgreSQL connection string in the backend

3. Run migrations:

   ```bash
   alembic upgrade head
   ```
## Contributing

We welcome contributions! Here's how to get started.

### Code of Conduct

- Be respectful and inclusive
- Report issues constructively
- Collaborate openly
### How to Contribute

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Commit changes: `git commit -m 'Add amazing feature'`
4. Push the branch: `git push origin feature/amazing-feature`
5. Open a Pull Request with:
   - A clear description of the changes
   - References to related issues
   - Screenshots for UI changes
   - Test results
### Development Guidelines

- Code Style: Follow PEP 8 (Python) and Prettier (JavaScript)
- Type Hints: Use TypeScript on the frontend and type hints on the backend
- Tests: Write tests for new features (target 80%+ coverage)
- Docstrings: Document functions and classes
- Commits: Use conventional commits (`feat:`, `fix:`, `docs:`, etc.)
## Troubleshooting

**`docker-compose` command not found**

Solution: Install Docker Desktop (it includes Docker Compose).

**Backend fails with missing Python packages**

Solution:

```bash
cd backend
source venv/bin/activate
pip install -r requirements.txt
```

**PostgreSQL is not running**

Solution:

```bash
# Using Docker:
docker-compose up -d postgres

# Check if running:
docker-compose ps postgres
```

**`npm install` fails with permission errors**

Solution (Linux/macOS):

```bash
sudo npm install -g npm
cd frontend
npm install
```

**Frontend cannot reach the backend**

Solution: Check that `NEXT_PUBLIC_API_URL` in the frontend `.env.local` matches the backend URL.

**Redis connection errors**

Solution:

```bash
# Start Redis via Docker:
docker-compose up -d redis

# Or install locally and start:
redis-server
```

**Authentication errors**

Solution:

- JWT verification is automatic via JWKS public keys
- Ensure `SUPABASE_URL` is correct in `backend/.env`
- Clear browser cookies
- Log in again

**Migration errors**

Solution:

```bash
cd backend

# Check migration status:
alembic current

# View migration history:
alembic history

# Downgrade if needed:
alembic downgrade -1
```

### Debug Logging

Backend:
```python
# In your Python code
import logging

logger = logging.getLogger(__name__)
logger.debug(f"Debug info: {variable}")
logger.error(f"Error occurred: {error}")
```

Frontend:
```typescript
// In your TypeScript/JavaScript
console.log("Debug info:", variable);
console.error("Error occurred:", error);
```

## License

This project is licensed under the MIT License; see the LICENSE file for details.

You are free to:

- Use this software for commercial and private purposes
- Modify and distribute the code
- Use it in proprietary applications

You must:

- Include the original license and copyright notice
- Document all significant changes
## Support

- Check this README: most common questions are answered here
- Read the documentation: `PRD.txt`, `system_architecture.md`, `TechStack.txt`
- Search existing issues: https://github.com/your-username/equated/issues
- Create an issue to report a bug or request a feature
- Email: support@equated.dev
- Discord: Join our community
- Twitter: @EquatedApp
## Roadmap

- MVP (shipped): Core problem-solving and verification
- In Development: Hint system, visualization engine, study tools
- Planned: Mobile app, API for partners, premium analytics

## Acknowledgments

- Built with ❤️ for STEM students everywhere
- Special thanks to the open-source community (SymPy, FastAPI, Next.js, etc.)
- Powered by DeepSeek, Groq, and community AI models

Last Updated: March 2026
Current Version: 1.0.0-beta
Condensed repository layout:

```
├── backend/          # FastAPI microservice
│   ├── ai/           # Model router, classifier, cost optimizer
│   ├── cache/        # Redis + vector cache layers
│   ├── config/       # Settings, feature flags
│   ├── db/           # Database connection & models
│   ├── gateway/      # Rate limiting, auth, request logging
│   ├── monitoring/   # Logging, metrics, tracing
│   ├── routers/      # API endpoints
│   ├── services/     # Business logic (math, parsing, streaming)
│   ├── workers/      # Celery background tasks
│   └── tests/        # Unit & integration tests
├── frontend/         # Next.js 14 app
│   └── src/
│       ├── app/      # App Router pages
│       ├── components/
│       ├── hooks/
│       ├── lib/
│       ├── store/
│       ├── types/
│       └── utils/
├── database/         # SQL schema, migrations, seed data
├── infra/            # Docker, CI/CD, nginx, env configs
├── ai/               # Shared AI config (model registry, costs)
└── scripts/          # Dev helper scripts
```
## Docs
- [PRD v2.0](./PRD.txt)
- [Tech Stack Guide](./TechStack.txt)
- [System Architecture](./system_architecture.md)