The Open Source Firewall for LLMs. A self-hosted gateway to secure and control AI applications with powerful guardrails.
-
Updated
Jun 25, 2025 - Python
The Open Source Firewall for LLMs. A self-hosted gateway to secure and control AI applications with powerful guardrails.
LLM Security Platform.
Ship AI agents with guardrails — not prayers. Self-hosted runtime protection for LLMs and tool-calling agents: block prompt injection, enforce tool permissions, redact sensitive data, and control what agents are allowed to do.
AI security framework: deterministic input filtering, adaptive rule learning (389K pre-trained attacks), optional LLM veto verification. Zero dependencies. Works without an LLM. Patent Pending.
Open-source security gateway for LLM APIs — prompt injection detection, PII redaction, dangerous response sanitization, and audit logging. OpenAI/Claude compatible, MCP & Agent SKILL support. Drop-in proxy for AI coding agents (Cursor, Claude Code, Codex).
Open-source AI security firewall. 81 engines for PII detection, prompt injection defense, MCP security, and egress classification. Local-first. Zero cloud dependency.
Self-learning prompt injection detection engine — 25 input detectors (10 languages), 5 output scanners, PII redaction, red team self-testing, F1: 96.0% with 0% false positives. Docker, GitHub Action, pre-commit, FastAPI/Flask/Django/LangChain/CrewAI/Dify/n8n.
LLM Security Platform Docs
Lightweight firewall gateway for LLM APIs — detect prompt injection, PII leaks & token abuse in real-time
AI LLM Firewall -- Detection, Deception, and Intelligence for LLM Security. 9 SDK integrations, 12 LLM backends, 0% false positives. Apache 2.0.
The Self-Hosted AI Firewall & Gateway. Drop-in guardrails for LLMs running entirely on CPU. Blocks jailbreaks, enforces policies, and ensures compliance in real-time
Secure large language model access by enforcing role-based controls, detecting prompt injection and PII, while optimizing concurrency and performance.
🛡️ Detect and block prompt injection attacks in LLM apps using pattern detectors, ML, and community-driven feedback to improve security.
Add a description, image, and links to the llm-firewall topic page so that developers can more easily learn about it.
To associate your repository with the llm-firewall topic, visit your repo's landing page and select "manage topics."