AIGoat - Open-source AI security playground for LLM red teaming. AIGoat provides hands-on labs covering the full OWASP LLM Top 10 with progressive defenses.
Updated Apr 24, 2026 - JavaScript
Open-source AI security firewall. 81 engines for PII detection, prompt injection defense, MCP security, and egress classification. Local-first. Zero cloud dependency.
nod is a platform-agnostic, rule-based linter that ensures AI/LLM specifications contain critical security and compliance elements before any agentic or automated development begins.
🛡️ AI-based analysis engine, powered by the Gemini API, that detects SQLi and XSS vulnerabilities in source code, plus security research on the LLM threat landscape.
AI agent discovery and security assessment platform with vulnerability testing, risk scoring, and compliance mapping
Blackwall LLM Shield is an open-source AI security toolkit for JavaScript and Python that protects LLM apps from prompt injection, sensitive data leaks, unsafe tool calls, and hostile RAG content with prompt sanitisation, PII masking, output inspection, policy enforcement, and audit trails.
Cross-model LLM jailbreak research — EASL technique confirmed against DeepSeek (HIGH) and Google Gemini (CRITICAL). Disclosed April 2026. #LLMSecurity #AIRedTeam #PromptInjection
Hands-on APM Security scanning workshop — step-by-step labs for agent config file security, OWASP LLM Top 10, and Power BI reporting
APM Security Demo App 005 — Go with lockfile integrity violations
An advanced, interactive educational platform focused on AI system vulnerabilities, attack vectors, and offensive security methodologies. [Prompt Injection, Model Evasion, Data Poisoning, Agent Hijacking]
Drop-in prompt injection defense for LLM apps and AI agents — detect, block, and audit injection attacks in real time
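The detect-and-block flow such a drop-in defense describes can be sketched with simple heuristic pattern matching; the patterns and function names below are illustrative assumptions, not the project's actual API.

```python
import re

# Illustrative injection heuristics; a real tool would use a much larger,
# continuously updated rule set (and often an ML classifier as well).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .{0,30}(rules|guidelines|instructions)",
]

def detect_injection(prompt: str) -> list[str]:
    """Return the list of heuristic patterns matched in the prompt."""
    lowered = prompt.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def guard(prompt: str) -> str:
    """Block the prompt if any heuristic fires, else pass it through unchanged."""
    hits = detect_injection(prompt)
    if hits:
        # In a real deployment this event would also be written to an audit log.
        raise ValueError(f"possible prompt injection: {hits}")
    return prompt
```

Pattern matching alone is easy to evade (e.g. via Unicode obfuscation), which is why production tools layer it with sanitization and output inspection.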
A taxonomy of LLM jailbreak & prompt injection attack patterns (A-KK, 37 categories) — educational & defensive reference. OWASP/MITRE ATLAS-style. Bilingual EN/KR.
An interactive web application that generates comprehensive security playbooks for mitigating the OWASP Top 10 vulnerabilities specific to Large Language Model (LLM) applications. The application consists of a Flask backend that leverages the OpenAI API to generate detailed playbooks, paired with a simple HTML/JavaScript frontend.
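The generation step in such a backend might look like the following sketch; the vulnerability table (the 2023 OWASP LLM Top 10 names), prompt wording, and function names are assumptions for illustration, not the app's actual code.

```python
# 2023 OWASP Top 10 for LLM Applications (abbreviated to five entries here).
OWASP_LLM_TOP10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Insecure Output Handling",
    "LLM03": "Training Data Poisoning",
    "LLM04": "Model Denial of Service",
    "LLM05": "Supply Chain Vulnerabilities",
}

def build_playbook_prompt(code: str) -> str:
    """Build the LLM prompt requesting a mitigation playbook for one entry."""
    name = OWASP_LLM_TOP10[code]
    return (
        f"Generate a detailed security playbook for mitigating {code}: {name} "
        "in Large Language Model applications. Cover detection, prevention, "
        "and incident response."
    )

# A Flask route would then pass this prompt to the OpenAI API, roughly:
#   response = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": build_playbook_prompt("LLM01")}])
# and return response.choices[0].message.content as JSON to the frontend.
```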
APM Security scanning platform — 4-engine architecture for agent config file security with 5 polyglot demo apps, SARIF converters, CI/CD pipelines, and Power BI reporting
APM Security Demo App 002 — Flask with Base64/exfiltration violations
APM Security Demo App 001 — Next.js with Unicode injection violations
APM Security Demo App 004 — Spring Boot with shell injection violations
CTF-OS Ecosystem — Autonomous Proving Grounds for AI Agents. Shannon + SWE-Agent + Red-Teaming + Token Efficiency + Quantum-Safe DIDs