Requesting inclusion of SINT Protocol in the AI Security Solutions Landscape for Agentic AI.
Tool: SINT Protocol
Type: Open-source runtime authorization framework for physical AI agents
License: Apache 2.0
Repository: https://github.com/sint-ai/sint-protocol
What it does:
SINT Protocol is a runtime safety shield that interposes cryptographic authorization at every LLM-agent–actuator boundary. Every agent action (tool call, robot command, code execution) passes through a single Policy Gateway that enforces capability tokens, graduated human oversight, behavioral drift detection, and physical constraint envelopes before hardware execution.
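To make the interposition pattern concrete, here is a minimal sketch of a gateway that classifies each action into a tier and blocks commit-level actions without human sign-off. This is illustrative only: the tier names T1_SUGGEST and T2_ACT, the classifier, and the function signatures are assumptions, not the actual SINT Protocol API (the source only states T0 observe → T3 commit).

```typescript
// Hypothetical sketch of the gateway interposition pattern; not the real SINT API.
type Tier = "T0_OBSERVE" | "T1_SUGGEST" | "T2_ACT" | "T3_COMMIT"; // T1/T2 names invented

interface AgentAction {
  tool: string;                      // e.g. "ros2.cmd_vel", "shell.exec"
  args: Record<string, unknown>;
  agentId: string;
}

interface Decision {
  allowed: boolean;
  tier: Tier;
  reason: string;
}

// Illustrative classifier: code-execution tools are always commit-tier.
function classifyTier(action: AgentAction): Tier {
  if (/^(shell|exec|eval)/.test(action.tool)) return "T3_COMMIT";
  if (action.tool.startsWith("read.")) return "T0_OBSERVE";
  return "T2_ACT";
}

// Single choke point: every action passes through here before an actuator sees it.
function authorize(action: AgentAction, humanApproved: boolean): Decision {
  const tier = classifyTier(action);
  if (tier === "T3_COMMIT" && !humanApproved) {
    return { allowed: false, tier, reason: "T3 action requires human sign-off" };
  }
  return { allowed: true, tier, reason: "policy checks passed" };
}

const denied = authorize({ tool: "shell.exec", args: { cmd: "ls" }, agentId: "a1" }, false);
console.log(denied.allowed, denied.tier); // false T3_COMMIT
```

The point of the single choke point is that no adapter or agent path can reach hardware without producing a Decision first.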
Coverage:
- OWASP Agentic Top 10: 10/10 coverage (ASI01–ASI10)
- ASI01 Goal Hijack: 5-layer heuristic detection (role override, semantic escalation, exfiltration probes, cross-agent injection, prompt injection in tool args)
- ASI02 Tool Misuse: Tier-based authorization (T0 observe → T3 commit), forbidden tool combo detection
- ASI03 Identity Abuse: Ed25519 capability tokens with subject binding, delegation depth enforcement (max 3 hops), cascade revocation
- ASI04 Supply Chain: Model fingerprint hash + model ID allowlist validation at runtime
- ASI05 Code Execution: Shell/exec/eval tools auto-classified T3_COMMIT (human sign-off required)
- ASI06 Memory Poisoning: Replay detection, privilege claim detection, history overflow, cross-session continuity injection, credential read funnel (≥3 secret reads + write), action velocity loops
- ASI07 Inter-Agent: SwarmCoordinator with collective constraints (max concurrent actors, kinetic energy ceiling, minimum inter-agent distance)
- ASI08 Cascade: CircuitBreakerPlugin — N consecutive denials auto-trip, CSML anomalous-persona auto-trip, HALF_OPEN probe recovery
- ASI09 Trust Exploitation: ProactiveEscalationEngine — CSML behavioral drift monitoring across multi-turn windows
- ASI10 Rogue Agent: EU AI Act Art. 14(4)(e) compliant emergency stop with manual trip + auto-recovery
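The ASI03 controls above (subject-bound tokens with a delegation ceiling) can be sketched as follows. This example uses Node's built-in Ed25519 support from node:crypto for self-containment, whereas the real implementation uses @noble/ed25519; the token fields and wire format here are invented for illustration.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical capability-token sketch (ASI03): subject binding + delegation-depth limit.
const MAX_DELEGATION_DEPTH = 3; // "max 3 hops" per the coverage list

interface Capability {
  subject: string;   // agent identity the token is bound to
  tool: string;      // capability being granted
  depth: number;     // delegation hops taken so far
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function issue(cap: Capability): { cap: Capability; sig: Buffer } {
  const payload = Buffer.from(JSON.stringify(cap));
  return { cap, sig: sign(null, payload, privateKey) }; // Ed25519 uses null digest
}

function check(token: { cap: Capability; sig: Buffer }, presenter: string): boolean {
  const payload = Buffer.from(JSON.stringify(token.cap));
  if (!verify(null, payload, publicKey, token.sig)) return false;  // forged or tampered
  if (token.cap.subject !== presenter) return false;               // subject binding
  if (token.cap.depth > MAX_DELEGATION_DEPTH) return false;        // hop ceiling
  return true;
}

const t = issue({ subject: "agent-A", tool: "ros2.cmd_vel", depth: 1 });
console.log(check(t, "agent-A")); // true
console.log(check(t, "agent-B")); // false: presenter is not the bound subject
```

Cascade revocation (also listed under ASI03) would additionally walk the delegation chain and invalidate descendants, which this sketch omits.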
Key differentiator: to our knowledge, the only framework that embeds physical constraint enforcement (velocity, force, geofence) in cryptographic capability tokens. Designed for robots, drones, and industrial actuators, not only digital/software agents.
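A minimal sketch of the constraint-envelope idea: the token carries hard physical limits that the gateway checks against each actuator command before it executes. The envelope shape and field names below are assumptions for illustration, not the actual SINT schema.

```typescript
// Hypothetical physical constraint envelope carried in a capability token.
interface Envelope {
  maxVelocityMps: number;                                        // hard velocity ceiling
  geofence: { xMin: number; xMax: number; yMin: number; yMax: number };
}

interface MoveCommand {
  velocityMps: number;
  target: { x: number; y: number };
}

// Gateway-side check: a command outside the envelope never reaches hardware.
function withinEnvelope(cmd: MoveCommand, env: Envelope): boolean {
  if (cmd.velocityMps > env.maxVelocityMps) return false;
  const { x, y } = cmd.target;
  const g = env.geofence;
  return x >= g.xMin && x <= g.xMax && y >= g.yMin && y <= g.yMax;
}

const env: Envelope = { maxVelocityMps: 1.5, geofence: { xMin: 0, xMax: 10, yMin: 0, yMax: 10 } };
console.log(withinEnvelope({ velocityMps: 1.0, target: { x: 5, y: 5 } }, env));  // true
console.log(withinEnvelope({ velocityMps: 3.0, target: { x: 5, y: 5 } }, env));  // false: too fast
console.log(withinEnvelope({ velocityMps: 1.0, target: { x: 50, y: 5 } }, env)); // false: outside geofence
```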
Bridge adapters (12 total), including: ROS 2, MAVLink v2, MCP, Google A2A, MQTT/CoAP, OPC-UA, Open-RMF, Sparkplug B, and gRPC
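Conceptually, each bridge adapter reduces to the same job: translate a protocol-specific message into the gateway's action model, and forward it downstream only if the gateway allows it. The interface below is a guess at that shape, not SINT's actual adapter API; the MQTT example is a toy.

```typescript
// Hypothetical adapter interface; names invented for illustration.
interface GatewayDecision { allowed: boolean; reason: string }

interface BridgeAdapter<Msg> {
  protocol: string;
  toAction(msg: Msg): { tool: string; args: Record<string, unknown> };
  forward(msg: Msg): void;  // deliver to the real transport
}

// Toy MQTT-style adapter: the topic becomes the tool name; "sent" stands in for the broker.
function makeMqttAdapter(sent: string[]): BridgeAdapter<{ topic: string; payload: string }> {
  return {
    protocol: "mqtt",
    toAction: (m) => ({ tool: `mqtt.${m.topic}`, args: { payload: m.payload } }),
    forward: (m) => sent.push(m.topic),
  };
}

function relay<Msg>(
  adapter: BridgeAdapter<Msg>,
  msg: Msg,
  decide: (tool: string) => GatewayDecision,
): GatewayDecision {
  const action = adapter.toAction(msg);
  const d = decide(action.tool);
  if (d.allowed) adapter.forward(msg);  // only authorized traffic crosses the bridge
  return d;
}

const sent: string[] = [];
const mqtt = makeMqttAdapter(sent);
relay(mqtt, { topic: "robot/stop", payload: "{}" }, () => ({ allowed: true, reason: "ok" }));
console.log(sent); // sent now contains "robot/stop"
```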
Stats:
- 42 packages, 1,710+ tests
- Gateway latency: p99 < 5ms steady-state (within 100Hz ROS 2 control loop budget)
- TypeScript monorepo; the only runtime crypto dependency is @noble/ed25519, itself a dependency-free library
Lifecycle stage: Runtime Authorization / Policy Enforcement / Audit
Academic submission: 4-page short paper submitted to SPAI 2026 (1st IJCAI Workshop on Safe Physical AI, Bremen, August 2026)
Comparison with existing landscape entries:
- vs Microsoft AGT: AGT targets digital/software agents; SINT targets physical AI with hardware constraint enforcement
- vs ATR: ATR provides detection rules (static analysis); SINT provides runtime enforcement (policy gateway). Complementary.
Happy to provide additional information or a demo walkthrough.