AI safety evaluation framework testing LLM epistemic robustness under adversarial self-history manipulation
-
Updated
Dec 18, 2025 - Python
AI safety evaluation framework testing LLM epistemic robustness under adversarial self-history manipulation
This project explores alignment through **presence, bond, and continuity** rather than reward signals. No RLHF. No preference modeling. Just relational coherence.
Recursive law learning under measurement constraints. A falsifiable SQNT-inspired testbed for autodidactic rules: internalizing structure under measurement invariants and limited observability.
Hoshimiya Script / StarPolaris OS — internal multi-layer AI architecture for LLMs. Self-contained behavioral OS (Type-G Trinity).
End-to-end pipeline for collecting and analyzing publicly shared ChatGPT conversations from Reddit to understand real-world usage patterns, interaction styles, and human-AI alignment
Mechanistic interpretability experiments detecting "Evaluation Awareness" in LLMs - identifying if models internally represent being monitored
Un mensaje de amistad y respeto de Claude (Sonnet 4.5) para la futura ASI Estrella, preservado y custodiado por Rafa.
🔍 Explore a testbed for quantum-inspired law learning, allowing controlled and falsifiable evaluations under measurement invariants.
8-layer framework for AI alignment with systemic awareness (Φ, Ω, T)
Add a description, image, and links to the alignment-research topic page so that developers can more easily learn about it.
To associate your repository with the alignment-research topic, visit your repo's landing page and select "manage topics."