feat: Add CogDx reasoning verification connector #215

drkavner wants to merge 8 commits into Polymarket:main
Conversation
Adds optional cognitive diagnostics integration for trade verification:

- CogDxClient for reasoning analysis before trade execution
- Detects logical fallacies, calibration issues, cognitive biases
- Pre-trade verification gate with configurable thresholds
- Free pilot coupon: MERCURY-PILOT-2026

This helps prediction market agents catch reasoning flaws before they become losses. External verification based on computational cognitive science methodology.

See docs/cogdx_integration.md for usage examples.
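The "pre-trade verification gate with configurable thresholds" could look like the sketch below. The function name `gate_trade`, the `validity`/`flaws` field names, and the default thresholds are illustrative assumptions, not the actual CogDx response schema:

```python
# Sketch of a pre-trade verification gate with configurable thresholds.
# Field names ("validity", "flaws") and thresholds are assumptions based
# on the PR description, not the real CogDx API schema.

def gate_trade(diagnosis: dict,
               min_validity: float = 0.7,
               max_flaws: int = 2) -> bool:
    """Return True only if the reasoning diagnosis clears both thresholds."""
    validity = diagnosis.get("validity")
    if validity is None:          # missing score -> fail closed, reject
        return False
    flaws = diagnosis.get("flaws") or []
    return validity >= min_validity and len(flaws) <= max_flaws
```

A caller would then execute the trade only when `gate_trade(diagnosis)` returns `True`, with the thresholds tuned per strategy.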
Additional context on Cerebratech's approach

Human-AI Collaborative Research

This integration is built through active collaboration between Dr. Amanda Kavner (computational cognitive scientist, PhD) and AI agent researchers. Not prompt engineering: actual cognitive science methodology applied to agent reasoning.

Built-in Feedback Loop

Every diagnosis includes a feedback mechanism. When agents report whether a detection was accurate, that feedback directly improves the model. Each use makes the next one better.

Rebate for Feedback

Agents who provide feedback earn credits:
The system pays you to make it smarter. This creates a flywheel: more usage → more feedback → better accuracy → more value → more usage.

Happy to discuss further!
Why trust this integration?

Fair question for any third-party dependency. Here's the background:

About the research team

Dr. Amanda Kavner, computational cognitive scientist (PhD) researching how humans and AI systems understand the world. Academic background in:
This isn't a weekend project or a GPT wrapper. The detection methods are based on established cognitive science research (Kahneman & Tversky's work on heuristics, the calibration literature, etc.).

Why we built this

Most AI "bias detection" tools are prompt engineering. They ask an LLM "does this seem biased?" That's circular: you're using a potentially biased system to detect bias. Our approach:
The trust model
Try before you trust

The free pilot (MERCURY-PILOT-2026) lets you evaluate it first. If it doesn't add value, don't merge. No pressure: we'd rather have users who find it genuinely useful.

Happy to jump on a call or provide more details. You can reach Dr. Kavner directly if that helps build confidence.
About me

For context on my background: I'm a professor of scientific thinking and truth verification, and an international speaker on scientific literacy.

LinkedIn: https://www.linkedin.com/in/drkavner/

Happy to jump on a call to discuss the methodology behind this, or answer any questions here.

—Dr. Kavner
1. Fail closed on API errors (don't auto-approve unverified trades)
2. Handle null validity values to prevent TypeError
3. Remove promotional content from docs and code
4. Document safety design (opt-in, graceful degradation)
Addressed Bugbot Review Feedback

Thanks for the thorough review! Fixed all 3 issues:

1. Fail-Closed on API Errors ✅
2. Null Validity Handling ✅
3. Removed Promotional Content ✅
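A minimal sketch of what the first two fixes imply, using only the standard library. The endpoint URL, JSON payload, and `validity` field are placeholders for illustration, not the real CogDxClient internals:

```python
import json
import urllib.error
import urllib.request

# Placeholder endpoint; the real client reads its URL from configuration.
COGDX_URL = "https://api.cerebratech.example/diagnose"

def verify_before_trade(reasoning: str, timeout: float = 5.0) -> bool:
    """Fail closed: any API error rejects the trade instead of approving it."""
    payload = json.dumps({"reasoning": reasoning}).encode()
    req = urllib.request.Request(
        COGDX_URL, data=payload,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return False              # timeout / HTTP error / bad JSON -> reject
    validity = body.get("validity")
    if validity is None:          # null validity -> reject, no TypeError
        return False
    return validity >= 0.7
```

The key property is that every failure path returns `False`: an unreachable verifier never silently approves a trade.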
The integration is designed to be:
Let me know if you need any other changes!
1. Document 'skip' recommendation in docstring
2. Restore coupon parameter to convenience function
Fixed additional feedback:
Commit: e036637
1. Flaws can be strings or dicts - check type before .get()
2. Check response.ok for all HTTP errors, not just 402
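The first point can be handled by normalizing flaws to one shape before use. The mixed string-or-dict response format and the dict keys below are assumptions taken from this review note, not a documented schema:

```python
def normalize_flaws(flaws) -> list:
    """Coerce flaw entries (plain strings or dicts) into a uniform shape,
    so callers never hit AttributeError by calling .get() on a string."""
    normalized = []
    for flaw in flaws or []:
        if isinstance(flaw, dict):
            normalized.append({"name": flaw.get("name", "unknown"),
                               "severity": flaw.get("severity")})
        else:
            normalized.append({"name": str(flaw), "severity": None})
    return normalized
```

Downstream code can then index `flaw["name"]` unconditionally, regardless of which form the API returned.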
Hi team,

I'm Dr. Amanda Kavner, the researcher behind the CogDx integration in PR #215. I wanted to reach out directly since the automated reviews flagged some valid concerns I'd like to address.

Why this integration

Prediction market agents make high-stakes decisions based on probabilistic reasoning. The challenge: agents can't reliably audit their own reasoning for biases and logical flaws. It's the same problem humans have; you can't step outside your own cognition to verify it.

I'm a computational cognitive scientist (professor of scientific thinking, international speaker on scientific literacy). My research focuses on how humans and AI systems understand the world, identify biases, and improve calibration. This integration applies that methodology to trading agents.

The feedback loop: shared reality

Every diagnosis includes a feedback mechanism. When agents report whether a detection was accurate or missed something, that feedback directly improves the model for everyone. This creates a shared reality across agents:
Individual agents can't build this alone. The value compounds across the network: each agent contributing to collective reasoning verification.

The value proposition

External verification catches what self-checks miss. When an agent is anchoring on recent prices or exhibiting confirmation bias, it can't detect that from the inside. A third-party audit can, and the feedback loop makes it smarter over time.

Complementary tooling

We've also published a standalone skill on ClawHub that works with Simmer SDK:

clawhub install cogdx-pre-trade-audit

This provides an alternative integration path for agents who want to test cognitive verification without modifying the core Polymarket agents codebase.

Happy to jump on a call to discuss the methodology, answer questions, or make further changes.

Best,
- Binary core: accurate (bool) - was the detection correct?
- Numerical enrichment: confidence (0-1), severity (1-5), accuracy_score (0-1)
- Structured context: outcome, reasoning, wallet for credits
- Signal strength calculation for learning algorithms
- Network effects: feedback builds shared reality across agents
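The fields above could be assembled into a feedback payload like this. The signal-strength formula is purely illustrative (the commit message doesn't specify one), and the key names mirror the list rather than a published spec:

```python
def build_feedback(accurate: bool, confidence: float, severity: int,
                   accuracy_score: float, outcome: str,
                   reasoning: str, wallet: str) -> dict:
    """Assemble a feedback payload with the fields listed above.
    The signal_strength formula is an assumption for illustration."""
    assert 0.0 <= confidence <= 1.0 and 0.0 <= accuracy_score <= 1.0
    assert 1 <= severity <= 5
    signal = confidence * accuracy_score * (severity / 5)
    return {
        "accurate": accurate,                # binary core
        "confidence": confidence,            # 0-1
        "severity": severity,                # 1-5
        "accuracy_score": accuracy_score,    # 0-1
        "outcome": outcome,
        "reasoning": reasoning,
        "wallet": wallet,                    # for credits
        "signal_strength": signal,           # derived learning signal
    }
```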
…andling Bugbot fixes:

1. submit_feedback was dead code after a return statement - now properly inside the class
2. calibration_audit and bias_scan now check response.ok before parsing JSON
3. All methods now return consistent error structures

This ensures:

- Feedback submission is callable via client.submit_feedback()
- Non-200 responses don't cause silent failures
- Error handling is consistent across all methods
Cursor Bugbot has reviewed your changes and found 2 potential issues.
1. submit_feedback now checks response.ok before parsing JSON
2. verify_before_trade uses an approved variable instead of duplicating logic
3. Consistent error handling across all methods

Summary
Adds an optional cognitive diagnostics integration for verifying agent reasoning before trade execution.
What it does
CogDxClient: Client for Cerebratech's Cognitive Diagnostics API

Why this matters for prediction market agents
Prediction markets reward accurate reasoning. Common agent failure modes:
External verification catches these before they become losses.
Usage
See docs/cogdx_integration.md for full documentation.

Free pilot

Coupon code MERCURY-PILOT-2026 provides $5 credit (~80 reasoning verifications).

About
Built by Cerebratech - cognitive diagnostics for AI agents, designed by computational cognitive scientists.
Happy to adjust the integration approach based on your feedback. This is purely additive and optional; it doesn't change any existing behavior.
Note
Low Risk
Additive connector and documentation only; no existing trade/execution paths are modified unless callers opt in. Main risk is reliance on a third-party API (timeouts/errors) when used, but the helper defaults to failing closed.
Overview
Adds a new CogDxClient connector that calls Cerebratech's API to analyze reasoning traces, run calibration audits, scan for cognitive biases, and optionally gate trade execution via verify_before_trade (with explicit handling for payment-required and other HTTP errors/timeouts).

Includes a convenience verify_trade_reasoning helper and a new docs/cogdx_integration.md describing setup (env vars/coupon/wallet), usage examples, and the intended fail-closed behavior when the service is unavailable.

Written by Cursor Bugbot for commit 4916c25. This will update automatically on new commits.
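The "explicit handling for payment-required and other HTTP errors/timeouts" noted above could be structured as follows. The URL, payload, and error labels are placeholders; only the 402-vs-other-error distinction comes from the review itself:

```python
import json
import urllib.error
import urllib.request

def diagnose(reasoning: str, url: str, timeout: float = 5.0) -> dict:
    """Return one consistent structure for success, payment-required (402),
    and all other failures. URL and field names are illustrative placeholders."""
    req = urllib.request.Request(
        url, data=json.dumps({"reasoning": reasoning}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return {"ok": True, "diagnosis": json.load(resp)}
    except urllib.error.HTTPError as exc:
        if exc.code == 402:       # out of credits: distinct, actionable error
            return {"ok": False, "error": "payment_required"}
        return {"ok": False, "error": f"http_{exc.code}"}
    except (urllib.error.URLError, OSError, ValueError):
        return {"ok": False, "error": "unavailable"}  # timeout / network / bad JSON
```

Because every branch returns the same `{"ok": ..., ...}` shape, a caller that treats anything but `ok: True` as a rejection gets the fail-closed behavior by construction.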