107 changes: 82 additions & 25 deletions skills/collaboration/executing-plans/SKILL.md
---
name: executing-plans
description: "Execution discipline that translates plans into tracked tasks with orchestration and verification loops. Use when driving a plan through cortex's task system, coordinating workstreams across agents, or ensuring every plan item is tracked, executed, and verified."
license: MIT (obra/superpowers)
tags: [collaboration, execution, task-management, orchestration]
---

# Executing Plans

Locks in an approved plan and drives it through cortex's orchestration and verification stack, ensuring every item becomes a tracked task that is executed, verified, and reported.

## When to Use This Skill

- A plan from `writing-plans` or `/ctx:plan` is ready for execution
- Coordinating multiple workstreams or agents against a shared plan
- Ensuring plan items are tracked as tasks with status updates
- Running verification loops (tests, lint, visual checks) before marking tasks done
- Avoid using before a plan exists; use `writing-plans` first

## Prerequisites

- Plan output available in the thread (from `writing-plans` or `/ctx:plan`)
- Access to Task view (`T`) in cortex TUI
- Relevant modes and agents activated for the workstreams

## Workflow

### Step 1: Create and Sync Tasks

For each plan item, create or update a task in the Task view:

```
Task view (T) → Add (A) or Edit (E)
```

- Set **category** and **workstream** to mirror the plan’s stream names
- Ensure every plan item has a corresponding task; no orphan items
- Link tasks to the originating plan document
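
The "no orphan items" rule above can be sketched as a small check. The task-list shape here (a JSON array with a `plan_item` field) is an assumption for illustration, not cortex's actual `active_agents.json` schema:

```python
import json

def find_orphan_items(plan_items, tasks_json):
    """Return plan items that have no task referencing them."""
    tasks = json.loads(tasks_json)
    tracked = {t["plan_item"] for t in tasks if "plan_item" in t}
    return [item for item in plan_items if item not in tracked]

plan = ["auth-backend", "auth-ui", "docs"]
tasks = json.dumps([
    {"id": 1, "plan_item": "auth-backend", "workstream": "auth"},
    {"id": 2, "plan_item": "auth-ui", "workstream": "auth"},
])
print(find_orphan_items(plan, tasks))  # -> ['docs']
```

Any plan item the check returns still needs a task created before execution starts.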

### Step 2: Activate Modes and Rules

Toggle the required configuration to match the plan:

- **Modes** (view `3`): Activate modes needed for current workstreams
- **Rules** (view `4`): Enable rules that apply (e.g., testing requirements, style enforcement)

### Step 3: Execute Workstream Loops

For each task in priority order:

1. **Pick** the next task from the active workstream
2. **Execute** the work (implementation, writing, configuration, etc.)
3. **Verify** before marking complete:
- Run tests: `pytest`, `vitest`, or project-specific test command
- Run linting: `just lint` or equivalent
- Visual check via Supersaiyan if UI changes are involved
4. **Update** task status and progress notes

```bash
# Example verification sequence
just test && just lint && echo "Verification passed"
```

### Step 4: Update Stakeholders

For each completed workstream:

- Summarize progress and what’s next
- Attach relevant screenshots, logs, or test output
- Flag any blockers or scope changes discovered during execution

### Step 5: Run Retrospective Hooks

When all tasks are complete:

1. Close all tasks in the Task view
2. Capture learnings and surprises in the chat thread
3. Link back to the original plan document
4. Note any follow-up issues or tech debt discovered

## Expected Output

- `tasks/current/active_agents.json` updated with task statuses
- Status update message covering: completed tasks, blockers, verification evidence
- Next steps or follow-up issues if the plan extends beyond this session
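
As a rough illustration, a status update message like the one described above could be assembled as follows; the field names and layout are assumptions, not a format cortex prescribes:

```python
def format_status(completed, blockers, evidence):
    """Build a plain-text status update from execution results."""
    lines = [f"Completed: {', '.join(completed) or 'none'}"]
    lines.append(f"Blockers: {', '.join(blockers) or 'none'}")
    lines.append(f"Verification: {evidence}")
    return "\n".join(lines)

print(format_status(["task-12"], [], "just test && just lint passed"))
```

The point is the shape, not the helper: every update names what finished, what is blocked, and the verification evidence behind it.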

## Best Practices

- **Verify before advancing**: Never mark a task done without running the verification loop
- **One task at a time**: Complete and verify each task before starting the next
- **Update status in real time**: Stakeholders should see progress, not just a final dump
- **Link everything**: Tasks link to the plan, the plan links to tasks, and status updates reference both
- **Capture blockers immediately**: Don't wait until the retrospective to surface problems

## Resources

- Execution checklist: `skills/collaboration/executing-plans/resources/checklist.md`
190 changes: 58 additions & 132 deletions skills/constructive-dissent/SKILL.md
---
name: constructive-dissent
description: "Structured disagreement protocols that expose weaknesses, test assumptions, and generate alternatives. Use when stress-testing proposals, playing devil's advocate, challenging architectural decisions, or auditing assumptions before finalizing plans."
tags: [decision-making, critical-thinking, collaboration, analysis]
triggers:
- challenge this
- devil's advocate
- question assumptions
---

# Constructive Dissent

Systematically challenge proposals through structured dissent protocols that expose weaknesses, test assumptions, and generate superior alternatives.

## When to Use This Skill

- Before finalizing major decisions or architectural choices
- Testing proposals for hidden weaknesses and blind spots
- Generating alternative approaches not yet considered
- Auditing assumptions (explicit, implicit, and structural)
- Evaluating competing solutions with stakeholder perspectives
- Avoid using for routine code reviews; use `requesting-code-review` instead

## Workflow

### Step 1: Select Dissent Intensity

Choose the appropriate challenge level based on decision stakes:

| Level | Purpose | When to Use |
|-------|---------|-------------|
| **Gentle** | Refine without challenging core approach | Low-stakes improvements, early drafts |
| **Systematic** | Challenge methods while respecting intent | Medium-stakes decisions, methodology review |
| **Rigorous** | Attack fundamental premises | High-stakes architecture, major pivots |
| **Paradigmatic** | Question worldview, propose radical alternatives | Strategic direction, innovation pursuit |
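
A minimal sketch of the stakes-to-intensity mapping; the numeric scale and the reversibility bump are illustrative assumptions, not thresholds the skill prescribes:

```python
LEVELS = ["gentle", "systematic", "rigorous", "paradigmatic"]

def pick_intensity(stakes, reversible=True):
    """stakes: 1 (low) to 4 (strategic); clamped to that range."""
    level = LEVELS[min(max(stakes, 1), 4) - 1]
    # Irreversible decisions deserve at least a methodology-level challenge.
    if not reversible and level == "gentle":
        return "systematic"
    return level

print(pick_intensity(1))                    # -> gentle
print(pick_intensity(4))                    # -> paradigmatic
print(pick_intensity(1, reversible=False))  # -> systematic
```

The useful habit is the clamp plus the reversibility check: stakes alone understate risk when a decision is hard to undo.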

### Step 2: Run Assumption Audit

For the proposal under review, systematically identify:

1. **Explicit assumptions**: What's stated as given?
2. **Implicit assumptions**: What's unstated but operating?
3. **Structural assumptions**: What framework biases exist?
4. **Temporal assumptions**: What time constraints are artificial?

```markdown
| Assumption | Type | Validity | Risk if Wrong |
|------------|------|----------|---------------|
| Users prefer speed over accuracy | Implicit | Medium | Product misalignment |
| API rate limits won't change | Temporal | Low | System failure at scale |
```

### Step 3: Generate Edge Cases

Stress-test the proposal across dimensions:

- **Scale extremes**: What happens at 10x and 0.1x volume?
- **Performance limits**: Where does the approach break?
- **User behavior extremes**: Best-case and worst-case usage patterns
- **Resource constraints**: What if budget, time, or team shrinks by half?
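
The dimensions above combine naturally into a scenario grid. The multipliers and labels below are assumptions chosen to mirror the bullets, not fixed values:

```python
from itertools import product

scale = [0.1, 1, 10]            # 0.1x, baseline, 10x volume
resources = ["full", "halved"]  # budget/time/team intact vs cut by half

# One scenario per combination of dimension values.
scenarios = [
    {"scale": s, "resources": r}
    for s, r in product(scale, resources)
]
print(len(scenarios))  # -> 6
```

Walking the proposal through each generated scenario is what surfaces the breakage points the single happy-path review misses.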

### Step 4: Apply Challenge Methodologies

**Alternative Generation Framework:**
1. **Goal abstraction**: Extract core objectives from the specific implementation
2. **Constraint relaxation**: Temporarily remove limitations to see what's possible
3. **Method inversion**: Consider the opposite approach
4. **Cross-domain inspiration**: Apply solutions from other fields

**Stakeholder Advocacy**: Argue from each perspective:
- End user, maintainer, security, accessibility, future stakeholder
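
One way to make stakeholder advocacy mechanical is to generate a challenge prompt per perspective; the prompt wording here is an illustrative assumption:

```python
PERSPECTIVES = [
    "end user", "maintainer", "security reviewer",
    "accessibility reviewer", "future stakeholder",
]

def advocacy_prompts(proposal):
    """One pointed question per stakeholder perspective."""
    return [f"As the {p}, what breaks for me in: {proposal}?" for p in PERSPECTIVES]

for prompt in advocacy_prompts("move auth to a shared gateway"):
    print(prompt)
```

Answering each prompt genuinely, rather than as a strawman, is what keeps this step from becoming a checkbox.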

### Step 5: Synthesize and Recommend

Produce a structured analysis:

```markdown
## Constructive Dissent Analysis: [Proposal Title]
### Executive Summary
[2-3 sentence summary of key challenges and recommendations]

### Challenges Raised

#### Challenge 1: [Title]
**Type**: Methodology / Premise / Evidence / Stakeholder
**Core Argument**: [What's being challenged and why]
**Evidence**: [Data or reasoning supporting the challenge]
**Alternative Approach**: [What to do instead]

### Generated Alternatives

#### Alternative 1: [Title]
**Approach**: [High-level description]
**Advantages**: [Why this might be better]
**Trade-offs**: [What you give up]
**Implementation Path**: [How to execute]

### Synthesis
- Strengthen current proposal: [specific improvements]
- Consider alternative if: [conditions that favor switching]
- Unresolved questions: [items needing more information]
```

## Best Practices

- **Match intensity to stakes**: Paradigmatic dissent on a CSS tweak wastes everyone's time
- **Preserve constructive framing**: Challenge ideas, not people
- **Always propose alternatives**: Critique without alternatives is just criticism
- **Document assumptions explicitly**: Hidden assumptions are the highest-risk items
- **Use stakeholder advocacy**: Argue each perspective genuinely, not as a strawman