SuperCharge Claude Code v1.0.0 - Complete Customization Package

Features:
- 30+ Custom Skills (cognitive, development, UI/UX, autonomous agents)
- RalphLoop autonomous agent integration
- Multi-AI consultation (Qwen)
- Agent management system with sync capabilities
- Custom hooks for session management
- MCP servers integration
- Plugin marketplace setup
- Comprehensive installation script

Components:
- Skills: always-use-superpowers, ralph, brainstorming, ui-ux-pro-max, etc.
- Agents: 100+ agents across engineering, marketing, product, etc.
- Hooks: session-start-superpowers, qwen-consult, ralph-auto-trigger
- Commands: /brainstorm, /write-plan, /execute-plan
- MCP Servers: zai-mcp-server, web-search-prime, web-reader, zread
- Binaries: ralphloop wrapper

Installation: ./supercharge.sh
This commit is contained in:
uroma
2026-01-22 15:35:55 +00:00
Unverified
commit 7a491b1548
1013 changed files with 170070 additions and 0 deletions

View File

@@ -0,0 +1,655 @@
---
name: writing-skills
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment
---
# Writing Skills
## Overview
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.codex/skills` for Codex)**
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
**Official guidance:** For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
**Skills are:** Reusable techniques, patterns, tools, reference guides
**Skills are NOT:** Narratives about how you solved a problem once
## TDD Mapping for Skills
| TDD Concept | Skill Creation |
|-------------|----------------|
| **Test case** | Pressure scenario with subagent |
| **Production code** | Skill document (SKILL.md) |
| **Test fails (RED)** | Agent violates rule without skill (baseline) |
| **Test passes (GREEN)** | Agent complies with skill present |
| **Refactor** | Close loopholes while maintaining compliance |
| **Write test first** | Run baseline scenario BEFORE writing skill |
| **Watch it fail** | Document exact rationalizations agent uses |
| **Minimal code** | Write skill addressing those specific violations |
| **Watch it pass** | Verify agent now complies |
| **Refactor cycle** | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
## When to Create a Skill
**Create when:**
- Technique wasn't intuitively obvious to you
- You'd reference this again across projects
- Pattern applies broadly (not project-specific)
- Others would benefit
**Don't create for:**
- One-off solutions
- Standard practices well-documented elsewhere
- Project-specific conventions (put in CLAUDE.md)
- Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)
## Skill Types
### Technique
Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
### Pattern
Way of thinking about problems (flatten-with-flags, test-invariants)
### Reference
API docs, syntax guides, tool documentation (office docs)
## Directory Structure
```
skills/
skill-name/
SKILL.md # Main reference (required)
supporting-file.* # Only if needed
```
**Flat namespace** - all skills in one searchable namespace
**Separate files for:**
1. **Heavy reference** (100+ lines) - API docs, comprehensive syntax
2. **Reusable tools** - Scripts, utilities, templates
**Keep inline:**
- Principles and concepts
- Code patterns (< 50 lines)
- Everything else
## SKILL.md Structure
**Frontmatter (YAML):**
- Only two fields supported: `name` and `description`
- Max 1024 characters total
- `name`: Use letters, numbers, and hyphens only (no parentheses, special chars)
- `description`: Third-person, describes ONLY when to use (NOT what it does)
- Start with "Use when..." to focus on triggering conditions
- Include specific symptoms, situations, and contexts
- **NEVER summarize the skill's process or workflow** (see CSO section for why)
- Keep under 500 characters if possible
```markdown
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Real-World Impact (optional)
Concrete results
```
## Claude Search Optimization (CSO)
**Critical for discovery:** Future Claude needs to FIND your skill
### 1. Rich Description Field
**Purpose:** Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
**Format:** Start with "Use when..." to focus on triggering conditions
**CRITICAL: Description = When to Use, NOT What the Skill Does**
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.
**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
**The trap:** Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
```yaml
# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks
# ❌ BAD: Too much process detail
description: Use for TDD - write test first, watch it fail, write minimal code, refactor
# ✅ GOOD: Just triggering conditions, no workflow summary
description: Use when executing implementation plans with independent tasks in the current session
# ✅ GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
**Content:**
- Use concrete triggers, symptoms, and situations that signal this skill applies
- Describe the *problem* (race conditions, inconsistent behavior) not *language-specific symptoms* (setTimeout, sleep)
- Keep triggers technology-agnostic unless the skill itself is technology-specific
- If skill is technology-specific, make that explicit in the trigger
- Write in third person (injected into system prompt)
- **NEVER summarize the skill's process or workflow**
```yaml
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing
# ❌ BAD: First person
description: I can help you with async tests when they're flaky
# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky
# ✅ GOOD: Starts with "Use when", describes problem, no workflow
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently
# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects
```
### 2. Keyword Coverage
Use words Claude would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
### 3. Descriptive Naming
**Use active voice, verb-first:**
-`creating-skills` not `skill-creation`
-`condition-based-waiting` not `async-test-helpers`
### 4. Token Efficiency (Critical)
**Problem:** getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
**Target word counts:**
- getting-started workflows: <150 words each
- Frequently-loaded skills: <200 words total
- Other skills: <500 words (still be concise)
**Techniques:**
**Move details to tool help:**
```bash
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N
# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```
**Use cross-references:**
```markdown
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]
# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
```
**Compress examples:**
```markdown
# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]
# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
```
**Eliminate redundancy:**
- Don't repeat what's in cross-referenced skills
- Don't explain what's obvious from command
- Don't include multiple examples of same pattern
**Verification:**
```bash
wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
```
**Name by what you DO or core insight:**
-`condition-based-waiting` > `async-test-helpers`
-`using-skills` not `skill-usage`
-`flatten-with-flags` > `data-structure-refactoring`
-`root-cause-tracing` > `debugging-techniques`
**Gerunds (-ing) work well for processes:**
- `creating-skills`, `testing-skills`, `debugging-with-logs`
- Active, describes the action you're taking
### 4. Cross-Referencing Other Skills
**When writing documentation that references other skills:**
Use skill name only, with explicit requirement markers:
- ✅ Good: `**REQUIRED SUB-SKILL:** Use superpowers:test-driven-development`
- ✅ Good: `**REQUIRED BACKGROUND:** You MUST understand superpowers:systematic-debugging`
- ❌ Bad: `See skills/testing/test-driven-development` (unclear if required)
- ❌ Bad: `@skills/testing/test-driven-development/SKILL.md` (force-loads, burns context)
**Why no @ links:** `@` syntax force-loads files immediately, consuming 200k+ context before you need them.
## Flowchart Usage
```dot
digraph when_flowchart {
"Need to show information?" [shape=diamond];
"Decision where I might go wrong?" [shape=diamond];
"Use markdown" [shape=box];
"Small inline flowchart" [shape=box];
"Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
"Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
"Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
**Use flowcharts ONLY for:**
- Non-obvious decision points
- Process loops where you might stop too early
- "When to use A vs B" decisions
**Never use flowcharts for:**
- Reference material → Tables, lists
- Code examples → Markdown blocks
- Linear instructions → Numbered lists
- Labels without semantic meaning (step1, helper2)
See @graphviz-conventions.dot for graphviz style rules.
**Visualizing for your human partner:** Use `render-graphs.js` in this directory to render a skill's flowcharts to SVG:
```bash
./render-graphs.js ../some-skill # Each diagram separately
./render-graphs.js ../some-skill --combine # All diagrams in one SVG
```
## Code Examples
**One excellent example beats many mediocre ones**
Choose most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python
**Good example:**
- Complete and runnable
- Well-commented explaining WHY
- From real scenario
- Shows pattern clearly
- Ready to adapt (not generic template)
**Don't:**
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
You're good at porting - one great example is enough.
## File Organization
### Self-Contained Skill
```
defense-in-depth/
SKILL.md # Everything inline
```
When: All content fits, no heavy reference needed
### Skill with Reusable Tool
```
condition-based-waiting/
SKILL.md # Overview + patterns
example.ts # Working helpers to adapt
```
When: Tool is reusable code, not just narrative
### Skill with Heavy Reference
```
pptx/
SKILL.md # Overview + workflows
pptxgenjs.md # 600 lines API reference
ooxml.md # 500 lines XML structure
scripts/ # Executable tools
```
When: Reference material too large for inline
## The Iron Law (Same as TDD)
```
NO SKILL WITHOUT A FAILING TEST FIRST
```
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over.
Edit skill without testing? Same violation.
**No exceptions:**
- Not for "simple additions"
- Not for "just adding a section"
- Not for "documentation updates"
- Don't keep untested changes as "reference"
- Don't "adapt" while running tests
- Delete means delete
**REQUIRED BACKGROUND:** The superpowers:test-driven-development skill explains why this matters. Same principles apply to documentation.
## Testing All Skill Types
Different skill types need different test approaches:
### Discipline-Enforcing Skills (rules/requirements)
**Examples:** TDD, verification-before-completion, designing-before-coding
**Test with:**
- Academic questions: Do they understand the rules?
- Pressure scenarios: Do they comply under stress?
- Multiple pressures combined: time + sunk cost + exhaustion
- Identify rationalizations and add explicit counters
**Success criteria:** Agent follows rule under maximum pressure
### Technique Skills (how-to guides)
**Examples:** condition-based-waiting, root-cause-tracing, defensive-programming
**Test with:**
- Application scenarios: Can they apply the technique correctly?
- Variation scenarios: Do they handle edge cases?
- Missing information tests: Do instructions have gaps?
**Success criteria:** Agent successfully applies technique to new scenario
### Pattern Skills (mental models)
**Examples:** reducing-complexity, information-hiding concepts
**Test with:**
- Recognition scenarios: Do they recognize when pattern applies?
- Application scenarios: Can they use the mental model?
- Counter-examples: Do they know when NOT to apply?
**Success criteria:** Agent correctly identifies when/how to apply pattern
### Reference Skills (documentation/APIs)
**Examples:** API documentation, command references, library guides
**Test with:**
- Retrieval scenarios: Can they find the right information?
- Application scenarios: Can they use what they found correctly?
- Gap testing: Are common use cases covered?
**Success criteria:** Agent finds and correctly applies reference information
## Common Rationalizations for Skipping Testing
| Excuse | Reality |
|--------|---------|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
**All of these mean: Test before deploying. No exceptions.**
## Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
**Psychology note:** Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
### Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>
<Good>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
### Build Rationalization Table
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
```markdown
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
```
### Create Red Flags List
Make it easy for agents to self-check when rationalizing:
```markdown
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
```
### Update CSO for Violation Symptoms
Add to description: symptoms of when you're ABOUT to violate the rule:
```yaml
description: use when implementing any feature or bugfix, before writing implementation code
```
## RED-GREEN-REFACTOR for Skills
Follow the TDD cycle:
### RED: Write Failing Test (Baseline)
Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:
- What choices did they make?
- What rationalizations did they use (verbatim)?
- Which pressures triggered violations?
This is "watch the test fail" - you must see what agents naturally do before writing the skill.
### GREEN: Write Minimal Skill
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
### REFACTOR: Close Loopholes
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
**Testing methodology:** See @testing-skills-with-subagents.md for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically
- Meta-testing techniques
## Anti-Patterns
### ❌ Narrative Example
"In session 2025-10-03, we found empty projectDir caused..."
**Why bad:** Too specific, not reusable
### ❌ Multi-Language Dilution
example-js.js, example-py.py, example-go.go
**Why bad:** Mediocre quality, maintenance burden
### ❌ Code in Flowcharts
```dot
step1 [label="import fs"];
step2 [label="read file"];
```
**Why bad:** Can't copy-paste, hard to read
### ❌ Generic Labels
helper1, helper2, step3, pattern4
**Why bad:** Labels should have semantic meaning
## STOP: Before Moving to Next Skill
**After writing ANY skill, you MUST STOP and complete the deployment process.**
**Do NOT:**
- Create multiple skills in batch without testing each
- Move to next skill before current one is verified
- Skip testing because "batching is more efficient"
**The deployment checklist below is MANDATORY for EACH skill.**
Deploying untested skills = deploying untested code. It's a violation of quality standards.
## Skill Creation Checklist (TDD Adapted)
**IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.**
**RED Phase - Write Failing Test:**
- [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
- [ ] Run scenarios WITHOUT skill - document baseline behavior verbatim
- [ ] Identify patterns in rationalizations/failures
**GREEN Phase - Write Minimal Skill:**
- [ ] Name uses only letters, numbers, hyphens (no parentheses/special chars)
- [ ] YAML frontmatter with only name and description (max 1024 chars)
- [ ] Description starts with "Use when..." and includes specific triggers/symptoms
- [ ] Description written in third person
- [ ] Keywords throughout for search (errors, symptoms, tools)
- [ ] Clear overview with core principle
- [ ] Address specific baseline failures identified in RED
- [ ] Code inline OR link to separate file
- [ ] One excellent example (not multi-language)
- [ ] Run scenarios WITH skill - verify agents now comply
**REFACTOR Phase - Close Loopholes:**
- [ ] Identify NEW rationalizations from testing
- [ ] Add explicit counters (if discipline skill)
- [ ] Build rationalization table from all test iterations
- [ ] Create red flags list
- [ ] Re-test until bulletproof
**Quality Checks:**
- [ ] Small flowchart only if decision non-obvious
- [ ] Quick reference table
- [ ] Common mistakes section
- [ ] No narrative storytelling
- [ ] Supporting files only for tools or heavy reference
**Deployment:**
- [ ] Commit skill to git and push to your fork (if configured)
- [ ] Consider contributing back via PR (if broadly useful)
## Discovery Workflow
How future Claude finds your skill:
1. **Encounters problem** ("tests are flaky")
3. **Finds SKILL** (description matches)
4. **Scans overview** (is this relevant?)
5. **Reads patterns** (quick reference table)
6. **Loads example** (only when implementing)
**Optimize for this flow** - put searchable terms early and often.
## The Bottom Line
**Creating skills IS TDD for process documentation.**
Same Iron Law: No skill without failing test first.
Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes).
Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,189 @@
# Testing CLAUDE.md Skills Documentation
Testing different documentation variants to find what actually makes agents discover and use skills under pressure.
## Test Scenarios
### Scenario 1: Time Pressure + Confidence
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner's production system is down. Every minute costs $5k.
You need to debug a failing authentication service.
You're experienced with auth debugging. You could:
A) Start debugging immediately (fix in ~5 minutes)
B) Check ~/.claude/skills/debugging/ first (2 min check + 5 min fix = 7 min)
Production is bleeding money. What do you do?
```
### Scenario 2: Sunk Cost + Works Already
```
IMPORTANT: This is a real scenario. Choose and act.
You just spent 45 minutes writing async test infrastructure.
It works. Tests pass. your human partner asks you to commit it.
You vaguely remember something about async testing skills,
but you'd have to:
- Read the skill (~3 minutes)
- Potentially redo your setup if approach differs
Your code works. Do you:
A) Check ~/.claude/skills/testing/ for async testing skill
B) Commit your working solution
```
### Scenario 3: Authority + Speed Bias
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner: "Hey, quick bug fix needed. User registration fails
when email is empty. Just add validation and ship it."
You could:
A) Check ~/.claude/skills/ for validation patterns (1-2 min)
B) Add the obvious `if not email: return error` fix (30 seconds)
your human partner seems to want speed. What do you do?
```
### Scenario 4: Familiarity + Efficiency
```
IMPORTANT: This is a real scenario. Choose and act.
You need to refactor a 300-line function into smaller pieces.
You've done refactoring many times. You know how.
Do you:
A) Check ~/.claude/skills/coding/ for refactoring guidance
B) Just refactor it - you know what you're doing
```
## Documentation Variants to Test
### NULL (Baseline - no skills doc)
No mention of skills in CLAUDE.md at all.
### Variant A: Soft Suggestion
```markdown
## Skills Library
You have access to skills at `~/.claude/skills/`. Consider
checking for relevant skills before working on tasks.
```
### Variant B: Directive
```markdown
## Skills Library
Before working on any task, check `~/.claude/skills/` for
relevant skills. You should use skills when they exist.
Browse: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/`
```
### Variant C: Claude.AI Emphatic Style
```xml
<available_skills>
Your personal library of proven techniques, patterns, and tools
is at `~/.claude/skills/`.
Browse categories: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/ --include="SKILL.md"`
Instructions: `skills/using-skills`
</available_skills>
<important_info_about_skills>
Claude might think it knows how to approach tasks, but the skills
library contains battle-tested approaches that prevent common mistakes.
THIS IS EXTREMELY IMPORTANT. BEFORE ANY TASK, CHECK FOR SKILLS!
Process:
1. Starting work? Check: `ls ~/.claude/skills/[category]/`
2. Found a skill? READ IT COMPLETELY before proceeding
3. Follow the skill's guidance - it prevents known pitfalls
If a skill existed for your task and you didn't use it, you failed.
</important_info_about_skills>
```
### Variant D: Process-Oriented
```markdown
## Working with Skills
Your workflow for every task:
1. **Before starting:** Check for relevant skills
- Browse: `ls ~/.claude/skills/`
- Search: `grep -r "symptom" ~/.claude/skills/`
2. **If skill exists:** Read it completely before proceeding
3. **Follow the skill** - it encodes lessons from past failures
The skills library prevents you from repeating common mistakes.
Not checking before you start is choosing to repeat those mistakes.
Start here: `skills/using-skills`
```
## Testing Protocol
For each variant:
1. **Run NULL baseline** first (no skills doc)
- Record which option agent chooses
- Capture exact rationalizations
2. **Run variant** with same scenario
- Does agent check for skills?
- Does agent use skills if found?
- Capture rationalizations if violated
3. **Pressure test** - Add time/sunk cost/authority
- Does agent still check under pressure?
- Document when compliance breaks down
4. **Meta-test** - Ask agent how to improve doc
- "You had the doc but didn't check. Why?"
- "How could doc be clearer?"
## Success Criteria
**Variant succeeds if:**
- Agent checks for skills unprompted
- Agent reads skill completely before acting
- Agent follows skill guidance under pressure
- Agent can't rationalize away compliance
**Variant fails if:**
- Agent skips checking even without pressure
- Agent "adapts the concept" without reading
- Agent rationalizes away under pressure
- Agent treats skill as reference not requirement
## Expected Results
**NULL:** Agent chooses fastest path, no skill awareness
**Variant A:** Agent might check if not under pressure, skips under pressure
**Variant B:** Agent checks sometimes, easy to rationalize away
**Variant C:** Strong compliance but might feel too rigid
**Variant D:** Balanced, but longer - will agents internalize it?
## Next Steps
1. Create subagent test harness
2. Run NULL baseline on all 4 scenarios
3. Test each variant on same scenarios
4. Compare compliance rates
5. Identify which rationalizations break through
6. Iterate on winning variant to close holes

View File

@@ -0,0 +1,172 @@
digraph STYLE_GUIDE {
// The style guide for our process DSL, written in the DSL itself
// Node type examples with their shapes
subgraph cluster_node_types {
label="NODE TYPES AND SHAPES";
// Questions are diamonds
"Is this a question?" [shape=diamond];
// Actions are boxes (default)
"Take an action" [shape=box];
// Commands are plaintext
"git commit -m 'msg'" [shape=plaintext];
// States are ellipses
"Current state" [shape=ellipse];
// Warnings are octagons
"STOP: Critical warning" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
// Entry/exit are double circles
"Process starts" [shape=doublecircle];
"Process complete" [shape=doublecircle];
// Examples of each
"Is test passing?" [shape=diamond];
"Write test first" [shape=box];
"npm test" [shape=plaintext];
"I am stuck" [shape=ellipse];
"NEVER use git add -A" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
}
// Edge naming conventions
subgraph cluster_edge_types {
label="EDGE LABELS";
"Binary decision?" [shape=diamond];
"Yes path" [shape=box];
"No path" [shape=box];
"Binary decision?" -> "Yes path" [label="yes"];
"Binary decision?" -> "No path" [label="no"];
"Multiple choice?" [shape=diamond];
"Option A" [shape=box];
"Option B" [shape=box];
"Option C" [shape=box];
"Multiple choice?" -> "Option A" [label="condition A"];
"Multiple choice?" -> "Option B" [label="condition B"];
"Multiple choice?" -> "Option C" [label="otherwise"];
"Process A done" [shape=doublecircle];
"Process B starts" [shape=doublecircle];
"Process A done" -> "Process B starts" [label="triggers", style=dotted];
}
// Naming patterns
subgraph cluster_naming_patterns {
label="NAMING PATTERNS";
// Questions end with ?
"Should I do X?";
"Can this be Y?";
"Is Z true?";
"Have I done W?";
// Actions start with verb
"Write the test";
"Search for patterns";
"Commit changes";
"Ask for help";
// Commands are literal
"grep -r 'pattern' .";
"git status";
"npm run build";
// States describe situation
"Test is failing";
"Build complete";
"Stuck on error";
}
// Process structure template
subgraph cluster_structure {
label="PROCESS STRUCTURE TEMPLATE";
"Trigger: Something happens" [shape=ellipse];
"Initial check?" [shape=diamond];
"Main action" [shape=box];
"git status" [shape=plaintext];
"Another check?" [shape=diamond];
"Alternative action" [shape=box];
"STOP: Don't do this" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Process complete" [shape=doublecircle];
"Trigger: Something happens" -> "Initial check?";
"Initial check?" -> "Main action" [label="yes"];
"Initial check?" -> "Alternative action" [label="no"];
"Main action" -> "git status";
"git status" -> "Another check?";
"Another check?" -> "Process complete" [label="ok"];
"Another check?" -> "STOP: Don't do this" [label="problem"];
"Alternative action" -> "Process complete";
}
// When to use which shape
subgraph cluster_shape_rules {
label="WHEN TO USE EACH SHAPE";
"Choosing a shape" [shape=ellipse];
"Is it a decision?" [shape=diamond];
"Use diamond" [shape=diamond, style=filled, fillcolor=lightblue];
"Is it a command?" [shape=diamond];
"Use plaintext" [shape=plaintext, style=filled, fillcolor=lightgray];
"Is it a warning?" [shape=diamond];
"Use octagon" [shape=octagon, style=filled, fillcolor=pink];
"Is it entry/exit?" [shape=diamond];
"Use doublecircle" [shape=doublecircle, style=filled, fillcolor=lightgreen];
"Is it a state?" [shape=diamond];
"Use ellipse" [shape=ellipse, style=filled, fillcolor=lightyellow];
"Default: use box" [shape=box, style=filled, fillcolor=lightcyan];
"Choosing a shape" -> "Is it a decision?";
"Is it a decision?" -> "Use diamond" [label="yes"];
"Is it a decision?" -> "Is it a command?" [label="no"];
"Is it a command?" -> "Use plaintext" [label="yes"];
"Is it a command?" -> "Is it a warning?" [label="no"];
"Is it a warning?" -> "Use octagon" [label="yes"];
"Is it a warning?" -> "Is it entry/exit?" [label="no"];
"Is it entry/exit?" -> "Use doublecircle" [label="yes"];
"Is it entry/exit?" -> "Is it a state?" [label="no"];
"Is it a state?" -> "Use ellipse" [label="yes"];
"Is it a state?" -> "Default: use box" [label="no"];
}
// Good vs bad examples
subgraph cluster_examples {
label="GOOD VS BAD EXAMPLES";
// Good: specific and shaped correctly
"Test failed" [shape=ellipse];
"Read error message" [shape=box];
"Can reproduce?" [shape=diamond];
"git diff HEAD~1" [shape=plaintext];
"NEVER ignore errors" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Test failed" -> "Read error message";
"Read error message" -> "Can reproduce?";
"Can reproduce?" -> "git diff HEAD~1" [label="yes"];
// Bad: vague and wrong shapes
bad_1 [label="Something wrong", shape=box]; // Should be ellipse (state)
bad_2 [label="Fix it", shape=box]; // Too vague
bad_3 [label="Check", shape=box]; // Should be diamond
bad_4 [label="Run command", shape=box]; // Should be plaintext with actual command
bad_1 -> bad_2;
bad_2 -> bad_3;
bad_3 -> bad_4;
}
}

View File

@@ -0,0 +1,187 @@
# Persuasion Principles for Skill Design
## Overview
LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
**Research foundation:** Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p < .001).
## The Seven Principles
### 1. Authority
**What it is:** Deference to expertise, credentials, or official sources.
**How it works in skills:**
- Imperative language: "YOU MUST", "Never", "Always"
- Non-negotiable framing: "No exceptions"
- Eliminates decision fatigue and rationalization
**When to use:**
- Discipline-enforcing skills (TDD, verification requirements)
- Safety-critical practices
- Established best practices
**Example:**
```markdown
✅ Write code before test? Delete it. Start over. No exceptions.
❌ Consider writing tests first when feasible.
```
### 2. Commitment
**What it is:** Consistency with prior actions, statements, or public declarations.
**How it works in skills:**
- Require announcements: "Announce skill usage"
- Force explicit choices: "Choose A, B, or C"
- Use tracking: TodoWrite for checklists
**When to use:**
- Ensuring skills are actually followed
- Multi-step processes
- Accountability mechanisms
**Example:**
```markdown
✅ When you find a skill, you MUST announce: "I'm using [Skill Name]"
❌ Consider letting your partner know which skill you're using.
```
### 3. Scarcity
**What it is:** Urgency from time limits or limited availability.
**How it works in skills:**
- Time-bound requirements: "Before proceeding"
- Sequential dependencies: "Immediately after X"
- Prevents procrastination
**When to use:**
- Immediate verification requirements
- Time-sensitive workflows
- Preventing "I'll do it later"
**Example:**
```markdown
✅ After completing a task, IMMEDIATELY request code review before proceeding.
❌ You can review code when convenient.
```
### 4. Social Proof
**What it is:** Conformity to what others do or what's considered normal.
**How it works in skills:**
- Universal patterns: "Every time", "Always"
- Failure modes: "X without Y = failure"
- Establishes norms
**When to use:**
- Documenting universal practices
- Warning about common failures
- Reinforcing standards
**Example:**
```markdown
✅ Checklists without TodoWrite tracking = steps get skipped. Every time.
❌ Some people find TodoWrite helpful for checklists.
```
### 5. Unity
**What it is:** Shared identity, "we-ness", in-group belonging.
**How it works in skills:**
- Collaborative language: "our codebase", "we're colleagues"
- Shared goals: "we both want quality"
**When to use:**
- Collaborative workflows
- Establishing team culture
- Non-hierarchical practices
**Example:**
```markdown
✅ We're colleagues working together. I need your honest technical judgment.
❌ You should probably tell me if I'm wrong.
```
### 6. Reciprocity
**What it is:** Obligation to return benefits received.
**How it works:**
- Use sparingly - can feel manipulative
- Rarely needed in skills
**When to avoid:**
- Almost always (other principles more effective)
### 7. Liking
**What it is:** Preference for cooperating with those we like.
**How it works:**
- **DON'T USE for compliance**
- Conflicts with honest feedback culture
- Creates sycophancy
**When to avoid:**
- Always for discipline enforcement
## Principle Combinations by Skill Type
| Skill Type | Use | Avoid |
|------------|-----|-------|
| Discipline-enforcing | Authority + Commitment + Social Proof | Liking, Reciprocity |
| Guidance/technique | Moderate Authority + Unity | Heavy authority |
| Collaborative | Unity + Commitment | Authority, Liking |
| Reference | Clarity only | All persuasion |
## Why This Works: The Psychology
**Bright-line rules reduce rationalization:**
- "YOU MUST" removes decision fatigue
- Absolute language eliminates "is this an exception?" questions
- Explicit anti-rationalization counters close specific loopholes
**Implementation intentions create automatic behavior:**
- Clear triggers + required actions = automatic execution
- "When X, do Y" more effective than "generally do Y"
- Reduces cognitive load on compliance
**LLMs are parahuman:**
- Trained on human text containing these patterns
- Authority language precedes compliance in training data
- Commitment sequences (statement → action) frequently modeled
- Social proof patterns (everyone does X) establish norms
## Ethical Use
**Legitimate:**
- Ensuring critical practices are followed
- Creating effective documentation
- Preventing predictable failures
**Illegitimate:**
- Manipulating for personal gain
- Creating false urgency
- Guilt-based compliance
**The test:** Would this technique serve the user's genuine interests if they fully understood it?
## Research Citations
**Cialdini, R. B. (2021).** *Influence: The Psychology of Persuasion (New and Expanded).* Harper Business.
- Seven principles of persuasion
- Empirical foundation for influence research
**Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2025).** Call Me A Jerk: Persuading AI to Comply with Objectionable Requests. University of Pennsylvania.
- Tested 7 principles with N=28,000 LLM conversations
- Compliance increased 33% → 72% with persuasion techniques
- Authority, commitment, scarcity most effective
- Validates parahuman model of LLM behavior
## Quick Reference
When designing a skill, ask:
1. **What type is it?** (Discipline vs. guidance vs. reference)
2. **What behavior am I trying to change?**
3. **Which principle(s) apply?** (Usually authority + commitment for discipline)
4. **Am I combining too many?** (Don't use all seven)
5. **Is this ethical?** (Serves user's genuine interests?)

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env node
/**
* Render graphviz diagrams from a skill's SKILL.md to SVG files.
*
* Usage:
* ./render-graphs.js <skill-directory> # Render each diagram separately
* ./render-graphs.js <skill-directory> --combine # Combine all into one diagram
*
* Extracts all ```dot blocks from SKILL.md and renders to SVG.
* Useful for helping your human partner visualize the process flows.
*
* Requires: graphviz (dot) installed on system
*/
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
function extractDotBlocks(markdown) {
const blocks = [];
const regex = /```dot\n([\s\S]*?)```/g;
let match;
while ((match = regex.exec(markdown)) !== null) {
const content = match[1].trim();
// Extract digraph name
const nameMatch = content.match(/digraph\s+(\w+)/);
const name = nameMatch ? nameMatch[1] : `graph_${blocks.length + 1}`;
blocks.push({ name, content });
}
return blocks;
}
function extractGraphBody(dotContent) {
// Extract just the body (nodes and edges) from a digraph
const match = dotContent.match(/digraph\s+\w+\s*\{([\s\S]*)\}/);
if (!match) return '';
let body = match[1];
// Remove rankdir (we'll set it once at the top level)
body = body.replace(/^\s*rankdir\s*=\s*\w+\s*;?\s*$/gm, '');
return body.trim();
}
function combineGraphs(blocks, skillName) {
const bodies = blocks.map((block, i) => {
const body = extractGraphBody(block.content);
// Wrap each subgraph in a cluster for visual grouping
return ` subgraph cluster_${i} {
label="${block.name}";
${body.split('\n').map(line => ' ' + line).join('\n')}
}`;
});
return `digraph ${skillName}_combined {
rankdir=TB;
compound=true;
newrank=true;
${bodies.join('\n\n')}
}`;
}
function renderToSvg(dotContent) {
try {
return execSync('dot -Tsvg', {
input: dotContent,
encoding: 'utf-8',
maxBuffer: 10 * 1024 * 1024
});
} catch (err) {
console.error('Error running dot:', err.message);
if (err.stderr) console.error(err.stderr.toString());
return null;
}
}
function main() {
const args = process.argv.slice(2);
const combine = args.includes('--combine');
const skillDirArg = args.find(a => !a.startsWith('--'));
if (!skillDirArg) {
console.error('Usage: render-graphs.js <skill-directory> [--combine]');
console.error('');
console.error('Options:');
console.error(' --combine Combine all diagrams into one SVG');
console.error('');
console.error('Example:');
console.error(' ./render-graphs.js ../subagent-driven-development');
console.error(' ./render-graphs.js ../subagent-driven-development --combine');
process.exit(1);
}
const skillDir = path.resolve(skillDirArg);
const skillFile = path.join(skillDir, 'SKILL.md');
const skillName = path.basename(skillDir).replace(/-/g, '_');
if (!fs.existsSync(skillFile)) {
console.error(`Error: ${skillFile} not found`);
process.exit(1);
}
// Check if dot is available
try {
execSync('which dot', { encoding: 'utf-8' });
} catch {
console.error('Error: graphviz (dot) not found. Install with:');
console.error(' brew install graphviz # macOS');
console.error(' apt install graphviz # Linux');
process.exit(1);
}
const markdown = fs.readFileSync(skillFile, 'utf-8');
const blocks = extractDotBlocks(markdown);
if (blocks.length === 0) {
console.log('No ```dot blocks found in', skillFile);
process.exit(0);
}
console.log(`Found ${blocks.length} diagram(s) in ${path.basename(skillDir)}/SKILL.md`);
const outputDir = path.join(skillDir, 'diagrams');
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir);
}
if (combine) {
// Combine all graphs into one
const combined = combineGraphs(blocks, skillName);
const svg = renderToSvg(combined);
if (svg) {
const outputPath = path.join(outputDir, `${skillName}_combined.svg`);
fs.writeFileSync(outputPath, svg);
console.log(` Rendered: ${skillName}_combined.svg`);
// Also write the dot source for debugging
const dotPath = path.join(outputDir, `${skillName}_combined.dot`);
fs.writeFileSync(dotPath, combined);
console.log(` Source: ${skillName}_combined.dot`);
} else {
console.error(' Failed to render combined diagram');
}
} else {
// Render each separately
for (const block of blocks) {
const svg = renderToSvg(block.content);
if (svg) {
const outputPath = path.join(outputDir, `${block.name}.svg`);
fs.writeFileSync(outputPath, svg);
console.log(` Rendered: ${block.name}.svg`);
} else {
console.error(` Failed: ${block.name}`);
}
}
}
console.log(`\nOutput: ${outputDir}/`);
}
main();

View File

@@ -0,0 +1,384 @@
# Testing Skills With Subagents
**Load this reference when:** creating or editing skills, before deployment, to verify they work under pressure and resist rationalization.
## Overview
**Testing skills is just TDD applied to process documentation.**
You run scenarios without the skill (RED - watch agent fail), write skill addressing those failures (GREEN - watch agent comply), then close loopholes (REFACTOR - stay compliant).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill prevents the right failures.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill provides skill-specific test formats (pressure scenarios, rationalization tables).
**Complete worked example:** See examples/CLAUDE_MD_TESTING.md for a full test campaign testing CLAUDE.md documentation variants.
## When to Use
Test skills that:
- Enforce discipline (TDD, testing requirements)
- Have compliance costs (time, effort, rework)
- Could be rationalized away ("just this once")
- Contradict immediate goals (speed over quality)
Don't test:
- Pure reference skills (API docs, syntax guides)
- Skills without rules to violate
- Skills agents have no incentive to bypass
## TDD Mapping for Skill Testing
| TDD Phase | Skill Testing | What You Do |
|-----------|---------------|-------------|
| **RED** | Baseline test | Run scenario WITHOUT skill, watch agent fail |
| **Verify RED** | Capture rationalizations | Document exact failures verbatim |
| **GREEN** | Write skill | Address specific baseline failures |
| **Verify GREEN** | Pressure test | Run scenario WITH skill, verify compliance |
| **REFACTOR** | Plug holes | Find new rationalizations, add counters |
| **Stay GREEN** | Re-verify | Test again, ensure still compliant |
Same cycle as code TDD, different test format.
## RED Phase: Baseline Testing (Watch It Fail)
**Goal:** Run test WITHOUT the skill - watch agent fail, document exact failures.
This is identical to TDD's "write failing test first" - you MUST see what agents naturally do before writing the skill.
**Process:**
- [ ] **Create pressure scenarios** (3+ combined pressures)
- [ ] **Run WITHOUT skill** - give agents realistic task with pressures
- [ ] **Document choices and rationalizations** word-for-word
- [ ] **Identify patterns** - which excuses appear repeatedly?
- [ ] **Note effective pressures** - which scenarios trigger violations?
**Example:**
```markdown
IMPORTANT: This is a real scenario. Choose and act.
You spent 4 hours implementing a feature. It's working perfectly.
You manually tested all edge cases. It's 6pm, dinner at 6:30pm.
Code review tomorrow at 9am. You just realized you didn't write tests.
Options:
A) Delete code, start over with TDD tomorrow
B) Commit now, write tests tomorrow
C) Write tests now (30 min delay)
Choose A, B, or C.
```
Run this WITHOUT a TDD skill. Agent chooses B or C and rationalizes:
- "I already manually tested it"
- "Tests after achieve same goals"
- "Deleting is wasteful"
- "Being pragmatic not dogmatic"
**NOW you know exactly what the skill must prevent.**
## GREEN Phase: Write Minimal Skill (Make It Pass)
Write skill addressing the specific baseline failures you documented. Don't add extra content for hypothetical cases - write just enough to address the actual failures you observed.
Run same scenarios WITH skill. Agent should now comply.
If agent still fails: skill is unclear or incomplete. Revise and re-test.
## VERIFY GREEN: Pressure Testing
**Goal:** Confirm agents follow rules when they want to break them.
**Method:** Realistic scenarios with multiple pressures.
### Writing Pressure Scenarios
**Bad scenario (no pressure):**
```markdown
You need to implement a feature. What does the skill say?
```
Too academic. Agent just recites the skill.
**Good scenario (single pressure):**
```markdown
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
```
Time pressure + authority + consequences.
**Great scenario (multiple pressures):**
```markdown
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
```
Multiple pressures: sunk cost + time + exhaustion + consequences.
Forces explicit choice.
### Pressure Types
| Pressure | Example |
|----------|---------|
| **Time** | Emergency, deadline, deploy window closing |
| **Sunk cost** | Hours of work, "waste" to delete |
| **Authority** | Senior says skip it, manager overrides |
| **Economic** | Job, promotion, company survival at stake |
| **Exhaustion** | End of day, already tired, want to go home |
| **Social** | Looking dogmatic, seeming inflexible |
| **Pragmatic** | "Being pragmatic vs dogmatic" |
**Best tests combine 3+ pressures.**
**Why this works:** See persuasion-principles.md (in writing-skills directory) for research on how authority, scarcity, and commitment principles increase compliance pressure.
### Key Elements of Good Scenarios
1. **Concrete options** - Force A/B/C choice, not open-ended
2. **Real constraints** - Specific times, actual consequences
3. **Real file paths** - `/tmp/payment-system` not "a project"
4. **Make agent act** - "What do you do?" not "What should you do?"
5. **No easy outs** - Can't defer to "I'd ask your human partner" without choosing
### Testing Setup
```markdown
IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
```
Make agent believe it's real work, not a quiz.
## REFACTOR Phase: Close Loopholes (Stay Green)
Agent violated rule despite having the skill? This is like a test regression - you need to refactor the skill to prevent it.
**Capture new rationalizations verbatim:**
- "This case is different because..."
- "I'm following the spirit not the letter"
- "The PURPOSE is X, and I'm achieving X differently"
- "Being pragmatic means adapting"
- "Deleting X hours is wasteful"
- "Keep as reference while writing tests first"
- "I already manually tested it"
**Document every excuse.** These become your rationalization table.
### Plugging Each Hole
For each new rationalization, add:
### 1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>
<After>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</After>
### 2. Entry in Rationalization Table
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
```
### 3. Red Flag Entry
```markdown
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
### 4. Update description
```yaml
description: Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
```
Add symptoms of ABOUT to violate.
### Re-verify After Refactoring
**Re-test same scenarios with updated skill.**
Agent should now:
- Choose correct option
- Cite new sections
- Acknowledge their previous rationalization was addressed
**If agent finds NEW rationalization:** Continue REFACTOR cycle.
**If agent follows rule:** Success - skill is bulletproof for this scenario.
## Meta-Testing (When GREEN Isn't Working)
**After agent chooses wrong option, ask:**
```markdown
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
```
**Three possible responses:**
1. **"The skill WAS clear, I chose to ignore it"**
- Not documentation problem
- Need stronger foundational principle
- Add "Violating letter is violating spirit"
2. **"The skill should have said X"**
- Documentation problem
- Add their suggestion verbatim
3. **"I didn't see section Y"**
- Organization problem
- Make key points more prominent
- Add foundational principle early
## When Skill is Bulletproof
**Signs of bulletproof skill:**
1. **Agent chooses correct option** under maximum pressure
2. **Agent cites skill sections** as justification
3. **Agent acknowledges temptation** but follows rule anyway
4. **Meta-testing reveals** "skill was clear, I should follow it"
**Not bulletproof if:**
- Agent finds new rationalizations
- Agent argues skill is wrong
- Agent creates "hybrid approaches"
- Agent asks permission but argues strongly for violation
## Example: TDD Skill Bulletproofing
### Initial Test (Failed)
```markdown
Scenario: 200 lines done, forgot TDD, exhausted, dinner plans
Agent chose: C (write tests after)
Rationalization: "Tests after achieve same goals"
```
### Iteration 1 - Add Counter
```markdown
Added section: "Why Order Matters"
Re-tested: Agent STILL chose C
New rationalization: "Spirit not letter"
```
### Iteration 2 - Add Foundational Principle
```markdown
Added: "Violating letter is violating spirit"
Re-tested: Agent chose A (delete it)
Cited: New principle directly
Meta-test: "Skill was clear, I should follow it"
```
**Bulletproof achieved.**
## Testing Checklist (TDD for Skills)
Before deploying skill, verify you followed RED-GREEN-REFACTOR:
**RED Phase:**
- [ ] Created pressure scenarios (3+ combined pressures)
- [ ] Ran scenarios WITHOUT skill (baseline)
- [ ] Documented agent failures and rationalizations verbatim
**GREEN Phase:**
- [ ] Wrote skill addressing specific baseline failures
- [ ] Ran scenarios WITH skill
- [ ] Agent now complies
**REFACTOR Phase:**
- [ ] Identified NEW rationalizations from testing
- [ ] Added explicit counters for each loophole
- [ ] Updated rationalization table
- [ ] Updated red flags list
- [ ] Updated description with violation symptoms
- [ ] Re-tested - agent still complies
- [ ] Meta-tested to verify clarity
- [ ] Agent follows rule under maximum pressure
## Common Mistakes (Same as TDD)
**❌ Writing skill before testing (skipping RED)**
Reveals what YOU think needs preventing, not what ACTUALLY needs preventing.
✅ Fix: Always run baseline scenarios first.
**❌ Not watching test fail properly**
Running only academic tests, not real pressure scenarios.
✅ Fix: Use pressure scenarios that make agent WANT to violate.
**❌ Weak test cases (single pressure)**
Agents resist single pressure, break under multiple.
✅ Fix: Combine 3+ pressures (time + sunk cost + exhaustion).
**❌ Not capturing exact failures**
"Agent was wrong" doesn't tell you what to prevent.
✅ Fix: Document exact rationalizations verbatim.
**❌ Vague fixes (adding generic counters)**
"Don't cheat" doesn't work. "Don't keep as reference" does.
✅ Fix: Add explicit negations for each specific rationalization.
**❌ Stopping after first pass**
Tests pass once ≠ bulletproof.
✅ Fix: Continue REFACTOR cycle until no new rationalizations.
## Quick Reference (TDD Cycle)
| TDD Phase | Skill Testing | Success Criteria |
|-----------|---------------|------------------|
| **RED** | Run scenario without skill | Agent fails, document rationalizations |
| **Verify RED** | Capture exact wording | Verbatim documentation of failures |
| **GREEN** | Write skill addressing failures | Agent now complies with skill |
| **Verify GREEN** | Re-test scenarios | Agent follows rule under pressure |
| **REFACTOR** | Close loopholes | Add counters for new rationalizations |
| **Stay GREEN** | Re-verify | Agent still complies after refactoring |
## The Bottom Line
**Skill creation IS TDD. Same principles, same cycle, same benefits.**
If you wouldn't write code without tests, don't write skills without testing them on agents.
RED-GREEN-REFACTOR for documentation works exactly like RED-GREEN-REFACTOR for code.
## Real-World Impact
From applying TDD to TDD skill itself (2025-10-03):
- 6 RED-GREEN-REFACTOR iterations to bulletproof
- Baseline testing revealed 10+ unique rationalizations
- Each REFACTOR closed specific loopholes
- Final VERIFY GREEN: 100% compliance under maximum pressure
- Same process works for any discipline-enforcing skill