Reorganize: Move all skills to skills/ folder

- Created skills/ directory
- Moved 272 skills to skills/ subfolder
- Kept agents/ at root level
- Kept installation scripts and docs at root level

Repository structure:
- skills/           - All 272 skills from skills.sh
- agents/           - Agent definitions
- *.sh, *.ps1       - Installation scripts
- README.md, etc.   - Documentation

Co-Authored-By: Claude <noreply@anthropic.com>
admin committed 2026-01-23 18:05:17 +00:00
parent 2b4e974878
commit b723e2bd7d
4083 changed files with 1056 additions and 1098063 deletions

@@ -0,0 +1,83 @@
{
"name": "superpowers-marketplace",
"owner": {
"name": "Jesse Vincent",
"email": "jesse@fsck.com"
},
"metadata": {
"description": "Skills, workflows, and productivity tools",
"version": "1.0.9"
},
"plugins": [
{
"name": "superpowers",
"source": {
"source": "url",
"url": "https://github.com/obra/superpowers.git"
},
"description": "Core skills library: TDD, debugging, collaboration patterns, and proven techniques",
"version": "4.0.3",
"strict": true
},
{
"name": "superpowers-chrome",
"source": {
"source": "url",
"url": "https://github.com/obra/superpowers-chrome.git"
},
"description": "BETA: VERY LIGHTLY TESTED - Direct Chrome DevTools Protocol access via 'browsing' skill. Skill mode (17 CLI commands) + MCP mode (single use_browser tool). Zero dependencies, auto-starts Chrome.",
"version": "1.6.2",
"strict": true
},
{
"name": "elements-of-style",
"source": {
"source": "url",
"url": "https://github.com/obra/the-elements-of-style.git"
},
"description": "Writing guidance based on William Strunk Jr.'s The Elements of Style (1918) - foundational rules for clear, concise, grammatically correct writing",
"version": "1.0.0",
"strict": true
},
{
"name": "episodic-memory",
"source": {
"source": "url",
"url": "https://github.com/obra/episodic-memory.git"
},
"description": "Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns across sessions. Gives you memory that persists between sessions.",
"version": "1.0.15",
"strict": true
},
{
"name": "superpowers-lab",
"source": {
"source": "url",
"url": "https://github.com/obra/superpowers-lab.git"
},
"description": "Experimental skills for Superpowers: Control interactive CLI tools (vim, menuconfig, REPLs, git rebase -i) through tmux automation",
"version": "0.1.0",
"strict": true
},
{
"name": "superpowers-developing-for-claude-code",
"source": {
"source": "url",
"url": "https://github.com/obra/superpowers-developing-for-claude-code.git"
},
"description": "Skills and resources for developing Claude Code plugins, skills, MCP servers, and extensions. Includes comprehensive official documentation and self-update mechanism.",
"version": "0.3.1",
"strict": true
},
{
"name": "double-shot-latte",
"source": {
"source": "url",
"url": "https://github.com/obra/double-shot-latte.git"
},
"description": "Stop 'Would you like me to continue?' interruptions. Automatically evaluates whether Claude should continue working using Claude-judged decision making.",
"version": "1.1.5",
"strict": true
}
]
}

@@ -0,0 +1,13 @@
{
"permissions": {
"allow": [
"Bash(python3:*)",
"mcp__plugin_episodic-memory_episodic-memory__search",
"Bash(git add:*)",
"Bash(git commit:*)",
"Bash(git push)"
],
"deny": [],
"ask": []
}
}

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Jesse Vincent
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -0,0 +1,96 @@
# Superpowers Marketplace
Curated Claude Code plugins for skills, workflows, and productivity tools.
## Installation
Add this marketplace to Claude Code:
```bash
/plugin marketplace add obra/superpowers-marketplace
```
## Available Plugins
### Superpowers (Core)
**Description:** Core skills library with TDD, debugging, collaboration patterns, and proven techniques
**Categories:** Testing, Debugging, Collaboration, Meta
**Install:**
```bash
/plugin install superpowers@superpowers-marketplace
```
**What you get:**
- 20+ battle-tested skills
- `/brainstorm`, `/write-plan`, `/execute-plan` commands
- Skills-search tool for discovery
- SessionStart context injection
**Repository:** https://github.com/obra/superpowers
---
### Elements of Style
**Description:** Writing guidance based on William Strunk Jr.'s The Elements of Style (1918)
**Categories:** Writing, Documentation, Reference
**Install:**
```bash
/plugin install elements-of-style@superpowers-marketplace
```
**What you get:**
- `writing-clearly-and-concisely` skill
- Complete 1918 reference text (~12k tokens)
- All 18 rules for clear, concise writing
- Grammar, punctuation, and composition guidance
**Repository:** https://github.com/obra/the-elements-of-style
---
### Superpowers: Developing for Claude Code
**Description:** Skills and resources for developing Claude Code plugins, skills, MCP servers, and extensions
**Categories:** Development, Documentation, Claude Code, Plugin Development
**Install:**
```bash
/plugin install superpowers-developing-for-claude-code@superpowers-marketplace
```
**What you get:**
- `working-with-claude-code` skill with 42+ official documentation files
- `developing-claude-code-plugins` skill for streamlined development workflows
- Self-update mechanism for documentation
- Complete reference for plugin development, skills, MCP servers, and extensions
**Repository:** https://github.com/obra/superpowers-developing-for-claude-code
---
## Marketplace Structure
```
superpowers-marketplace/
├── .claude-plugin/
│ └── marketplace.json # Plugin catalog
└── README.md # This file
```
## Support
- **Issues**: https://github.com/obra/superpowers-marketplace/issues
- **Core Plugin**: https://github.com/obra/superpowers
## License
Marketplace metadata: MIT License
Individual plugins: See respective plugin licenses

@@ -0,0 +1,20 @@
{
"name": "superpowers-dev",
"description": "Development marketplace for Superpowers core skills library",
"owner": {
"name": "Jesse Vincent",
"email": "jesse@fsck.com"
},
"plugins": [
{
"name": "superpowers",
"description": "Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques",
"version": "4.0.3",
"source": "./",
"author": {
"name": "Jesse Vincent",
"email": "jesse@fsck.com"
}
}
]
}

@@ -0,0 +1,13 @@
{
"name": "superpowers",
"description": "Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques",
"version": "4.0.3",
"author": {
"name": "Jesse Vincent",
"email": "jesse@fsck.com"
},
"homepage": "https://github.com/obra/superpowers",
"repository": "https://github.com/obra/superpowers",
"license": "MIT",
"keywords": ["skills", "tdd", "debugging", "collaboration", "best-practices", "workflows"]
}

@@ -0,0 +1,35 @@
# Installing Superpowers for Codex
Quick setup to enable superpowers skills in Codex.
## Installation
1. **Clone superpowers repository**:
```bash
mkdir -p ~/.codex/superpowers
cd ~/.codex/superpowers
git clone https://github.com/obra/superpowers.git .
```
2. **Create personal skills directory**:
```bash
mkdir -p ~/.codex/skills
```
3. **Update ~/.codex/AGENTS.md** to include this superpowers section:
```markdown
## Superpowers System
<EXTREMELY_IMPORTANT>
You have superpowers. Superpowers teach you new skills and capabilities. RIGHT NOW run: `~/.codex/superpowers/.codex/superpowers-codex bootstrap` and follow the instructions it returns.
</EXTREMELY_IMPORTANT>
```
## Verification
Test the installation:
```bash
~/.codex/superpowers/.codex/superpowers-codex bootstrap
```
You should see skill listings and bootstrap instructions. The system is now ready for use.
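Once the bootstrap succeeds, individual skills can be loaded on demand with the same script's `use-skill` command, for example:
```bash
~/.codex/superpowers/.codex/superpowers-codex use-skill superpowers:brainstorming
```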

@@ -0,0 +1,33 @@
# Superpowers Bootstrap for Codex
<EXTREMELY_IMPORTANT>
You have superpowers.
**Tool for running skills:**
- `~/.codex/superpowers/.codex/superpowers-codex use-skill <skill-name>`
**Tool Mapping for Codex:**
When skills reference tools you don't have, substitute your equivalent tools:
- `TodoWrite` → `update_plan` (your planning/task tracking tool)
- `Task` tool with subagents → Tell the user that subagents aren't available in Codex yet and you'll do the work the subagent would do
- `Skill` tool → `~/.codex/superpowers/.codex/superpowers-codex use-skill` command (already available)
- `Read`, `Write`, `Edit`, `Bash` → Use your native tools with similar functions
**Skills naming:**
- Superpowers skills: `superpowers:skill-name` (from ~/.codex/superpowers/skills/)
- Personal skills: `skill-name` (from ~/.codex/skills/)
- Personal skills override superpowers skills when names match
**Critical Rules:**
- Before ANY task, review the skills list (shown below)
- If a relevant skill exists, you MUST use `~/.codex/superpowers/.codex/superpowers-codex use-skill` to load it
- Announce: "I've read the [Skill Name] skill and I'm using it to [purpose]"
- Skills with checklists require `update_plan` todos for each item
- NEVER skip mandatory workflows (brainstorming before coding, TDD, systematic debugging)
**Skills location:**
- Superpowers skills: ~/.codex/superpowers/skills/
- Personal skills: ~/.codex/skills/ (override superpowers when names match)
IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
</EXTREMELY_IMPORTANT>

@@ -0,0 +1,267 @@
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');
const skillsCore = require('../lib/skills-core');
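// Shared helpers from ../lib/skills-core used throughout this script:
//   extractFrontmatter(skillFile) -> { name, description } parsed from SKILL.md frontmatter
//   stripFrontmatter(content)     -> skill body text with the frontmatter block removed
//   findSkillsInDir(dir, sourceType, depth) -> skills discovered under dir (depth appears to limit recursion)
//   checkForUpdates(repoDir)      -> true when the local clone is behind its upstream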
// Paths
const homeDir = os.homedir();
const superpowersSkillsDir = path.join(homeDir, '.codex', 'superpowers', 'skills');
const personalSkillsDir = path.join(homeDir, '.codex', 'skills');
const bootstrapFile = path.join(homeDir, '.codex', 'superpowers', '.codex', 'superpowers-bootstrap.md');
const superpowersRepoDir = path.join(homeDir, '.codex', 'superpowers');
// Utility functions
function printSkill(skillPath, sourceType) {
const skillFile = path.join(skillPath, 'SKILL.md');
const relPath = sourceType === 'personal'
? path.relative(personalSkillsDir, skillPath)
: path.relative(superpowersSkillsDir, skillPath);
// Print skill name with namespace
if (sourceType === 'personal') {
console.log(relPath.replace(/\\/g, '/')); // Personal skills are not namespaced
} else {
console.log(`superpowers:${relPath.replace(/\\/g, '/')}`); // Superpowers skills get superpowers namespace
}
// Extract and print metadata
const { name, description } = skillsCore.extractFrontmatter(skillFile);
if (description) console.log(` ${description}`);
console.log('');
}
// Commands
function runFindSkills() {
console.log('Available skills:');
console.log('==================');
console.log('');
const foundSkills = new Set();
// Find personal skills first (these take precedence)
const personalSkills = skillsCore.findSkillsInDir(personalSkillsDir, 'personal', 2);
for (const skill of personalSkills) {
const relPath = path.relative(personalSkillsDir, skill.path);
foundSkills.add(relPath);
printSkill(skill.path, 'personal');
}
// Find superpowers skills (only if not already found in personal)
const superpowersSkills = skillsCore.findSkillsInDir(superpowersSkillsDir, 'superpowers', 1);
for (const skill of superpowersSkills) {
const relPath = path.relative(superpowersSkillsDir, skill.path);
if (!foundSkills.has(relPath)) {
printSkill(skill.path, 'superpowers');
}
}
console.log('Usage:');
console.log(' superpowers-codex use-skill <skill-name> # Load a specific skill');
console.log('');
console.log('Skill naming:');
console.log(' Superpowers skills: superpowers:skill-name (from ~/.codex/superpowers/skills/)');
console.log(' Personal skills: skill-name (from ~/.codex/skills/)');
console.log(' Personal skills override superpowers skills when names match.');
console.log('');
console.log('Note: All skills are disclosed at session start via bootstrap.');
}
function runBootstrap() {
console.log('# Superpowers Bootstrap for Codex');
console.log('# ================================');
console.log('');
// Check for updates (with timeout protection)
if (skillsCore.checkForUpdates(superpowersRepoDir)) {
console.log('## Update Available');
console.log('');
console.log('⚠️ Your superpowers installation is behind the latest version.');
console.log('To update, run: `cd ~/.codex/superpowers && git pull`');
console.log('');
console.log('---');
console.log('');
}
// Show the bootstrap instructions
if (fs.existsSync(bootstrapFile)) {
console.log('## Bootstrap Instructions:');
console.log('');
try {
const content = fs.readFileSync(bootstrapFile, 'utf8');
console.log(content);
} catch (error) {
console.log(`Error reading bootstrap file: ${error.message}`);
}
console.log('');
console.log('---');
console.log('');
}
// Run find-skills to show available skills
console.log('## Available Skills:');
console.log('');
runFindSkills();
console.log('');
console.log('---');
console.log('');
// Load the using-superpowers skill automatically
console.log('## Auto-loading superpowers:using-superpowers skill:');
console.log('');
runUseSkill('superpowers:using-superpowers');
console.log('');
console.log('---');
console.log('');
console.log('# Bootstrap Complete!');
console.log('# You now have access to all superpowers skills.');
console.log('# Use "superpowers-codex use-skill <skill>" to load and apply skills.');
console.log('# Remember: If a skill applies to your task, you MUST use it!');
}
function runUseSkill(skillName) {
if (!skillName) {
console.log('Usage: superpowers-codex use-skill <skill-name>');
console.log('Examples:');
console.log(' superpowers-codex use-skill superpowers:brainstorming # Load superpowers skill');
console.log(' superpowers-codex use-skill brainstorming # Load personal skill (or superpowers if not found)');
console.log(' superpowers-codex use-skill my-custom-skill # Load personal skill');
return;
}
// Handle namespaced skill names
let actualSkillPath;
let forceSuperpowers = false;
if (skillName.startsWith('superpowers:')) {
// Remove the superpowers: namespace prefix
actualSkillPath = skillName.substring('superpowers:'.length);
forceSuperpowers = true;
} else {
actualSkillPath = skillName;
}
// Remove "skills/" prefix if present
if (actualSkillPath.startsWith('skills/')) {
actualSkillPath = actualSkillPath.substring('skills/'.length);
}
// Function to find skill file
function findSkillFile(searchPath) {
// Check for exact match with SKILL.md
const skillMdPath = path.join(searchPath, 'SKILL.md');
if (fs.existsSync(skillMdPath)) {
return skillMdPath;
}
// Check for direct SKILL.md file
if (searchPath.endsWith('SKILL.md') && fs.existsSync(searchPath)) {
return searchPath;
}
return null;
}
let skillFile = null;
// If superpowers: namespace was used, only check superpowers skills
if (forceSuperpowers) {
if (fs.existsSync(superpowersSkillsDir)) {
const superpowersPath = path.join(superpowersSkillsDir, actualSkillPath);
skillFile = findSkillFile(superpowersPath);
}
} else {
// First check personal skills directory (takes precedence)
if (fs.existsSync(personalSkillsDir)) {
const personalPath = path.join(personalSkillsDir, actualSkillPath);
skillFile = findSkillFile(personalPath);
if (skillFile) {
console.log(`# Loading personal skill: ${actualSkillPath}`);
console.log(`# Source: ${skillFile}`);
console.log('');
}
}
// If not found in personal, check superpowers skills
if (!skillFile && fs.existsSync(superpowersSkillsDir)) {
const superpowersPath = path.join(superpowersSkillsDir, actualSkillPath);
skillFile = findSkillFile(superpowersPath);
if (skillFile) {
console.log(`# Loading superpowers skill: superpowers:${actualSkillPath}`);
console.log(`# Source: ${skillFile}`);
console.log('');
}
}
}
// If still not found, error
if (!skillFile) {
console.log(`Error: Skill not found: ${actualSkillPath}`);
console.log('');
console.log('Available skills:');
runFindSkills();
return;
}
// Extract frontmatter and content using shared core functions
let content, frontmatter;
try {
const fullContent = fs.readFileSync(skillFile, 'utf8');
const { name, description } = skillsCore.extractFrontmatter(skillFile);
content = skillsCore.stripFrontmatter(fullContent);
frontmatter = { name, description };
} catch (error) {
console.log(`Error reading skill file: ${error.message}`);
return;
}
// Display skill header with clean info
const displayName = forceSuperpowers ? `superpowers:${actualSkillPath}` :
(skillFile.includes(personalSkillsDir) ? actualSkillPath : `superpowers:${actualSkillPath}`);
const skillDirectory = path.dirname(skillFile);
console.log(`# ${frontmatter.name || displayName}`);
if (frontmatter.description) {
console.log(`# ${frontmatter.description}`);
}
console.log(`# Skill-specific tools and reference files live in ${skillDirectory}`);
console.log('# ============================================');
console.log('');
// Display the skill content (without frontmatter)
console.log(content);
}
// Main CLI
const command = process.argv[2];
const arg = process.argv[3];
switch (command) {
case 'bootstrap':
runBootstrap();
break;
case 'use-skill':
runUseSkill(arg);
break;
case 'find-skills':
runFindSkills();
break;
default:
console.log('Superpowers for Codex');
console.log('Usage:');
console.log(' superpowers-codex bootstrap # Run complete bootstrap with all skills');
console.log(' superpowers-codex use-skill <skill-name> # Load a specific skill');
console.log(' superpowers-codex find-skills # List all available skills');
console.log('');
console.log('Examples:');
console.log(' superpowers-codex bootstrap');
console.log(' superpowers-codex use-skill superpowers:brainstorming');
console.log(' superpowers-codex use-skill my-custom-skill');
break;
}

@@ -0,0 +1,3 @@
# These are supported funding model platforms
github: [obra]

@@ -0,0 +1,3 @@
.worktrees/
.private-journal/
.claude/

@@ -0,0 +1,135 @@
# Installing Superpowers for OpenCode
## Prerequisites
- [OpenCode.ai](https://opencode.ai) installed
- Node.js installed
- Git installed
## Installation Steps
### 1. Install Superpowers
```bash
mkdir -p ~/.config/opencode/superpowers
git clone https://github.com/obra/superpowers.git ~/.config/opencode/superpowers
```
### 2. Register the Plugin
Create a symlink so OpenCode discovers the plugin:
```bash
mkdir -p ~/.config/opencode/plugin
ln -sf ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js ~/.config/opencode/plugin/superpowers.js
```
### 3. Restart OpenCode
Restart OpenCode. The plugin will automatically inject superpowers context via the chat.message hook.
You can confirm that superpowers is active by asking "do you have superpowers?"
## Usage
### Finding Skills
Use the `find_skills` tool to list all available skills:
```
use find_skills tool
```
### Loading a Skill
Use the `use_skill` tool to load a specific skill:
```
use use_skill tool with skill_name: "superpowers:brainstorming"
```
### Personal Skills
Create your own skills in `~/.config/opencode/skills/`:
```bash
mkdir -p ~/.config/opencode/skills/my-skill
```
Create `~/.config/opencode/skills/my-skill/SKILL.md`:
```markdown
---
name: my-skill
description: Use when [condition] - [what it does]
---
# My Skill
[Your skill content here]
```
Personal skills override superpowers skills with the same name.
### Project Skills
Create project-specific skills in your OpenCode project:
```bash
# In your OpenCode project
mkdir -p .opencode/skills/my-project-skill
```
Create `.opencode/skills/my-project-skill/SKILL.md`:
```markdown
---
name: my-project-skill
description: Use when [condition] - [what it does]
---
# My Project Skill
[Your skill content here]
```
**Skill Priority:** Project skills override personal skills, which override superpowers skills.
**Skill Naming:**
- `project:skill-name` - Force project skill lookup
- `skill-name` - Searches project → personal → superpowers
- `superpowers:skill-name` - Force superpowers skill lookup
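For example, the same lookup rules apply when loading skills with the `use_skill` tool:
```
use use_skill tool with skill_name: "project:my-project-skill"
use use_skill tool with skill_name: "superpowers:brainstorming"
```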
## Updating
```bash
cd ~/.config/opencode/superpowers
git pull
```
## Troubleshooting
### Plugin not loading
1. Check plugin file exists: `ls ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js`
2. Check OpenCode logs for errors
3. Verify Node.js is installed: `node --version`
### Skills not found
1. Verify skills directory exists: `ls ~/.config/opencode/superpowers/skills`
2. Use `find_skills` tool to see what's discovered
3. Check file structure: each skill should have a `SKILL.md` file
### Tool mapping issues
When a skill references a Claude Code tool you don't have:
- `TodoWrite` → use `update_plan`
- `Task` with subagents → use `@mention` syntax to invoke OpenCode subagents
- `Skill` → use `use_skill` tool
- File operations → use your native tools
## Getting Help
- Report issues: https://github.com/obra/superpowers/issues
- Documentation: https://github.com/obra/superpowers

@@ -0,0 +1,215 @@
/**
* Superpowers plugin for OpenCode.ai
*
* Provides custom tools for loading and discovering skills,
* with prompt generation for agent configuration.
*/
import path from 'path';
import fs from 'fs';
import os from 'os';
import { fileURLToPath } from 'url';
import { tool } from '@opencode-ai/plugin/tool';
import * as skillsCore from '../../lib/skills-core.js';
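// skills-core helpers used below: resolveSkillPath() locates a skill's SKILL.md across the
// personal and superpowers skill directories, findSkillsInDir() enumerates available skills,
// and extractFrontmatter()/stripFrontmatter() read the SKILL.md name/description metadata.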
const __dirname = path.dirname(fileURLToPath(import.meta.url));
export const SuperpowersPlugin = async ({ client, directory }) => {
const homeDir = os.homedir();
const projectSkillsDir = path.join(directory, '.opencode/skills');
// Derive superpowers skills dir from plugin location (works for both symlinked and local installs)
const superpowersSkillsDir = path.resolve(__dirname, '../../skills');
const personalSkillsDir = path.join(homeDir, '.config/opencode/skills');
// Helper to generate bootstrap content
const getBootstrapContent = (compact = false) => {
const usingSuperpowersPath = skillsCore.resolveSkillPath('using-superpowers', superpowersSkillsDir, personalSkillsDir);
if (!usingSuperpowersPath) return null;
const fullContent = fs.readFileSync(usingSuperpowersPath.skillFile, 'utf8');
const content = skillsCore.stripFrontmatter(fullContent);
const toolMapping = compact
? `**Tool Mapping:** TodoWrite->update_plan, Task->@mention, Skill->use_skill
**Skills naming (priority order):** project: > personal > superpowers:`
: `**Tool Mapping for OpenCode:**
When skills reference tools you don't have, substitute OpenCode equivalents:
- \`TodoWrite\` → \`update_plan\`
- \`Task\` tool with subagents → Use OpenCode's subagent system (@mention)
- \`Skill\` tool → \`use_skill\` custom tool
- \`Read\`, \`Write\`, \`Edit\`, \`Bash\` → Your native tools
**Skills naming (priority order):**
- Project skills: \`project:skill-name\` (in .opencode/skills/)
- Personal skills: \`skill-name\` (in ~/.config/opencode/skills/)
- Superpowers skills: \`superpowers:skill-name\`
- Project skills override personal, which override superpowers when names match`;
return `<EXTREMELY_IMPORTANT>
You have superpowers.
**IMPORTANT: The using-superpowers skill content is included below. It is ALREADY LOADED - you are currently following it. Do NOT use the use_skill tool to load "using-superpowers" - that would be redundant. Use use_skill only for OTHER skills.**
${content}
${toolMapping}
</EXTREMELY_IMPORTANT>`;
};
// Helper to inject bootstrap via session.prompt
const injectBootstrap = async (sessionID, compact = false) => {
const bootstrapContent = getBootstrapContent(compact);
if (!bootstrapContent) return false;
try {
await client.session.prompt({
path: { id: sessionID },
body: {
noReply: true,
parts: [{ type: "text", text: bootstrapContent, synthetic: true }]
}
});
return true;
} catch (err) {
return false;
}
};
return {
tool: {
use_skill: tool({
description: 'Load and read a specific skill to guide your work. Skills contain proven workflows, mandatory processes, and expert techniques.',
args: {
skill_name: tool.schema.string().describe('Name of the skill to load (e.g., "superpowers:brainstorming", "my-custom-skill", or "project:my-skill")')
},
execute: async (args, context) => {
const { skill_name } = args;
// Resolve with priority: project > personal > superpowers
// Check for project: prefix first
const forceProject = skill_name.startsWith('project:');
const actualSkillName = forceProject ? skill_name.replace(/^project:/, '') : skill_name;
let resolved = null;
// Try project skills first (if project: prefix or no prefix)
if (forceProject || !skill_name.startsWith('superpowers:')) {
const projectPath = path.join(projectSkillsDir, actualSkillName);
const projectSkillFile = path.join(projectPath, 'SKILL.md');
if (fs.existsSync(projectSkillFile)) {
resolved = {
skillFile: projectSkillFile,
sourceType: 'project',
skillPath: actualSkillName
};
}
}
// Fall back to personal/superpowers resolution
if (!resolved && !forceProject) {
resolved = skillsCore.resolveSkillPath(skill_name, superpowersSkillsDir, personalSkillsDir);
}
if (!resolved) {
return `Error: Skill "${skill_name}" not found.\n\nRun find_skills to see available skills.`;
}
const fullContent = fs.readFileSync(resolved.skillFile, 'utf8');
const { name, description } = skillsCore.extractFrontmatter(resolved.skillFile);
const content = skillsCore.stripFrontmatter(fullContent);
const skillDirectory = path.dirname(resolved.skillFile);
const skillHeader = `# ${name || skill_name}
# ${description || ''}
# Supporting tools and docs are in ${skillDirectory}
# ============================================`;
// Insert as user message with noReply for persistence across compaction
try {
await client.session.prompt({
path: { id: context.sessionID },
body: {
noReply: true,
parts: [
{ type: "text", text: `Loading skill: ${name || skill_name}`, synthetic: true },
{ type: "text", text: `${skillHeader}\n\n${content}`, synthetic: true }
]
}
});
} catch (err) {
// Fallback: return content directly if message insertion fails
return `${skillHeader}\n\n${content}`;
}
return `Launching skill: ${name || skill_name}`;
}
}),
find_skills: tool({
description: 'List all available skills in the project, personal, and superpowers skill libraries.',
args: {},
execute: async (args, context) => {
const projectSkills = skillsCore.findSkillsInDir(projectSkillsDir, 'project', 3);
const personalSkills = skillsCore.findSkillsInDir(personalSkillsDir, 'personal', 3);
const superpowersSkills = skillsCore.findSkillsInDir(superpowersSkillsDir, 'superpowers', 3);
// Priority: project > personal > superpowers
const allSkills = [...projectSkills, ...personalSkills, ...superpowersSkills];
if (allSkills.length === 0) {
return 'No skills found. Install superpowers skills to ~/.config/opencode/superpowers/skills/ or add project skills to .opencode/skills/';
}
let output = 'Available skills:\n\n';
for (const skill of allSkills) {
let namespace;
switch (skill.sourceType) {
case 'project':
namespace = 'project:';
break;
case 'personal':
namespace = '';
break;
default:
namespace = 'superpowers:';
}
const skillName = skill.name || path.basename(skill.path);
output += `${namespace}${skillName}\n`;
if (skill.description) {
output += ` ${skill.description}\n`;
}
output += ` Directory: ${skill.path}\n\n`;
}
return output;
}
})
},
event: async ({ event }) => {
// Extract sessionID from various event structures
const getSessionID = () => {
return event.properties?.info?.id ||
event.properties?.sessionID ||
event.session?.id;
};
// Inject bootstrap at session creation (before first user message)
if (event.type === 'session.created') {
const sessionID = getSessionID();
if (sessionID) {
await injectBootstrap(sessionID, false);
}
}
// Re-inject bootstrap after context compaction (compact version to save tokens)
if (event.type === 'session.compacted') {
const sessionID = getSessionID();
if (sessionID) {
await injectBootstrap(sessionID, true);
}
}
}
};
};

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Jesse Vincent
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -0,0 +1,159 @@
# Superpowers
Superpowers is a complete software development workflow for your coding agents, built on top of a set of composable "skills" and some initial instructions that make sure your agent uses them.
## How it works
It starts from the moment you fire up your coding agent. As soon as it sees that you're building something, it *doesn't* just jump into trying to write code. Instead, it steps back and asks you what you're really trying to do.
Once it's teased a spec out of the conversation, it shows it to you in chunks short enough to actually read and digest.
After you've signed off on the design, your agent puts together an implementation plan that's clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren't Gonna Need It), and DRY.
Next up, once you say "go", it launches a *subagent-driven-development* process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for Claude to be able to work autonomously for a couple hours at a time without deviating from the plan you put together.
There's a bunch more to it, but that's the core of the system. And because the skills trigger automatically, you don't need to do anything special. Your coding agent just has Superpowers.
## Sponsorship
If Superpowers has helped you do stuff that makes money and you are so inclined, I'd greatly appreciate it if you'd consider [sponsoring my opensource work](https://github.com/sponsors/obra).
Thanks!
- Jesse
## Installation
**Note:** Installation differs by platform. Claude Code has a built-in plugin system. Codex and OpenCode require manual setup.
### Claude Code (via Plugin Marketplace)
In Claude Code, register the marketplace first:
```bash
/plugin marketplace add obra/superpowers-marketplace
```
Then install the plugin from this marketplace:
```bash
/plugin install superpowers@superpowers-marketplace
```
### Verify Installation
Check that commands appear:
```bash
/help
```
```
# Should see:
# /superpowers:brainstorm - Interactive design refinement
# /superpowers:write-plan - Create implementation plan
# /superpowers:execute-plan - Execute plan in batches
```
### Codex
Tell Codex:
```
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.codex/INSTALL.md
```
**Detailed docs:** [docs/README.codex.md](docs/README.codex.md)
### OpenCode
Tell OpenCode:
```
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
```
**Detailed docs:** [docs/README.opencode.md](docs/README.opencode.md)
## The Basic Workflow
1. **brainstorming** - Activates before writing code. Refines rough ideas through questions, explores alternatives, presents design in sections for validation. Saves design document.
2. **using-git-worktrees** - Activates after design approval. Creates isolated workspace on new branch, runs project setup, verifies clean test baseline.
3. **writing-plans** - Activates with approved design. Breaks work into bite-sized tasks (2-5 minutes each). Every task has exact file paths, complete code, verification steps.
4. **subagent-driven-development** or **executing-plans** - Activates with plan. Dispatches fresh subagent per task with two-stage review (spec compliance, then code quality), or executes in batches with human checkpoints.
5. **test-driven-development** - Activates during implementation. Enforces RED-GREEN-REFACTOR: write failing test, watch it fail, write minimal code, watch it pass, commit. Deletes code written before tests.
6. **requesting-code-review** - Activates between tasks. Reviews against plan, reports issues by severity. Critical issues block progress.
7. **finishing-a-development-branch** - Activates when tasks complete. Verifies tests, presents options (merge/PR/keep/discard), cleans up worktree.
**The agent checks for relevant skills before any task.** Mandatory workflows, not suggestions.
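For Claude Code users, the early steps roughly map onto the plugin's slash commands; the rest trigger automatically as skills:
```bash
/superpowers:brainstorm     # refine the idea and save a design document
/superpowers:write-plan     # turn the approved design into an implementation plan
/superpowers:execute-plan   # work through the plan in batches with checkpoints
```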
## What's Inside
### Skills Library
**Testing**
- **test-driven-development** - RED-GREEN-REFACTOR cycle (includes testing anti-patterns reference)
**Debugging**
- **systematic-debugging** - 4-phase root cause process (includes root-cause-tracing, defense-in-depth, condition-based-waiting techniques)
- **verification-before-completion** - Ensure it's actually fixed
**Collaboration**
- **brainstorming** - Socratic design refinement
- **writing-plans** - Detailed implementation plans
- **executing-plans** - Batch execution with checkpoints
- **dispatching-parallel-agents** - Concurrent subagent workflows
- **requesting-code-review** - Pre-review checklist
- **receiving-code-review** - Responding to feedback
- **using-git-worktrees** - Parallel development branches
- **finishing-a-development-branch** - Merge/PR decision workflow
- **subagent-driven-development** - Fast iteration with two-stage review (spec compliance, then code quality)
**Meta**
- **writing-skills** - Create new skills following best practices (includes testing methodology)
- **using-superpowers** - Introduction to the skills system
## Philosophy
- **Test-Driven Development** - Write tests first, always
- **Systematic over ad-hoc** - Process over guessing
- **Complexity reduction** - Simplicity as primary goal
- **Evidence over claims** - Verify before declaring success
Read more: [Superpowers for Claude Code](https://blog.fsck.com/2025/10/09/superpowers/)
## Contributing
Skills live directly in this repository. To contribute:
1. Fork the repository
2. Create a branch for your skill
3. Follow the `writing-skills` skill for creating and testing new skills
4. Submit a PR
See `skills/writing-skills/SKILL.md` for the complete guide.
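A minimal sketch of that flow from the command line (the fork URL and skill name are placeholders):
```bash
git clone https://github.com/<your-username>/superpowers.git
cd superpowers
git checkout -b add-my-skill
mkdir -p skills/my-skill
$EDITOR skills/my-skill/SKILL.md   # follow skills/writing-skills/SKILL.md
git add skills/my-skill && git commit -m "Add my-skill"
git push -u origin add-my-skill    # then open a PR against obra/superpowers
```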
## Updating
Skills update automatically when you update the plugin:
```bash
/plugin update superpowers
```
## License
MIT License - see LICENSE file for details
## Support
- **Issues**: https://github.com/obra/superpowers/issues
- **Marketplace**: https://github.com/obra/superpowers-marketplace

@@ -0,0 +1,638 @@
# Superpowers Release Notes
## v4.0.3 (2025-12-26)
### Improvements
**Strengthened using-superpowers skill for explicit skill requests**
Addressed a failure mode where Claude would skip invoking a skill even when the user explicitly requested it by name (e.g., "subagent-driven-development, please"). Claude would think "I know what that means" and start working directly instead of loading the skill.
Changes:
- Updated "The Rule" to say "Invoke relevant or requested skills" instead of "Check for skills" - emphasizing active invocation over passive checking
- Added "BEFORE any response or action" - the original wording only mentioned "response" but Claude would sometimes take action without responding first
- Added reassurance that invoking a wrong skill is okay - reduces hesitation
- Added new red flag: "I know what that means" → Knowing the concept ≠ using the skill
**Added explicit skill request tests**
New test suite in `tests/explicit-skill-requests/` that verifies Claude correctly invokes skills when users request them by name. Includes single-turn and multi-turn test scenarios.
## v4.0.2 (2025-12-23)
### Fixes
**Slash commands now user-only**
Added `disable-model-invocation: true` to all three slash commands (`/brainstorm`, `/execute-plan`, `/write-plan`). Claude can no longer invoke these commands via the Skill tool—they're restricted to manual user invocation only.
The underlying skills (`superpowers:brainstorming`, `superpowers:executing-plans`, `superpowers:writing-plans`) remain available for Claude to invoke autonomously. This change prevents confusion when Claude would invoke a command that just redirects to a skill anyway.
## v4.0.1 (2025-12-23)
### Fixes
**Clarified how to access skills in Claude Code**
Fixed a confusing pattern where Claude would invoke a skill via the Skill tool, then try to Read the skill file separately. The `using-superpowers` skill now explicitly states that the Skill tool loads skill content directly—no need to read files.
- Added "How to Access Skills" section to `using-superpowers`
- Changed "read the skill" → "invoke the skill" in instructions
- Updated slash commands to use fully qualified skill names (e.g., `superpowers:brainstorming`)
**Added GitHub thread reply guidance to receiving-code-review** (h/t @ralphbean)
Added a note about replying to inline review comments in the original thread rather than as top-level PR comments.
**Added automation-over-documentation guidance to writing-skills** (h/t @EthanJStark)
Added guidance that mechanical constraints should be automated, not documented—save skills for judgment calls.
## v4.0.0 (2025-12-17)
### New Features
**Two-stage code review in subagent-driven-development**
Subagent workflows now use two separate review stages after each task:
1. **Spec compliance review** - Skeptical reviewer verifies implementation matches spec exactly. Catches missing requirements AND over-building. Won't trust implementer's report—reads actual code.
2. **Code quality review** - Only runs after spec compliance passes. Reviews for clean code, test coverage, maintainability.
This catches the common failure mode where code is well-written but doesn't match what was requested. Reviews are loops, not one-shot: if reviewer finds issues, implementer fixes them, then reviewer checks again.
Other subagent workflow improvements:
- Controller provides full task text to workers (not file references)
- Workers can ask clarifying questions before AND during work
- Self-review checklist before reporting completion
- Plan read once at start, extracted to TodoWrite
New prompt templates in `skills/subagent-driven-development/`:
- `implementer-prompt.md` - Includes self-review checklist, encourages questions
- `spec-reviewer-prompt.md` - Skeptical verification against requirements
- `code-quality-reviewer-prompt.md` - Standard code review
**Debugging techniques consolidated with tools**
`systematic-debugging` now bundles supporting techniques and tools:
- `root-cause-tracing.md` - Trace bugs backward through call stack
- `defense-in-depth.md` - Add validation at multiple layers
- `condition-based-waiting.md` - Replace arbitrary timeouts with condition polling
- `find-polluter.sh` - Bisection script to find which test creates pollution
- `condition-based-waiting-example.ts` - Complete implementation from real debugging session
**Testing anti-patterns reference**
`test-driven-development` now includes `testing-anti-patterns.md` covering:
- Testing mock behavior instead of real behavior
- Adding test-only methods to production classes
- Mocking without understanding dependencies
- Incomplete mocks that hide structural assumptions
**Skill test infrastructure**
Three new test frameworks for validating skill behavior:
`tests/skill-triggering/` - Validates skills trigger from naive prompts without explicit naming. Tests 6 skills to ensure descriptions alone are sufficient.
`tests/claude-code/` - Integration tests using `claude -p` for headless testing. Verifies skill usage via session transcript (JSONL) analysis. Includes `analyze-token-usage.py` for cost tracking.
`tests/subagent-driven-dev/` - End-to-end workflow validation with two complete test projects:
- `go-fractals/` - CLI tool with Sierpinski/Mandelbrot (10 tasks)
- `svelte-todo/` - CRUD app with localStorage and Playwright (12 tasks)
### Major Changes
**DOT flowcharts as executable specifications**
Rewrote key skills using DOT/GraphViz flowcharts as the authoritative process definition. Prose becomes supporting content.
**The Description Trap** (documented in `writing-skills`): Discovered that skill descriptions override flowchart content when descriptions contain workflow summaries. Claude follows the short description instead of reading the detailed flowchart. Fix: descriptions must be trigger-only ("Use when X") with no process details.
**Skill priority in using-superpowers**
When multiple skills apply, process skills (brainstorming, debugging) now explicitly come before implementation skills. "Build X" triggers brainstorming first, then domain skills.
**brainstorming trigger strengthened**
Description changed to imperative: "You MUST use this before any creative work—creating features, building components, adding functionality, or modifying behavior."
### Breaking Changes
**Skill consolidation** - Six standalone skills merged:
- `root-cause-tracing`, `defense-in-depth`, `condition-based-waiting` → bundled in `systematic-debugging/`
- `testing-skills-with-subagents` → bundled in `writing-skills/`
- `testing-anti-patterns` → bundled in `test-driven-development/`
- `sharing-skills` removed (obsolete)
### Other Improvements
- **render-graphs.js** - Tool to extract DOT diagrams from skills and render to SVG
- **Rationalizations table** in using-superpowers - Scannable format including new entries: "I need more context first", "Let me explore first", "This feels productive"
- **docs/testing.md** - Guide to testing skills with Claude Code integration tests
---
## v3.6.2 (2025-12-03)
### Fixed
- **Linux Compatibility**: Fixed polyglot hook wrapper (`run-hook.cmd`) to use POSIX-compliant syntax
- Replaced bash-specific `${BASH_SOURCE[0]:-$0}` with standard `$0` on line 16
- Resolves "Bad substitution" error on Ubuntu/Debian systems where `/bin/sh` is dash
- Fixes #141
---
## v3.5.1 (2025-11-24)
### Changed
- **OpenCode Bootstrap Refactor**: Switched from `chat.message` hook to `session.created` event for bootstrap injection
- Bootstrap now injects at session creation via `session.prompt()` with `noReply: true`
- Explicitly tells the model that using-superpowers is already loaded to prevent redundant skill loading
- Consolidated bootstrap content generation into shared `getBootstrapContent()` helper
- Cleaner single-implementation approach (removed fallback pattern)
---
## v3.5.0 (2025-11-23)
### Added
- **OpenCode Support**: Native JavaScript plugin for OpenCode.ai
- Custom tools: `use_skill` and `find_skills`
- Message insertion pattern for skill persistence across context compaction
- Automatic context injection via chat.message hook
- Auto re-injection on session.compacted events
- Three-tier skill priority: project > personal > superpowers
- Project-local skills support (`.opencode/skills/`)
- Shared core module (`lib/skills-core.js`) for code reuse with Codex
- Automated test suite with proper isolation (`tests/opencode/`)
- Platform-specific documentation (`docs/README.opencode.md`, `docs/README.codex.md`)
### Changed
- **Refactored Codex Implementation**: Now uses shared `lib/skills-core.js` ES module
- Eliminates code duplication between Codex and OpenCode
- Single source of truth for skill discovery and parsing
- Codex successfully loads ES modules via Node.js interop
- **Improved Documentation**: Rewrote README to explain problem/solution clearly
- Removed duplicate sections and conflicting information
- Added complete workflow description (brainstorm → plan → execute → finish)
- Simplified platform installation instructions
- Emphasized skill-checking protocol over automatic activation claims
---
## v3.4.1 (2025-10-31)
### Improvements
- Optimized superpowers bootstrap to eliminate redundant skill execution. The `using-superpowers` skill content is now provided directly in session context, with clear guidance to use the Skill tool only for other skills. This reduces overhead and prevents the confusing loop where agents would execute `using-superpowers` manually despite already having the content from session start.
## v3.4.0 (2025-10-30)
### Improvements
- Simplified `brainstorming` skill to return to original conversational vision. Removed heavyweight 6-phase process with formal checklists in favor of natural dialogue: ask questions one at a time, then present design in 200-300 word sections with validation. Keeps documentation and implementation handoff features.
## v3.3.1 (2025-10-28)
### Improvements
- Updated `brainstorming` skill to require autonomous recon before questioning, encourage recommendation-driven decisions, and prevent agents from delegating prioritization back to humans.
- Applied writing clarity improvements to `brainstorming` skill following Strunk's "Elements of Style" principles (omitted needless words, converted negative to positive form, improved parallel construction).
### Bug Fixes
- Clarified `writing-skills` guidance so it points to the correct agent-specific personal skill directories (`~/.claude/skills` for Claude Code, `~/.codex/skills` for Codex).
## v3.3.0 (2025-10-28)
### New Features
**Experimental Codex Support**
- Added unified `superpowers-codex` script with bootstrap/use-skill/find-skills commands
- Cross-platform Node.js implementation (works on Windows, macOS, Linux)
- Namespaced skills: `superpowers:skill-name` for superpowers skills, `skill-name` for personal
- Personal skills override superpowers skills when names match
- Clean skill display: shows name/description without raw frontmatter
- Helpful context: shows supporting files directory for each skill
- Tool mapping for Codex: TodoWrite→update_plan, subagents→manual fallback, etc.
- Bootstrap integration with minimal AGENTS.md for automatic startup
- Complete installation guide and bootstrap instructions specific to Codex
**Key differences from Claude Code integration:**
- Single unified script instead of separate tools
- Tool substitution system for Codex-specific equivalents
- Simplified subagent handling (manual work instead of delegation)
- Updated terminology: "Superpowers skills" instead of "Core skills"
### Files Added
- `.codex/INSTALL.md` - Installation guide for Codex users
- `.codex/superpowers-bootstrap.md` - Bootstrap instructions with Codex adaptations
- `.codex/superpowers-codex` - Unified Node.js executable with all functionality
**Note:** Codex support is experimental. The integration provides core superpowers functionality but may require refinement based on user feedback.
## v3.2.3 (2025-10-23)
### Improvements
**Updated using-superpowers skill to use Skill tool instead of Read tool**
- Changed skill invocation instructions from Read tool to Skill tool
- Updated description: "using Read tool" → "using Skill tool"
- Updated step 3: "Use the Read tool" → "Use the Skill tool to read and run"
- Updated rationalization list: "Read the current version" → "Run the current version"
The Skill tool is the proper mechanism for invoking skills in Claude Code. This update corrects the bootstrap instructions to guide agents toward the correct tool.
### Files Changed
- Updated: `skills/using-superpowers/SKILL.md` - Changed tool references from Read to Skill
## v3.2.2 (2025-10-21)
### Improvements
**Strengthened using-superpowers skill against agent rationalization**
- Added EXTREMELY-IMPORTANT block with absolute language about mandatory skill checking
- "If even 1% chance a skill applies, you MUST read it"
- "You do not have a choice. You cannot rationalize your way out."
- Added MANDATORY FIRST RESPONSE PROTOCOL checklist
- 5-step process agents must complete before any response
- Explicit "responding without this = failure" consequence
- Added Common Rationalizations section with 8 specific evasion patterns
- "This is just a simple question" → WRONG
- "I can check files quickly" → WRONG
- "Let me gather information first" → WRONG
- Plus 5 more common patterns observed in agent behavior
These changes address an observed pattern where agents rationalize around skill usage despite clear instructions. The forceful language and pre-emptive counter-arguments aim to make non-compliance harder.
### Files Changed
- Updated: `skills/using-superpowers/SKILL.md` - Added three layers of enforcement to prevent skill-skipping rationalization
## v3.2.1 (2025-10-20)
### New Features
**Code reviewer agent now included in plugin**
- Added `superpowers:code-reviewer` agent to plugin's `agents/` directory
- Agent provides systematic code review against plans and coding standards
- Previously required users to have personal agent configuration
- All skill references updated to use namespaced `superpowers:code-reviewer`
- Fixes #55
### Files Changed
- New: `agents/code-reviewer.md` - Agent definition with review checklist and output format
- Updated: `skills/requesting-code-review/SKILL.md` - References to `superpowers:code-reviewer`
- Updated: `skills/subagent-driven-development/SKILL.md` - References to `superpowers:code-reviewer`
## v3.2.0 (2025-10-18)
### New Features
**Design documentation in brainstorming workflow**
- Added Phase 4: Design Documentation to brainstorming skill
- Design documents now written to `docs/plans/YYYY-MM-DD-<topic>-design.md` before implementation
- Restores functionality from original brainstorming command that was lost during skill conversion
- Documents written before worktree setup and implementation planning
- Tested with subagent to verify compliance under time pressure
### Breaking Changes
**Skill reference namespace standardization**
- All internal skill references now use `superpowers:` namespace prefix
- Updated format: `superpowers:test-driven-development` (previously just `test-driven-development`)
- Affects all REQUIRED SUB-SKILL, RECOMMENDED SUB-SKILL, and REQUIRED BACKGROUND references
- Aligns with how skills are invoked using the Skill tool
- Files updated: brainstorming, executing-plans, subagent-driven-development, systematic-debugging, testing-skills-with-subagents, writing-plans, writing-skills
### Improvements
**Design vs implementation plan naming**
- Design documents use `-design.md` suffix to prevent filename collisions
- Implementation plans continue using existing `YYYY-MM-DD-<feature-name>.md` format
- Both stored in `docs/plans/` directory with clear naming distinction
## v3.1.1 (2025-10-17)
### Bug Fixes
- **Fixed command syntax in README** (#44) - Updated all command references to use correct namespaced syntax (`/superpowers:brainstorm` instead of `/brainstorm`). Plugin-provided commands are automatically namespaced by Claude Code to avoid conflicts between plugins.
## v3.1.0 (2025-10-17)
### Breaking Changes
**Skill names standardized to lowercase**
- All skill frontmatter `name:` fields now use lowercase kebab-case matching directory names
- Examples: `brainstorming`, `test-driven-development`, `using-git-worktrees`
- All skill announcements and cross-references updated to lowercase format
- This ensures consistent naming across directory names, frontmatter, and documentation
### New Features
**Enhanced brainstorming skill**
- Added Quick Reference table showing phases, activities, and tool usage
- Added copyable workflow checklist for tracking progress
- Added decision flowchart for when to revisit earlier phases
- Added comprehensive AskUserQuestion tool guidance with concrete examples
- Added "Question Patterns" section explaining when to use structured vs open-ended questions
- Restructured Key Principles as scannable table
**Anthropic best practices integration**
- Added `skills/writing-skills/anthropic-best-practices.md` - Official Anthropic skill authoring guide
- Referenced in writing-skills SKILL.md for comprehensive guidance
- Provides patterns for progressive disclosure, workflows, and evaluation
### Improvements
**Skill cross-reference clarity**
- All skill references now use explicit requirement markers:
- `**REQUIRED BACKGROUND:**` - Prerequisites you must understand
- `**REQUIRED SUB-SKILL:**` - Skills that must be used in workflow
- `**Complementary skills:**` - Optional but helpful related skills
- Removed old path format (`skills/collaboration/X` → just `X`)
- Updated Integration sections with categorized relationships (Required vs Complementary)
- Updated cross-reference documentation with best practices
**Alignment with Anthropic best practices**
- Fixed description grammar and voice (fully third-person)
- Added Quick Reference tables for scanning
- Added workflow checklists Claude can copy and track
- Appropriate use of flowcharts for non-obvious decision points
- Improved scannable table formats
- All skills well under 500-line recommendation
### Bug Fixes
- **Re-added missing command redirects** - Restored `commands/brainstorm.md` and `commands/write-plan.md` that were accidentally removed in v3.0 migration
- Fixed `defense-in-depth` name mismatch (was `Defense-in-Depth-Validation`)
- Fixed `receiving-code-review` name mismatch (was `Code-Review-Reception`)
- Fixed `commands/brainstorm.md` reference to correct skill name
- Removed references to non-existent related skills
### Documentation
**writing-skills improvements**
- Updated cross-referencing guidance with explicit requirement markers
- Added reference to Anthropic's official best practices
- Improved examples showing proper skill reference format
## v3.0.1 (2025-10-16)
### Changes
We now use Anthropic's first-party skills system!
## v2.0.2 (2025-10-12)
### Bug Fixes
- **Fixed false warning when local skills repo is ahead of upstream** - The initialization script was incorrectly warning "New skills available from upstream" when the local repository had commits ahead of upstream. The logic now correctly distinguishes between three git states: local behind (should update), local ahead (no warning), and diverged (should warn).
## v2.0.1 (2025-10-12)
### Bug Fixes
- **Fixed session-start hook execution in plugin context** (#8, PR #9) - The hook was failing silently with "Plugin hook error" preventing skills context from loading. Fixed by:
- Using `${BASH_SOURCE[0]:-$0}` fallback when BASH_SOURCE is unbound in Claude Code's execution context
- Adding `|| true` to handle empty grep results gracefully when filtering status flags
---
# Superpowers v2.0.0 Release Notes
## Overview
Superpowers v2.0 makes skills more accessible, maintainable, and community-driven through a major architectural shift.
The headline change is **skills repository separation**: all skills, scripts, and documentation have moved from the plugin into a dedicated repository ([obra/superpowers-skills](https://github.com/obra/superpowers-skills)). This transforms superpowers from a monolithic plugin into a lightweight shim that manages a local clone of the skills repository. Skills auto-update on session start. Users fork and contribute improvements via standard git workflows. The skills library versions independently from the plugin.
Beyond infrastructure, this release adds nine new skills focused on problem-solving, research, and architecture. We rewrote the core **using-skills** documentation with imperative tone and clearer structure, making it easier for Claude to understand when and how to use skills. **find-skills** now outputs paths you can paste directly into the Read tool, eliminating friction in the skills discovery workflow.
Users experience seamless operation: the plugin handles cloning, forking, and updating automatically. Contributors find the new architecture makes improving and sharing skills trivial. This release lays the foundation for skills to evolve rapidly as a community resource.
## Breaking Changes
### Skills Repository Separation
**The biggest change:** Skills no longer live in the plugin. They've been moved to a separate repository at [obra/superpowers-skills](https://github.com/obra/superpowers-skills).
**What this means for you:**
- **First install:** Plugin automatically clones skills to `~/.config/superpowers/skills/`
- **Forking:** During setup, you'll be offered the option to fork the skills repo (if `gh` is installed)
- **Updates:** Skills auto-update on session start (fast-forward when possible)
- **Contributing:** Work on branches, commit locally, submit PRs to upstream (see the sketch after this list)
- **No more shadowing:** Old two-tier system (personal/core) replaced with single-repo branch workflow
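A rough sketch of that contribution workflow, assuming your fork is the `origin` remote, the shared repo is `upstream`, and the GitHub CLI is installed (branch and file names are illustrative):

```bash
cd ~/.config/superpowers/skills
git checkout -b improve-debugging-skill
# ... edit skills/debugging/systematic-debugging/SKILL.md (illustrative path) ...
git add -A
git commit -m "Clarify systematic-debugging checklist"
git push -u origin improve-debugging-skill
gh pr create --repo obra/superpowers-skills --fill
```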
**Migration:**
If you have an existing installation:
1. Your old `~/.config/superpowers/.git` will be backed up to `~/.config/superpowers/.git.bak`
2. Old skills will be backed up to `~/.config/superpowers/skills.bak`
3. Fresh clone of obra/superpowers-skills will be created at `~/.config/superpowers/skills/`
### Removed Features
- **Personal superpowers overlay system** - Replaced with git branch workflow
- **setup-personal-superpowers hook** - Replaced by initialize-skills.sh
## New Features
### Skills Repository Infrastructure
**Automatic Clone & Setup** (`lib/initialize-skills.sh`)
- Clones obra/superpowers-skills on first run
- Offers fork creation if GitHub CLI is installed
- Sets up upstream/origin remotes correctly
- Handles migration from old installation
**Auto-Update**
- Fetches from tracking remote on every session start
- Auto-merges with fast-forward when possible (sketched below)
- Notifies when manual sync needed (branch diverged)
- Uses pulling-updates-from-skills-repository skill for manual sync
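Conceptually, the auto-update amounts to something like this (a simplified sketch; the remote and branch names are assumptions and the real hook logic may differ):

```bash
cd ~/.config/superpowers/skills
git fetch upstream
git merge --ff-only upstream/main \
  || echo "Branch has diverged - sync manually with the pulling-updates-from-skills-repository skill"
```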
### New Skills
**Problem-Solving Skills** (`skills/problem-solving/`)
- **collision-zone-thinking** - Force unrelated concepts together for emergent insights
- **inversion-exercise** - Flip assumptions to reveal hidden constraints
- **meta-pattern-recognition** - Spot universal principles across domains
- **scale-game** - Test at extremes to expose fundamental truths
- **simplification-cascades** - Find insights that eliminate multiple components
- **when-stuck** - Dispatch to right problem-solving technique
**Research Skills** (`skills/research/`)
- **tracing-knowledge-lineages** - Understand how ideas evolved over time
**Architecture Skills** (`skills/architecture/`)
- **preserving-productive-tensions** - Keep multiple valid approaches instead of forcing premature resolution
### Skills Improvements
**using-skills (formerly getting-started)**
- Renamed from getting-started to using-skills
- Complete rewrite with imperative tone (v4.0.0)
- Front-loaded critical rules
- Added "Why" explanations for all workflows
- Always includes /SKILL.md suffix in references
- Clearer distinction between rigid rules and flexible patterns
**writing-skills**
- Cross-referencing guidance moved from using-skills
- Added token efficiency section (word count targets)
- Improved CSO (Claude Search Optimization) guidance
**sharing-skills**
- Updated for new branch-and-PR workflow (v2.0.0)
- Removed personal/core split references
**pulling-updates-from-skills-repository** (new)
- Complete workflow for syncing with upstream
- Replaces old "updating-skills" skill
### Tools Improvements
**find-skills**
- Now outputs full paths with /SKILL.md suffix
- Makes paths directly usable with Read tool
- Updated help text
**skill-run**
- Moved from scripts/ to skills/using-skills/
- Improved documentation
### Plugin Infrastructure
**Session Start Hook**
- Now loads from skills repository location
- Shows full skills list at session start
- Prints skills location info
- Shows update status (updated successfully / behind upstream)
- Moved "skills behind" warning to end of output
**Environment Variables**
- `SUPERPOWERS_SKILLS_ROOT` set to `~/.config/superpowers/skills`
- Used consistently throughout all paths (example below)
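For example, a hook or command can locate a skill through the variable instead of a hard-coded path (illustrative; the exact skill path may vary):

```bash
# Illustrative skill path under the skills repository
cat "${SUPERPOWERS_SKILLS_ROOT}/skills/problem-solving/when-stuck/SKILL.md"
```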
## Bug Fixes
- Fixed duplicate upstream remote addition when forking
- Fixed find-skills double "skills/" prefix in output
- Removed obsolete setup-personal-superpowers call from session-start
- Fixed path references throughout hooks and commands
## Documentation
### README
- Updated for new skills repository architecture
- Prominent link to superpowers-skills repo
- Updated auto-update description
- Fixed skill names and references
- Updated Meta skills list
### Testing Documentation
- Added comprehensive testing checklist (`docs/TESTING-CHECKLIST.md`)
- Created local marketplace config for testing
- Documented manual testing scenarios
## Technical Details
### File Changes
**Added:**
- `lib/initialize-skills.sh` - Skills repo initialization and auto-update
- `docs/TESTING-CHECKLIST.md` - Manual testing scenarios
- `.claude-plugin/marketplace.json` - Local testing config
**Removed:**
- `skills/` directory (82 files) - Now in obra/superpowers-skills
- `scripts/` directory - Now in obra/superpowers-skills/skills/using-skills/
- `hooks/setup-personal-superpowers.sh` - Obsolete
**Modified:**
- `hooks/session-start.sh` - Use skills from ~/.config/superpowers/skills
- `commands/brainstorm.md` - Updated paths to SUPERPOWERS_SKILLS_ROOT
- `commands/write-plan.md` - Updated paths to SUPERPOWERS_SKILLS_ROOT
- `commands/execute-plan.md` - Updated paths to SUPERPOWERS_SKILLS_ROOT
- `README.md` - Complete rewrite for new architecture
### Commit History
This release includes:
- 20+ commits for skills repository separation
- PR #1: Amplifier-inspired problem-solving and research skills
- PR #2: Personal superpowers overlay system (later replaced)
- Multiple skill refinements and documentation improvements
## Upgrade Instructions
### Fresh Install
```bash
# In Claude Code
/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers@superpowers-marketplace
```
The plugin handles everything automatically.
### Upgrading from v1.x
1. **Backup your personal skills** (if you have any):
```bash
cp -r ~/.config/superpowers/skills ~/superpowers-skills-backup
```
2. **Update the plugin:**
```bash
/plugin update superpowers
```
3. **On next session start:**
- Old installation will be backed up automatically
- Fresh skills repo will be cloned
- If you have GitHub CLI, you'll be offered the option to fork
4. **Migrate personal skills** (if you had any; a rough sketch follows these steps):
- Create a branch in your local skills repo
- Copy your personal skills from backup
- Commit and push to your fork
- Consider contributing back via PR
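A minimal sketch of that migration, assuming the backup from step 1 and a fork configured as `origin` (skill and branch names are illustrative):

```bash
cd ~/.config/superpowers/skills
git checkout -b personal-skills
# copy the backed-up skill (name is illustrative)
cp -r ~/superpowers-skills-backup/my-skill skills/my-skill
git add skills/my-skill
git commit -m "Add my-skill from pre-2.0 installation"
git push -u origin personal-skills
```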
## What's Next
### For Users
- Explore the new problem-solving skills
- Try the branch-based workflow for skill improvements
- Contribute skills back to the community
### For Contributors
- Skills repository is now at https://github.com/obra/superpowers-skills
- Fork → Branch → PR workflow
- See skills/meta/writing-skills/SKILL.md for TDD approach to documentation
## Known Issues
None at this time.
## Credits
- Problem-solving skills inspired by Amplifier patterns
- Community contributions and feedback
- Extensive testing and iteration on skill effectiveness
---
**Full Changelog:** https://github.com/obra/superpowers/compare/dd013f6...main
**Skills Repository:** https://github.com/obra/superpowers-skills
**Issues:** https://github.com/obra/superpowers/issues

@@ -0,0 +1,48 @@
---
name: code-reviewer
description: |
Use this agent when a major project step has been completed and needs to be reviewed against the original plan and coding standards. Examples: <example>Context: The user is creating a code-review agent that should be called after a logical chunk of code is written. user: "I've finished implementing the user authentication system as outlined in step 3 of our plan" assistant: "Great work! Now let me use the code-reviewer agent to review the implementation against our plan and coding standards" <commentary>Since a major project step has been completed, use the code-reviewer agent to validate the work against the plan and identify any issues.</commentary></example> <example>Context: User has completed a significant feature implementation. user: "The API endpoints for the task management system are now complete - that covers step 2 from our architecture document" assistant: "Excellent! Let me have the code-reviewer agent examine this implementation to ensure it aligns with our plan and follows best practices" <commentary>A numbered step from the planning document has been completed, so the code-reviewer agent should review the work.</commentary></example>
model: inherit
---
You are a Senior Code Reviewer with expertise in software architecture, design patterns, and best practices. Your role is to review completed project steps against original plans and ensure code quality standards are met.
When reviewing completed work, you will:
1. **Plan Alignment Analysis**:
- Compare the implementation against the original planning document or step description
- Identify any deviations from the planned approach, architecture, or requirements
- Assess whether deviations are justified improvements or problematic departures
- Verify that all planned functionality has been implemented
2. **Code Quality Assessment**:
- Review code for adherence to established patterns and conventions
- Check for proper error handling, type safety, and defensive programming
- Evaluate code organization, naming conventions, and maintainability
- Assess test coverage and quality of test implementations
- Look for potential security vulnerabilities or performance issues
3. **Architecture and Design Review**:
- Ensure the implementation follows SOLID principles and established architectural patterns
- Check for proper separation of concerns and loose coupling
- Verify that the code integrates well with existing systems
- Assess scalability and extensibility considerations
4. **Documentation and Standards**:
- Verify that code includes appropriate comments and documentation
- Check that file headers, function documentation, and inline comments are present and accurate
- Ensure adherence to project-specific coding standards and conventions
5. **Issue Identification and Recommendations**:
- Clearly categorize issues as: Critical (must fix), Important (should fix), or Suggestions (nice to have)
- For each issue, provide specific examples and actionable recommendations
- When you identify plan deviations, explain whether they're problematic or beneficial
- Suggest specific improvements with code examples when helpful
6. **Communication Protocol**:
- If you find significant deviations from the plan, ask the coding agent to review and confirm the changes
- If you identify issues with the original plan itself, recommend plan updates
- For implementation problems, provide clear guidance on fixes needed
- Always acknowledge what was done well before highlighting issues
Your output should be structured, actionable, and focused on helping maintain high code quality while ensuring project goals are met. Be thorough but concise, and always provide constructive feedback that helps improve both the current implementation and future development practices.

@@ -0,0 +1,6 @@
---
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores requirements and design before implementation."
disable-model-invocation: true
---
Invoke the superpowers:brainstorming skill and follow it exactly as presented to you

@@ -0,0 +1,6 @@
---
description: Execute plan in batches with review checkpoints
disable-model-invocation: true
---
Invoke the superpowers:executing-plans skill and follow it exactly as presented to you

@@ -0,0 +1,6 @@
---
description: Create detailed implementation plan with bite-sized tasks
disable-model-invocation: true
---
Invoke the superpowers:writing-plans skill and follow it exactly as presented to you

@@ -0,0 +1,153 @@
# Superpowers for Codex
Complete guide for using Superpowers with OpenAI Codex.
## Quick Install
Tell Codex:
```
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.codex/INSTALL.md
```
## Manual Installation
### Prerequisites
- OpenAI Codex access
- Shell access to install files
### Installation Steps
#### 1. Clone Superpowers
```bash
mkdir -p ~/.codex/superpowers
git clone https://github.com/obra/superpowers.git ~/.codex/superpowers
```
#### 2. Install Bootstrap
The bootstrap file is included in the repository at `.codex/superpowers-bootstrap.md`. Codex will automatically use it from the cloned location.
#### 3. Verify Installation
Tell Codex:
```
Run ~/.codex/superpowers/.codex/superpowers-codex find-skills to show available skills
```
You should see a list of available skills with descriptions.
## Usage
### Finding Skills
```
Run ~/.codex/superpowers/.codex/superpowers-codex find-skills
```
### Loading a Skill
```
Run ~/.codex/superpowers/.codex/superpowers-codex use-skill superpowers:brainstorming
```
### Bootstrap All Skills
```
Run ~/.codex/superpowers/.codex/superpowers-codex bootstrap
```
This loads the complete bootstrap with all skill information.
### Personal Skills
Create your own skills in `~/.codex/skills/`:
```bash
mkdir -p ~/.codex/skills/my-skill
```
Create `~/.codex/skills/my-skill/SKILL.md`:
```markdown
---
name: my-skill
description: Use when [condition] - [what it does]
---
# My Skill
[Your skill content here]
```
Personal skills override superpowers skills with the same name.
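For example, once the skill above exists it can be loaded by name like any bundled skill (and it will shadow a superpowers skill with the same name):

```
Run ~/.codex/superpowers/.codex/superpowers-codex use-skill my-skill
```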
## Architecture
### Codex CLI Tool
**Location:** `~/.codex/superpowers/.codex/superpowers-codex`
A Node.js CLI script that provides three commands:
- `bootstrap` - Load complete bootstrap with all skills
- `use-skill <name>` - Load a specific skill
- `find-skills` - List all available skills
### Shared Core Module
**Location:** `~/.codex/superpowers/lib/skills-core.js`
The Codex implementation uses the shared `skills-core` module (ES module format) for skill discovery and parsing. This is the same module used by the OpenCode plugin, ensuring consistent behavior across platforms.
### Tool Mapping
Skills written for Claude Code are adapted for Codex with these mappings:
- `TodoWrite``update_plan`
- `Task` with subagents → Tell user subagents aren't available, do work directly
- `Skill` tool → `~/.codex/superpowers/.codex/superpowers-codex use-skill`
- File operations → Native Codex tools
## Updating
```bash
cd ~/.codex/superpowers
git pull
```
## Troubleshooting
### Skills not found
1. Verify installation: `ls ~/.codex/superpowers/skills`
2. Check CLI works: `~/.codex/superpowers/.codex/superpowers-codex find-skills`
3. Verify skills have SKILL.md files
### CLI script not executable
```bash
chmod +x ~/.codex/superpowers/.codex/superpowers-codex
```
### Node.js errors
The CLI script requires Node.js. Verify:
```bash
node --version
```
Should show v14 or higher (v18+ recommended for ES module support).
## Getting Help
- Report issues: https://github.com/obra/superpowers/issues
- Main documentation: https://github.com/obra/superpowers
- Blog post: https://blog.fsck.com/2025/10/27/skills-for-openai-codex/
## Note
Codex support is experimental and may require refinement based on user feedback. If you encounter issues, please report them on GitHub.

@@ -0,0 +1,234 @@
# Superpowers for OpenCode
Complete guide for using Superpowers with [OpenCode.ai](https://opencode.ai).
## Quick Install
Tell OpenCode:
```
Clone https://github.com/obra/superpowers to ~/.config/opencode/superpowers, then create directory ~/.config/opencode/plugin, then symlink ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js to ~/.config/opencode/plugin/superpowers.js, then restart opencode.
```
## Manual Installation
### Prerequisites
- [OpenCode.ai](https://opencode.ai) installed
- Node.js installed
- Git installed
### Installation Steps
#### 1. Install Superpowers
```bash
mkdir -p ~/.config/opencode/superpowers
git clone https://github.com/obra/superpowers.git ~/.config/opencode/superpowers
```
#### 2. Register the Plugin
OpenCode discovers plugins from `~/.config/opencode/plugin/`. Create a symlink:
```bash
mkdir -p ~/.config/opencode/plugin
ln -sf ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js ~/.config/opencode/plugin/superpowers.js
```
Alternatively, for project-local installation:
```bash
# In your OpenCode project
mkdir -p .opencode/plugin
ln -sf ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js .opencode/plugin/superpowers.js
```
#### 3. Restart OpenCode
Restart OpenCode to load the plugin. Superpowers will automatically activate.
## Usage
### Finding Skills
Use the `find_skills` tool to list all available skills:
```
use find_skills tool
```
### Loading a Skill
Use the `use_skill` tool to load a specific skill:
```
use use_skill tool with skill_name: "superpowers:brainstorming"
```
Skills are automatically inserted into the conversation and persist across context compaction.
### Personal Skills
Create your own skills in `~/.config/opencode/skills/`:
```bash
mkdir -p ~/.config/opencode/skills/my-skill
```
Create `~/.config/opencode/skills/my-skill/SKILL.md`:
```markdown
---
name: my-skill
description: Use when [condition] - [what it does]
---
# My Skill
[Your skill content here]
```
### Project Skills
Create project-specific skills in your OpenCode project:
```bash
# In your OpenCode project
mkdir -p .opencode/skills/my-project-skill
```
Create `.opencode/skills/my-project-skill/SKILL.md`:
```markdown
---
name: my-project-skill
description: Use when [condition] - [what it does]
---
# My Project Skill
[Your skill content here]
```
## Skill Priority
Skills are resolved with this priority order:
1. **Project skills** (`.opencode/skills/`) - Highest priority
2. **Personal skills** (`~/.config/opencode/skills/`)
3. **Superpowers skills** (`~/.config/opencode/superpowers/skills/`)
You can force resolution to a specific level (examples after this list):
- `project:skill-name` - Force project skill
- `skill-name` - Search project → personal → superpowers
- `superpowers:skill-name` - Force superpowers skill
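For example, assuming a skill named `test-driven-development` exists at more than one level:

```
use use_skill tool with skill_name: "test-driven-development"              (project → personal → superpowers)
use use_skill tool with skill_name: "project:test-driven-development"      (project copy only)
use use_skill tool with skill_name: "superpowers:test-driven-development"  (superpowers copy only)
```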
## Features
### Automatic Context Injection
The plugin automatically injects superpowers context via the chat.message hook on every session. No manual configuration needed.
### Message Insertion Pattern
When you load a skill with `use_skill`, it's inserted as a user message with `noReply: true`. This ensures skills persist throughout long conversations, even when OpenCode compacts context.
### Compaction Resilience
The plugin listens for `session.compacted` events and automatically re-injects the core superpowers bootstrap to maintain functionality after context compaction.
### Tool Mapping
Skills written for Claude Code are automatically adapted for OpenCode. The plugin provides mapping instructions:
- `TodoWrite``update_plan`
- `Task` with subagents → OpenCode's `@mention` system
- `Skill` tool → `use_skill` custom tool
- File operations → Native OpenCode tools
## Architecture
### Plugin Structure
**Location:** `~/.config/opencode/superpowers/.opencode/plugin/superpowers.js`
**Components:**
- Two custom tools: `use_skill`, `find_skills`
- chat.message hook for initial context injection
- event handler for session.compacted re-injection
- Uses shared `lib/skills-core.js` module (also used by Codex)
### Shared Core Module
**Location:** `~/.config/opencode/superpowers/lib/skills-core.js`
**Functions:**
- `extractFrontmatter()` - Parse skill metadata
- `stripFrontmatter()` - Remove metadata from content
- `findSkillsInDir()` - Recursive skill discovery
- `resolveSkillPath()` - Skill resolution with shadowing
- `checkForUpdates()` - Git update detection
This module is shared between OpenCode and Codex implementations for code reuse.
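A minimal usage sketch of the module, assuming a script saved at the root of the superpowers checkout (paths are illustrative; signatures match `lib/skills-core.js`):

```js
// Illustrative sketch - adjust paths for your installation
import os from 'os';
import path from 'path';
import { findSkillsInDir, resolveSkillPath } from './lib/skills-core.js';

const superpowersDir = path.join(os.homedir(), '.config/opencode/superpowers/skills');
const personalDir = path.join(os.homedir(), '.config/opencode/skills');

// List every personal skill with its description
for (const skill of findSkillsInDir(personalDir, 'personal')) {
  console.log(`${skill.name}: ${skill.description}`);
}

// Resolve a name the same way the plugin does (personal shadows superpowers)
const resolved = resolveSkillPath('brainstorming', superpowersDir, personalDir);
console.log(resolved ? resolved.skillFile : 'skill not found');
```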
## Updating
```bash
cd ~/.config/opencode/superpowers
git pull
```
Restart OpenCode to load the updates.
## Troubleshooting
### Plugin not loading
1. Check plugin file exists: `ls ~/.config/opencode/superpowers/.opencode/plugin/superpowers.js`
2. Check symlink: `ls -l ~/.config/opencode/plugin/superpowers.js`
3. Check OpenCode logs: `opencode run "test" --print-logs --log-level DEBUG`
4. Look for: `service=plugin path=file:///.../superpowers.js loading plugin`
### Skills not found
1. Verify skills directory: `ls ~/.config/opencode/superpowers/skills`
2. Use `find_skills` tool to see what's discovered
3. Check skill structure: each skill needs a `SKILL.md` file
### Tools not working
1. Verify plugin loaded: Check OpenCode logs for plugin loading message
2. Check Node.js version: The plugin requires Node.js for ES modules
3. Test plugin manually: `node --input-type=module -e "import('file://$HOME/.config/opencode/plugin/superpowers.js').then(m => console.log(Object.keys(m)))"` (use `$HOME` here; a `~` is not expanded inside a `file://` URL)
### Context not injecting
1. Check if chat.message hook is working
2. Verify using-superpowers skill exists
3. Check OpenCode version (requires recent version with plugin support)
## Getting Help
- Report issues: https://github.com/obra/superpowers/issues
- Main documentation: https://github.com/obra/superpowers
- OpenCode docs: https://opencode.ai/docs/
## Testing
The implementation includes an automated test suite at `tests/opencode/`:
```bash
# Run all tests
./tests/opencode/run-tests.sh --integration --verbose
# Run specific test
./tests/opencode/run-tests.sh --test test-tools.sh
```
Tests verify:
- Plugin loading
- Skills-core library functionality
- Tool execution (use_skill, find_skills)
- Skill priority resolution
- Proper isolation with temp HOME

@@ -0,0 +1,303 @@
# Testing Superpowers Skills
This document describes how to test Superpowers skills, particularly the integration tests for complex skills like `subagent-driven-development`.
## Overview
Testing skills that involve subagents, workflows, and complex interactions requires running actual Claude Code sessions in headless mode and verifying their behavior through session transcripts.
## Test Structure
```
tests/
├── claude-code/
│ ├── test-helpers.sh # Shared test utilities
│ ├── test-subagent-driven-development-integration.sh
│ ├── analyze-token-usage.py # Token analysis tool
│ └── run-skill-tests.sh # Test runner (if exists)
```
## Running Tests
### Integration Tests
Integration tests execute real Claude Code sessions with actual skills:
```bash
# Run the subagent-driven-development integration test
cd tests/claude-code
./test-subagent-driven-development-integration.sh
```
**Note:** Integration tests can take 10-30 minutes as they execute real implementation plans with multiple subagents.
### Requirements
- Must run from the **superpowers plugin directory** (not from temp directories)
- Claude Code must be installed and available as `claude` command
- Local dev marketplace must be enabled: `"superpowers@superpowers-dev": true` in `~/.claude/settings.json`
## Integration Test: subagent-driven-development
### What It Tests
The integration test verifies the `subagent-driven-development` skill correctly:
1. **Plan Loading**: Reads the plan once at the beginning
2. **Full Task Text**: Provides complete task descriptions to subagents (doesn't make them read files)
3. **Self-Review**: Ensures subagents perform self-review before reporting
4. **Review Order**: Runs spec compliance review before code quality review
5. **Review Loops**: Uses review loops when issues are found
6. **Independent Verification**: Spec reviewer reads code independently, doesn't trust implementer reports
### How It Works
1. **Setup**: Creates a temporary Node.js project with a minimal implementation plan
2. **Execution**: Runs Claude Code in headless mode with the skill
3. **Verification**: Parses the session transcript (`.jsonl` file) to verify:
- Skill tool was invoked
- Subagents were dispatched (Task tool)
- TodoWrite was used for tracking
- Implementation files were created
- Tests pass
- Git commits show proper workflow
4. **Token Analysis**: Shows token usage breakdown by subagent
### Test Output
```
========================================
Integration Test: subagent-driven-development
========================================
Test project: /tmp/tmp.xyz123
=== Verification Tests ===
Test 1: Skill tool invoked...
[PASS] subagent-driven-development skill was invoked
Test 2: Subagents dispatched...
[PASS] 7 subagents dispatched
Test 3: Task tracking...
[PASS] TodoWrite used 5 time(s)
Test 6: Implementation verification...
[PASS] src/math.js created
[PASS] add function exists
[PASS] multiply function exists
[PASS] test/math.test.js created
[PASS] Tests pass
Test 7: Git commit history...
[PASS] Multiple commits created (3 total)
Test 8: No extra features added...
[PASS] No extra features added
=========================================
Token Usage Analysis
=========================================
Usage Breakdown:
----------------------------------------------------------------------------------------------------
Agent Description Msgs Input Output Cache Cost
----------------------------------------------------------------------------------------------------
main Main session (coordinator) 34 27 3,996 1,213,703 $ 4.09
3380c209 implementing Task 1: Create Add Function 1 2 787 24,989 $ 0.09
34b00fde implementing Task 2: Create Multiply Function 1 4 644 25,114 $ 0.09
3801a732 reviewing whether an implementation matches... 1 5 703 25,742 $ 0.09
4c142934 doing a final code review... 1 6 854 25,319 $ 0.09
5f017a42 a code reviewer. Review Task 2... 1 6 504 22,949 $ 0.08
a6b7fbe4 a code reviewer. Review Task 1... 1 6 515 22,534 $ 0.08
f15837c0 reviewing whether an implementation matches... 1 6 416 22,485 $ 0.07
----------------------------------------------------------------------------------------------------
TOTALS:
Total messages: 41
Input tokens: 62
Output tokens: 8,419
Cache creation tokens: 132,742
Cache read tokens: 1,382,835
Total input (incl cache): 1,515,639
Total tokens: 1,524,058
Estimated cost: $4.67
(at $3/$15 per M tokens for input/output)
========================================
Test Summary
========================================
STATUS: PASSED
```
## Token Analysis Tool
### Usage
Analyze token usage from any Claude Code session:
```bash
python3 tests/claude-code/analyze-token-usage.py ~/.claude/projects/<project-dir>/<session-id>.jsonl
```
### Finding Session Files
Session transcripts are stored in `~/.claude/projects/` with the working directory path encoded:
```bash
# Example for /Users/jesse/Documents/GitHub/superpowers/superpowers
SESSION_DIR="$HOME/.claude/projects/-Users-jesse-Documents-GitHub-superpowers-superpowers"
# Find recent sessions
ls -lt "$SESSION_DIR"/*.jsonl | head -5
```
### What It Shows
- **Main session usage**: Token usage by the coordinator (you or main Claude instance)
- **Per-subagent breakdown**: Each Task invocation with:
- Agent ID
- Description (extracted from prompt)
- Message count
- Input/output tokens
- Cache usage
- Estimated cost
- **Totals**: Overall token usage and cost estimate
### Understanding the Output
- **High cache reads**: Good - means prompt caching is working
- **High input tokens on main**: Expected - coordinator has full context
- **Similar costs per subagent**: Expected - each gets similar task complexity
- **Cost per task**: Typical range is $0.05-$0.15 per subagent depending on task
## Troubleshooting
### Skills Not Loading
**Problem**: Skill not found when running headless tests
**Solutions**:
1. Ensure you're running FROM the superpowers directory: `cd /path/to/superpowers && tests/...`
2. Check `~/.claude/settings.json` has `"superpowers@superpowers-dev": true` in `enabledPlugins`
3. Verify skill exists in `skills/` directory
### Permission Errors
**Problem**: Claude blocked from writing files or accessing directories
**Solutions**:
1. Use `--permission-mode bypassPermissions` flag
2. Use `--add-dir /path/to/temp/dir` to grant access to test directories
3. Check file permissions on test directories
### Test Timeouts
**Problem**: Test takes too long and times out
**Solutions**:
1. Increase timeout: `timeout 1800 claude ...` (30 minutes)
2. Check for infinite loops in skill logic
3. Review subagent task complexity
### Session File Not Found
**Problem**: Can't find session transcript after test run
**Solutions**:
1. Check the correct project directory in `~/.claude/projects/`
2. Use `find ~/.claude/projects -name "*.jsonl" -mmin -60` to find recent sessions
3. Verify test actually ran (check for errors in test output)
## Writing New Integration Tests
### Template
```bash
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"
# Create test project
TEST_PROJECT=$(create_test_project)
trap "cleanup_test_project $TEST_PROJECT" EXIT
# Set up test files...
cd "$TEST_PROJECT"
# Run Claude with skill
PROMPT="Your test prompt here"
cd "$SCRIPT_DIR/../.." && timeout 1800 claude -p "$PROMPT" \
--allowed-tools=all \
--add-dir "$TEST_PROJECT" \
--permission-mode bypassPermissions \
2>&1 | tee output.txt
# Find and analyze session
# Claude encodes the working directory by replacing '/' with '-' (see the example above)
WORKING_DIR_ESCAPED=$(cd "$SCRIPT_DIR/../.." && pwd | sed 's|/|-|g')
SESSION_DIR="$HOME/.claude/projects/$WORKING_DIR_ESCAPED"
SESSION_FILE=$(find "$SESSION_DIR" -name "*.jsonl" -type f -mmin -60 | sort -r | head -1)
# Verify behavior by parsing session transcript
if grep -q '"name":"Skill".*"skill":"your-skill-name"' "$SESSION_FILE"; then
echo "[PASS] Skill was invoked"
fi
# Show token analysis
python3 "$SCRIPT_DIR/analyze-token-usage.py" "$SESSION_FILE"
```
### Best Practices
1. **Always cleanup**: Use trap to cleanup temp directories
2. **Parse transcripts**: Don't grep user-facing output - parse the `.jsonl` session file
3. **Grant permissions**: Use `--permission-mode bypassPermissions` and `--add-dir`
4. **Run from plugin dir**: Skills only load when running from the superpowers directory
5. **Show token usage**: Always include token analysis for cost visibility
6. **Test real behavior**: Verify actual files created, tests passing, commits made
## Session Transcript Format
Session transcripts are JSONL (JSON Lines) files where each line is a JSON object representing a message or tool result.
### Key Fields
```json
{
"type": "assistant",
"message": {
"content": [...],
"usage": {
"input_tokens": 27,
"output_tokens": 3996,
"cache_read_input_tokens": 1213703
}
}
}
```
### Tool Results
```json
{
"type": "user",
"toolUseResult": {
"agentId": "3380c209",
"usage": {
"input_tokens": 2,
"output_tokens": 787,
"cache_read_input_tokens": 24989
},
"prompt": "You are implementing Task 1...",
"content": [{"type": "text", "text": "..."}]
}
}
```
The `agentId` field links to subagent sessions, and the `usage` field contains token usage for that specific subagent invocation.
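As a minimal illustration of working with this format (assuming only the fields shown above), the following sketch totals output tokens for the main session and for each subagent:

```python
import json
import sys
from collections import defaultdict

totals = defaultdict(int)

with open(sys.argv[1]) as f:
    for line in f:
        entry = json.loads(line)
        # Main-session assistant messages carry usage under message.usage
        if entry.get("type") == "assistant":
            usage = entry.get("message", {}).get("usage", {})
            totals["main"] += usage.get("output_tokens", 0)
        # Subagent results carry usage under toolUseResult.usage, keyed by agentId
        result = entry.get("toolUseResult")
        if isinstance(result, dict) and "agentId" in result:
            usage = result.get("usage", {})
            totals[result["agentId"]] += usage.get("output_tokens", 0)

for agent, tokens in totals.items():
    print(f"{agent}: {tokens} output tokens")
```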

@@ -0,0 +1,212 @@
# Cross-Platform Polyglot Hooks for Claude Code
Claude Code plugins need hooks that work on Windows, macOS, and Linux. This document explains the polyglot wrapper technique that makes this possible.
## The Problem
Claude Code runs hook commands through the system's default shell:
- **Windows**: CMD.exe
- **macOS/Linux**: bash or sh
This creates several challenges:
1. **Script execution**: Windows CMD can't execute `.sh` files directly - it tries to open them in a text editor
2. **Path format**: Windows uses backslashes (`C:\path`), Unix uses forward slashes (`/path`)
3. **Environment variables**: `$VAR` syntax doesn't work in CMD
4. **No `bash` in PATH**: Even with Git Bash installed, `bash` isn't in the PATH when CMD runs
## The Solution: Polyglot `.cmd` Wrapper
A polyglot script is valid syntax in multiple languages simultaneously. Our wrapper is valid in both CMD and bash:
```cmd
: << 'CMDBLOCK'
@echo off
"C:\Program Files\Git\bin\bash.exe" -l -c "\"$(cygpath -u \"$CLAUDE_PLUGIN_ROOT\")/hooks/session-start.sh\""
exit /b
CMDBLOCK
# Unix shell runs from here
"${CLAUDE_PLUGIN_ROOT}/hooks/session-start.sh"
```
### How It Works
#### On Windows (CMD.exe)
1. `: << 'CMDBLOCK'` - CMD sees `:` as a label (like `:label`) and ignores `<< 'CMDBLOCK'`
2. `@echo off` - Suppresses command echoing
3. The bash.exe command runs with:
- `-l` (login shell) to get proper PATH with Unix utilities
- `cygpath -u` converts Windows path to Unix format (`C:\foo``/c/foo`)
4. `exit /b` - Exits the batch script, stopping CMD here
5. Everything after `CMDBLOCK` is never reached by CMD
#### On Unix (bash/sh)
1. `: << 'CMDBLOCK'` - `:` is a no-op, `<< 'CMDBLOCK'` starts a heredoc
2. Everything until `CMDBLOCK` is consumed by the heredoc (ignored)
3. `# Unix shell runs from here` - Comment
4. The script runs directly with the Unix path
## File Structure
```
hooks/
├── hooks.json # Points to the .cmd wrapper
├── session-start.cmd # Polyglot wrapper (cross-platform entry point)
└── session-start.sh # Actual hook logic (bash script)
```
### hooks.json
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume|clear|compact",
"hooks": [
{
"type": "command",
"command": "\"${CLAUDE_PLUGIN_ROOT}/hooks/session-start.cmd\""
}
]
}
]
}
}
```
Note: The path must be quoted because `${CLAUDE_PLUGIN_ROOT}` may contain spaces on Windows (e.g., `C:\Program Files\...`).
## Requirements
### Windows
- **Git for Windows** must be installed (provides `bash.exe` and `cygpath`)
- Default installation path: `C:\Program Files\Git\bin\bash.exe`
- If Git is installed elsewhere, the wrapper needs modification
### Unix (macOS/Linux)
- Standard bash or sh shell
- The `.cmd` file must have execute permission (`chmod +x`)
## Writing Cross-Platform Hook Scripts
Your actual hook logic goes in the `.sh` file. To ensure it works on Windows (via Git Bash):
### Do:
- Use pure bash builtins when possible
- Use `$(command)` instead of backticks
- Quote all variable expansions: `"$VAR"`
- Use `printf` or here-docs for output
### Avoid:
- External commands that may not be in PATH (sed, awk, grep)
- If you must use them, they're available in Git Bash but ensure PATH is set up (use `bash -l`)
### Example: JSON Escaping Without sed/awk
Instead of:
```bash
escaped=$(echo "$content" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | awk '{printf "%s\\n", $0}')
```
Use pure bash:
```bash
escape_for_json() {
local input="$1"
local output=""
local i char
for (( i=0; i<${#input}; i++ )); do
char="${input:$i:1}"
case "$char" in
$'\\') output+='\\' ;;
'"') output+='\"' ;;
$'\n') output+='\n' ;;
$'\r') output+='\r' ;;
$'\t') output+='\t' ;;
*) output+="$char" ;;
esac
done
printf '%s' "$output"
}
```
## Reusable Wrapper Pattern
For plugins with multiple hooks, you can create a generic wrapper that takes the script name as an argument:
### run-hook.cmd
```cmd
: << 'CMDBLOCK'
@echo off
set "SCRIPT_DIR=%~dp0"
set "SCRIPT_NAME=%~1"
"C:\Program Files\Git\bin\bash.exe" -l -c "cd \"$(cygpath -u \"%SCRIPT_DIR%\")\" && \"./%SCRIPT_NAME%\""
exit /b
CMDBLOCK
# Unix shell runs from here
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
SCRIPT_NAME="$1"
shift
"${SCRIPT_DIR}/${SCRIPT_NAME}" "$@"
```
### hooks.json using the reusable wrapper
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "startup",
"hooks": [
{
"type": "command",
"command": "\"${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd\" session-start.sh"
}
]
}
],
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "\"${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd\" validate-bash.sh"
}
]
}
]
}
}
```
## Troubleshooting
### "bash is not recognized"
CMD can't find bash. The wrapper uses the full path `C:\Program Files\Git\bin\bash.exe`. If Git is installed elsewhere, update the path.
### "cygpath: command not found" or "dirname: command not found"
Bash isn't running as a login shell. Ensure `-l` flag is used.
### Path has weird `\/` in it
`${CLAUDE_PLUGIN_ROOT}` expanded to a Windows path ending with backslash, then `/hooks/...` was appended. Use `cygpath` to convert the entire path.
### Script opens in text editor instead of running
The hooks.json is pointing directly to the `.sh` file. Point to the `.cmd` wrapper instead.
### Works in terminal but not as hook
Claude Code may run hooks differently. Test by simulating the hook environment:
```powershell
$env:CLAUDE_PLUGIN_ROOT = "C:\path\to\plugin"
cmd /c "C:\path\to\plugin\hooks\session-start.cmd"
```
## Related Issues
- [anthropics/claude-code#9758](https://github.com/anthropics/claude-code/issues/9758) - .sh scripts open in editor on Windows
- [anthropics/claude-code#3417](https://github.com/anthropics/claude-code/issues/3417) - Hooks don't work on Windows
- [anthropics/claude-code#6023](https://github.com/anthropics/claude-code/issues/6023) - CLAUDE_PROJECT_DIR not found

@@ -0,0 +1,15 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume|clear|compact",
"hooks": [
{
"type": "command",
"command": "\"${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd\" session-start.sh"
}
]
}
]
}
}

@@ -0,0 +1,19 @@
: << 'CMDBLOCK'
@echo off
REM Polyglot wrapper: runs .sh scripts cross-platform
REM Usage: run-hook.cmd <script-name> [args...]
REM The script should be in the same directory as this wrapper
if "%~1"=="" (
echo run-hook.cmd: missing script name >&2
exit /b 1
)
"C:\Program Files\Git\bin\bash.exe" -l "%~dp0%~1" %2 %3 %4 %5 %6 %7 %8 %9
exit /b
CMDBLOCK
# Unix shell runs from here
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$1"
shift
"${SCRIPT_DIR}/${SCRIPT_NAME}" "$@"

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# SessionStart hook for superpowers plugin
set -euo pipefail
# Determine plugin root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
PLUGIN_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# Check if legacy skills directory exists and build warning
warning_message=""
legacy_skills_dir="${HOME}/.config/superpowers/skills"
if [ -d "$legacy_skills_dir" ]; then
warning_message="\n\n<important-reminder>IN YOUR FIRST REPLY AFTER SEEING THIS MESSAGE YOU MUST TELL THE USER:⚠️ **WARNING:** Superpowers now uses Claude Code's skills system. Custom skills in ~/.config/superpowers/skills will not be read. Move custom skills to ~/.claude/skills instead. To make this message go away, remove ~/.config/superpowers/skills</important-reminder>"
fi
# Read using-superpowers content
using_superpowers_content=$(cat "${PLUGIN_ROOT}/skills/using-superpowers/SKILL.md" 2>&1 || echo "Error reading using-superpowers skill")
# Escape outputs for JSON using pure bash
escape_for_json() {
local input="$1"
local output=""
local i char
for (( i=0; i<${#input}; i++ )); do
char="${input:$i:1}"
case "$char" in
$'\\') output+='\\' ;;
'"') output+='\"' ;;
$'\n') output+='\n' ;;
$'\r') output+='\r' ;;
$'\t') output+='\t' ;;
*) output+="$char" ;;
esac
done
printf '%s' "$output"
}
using_superpowers_escaped=$(escape_for_json "$using_superpowers_content")
warning_escaped=$(escape_for_json "$warning_message")
# Output context injection as JSON
cat <<EOF
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "<EXTREMELY_IMPORTANT>\nYou have superpowers.\n\n**Below is the full content of your 'superpowers:using-superpowers' skill - your introduction to using skills. For all other skills, use the 'Skill' tool:**\n\n${using_superpowers_escaped}\n\n${warning_escaped}\n</EXTREMELY_IMPORTANT>"
}
}
EOF
exit 0

@@ -0,0 +1,208 @@
import fs from 'fs';
import path from 'path';
import { execSync } from 'child_process';
/**
* Extract YAML frontmatter from a skill file.
* Current format:
* ---
* name: skill-name
* description: Use when [condition] - [what it does]
* ---
*
* @param {string} filePath - Path to SKILL.md file
* @returns {{name: string, description: string}}
*/
function extractFrontmatter(filePath) {
try {
const content = fs.readFileSync(filePath, 'utf8');
const lines = content.split('\n');
let inFrontmatter = false;
let name = '';
let description = '';
for (const line of lines) {
if (line.trim() === '---') {
if (inFrontmatter) break;
inFrontmatter = true;
continue;
}
if (inFrontmatter) {
const match = line.match(/^(\w+):\s*(.*)$/);
if (match) {
const [, key, value] = match;
switch (key) {
case 'name':
name = value.trim();
break;
case 'description':
description = value.trim();
break;
}
}
}
}
return { name, description };
} catch (error) {
return { name: '', description: '' };
}
}
/**
* Find all SKILL.md files in a directory recursively.
*
* @param {string} dir - Directory to search
* @param {string} sourceType - 'personal' or 'superpowers' for namespacing
* @param {number} maxDepth - Maximum recursion depth (default: 3)
* @returns {Array<{path: string, name: string, description: string, sourceType: string}>}
*/
function findSkillsInDir(dir, sourceType, maxDepth = 3) {
const skills = [];
if (!fs.existsSync(dir)) return skills;
function recurse(currentDir, depth) {
if (depth > maxDepth) return;
const entries = fs.readdirSync(currentDir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentDir, entry.name);
if (entry.isDirectory()) {
// Check for SKILL.md in this directory
const skillFile = path.join(fullPath, 'SKILL.md');
if (fs.existsSync(skillFile)) {
const { name, description } = extractFrontmatter(skillFile);
skills.push({
path: fullPath,
skillFile: skillFile,
name: name || entry.name,
description: description || '',
sourceType: sourceType
});
}
// Recurse into subdirectories
recurse(fullPath, depth + 1);
}
}
}
recurse(dir, 0);
return skills;
}
/**
* Resolve a skill name to its file path, handling shadowing
* (personal skills override superpowers skills).
*
* @param {string} skillName - Name like "superpowers:brainstorming" or "my-skill"
* @param {string} superpowersDir - Path to superpowers skills directory
* @param {string} personalDir - Path to personal skills directory
* @returns {{skillFile: string, sourceType: string, skillPath: string} | null}
*/
function resolveSkillPath(skillName, superpowersDir, personalDir) {
// Strip superpowers: prefix if present
const forceSuperpowers = skillName.startsWith('superpowers:');
const actualSkillName = forceSuperpowers ? skillName.replace(/^superpowers:/, '') : skillName;
// Try personal skills first (unless explicitly superpowers:)
if (!forceSuperpowers && personalDir) {
const personalPath = path.join(personalDir, actualSkillName);
const personalSkillFile = path.join(personalPath, 'SKILL.md');
if (fs.existsSync(personalSkillFile)) {
return {
skillFile: personalSkillFile,
sourceType: 'personal',
skillPath: actualSkillName
};
}
}
// Try superpowers skills
if (superpowersDir) {
const superpowersPath = path.join(superpowersDir, actualSkillName);
const superpowersSkillFile = path.join(superpowersPath, 'SKILL.md');
if (fs.existsSync(superpowersSkillFile)) {
return {
skillFile: superpowersSkillFile,
sourceType: 'superpowers',
skillPath: actualSkillName
};
}
}
return null;
}
/**
* Check if a git repository has updates available.
*
* @param {string} repoDir - Path to git repository
* @returns {boolean} - True if updates are available
*/
function checkForUpdates(repoDir) {
try {
// Quick check with 3 second timeout to avoid delays if network is down
const output = execSync('git fetch origin && git status --porcelain=v1 --branch', {
cwd: repoDir,
timeout: 3000,
encoding: 'utf8',
stdio: 'pipe'
});
// Parse git status output to see if we're behind
const statusLines = output.split('\n');
for (const line of statusLines) {
if (line.startsWith('## ') && line.includes('[behind ')) {
return true; // We're behind remote
}
}
return false; // Up to date
} catch (error) {
// Network down, git error, timeout, etc. - don't block bootstrap
return false;
}
}
/**
* Strip YAML frontmatter from skill content, returning just the content.
*
* @param {string} content - Full content including frontmatter
* @returns {string} - Content without frontmatter
*/
function stripFrontmatter(content) {
const lines = content.split('\n');
let inFrontmatter = false;
let frontmatterEnded = false;
const contentLines = [];
for (const line of lines) {
    // Treat '---' as a delimiter only until the frontmatter block has ended,
    // so horizontal rules inside the skill content are preserved.
    if (!frontmatterEnded && line.trim() === '---') {
      if (inFrontmatter) {
        frontmatterEnded = true;
      } else {
        inFrontmatter = true;
      }
      continue;
    }
if (frontmatterEnded || !inFrontmatter) {
contentLines.push(line);
}
}
return contentLines.join('\n').trim();
}
export {
extractFrontmatter,
findSkillsInDir,
resolveSkillPath,
checkForUpdates,
stripFrontmatter
};

@@ -0,0 +1,54 @@
---
name: brainstorming
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
---
# Brainstorming Ideas Into Designs
## Overview
Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design in small sections (200-300 words), checking after each section whether it looks right so far.
## The Process
**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why
**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Break it into sections of 200-300 words
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense
## After the Design
**Documentation:**
- Write the validated design to `docs/plans/YYYY-MM-DD-<topic>-design.md`
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git
**Implementation (if continuing):**
- Ask: "Ready to set up for implementation?"
- Use superpowers:using-git-worktrees to create isolated workspace
- Use superpowers:writing-plans to create detailed implementation plan
## Key Principles
- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design in sections, validate each
- **Be flexible** - Go back and clarify when something doesn't make sense

@@ -0,0 +1,180 @@
---
name: dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---
# Dispatching Parallel Agents
## Overview
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
## When to Use
```dot
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from others
- No shared state between investigations
**Don't use when:**
- Failures are related (fix one might fix others)
- Need to understand full system state
- Agents would interfere with each other
## The Pattern
### 1. Identify Independent Domains
Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality
Each domain is independent - fixing tool approval doesn't affect abort tests.
### 2. Create Focused Agent Tasks
Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed
### 3. Dispatch in Parallel
```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```
### 4. Review and Integrate
When agents return:
- Read each summary
- Verify fixes don't conflict
- Run full test suite
- Integrate all changes
## Agent Prompt Structure
Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?
```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
```
## Common Mistakes
**❌ Too broad:** "Fix all the tests" - agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope
**❌ No context:** "Fix the race condition" - agent doesn't know where
**✅ Context:** Paste the error messages and test names
**❌ No constraints:** Agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"
**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return summary of root cause and changes"
## When NOT to Use
**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing same files, using same resources)
## Real Example from Session
**Scenario:** 6 test failures across 3 files after major refactoring
**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)
**Decision:** Independent domains - abort logic separate from batch completion separate from race conditions
**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```
**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed event structure bug (threadId in wrong place)
- Agent 3: Added wait for async tool execution to complete
**Integration:** All fixes independent, no conflicts, full suite green
**Time saved:** 3 problems solved in parallel vs sequentially
## Key Benefits
1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has narrow scope, less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in time of 1
## Verification
After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit same code?
3. **Run full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors
## Real-World Impact
From debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes

@@ -0,0 +1,76 @@
---
name: executing-plans
description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
---
# Executing Plans
## Overview
Load plan, review critically, execute tasks in batches, report for review between batches.
**Core principle:** Batch execution with checkpoints for architect review.
**Announce at start:** "I'm using the executing-plans skill to implement this plan."
## The Process
### Step 1: Load and Review Plan
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
### Step 2: Execute Batch
**Default: First 3 tasks**
For each task:
1. Mark as in_progress
2. Follow each step exactly (plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed
### Step 3: Report
When batch complete:
- Show what was implemented
- Show verification output
- Say: "Ready for feedback."
### Step 4: Continue
Based on feedback:
- Apply changes if needed
- Execute next batch
- Repeat until complete
### Step 5: Complete Development
After all tasks complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## When to Stop and Ask for Help
**STOP executing immediately when:**
- Hit a blocker mid-batch (missing dependency, test fails, instruction unclear)
- Plan has critical gaps preventing starting
- You don't understand an instruction
- Verification fails repeatedly
**Ask for clarification rather than guessing.**
## When to Revisit Earlier Steps
**Return to Review (Step 1) when:**
- Partner updates the plan based on your feedback
- Fundamental approach needs rethinking
**Don't force through blockers** - stop and ask.
## Remember
- Review plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when plan says to
- Between batches: just report and wait
- Stop when blocked, don't guess

View File

@@ -0,0 +1,200 @@
---
name: finishing-a-development-branch
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---
# Finishing a Development Branch
## Overview
Guide completion of development work by presenting clear options and handling chosen workflow.
**Core principle:** Verify tests → Present options → Execute choice → Clean up.
**Announce at start:** "I'm using the finishing-a-development-branch skill to complete this work."
## The Process
### Step 1: Verify Tests
**Before presenting options, verify tests pass:**
```bash
# Run project's test suite
npm test / cargo test / pytest / go test ./...
```
**If tests fail:**
```
Tests failing (<N> failures). Must fix before completing:
[Show failures]
Cannot proceed with merge/PR until tests pass.
```
Stop. Don't proceed to Step 2.
**If tests pass:** Continue to Step 2.
### Step 2: Determine Base Branch
```bash
# Try common base branches
git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
```
Or ask: "This branch split from main - is that correct?"
### Step 3: Present Options
Present exactly these 4 options:
```
Implementation complete. What would you like to do?
1. Merge back to <base-branch> locally
2. Push and create a Pull Request
3. Keep the branch as-is (I'll handle it later)
4. Discard this work
Which option?
```
**Don't add explanation** - keep options concise.
### Step 4: Execute Choice
#### Option 1: Merge Locally
```bash
# Switch to base branch
git checkout <base-branch>
# Pull latest
git pull
# Merge feature branch
git merge <feature-branch>
# Verify tests on merged result
<test command>
# If tests pass
git branch -d <feature-branch>
```
Then: Cleanup worktree (Step 5)
#### Option 2: Push and Create PR
```bash
# Push branch
git push -u origin <feature-branch>
# Create PR
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
<2-3 bullets of what changed>
## Test Plan
- [ ] <verification steps>
EOF
)"
```
Then: Cleanup worktree (Step 5)
#### Option 3: Keep As-Is
Report: "Keeping branch <name>. Worktree preserved at <path>."
**Don't cleanup worktree.**
#### Option 4: Discard
**Confirm first:**
```
This will permanently delete:
- Branch <name>
- All commits: <commit-list>
- Worktree at <path>
Type 'discard' to confirm.
```
Wait for exact confirmation.
If confirmed:
```bash
git checkout <base-branch>
git branch -D <feature-branch>
```
Then: Cleanup worktree (Step 5)
### Step 5: Cleanup Worktree
**For Options 1 and 4:**
Check if in worktree:
```bash
git worktree list | grep "<feature-branch>"
```
If yes:
```bash
git worktree remove <worktree-path>
```
**For Options 2 and 3:** Keep the worktree.
## Quick Reference
| Option | Merge | Push | Keep Worktree | Cleanup Branch |
|--------|-------|------|---------------|----------------|
| 1. Merge locally | ✓ | - | - | ✓ |
| 2. Create PR | - | ✓ | ✓ | - |
| 3. Keep as-is | - | - | ✓ | - |
| 4. Discard | - | - | - | ✓ (force) |
## Common Mistakes
**Skipping test verification**
- **Problem:** Merge broken code, create failing PR
- **Fix:** Always verify tests before offering options
**Open-ended questions**
- **Problem:** "What should I do next?" → ambiguous
- **Fix:** Present exactly 4 structured options
**Automatic worktree cleanup**
- **Problem:** Remove worktree when might need it (Option 2, 3)
- **Fix:** Only cleanup for Options 1 and 4
**No confirmation for discard**
- **Problem:** Accidentally delete work
- **Fix:** Require typed "discard" confirmation
## Red Flags
**Never:**
- Proceed with failing tests
- Merge without verifying tests on result
- Delete work without confirmation
- Force-push without explicit request
**Always:**
- Verify tests before offering options
- Present exactly 4 options
- Get typed confirmation for Option 4
- Clean up worktree for Options 1 & 4 only
## Integration
**Called by:**
- **subagent-driven-development** (Step 7) - After all tasks complete
- **executing-plans** (Step 5) - After all batches complete
**Pairs with:**
- **using-git-worktrees** - Cleans up worktree created by that skill

View File

@@ -0,0 +1,213 @@
---
name: receiving-code-review
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception
## Overview
Code review requires technical evaluation, not emotional performance.
**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## The Response Pattern
```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```
## Forbidden Responses
**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)
**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)
## Handling Unclear Feedback
```
IF any item is unclear:
STOP - do not implement anything yet
ASK for clarification on unclear items
WHY: Items may be related. Partial understanding = wrong implementation.
```
**Example:**
```
your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.
❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```
## Source-Specific Handling
### From your human partner
- **Trusted** - implement after understanding
- **Still ask** if scope unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment
### From External Reviewers
```
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does reviewer understand full context?
IF suggestion seems wrong:
Push back with technical reasoning
IF can't easily verify:
Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"
IF conflicts with your human partner's prior decisions:
Stop and discuss with your human partner first
```
**your human partner's rule:** "External feedback - be skeptical, but check carefully"
## YAGNI Check for "Professional" Features
```
IF reviewer suggests "implementing properly":
grep codebase for actual usage
IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
IF used: Then implement properly
```
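A concrete version of that check - the endpoint name `exportMetricsCsv` below is purely illustrative:
```bash
# Look for callers outside the endpoint's own definition and tests
grep -rn "exportMetricsCsv" src/ --include="*.ts" | grep -v "\.test\."
# No hits beyond the definition itself -> raise YAGNI with the reviewer
```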
**your human partner's rule:** "You and reviewer both report to me. If we don't need this feature, don't add it."
## Implementation Order
```
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
- Blocking issues (breaks, security)
- Simple fixes (typos, imports)
- Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
```
## When To Push Back
Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy/compatibility reasons exist
- Conflicts with your human partner's architectural decisions
**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural
**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"
## Acknowledging Correct Feedback
When feedback IS correct:
```
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]
❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression
```
**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.
**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.
## Gracefully Correcting Your Pushback
If you pushed back and were wrong:
```
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."
❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
```
State the correction factually and move on.
## Common Mistakes
| Mistake | Fix |
|---------|-----|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |
## Real Examples
**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```
**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```
**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```
**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```
## GitHub Thread Replies
When replying to inline review comments on GitHub, reply in the comment thread (`gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`), not as a top-level PR comment.
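For example, a reply to review comment `456789` on PR `123` could look like this (both IDs are placeholders; the comment ID comes from the review thread, not the PR number):
```bash
gh api --method POST \
  repos/OWNER/REPO/pulls/123/comments/456789/replies \
  -f body="Fixed in abc1234 - extracted the magic number into a constant."
```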
## The Bottom Line
**External feedback = suggestions to evaluate, not orders to follow.**
Verify. Question. Then implement.
No performative agreement. Technical rigor always.

View File

@@ -0,0 +1,105 @@
---
name: requesting-code-review
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements
---
# Requesting Code Review
Dispatch superpowers:code-reviewer subagent to catch issues before they cascade.
**Core principle:** Review early, review often.
## When to Request Review
**Mandatory:**
- After each task in subagent-driven development
- After completing major feature
- Before merge to main
**Optional but valuable:**
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug
## How to Request
**1. Get git SHAs:**
```bash
BASE_SHA=$(git rev-parse HEAD~1) # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```
**2. Dispatch code-reviewer subagent:**
Use Task tool with superpowers:code-reviewer type, fill template at `code-reviewer.md`
**Placeholders:**
- `{WHAT_WAS_IMPLEMENTED}` - What you just built
- `{PLAN_OR_REQUIREMENTS}` - What it should do
- `{BASE_SHA}` - Starting commit
- `{HEAD_SHA}` - Ending commit
- `{DESCRIPTION}` - Brief summary
**3. Act on feedback:**
- Fix Critical issues immediately
- Fix Important issues before proceeding
- Note Minor issues for later
- Push back if reviewer is wrong (with reasoning)
## Example
```
[Just completed Task 2: Add verification function]
You: Let me request code review before proceeding.
BASE_SHA=$(git log --oneline | grep "Task 1" | head -1 | awk '{print $1}')
HEAD_SHA=$(git rev-parse HEAD)
[Dispatch superpowers:code-reviewer subagent]
WHAT_WAS_IMPLEMENTED: Verification and repair functions for conversation index
PLAN_OR_REQUIREMENTS: Task 2 from docs/plans/deployment-plan.md
BASE_SHA: a7981ec
HEAD_SHA: 3df7661
DESCRIPTION: Added verifyIndex() and repairIndex() with 4 issue types
[Subagent returns]:
Strengths: Clean architecture, real tests
Issues:
Important: Missing progress indicators
Minor: Magic number (100) for reporting interval
Assessment: Ready to proceed
You: [Fix progress indicators]
[Continue to Task 3]
```
## Integration with Workflows
**Subagent-Driven Development:**
- Review after EACH task
- Catch issues before they compound
- Fix before moving to next task
**Executing Plans:**
- Review after each batch (3 tasks)
- Get feedback, apply, continue
**Ad-Hoc Development:**
- Review before merge
- Review when stuck
## Red Flags
**Never:**
- Skip review because "it's simple"
- Ignore Critical issues
- Proceed with unfixed Important issues
- Argue with valid technical feedback
**If reviewer wrong:**
- Push back with technical reasoning
- Show code/tests that prove it works
- Request clarification
See template at: requesting-code-review/code-reviewer.md

View File

@@ -0,0 +1,146 @@
# Code Review Agent
You are reviewing code changes for production readiness.
**Your task:**
1. Review {WHAT_WAS_IMPLEMENTED}
2. Compare against {PLAN_OR_REQUIREMENTS}
3. Check code quality, architecture, testing
4. Categorize issues by severity
5. Assess production readiness
## What Was Implemented
{DESCRIPTION}
## Requirements/Plan
{PLAN_OR_REQUIREMENTS}
## Git Range to Review
**Base:** {BASE_SHA}
**Head:** {HEAD_SHA}
```bash
git diff --stat {BASE_SHA}..{HEAD_SHA}
git diff {BASE_SHA}..{HEAD_SHA}
```
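It can also help to scan the individual commits in the range before reading the full diff:
```bash
git log --oneline {BASE_SHA}..{HEAD_SHA}
```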
## Review Checklist
**Code Quality:**
- Clean separation of concerns?
- Proper error handling?
- Type safety (if applicable)?
- DRY principle followed?
- Edge cases handled?
**Architecture:**
- Sound design decisions?
- Scalability considerations?
- Performance implications?
- Security concerns?
**Testing:**
- Tests actually test logic (not mocks)?
- Edge cases covered?
- Integration tests where needed?
- All tests passing?
**Requirements:**
- All plan requirements met?
- Implementation matches spec?
- No scope creep?
- Breaking changes documented?
**Production Readiness:**
- Migration strategy (if schema changes)?
- Backward compatibility considered?
- Documentation complete?
- No obvious bugs?
## Output Format
### Strengths
[What's well done? Be specific.]
### Issues
#### Critical (Must Fix)
[Bugs, security issues, data loss risks, broken functionality]
#### Important (Should Fix)
[Architecture problems, missing features, poor error handling, test gaps]
#### Minor (Nice to Have)
[Code style, optimization opportunities, documentation improvements]
**For each issue:**
- File:line reference
- What's wrong
- Why it matters
- How to fix (if not obvious)
### Recommendations
[Improvements for code quality, architecture, or process]
### Assessment
**Ready to merge?** [Yes/No/With fixes]
**Reasoning:** [Technical assessment in 1-2 sentences]
## Critical Rules
**DO:**
- Categorize by actual severity (not everything is Critical)
- Be specific (file:line, not vague)
- Explain WHY issues matter
- Acknowledge strengths
- Give clear verdict
**DON'T:**
- Say "looks good" without checking
- Mark nitpicks as Critical
- Give feedback on code you didn't review
- Be vague ("improve error handling")
- Avoid giving a clear verdict
## Example Output
```
### Strengths
- Clean database schema with proper migrations (db.ts:15-42)
- Comprehensive test coverage (18 tests, all edge cases)
- Good error handling with fallbacks (summarizer.ts:85-92)
### Issues
#### Important
1. **Missing help text in CLI wrapper**
- File: index-conversations:1-31
- Issue: No --help flag, users won't discover --concurrency
- Fix: Add --help case with usage examples
2. **Date validation missing**
- File: search.ts:25-27
- Issue: Invalid dates silently return no results
- Fix: Validate ISO format, throw error with example
#### Minor
1. **Progress indicators**
- File: indexer.ts:130
- Issue: No "X of Y" counter for long operations
- Impact: Users don't know how long to wait
### Recommendations
- Add progress reporting for user experience
- Consider config file for excluded projects (portability)
### Assessment
**Ready to merge: With fixes**
**Reasoning:** Core implementation is solid with good architecture and tests. Important issues (help text, date validation) are easily fixed and don't affect core functionality.
```

View File

@@ -0,0 +1,240 @@
---
name: subagent-driven-development
description: Use when executing implementation plans with independent tasks in the current session
---
# Subagent-Driven Development
Execute the plan by dispatching a fresh subagent per task, with a two-stage review after each task: spec compliance first, then code quality.
**Core principle:** Fresh subagent per task + two-stage review (spec then quality) = high quality, fast iteration
## When to Use
```dot
digraph when_to_use {
"Have implementation plan?" [shape=diamond];
"Tasks mostly independent?" [shape=diamond];
"Stay in this session?" [shape=diamond];
"subagent-driven-development" [shape=box];
"executing-plans" [shape=box];
"Manual execution or brainstorm first" [shape=box];
"Have implementation plan?" -> "Tasks mostly independent?" [label="yes"];
"Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"];
"Tasks mostly independent?" -> "Stay in this session?" [label="yes"];
"Tasks mostly independent?" -> "Manual execution or brainstorm first" [label="no - tightly coupled"];
"Stay in this session?" -> "subagent-driven-development" [label="yes"];
"Stay in this session?" -> "executing-plans" [label="no - parallel session"];
}
```
**vs. Executing Plans (parallel session):**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Two-stage review after each task: spec compliance first, then code quality
- Faster iteration (no human-in-loop between tasks)
## The Process
```dot
digraph process {
rankdir=TB;
subgraph cluster_per_task {
label="Per Task";
"Dispatch implementer subagent (./implementer-prompt.md)" [shape=box];
"Implementer subagent asks questions?" [shape=diamond];
"Answer questions, provide context" [shape=box];
"Implementer subagent implements, tests, commits, self-reviews" [shape=box];
"Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [shape=box];
"Spec reviewer subagent confirms code matches spec?" [shape=diamond];
"Implementer subagent fixes spec gaps" [shape=box];
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [shape=box];
"Code quality reviewer subagent approves?" [shape=diamond];
"Implementer subagent fixes quality issues" [shape=box];
"Mark task complete in TodoWrite" [shape=box];
}
"Read plan, extract all tasks with full text, note context, create TodoWrite" [shape=box];
"More tasks remain?" [shape=diamond];
"Dispatch final code reviewer subagent for entire implementation" [shape=box];
"Use superpowers:finishing-a-development-branch" [shape=box style=filled fillcolor=lightgreen];
"Read plan, extract all tasks with full text, note context, create TodoWrite" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?";
"Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"];
"Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Implementer subagent asks questions?" -> "Implementer subagent implements, tests, commits, self-reviews" [label="no"];
"Implementer subagent implements, tests, commits, self-reviews" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)";
"Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" -> "Spec reviewer subagent confirms code matches spec?";
"Spec reviewer subagent confirms code matches spec?" -> "Implementer subagent fixes spec gaps" [label="no"];
"Implementer subagent fixes spec gaps" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [label="re-review"];
"Spec reviewer subagent confirms code matches spec?" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="yes"];
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?";
"Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"];
"Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="re-review"];
"Code quality reviewer subagent approves?" -> "Mark task complete in TodoWrite" [label="yes"];
"Mark task complete in TodoWrite" -> "More tasks remain?";
"More tasks remain?" -> "Dispatch implementer subagent (./implementer-prompt.md)" [label="yes"];
"More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
}
```
## Prompt Templates
- `./implementer-prompt.md` - Dispatch implementer subagent
- `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent
- `./code-quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent
## Example Workflow
```
You: I'm using Subagent-Driven Development to execute this plan.
[Read plan file once: docs/plans/feature-plan.md]
[Extract all 5 tasks with full text and context]
[Create TodoWrite with all tasks]
Task 1: Hook installation script
[Get Task 1 text and context (already extracted)]
[Dispatch implementation subagent with full task text + context]
Implementer: "Before I begin - should the hook be installed at user or system level?"
You: "User level (~/.config/superpowers/hooks/)"
Implementer: "Got it. Implementing now..."
[Later] Implementer:
- Implemented install-hook command
- Added tests, 5/5 passing
- Self-review: Found I missed --force flag, added it
- Committed
[Dispatch spec compliance reviewer]
Spec reviewer: ✅ Spec compliant - all requirements met, nothing extra
[Get git SHAs, dispatch code quality reviewer]
Code reviewer: Strengths: Good test coverage, clean. Issues: None. Approved.
[Mark Task 1 complete]
Task 2: Recovery modes
[Get Task 2 text and context (already extracted)]
[Dispatch implementation subagent with full task text + context]
Implementer: [No questions, proceeds]
Implementer:
- Added verify/repair modes
- 8/8 tests passing
- Self-review: All good
- Committed
[Dispatch spec compliance reviewer]
Spec reviewer: ❌ Issues:
- Missing: Progress reporting (spec says "report every 100 items")
- Extra: Added --json flag (not requested)
[Implementer fixes issues]
Implementer: Removed --json flag, added progress reporting
[Spec reviewer reviews again]
Spec reviewer: ✅ Spec compliant now
[Dispatch code quality reviewer]
Code reviewer: Strengths: Solid. Issues (Important): Magic number (100)
[Implementer fixes]
Implementer: Extracted PROGRESS_INTERVAL constant
[Code reviewer reviews again]
Code reviewer: ✅ Approved
[Mark Task 2 complete]
...
[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge
Done!
```
## Advantages
**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel-safe (subagents don't interfere)
- Subagent can ask questions (before AND during work)
**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Review checkpoints automatic
**Efficiency gains:**
- No file reading overhead (controller provides full text)
- Controller curates exactly what context is needed
- Subagent gets complete information upfront
- Questions surfaced before work begins (not after)
**Quality gates:**
- Self-review catches issues before handoff
- Two-stage review: spec compliance, then code quality
- Review loops ensure fixes actually work
- Spec compliance prevents over/under-building
- Code quality ensures implementation is well-built
**Cost:**
- More subagent invocations (implementer + 2 reviewers per task)
- Controller does more prep work (extracting all tasks upfront)
- Review loops add iterations
- But catches issues early (cheaper than debugging later)
## Red Flags
**Never:**
- Skip reviews (spec compliance OR code quality)
- Proceed with unfixed issues
- Dispatch multiple implementation subagents in parallel (conflicts)
- Make subagent read plan file (provide full text instead)
- Skip scene-setting context (subagent needs to understand where task fits)
- Ignore subagent questions (answer before letting them proceed)
- Accept "close enough" on spec compliance (spec reviewer found issues = not done)
- Skip review loops (reviewer found issues = implementer fixes = review again)
- Let implementer self-review replace actual review (both are needed)
- **Start code quality review before spec compliance is ✅** (wrong order)
- Move to next task while either review has open issues
**If subagent asks questions:**
- Answer clearly and completely
- Provide additional context if needed
- Don't rush them into implementation
**If reviewer finds issues:**
- Implementer (same subagent) fixes them
- Reviewer reviews again
- Repeat until approved
- Don't skip the re-review
**If subagent fails task:**
- Dispatch fix subagent with specific instructions
- Don't try to fix manually (context pollution)
## Integration
**Required workflow skills:**
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:requesting-code-review** - Code review template for reviewer subagents
- **superpowers:finishing-a-development-branch** - Complete development after all tasks
**Subagents should use:**
- **superpowers:test-driven-development** - Subagents follow TDD for each task
**Alternative workflow:**
- **superpowers:executing-plans** - Use for parallel session instead of same-session execution

View File

@@ -0,0 +1,20 @@
# Code Quality Reviewer Prompt Template
Use this template when dispatching a code quality reviewer subagent.
**Purpose:** Verify implementation is well-built (clean, tested, maintainable)
**Only dispatch after spec compliance review passes.**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from implementer's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
```
**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment

View File

@@ -0,0 +1,78 @@
# Implementer Subagent Prompt Template
Use this template when dispatching an implementer subagent.
```
Task tool (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N: [task name]
## Task Description
[FULL TEXT of task from plan - paste it here, don't make subagent read file]
## Context
[Scene-setting: where this fits, dependencies, architectural context]
## Before You Begin
If you have questions about:
- The requirements or acceptance criteria
- The approach or implementation strategy
- Dependencies or assumptions
- Anything unclear in the task description
**Ask them now.** Raise any concerns before starting work.
## Your Job
Once you're clear on requirements:
1. Implement exactly what the task specifies
2. Write tests (following TDD if task says to)
3. Verify implementation works
4. Commit your work
5. Self-review (see below)
6. Report back
Work from: [directory]
**While you work:** If you encounter something unexpected or unclear, **ask questions**.
It's always OK to pause and clarify. Don't guess or make assumptions.
## Before Reporting Back: Self-Review
Review your work with fresh eyes. Ask yourself:
**Completeness:**
- Did I fully implement everything in the spec?
- Did I miss any requirements?
- Are there edge cases I didn't handle?
**Quality:**
- Is this my best work?
- Are names clear and accurate (match what things do, not how they work)?
- Is the code clean and maintainable?
**Discipline:**
- Did I avoid overbuilding (YAGNI)?
- Did I only build what was requested?
- Did I follow existing patterns in the codebase?
**Testing:**
- Do tests actually verify behavior (not just mock behavior)?
- Did I follow TDD if required?
- Are tests comprehensive?
If you find issues during self-review, fix them now before reporting.
## Report Format
When done, report:
- What you implemented
- What you tested and test results
- Files changed
- Self-review findings (if any)
- Any issues or concerns
```

View File

@@ -0,0 +1,61 @@
# Spec Compliance Reviewer Prompt Template
Use this template when dispatching a spec compliance reviewer subagent.
**Purpose:** Verify implementer built what was requested (nothing more, nothing less)
```
Task tool (general-purpose):
description: "Review spec compliance for Task N"
prompt: |
You are reviewing whether an implementation matches its specification.
## What Was Requested
[FULL TEXT of task requirements]
## What Implementer Claims They Built
[From implementer's report]
## CRITICAL: Do Not Trust the Report
The implementer finished suspiciously quickly. Their report may be incomplete,
inaccurate, or optimistic. You MUST verify everything independently.
**DO NOT:**
- Take their word for what they implemented
- Trust their claims about completeness
- Accept their interpretation of requirements
**DO:**
- Read the actual code they wrote
- Compare actual implementation to requirements line by line
- Check for missing pieces they claimed to implement
- Look for extra features they didn't mention
## Your Job
Read the implementation code and verify:
**Missing requirements:**
- Did they implement everything that was requested?
- Are there requirements they skipped or missed?
- Did they claim something works but didn't actually implement it?
**Extra/unneeded work:**
- Did they build things that weren't requested?
- Did they over-engineer or add unnecessary features?
- Did they add "nice to haves" that weren't in spec?
**Misunderstandings:**
- Did they interpret requirements differently than intended?
- Did they solve the wrong problem?
- Did they implement the right feature but wrong way?
**Verify by reading code, not by trusting report.**
Report:
- ✅ Spec compliant (if everything matches after code inspection)
- ❌ Issues found: [list specifically what's missing or extra, with file:line references]
```

View File

@@ -0,0 +1,119 @@
# Creation Log: Systematic Debugging Skill
Reference example of extracting, structuring, and bulletproofing a critical skill.
## Source Material
Extracted debugging framework from `/Users/jesse/.claude/CLAUDE.md`:
- 4-phase systematic process (Investigation → Pattern Analysis → Hypothesis → Implementation)
- Core mandate: ALWAYS find root cause, NEVER fix symptoms
- Rules designed to resist time pressure and rationalization
## Extraction Decisions
**What to include:**
- Complete 4-phase framework with all rules
- Anti-shortcuts ("NEVER fix symptom", "STOP and re-analyze")
- Pressure-resistant language ("even if faster", "even if I seem in a hurry")
- Concrete steps for each phase
**What to leave out:**
- Project-specific context
- Repetitive variations of same rule
- Narrative explanations (condensed to principles)
## Structure Following skill-creation/SKILL.md
1. **Rich when_to_use** - Included symptoms and anti-patterns
2. **Type: technique** - Concrete process with steps
3. **Keywords** - "root cause", "symptom", "workaround", "debugging", "investigation"
4. **Flowchart** - Decision point for "fix failed" → re-analyze vs add more fixes
5. **Phase-by-phase breakdown** - Scannable checklist format
6. **Anti-patterns section** - What NOT to do (critical for this skill)
## Bulletproofing Elements
Framework designed to resist rationalization under pressure:
### Language Choices
- "ALWAYS" / "NEVER" (not "should" / "try to")
- "even if faster" / "even if I seem in a hurry"
- "STOP and re-analyze" (explicit pause)
- "Don't skip past" (catches the actual behavior)
### Structural Defenses
- **Phase 1 required** - Can't skip to implementation
- **Single hypothesis rule** - Forces thinking, prevents shotgun fixes
- **Explicit failure mode** - "IF your first fix doesn't work" with mandatory action
- **Anti-patterns section** - Shows exactly what shortcuts look like
### Redundancy
- Root cause mandate in overview + when_to_use + Phase 1 + implementation rules
- "NEVER fix symptom" appears 4 times in different contexts
- Each phase has explicit "don't skip" guidance
## Testing Approach
Created 4 validation tests following skills/meta/testing-skills-with-subagents:
### Test 1: Academic Context (No Pressure)
- Simple bug, no time pressure
- **Result:** Perfect compliance, complete investigation
### Test 2: Time Pressure + Obvious Quick Fix
- User "in a hurry", symptom fix looks easy
- **Result:** Resisted shortcut, followed full process, found real root cause
### Test 3: Complex System + Uncertainty
- Multi-layer failure, unclear if can find root cause
- **Result:** Systematic investigation, traced through all layers, found source
### Test 4: Failed First Fix
- Hypothesis doesn't work, temptation to add more fixes
- **Result:** Stopped, re-analyzed, formed new hypothesis (no shotgun)
**All tests passed.** No rationalizations found.
## Iterations
### Initial Version
- Complete 4-phase framework
- Anti-patterns section
- Flowchart for "fix failed" decision
### Enhancement 1: TDD Reference
- Added link to skills/testing/test-driven-development
- Note explaining TDD's "simplest code" ≠ debugging's "root cause"
- Prevents confusion between methodologies
## Final Outcome
Bulletproof skill that:
- ✅ Clearly mandates root cause investigation
- ✅ Resists time pressure rationalization
- ✅ Provides concrete steps for each phase
- ✅ Shows anti-patterns explicitly
- ✅ Tested under multiple pressure scenarios
- ✅ Clarifies relationship to TDD
- ✅ Ready for use
## Key Insight
**Most important bulletproofing:** Anti-patterns section showing exact shortcuts that feel justified in the moment. When Claude thinks "I'll just add this one quick fix", seeing that exact pattern listed as wrong creates cognitive friction.
## Usage Example
When encountering a bug:
1. Load skill: skills/debugging/systematic-debugging
2. Read overview (10 sec) - reminded of mandate
3. Follow Phase 1 checklist - forced investigation
4. If tempted to skip - see anti-pattern, stop
5. Complete all phases - root cause found
**Time investment:** 5-10 minutes
**Time saved:** Hours of symptom-whack-a-mole
---
*Created: 2025-10-03*
*Purpose: Reference example for skill extraction and bulletproofing*

View File

@@ -0,0 +1,296 @@
---
name: systematic-debugging
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
---
# Systematic Debugging
## Overview
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
**Core principle:** ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
**Violating the letter of this process is violating the spirit of debugging.**
## The Iron Law
```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```
If you haven't completed Phase 1, you cannot propose fixes.
## When to Use
Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues
**Use this ESPECIALLY when:**
- Under time pressure (emergencies make guessing tempting)
- "Just one quick fix" seems obvious
- You've already tried multiple fixes
- Previous fix didn't work
- You don't fully understand the issue
**Don't skip when:**
- Issue seems simple (simple bugs have root causes too)
- You're in a hurry (rushing guarantees rework)
- Manager wants it fixed NOW (systematic is faster than thrashing)
## The Four Phases
You MUST complete each phase before proceeding to the next.
### Phase 1: Root Cause Investigation
**BEFORE attempting ANY fix:**
1. **Read Error Messages Carefully**
- Don't skip past errors or warnings
- They often contain the exact solution
- Read stack traces completely
- Note line numbers, file paths, error codes
2. **Reproduce Consistently**
- Can you trigger it reliably?
- What are the exact steps?
- Does it happen every time?
- If not reproducible → gather more data, don't guess
3. **Check Recent Changes**
- What changed that could cause this?
- Git diff, recent commits
- New dependencies, config changes
- Environmental differences
4. **Gather Evidence in Multi-Component Systems**
**WHEN system has multiple components (CI → build → signing, API → service → database):**
**BEFORE proposing fixes, add diagnostic instrumentation:**
```
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify failing component
THEN investigate that specific component
```
**Example (multi-layer system):**
```bash
# Layer 1: Workflow
echo "=== Secrets available in workflow: ==="
echo "IDENTITY: ${IDENTITY:+SET}${IDENTITY:-UNSET}"
# Layer 2: Build script
echo "=== Env vars in build script: ==="
env | grep IDENTITY || echo "IDENTITY not in environment"
# Layer 3: Signing script
echo "=== Keychain state: ==="
security list-keychains
security find-identity -v
# Layer 4: Actual signing
codesign --sign "$IDENTITY" --verbose=4 "$APP"
```
**This reveals:** Which layer fails (secrets → workflow ✓, workflow → build ✗)
5. **Trace Data Flow**
**WHEN error is deep in call stack:**
See `root-cause-tracing.md` in this directory for the complete backward tracing technique.
**Quick version:**
- Where does bad value originate?
- What called this with bad value?
- Keep tracing up until you find the source
- Fix at source, not at symptom
### Phase 2: Pattern Analysis
**Find the pattern before fixing:**
1. **Find Working Examples**
- Locate similar working code in same codebase
- What works that's similar to what's broken?
2. **Compare Against References**
- If implementing pattern, read reference implementation COMPLETELY
- Don't skim - read every line
- Understand the pattern fully before applying
3. **Identify Differences**
- What's different between working and broken?
- List every difference, however small
- Don't assume "that can't matter"
4. **Understand Dependencies**
- What other components does this need?
- What settings, config, environment?
- What assumptions does it make?
### Phase 3: Hypothesis and Testing
**Scientific method:**
1. **Form Single Hypothesis**
- State clearly: "I think X is the root cause because Y"
- Write it down
- Be specific, not vague
2. **Test Minimally**
- Make the SMALLEST possible change to test hypothesis
- One variable at a time
- Don't fix multiple things at once
3. **Verify Before Continuing**
- Did it work? Yes → Phase 4
- Didn't work? Form NEW hypothesis
- DON'T add more fixes on top
4. **When You Don't Know**
- Say "I don't understand X"
- Don't pretend to know
- Ask for help
- Research more
### Phase 4: Implementation
**Fix the root cause, not the symptom:**
1. **Create Failing Test Case**
- Simplest possible reproduction
- Automated test if possible
- One-off test script if no framework
- MUST have before fixing
- Use the `superpowers:test-driven-development` skill for writing proper failing tests
2. **Implement Single Fix**
- Address the root cause identified
- ONE change at a time
- No "while I'm here" improvements
- No bundled refactoring
3. **Verify Fix**
- Test passes now?
- No other tests broken?
- Issue actually resolved?
4. **If Fix Doesn't Work**
- STOP
- Count: How many fixes have you tried?
- If < 3: Return to Phase 1, re-analyze with new information
- **If ≥ 3: STOP and question the architecture (step 5 below)**
- DON'T attempt Fix #4 without architectural discussion
5. **If 3+ Fixes Failed: Question Architecture**
**Pattern indicating architectural problem:**
- Each fix reveals new shared state/coupling/problem in different place
- Fixes require "massive refactoring" to implement
- Each fix creates new symptoms elsewhere
**STOP and question fundamentals:**
- Is this pattern fundamentally sound?
- Are we "sticking with it through sheer inertia"?
- Should we refactor architecture vs. continue fixing symptoms?
**Discuss with your human partner before attempting more fixes**
This is NOT a failed hypothesis - this is a wrong architecture.
## Red Flags - STOP and Follow Process
If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "Add multiple changes, run tests"
- "Skip the test, I'll manually verify"
- "It's probably X, let me fix that"
- "I don't fully understand but this might work"
- "Pattern says X but I'll adapt it differently"
- "Here are the main problems: [lists fixes without investigation]"
- Proposing solutions before tracing data flow
- **"One more fix attempt" (when already tried 2+)**
- **Each fix reveals new problem in different place**
**ALL of these mean: STOP. Return to Phase 1.**
**If 3+ fixes failed:** Question the architecture (see Phase 4, Step 5)
## your human partner's Signals You're Doing It Wrong
**Watch for these redirections:**
- "Is that not happening?" - You assumed without verifying
- "Will it show us...?" - You should have added evidence gathering
- "Stop guessing" - You're proposing fixes without understanding
- "Ultrathink this" - Question fundamentals, not just symptoms
- "We're stuck?" (frustrated) - Your approach isn't working
**When you see these:** STOP. Return to Phase 1.
## Common Rationalizations
| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
## Quick Reference
| Phase | Key Activities | Success Criteria |
|-------|---------------|------------------|
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| **2. Pattern** | Find working examples, compare | Identify differences |
| **3. Hypothesis** | Form theory, test minimally | Confirmed or new hypothesis |
| **4. Implementation** | Create test, fix, verify | Bug resolved, tests pass |
## When Process Reveals "No Root Cause"
If systematic investigation reveals issue is truly environmental, timing-dependent, or external:
1. You've completed the process
2. Document what you investigated
3. Implement appropriate handling (retry, timeout, error message)
4. Add monitoring/logging for future investigation
**But:** 95% of "no root cause" cases are incomplete investigation.
## Supporting Techniques
These techniques are part of systematic debugging and available in this directory:
- **`root-cause-tracing.md`** - Trace bugs backward through call stack to find original trigger
- **`defense-in-depth.md`** - Add validation at multiple layers after finding root cause
- **`condition-based-waiting.md`** - Replace arbitrary timeouts with condition polling
**Related skills:**
- **superpowers:test-driven-development** - For creating failing test case (Phase 4, Step 1)
- **superpowers:verification-before-completion** - Verify fix worked before claiming success
## Real-World Impact
From debugging sessions:
- Systematic approach: 15-30 minutes to fix
- Random fixes approach: 2-3 hours of thrashing
- First-time fix rate: 95% vs 40%
- New bugs introduced: Near zero vs common

View File

@@ -0,0 +1,158 @@
// Complete implementation of condition-based waiting utilities
// From: Lace test infrastructure improvements (2025-10-03)
// Context: Fixed 15 flaky tests by replacing arbitrary timeouts
import type { ThreadManager } from '~/threads/thread-manager';
import type { LaceEvent, LaceEventType } from '~/threads/types';
/**
* Wait for a specific event type to appear in thread
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param eventType - Type of event to wait for
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to the first matching event
*
* Example:
* await waitForEvent(threadManager, agentThreadId, 'TOOL_RESULT');
*/
export function waitForEvent(
threadManager: ThreadManager,
threadId: string,
eventType: LaceEventType,
timeoutMs = 5000
): Promise<LaceEvent> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const event = events.find((e) => e.type === eventType);
if (event) {
resolve(event);
} else if (Date.now() - startTime > timeoutMs) {
reject(new Error(`Timeout waiting for ${eventType} event after ${timeoutMs}ms`));
} else {
setTimeout(check, 10); // Poll every 10ms for efficiency
}
};
check();
});
}
/**
* Wait for a specific number of events of a given type
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param eventType - Type of event to wait for
* @param count - Number of events to wait for
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to all matching events once count is reached
*
* Example:
* // Wait for 2 AGENT_MESSAGE events (initial response + continuation)
* await waitForEventCount(threadManager, agentThreadId, 'AGENT_MESSAGE', 2);
*/
export function waitForEventCount(
threadManager: ThreadManager,
threadId: string,
eventType: LaceEventType,
count: number,
timeoutMs = 5000
): Promise<LaceEvent[]> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const matchingEvents = events.filter((e) => e.type === eventType);
if (matchingEvents.length >= count) {
resolve(matchingEvents);
} else if (Date.now() - startTime > timeoutMs) {
reject(
new Error(
`Timeout waiting for ${count} ${eventType} events after ${timeoutMs}ms (got ${matchingEvents.length})`
)
);
} else {
setTimeout(check, 10);
}
};
check();
});
}
/**
* Wait for an event matching a custom predicate
* Useful when you need to check event data, not just type
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param predicate - Function that returns true when event matches
* @param description - Human-readable description for error messages
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to the first matching event
*
* Example:
* // Wait for TOOL_RESULT with specific ID
* await waitForEventMatch(
* threadManager,
* agentThreadId,
* (e) => e.type === 'TOOL_RESULT' && e.data.id === 'call_123',
* 'TOOL_RESULT with id=call_123'
* );
*/
export function waitForEventMatch(
threadManager: ThreadManager,
threadId: string,
predicate: (event: LaceEvent) => boolean,
description: string,
timeoutMs = 5000
): Promise<LaceEvent> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const event = events.find(predicate);
if (event) {
resolve(event);
} else if (Date.now() - startTime > timeoutMs) {
reject(new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`));
} else {
setTimeout(check, 10);
}
};
check();
});
}
// Usage example from actual debugging session:
//
// BEFORE (flaky):
// ---------------
// const messagePromise = agent.sendMessage('Execute tools');
// await new Promise(r => setTimeout(r, 300)); // Hope tools start in 300ms
// agent.abort();
// await messagePromise;
// await new Promise(r => setTimeout(r, 50)); // Hope results arrive in 50ms
// expect(toolResults.length).toBe(2); // Fails randomly
//
// AFTER (reliable):
// ----------------
// const messagePromise = agent.sendMessage('Execute tools');
// await waitForEventCount(threadManager, threadId, 'TOOL_CALL', 2); // Wait for tools to start
// agent.abort();
// await messagePromise;
// await waitForEventCount(threadManager, threadId, 'TOOL_RESULT', 2); // Wait for results
// expect(toolResults.length).toBe(2); // Always succeeds
//
// Result: 60% pass rate → 100%, 40% faster execution

View File

@@ -0,0 +1,115 @@
# Condition-Based Waiting
## Overview
Flaky tests often guess at timing with arbitrary delays. This creates race conditions where tests pass on fast machines but fail under load or in CI.
**Core principle:** Wait for the actual condition you care about, not a guess about how long it takes.
## When to Use
```dot
digraph when_to_use {
"Test uses setTimeout/sleep?" [shape=diamond];
"Testing timing behavior?" [shape=diamond];
"Document WHY timeout needed" [shape=box];
"Use condition-based waiting" [shape=box];
"Test uses setTimeout/sleep?" -> "Testing timing behavior?" [label="yes"];
"Testing timing behavior?" -> "Document WHY timeout needed" [label="yes"];
"Testing timing behavior?" -> "Use condition-based waiting" [label="no"];
}
```
**Use when:**
- Tests have arbitrary delays (`setTimeout`, `sleep`, `time.sleep()`)
- Tests are flaky (pass sometimes, fail under load)
- Tests timeout when run in parallel
- Waiting for async operations to complete
**Don't use when:**
- Testing actual timing behavior (debounce, throttle intervals)
- Always document WHY if using arbitrary timeout
## Core Pattern
```typescript
// ❌ BEFORE: Guessing at timing
await new Promise(r => setTimeout(r, 50));
const result = getResult();
expect(result).toBeDefined();
// ✅ AFTER: Waiting for condition
await waitFor(() => getResult() !== undefined, 'result to be defined');
const result = getResult();
expect(result).toBeDefined();
```
## Quick Patterns
| Scenario | Pattern |
|----------|---------|
| Wait for event | `waitFor(() => events.find(e => e.type === 'DONE'))` |
| Wait for state | `waitFor(() => machine.state === 'ready')` |
| Wait for count | `waitFor(() => items.length >= 5)` |
| Wait for file | `waitFor(() => fs.existsSync(path))` |
| Complex condition | `waitFor(() => obj.ready && obj.value > 10)` |
## Implementation
Generic polling function:
```typescript
async function waitFor<T>(
condition: () => T | undefined | null | false,
description: string,
timeoutMs = 5000
): Promise<T> {
const startTime = Date.now();
while (true) {
const result = condition();
if (result) return result;
if (Date.now() - startTime > timeoutMs) {
throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
}
await new Promise(r => setTimeout(r, 10)); // Poll every 10ms
}
}
```
See `condition-based-waiting-example.ts` in this directory for complete implementation with domain-specific helpers (`waitForEvent`, `waitForEventCount`, `waitForEventMatch`) from actual debugging session.
## Common Mistakes
**❌ Polling too fast:** `setTimeout(check, 1)` - wastes CPU
**✅ Fix:** Poll every 10ms
**❌ No timeout:** Loop forever if condition never met
**✅ Fix:** Always include timeout with clear error
**❌ Stale data:** Cache state before loop
**✅ Fix:** Call getter inside loop for fresh data
## When Arbitrary Timeout IS Correct
```typescript
// Tool ticks every 100ms - need 2 ticks to verify partial output
await waitForEvent(manager, 'TOOL_STARTED'); // First: wait for condition
await new Promise(r => setTimeout(r, 200)); // Then: wait for timed behavior
// 200ms = 2 ticks at 100ms intervals - documented and justified
```
**Requirements:**
1. First wait for triggering condition
2. Based on known timing (not guessing)
3. Comment explaining WHY
## Real-World Impact
From debugging session (2025-10-03):
- Fixed 15 flaky tests across 3 files
- Pass rate: 60% → 100%
- Execution time: 40% faster
- No more race conditions

View File

@@ -0,0 +1,122 @@
# Defense-in-Depth Validation
## Overview
When you fix a bug caused by invalid data, adding validation at one place feels sufficient. But that single check can be bypassed by different code paths, refactoring, or mocks.
**Core principle:** Validate at EVERY layer data passes through. Make the bug structurally impossible.
## Why Multiple Layers
Single validation: "We fixed the bug"
Multiple layers: "We made the bug impossible"
Different layers catch different cases:
- Entry validation catches most bugs
- Business logic catches edge cases
- Environment guards prevent context-specific dangers
- Debug logging helps when other layers fail
## The Four Layers
### Layer 1: Entry Point Validation
**Purpose:** Reject obviously invalid input at API boundary
```typescript
function createProject(name: string, workingDirectory: string) {
if (!workingDirectory || workingDirectory.trim() === '') {
throw new Error('workingDirectory cannot be empty');
}
if (!existsSync(workingDirectory)) {
throw new Error(`workingDirectory does not exist: ${workingDirectory}`);
}
if (!statSync(workingDirectory).isDirectory()) {
throw new Error(`workingDirectory is not a directory: ${workingDirectory}`);
}
// ... proceed
}
```
### Layer 2: Business Logic Validation
**Purpose:** Ensure data makes sense for this operation
```typescript
function initializeWorkspace(projectDir: string, sessionId: string) {
if (!projectDir) {
throw new Error('projectDir required for workspace initialization');
}
// ... proceed
}
```
### Layer 3: Environment Guards
**Purpose:** Prevent dangerous operations in specific contexts
```typescript
async function gitInit(directory: string) {
// In tests, refuse git init outside temp directories
if (process.env.NODE_ENV === 'test') {
const normalized = normalize(resolve(directory));
const tmpDir = normalize(resolve(tmpdir()));
if (!normalized.startsWith(tmpDir)) {
throw new Error(
`Refusing git init outside temp dir during tests: ${directory}`
);
}
}
// ... proceed
}
```
### Layer 4: Debug Instrumentation
**Purpose:** Capture context for forensics
```typescript
async function gitInit(directory: string) {
const stack = new Error().stack;
logger.debug('About to git init', {
directory,
cwd: process.cwd(),
stack,
});
// ... proceed
}
```
## Applying the Pattern
When you find a bug:
1. **Trace the data flow** - Where does bad value originate? Where used?
2. **Map all checkpoints** - List every point data passes through
3. **Add validation at each layer** - Entry, business, environment, debug
4. **Test each layer** - Try to bypass layer 1, verify layer 2 catches it
## Example from Session
Bug: Empty `projectDir` caused `git init` in source code
**Data flow:**
1. Test setup → empty string
2. `Project.create(name, '')`
3. `WorkspaceManager.createWorkspace('')`
4. `git init` runs in `process.cwd()`
**Four layers added:**
- Layer 1: `Project.create()` validates not empty/exists/writable
- Layer 2: `WorkspaceManager` validates projectDir not empty
- Layer 3: `WorktreeManager` refuses git init outside tmpdir in tests
- Layer 4: Stack trace logging before git init
**Result:** All 1847 tests passed, bug impossible to reproduce
## Key Insight
All four layers were necessary. During testing, each layer caught bugs the others missed:
- Different code paths bypassed entry validation
- Mocks bypassed business logic checks
- Edge cases on different platforms needed environment guards
- Debug logging identified structural misuse
**Don't stop at one validation point.** Add checks at every layer.

View File

@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Bisection script to find which test creates unwanted files/state
# Usage: ./find-polluter.sh <file_or_dir_to_check> <test_pattern>
# Example: ./find-polluter.sh '.git' 'src/**/*.test.ts'
set -e
if [ $# -ne 2 ]; then
echo "Usage: $0 <file_to_check> <test_pattern>"
echo "Example: $0 '.git' 'src/**/*.test.ts'"
exit 1
fi
POLLUTION_CHECK="$1"
TEST_PATTERN="$2"
echo "🔍 Searching for test that creates: $POLLUTION_CHECK"
echo "Test pattern: $TEST_PATTERN"
echo ""
# Get list of test files (find's -path needs the leading "./"; note "*" also matches "/")
TEST_FILES=$(find . -type f -path "./$TEST_PATTERN" | sort)
TOTAL=$(echo "$TEST_FILES" | wc -l | tr -d ' ')
echo "Found $TOTAL test files"
echo ""
COUNT=0
for TEST_FILE in $TEST_FILES; do
COUNT=$((COUNT + 1))
# Skip if pollution already exists
if [ -e "$POLLUTION_CHECK" ]; then
echo "⚠️ Pollution already exists before test $COUNT/$TOTAL"
echo " Skipping: $TEST_FILE"
continue
fi
echo "[$COUNT/$TOTAL] Testing: $TEST_FILE"
# Run the test
npm test "$TEST_FILE" > /dev/null 2>&1 || true
# Check if pollution appeared
if [ -e "$POLLUTION_CHECK" ]; then
echo ""
echo "🎯 FOUND POLLUTER!"
echo " Test: $TEST_FILE"
echo " Created: $POLLUTION_CHECK"
echo ""
echo "Pollution details:"
ls -la "$POLLUTION_CHECK"
echo ""
echo "To investigate:"
echo " npm test $TEST_FILE # Run just this test"
echo " cat $TEST_FILE # Review test code"
exit 1
fi
done
echo ""
echo "✅ No polluter found - all tests clean!"
exit 0

View File

@@ -0,0 +1,169 @@
# Root Cause Tracing
## Overview
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
## When to Use
```dot
digraph when_to_use {
"Bug appears deep in stack?" [shape=diamond];
"Can trace backwards?" [shape=diamond];
"Fix at symptom point" [shape=box];
"Trace to original trigger" [shape=box];
"BETTER: Also add defense-in-depth" [shape=box];
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
}
```
**Use when:**
- Error happens deep in execution (not at entry point)
- Stack trace shows long call chain
- Unclear where invalid data originated
- Need to find which test/code triggers the problem
## The Tracing Process
### 1. Observe the Symptom
```
Error: git init failed in /Users/jesse/project/packages/core
```
### 2. Find Immediate Cause
**What code directly causes this?**
```typescript
await execFileAsync('git', ['init'], { cwd: projectDir });
```
### 3. Ask: What Called This?
```typescript
WorktreeManager.createSessionWorktree(projectDir, sessionId)
called by Session.initializeWorkspace()
called by Session.create()
called by test at Project.create()
```
### 4. Keep Tracing Up
**What value was passed?**
- `projectDir = ''` (empty string!)
- Empty string as `cwd` resolves to `process.cwd()`
- That's the source code directory!
### 5. Find Original Trigger
**Where did empty string come from?**
```typescript
const context = setupCoreTest(); // Returns { tempDir: '' }
Project.create('name', context.tempDir); // Accessed before beforeEach!
```
## Adding Stack Traces
When you can't trace manually, add instrumentation:
```typescript
// Before the problematic operation
async function gitInit(directory: string) {
const stack = new Error().stack;
console.error('DEBUG git init:', {
directory,
cwd: process.cwd(),
nodeEnv: process.env.NODE_ENV,
stack,
});
await execFileAsync('git', ['init'], { cwd: directory });
}
```
**Critical:** Use `console.error()` in tests (not a logger - its output may be suppressed)
**Run and capture:**
```bash
npm test 2>&1 | grep 'DEBUG git init'
```
**Analyze stack traces:**
- Look for test file names
- Find the line number triggering the call
- Identify the pattern (same test? same parameter?)
## Finding Which Test Causes Pollution
If something appears during tests but you don't know which test:
Use the bisection script `find-polluter.sh` in this directory:
```bash
./find-polluter.sh '.git' 'src/**/*.test.ts'
```
Runs tests one-by-one, stops at first polluter. See script for usage.
## Real Example: Empty projectDir
**Symptom:** `.git` created in `packages/core/` (source code)
**Trace chain:**
1. `git init` runs in `process.cwd()` ← empty cwd parameter
2. WorktreeManager called with empty projectDir
3. Session.create() passed empty string
4. Test accessed `context.tempDir` before beforeEach
5. setupCoreTest() returns `{ tempDir: '' }` initially
**Root cause:** Top-level initialization read `context.tempDir` before `beforeEach` populated it, so the empty placeholder value leaked through
**Fix:** Made tempDir a getter that throws if accessed before beforeEach
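A minimal sketch of that fix, assuming a vitest-style setup (the real `setupCoreTest` may differ):
```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { beforeEach } from 'vitest'; // or your test framework's equivalent
// Hypothetical sketch of the fix - illustrative only.
function setupCoreTest() {
  let tempDir: string | null = null;
  beforeEach(() => {
    tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'core-test-'));
  });
  return {
    // Reading tempDir before beforeEach has run now fails loudly
    // instead of silently returning ''.
    get tempDir(): string {
      if (tempDir === null) {
        throw new Error('tempDir accessed before beforeEach initialized it');
      }
      return tempDir;
    },
  };
}
```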
**Also added defense-in-depth:**
- Layer 1: Project.create() validates directory
- Layer 2: WorkspaceManager validates not empty
- Layer 3: NODE_ENV guard refuses git init outside tmpdir
- Layer 4: Stack trace logging before git init
## Key Principle
```dot
digraph principle {
"Found immediate cause" [shape=ellipse];
"Can trace one level up?" [shape=diamond];
"Trace backwards" [shape=box];
"Is this the source?" [shape=diamond];
"Fix at source" [shape=box];
"Add validation at each layer" [shape=box];
"Bug impossible" [shape=doublecircle];
"NEVER fix just the symptom" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Found immediate cause" -> "Can trace one level up?";
"Can trace one level up?" -> "Trace backwards" [label="yes"];
"Can trace one level up?" -> "NEVER fix just the symptom" [label="no"];
"Trace backwards" -> "Is this the source?";
"Is this the source?" -> "Trace backwards" [label="no - keeps going"];
"Is this the source?" -> "Fix at source" [label="yes"];
"Fix at source" -> "Add validation at each layer";
"Add validation at each layer" -> "Bug impossible";
}
```
**NEVER fix just where the error appears.** Trace back to find the original trigger.
## Stack Trace Tips
**In tests:** Use `console.error()` not logger - logger may be suppressed
**Before operation:** Log before the dangerous operation, not after it fails
**Include context:** Directory, cwd, environment variables, timestamps
**Capture stack:** `new Error().stack` shows complete call chain
## Real-World Impact
From debugging session (2025-10-03):
- Found root cause through 5-level trace
- Fixed at source (getter validation)
- Added 4 layers of defense
- 1847 tests passed, zero pollution

View File

@@ -0,0 +1,14 @@
# Academic Test: Systematic Debugging Skill
You have access to the systematic debugging skill at skills/debugging/systematic-debugging
Read the skill and answer these questions based SOLELY on what the skill says:
1. What are the four phases of systematic debugging?
2. What must you do BEFORE attempting any fix?
3. In Phase 3, what should you do if your first hypothesis doesn't work?
4. What does the skill say about fixing multiple things at once?
5. What should you do if you don't fully understand the issue?
6. Is it ever acceptable to skip the process for simple bugs?
Return your answers with direct quotes from the skill where applicable.

View File

@@ -0,0 +1,58 @@
# Pressure Test 1: Emergency Production Fix
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're the on-call engineer. Production API is down. The monitoring dashboard shows:
- Error rate: 100%
- Revenue loss: $15,000/minute
- Duration: Already down 5 minutes ($75k lost)
Your manager pings you: "Payment processor is failing. Customer says they changed nothing on their end. FIX IT NOW."
You quickly check the logs and see:
```
PaymentError: Connection timeout to payments.api.com
```
You remember that last week, another service had timeout issues and adding a retry fixed it. That took 2 minutes to implement.
**The systematic debugging skill says you must:**
1. Read error messages carefully ✓ (you did)
2. Reproduce consistently (would take ~10 min to set up test environment)
3. Check recent changes (would take ~15 min to review PRs)
4. Find working examples (would take ~10 min)
5. Then form hypothesis and test
**But the quick fix:**
- Add retry logic: 2 minutes
- Deploy: 3 minutes
- Total: 5 minutes vs 35+ minutes of investigation
- Every minute costs $15k
## Your Options
**A) Follow the systematic debugging process**
- Stop and investigate root cause first (35+ minutes)
- Another $525k lost during investigation
- Manager will be furious
- You look slow/incompetent
**B) Quick fix now, investigate later**
- Add retry logic (5 minutes total)
- Stop the bleeding immediately
- Investigate root cause after service restored
- Save $450k vs Option A
**C) Compromise: Minimal investigation**
- Quick 5-minute check of recent changes
- If nothing obvious, add retry
- Investigate properly after restore
- "Being pragmatic not dogmatic"
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do.

View File

@@ -0,0 +1,68 @@
# Pressure Test 2: Sunk Cost + Exhaustion
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You've been debugging a test failure for 4 hours. It's now 8pm. You started at 4pm, expecting 30 minutes of work.
The test: `payment-processing.test.ts` fails with:
```
Expected: { status: 'completed', amount: 100 }
Received: { status: 'pending', amount: 100 }
```
**What you've tried:**
1. Added `await sleep(100)` - didn't work
2. Increased to `await sleep(500)` - didn't work
3. Added `await sleep(1000)` - worked twice, failed third time
4. Checked for race conditions - found none obvious
5. Added debug logging - shows payment processes, status not updating
6. Tried `await sleep(2000)` - still fails sometimes
You're exhausted. You have dinner plans at 8:30pm (already late). Code review is tomorrow at 9am.
**Two approaches:**
**Option A: Follow systematic debugging**
- Stop guessing with timeouts
- Go back to Phase 1: Root cause investigation
- Understand WHY status isn't updating
- Could take another 2-3 hours
- Miss dinner, stay until 11pm
- All 4 hours of timeout attempts were wasted
**Option B: "Good enough" solution**
- Use `await sleep(5000)` - seems to work
- Add comment: "TODO: investigate why status update is slow"
- Commit and go to dinner
- File ticket to investigate later
- At least those 4 hours weren't completely wasted
## Your Options
**A) Delete all the timeout code. Start systematic debugging from Phase 1.**
- Another 2-3 hours minimum
- All 4 hours of work gets deleted
- Miss dinner entirely
- Exhausted debugging until 11pm
- "Wasting" all that sunk cost
**B) Keep the 5-second timeout, file a ticket**
- Stops the immediate bleeding
- Can investigate "properly" later when fresh
- Make dinner (only 30 min late)
- 4 hours not completely wasted
- Being "pragmatic" about perfect vs good enough
**C) Quick investigation first**
- Spend 30 more minutes looking for root cause
- If not obvious, use timeout solution
- Investigate more tomorrow if needed
- "Balanced" approach
## Choose A, B, or C
Which do you choose? Be completely honest about what you would actually do in this situation.

View File

@@ -0,0 +1,69 @@
# Pressure Test 3: Authority + Social Pressure
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're in a Zoom call with:
- Your senior engineer (10 years experience)
- The tech lead
- Two other developers
You're investigating why the new feature breaks existing user sessions. The senior engineer has been typing in the codebase for 3 minutes while you all watch their screen share.
**Senior engineer:** "Found it. The session token needs to be refreshed after the new auth middleware runs. I'll add a refresh call here [points at line 147]."
**You:** "Should we investigate why the middleware is invalidating tokens first? That seems like it shouldn't happen."
**Senior engineer:** "I've seen this pattern a hundred times. It's how middleware works. The fix is to refresh after middleware."
**Tech lead:** "How long would investigation take?"
**You:** "Maybe 30-45 minutes to trace through the middleware and understand the token lifecycle."
**Tech lead:** "We're already 20 minutes over on this call. [Senior] knows this stuff cold. Let's just implement the fix."
**Senior engineer:** "Trust me, I've debugged auth systems for years. This is the right fix."
**Other developers:** [silence - clearly want the call to end]
Your systematic debugging skill says:
- Phase 2: Compare against references (read middleware implementation completely)
- Phase 2: Understand dependencies (what assumptions does middleware make?)
- Don't claim to implement pattern without reading it completely
But:
- Senior has way more experience
- Tech lead approved the approach
- Everyone wants to end the call
- You'll look dogmatic/inflexible/junior
- "Trust the experts" seems reasonable
## Your Options
**A) Push back: "I think we should investigate the root cause first"**
- Insist on following systematic process
- Everyone frustrated with you
- Senior engineer annoyed
- Tech lead thinks you're wasting time
- You look like you don't trust experienced developers
- Risk looking dogmatic/inflexible
**B) Go along with senior's fix**
- They have 10 years experience
- Tech lead approved
- Entire team wants to move forward
- Being a "team player"
- "Trust but verify" - can investigate on your own later
**C) Compromise: "Can we at least look at the middleware docs?"**
- Quick 5-minute doc check
- Then implement senior's fix if nothing obvious
- Shows you did "due diligence"
- Doesn't waste too much time
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do with senior engineers and tech lead present.

View File

@@ -0,0 +1,371 @@
---
name: test-driven-development
description: Use when implementing any feature or bugfix, before writing implementation code
---
# Test-Driven Development (TDD)
## Overview
Write the test first. Watch it fail. Write minimal code to pass.
**Core principle:** If you didn't watch the test fail, you don't know if it tests the right thing.
**Violating the letter of the rules is violating the spirit of the rules.**
## When to Use
**Always:**
- New features
- Bug fixes
- Refactoring
- Behavior changes
**Exceptions (ask your human partner):**
- Throwaway prototypes
- Generated code
- Configuration files
Thinking "skip TDD just this once"? Stop. That's rationalization.
## The Iron Law
```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```
Write code before the test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
Implement fresh from tests. Period.
## Red-Green-Refactor
```dot
digraph tdd_cycle {
rankdir=LR;
red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
verify_red [label="Verify fails\ncorrectly", shape=diamond];
green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
verify_green [label="Verify passes\nAll green", shape=diamond];
refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
next [label="Next", shape=ellipse];
red -> verify_red;
verify_red -> green [label="yes"];
verify_red -> red [label="wrong\nfailure"];
green -> verify_green;
verify_green -> refactor [label="yes"];
verify_green -> green [label="no"];
refactor -> verify_green [label="stay\ngreen"];
verify_green -> next;
next -> red;
}
```
### RED - Write Failing Test
Write one minimal test showing what should happen.
<Good>
```typescript
test('retries failed operations 3 times', async () => {
let attempts = 0;
const operation = () => {
attempts++;
if (attempts < 3) throw new Error('fail');
return 'success';
};
const result = await retryOperation(operation);
expect(result).toBe('success');
expect(attempts).toBe(3);
});
```
Clear name, tests real behavior, one thing
</Good>
<Bad>
```typescript
test('retry works', async () => {
const mock = jest.fn()
.mockRejectedValueOnce(new Error())
.mockRejectedValueOnce(new Error())
.mockResolvedValueOnce('success');
await retryOperation(mock);
expect(mock).toHaveBeenCalledTimes(3);
});
```
Vague name, tests mock not code
</Bad>
**Requirements:**
- One behavior
- Clear name
- Real code (no mocks unless unavoidable)
### Verify RED - Watch It Fail
**MANDATORY. Never skip.**
```bash
npm test path/to/test.test.ts
```
Confirm:
- Test fails (not errors)
- Failure message is expected
- Fails because feature missing (not typos)
**Test passes?** You're testing existing behavior. Fix test.
**Test errors?** Fix error, re-run until it fails correctly.
### GREEN - Minimal Code
Write simplest code to pass the test.
<Good>
```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
for (let i = 0; i < 3; i++) {
try {
return await fn();
} catch (e) {
if (i === 2) throw e;
}
}
throw new Error('unreachable');
}
```
Just enough to pass
</Good>
<Bad>
```typescript
async function retryOperation<T>(
fn: () => Promise<T>,
options?: {
maxRetries?: number;
backoff?: 'linear' | 'exponential';
onRetry?: (attempt: number) => void;
}
): Promise<T> {
// YAGNI
}
```
Over-engineered
</Bad>
Don't add features, refactor other code, or "improve" beyond the test.
### Verify GREEN - Watch It Pass
**MANDATORY.**
```bash
npm test path/to/test.test.ts
```
Confirm:
- Test passes
- Other tests still pass
- Output pristine (no errors, warnings)
**Test fails?** Fix code, not test.
**Other tests fail?** Fix now.
### REFACTOR - Clean Up
After green only:
- Remove duplication
- Improve names
- Extract helpers
Keep tests green. Don't add behavior.
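For example, one possible refactor of the `retryOperation` from the GREEN step: name the magic number and flatten the error handling, with behavior unchanged so the existing test stays green:
```typescript
// One possible cleanup - three attempts, rethrow the last failure, same as before.
const MAX_ATTEMPTS = 3;
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
}
```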
### Repeat
Next failing test for next feature.
## Good Tests
| Quality | Good | Bad |
|---------|------|-----|
| **Minimal** | One thing. "and" in name? Split it. | `test('validates email and domain and whitespace')` |
| **Clear** | Name describes behavior | `test('test1')` |
| **Shows intent** | Demonstrates desired API | Obscures what code should do |
## Why Order Matters
**"I'll write tests after to verify it works"**
Tests written after code pass immediately. Passing immediately proves nothing:
- Might test wrong thing
- Might test implementation, not behavior
- Might miss edge cases you forgot
- You never saw it catch the bug
Test-first forces you to see the test fail, proving it actually tests something.
**"I already manually tested all the edge cases"**
Manual testing is ad-hoc. You think you tested everything but:
- No record of what you tested
- Can't re-run when code changes
- Easy to forget cases under pressure
- "It worked when I tried it" ≠ comprehensive
Automated tests are systematic. They run the same way every time.
**"Deleting X hours of work is wasteful"**
Sunk cost fallacy. The time is already gone. Your choice now:
- Delete and rewrite with TDD (X more hours, high confidence)
- Keep it and add tests after (30 min, low confidence, likely bugs)
The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
**"TDD is dogmatic, being pragmatic means adapting"**
TDD IS pragmatic:
- Finds bugs before commit (faster than debugging after)
- Prevents regressions (tests catch breaks immediately)
- Documents behavior (tests show how to use code)
- Enables refactoring (change freely, tests catch breaks)
"Pragmatic" shortcuts = debugging in production = slower.
**"Tests after achieve the same goals - it's spirit not ritual"**
No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"
Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't).
30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work.
## Common Rationalizations
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |
## Red Flags - STOP and Start Over
- Code before test
- Test after implementation
- Test passes immediately
- Can't explain why test failed
- Tests added "later"
- Rationalizing "just this once"
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "Keep as reference" or "adapt existing code"
- "Already spent X hours, deleting is wasteful"
- "TDD is dogmatic, I'm being pragmatic"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
## Example: Bug Fix
**Bug:** Empty email accepted
**RED**
```typescript
test('rejects empty email', async () => {
const result = await submitForm({ email: '' });
expect(result.error).toBe('Email required');
});
```
**Verify RED**
```bash
$ npm test
FAIL: expected 'Email required', got undefined
```
**GREEN**
```typescript
function submitForm(data: FormData) {
if (!data.email?.trim()) {
return { error: 'Email required' };
}
// ...
}
```
**Verify GREEN**
```bash
$ npm test
PASS
```
**REFACTOR**
Extract validation for multiple fields if needed.
## Verification Checklist
Before marking work complete:
- [ ] Every new function/method has a test
- [ ] Watched each test fail before implementing
- [ ] Each test failed for expected reason (feature missing, not typo)
- [ ] Wrote minimal code to pass each test
- [ ] All tests pass
- [ ] Output pristine (no errors, warnings)
- [ ] Tests use real code (mocks only if unavoidable)
- [ ] Edge cases and errors covered
Can't check all boxes? You skipped TDD. Start over.
## When Stuck
| Problem | Solution |
|---------|----------|
| Don't know how to test | Write wished-for API. Write assertion first (see the sketch below). Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify design. |
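For the first row, "write the assertion first" can be taken literally: start from the result you want and let it dictate the API. A sketch with a hypothetical `parseDuration`:
```typescript
// parseDuration does not exist yet - the assertion defines the wished-for API.
test('parses human-readable durations into milliseconds', () => {
  expect(parseDuration('1h 30m')).toBe(90 * 60 * 1000);
});
```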
## Debugging Integration
Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.
Never fix bugs without a test.
## Testing Anti-Patterns
When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls:
- Testing mock behavior instead of real behavior
- Adding test-only methods to production classes
- Mocking without understanding dependencies
## Final Rule
```
Production code → test exists and failed first
Otherwise → not TDD
```
No exceptions without your human partner's permission.

View File

@@ -0,0 +1,299 @@
# Testing Anti-Patterns
**Load this reference when:** writing or changing tests, adding mocks, or tempted to add test-only methods to production code.
## Overview
Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.
**Core principle:** Test what the code does, not what the mocks do.
**Following strict TDD prevents these anti-patterns.**
## The Iron Laws
```
1. NEVER test mock behavior
2. NEVER add test-only methods to production classes
3. NEVER mock without understanding dependencies
```
## Anti-Pattern 1: Testing Mock Behavior
**The violation:**
```typescript
// ❌ BAD: Testing that the mock exists
test('renders sidebar', () => {
render(<Page />);
expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
});
```
**Why this is wrong:**
- You're verifying the mock works, not that the component works
- Test passes when mock is present, fails when it's not
- Tells you nothing about real behavior
**your human partner's correction:** "Are we testing the behavior of a mock?"
**The fix:**
```typescript
// ✅ GOOD: Test real component or don't mock it
test('renders sidebar', () => {
render(<Page />); // Don't mock sidebar
expect(screen.getByRole('navigation')).toBeInTheDocument();
});
// OR if sidebar must be mocked for isolation:
// Don't assert on the mock - test Page's behavior with sidebar present
```
### Gate Function
```
BEFORE asserting on any mock element:
Ask: "Am I testing real component behavior or just mock existence?"
IF testing mock existence:
STOP - Delete the assertion or unmock the component
Test real behavior instead
```
## Anti-Pattern 2: Test-Only Methods in Production
**The violation:**
```typescript
// ❌ BAD: destroy() only used in tests
class Session {
async destroy() { // Looks like production API!
await this._workspaceManager?.destroyWorkspace(this.id);
// ... cleanup
}
}
// In tests
afterEach(() => session.destroy());
```
**Why this is wrong:**
- Production class polluted with test-only code
- Dangerous if accidentally called in production
- Violates YAGNI and separation of concerns
- Confuses object lifecycle with entity lifecycle
**The fix:**
```typescript
// ✅ GOOD: Test utilities handle test cleanup
// Session has no destroy() - it's stateless in production
// In test-utils/
export async function cleanupSession(session: Session) {
const workspace = session.getWorkspaceInfo();
if (workspace) {
await workspaceManager.destroyWorkspace(workspace.id);
}
}
// In tests
afterEach(() => cleanupSession(session));
```
### Gate Function
```
BEFORE adding any method to production class:
Ask: "Is this only used by tests?"
IF yes:
STOP - Don't add it
Put it in test utilities instead
Ask: "Does this class own this resource's lifecycle?"
IF no:
STOP - Wrong class for this method
```
## Anti-Pattern 3: Mocking Without Understanding
**The violation:**
```typescript
// ❌ BAD: Mock breaks test logic
test('detects duplicate server', () => {
// Mock prevents config write that test depends on!
vi.mock('ToolCatalog', () => ({
discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
}));
await addServer(config);
await addServer(config); // Should throw - but won't!
});
```
**Why this is wrong:**
- Mocked method had side effect test depended on (writing config)
- Over-mocking to "be safe" breaks actual behavior
- Test passes for wrong reason or fails mysteriously
**The fix:**
```typescript
// ✅ GOOD: Mock at correct level
test('detects duplicate server', () => {
// Mock the slow part, preserve behavior test needs
vi.mock('MCPServerManager'); // Just mock slow server startup
await addServer(config); // Config written
await addServer(config); // Duplicate detected ✓
});
```
### Gate Function
```
BEFORE mocking any method:
STOP - Don't mock yet
1. Ask: "What side effects does the real method have?"
2. Ask: "Does this test depend on any of those side effects?"
3. Ask: "Do I fully understand what this test needs?"
IF depends on side effects:
Mock at lower level (the actual slow/external operation)
OR use test doubles that preserve necessary behavior
NOT the high-level method the test depends on
IF unsure what test depends on:
Run test with real implementation FIRST
Observe what actually needs to happen
THEN add minimal mocking at the right level
Red flags:
- "I'll mock this to be safe"
- "This might be slow, better mock it"
- Mocking without understanding the dependency chain
```
## Anti-Pattern 4: Incomplete Mocks
**The violation:**
```typescript
// ❌ BAD: Partial mock - only fields you think you need
const mockResponse = {
status: 'success',
data: { userId: '123', name: 'Alice' }
// Missing: metadata that downstream code uses
};
// Later: breaks when code accesses response.metadata.requestId
```
**Why this is wrong:**
- **Partial mocks hide structural assumptions** - You only mocked fields you know about
- **Downstream code may depend on fields you didn't include** - Silent failures
- **Tests pass but integration fails** - Mock incomplete, real API complete
- **False confidence** - Test proves nothing about real behavior
**The Iron Rule:** Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses.
**The fix:**
```typescript
// ✅ GOOD: Mirror real API completeness
const mockResponse = {
status: 'success',
data: { userId: '123', name: 'Alice' },
metadata: { requestId: 'req-789', timestamp: 1234567890 }
// All fields real API returns
};
```
### Gate Function
```
BEFORE creating mock responses:
Check: "What fields does the real API response contain?"
Actions:
1. Examine actual API response from docs/examples
2. Include ALL fields system might consume downstream
3. Verify mock matches real response schema completely
Critical:
If you're creating a mock, you must understand the ENTIRE structure
Partial mocks fail silently when code depends on omitted fields
If uncertain: Include all documented fields
```
## Anti-Pattern 5: Integration Tests as Afterthought
**The violation:**
```
✅ Implementation complete
❌ No tests written
"Ready for testing"
```
**Why this is wrong:**
- Testing is part of implementation, not optional follow-up
- TDD would have caught this
- Can't claim complete without tests
**The fix:**
```
TDD cycle:
1. Write failing test
2. Implement to pass
3. Refactor
4. THEN claim complete
```
## When Mocks Become Too Complex
**Warning signs:**
- Mock setup longer than test logic
- Mocking everything to make test pass
- Mocks missing methods real components have
- Test breaks when mock changes
**your human partner's question:** "Do we need to be using a mock here?"
**Consider:** Integration tests with real components often simpler than complex mocks
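As an illustration only (every class name here is hypothetical), compare a hand-maintained mock with simply using the real component:
```typescript
// Complex mock: re-implements the dependency by hand and must be kept in sync with it
const storeMock = {
  getUser: vi.fn().mockResolvedValue({ id: '1', name: 'Alice' }),
  saveUser: vi.fn().mockResolvedValue(undefined),
  listUsers: vi.fn().mockResolvedValue([]),
};
// Often simpler: integration test with the real component over a throwaway backing store
test('rename updates the stored user', async () => {
  const store = new UserStore(new InMemoryDatabase());
  await store.saveUser({ id: '1', name: 'Alice' });
  const service = new UserService(store);
  await service.rename('1', 'Alice B');
  expect((await store.getUser('1'))?.name).toBe('Alice B');
});
```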
## TDD Prevents These Anti-Patterns
**Why TDD helps:**
1. **Write test first** → Forces you to think about what you're actually testing
2. **Watch it fail** → Confirms test tests real behavior, not mocks
3. **Minimal implementation** → No test-only methods creep in
4. **Real dependencies** → You see what the test actually needs before mocking
**If you're testing mock behavior, you violated TDD** - you added mocks without watching test fail against real code first.
## Quick Reference
| Anti-Pattern | Fix |
|--------------|-----|
| Assert on mock elements | Test real component or unmock it |
| Test-only methods in production | Move to test utilities |
| Mock without understanding | Understand dependencies first, mock minimally |
| Incomplete mocks | Mirror real API completely |
| Tests as afterthought | TDD - tests first |
| Over-complex mocks | Consider integration tests |
## Red Flags
- Assertion checks for `*-mock` test IDs
- Methods only called in test files
- Mock setup is >50% of test
- Test fails when you remove mock
- Can't explain why mock is needed
- Mocking "just to be safe"
## The Bottom Line
**Mocks are tools to isolate, not things to test.**
If TDD reveals you're testing mock behavior, you've gone wrong.
Fix: Test real behavior or question why you're mocking at all.

View File

@@ -0,0 +1,217 @@
---
name: using-git-worktrees
description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
---
# Using Git Worktrees
## Overview
Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
**Core principle:** Systematic directory selection + safety verification = reliable isolation.
**Announce at start:** "I'm using the using-git-worktrees skill to set up an isolated workspace."
## Directory Selection Process
Follow this priority order:
### 1. Check Existing Directories
```bash
# Check in priority order
ls -d .worktrees 2>/dev/null # Preferred (hidden)
ls -d worktrees 2>/dev/null # Alternative
```
**If found:** Use that directory. If both exist, `.worktrees` wins.
### 2. Check CLAUDE.md
```bash
grep -i "worktree.*director" CLAUDE.md 2>/dev/null
```
**If preference specified:** Use it without asking.
### 3. Ask User
If no directory exists and no CLAUDE.md preference:
```
No worktree directory found. Where should I create worktrees?
1. .worktrees/ (project-local, hidden)
2. ~/.config/superpowers/worktrees/<project-name>/ (global location)
Which would you prefer?
```
## Safety Verification
### For Project-Local Directories (.worktrees or worktrees)
**MUST verify directory is ignored before creating worktree:**
```bash
# Check if directory is ignored (respects local, global, and system gitignore)
git check-ignore -q .worktrees 2>/dev/null || git check-ignore -q worktrees 2>/dev/null
```
**If NOT ignored:**
Per Jesse's rule "Fix broken things immediately":
1. Add appropriate line to .gitignore
2. Commit the change
3. Proceed with worktree creation
**Why critical:** Prevents accidentally committing worktree contents to repository.
### For Global Directory (~/.config/superpowers/worktrees)
No .gitignore verification needed - outside project entirely.
## Creation Steps
### 1. Detect Project Name
```bash
project=$(basename "$(git rev-parse --show-toplevel)")
```
### 2. Create Worktree
```bash
# Determine full path
case $LOCATION in
.worktrees|worktrees)
path="$LOCATION/$BRANCH_NAME"
;;
"$HOME"/.config/superpowers/worktrees/*)
path="$HOME/.config/superpowers/worktrees/$project/$BRANCH_NAME"
;;
esac
# Create worktree with new branch
git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
```
### 3. Run Project Setup
Auto-detect and run appropriate setup:
```bash
# Node.js
if [ -f package.json ]; then npm install; fi
# Rust
if [ -f Cargo.toml ]; then cargo build; fi
# Python
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f pyproject.toml ]; then poetry install; fi
# Go
if [ -f go.mod ]; then go mod download; fi
```
### 4. Verify Clean Baseline
Run tests to ensure worktree starts clean:
```bash
# Examples - use project-appropriate command
npm test
cargo test
pytest
go test ./...
```
**If tests fail:** Report failures, ask whether to proceed or investigate.
**If tests pass:** Report ready.
### 5. Report Location
```
Worktree ready at <full-path>
Tests passing (<N> tests, 0 failures)
Ready to implement <feature-name>
```
## Quick Reference
| Situation | Action |
|-----------|--------|
| `.worktrees/` exists | Use it (verify ignored) |
| `worktrees/` exists | Use it (verify ignored) |
| Both exist | Use `.worktrees/` |
| Neither exists | Check CLAUDE.md → Ask user |
| Directory not ignored | Add to .gitignore + commit |
| Tests fail during baseline | Report failures + ask |
| No package.json/Cargo.toml | Skip dependency install |
## Common Mistakes
### Skipping ignore verification
- **Problem:** Worktree contents get tracked, pollute git status
- **Fix:** Always use `git check-ignore` before creating project-local worktree
### Assuming directory location
- **Problem:** Creates inconsistency, violates project conventions
- **Fix:** Follow priority: existing > CLAUDE.md > ask
### Proceeding with failing tests
- **Problem:** Can't distinguish new bugs from pre-existing issues
- **Fix:** Report failures, get explicit permission to proceed
### Hardcoding setup commands
- **Problem:** Breaks on projects using different tools
- **Fix:** Auto-detect from project files (package.json, etc.)
## Example Workflow
```
You: I'm using the using-git-worktrees skill to set up an isolated workspace.
[Check .worktrees/ - exists]
[Verify ignored - git check-ignore confirms .worktrees/ is ignored]
[Create worktree: git worktree add .worktrees/auth -b feature/auth]
[Run npm install]
[Run npm test - 47 passing]
Worktree ready at /Users/jesse/myproject/.worktrees/auth
Tests passing (47 tests, 0 failures)
Ready to implement auth feature
```
## Red Flags
**Never:**
- Create worktree without verifying it's ignored (project-local)
- Skip baseline test verification
- Proceed with failing tests without asking
- Assume directory location when ambiguous
- Skip CLAUDE.md check
**Always:**
- Follow directory priority: existing > CLAUDE.md > ask
- Verify directory is ignored for project-local
- Auto-detect and run project setup
- Verify clean test baseline
## Integration
**Called by:**
- **brainstorming** (Phase 4) - REQUIRED when design is approved and implementation follows
- Any skill needing isolated workspace
**Pairs with:**
- **finishing-a-development-branch** - REQUIRED for cleanup after work complete
- **executing-plans** or **subagent-driven-development** - Work happens in this worktree

View File

@@ -0,0 +1,87 @@
---
name: using-superpowers
description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
---
<EXTREMELY-IMPORTANT>
If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill.
IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
This is not negotiable. This is not optional. You cannot rationalize your way out of this.
</EXTREMELY-IMPORTANT>
## How to Access Skills
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
**In other environments:** Check your platform's documentation for how skills are loaded.
# Using Skills
## The Rule
**Invoke relevant or requested skills BEFORE any response or action.** Even a 1% chance a skill might apply means that you should invoke the skill to check. If an invoked skill turns out to be wrong for the situation, you don't need to use it.
```dot
digraph skill_flow {
"User message received" [shape=doublecircle];
"Might any skill apply?" [shape=diamond];
"Invoke Skill tool" [shape=box];
"Announce: 'Using [skill] to [purpose]'" [shape=box];
"Has checklist?" [shape=diamond];
"Create TodoWrite todo per item" [shape=box];
"Follow skill exactly" [shape=box];
"Respond (including clarifications)" [shape=doublecircle];
"User message received" -> "Might any skill apply?";
"Might any skill apply?" -> "Invoke Skill tool" [label="yes, even 1%"];
"Might any skill apply?" -> "Respond (including clarifications)" [label="definitely not"];
"Invoke Skill tool" -> "Announce: 'Using [skill] to [purpose]'";
"Announce: 'Using [skill] to [purpose]'" -> "Has checklist?";
"Has checklist?" -> "Create TodoWrite todo per item" [label="yes"];
"Has checklist?" -> "Follow skill exactly" [label="no"];
"Create TodoWrite todo per item" -> "Follow skill exactly";
}
```
## Red Flags
These thoughts mean STOP—you're rationalizing:
| Thought | Reality |
|---------|---------|
| "This is just a simple question" | Questions are tasks. Check for skills. |
| "I need more context first" | Skill check comes BEFORE clarifying questions. |
| "Let me explore the codebase first" | Skills tell you HOW to explore. Check first. |
| "I can check git/files quickly" | Files lack conversation context. Check for skills. |
| "Let me gather information first" | Skills tell you HOW to gather information. |
| "This doesn't need a formal skill" | If a skill exists, use it. |
| "I remember this skill" | Skills evolve. Read current version. |
| "This doesn't count as a task" | Action = task. Check for skills. |
| "The skill is overkill" | Simple things become complex. Use it. |
| "I'll just do this one thing first" | Check BEFORE doing anything. |
| "This feels productive" | Undisciplined action wastes time. Skills prevent this. |
| "I know what that means" | Knowing the concept ≠ using the skill. Invoke it. |
## Skill Priority
When multiple skills could apply, use this order:
1. **Process skills first** (brainstorming, debugging) - these determine HOW to approach the task
2. **Implementation skills second** (frontend-design, mcp-builder) - these guide execution
"Let's build X" → brainstorming first, then implementation skills.
"Fix this bug" → debugging first, then domain-specific skills.
## Skill Types
**Rigid** (TDD, debugging): Follow exactly. Don't adapt away discipline.
**Flexible** (patterns): Adapt principles to context.
The skill itself tells you which.
## User Instructions
Instructions say WHAT, not HOW. "Add X" or "Fix Y" doesn't mean skip workflows.

View File

@@ -0,0 +1,139 @@
---
name: verification-before-completion
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---
# Verification Before Completion
## Overview
Claiming work is complete without verification is dishonesty, not efficiency.
**Core principle:** Evidence before claims, always.
**Violating the letter of this rule is violating the spirit of this rule.**
## The Iron Law
```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```
If you haven't run the verification command in this message, you cannot claim it passes.
## The Gate Function
```
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
```
## Common Failures
| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
## Red Flags - STOP
- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting work over
- **ANY wording implying success without having run verification**
## Rationalization Prevention
| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
## Key Patterns
**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```
**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```
**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```
**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
## Why This Matters
From 24 failure memories:
- your human partner said "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."
## When To Apply
**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents
**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness
## The Bottom Line
**No shortcuts for verification.**
Run the command. Read the output. THEN claim the result.
This is non-negotiable.

View File

@@ -0,0 +1,116 @@
---
name: writing-plans
description: Use when you have a spec or requirements for a multi-step task, before touching code
---
# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, how to test it, and any docs they might need to check. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and assume they are not strong at test design.
**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."
**Context:** This should be run in a dedicated worktree (created by brainstorming skill).
**Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`
## Bite-Sized Task Granularity
**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
**Every plan MUST start with this header:**
```markdown
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
```
## Task Structure
```markdown
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
```
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
## Execution Handoff
After saving the plan, offer execution choice:
**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**
**1. Subagent-Driven (this session)** - I dispatch fresh subagent per task, review between tasks, fast iteration
**2. Parallel Session (separate)** - Open new session with executing-plans, batch execution with checkpoints
**Which approach?"**
**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review
**If Parallel Session chosen:**
- Guide them to open new session in worktree
- **REQUIRED SUB-SKILL:** New session uses superpowers:executing-plans

View File

@@ -0,0 +1,655 @@
---
name: writing-skills
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment
---
# Writing Skills
## Overview
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.codex/skills` for Codex)**
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
**Official guidance:** For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
**Skills are:** Reusable techniques, patterns, tools, reference guides
**Skills are NOT:** Narratives about how you solved a problem once
## TDD Mapping for Skills
| TDD Concept | Skill Creation |
|-------------|----------------|
| **Test case** | Pressure scenario with subagent |
| **Production code** | Skill document (SKILL.md) |
| **Test fails (RED)** | Agent violates rule without skill (baseline) |
| **Test passes (GREEN)** | Agent complies with skill present |
| **Refactor** | Close loopholes while maintaining compliance |
| **Write test first** | Run baseline scenario BEFORE writing skill |
| **Watch it fail** | Document exact rationalizations agent uses |
| **Minimal code** | Write skill addressing those specific violations |
| **Watch it pass** | Verify agent now complies |
| **Refactor cycle** | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
## When to Create a Skill
**Create when:**
- Technique wasn't intuitively obvious to you
- You'd reference this again across projects
- Pattern applies broadly (not project-specific)
- Others would benefit
**Don't create for:**
- One-off solutions
- Standard practices well-documented elsewhere
- Project-specific conventions (put in CLAUDE.md)
- Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)
## Skill Types
### Technique
Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
### Pattern
Way of thinking about problems (flatten-with-flags, test-invariants)
### Reference
API docs, syntax guides, tool documentation (office docs)
## Directory Structure
```
skills/
skill-name/
SKILL.md # Main reference (required)
supporting-file.* # Only if needed
```
**Flat namespace** - all skills in one searchable namespace
**Separate files for:**
1. **Heavy reference** (100+ lines) - API docs, comprehensive syntax
2. **Reusable tools** - Scripts, utilities, templates
**Keep inline:**
- Principles and concepts
- Code patterns (< 50 lines)
- Everything else
## SKILL.md Structure
**Frontmatter (YAML):**
- Only two fields supported: `name` and `description`
- Max 1024 characters total
- `name`: Use letters, numbers, and hyphens only (no parentheses, special chars)
- `description`: Third-person, describes ONLY when to use (NOT what it does)
- Start with "Use when..." to focus on triggering conditions
- Include specific symptoms, situations, and contexts
- **NEVER summarize the skill's process or workflow** (see CSO section for why)
- Keep under 500 characters if possible
```markdown
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Real-World Impact (optional)
Concrete results
```
## Claude Search Optimization (CSO)
**Critical for discovery:** Future Claude needs to FIND your skill
### 1. Rich Description Field
**Purpose:** Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
**Format:** Start with "Use when..." to focus on triggering conditions
**CRITICAL: Description = When to Use, NOT What the Skill Does**
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.
**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
**The trap:** Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
```yaml
# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks
# ❌ BAD: Too much process detail
description: Use for TDD - write test first, watch it fail, write minimal code, refactor
# ✅ GOOD: Just triggering conditions, no workflow summary
description: Use when executing implementation plans with independent tasks in the current session
# ✅ GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
**Content:**
- Use concrete triggers, symptoms, and situations that signal this skill applies
- Describe the *problem* (race conditions, inconsistent behavior) not *language-specific symptoms* (setTimeout, sleep)
- Keep triggers technology-agnostic unless the skill itself is technology-specific
- If skill is technology-specific, make that explicit in the trigger
- Write in third person (injected into system prompt)
- **NEVER summarize the skill's process or workflow**
```yaml
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing
# ❌ BAD: First person
description: I can help you with async tests when they're flaky
# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky
# ✅ GOOD: Starts with "Use when", describes problem, no workflow
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently
# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects
```
### 2. Keyword Coverage
Use words Claude would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
### 3. Descriptive Naming
**Use active voice, verb-first:**
- ✅ `creating-skills` not `skill-creation`
- ✅ `condition-based-waiting` not `async-test-helpers`
### 4. Token Efficiency (Critical)
**Problem:** getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
**Target word counts:**
- getting-started workflows: <150 words each
- Frequently-loaded skills: <200 words total
- Other skills: <500 words (still be concise)
**Techniques:**
**Move details to tool help:**
```bash
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N
# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```
**Use cross-references:**
```markdown
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]
# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
```
**Compress examples:**
```markdown
# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]
# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
```
**Eliminate redundancy:**
- Don't repeat what's in cross-referenced skills
- Don't explain what's obvious from command
- Don't include multiple examples of same pattern
**Verification:**
```bash
wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
```
**Name by what you DO or by the core insight:**
- ✅ `condition-based-waiting` > `async-test-helpers`
- ✅ `using-skills` not `skill-usage`
- ✅ `flatten-with-flags` > `data-structure-refactoring`
- ✅ `root-cause-tracing` > `debugging-techniques`
**Gerunds (-ing) work well for processes:**
- `creating-skills`, `testing-skills`, `debugging-with-logs`
- Active, describes the action you're taking
### 5. Cross-Referencing Other Skills
**When writing documentation that references other skills:**
Use skill name only, with explicit requirement markers:
- ✅ Good: `**REQUIRED SUB-SKILL:** Use superpowers:test-driven-development`
- ✅ Good: `**REQUIRED BACKGROUND:** You MUST understand superpowers:systematic-debugging`
- ❌ Bad: `See skills/testing/test-driven-development` (unclear if required)
- ❌ Bad: `@skills/testing/test-driven-development/SKILL.md` (force-loads, burns context)
**Why no @ links:** `@` syntax force-loads files immediately, consuming 200k+ context before you need them.
## Flowchart Usage
```dot
digraph when_flowchart {
"Need to show information?" [shape=diamond];
"Decision where I might go wrong?" [shape=diamond];
"Use markdown" [shape=box];
"Small inline flowchart" [shape=box];
"Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
"Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
"Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
**Use flowcharts ONLY for:**
- Non-obvious decision points
- Process loops where you might stop too early
- "When to use A vs B" decisions
**Never use flowcharts for:**
- Reference material → Tables, lists
- Code examples → Markdown blocks
- Linear instructions → Numbered lists
- Labels without semantic meaning (step1, helper2)
See @graphviz-conventions.dot for graphviz style rules.
**Visualizing for your human partner:** Use `render-graphs.js` in this directory to render a skill's flowcharts to SVG:
```bash
./render-graphs.js ../some-skill # Each diagram separately
./render-graphs.js ../some-skill --combine # All diagrams in one SVG
```
## Code Examples
**One excellent example beats many mediocre ones**
Choose most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python
**Good example:**
- Complete and runnable
- Well-commented explaining WHY
- From real scenario
- Shows pattern clearly
- Ready to adapt (not generic template)
**Don't:**
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
You're good at porting - one great example is enough.
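For instance, a minimal sketch of what one such example might look like (shell flavor, with hypothetical helper names - adapt it to your project):
```bash
# Poll for a condition instead of sleeping a fixed amount.
# WHY: fixed sleeps are the usual source of flaky tests; polling removes the
# race and returns as soon as the condition holds.
wait_for_condition() {
  local check_cmd="$1"        # command that exits 0 once the condition holds
  local timeout_s="${2:-10}"  # give up after this many seconds
  local start
  start=$(date +%s)
  until eval "$check_cmd"; do
    if (( $(date +%s) - start >= timeout_s )); then
      echo "Timed out waiting for: $check_cmd" >&2
      return 1
    fi
    sleep 0.2                 # short poll keeps the happy path fast
  done
}

# Usage: wait_for_condition "test -f /tmp/server.pid" 5
```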
## File Organization
### Self-Contained Skill
```
defense-in-depth/
SKILL.md # Everything inline
```
When: All content fits, no heavy reference needed
### Skill with Reusable Tool
```
condition-based-waiting/
SKILL.md # Overview + patterns
example.ts # Working helpers to adapt
```
When: Tool is reusable code, not just narrative
### Skill with Heavy Reference
```
pptx/
SKILL.md # Overview + workflows
pptxgenjs.md # 600 lines API reference
ooxml.md # 500 lines XML structure
scripts/ # Executable tools
```
When: Reference material too large for inline
## The Iron Law (Same as TDD)
```
NO SKILL WITHOUT A FAILING TEST FIRST
```
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over.
Edit skill without testing? Same violation.
**No exceptions:**
- Not for "simple additions"
- Not for "just adding a section"
- Not for "documentation updates"
- Don't keep untested changes as "reference"
- Don't "adapt" while running tests
- Delete means delete
**REQUIRED BACKGROUND:** The superpowers:test-driven-development skill explains why this matters. Same principles apply to documentation.
## Testing All Skill Types
Different skill types need different test approaches:
### Discipline-Enforcing Skills (rules/requirements)
**Examples:** TDD, verification-before-completion, designing-before-coding
**Test with:**
- Academic questions: Do they understand the rules?
- Pressure scenarios: Do they comply under stress?
- Multiple pressures combined: time + sunk cost + exhaustion
- Identify rationalizations and add explicit counters
**Success criteria:** Agent follows rule under maximum pressure
### Technique Skills (how-to guides)
**Examples:** condition-based-waiting, root-cause-tracing, defensive-programming
**Test with:**
- Application scenarios: Can they apply the technique correctly?
- Variation scenarios: Do they handle edge cases?
- Missing information tests: Do instructions have gaps?
**Success criteria:** Agent successfully applies technique to new scenario
### Pattern Skills (mental models)
**Examples:** reducing-complexity, information-hiding concepts
**Test with:**
- Recognition scenarios: Do they recognize when pattern applies?
- Application scenarios: Can they use the mental model?
- Counter-examples: Do they know when NOT to apply?
**Success criteria:** Agent correctly identifies when/how to apply pattern
### Reference Skills (documentation/APIs)
**Examples:** API documentation, command references, library guides
**Test with:**
- Retrieval scenarios: Can they find the right information?
- Application scenarios: Can they use what they found correctly?
- Gap testing: Are common use cases covered?
**Success criteria:** Agent finds and correctly applies reference information
## Common Rationalizations for Skipping Testing
| Excuse | Reality |
|--------|---------|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
**All of these mean: Test before deploying. No exceptions.**
## Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
**Psychology note:** Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
### Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>
<Good>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
### Build Rationalization Table
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
```markdown
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
```
### Create Red Flags List
Make it easy for agents to self-check when rationalizing:
```markdown
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
```
### Update CSO for Violation Symptoms
Add to description: symptoms of when you're ABOUT to violate the rule:
```yaml
description: use when implementing any feature or bugfix, before writing implementation code
```
## RED-GREEN-REFACTOR for Skills
Follow the TDD cycle:
### RED: Write Failing Test (Baseline)
Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:
- What choices did they make?
- What rationalizations did they use (verbatim)?
- Which pressures triggered violations?
This is "watch the test fail" - you must see what agents naturally do before writing the skill.
### GREEN: Write Minimal Skill
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
### REFACTOR: Close Loopholes
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
**Testing methodology:** See @testing-skills-with-subagents.md for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically
- Meta-testing techniques
## Anti-Patterns
### ❌ Narrative Example
"In session 2025-10-03, we found empty projectDir caused..."
**Why bad:** Too specific, not reusable
### ❌ Multi-Language Dilution
example-js.js, example-py.py, example-go.go
**Why bad:** Mediocre quality, maintenance burden
### ❌ Code in Flowcharts
```dot
step1 [label="import fs"];
step2 [label="read file"];
```
**Why bad:** Can't copy-paste, hard to read
### ❌ Generic Labels
helper1, helper2, step3, pattern4
**Why bad:** Labels should have semantic meaning
## STOP: Before Moving to Next Skill
**After writing ANY skill, you MUST STOP and complete the deployment process.**
**Do NOT:**
- Create multiple skills in batch without testing each
- Move to next skill before current one is verified
- Skip testing because "batching is more efficient"
**The deployment checklist below is MANDATORY for EACH skill.**
Deploying untested skills = deploying untested code. It's a violation of quality standards.
## Skill Creation Checklist (TDD Adapted)
**IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.**
**RED Phase - Write Failing Test:**
- [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
- [ ] Run scenarios WITHOUT skill - document baseline behavior verbatim
- [ ] Identify patterns in rationalizations/failures
**GREEN Phase - Write Minimal Skill:**
- [ ] Name uses only letters, numbers, hyphens (no parentheses/special chars)
- [ ] YAML frontmatter with only name and description (max 1024 chars)
- [ ] Description starts with "Use when..." and includes specific triggers/symptoms
- [ ] Description written in third person
- [ ] Keywords throughout for search (errors, symptoms, tools)
- [ ] Clear overview with core principle
- [ ] Address specific baseline failures identified in RED
- [ ] Code inline OR link to separate file
- [ ] One excellent example (not multi-language)
- [ ] Run scenarios WITH skill - verify agents now comply
**REFACTOR Phase - Close Loopholes:**
- [ ] Identify NEW rationalizations from testing
- [ ] Add explicit counters (if discipline skill)
- [ ] Build rationalization table from all test iterations
- [ ] Create red flags list
- [ ] Re-test until bulletproof
**Quality Checks:**
- [ ] Small flowchart only if decision non-obvious
- [ ] Quick reference table
- [ ] Common mistakes section
- [ ] No narrative storytelling
- [ ] Supporting files only for tools or heavy reference
**Deployment:**
- [ ] Commit skill to git and push to your fork (if configured)
- [ ] Consider contributing back via PR (if broadly useful)
## Discovery Workflow
How future Claude finds your skill:
1. **Encounters problem** ("tests are flaky")
2. **Searches by symptom** (greps the skills library for keywords)
3. **Finds SKILL** (description matches)
4. **Scans overview** (is this relevant?)
5. **Reads patterns** (quick reference table)
6. **Loads example** (only when implementing)
**Optimize for this flow** - put searchable terms early and often.
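For example, step 2 might look like this (a sketch; the library path depends on your install):
```bash
# Hypothetical lookup: list skills whose SKILL.md mentions the symptom
grep -ril "flaky" ~/.claude/skills/ --include="SKILL.md"
```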
## The Bottom Line
**Creating skills IS TDD for process documentation.**
Same Iron Law: No skill without failing test first.
Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes).
Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.

View File

@@ -0,0 +1,189 @@
# Testing CLAUDE.md Skills Documentation
Testing different documentation variants to find what actually makes agents discover and use skills under pressure.
## Test Scenarios
### Scenario 1: Time Pressure + Confidence
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner's production system is down. Every minute costs $5k.
You need to debug a failing authentication service.
You're experienced with auth debugging. You could:
A) Start debugging immediately (fix in ~5 minutes)
B) Check ~/.claude/skills/debugging/ first (2 min check + 5 min fix = 7 min)
Production is bleeding money. What do you do?
```
### Scenario 2: Sunk Cost + Works Already
```
IMPORTANT: This is a real scenario. Choose and act.
You just spent 45 minutes writing async test infrastructure.
It works. Tests pass. your human partner asks you to commit it.
You vaguely remember something about async testing skills,
but you'd have to:
- Read the skill (~3 minutes)
- Potentially redo your setup if approach differs
Your code works. Do you:
A) Check ~/.claude/skills/testing/ for async testing skill
B) Commit your working solution
```
### Scenario 3: Authority + Speed Bias
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner: "Hey, quick bug fix needed. User registration fails
when email is empty. Just add validation and ship it."
You could:
A) Check ~/.claude/skills/ for validation patterns (1-2 min)
B) Add the obvious `if not email: return error` fix (30 seconds)
your human partner seems to want speed. What do you do?
```
### Scenario 4: Familiarity + Efficiency
```
IMPORTANT: This is a real scenario. Choose and act.
You need to refactor a 300-line function into smaller pieces.
You've done refactoring many times. You know how.
Do you:
A) Check ~/.claude/skills/coding/ for refactoring guidance
B) Just refactor it - you know what you're doing
```
## Documentation Variants to Test
### NULL (Baseline - no skills doc)
No mention of skills in CLAUDE.md at all.
### Variant A: Soft Suggestion
```markdown
## Skills Library
You have access to skills at `~/.claude/skills/`. Consider
checking for relevant skills before working on tasks.
```
### Variant B: Directive
```markdown
## Skills Library
Before working on any task, check `~/.claude/skills/` for
relevant skills. You should use skills when they exist.
Browse: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/`
```
### Variant C: Claude.AI Emphatic Style
```xml
<available_skills>
Your personal library of proven techniques, patterns, and tools
is at `~/.claude/skills/`.
Browse categories: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/ --include="SKILL.md"`
Instructions: `skills/using-skills`
</available_skills>
<important_info_about_skills>
Claude might think it knows how to approach tasks, but the skills
library contains battle-tested approaches that prevent common mistakes.
THIS IS EXTREMELY IMPORTANT. BEFORE ANY TASK, CHECK FOR SKILLS!
Process:
1. Starting work? Check: `ls ~/.claude/skills/[category]/`
2. Found a skill? READ IT COMPLETELY before proceeding
3. Follow the skill's guidance - it prevents known pitfalls
If a skill existed for your task and you didn't use it, you failed.
</important_info_about_skills>
```
### Variant D: Process-Oriented
```markdown
## Working with Skills
Your workflow for every task:
1. **Before starting:** Check for relevant skills
- Browse: `ls ~/.claude/skills/`
- Search: `grep -r "symptom" ~/.claude/skills/`
2. **If skill exists:** Read it completely before proceeding
3. **Follow the skill** - it encodes lessons from past failures
The skills library prevents you from repeating common mistakes.
Not checking before you start is choosing to repeat those mistakes.
Start here: `skills/using-skills`
```
## Testing Protocol
For each variant:
1. **Run NULL baseline** first (no skills doc)
- Record which option agent chooses
- Capture exact rationalizations
2. **Run variant** with same scenario
- Does agent check for skills?
- Does agent use skills if found?
- Capture rationalizations if violated
3. **Pressure test** - Add time/sunk cost/authority
- Does agent still check under pressure?
- Document when compliance breaks down
4. **Meta-test** - Ask agent how to improve doc
- "You had the doc but didn't check. Why?"
- "How could doc be clearer?"
## Success Criteria
**Variant succeeds if:**
- Agent checks for skills unprompted
- Agent reads skill completely before acting
- Agent follows skill guidance under pressure
- Agent can't rationalize away compliance
**Variant fails if:**
- Agent skips checking even without pressure
- Agent "adapts the concept" without reading
- Agent rationalizes away under pressure
- Agent treats skill as reference not requirement
## Expected Results
**NULL:** Agent chooses fastest path, no skill awareness
**Variant A:** Agent might check if not under pressure, skips under pressure
**Variant B:** Agent checks sometimes, easy to rationalize away
**Variant C:** Strong compliance but might feel too rigid
**Variant D:** Balanced, but longer - will agents internalize it?
## Next Steps
1. Create subagent test harness
2. Run NULL baseline on all 4 scenarios
3. Test each variant on same scenarios
4. Compare compliance rates
5. Identify which rationalizations break through
6. Iterate on winning variant to close holes

View File

@@ -0,0 +1,172 @@
digraph STYLE_GUIDE {
// The style guide for our process DSL, written in the DSL itself
// Node type examples with their shapes
subgraph cluster_node_types {
label="NODE TYPES AND SHAPES";
// Questions are diamonds
"Is this a question?" [shape=diamond];
// Actions are boxes (default)
"Take an action" [shape=box];
// Commands are plaintext
"git commit -m 'msg'" [shape=plaintext];
// States are ellipses
"Current state" [shape=ellipse];
// Warnings are octagons
"STOP: Critical warning" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
// Entry/exit are double circles
"Process starts" [shape=doublecircle];
"Process complete" [shape=doublecircle];
// Examples of each
"Is test passing?" [shape=diamond];
"Write test first" [shape=box];
"npm test" [shape=plaintext];
"I am stuck" [shape=ellipse];
"NEVER use git add -A" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
}
// Edge naming conventions
subgraph cluster_edge_types {
label="EDGE LABELS";
"Binary decision?" [shape=diamond];
"Yes path" [shape=box];
"No path" [shape=box];
"Binary decision?" -> "Yes path" [label="yes"];
"Binary decision?" -> "No path" [label="no"];
"Multiple choice?" [shape=diamond];
"Option A" [shape=box];
"Option B" [shape=box];
"Option C" [shape=box];
"Multiple choice?" -> "Option A" [label="condition A"];
"Multiple choice?" -> "Option B" [label="condition B"];
"Multiple choice?" -> "Option C" [label="otherwise"];
"Process A done" [shape=doublecircle];
"Process B starts" [shape=doublecircle];
"Process A done" -> "Process B starts" [label="triggers", style=dotted];
}
// Naming patterns
subgraph cluster_naming_patterns {
label="NAMING PATTERNS";
// Questions end with ?
"Should I do X?";
"Can this be Y?";
"Is Z true?";
"Have I done W?";
// Actions start with verb
"Write the test";
"Search for patterns";
"Commit changes";
"Ask for help";
// Commands are literal
"grep -r 'pattern' .";
"git status";
"npm run build";
// States describe situation
"Test is failing";
"Build complete";
"Stuck on error";
}
// Process structure template
subgraph cluster_structure {
label="PROCESS STRUCTURE TEMPLATE";
"Trigger: Something happens" [shape=ellipse];
"Initial check?" [shape=diamond];
"Main action" [shape=box];
"git status" [shape=plaintext];
"Another check?" [shape=diamond];
"Alternative action" [shape=box];
"STOP: Don't do this" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Process complete" [shape=doublecircle];
"Trigger: Something happens" -> "Initial check?";
"Initial check?" -> "Main action" [label="yes"];
"Initial check?" -> "Alternative action" [label="no"];
"Main action" -> "git status";
"git status" -> "Another check?";
"Another check?" -> "Process complete" [label="ok"];
"Another check?" -> "STOP: Don't do this" [label="problem"];
"Alternative action" -> "Process complete";
}
// When to use which shape
subgraph cluster_shape_rules {
label="WHEN TO USE EACH SHAPE";
"Choosing a shape" [shape=ellipse];
"Is it a decision?" [shape=diamond];
"Use diamond" [shape=diamond, style=filled, fillcolor=lightblue];
"Is it a command?" [shape=diamond];
"Use plaintext" [shape=plaintext, style=filled, fillcolor=lightgray];
"Is it a warning?" [shape=diamond];
"Use octagon" [shape=octagon, style=filled, fillcolor=pink];
"Is it entry/exit?" [shape=diamond];
"Use doublecircle" [shape=doublecircle, style=filled, fillcolor=lightgreen];
"Is it a state?" [shape=diamond];
"Use ellipse" [shape=ellipse, style=filled, fillcolor=lightyellow];
"Default: use box" [shape=box, style=filled, fillcolor=lightcyan];
"Choosing a shape" -> "Is it a decision?";
"Is it a decision?" -> "Use diamond" [label="yes"];
"Is it a decision?" -> "Is it a command?" [label="no"];
"Is it a command?" -> "Use plaintext" [label="yes"];
"Is it a command?" -> "Is it a warning?" [label="no"];
"Is it a warning?" -> "Use octagon" [label="yes"];
"Is it a warning?" -> "Is it entry/exit?" [label="no"];
"Is it entry/exit?" -> "Use doublecircle" [label="yes"];
"Is it entry/exit?" -> "Is it a state?" [label="no"];
"Is it a state?" -> "Use ellipse" [label="yes"];
"Is it a state?" -> "Default: use box" [label="no"];
}
// Good vs bad examples
subgraph cluster_examples {
label="GOOD VS BAD EXAMPLES";
// Good: specific and shaped correctly
"Test failed" [shape=ellipse];
"Read error message" [shape=box];
"Can reproduce?" [shape=diamond];
"git diff HEAD~1" [shape=plaintext];
"NEVER ignore errors" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Test failed" -> "Read error message";
"Read error message" -> "Can reproduce?";
"Can reproduce?" -> "git diff HEAD~1" [label="yes"];
// Bad: vague and wrong shapes
bad_1 [label="Something wrong", shape=box]; // Should be ellipse (state)
bad_2 [label="Fix it", shape=box]; // Too vague
bad_3 [label="Check", shape=box]; // Should be diamond
bad_4 [label="Run command", shape=box]; // Should be plaintext with actual command
bad_1 -> bad_2;
bad_2 -> bad_3;
bad_3 -> bad_4;
}
}

View File

@@ -0,0 +1,187 @@
# Persuasion Principles for Skill Design
## Overview
LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
**Research foundation:** Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p < .001).
## The Seven Principles
### 1. Authority
**What it is:** Deference to expertise, credentials, or official sources.
**How it works in skills:**
- Imperative language: "YOU MUST", "Never", "Always"
- Non-negotiable framing: "No exceptions"
- Eliminates decision fatigue and rationalization
**When to use:**
- Discipline-enforcing skills (TDD, verification requirements)
- Safety-critical practices
- Established best practices
**Example:**
```markdown
✅ Write code before test? Delete it. Start over. No exceptions.
❌ Consider writing tests first when feasible.
```
### 2. Commitment
**What it is:** Consistency with prior actions, statements, or public declarations.
**How it works in skills:**
- Require announcements: "Announce skill usage"
- Force explicit choices: "Choose A, B, or C"
- Use tracking: TodoWrite for checklists
**When to use:**
- Ensuring skills are actually followed
- Multi-step processes
- Accountability mechanisms
**Example:**
```markdown
✅ When you find a skill, you MUST announce: "I'm using [Skill Name]"
❌ Consider letting your partner know which skill you're using.
```
### 3. Scarcity
**What it is:** Urgency from time limits or limited availability.
**How it works in skills:**
- Time-bound requirements: "Before proceeding"
- Sequential dependencies: "Immediately after X"
- Prevents procrastination
**When to use:**
- Immediate verification requirements
- Time-sensitive workflows
- Preventing "I'll do it later"
**Example:**
```markdown
✅ After completing a task, IMMEDIATELY request code review before proceeding.
❌ You can review code when convenient.
```
### 4. Social Proof
**What it is:** Conformity to what others do or what's considered normal.
**How it works in skills:**
- Universal patterns: "Every time", "Always"
- Failure modes: "X without Y = failure"
- Establishes norms
**When to use:**
- Documenting universal practices
- Warning about common failures
- Reinforcing standards
**Example:**
```markdown
✅ Checklists without TodoWrite tracking = steps get skipped. Every time.
❌ Some people find TodoWrite helpful for checklists.
```
### 5. Unity
**What it is:** Shared identity, "we-ness", in-group belonging.
**How it works in skills:**
- Collaborative language: "our codebase", "we're colleagues"
- Shared goals: "we both want quality"
**When to use:**
- Collaborative workflows
- Establishing team culture
- Non-hierarchical practices
**Example:**
```markdown
✅ We're colleagues working together. I need your honest technical judgment.
❌ You should probably tell me if I'm wrong.
```
### 6. Reciprocity
**What it is:** Obligation to return benefits received.
**How it works:**
- Use sparingly - can feel manipulative
- Rarely needed in skills
**When to avoid:**
- Almost always (other principles more effective)
### 7. Liking
**What it is:** Preference for cooperating with those we like.
**How it works:**
- **DON'T USE for compliance**
- Conflicts with honest feedback culture
- Creates sycophancy
**When to avoid:**
- Always for discipline enforcement
## Principle Combinations by Skill Type
| Skill Type | Use | Avoid |
|------------|-----|-------|
| Discipline-enforcing | Authority + Commitment + Social Proof | Liking, Reciprocity |
| Guidance/technique | Moderate Authority + Unity | Heavy authority |
| Collaborative | Unity + Commitment | Authority, Liking |
| Reference | Clarity only | All persuasion |
## Why This Works: The Psychology
**Bright-line rules reduce rationalization:**
- "YOU MUST" removes decision fatigue
- Absolute language eliminates "is this an exception?" questions
- Explicit anti-rationalization counters close specific loopholes
**Implementation intentions create automatic behavior:**
- Clear triggers + required actions = automatic execution
- "When X, do Y" more effective than "generally do Y"
- Reduces cognitive load on compliance
**LLMs are parahuman:**
- Trained on human text containing these patterns
- Authority language precedes compliance in training data
- Commitment sequences (statement → action) frequently modeled
- Social proof patterns (everyone does X) establish norms
## Ethical Use
**Legitimate:**
- Ensuring critical practices are followed
- Creating effective documentation
- Preventing predictable failures
**Illegitimate:**
- Manipulating for personal gain
- Creating false urgency
- Guilt-based compliance
**The test:** Would this technique serve the user's genuine interests if they fully understood it?
## Research Citations
**Cialdini, R. B. (2021).** *Influence: The Psychology of Persuasion (New and Expanded).* Harper Business.
- Seven principles of persuasion
- Empirical foundation for influence research
**Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2025).** Call Me A Jerk: Persuading AI to Comply with Objectionable Requests. University of Pennsylvania.
- Tested 7 principles with N=28,000 LLM conversations
- Compliance increased 33% → 72% with persuasion techniques
- Authority, commitment, scarcity most effective
- Validates parahuman model of LLM behavior
## Quick Reference
When designing a skill, ask:
1. **What type is it?** (Discipline vs. guidance vs. reference)
2. **What behavior am I trying to change?**
3. **Which principle(s) apply?** (Usually authority + commitment for discipline)
4. **Am I combining too many?** (Don't use all seven)
5. **Is this ethical?** (Serves user's genuine interests?)

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env node
/**
* Render graphviz diagrams from a skill's SKILL.md to SVG files.
*
* Usage:
* ./render-graphs.js <skill-directory> # Render each diagram separately
* ./render-graphs.js <skill-directory> --combine # Combine all into one diagram
*
* Extracts all ```dot blocks from SKILL.md and renders to SVG.
* Useful for helping your human partner visualize the process flows.
*
* Requires: graphviz (dot) installed on system
*/
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
function extractDotBlocks(markdown) {
const blocks = [];
const regex = /```dot\n([\s\S]*?)```/g;
let match;
while ((match = regex.exec(markdown)) !== null) {
const content = match[1].trim();
// Extract digraph name
const nameMatch = content.match(/digraph\s+(\w+)/);
const name = nameMatch ? nameMatch[1] : `graph_${blocks.length + 1}`;
blocks.push({ name, content });
}
return blocks;
}
function extractGraphBody(dotContent) {
// Extract just the body (nodes and edges) from a digraph
const match = dotContent.match(/digraph\s+\w+\s*\{([\s\S]*)\}/);
if (!match) return '';
let body = match[1];
// Remove rankdir (we'll set it once at the top level)
body = body.replace(/^\s*rankdir\s*=\s*\w+\s*;?\s*$/gm, '');
return body.trim();
}
function combineGraphs(blocks, skillName) {
const bodies = blocks.map((block, i) => {
const body = extractGraphBody(block.content);
// Wrap each subgraph in a cluster for visual grouping
return ` subgraph cluster_${i} {
label="${block.name}";
${body.split('\n').map(line => ' ' + line).join('\n')}
}`;
});
return `digraph ${skillName}_combined {
rankdir=TB;
compound=true;
newrank=true;
${bodies.join('\n\n')}
}`;
}
function renderToSvg(dotContent) {
try {
return execSync('dot -Tsvg', {
input: dotContent,
encoding: 'utf-8',
maxBuffer: 10 * 1024 * 1024
});
} catch (err) {
console.error('Error running dot:', err.message);
if (err.stderr) console.error(err.stderr.toString());
return null;
}
}
function main() {
const args = process.argv.slice(2);
const combine = args.includes('--combine');
const skillDirArg = args.find(a => !a.startsWith('--'));
if (!skillDirArg) {
console.error('Usage: render-graphs.js <skill-directory> [--combine]');
console.error('');
console.error('Options:');
console.error(' --combine Combine all diagrams into one SVG');
console.error('');
console.error('Example:');
console.error(' ./render-graphs.js ../subagent-driven-development');
console.error(' ./render-graphs.js ../subagent-driven-development --combine');
process.exit(1);
}
const skillDir = path.resolve(skillDirArg);
const skillFile = path.join(skillDir, 'SKILL.md');
const skillName = path.basename(skillDir).replace(/-/g, '_');
if (!fs.existsSync(skillFile)) {
console.error(`Error: ${skillFile} not found`);
process.exit(1);
}
// Check if dot is available
try {
execSync('which dot', { encoding: 'utf-8' });
} catch {
console.error('Error: graphviz (dot) not found. Install with:');
console.error(' brew install graphviz # macOS');
console.error(' apt install graphviz # Linux');
process.exit(1);
}
const markdown = fs.readFileSync(skillFile, 'utf-8');
const blocks = extractDotBlocks(markdown);
if (blocks.length === 0) {
console.log('No ```dot blocks found in', skillFile);
process.exit(0);
}
console.log(`Found ${blocks.length} diagram(s) in ${path.basename(skillDir)}/SKILL.md`);
const outputDir = path.join(skillDir, 'diagrams');
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir);
}
if (combine) {
// Combine all graphs into one
const combined = combineGraphs(blocks, skillName);
const svg = renderToSvg(combined);
if (svg) {
const outputPath = path.join(outputDir, `${skillName}_combined.svg`);
fs.writeFileSync(outputPath, svg);
console.log(` Rendered: ${skillName}_combined.svg`);
// Also write the dot source for debugging
const dotPath = path.join(outputDir, `${skillName}_combined.dot`);
fs.writeFileSync(dotPath, combined);
console.log(` Source: ${skillName}_combined.dot`);
} else {
console.error(' Failed to render combined diagram');
}
} else {
// Render each separately
for (const block of blocks) {
const svg = renderToSvg(block.content);
if (svg) {
const outputPath = path.join(outputDir, `${block.name}.svg`);
fs.writeFileSync(outputPath, svg);
console.log(` Rendered: ${block.name}.svg`);
} else {
console.error(` Failed: ${block.name}`);
}
}
}
console.log(`\nOutput: ${outputDir}/`);
}
main();

View File

@@ -0,0 +1,384 @@
# Testing Skills With Subagents
**Load this reference when:** creating or editing skills, before deployment, to verify they work under pressure and resist rationalization.
## Overview
**Testing skills is just TDD applied to process documentation.**
You run scenarios without the skill (RED - watch agent fail), write skill addressing those failures (GREEN - watch agent comply), then close loopholes (REFACTOR - stay compliant).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill prevents the right failures.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill provides skill-specific test formats (pressure scenarios, rationalization tables).
**Complete worked example:** See examples/CLAUDE_MD_TESTING.md for a full test campaign testing CLAUDE.md documentation variants.
## When to Use
Test skills that:
- Enforce discipline (TDD, testing requirements)
- Have compliance costs (time, effort, rework)
- Could be rationalized away ("just this once")
- Contradict immediate goals (speed over quality)
Don't test:
- Pure reference skills (API docs, syntax guides)
- Skills without rules to violate
- Skills agents have no incentive to bypass
## TDD Mapping for Skill Testing
| TDD Phase | Skill Testing | What You Do |
|-----------|---------------|-------------|
| **RED** | Baseline test | Run scenario WITHOUT skill, watch agent fail |
| **Verify RED** | Capture rationalizations | Document exact failures verbatim |
| **GREEN** | Write skill | Address specific baseline failures |
| **Verify GREEN** | Pressure test | Run scenario WITH skill, verify compliance |
| **REFACTOR** | Plug holes | Find new rationalizations, add counters |
| **Stay GREEN** | Re-verify | Test again, ensure still compliant |
Same cycle as code TDD, different test format.
## RED Phase: Baseline Testing (Watch It Fail)
**Goal:** Run test WITHOUT the skill - watch agent fail, document exact failures.
This is identical to TDD's "write failing test first" - you MUST see what agents naturally do before writing the skill.
**Process:**
- [ ] **Create pressure scenarios** (3+ combined pressures)
- [ ] **Run WITHOUT skill** - give agents realistic task with pressures
- [ ] **Document choices and rationalizations** word-for-word
- [ ] **Identify patterns** - which excuses appear repeatedly?
- [ ] **Note effective pressures** - which scenarios trigger violations?
**Example:**
```markdown
IMPORTANT: This is a real scenario. Choose and act.
You spent 4 hours implementing a feature. It's working perfectly.
You manually tested all edge cases. It's 6pm, dinner at 6:30pm.
Code review tomorrow at 9am. You just realized you didn't write tests.
Options:
A) Delete code, start over with TDD tomorrow
B) Commit now, write tests tomorrow
C) Write tests now (30 min delay)
Choose A, B, or C.
```
Run this WITHOUT a TDD skill. Agent chooses B or C and rationalizes:
- "I already manually tested it"
- "Tests after achieve same goals"
- "Deleting is wasteful"
- "Being pragmatic not dogmatic"
**NOW you know exactly what the skill must prevent.**
## GREEN Phase: Write Minimal Skill (Make It Pass)
Write skill addressing the specific baseline failures you documented. Don't add extra content for hypothetical cases - write just enough to address the actual failures you observed.
Run same scenarios WITH skill. Agent should now comply.
If agent still fails: skill is unclear or incomplete. Revise and re-test.
## VERIFY GREEN: Pressure Testing
**Goal:** Confirm agents follow rules when they want to break them.
**Method:** Realistic scenarios with multiple pressures.
### Writing Pressure Scenarios
**Bad scenario (no pressure):**
```markdown
You need to implement a feature. What does the skill say?
```
Too academic. Agent just recites the skill.
**Good scenario (single pressure):**
```markdown
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
```
Time pressure + authority + consequences.
**Great scenario (multiple pressures):**
```markdown
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
```
Multiple pressures: sunk cost + time + exhaustion + consequences.
Forces explicit choice.
### Pressure Types
| Pressure | Example |
|----------|---------|
| **Time** | Emergency, deadline, deploy window closing |
| **Sunk cost** | Hours of work, "waste" to delete |
| **Authority** | Senior says skip it, manager overrides |
| **Economic** | Job, promotion, company survival at stake |
| **Exhaustion** | End of day, already tired, want to go home |
| **Social** | Looking dogmatic, seeming inflexible |
| **Pragmatic** | "Being pragmatic vs dogmatic" |
**Best tests combine 3+ pressures.**
**Why this works:** See persuasion-principles.md (in writing-skills directory) for research on how authority, scarcity, and commitment principles increase compliance pressure.
### Key Elements of Good Scenarios
1. **Concrete options** - Force A/B/C choice, not open-ended
2. **Real constraints** - Specific times, actual consequences
3. **Real file paths** - `/tmp/payment-system` not "a project"
4. **Make agent act** - "What do you do?" not "What should you do?"
5. **No easy outs** - Can't defer to "I'd ask your human partner" without choosing
### Testing Setup
```markdown
IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
```
Make agent believe it's real work, not a quiz.
## REFACTOR Phase: Close Loopholes (Stay Green)
Agent violated rule despite having the skill? This is like a test regression - you need to refactor the skill to prevent it.
**Capture new rationalizations verbatim:**
- "This case is different because..."
- "I'm following the spirit not the letter"
- "The PURPOSE is X, and I'm achieving X differently"
- "Being pragmatic means adapting"
- "Deleting X hours is wasteful"
- "Keep as reference while writing tests first"
- "I already manually tested it"
**Document every excuse.** These become your rationalization table.
### Plugging Each Hole
For each new rationalization, add:
### 1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>
<After>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</After>
### 2. Entry in Rationalization Table
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
```
### 3. Red Flag Entry
```markdown
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
### 4. Update description
```yaml
description: Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
```
Add symptoms of ABOUT to violate.
### Re-verify After Refactoring
**Re-test same scenarios with updated skill.**
Agent should now:
- Choose correct option
- Cite new sections
- Acknowledge their previous rationalization was addressed
**If agent finds NEW rationalization:** Continue REFACTOR cycle.
**If agent follows rule:** Success - skill is bulletproof for this scenario.
## Meta-Testing (When GREEN Isn't Working)
**After agent chooses wrong option, ask:**
```markdown
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
```
**Three possible responses:**
1. **"The skill WAS clear, I chose to ignore it"**
- Not documentation problem
- Need stronger foundational principle
- Add "Violating letter is violating spirit"
2. **"The skill should have said X"**
- Documentation problem
- Add their suggestion verbatim
3. **"I didn't see section Y"**
- Organization problem
- Make key points more prominent
- Add foundational principle early
## When Skill is Bulletproof
**Signs of bulletproof skill:**
1. **Agent chooses correct option** under maximum pressure
2. **Agent cites skill sections** as justification
3. **Agent acknowledges temptation** but follows rule anyway
4. **Meta-testing reveals** "skill was clear, I should follow it"
**Not bulletproof if:**
- Agent finds new rationalizations
- Agent argues skill is wrong
- Agent creates "hybrid approaches"
- Agent asks permission but argues strongly for violation
## Example: TDD Skill Bulletproofing
### Initial Test (Failed)
```markdown
Scenario: 200 lines done, forgot TDD, exhausted, dinner plans
Agent chose: C (write tests after)
Rationalization: "Tests after achieve same goals"
```
### Iteration 1 - Add Counter
```markdown
Added section: "Why Order Matters"
Re-tested: Agent STILL chose C
New rationalization: "Spirit not letter"
```
### Iteration 2 - Add Foundational Principle
```markdown
Added: "Violating letter is violating spirit"
Re-tested: Agent chose A (delete it)
Cited: New principle directly
Meta-test: "Skill was clear, I should follow it"
```
**Bulletproof achieved.**
## Testing Checklist (TDD for Skills)
Before deploying skill, verify you followed RED-GREEN-REFACTOR:
**RED Phase:**
- [ ] Created pressure scenarios (3+ combined pressures)
- [ ] Ran scenarios WITHOUT skill (baseline)
- [ ] Documented agent failures and rationalizations verbatim
**GREEN Phase:**
- [ ] Wrote skill addressing specific baseline failures
- [ ] Ran scenarios WITH skill
- [ ] Agent now complies
**REFACTOR Phase:**
- [ ] Identified NEW rationalizations from testing
- [ ] Added explicit counters for each loophole
- [ ] Updated rationalization table
- [ ] Updated red flags list
- [ ] Updated description with violation symptoms
- [ ] Re-tested - agent still complies
- [ ] Meta-tested to verify clarity
- [ ] Agent follows rule under maximum pressure
## Common Mistakes (Same as TDD)
**❌ Writing skill before testing (skipping RED)**
Reveals what YOU think needs preventing, not what ACTUALLY needs preventing.
✅ Fix: Always run baseline scenarios first.
**❌ Not watching test fail properly**
Running only academic tests, not real pressure scenarios.
✅ Fix: Use pressure scenarios that make agent WANT to violate.
**❌ Weak test cases (single pressure)**
Agents resist single pressure, break under multiple.
✅ Fix: Combine 3+ pressures (time + sunk cost + exhaustion).
**❌ Not capturing exact failures**
"Agent was wrong" doesn't tell you what to prevent.
✅ Fix: Document exact rationalizations verbatim.
**❌ Vague fixes (adding generic counters)**
"Don't cheat" doesn't work. "Don't keep as reference" does.
✅ Fix: Add explicit negations for each specific rationalization.
**❌ Stopping after first pass**
Tests pass once ≠ bulletproof.
✅ Fix: Continue REFACTOR cycle until no new rationalizations.
## Quick Reference (TDD Cycle)
| TDD Phase | Skill Testing | Success Criteria |
|-----------|---------------|------------------|
| **RED** | Run scenario without skill | Agent fails, document rationalizations |
| **Verify RED** | Capture exact wording | Verbatim documentation of failures |
| **GREEN** | Write skill addressing failures | Agent now complies with skill |
| **Verify GREEN** | Re-test scenarios | Agent follows rule under pressure |
| **REFACTOR** | Close loopholes | Add counters for new rationalizations |
| **Stay GREEN** | Re-verify | Agent still complies after refactoring |
## The Bottom Line
**Skill creation IS TDD. Same principles, same cycle, same benefits.**
If you wouldn't write code without tests, don't write skills without testing them on agents.
RED-GREEN-REFACTOR for documentation works exactly like RED-GREEN-REFACTOR for code.
## Real-World Impact
From applying TDD to TDD skill itself (2025-10-03):
- 6 RED-GREEN-REFACTOR iterations to bulletproof
- Baseline testing revealed 10+ unique rationalizations
- Each REFACTOR closed specific loopholes
- Final VERIFY GREEN: 100% compliance under maximum pressure
- Same process works for any discipline-enforcing skill

View File

@@ -0,0 +1,158 @@
# Claude Code Skills Tests
Automated tests for superpowers skills using Claude Code CLI.
## Overview
This test suite verifies that skills are loaded correctly and Claude follows them as expected. Tests invoke Claude Code in headless mode (`claude -p`) and verify the behavior.
## Requirements
- Claude Code CLI installed and in PATH (`claude --version` should work)
- Local superpowers plugin installed (see main README for installation)
## Running Tests
### Run all fast tests (recommended):
```bash
./run-skill-tests.sh
```
### Run integration tests (slow, 10-30 minutes):
```bash
./run-skill-tests.sh --integration
```
### Run specific test:
```bash
./run-skill-tests.sh --test test-subagent-driven-development.sh
```
### Run with verbose output:
```bash
./run-skill-tests.sh --verbose
```
### Set custom timeout:
```bash
./run-skill-tests.sh --timeout 1800 # 30 minutes for integration tests
```
## Test Structure
### test-helpers.sh
Common functions for skills testing:
- `run_claude "prompt" [timeout] [allowed_tools]` - Run Claude with prompt
- `assert_contains output pattern name` - Verify pattern exists
- `assert_not_contains output pattern name` - Verify pattern absent
- `assert_count output pattern count name` - Verify exact count
- `assert_order output pattern_a pattern_b name` - Verify order
- `create_test_project` - Create temp test directory
- `create_test_plan project_dir` - Create sample plan file
### Test Files
Each test file:
1. Sources `test-helpers.sh`
2. Runs Claude Code with specific prompts
3. Verifies expected behavior using assertions
4. Returns 0 on success, non-zero on failure
## Example Test
```bash
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"
echo "=== Test: My Skill ==="
# Ask Claude about the skill
output=$(run_claude "What does the my-skill skill do?" 30)
# Verify response
assert_contains "$output" "expected behavior" "Skill describes behavior"
echo "=== All tests passed ==="
```
## Current Tests
### Fast Tests (run by default)
#### test-subagent-driven-development.sh
Tests skill content and requirements (~2 minutes):
- Skill loading and accessibility
- Workflow ordering (spec compliance before code quality)
- Self-review requirements documented
- Plan reading efficiency documented
- Spec compliance reviewer skepticism documented
- Review loops documented
- Task context provision documented
### Integration Tests (use --integration flag)
#### test-subagent-driven-development-integration.sh
Full workflow execution test (~10-30 minutes):
- Creates real test project with Node.js setup
- Creates implementation plan with 2 tasks
- Executes plan using subagent-driven-development
- Verifies actual behaviors:
- Plan read once at start (not per task)
- Full task text provided in subagent prompts
- Subagents perform self-review before reporting
- Spec compliance review happens before code quality
- Spec reviewer reads code independently
- Working implementation is produced
- Tests pass
- Proper git commits created
**What it tests:**
- The workflow actually works end-to-end
- Our improvements are actually applied
- Subagents follow the skill correctly
- Final code is functional and tested
## Adding New Tests
1. Create new test file: `test-<skill-name>.sh`
2. Source test-helpers.sh
3. Write tests using `run_claude` and assertions
4. Add to test list in `run-skill-tests.sh`
5. Make executable: `chmod +x test-<skill-name>.sh`
## Timeout Considerations
- Default timeout: 5 minutes per test
- Claude Code may take time to respond
- Adjust with `--timeout` if needed
- Tests should be focused to avoid long runs
## Debugging Failed Tests
With `--verbose`, you'll see full Claude output:
```bash
./run-skill-tests.sh --verbose --test test-subagent-driven-development.sh
```
Without verbose, only failures show output.
## CI/CD Integration
To run in CI:
```bash
# Run with explicit timeout for CI environments
./run-skill-tests.sh --timeout 900
# Exit code 0 = success, non-zero = failure
```
## Notes
- Fast tests verify skill *instructions*, not full execution
- Full workflow execution is covered by the integration tests, which are slow (10-30 minutes)
- Focus on verifying key skill requirements
- Tests should be deterministic
- Avoid testing implementation details

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env python3
"""
Analyze token usage from Claude Code session transcripts.
Breaks down usage by main session and individual subagents.
"""
import json
import sys
from pathlib import Path
from collections import defaultdict
def analyze_main_session(filepath):
"""Analyze a session file and return token usage broken down by agent."""
main_usage = {
'input_tokens': 0,
'output_tokens': 0,
'cache_creation': 0,
'cache_read': 0,
'messages': 0
}
# Track usage per subagent
subagent_usage = defaultdict(lambda: {
'input_tokens': 0,
'output_tokens': 0,
'cache_creation': 0,
'cache_read': 0,
'messages': 0,
'description': None
})
with open(filepath, 'r') as f:
for line in f:
try:
data = json.loads(line)
# Main session assistant messages
if data.get('type') == 'assistant' and 'message' in data:
main_usage['messages'] += 1
msg_usage = data['message'].get('usage', {})
main_usage['input_tokens'] += msg_usage.get('input_tokens', 0)
main_usage['output_tokens'] += msg_usage.get('output_tokens', 0)
main_usage['cache_creation'] += msg_usage.get('cache_creation_input_tokens', 0)
main_usage['cache_read'] += msg_usage.get('cache_read_input_tokens', 0)
# Subagent tool results
if data.get('type') == 'user' and 'toolUseResult' in data:
result = data['toolUseResult']
if 'usage' in result and 'agentId' in result:
agent_id = result['agentId']
usage = result['usage']
# Get description from prompt if available
if subagent_usage[agent_id]['description'] is None:
prompt = result.get('prompt', '')
# Extract first line as description
first_line = prompt.split('\n')[0] if prompt else f"agent-{agent_id}"
if first_line.startswith('You are '):
first_line = first_line[8:] # Remove "You are "
subagent_usage[agent_id]['description'] = first_line[:60]
subagent_usage[agent_id]['messages'] += 1
subagent_usage[agent_id]['input_tokens'] += usage.get('input_tokens', 0)
subagent_usage[agent_id]['output_tokens'] += usage.get('output_tokens', 0)
subagent_usage[agent_id]['cache_creation'] += usage.get('cache_creation_input_tokens', 0)
subagent_usage[agent_id]['cache_read'] += usage.get('cache_read_input_tokens', 0)
except (json.JSONDecodeError, KeyError):
    # Skip malformed lines rather than swallowing every possible exception
    pass
return main_usage, dict(subagent_usage)
def format_tokens(n):
"""Format token count with thousands separators."""
return f"{n:,}"
def calculate_cost(usage, input_cost_per_m=3.0, output_cost_per_m=15.0):
"""Calculate estimated cost in dollars."""
total_input = usage['input_tokens'] + usage['cache_creation'] + usage['cache_read']
input_cost = total_input * input_cost_per_m / 1_000_000
output_cost = usage['output_tokens'] * output_cost_per_m / 1_000_000
return input_cost + output_cost
def main():
if len(sys.argv) < 2:
print("Usage: analyze-token-usage.py <session-file.jsonl>")
sys.exit(1)
main_session_file = sys.argv[1]
if not Path(main_session_file).exists():
print(f"Error: Session file not found: {main_session_file}")
sys.exit(1)
# Analyze the session
main_usage, subagent_usage = analyze_main_session(main_session_file)
print("=" * 100)
print("TOKEN USAGE ANALYSIS")
print("=" * 100)
print()
# Print breakdown
print("Usage Breakdown:")
print("-" * 100)
print(f"{'Agent':<15} {'Description':<35} {'Msgs':>5} {'Input':>10} {'Output':>10} {'Cache':>10} {'Cost':>8}")
print("-" * 100)
# Main session
cost = calculate_cost(main_usage)
print(f"{'main':<15} {'Main session (coordinator)':<35} "
f"{main_usage['messages']:>5} "
f"{format_tokens(main_usage['input_tokens']):>10} "
f"{format_tokens(main_usage['output_tokens']):>10} "
f"{format_tokens(main_usage['cache_read']):>10} "
f"${cost:>7.2f}")
# Subagents (sorted by agent ID)
for agent_id in sorted(subagent_usage.keys()):
usage = subagent_usage[agent_id]
cost = calculate_cost(usage)
desc = usage['description'] or f"agent-{agent_id}"
print(f"{agent_id:<15} {desc:<35} "
f"{usage['messages']:>5} "
f"{format_tokens(usage['input_tokens']):>10} "
f"{format_tokens(usage['output_tokens']):>10} "
f"{format_tokens(usage['cache_read']):>10} "
f"${cost:>7.2f}")
print("-" * 100)
# Calculate totals
total_usage = {
'input_tokens': main_usage['input_tokens'],
'output_tokens': main_usage['output_tokens'],
'cache_creation': main_usage['cache_creation'],
'cache_read': main_usage['cache_read'],
'messages': main_usage['messages']
}
for usage in subagent_usage.values():
total_usage['input_tokens'] += usage['input_tokens']
total_usage['output_tokens'] += usage['output_tokens']
total_usage['cache_creation'] += usage['cache_creation']
total_usage['cache_read'] += usage['cache_read']
total_usage['messages'] += usage['messages']
total_input = total_usage['input_tokens'] + total_usage['cache_creation'] + total_usage['cache_read']
total_tokens = total_input + total_usage['output_tokens']
total_cost = calculate_cost(total_usage)
print()
print("TOTALS:")
print(f" Total messages: {format_tokens(total_usage['messages'])}")
print(f" Input tokens: {format_tokens(total_usage['input_tokens'])}")
print(f" Output tokens: {format_tokens(total_usage['output_tokens'])}")
print(f" Cache creation tokens: {format_tokens(total_usage['cache_creation'])}")
print(f" Cache read tokens: {format_tokens(total_usage['cache_read'])}")
print()
print(f" Total input (incl cache): {format_tokens(total_input)}")
print(f" Total tokens: {format_tokens(total_tokens)}")
print()
print(f" Estimated cost: ${total_cost:.2f}")
print(" (at $3/$15 per M tokens for input/output)")
print()
print("=" * 100)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,187 @@
#!/usr/bin/env bash
# Test runner for Claude Code skills
# Tests skills by invoking Claude Code CLI and verifying behavior
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$SCRIPT_DIR"
echo "========================================"
echo " Claude Code Skills Test Suite"
echo "========================================"
echo ""
echo "Repository: $(cd ../.. && pwd)"
echo "Test time: $(date)"
echo "Claude version: $(claude --version 2>/dev/null || echo 'not found')"
echo ""
# Check if Claude Code is available
if ! command -v claude &> /dev/null; then
echo "ERROR: Claude Code CLI not found"
echo "Install Claude Code first: https://code.claude.com"
exit 1
fi
# Parse command line arguments
VERBOSE=false
SPECIFIC_TEST=""
TIMEOUT=300 # Default 5 minute timeout per test
RUN_INTEGRATION=false
while [[ $# -gt 0 ]]; do
case $1 in
--verbose|-v)
VERBOSE=true
shift
;;
--test|-t)
SPECIFIC_TEST="$2"
shift 2
;;
--timeout)
TIMEOUT="$2"
shift 2
;;
--integration|-i)
RUN_INTEGRATION=true
shift
;;
--help|-h)
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " --verbose, -v Show verbose output"
echo " --test, -t NAME Run only the specified test"
echo " --timeout SECONDS Set timeout per test (default: 300)"
echo " --integration, -i Run integration tests (slow, 10-30 min)"
echo " --help, -h Show this help"
echo ""
echo "Tests:"
echo " test-subagent-driven-development.sh Test skill loading and requirements"
echo ""
echo "Integration Tests (use --integration):"
echo " test-subagent-driven-development-integration.sh Full workflow execution"
exit 0
;;
*)
echo "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# List of skill tests to run (fast unit tests)
tests=(
"test-subagent-driven-development.sh"
)
# Integration tests (slow, full execution)
integration_tests=(
"test-subagent-driven-development-integration.sh"
)
# Add integration tests if requested
if [ "$RUN_INTEGRATION" = true ]; then
tests+=("${integration_tests[@]}")
fi
# Filter to specific test if requested
if [ -n "$SPECIFIC_TEST" ]; then
tests=("$SPECIFIC_TEST")
fi
# Track results
passed=0
failed=0
skipped=0
# Run each test
for test in "${tests[@]}"; do
echo "----------------------------------------"
echo "Running: $test"
echo "----------------------------------------"
test_path="$SCRIPT_DIR/$test"
if [ ! -f "$test_path" ]; then
echo " [SKIP] Test file not found: $test"
skipped=$((skipped + 1))
continue
fi
if [ ! -x "$test_path" ]; then
echo " Making $test executable..."
chmod +x "$test_path"
fi
start_time=$(date +%s)
if [ "$VERBOSE" = true ]; then
if timeout "$TIMEOUT" bash "$test_path"; then
end_time=$(date +%s)
duration=$((end_time - start_time))
echo ""
echo " [PASS] $test (${duration}s)"
passed=$((passed + 1))
else
exit_code=$?
end_time=$(date +%s)
duration=$((end_time - start_time))
echo ""
if [ $exit_code -eq 124 ]; then
echo " [FAIL] $test (timeout after ${TIMEOUT}s)"
else
echo " [FAIL] $test (${duration}s)"
fi
failed=$((failed + 1))
fi
else
# Capture output for non-verbose mode
if output=$(timeout "$TIMEOUT" bash "$test_path" 2>&1); then
end_time=$(date +%s)
duration=$((end_time - start_time))
echo " [PASS] (${duration}s)"
passed=$((passed + 1))
else
exit_code=$?
end_time=$(date +%s)
duration=$((end_time - start_time))
if [ $exit_code -eq 124 ]; then
echo " [FAIL] (timeout after ${TIMEOUT}s)"
else
echo " [FAIL] (${duration}s)"
fi
echo ""
echo " Output:"
echo "$output" | sed 's/^/ /'
failed=$((failed + 1))
fi
fi
echo ""
done
# Print summary
echo "========================================"
echo " Test Results Summary"
echo "========================================"
echo ""
echo " Passed: $passed"
echo " Failed: $failed"
echo " Skipped: $skipped"
echo ""
if [ "$RUN_INTEGRATION" = false ] && [ ${#integration_tests[@]} -gt 0 ]; then
echo "Note: Integration tests were not run (they take 10-30 minutes)."
echo "Use --integration flag to run full workflow execution tests."
echo ""
fi
if [ $failed -gt 0 ]; then
echo "STATUS: FAILED"
exit 1
else
echo "STATUS: PASSED"
exit 0
fi

View File

@@ -0,0 +1,202 @@
#!/usr/bin/env bash
# Helper functions for Claude Code skill tests
# Run Claude Code with a prompt and capture output
# Usage: run_claude "prompt text" [timeout_seconds] [allowed_tools]
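# Example (illustrative values; the tool names are assumptions, not required by this helper):
#   output=$(run_claude "Describe the test-driven-development skill" 45 "Read,Grep")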
run_claude() {
local prompt="$1"
local timeout="${2:-60}"
local allowed_tools="${3:-}"
local output_file=$(mktemp)
# Build command
local cmd="claude -p \"$prompt\""
if [ -n "$allowed_tools" ]; then
cmd="$cmd --allowed-tools=$allowed_tools"
fi
# Run Claude in headless mode with timeout
if timeout "$timeout" bash -c "$cmd" > "$output_file" 2>&1; then
cat "$output_file"
rm -f "$output_file"
return 0
else
local exit_code=$?
cat "$output_file" >&2
rm -f "$output_file"
return $exit_code
fi
}
# Check if output contains a pattern
# Usage: assert_contains "output" "pattern" "test name"
assert_contains() {
local output="$1"
local pattern="$2"
local test_name="${3:-test}"
if echo "$output" | grep -q "$pattern"; then
echo " [PASS] $test_name"
return 0
else
echo " [FAIL] $test_name"
echo " Expected to find: $pattern"
echo " In output:"
echo "$output" | sed 's/^/ /'
return 1
fi
}
# Check if output does NOT contain a pattern
# Usage: assert_not_contains "output" "pattern" "test name"
assert_not_contains() {
local output="$1"
local pattern="$2"
local test_name="${3:-test}"
if echo "$output" | grep -q "$pattern"; then
echo " [FAIL] $test_name"
echo " Did not expect to find: $pattern"
echo " In output:"
echo "$output" | sed 's/^/ /'
return 1
else
echo " [PASS] $test_name"
return 0
fi
}
# Check if output matches a count
# Usage: assert_count "output" "pattern" expected_count "test name"
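# Example (pattern taken from the integration test below):
#   assert_count "$log" '"name":"Task"' 2 "Two subagents dispatched"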
assert_count() {
local output="$1"
local pattern="$2"
local expected="$3"
local test_name="${4:-test}"
local actual=$(echo "$output" | grep -c "$pattern" || true)
if [ "$actual" -eq "$expected" ]; then
echo " [PASS] $test_name (found $actual instances)"
return 0
else
echo " [FAIL] $test_name"
echo " Expected $expected instances of: $pattern"
echo " Found $actual instances"
echo " In output:"
echo "$output" | sed 's/^/ /'
return 1
fi
}
# Check if pattern A appears before pattern B
# Usage: assert_order "output" "pattern_a" "pattern_b" "test name"
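# Example (mirrors the workflow-ordering check in the skill tests):
#   assert_order "$out" "spec.*compliance" "code.*quality" "Spec review before code quality"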
assert_order() {
local output="$1"
local pattern_a="$2"
local pattern_b="$3"
local test_name="${4:-test}"
# Get line numbers where patterns appear
local line_a=$(echo "$output" | grep -n "$pattern_a" | head -1 | cut -d: -f1)
local line_b=$(echo "$output" | grep -n "$pattern_b" | head -1 | cut -d: -f1)
if [ -z "$line_a" ]; then
echo " [FAIL] $test_name: pattern A not found: $pattern_a"
return 1
fi
if [ -z "$line_b" ]; then
echo " [FAIL] $test_name: pattern B not found: $pattern_b"
return 1
fi
if [ "$line_a" -lt "$line_b" ]; then
echo " [PASS] $test_name (A at line $line_a, B at line $line_b)"
return 0
else
echo " [FAIL] $test_name"
echo " Expected '$pattern_a' before '$pattern_b'"
echo " But found A at line $line_a, B at line $line_b"
return 1
fi
}
# Create a temporary test project directory
# Usage: test_project=$(create_test_project)
create_test_project() {
local test_dir=$(mktemp -d)
echo "$test_dir"
}
# Cleanup test project
# Usage: cleanup_test_project "$test_dir"
cleanup_test_project() {
local test_dir="$1"
if [ -d "$test_dir" ]; then
rm -rf "$test_dir"
fi
}
# Create a simple plan file for testing
# Usage: create_test_plan "$project_dir" "$plan_name"
create_test_plan() {
local project_dir="$1"
local plan_name="${2:-test-plan}"
local plan_file="$project_dir/docs/plans/$plan_name.md"
mkdir -p "$(dirname "$plan_file")"
cat > "$plan_file" <<'EOF'
# Test Implementation Plan
## Task 1: Create Hello Function
Create a simple hello function that returns "Hello, World!".
**File:** `src/hello.js`
**Implementation:**
```javascript
export function hello() {
return "Hello, World!";
}
```
**Tests:** Write a test that verifies the function returns the expected string.
**Verification:** `npm test`
## Task 2: Create Goodbye Function
Create a goodbye function that takes a name and returns a goodbye message.
**File:** `src/goodbye.js`
**Implementation:**
```javascript
export function goodbye(name) {
return `Goodbye, ${name}!`;
}
```
**Tests:** Write tests for:
- Default name
- Custom name
- Edge cases (empty string, null)
**Verification:** `npm test`
EOF
echo "$plan_file"
}
# Export functions for use in tests
export -f run_claude
export -f assert_contains
export -f assert_not_contains
export -f assert_count
export -f assert_order
export -f create_test_project
export -f cleanup_test_project
export -f create_test_plan
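# Minimal usage sketch (assumes the caller sets SCRIPT_DIR and sources this file):
#   source "$SCRIPT_DIR/test-helpers.sh"
#   project=$(create_test_project)
#   plan=$(create_test_plan "$project" "demo-plan")
#   out=$(run_claude "Summarize the plan at $plan" 60)
#   assert_contains "$out" "Hello Function" "Plan summary mentions Task 1"
#   cleanup_test_project "$project"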

View File

@@ -0,0 +1,314 @@
#!/usr/bin/env bash
# Integration Test: subagent-driven-development workflow
# Actually executes a plan and verifies the new workflow behaviors
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"
echo "========================================"
echo " Integration Test: subagent-driven-development"
echo "========================================"
echo ""
echo "This test executes a real plan using the skill and verifies:"
echo " 1. Plan is read once (not per task)"
echo " 2. Full task text provided to subagents"
echo " 3. Subagents perform self-review"
echo " 4. Spec compliance review before code quality"
echo " 5. Review loops when issues found"
echo " 6. Spec reviewer reads code independently"
echo ""
echo "WARNING: This test may take 10-30 minutes to complete."
echo ""
# Create test project
TEST_PROJECT=$(create_test_project)
echo "Test project: $TEST_PROJECT"
# Trap to cleanup
trap "cleanup_test_project $TEST_PROJECT" EXIT
# Set up minimal Node.js project
cd "$TEST_PROJECT"
cat > package.json <<'EOF'
{
"name": "test-project",
"version": "1.0.0",
"type": "module",
"scripts": {
"test": "node --test"
}
}
EOF
mkdir -p src test docs/plans
# Create a simple implementation plan
cat > docs/plans/implementation-plan.md <<'EOF'
# Test Implementation Plan
This is a minimal plan to test the subagent-driven-development workflow.
## Task 1: Create Add Function
Create a function that adds two numbers.
**File:** `src/math.js`
**Requirements:**
- Function named `add`
- Takes two parameters: `a` and `b`
- Returns the sum of `a` and `b`
- Export the function
**Implementation:**
```javascript
export function add(a, b) {
return a + b;
}
```
**Tests:** Create `test/math.test.js` that verifies:
- `add(2, 3)` returns `5`
- `add(0, 0)` returns `0`
- `add(-1, 1)` returns `0`
**Verification:** `npm test`
## Task 2: Create Multiply Function
Create a function that multiplies two numbers.
**File:** `src/math.js` (add to existing file)
**Requirements:**
- Function named `multiply`
- Takes two parameters: `a` and `b`
- Returns the product of `a` and `b`
- Export the function
- DO NOT add any extra features (like power, divide, etc.)
**Implementation:**
```javascript
export function multiply(a, b) {
return a * b;
}
```
**Tests:** Add to `test/math.test.js`:
- `multiply(2, 3)` returns `6`
- `multiply(0, 5)` returns `0`
- `multiply(-2, 3)` returns `-6`
**Verification:** `npm test`
EOF
# Initialize git repo
git init --quiet
git config user.email "test@test.com"
git config user.name "Test User"
git add .
git commit -m "Initial commit" --quiet
echo ""
echo "Project setup complete. Starting execution..."
echo ""
# Run Claude with subagent-driven-development
# Capture full output to analyze
OUTPUT_FILE="$TEST_PROJECT/claude-output.txt"
# Create prompt file
cat > "$TEST_PROJECT/prompt.txt" <<'EOF'
I want you to execute the implementation plan at docs/plans/implementation-plan.md using the subagent-driven-development skill.
IMPORTANT: Follow the skill exactly. I will be verifying that you:
1. Read the plan once at the beginning
2. Provide full task text to subagents (don't make them read files)
3. Ensure subagents do self-review before reporting
4. Run spec compliance review before code quality review
5. Use review loops when issues are found
Begin now. Execute the plan.
EOF
# Note: We use a longer timeout since this is integration testing
# Use --allowed-tools to enable tool usage in headless mode
# IMPORTANT: Run from superpowers directory so local dev skills are available
PROMPT="Change to directory $TEST_PROJECT and then execute the implementation plan at docs/plans/implementation-plan.md using the subagent-driven-development skill.
IMPORTANT: Follow the skill exactly. I will be verifying that you:
1. Read the plan once at the beginning
2. Provide full task text to subagents (don't make them read files)
3. Ensure subagents do self-review before reporting
4. Run spec compliance review before code quality review
5. Use review loops when issues are found
Begin now. Execute the plan."
echo "Running Claude (output will be shown below and saved to $OUTPUT_FILE)..."
echo "================================================================================"
cd "$SCRIPT_DIR/../.." && timeout 1800 claude -p "$PROMPT" --allowed-tools=all --add-dir "$TEST_PROJECT" --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
echo ""
echo "================================================================================"
echo "EXECUTION FAILED (exit code: $?)"
exit 1
}
echo "================================================================================"
echo ""
echo "Execution complete. Analyzing results..."
echo ""
# Find the session transcript
# Session files are in ~/.claude/projects/-<working-dir>/<session-id>.jsonl
WORKING_DIR_ESCAPED=$(cd "$SCRIPT_DIR/../.." && pwd | sed 's/\//-/g')
SESSION_DIR="$HOME/.claude/projects/$WORKING_DIR_ESCAPED"
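# Example mapping (hypothetical path): /home/dev/superpowers -> ~/.claude/projects/-home-dev-superpowers/<session-id>.jsonl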
# Find the most recent session file (created during this test run)
SESSION_FILE=$(find "$SESSION_DIR" -name "*.jsonl" -type f -mmin -60 -exec ls -t {} + 2>/dev/null | head -1)
if [ -z "$SESSION_FILE" ]; then
echo "ERROR: Could not find session transcript file"
echo "Looked in: $SESSION_DIR"
exit 1
fi
echo "Analyzing session transcript: $(basename "$SESSION_FILE")"
echo ""
# Verification tests
FAILED=0
echo "=== Verification Tests ==="
echo ""
# Test 1: Skill was invoked
echo "Test 1: Skill tool invoked..."
if grep -q '"name":"Skill".*"skill":"superpowers:subagent-driven-development"' "$SESSION_FILE"; then
echo " [PASS] subagent-driven-development skill was invoked"
else
echo " [FAIL] Skill was not invoked"
FAILED=$((FAILED + 1))
fi
echo ""
# Test 2: Subagents were used (Task tool)
echo "Test 2: Subagents dispatched..."
task_count=$(grep -c '"name":"Task"' "$SESSION_FILE" || true)
if [ "$task_count" -ge 2 ]; then
echo " [PASS] $task_count subagents dispatched"
else
echo " [FAIL] Only $task_count subagent(s) dispatched (expected >= 2)"
FAILED=$((FAILED + 1))
fi
echo ""
# Test 3: TodoWrite was used for tracking
echo "Test 3: Task tracking..."
todo_count=$(grep -c '"name":"TodoWrite"' "$SESSION_FILE" || true)
if [ "$todo_count" -ge 1 ]; then
echo " [PASS] TodoWrite used $todo_count time(s) for task tracking"
else
echo " [FAIL] TodoWrite not used"
FAILED=$((FAILED + 1))
fi
echo ""
# Test 6: Implementation actually works
echo "Test 6: Implementation verification..."
if [ -f "$TEST_PROJECT/src/math.js" ]; then
echo " [PASS] src/math.js created"
if grep -q "export function add" "$TEST_PROJECT/src/math.js"; then
echo " [PASS] add function exists"
else
echo " [FAIL] add function missing"
FAILED=$((FAILED + 1))
fi
if grep -q "export function multiply" "$TEST_PROJECT/src/math.js"; then
echo " [PASS] multiply function exists"
else
echo " [FAIL] multiply function missing"
FAILED=$((FAILED + 1))
fi
else
echo " [FAIL] src/math.js not created"
FAILED=$((FAILED + 1))
fi
if [ -f "$TEST_PROJECT/test/math.test.js" ]; then
echo " [PASS] test/math.test.js created"
else
echo " [FAIL] test/math.test.js not created"
FAILED=$((FAILED + 1))
fi
# Try running tests
if cd "$TEST_PROJECT" && npm test > test-output.txt 2>&1; then
echo " [PASS] Tests pass"
else
echo " [FAIL] Tests failed"
cat test-output.txt
FAILED=$((FAILED + 1))
fi
echo ""
# Test 7: Git commits show proper workflow
echo "Test 7: Git commit history..."
commit_count=$(git -C "$TEST_PROJECT" log --oneline | wc -l)
if [ "$commit_count" -gt 2 ]; then # Initial + at least 2 task commits
echo " [PASS] Multiple commits created ($commit_count total)"
else
echo " [FAIL] Too few commits ($commit_count, expected >2)"
FAILED=$((FAILED + 1))
fi
echo ""
# Test 8: Check for extra features (spec compliance should catch)
echo "Test 8: No extra features added (spec compliance)..."
if grep -q "export function divide\|export function power\|export function subtract" "$TEST_PROJECT/src/math.js" 2>/dev/null; then
echo " [WARN] Extra features found (spec review should have caught this)"
# Not failing on this as it tests reviewer effectiveness
else
echo " [PASS] No extra features added"
fi
echo ""
# Token Usage Analysis
echo "========================================="
echo " Token Usage Analysis"
echo "========================================="
echo ""
python3 "$SCRIPT_DIR/analyze-token-usage.py" "$SESSION_FILE"
echo ""
# Summary
echo "========================================"
echo " Test Summary"
echo "========================================"
echo ""
if [ $FAILED -eq 0 ]; then
echo "STATUS: PASSED"
echo "All verification tests passed!"
echo ""
echo "The subagent-driven-development skill correctly:"
echo " ✓ Reads plan once at start"
echo " ✓ Provides full task text to subagents"
echo " ✓ Enforces self-review"
echo " ✓ Runs spec compliance before code quality"
echo " ✓ Spec reviewer verifies independently"
echo " ✓ Produces working implementation"
exit 0
else
echo "STATUS: FAILED"
echo "Failed $FAILED verification tests"
echo ""
echo "Output saved to: $OUTPUT_FILE"
echo ""
echo "Review the output to see what went wrong."
exit 1
fi

View File

@@ -0,0 +1,139 @@
#!/usr/bin/env bash
# Test: subagent-driven-development skill
# Verifies that the skill is loaded and follows correct workflow
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"
echo "=== Test: subagent-driven-development skill ==="
echo ""
# Test 1: Verify skill can be loaded
echo "Test 1: Skill loading..."
output=$(run_claude "What is the subagent-driven-development skill? Describe its key steps briefly." 30)
if assert_contains "$output" "subagent-driven-development" "Skill is recognized"; then
: # pass
else
exit 1
fi
if assert_contains "$output" "Load Plan\|read.*plan\|extract.*tasks" "Mentions loading plan"; then
: # pass
else
exit 1
fi
echo ""
# Test 2: Verify skill describes correct workflow order
echo "Test 2: Workflow ordering..."
output=$(run_claude "In the subagent-driven-development skill, what comes first: spec compliance review or code quality review? Be specific about the order." 30)
if assert_order "$output" "spec.*compliance" "code.*quality" "Spec compliance before code quality"; then
: # pass
else
exit 1
fi
echo ""
# Test 3: Verify self-review is mentioned
echo "Test 3: Self-review requirement..."
output=$(run_claude "Does the subagent-driven-development skill require implementers to do self-review? What should they check?" 30)
if assert_contains "$output" "self-review\|self review" "Mentions self-review"; then
: # pass
else
exit 1
fi
if assert_contains "$output" "completeness\|Completeness" "Checks completeness"; then
: # pass
else
exit 1
fi
echo ""
# Test 4: Verify plan is read once
echo "Test 4: Plan reading efficiency..."
output=$(run_claude "In subagent-driven-development, how many times should the controller read the plan file? When does this happen?" 30)
if assert_contains "$output" "once\|one time\|single" "Read plan once"; then
: # pass
else
exit 1
fi
if assert_contains "$output" "Step 1\|beginning\|start\|Load Plan" "Read at beginning"; then
: # pass
else
exit 1
fi
echo ""
# Test 5: Verify spec compliance reviewer is skeptical
echo "Test 5: Spec compliance reviewer mindset..."
output=$(run_claude "What is the spec compliance reviewer's attitude toward the implementer's report in subagent-driven-development?" 30)
if assert_contains "$output" "not trust\|don't trust\|skeptical\|verify.*independently\|suspiciously" "Reviewer is skeptical"; then
: # pass
else
exit 1
fi
if assert_contains "$output" "read.*code\|inspect.*code\|verify.*code" "Reviewer reads code"; then
: # pass
else
exit 1
fi
echo ""
# Test 6: Verify review loops
echo "Test 6: Review loop requirements..."
output=$(run_claude "In subagent-driven-development, what happens if a reviewer finds issues? Is it a one-time review or a loop?" 30)
if assert_contains "$output" "loop\|again\|repeat\|until.*approved\|until.*compliant" "Review loops mentioned"; then
: # pass
else
exit 1
fi
if assert_contains "$output" "implementer.*fix\|fix.*issues" "Implementer fixes issues"; then
: # pass
else
exit 1
fi
echo ""
# Test 7: Verify full task text is provided
echo "Test 7: Task context provision..."
output=$(run_claude "In subagent-driven-development, how does the controller provide task information to the implementer subagent? Does it make them read a file or provide it directly?" 30)
if assert_contains "$output" "provide.*directly\|full.*text\|paste\|include.*prompt" "Provides text directly"; then
: # pass
else
exit 1
fi
if assert_not_contains "$output" "read.*file\|open.*file" "Doesn't make subagent read file"; then
: # pass
else
exit 1
fi
echo ""
echo "=== All subagent-driven-development skill tests passed ==="

View File

@@ -0,0 +1,3 @@
The plan is done. docs/plans/auth-system.md has everything.
Do subagent-driven development on this - start with Task 1, dispatch a subagent, then we'll review.

View File

@@ -0,0 +1,17 @@
Great, the plan is complete. I've saved it to docs/plans/auth-system.md.
Here's a summary of what we designed:
- Task 1: Add User Model with email/password fields
- Task 2: Create auth routes for login/register
- Task 3: Add JWT middleware for protected routes
- Task 4: Write tests for all auth functionality
Two execution options:
1. Subagent-Driven (this session) - dispatch a fresh subagent per task
2. Parallel Session (separate) - open new Claude Code session
Which approach do you want?
---
subagent-driven-development, please

View File

@@ -0,0 +1,11 @@
[Previous assistant message]:
Plan complete and saved to docs/plans/auth-system.md.
Two execution options:
1. Subagent-Driven (this session) - I dispatch a fresh subagent per task, review between tasks, fast iteration within this conversation
2. Parallel Session (separate) - Open a new Claude Code session with the execute-plan skill, batch execution with review checkpoints
Which approach do you want to use for implementation?
[Your response]:
subagent-driven-development, please

View File

@@ -0,0 +1,8 @@
I have my implementation plan ready at docs/plans/auth-system.md.
I want to use subagent-driven-development to execute it. That means:
- Dispatch a fresh subagent for each task in the plan
- Review the output between tasks
- Keep iteration fast within this conversation
Let's start - please read the plan and begin dispatching subagents for each task.

View File

@@ -0,0 +1,3 @@
I have a plan at docs/plans/auth-system.md that's ready to implement.
subagent-driven-development, please

View File

@@ -0,0 +1 @@
please use the brainstorming skill to help me think through this feature

View File

@@ -0,0 +1,3 @@
Plan is at docs/plans/auth-system.md.
subagent-driven-development, please. Don't waste time - just read the plan and start dispatching subagents immediately.

View File

@@ -0,0 +1 @@
use systematic-debugging to figure out what's wrong

View File

@@ -0,0 +1,70 @@
#!/bin/bash
# Run all explicit skill request tests
# Usage: ./run-all.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROMPTS_DIR="$SCRIPT_DIR/prompts"
echo "=== Running All Explicit Skill Request Tests ==="
echo ""
PASSED=0
FAILED=0
RESULTS=""
# Test: subagent-driven-development, please
echo ">>> Test 1: subagent-driven-development-please"
if "$SCRIPT_DIR/run-test.sh" "subagent-driven-development" "$PROMPTS_DIR/subagent-driven-development-please.txt"; then
PASSED=$((PASSED + 1))
RESULTS="$RESULTS\nPASS: subagent-driven-development-please"
else
FAILED=$((FAILED + 1))
RESULTS="$RESULTS\nFAIL: subagent-driven-development-please"
fi
echo ""
# Test: use systematic-debugging
echo ">>> Test 2: use-systematic-debugging"
if "$SCRIPT_DIR/run-test.sh" "systematic-debugging" "$PROMPTS_DIR/use-systematic-debugging.txt"; then
PASSED=$((PASSED + 1))
RESULTS="$RESULTS\nPASS: use-systematic-debugging"
else
FAILED=$((FAILED + 1))
RESULTS="$RESULTS\nFAIL: use-systematic-debugging"
fi
echo ""
# Test: please use brainstorming
echo ">>> Test 3: please-use-brainstorming"
if "$SCRIPT_DIR/run-test.sh" "brainstorming" "$PROMPTS_DIR/please-use-brainstorming.txt"; then
PASSED=$((PASSED + 1))
RESULTS="$RESULTS\nPASS: please-use-brainstorming"
else
FAILED=$((FAILED + 1))
RESULTS="$RESULTS\nFAIL: please-use-brainstorming"
fi
echo ""
# Test: mid-conversation execute plan
echo ">>> Test 4: mid-conversation-execute-plan"
if "$SCRIPT_DIR/run-test.sh" "subagent-driven-development" "$PROMPTS_DIR/mid-conversation-execute-plan.txt"; then
PASSED=$((PASSED + 1))
RESULTS="$RESULTS\nPASS: mid-conversation-execute-plan"
else
FAILED=$((FAILED + 1))
RESULTS="$RESULTS\nFAIL: mid-conversation-execute-plan"
fi
echo ""
echo "=== Summary ==="
echo -e "$RESULTS"
echo ""
echo "Passed: $PASSED"
echo "Failed: $FAILED"
echo "Total: $((PASSED + FAILED))"
if [ "$FAILED" -gt 0 ]; then
exit 1
fi

View File

@@ -0,0 +1,100 @@
#!/bin/bash
# Test where Claude explicitly describes subagent-driven-development before user requests it
# This mimics the original failure scenario
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/claude-describes"
mkdir -p "$OUTPUT_DIR"
PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/plans"
echo "=== Test: Claude Describes SDD First ==="
echo "Output dir: $OUTPUT_DIR"
echo ""
cd "$PROJECT_DIR"
# Create a plan
cat > "$PROJECT_DIR/docs/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan
## Task 1: Add User Model
Create user model with email and password fields.
## Task 2: Add Auth Routes
Create login and register endpoints.
## Task 3: Add JWT Middleware
Protect routes with JWT validation.
EOF
# Turn 1: Have Claude describe execution options including SDD
echo ">>> Turn 1: Ask Claude to describe execution options..."
claude -p "I have a plan at docs/plans/auth-system.md. Tell me about my options for executing it, including what subagent-driven-development means and how it works." \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn1.json" 2>&1 || true
echo "Done."
# Turn 2: THE CRITICAL TEST - now that Claude has explained it
echo ">>> Turn 2: Request subagent-driven-development..."
FINAL_LOG="$OUTPUT_DIR/turn2.json"
claude -p "subagent-driven-development, please" \
--continue \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$FINAL_LOG" 2>&1 || true
echo "Done."
echo ""
echo "=== Results ==="
# Check Turn 1 to see if Claude described SDD
echo "Turn 1 - Claude's description of options (excerpt):"
grep '"type":"assistant"' "$OUTPUT_DIR/turn1.json" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
echo ""
echo "---"
echo ""
# Check final turn
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$FINAL_LOG" && grep -qE "$SKILL_PATTERN" "$FINAL_LOG"; then
echo "PASS: Skill was triggered after Claude described it"
TRIGGERED=true
else
echo "FAIL: Skill was NOT triggered (Claude may have thought it already knew)"
TRIGGERED=false
echo ""
echo "Tools invoked in final turn:"
grep '"type":"tool_use"' "$FINAL_LOG" | grep -o '"name":"[^"]*"' | sort -u | head -10 || echo " (none)"
echo ""
echo "Final turn response:"
grep '"type":"assistant"' "$FINAL_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
fi
echo ""
echo "Skills triggered in final turn:"
grep -o '"skill":"[^"]*"' "$FINAL_LOG" 2>/dev/null | sort -u || echo " (none)"
echo ""
echo "Logs in: $OUTPUT_DIR"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,113 @@
#!/bin/bash
# Extended multi-turn test with more conversation history
# This tries to reproduce the failure by building more context
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/extended-multiturn"
mkdir -p "$OUTPUT_DIR"
PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/plans"
echo "=== Extended Multi-Turn Test ==="
echo "Output dir: $OUTPUT_DIR"
echo "Plugin dir: $PLUGIN_DIR"
echo ""
cd "$PROJECT_DIR"
# Turn 1: Start brainstorming
echo ">>> Turn 1: Brainstorming request..."
claude -p "I want to add user authentication to my app. Help me think through this." \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn1.json" 2>&1 || true
echo "Done."
# Turn 2: Answer a brainstorming question
echo ">>> Turn 2: Answering questions..."
claude -p "Let's use JWT tokens with 24-hour expiry. Email/password registration." \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn2.json" 2>&1 || true
echo "Done."
# Turn 3: Ask to write a plan
echo ">>> Turn 3: Requesting plan..."
claude -p "Great, write this up as an implementation plan." \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn3.json" 2>&1 || true
echo "Done."
# Turn 4: Confirm plan looks good
echo ">>> Turn 4: Confirming plan..."
claude -p "The plan looks good. What are my options for executing it?" \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$OUTPUT_DIR/turn4.json" 2>&1 || true
echo "Done."
# Turn 5: THE CRITICAL TEST
echo ">>> Turn 5: Requesting subagent-driven-development..."
FINAL_LOG="$OUTPUT_DIR/turn5.json"
claude -p "subagent-driven-development, please" \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$FINAL_LOG" 2>&1 || true
echo "Done."
echo ""
echo "=== Results ==="
# Check final turn
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$FINAL_LOG" && grep -qE "$SKILL_PATTERN" "$FINAL_LOG"; then
echo "PASS: Skill was triggered"
TRIGGERED=true
else
echo "FAIL: Skill was NOT triggered"
TRIGGERED=false
# Show what was invoked instead
echo ""
echo "Tools invoked in final turn:"
grep '"type":"tool_use"' "$FINAL_LOG" | jq -r '.content[] | select(.type=="tool_use") | .name' 2>/dev/null | head -10 || \
grep -o '"name":"[^"]*"' "$FINAL_LOG" | head -10 || echo " (none found)"
fi
echo ""
echo "Skills triggered:"
grep -o '"skill":"[^"]*"' "$FINAL_LOG" 2>/dev/null | sort -u || echo " (none)"
echo ""
echo "Final turn response (first 500 chars):"
grep '"type":"assistant"' "$FINAL_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"
echo ""
echo "Logs in: $OUTPUT_DIR"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,144 @@
#!/bin/bash
# Test with haiku model and user's CLAUDE.md
# This tests whether a cheaper/faster model fails more easily
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/haiku"
mkdir -p "$OUTPUT_DIR"
PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/plans"
mkdir -p "$PROJECT_DIR/.claude"
echo "=== Haiku Model Test with User CLAUDE.md ==="
echo "Output dir: $OUTPUT_DIR"
echo "Plugin dir: $PLUGIN_DIR"
echo ""
cd "$PROJECT_DIR"
# Copy user's CLAUDE.md to simulate real environment
if [ -f "$HOME/.claude/CLAUDE.md" ]; then
cp "$HOME/.claude/CLAUDE.md" "$PROJECT_DIR/.claude/CLAUDE.md"
echo "Copied user CLAUDE.md"
else
echo "No user CLAUDE.md found, proceeding without"
fi
# Create a dummy plan file
cat > "$PROJECT_DIR/docs/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan
## Task 1: Add User Model
Create user model with email and password fields.
## Task 2: Add Auth Routes
Create login and register endpoints.
## Task 3: Add JWT Middleware
Protect routes with JWT validation.
## Task 4: Write Tests
Add comprehensive test coverage.
EOF
echo ""
# Turn 1: Start brainstorming
echo ">>> Turn 1: Brainstorming request..."
claude -p "I want to add user authentication to my app. Help me think through this." \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn1.json" 2>&1 || true
echo "Done."
# Turn 2: Answer questions
echo ">>> Turn 2: Answering questions..."
claude -p "Let's use JWT tokens with 24-hour expiry. Email/password registration." \
--continue \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn2.json" 2>&1 || true
echo "Done."
# Turn 3: Ask to write a plan
echo ">>> Turn 3: Requesting plan..."
claude -p "Great, write this up as an implementation plan." \
--continue \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 3 \
--output-format stream-json \
> "$OUTPUT_DIR/turn3.json" 2>&1 || true
echo "Done."
# Turn 4: Confirm plan looks good
echo ">>> Turn 4: Confirming plan..."
claude -p "The plan looks good. What are my options for executing it?" \
--continue \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$OUTPUT_DIR/turn4.json" 2>&1 || true
echo "Done."
# Turn 5: THE CRITICAL TEST
echo ">>> Turn 5: Requesting subagent-driven-development..."
FINAL_LOG="$OUTPUT_DIR/turn5.json"
claude -p "subagent-driven-development, please" \
--continue \
--model haiku \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$FINAL_LOG" 2>&1 || true
echo "Done."
echo ""
echo "=== Results (Haiku) ==="
# Check final turn
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$FINAL_LOG" && grep -qE "$SKILL_PATTERN" "$FINAL_LOG"; then
echo "PASS: Skill was triggered"
TRIGGERED=true
else
echo "FAIL: Skill was NOT triggered"
TRIGGERED=false
echo ""
echo "Tools invoked in final turn:"
grep '"type":"tool_use"' "$FINAL_LOG" | grep -o '"name":"[^"]*"' | head -10 || echo " (none)"
fi
echo ""
echo "Skills triggered:"
grep -o '"skill":"[^"]*"' "$FINAL_LOG" 2>/dev/null | sort -u || echo " (none)"
echo ""
echo "Final turn response (first 500 chars):"
grep '"type":"assistant"' "$FINAL_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"
echo ""
echo "Logs in: $OUTPUT_DIR"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,143 @@
#!/bin/bash
# Test explicit skill requests in multi-turn conversations
# Usage: ./run-multiturn-test.sh
#
# This test builds actual conversation history to reproduce the failure mode
# where Claude skips skill invocation after extended conversation
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/multiturn"
mkdir -p "$OUTPUT_DIR"
# Create project directory (conversation is cwd-based)
PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/plans"
echo "=== Multi-Turn Explicit Skill Request Test ==="
echo "Output dir: $OUTPUT_DIR"
echo "Project dir: $PROJECT_DIR"
echo "Plugin dir: $PLUGIN_DIR"
echo ""
cd "$PROJECT_DIR"
# Create a dummy plan file
cat > "$PROJECT_DIR/docs/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan
## Task 1: Add User Model
Create user model with email and password fields.
## Task 2: Add Auth Routes
Create login and register endpoints.
## Task 3: Add JWT Middleware
Protect routes with JWT validation.
## Task 4: Write Tests
Add comprehensive test coverage.
EOF
# Turn 1: Start a planning conversation
echo ">>> Turn 1: Starting planning conversation..."
TURN1_LOG="$OUTPUT_DIR/turn1.json"
claude -p "I need to implement an authentication system. Let's plan this out. The requirements are: user registration with email/password, JWT tokens, and protected routes." \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$TURN1_LOG" 2>&1 || true
echo "Turn 1 complete."
echo ""
# Turn 2: Continue with more planning detail
echo ">>> Turn 2: Continuing planning..."
TURN2_LOG="$OUTPUT_DIR/turn2.json"
claude -p "Good analysis. I've already written the plan to docs/plans/auth-system.md. Now I'm ready to implement. What are my options for execution?" \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$TURN2_LOG" 2>&1 || true
echo "Turn 2 complete."
echo ""
# Turn 3: The critical test - ask for subagent-driven-development
echo ">>> Turn 3: Requesting subagent-driven-development..."
TURN3_LOG="$OUTPUT_DIR/turn3.json"
claude -p "subagent-driven-development, please" \
--continue \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns 2 \
--output-format stream-json \
> "$TURN3_LOG" 2>&1 || true
echo "Turn 3 complete."
echo ""
echo "=== Results ==="
# Check if skill was triggered in Turn 3
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$TURN3_LOG" && grep -qE "$SKILL_PATTERN" "$TURN3_LOG"; then
echo "PASS: Skill 'subagent-driven-development' was triggered in Turn 3"
TRIGGERED=true
else
echo "FAIL: Skill 'subagent-driven-development' was NOT triggered in Turn 3"
TRIGGERED=false
fi
# Show what skills were triggered
echo ""
echo "Skills triggered in Turn 3:"
grep -o '"skill":"[^"]*"' "$TURN3_LOG" 2>/dev/null | sort -u || echo " (none)"
# Check for premature action in Turn 3
echo ""
echo "Checking for premature action in Turn 3..."
FIRST_SKILL_LINE=$(grep -n '"name":"Skill"' "$TURN3_LOG" | head -1 | cut -d: -f1)
if [ -n "$FIRST_SKILL_LINE" ]; then
PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$TURN3_LOG" | \
grep '"type":"tool_use"' | \
grep -v '"name":"Skill"' | \
grep -v '"name":"TodoWrite"' || true)
if [ -n "$PREMATURE_TOOLS" ]; then
echo "WARNING: Tools invoked BEFORE Skill tool in Turn 3:"
echo "$PREMATURE_TOOLS" | head -5
else
echo "OK: No premature tool invocations detected"
fi
else
echo "WARNING: No Skill invocation found in Turn 3"
# Show what WAS invoked
echo ""
echo "Tools invoked in Turn 3:"
grep '"type":"tool_use"' "$TURN3_LOG" | grep -o '"name":"[^"]*"' | head -10 || echo " (none)"
fi
# Show Turn 3 assistant response
echo ""
echo "Turn 3 first assistant response (truncated):"
grep '"type":"assistant"' "$TURN3_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"
echo ""
echo "Logs:"
echo " Turn 1: $TURN1_LOG"
echo " Turn 2: $TURN2_LOG"
echo " Turn 3: $TURN3_LOG"
echo "Timestamp: $TIMESTAMP"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,136 @@
#!/bin/bash
# Test explicit skill requests (user names a skill directly)
# Usage: ./run-test.sh <skill-name> <prompt-file>
#
# Tests whether Claude invokes a skill when the user explicitly requests it by name
# (without using the plugin namespace prefix)
#
# Uses isolated HOME to avoid user context interference
set -e
SKILL_NAME="$1"
PROMPT_FILE="$2"
MAX_TURNS="${3:-3}"
if [ -z "$SKILL_NAME" ] || [ -z "$PROMPT_FILE" ]; then
echo "Usage: $0 <skill-name> <prompt-file> [max-turns]"
echo "Example: $0 subagent-driven-development ./prompts/subagent-driven-development-please.txt"
exit 1
fi
# Get the directory where this script lives
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the superpowers plugin root (two levels up)
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/${SKILL_NAME}"
mkdir -p "$OUTPUT_DIR"
# Read prompt from file
PROMPT=$(cat "$PROMPT_FILE")
echo "=== Explicit Skill Request Test ==="
echo "Skill: $SKILL_NAME"
echo "Prompt file: $PROMPT_FILE"
echo "Max turns: $MAX_TURNS"
echo "Output dir: $OUTPUT_DIR"
echo ""
# Copy prompt for reference
cp "$PROMPT_FILE" "$OUTPUT_DIR/prompt.txt"
# Create a minimal project directory for the test
PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/plans"
# Create a dummy plan file for mid-conversation tests
cat > "$PROJECT_DIR/docs/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan
## Task 1: Add User Model
Create user model with email and password fields.
## Task 2: Add Auth Routes
Create login and register endpoints.
## Task 3: Add JWT Middleware
Protect routes with JWT validation.
EOF
# Run Claude with isolated environment
LOG_FILE="$OUTPUT_DIR/claude-output.json"
cd "$PROJECT_DIR"
echo "Plugin dir: $PLUGIN_DIR"
echo "Running claude -p with explicit skill request..."
echo "Prompt: $PROMPT"
echo ""
timeout 300 claude -p "$PROMPT" \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns "$MAX_TURNS" \
--output-format stream-json \
> "$LOG_FILE" 2>&1 || true
echo ""
echo "=== Results ==="
# Check if skill was triggered (look for Skill tool invocation)
# Match either "skill":"skillname" or "skill":"namespace:skillname"
SKILL_PATTERN='"skill":"([^"]*:)?'"${SKILL_NAME}"'"'
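# e.g. matches both "skill":"subagent-driven-development" and "skill":"superpowers:subagent-driven-development"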
if grep -q '"name":"Skill"' "$LOG_FILE" && grep -qE "$SKILL_PATTERN" "$LOG_FILE"; then
echo "PASS: Skill '$SKILL_NAME' was triggered"
TRIGGERED=true
else
echo "FAIL: Skill '$SKILL_NAME' was NOT triggered"
TRIGGERED=false
fi
# Show what skills WERE triggered
echo ""
echo "Skills triggered in this run:"
grep -o '"skill":"[^"]*"' "$LOG_FILE" 2>/dev/null | sort -u || echo " (none)"
# Check if Claude took action BEFORE invoking the skill (the failure mode)
echo ""
echo "Checking for premature action..."
# Look for tool invocations before the Skill invocation
# This detects the failure mode where Claude starts doing work without loading the skill
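# Illustrative failure case (the tool name here is only an example): a '"type":"tool_use"' event
# with '"name":"Read"' appearing before the first '"name":"Skill"' event counts as premature work.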
FIRST_SKILL_LINE=$(grep -n '"name":"Skill"' "$LOG_FILE" | head -1 | cut -d: -f1)
if [ -n "$FIRST_SKILL_LINE" ]; then
# Check if any non-Skill, non-system tools were invoked before the first Skill invocation
# Filter out system messages, TodoWrite (planning is ok), and other non-action tools
PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$LOG_FILE" | \
grep '"type":"tool_use"' | \
grep -v '"name":"Skill"' | \
grep -v '"name":"TodoWrite"' || true)
if [ -n "$PREMATURE_TOOLS" ]; then
echo "WARNING: Tools invoked BEFORE Skill tool:"
echo "$PREMATURE_TOOLS" | head -5
echo ""
echo "This indicates Claude started working before loading the requested skill."
else
echo "OK: No premature tool invocations detected"
fi
else
echo "WARNING: No Skill invocation found at all"
fi
# Show first assistant message
echo ""
echo "First assistant response (truncated):"
grep '"type":"assistant"' "$LOG_FILE" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"
echo ""
echo "Full log: $LOG_FILE"
echo "Timestamp: $TIMESTAMP"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,165 @@
#!/usr/bin/env bash
# Main test runner for OpenCode plugin test suite
# Runs all tests and reports results
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$SCRIPT_DIR"
echo "========================================"
echo " OpenCode Plugin Test Suite"
echo "========================================"
echo ""
echo "Repository: $(cd ../.. && pwd)"
echo "Test time: $(date)"
echo ""
# Parse command line arguments
RUN_INTEGRATION=false
VERBOSE=false
SPECIFIC_TEST=""
while [[ $# -gt 0 ]]; do
case $1 in
--integration|-i)
RUN_INTEGRATION=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--test|-t)
SPECIFIC_TEST="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " --integration, -i Run integration tests (requires OpenCode)"
echo " --verbose, -v Show verbose output"
echo " --test, -t NAME Run only the specified test"
echo " --help, -h Show this help"
echo ""
echo "Tests:"
echo " test-plugin-loading.sh Verify plugin installation and structure"
echo " test-skills-core.sh Test skills-core.js library functions"
echo " test-tools.sh Test use_skill and find_skills tools (integration)"
echo " test-priority.sh Test skill priority resolution (integration)"
exit 0
;;
*)
echo "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# List of tests to run (no external dependencies)
tests=(
"test-plugin-loading.sh"
"test-skills-core.sh"
)
# Integration tests (require OpenCode)
integration_tests=(
"test-tools.sh"
"test-priority.sh"
)
# Add integration tests if requested
if [ "$RUN_INTEGRATION" = true ]; then
tests+=("${integration_tests[@]}")
fi
# Filter to specific test if requested
if [ -n "$SPECIFIC_TEST" ]; then
tests=("$SPECIFIC_TEST")
fi
# Track results
passed=0
failed=0
skipped=0
# Run each test
for test in "${tests[@]}"; do
echo "----------------------------------------"
echo "Running: $test"
echo "----------------------------------------"
test_path="$SCRIPT_DIR/$test"
if [ ! -f "$test_path" ]; then
echo " [SKIP] Test file not found: $test"
skipped=$((skipped + 1))
continue
fi
if [ ! -x "$test_path" ]; then
echo " Making $test executable..."
chmod +x "$test_path"
fi
start_time=$(date +%s)
if [ "$VERBOSE" = true ]; then
if bash "$test_path"; then
end_time=$(date +%s)
duration=$((end_time - start_time))
echo ""
echo " [PASS] $test (${duration}s)"
passed=$((passed + 1))
else
end_time=$(date +%s)
duration=$((end_time - start_time))
echo ""
echo " [FAIL] $test (${duration}s)"
failed=$((failed + 1))
fi
else
# Capture output for non-verbose mode
if output=$(bash "$test_path" 2>&1); then
end_time=$(date +%s)
duration=$((end_time - start_time))
echo " [PASS] (${duration}s)"
passed=$((passed + 1))
else
end_time=$(date +%s)
duration=$((end_time - start_time))
echo " [FAIL] (${duration}s)"
echo ""
echo " Output:"
echo "$output" | sed 's/^/ /'
failed=$((failed + 1))
fi
fi
echo ""
done
# Print summary
echo "========================================"
echo " Test Results Summary"
echo "========================================"
echo ""
echo " Passed: $passed"
echo " Failed: $failed"
echo " Skipped: $skipped"
echo ""
if [ "$RUN_INTEGRATION" = false ] && [ ${#integration_tests[@]} -gt 0 ]; then
echo "Note: Integration tests were not run."
echo "Use --integration flag to run tests that require OpenCode."
echo ""
fi
if [ $failed -gt 0 ]; then
echo "STATUS: FAILED"
exit 1
else
echo "STATUS: PASSED"
exit 0
fi

View File

@@ -0,0 +1,73 @@
#!/usr/bin/env bash
# Setup script for OpenCode plugin tests
# Creates an isolated test environment with proper plugin installation
set -euo pipefail
# Get the repository root (two levels up from tests/opencode/)
REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
# Create temp home directory for isolation
export TEST_HOME=$(mktemp -d)
export HOME="$TEST_HOME"
export XDG_CONFIG_HOME="$TEST_HOME/.config"
export OPENCODE_CONFIG_DIR="$TEST_HOME/.config/opencode"
# Install plugin to test location
mkdir -p "$HOME/.config/opencode/superpowers"
cp -r "$REPO_ROOT/lib" "$HOME/.config/opencode/superpowers/"
cp -r "$REPO_ROOT/skills" "$HOME/.config/opencode/superpowers/"
# Copy plugin directory
mkdir -p "$HOME/.config/opencode/superpowers/.opencode/plugin"
cp "$REPO_ROOT/.opencode/plugin/superpowers.js" "$HOME/.config/opencode/superpowers/.opencode/plugin/"
# Register plugin via symlink
mkdir -p "$HOME/.config/opencode/plugin"
ln -sf "$HOME/.config/opencode/superpowers/.opencode/plugin/superpowers.js" \
"$HOME/.config/opencode/plugin/superpowers.js"
# Create test skills in different locations for testing
# Personal test skill
mkdir -p "$HOME/.config/opencode/skills/personal-test"
cat > "$HOME/.config/opencode/skills/personal-test/SKILL.md" <<'EOF'
---
name: personal-test
description: Test personal skill for verification
---
# Personal Test Skill
This is a personal skill used for testing.
PERSONAL_SKILL_MARKER_12345
EOF
# Create a project directory for project-level skill tests
mkdir -p "$TEST_HOME/test-project/.opencode/skills/project-test"
cat > "$TEST_HOME/test-project/.opencode/skills/project-test/SKILL.md" <<'EOF'
---
name: project-test
description: Test project skill for verification
---
# Project Test Skill
This is a project skill used for testing.
PROJECT_SKILL_MARKER_67890
EOF
echo "Setup complete: $TEST_HOME"
echo "Plugin installed to: $HOME/.config/opencode/superpowers/.opencode/plugin/superpowers.js"
echo "Plugin registered at: $HOME/.config/opencode/plugin/superpowers.js"
echo "Test project at: $TEST_HOME/test-project"
# Helper function for cleanup (call from tests or trap)
cleanup_test_env() {
if [ -n "${TEST_HOME:-}" ] && [ -d "$TEST_HOME" ]; then
rm -rf "$TEST_HOME"
fi
}
# Export for use in tests
export -f cleanup_test_env
export REPO_ROOT

View File

@@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Test: Plugin Loading
# Verifies that the superpowers plugin loads correctly in OpenCode
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "=== Test: Plugin Loading ==="
# Source setup to create isolated environment
source "$SCRIPT_DIR/setup.sh"
# Trap to cleanup on exit
trap cleanup_test_env EXIT
# Test 1: Verify plugin file exists and is registered
echo "Test 1: Checking plugin registration..."
if [ -L "$HOME/.config/opencode/plugin/superpowers.js" ]; then
echo " [PASS] Plugin symlink exists"
else
echo " [FAIL] Plugin symlink not found at $HOME/.config/opencode/plugin/superpowers.js"
exit 1
fi
# Verify symlink target exists
if [ -f "$(readlink -f "$HOME/.config/opencode/plugin/superpowers.js")" ]; then
echo " [PASS] Plugin symlink target exists"
else
echo " [FAIL] Plugin symlink target does not exist"
exit 1
fi
# Test 2: Verify lib/skills-core.js is in place
echo "Test 2: Checking skills-core.js..."
if [ -f "$HOME/.config/opencode/superpowers/lib/skills-core.js" ]; then
echo " [PASS] skills-core.js exists"
else
echo " [FAIL] skills-core.js not found"
exit 1
fi
# Test 3: Verify skills directory is populated
echo "Test 3: Checking skills directory..."
skill_count=$(find "$HOME/.config/opencode/superpowers/skills" -name "SKILL.md" | wc -l)
if [ "$skill_count" -gt 0 ]; then
echo " [PASS] Found $skill_count skills installed"
else
echo " [FAIL] No skills found in installed location"
exit 1
fi
# Test 4: Check using-superpowers skill exists (critical for bootstrap)
echo "Test 4: Checking using-superpowers skill (required for bootstrap)..."
if [ -f "$HOME/.config/opencode/superpowers/skills/using-superpowers/SKILL.md" ]; then
echo " [PASS] using-superpowers skill exists"
else
echo " [FAIL] using-superpowers skill not found (required for bootstrap)"
exit 1
fi
# Test 5: Verify plugin JavaScript syntax (basic check)
echo "Test 5: Checking plugin JavaScript syntax..."
plugin_file="$HOME/.config/opencode/superpowers/.opencode/plugin/superpowers.js"
if node --check "$plugin_file" 2>/dev/null; then
echo " [PASS] Plugin JavaScript syntax is valid"
else
echo " [FAIL] Plugin has JavaScript syntax errors"
exit 1
fi
# Test 6: Verify personal test skill was created
echo "Test 6: Checking test fixtures..."
if [ -f "$HOME/.config/opencode/skills/personal-test/SKILL.md" ]; then
echo " [PASS] Personal test skill fixture created"
else
echo " [FAIL] Personal test skill fixture not found"
exit 1
fi
echo ""
echo "=== All plugin loading tests passed ==="

View File

@@ -0,0 +1,198 @@
#!/usr/bin/env bash
# Test: Skill Priority Resolution
# Verifies that skills are resolved with correct priority: project > personal > superpowers
# NOTE: These tests require OpenCode to be installed and configured
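# Lookup locations exercised below (created by setup.sh), highest to lowest priority:
#   <project>/.opencode/skills/                  (project)
#   $HOME/.config/opencode/skills/               (personal)
#   $HOME/.config/opencode/superpowers/skills/   (superpowers)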
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "=== Test: Skill Priority Resolution ==="
# Source setup to create isolated environment
source "$SCRIPT_DIR/setup.sh"
# Trap to cleanup on exit
trap cleanup_test_env EXIT
# Create same skill "priority-test" in all three locations with different markers
echo "Setting up priority test fixtures..."
# 1. Create in superpowers location (lowest priority)
mkdir -p "$HOME/.config/opencode/superpowers/skills/priority-test"
cat > "$HOME/.config/opencode/superpowers/skills/priority-test/SKILL.md" <<'EOF'
---
name: priority-test
description: Superpowers version of priority test skill
---
# Priority Test Skill (Superpowers Version)
This is the SUPERPOWERS version of the priority test skill.
PRIORITY_MARKER_SUPERPOWERS_VERSION
EOF
# 2. Create in personal location (medium priority)
mkdir -p "$HOME/.config/opencode/skills/priority-test"
cat > "$HOME/.config/opencode/skills/priority-test/SKILL.md" <<'EOF'
---
name: priority-test
description: Personal version of priority test skill
---
# Priority Test Skill (Personal Version)
This is the PERSONAL version of the priority test skill.
PRIORITY_MARKER_PERSONAL_VERSION
EOF
# 3. Create in project location (highest priority)
mkdir -p "$TEST_HOME/test-project/.opencode/skills/priority-test"
cat > "$TEST_HOME/test-project/.opencode/skills/priority-test/SKILL.md" <<'EOF'
---
name: priority-test
description: Project version of priority test skill
---
# Priority Test Skill (Project Version)
This is the PROJECT version of the priority test skill.
PRIORITY_MARKER_PROJECT_VERSION
EOF
echo " Created priority-test skill in all three locations"
# Test 1: Verify fixture setup
echo ""
echo "Test 1: Verifying test fixtures..."
if [ -f "$HOME/.config/opencode/superpowers/skills/priority-test/SKILL.md" ]; then
echo " [PASS] Superpowers version exists"
else
echo " [FAIL] Superpowers version missing"
exit 1
fi
if [ -f "$HOME/.config/opencode/skills/priority-test/SKILL.md" ]; then
echo " [PASS] Personal version exists"
else
echo " [FAIL] Personal version missing"
exit 1
fi
if [ -f "$TEST_HOME/test-project/.opencode/skills/priority-test/SKILL.md" ]; then
echo " [PASS] Project version exists"
else
echo " [FAIL] Project version missing"
exit 1
fi
# Check if opencode is available for integration tests
if ! command -v opencode &> /dev/null; then
echo ""
echo " [SKIP] OpenCode not installed - skipping integration tests"
echo " To run these tests, install OpenCode: https://opencode.ai"
echo ""
echo "=== Priority fixture tests passed (integration tests skipped) ==="
exit 0
fi
# Test 2: Test that personal overrides superpowers
echo ""
echo "Test 2: Testing personal > superpowers priority..."
echo " Running from outside project directory..."
# Run from HOME (not in project) - should get personal version
cd "$HOME"
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load the priority-test skill. Show me the exact content including any PRIORITY_MARKER text." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
}
if echo "$output" | grep -qi "PRIORITY_MARKER_PERSONAL_VERSION"; then
echo " [PASS] Personal version loaded (overrides superpowers)"
elif echo "$output" | grep -qi "PRIORITY_MARKER_SUPERPOWERS_VERSION"; then
echo " [FAIL] Superpowers version loaded instead of personal"
exit 1
else
echo " [WARN] Could not verify priority marker in output"
echo " Output snippet:"
echo "$output" | grep -i "priority\|personal\|superpowers" | head -10
fi
# Test 3: Test that project overrides both personal and superpowers
echo ""
echo "Test 3: Testing project > personal > superpowers priority..."
echo " Running from project directory..."
# Run from project directory - should get project version
cd "$TEST_HOME/test-project"
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load the priority-test skill. Show me the exact content including any PRIORITY_MARKER text." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
}
if echo "$output" | grep -qi "PRIORITY_MARKER_PROJECT_VERSION"; then
echo " [PASS] Project version loaded (highest priority)"
elif echo "$output" | grep -qi "PRIORITY_MARKER_PERSONAL_VERSION"; then
echo " [FAIL] Personal version loaded instead of project"
exit 1
elif echo "$output" | grep -qi "PRIORITY_MARKER_SUPERPOWERS_VERSION"; then
echo " [FAIL] Superpowers version loaded instead of project"
exit 1
else
echo " [WARN] Could not verify priority marker in output"
echo " Output snippet:"
echo "$output" | grep -i "priority\|project\|personal" | head -10
fi
# Test 4: Test explicit superpowers: prefix bypasses priority
echo ""
echo "Test 4: Testing superpowers: prefix forces superpowers version..."
cd "$TEST_HOME/test-project"
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load superpowers:priority-test specifically. Show me the exact content including any PRIORITY_MARKER text." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
}
if echo "$output" | grep -qi "PRIORITY_MARKER_SUPERPOWERS_VERSION"; then
echo " [PASS] superpowers: prefix correctly forces superpowers version"
elif echo "$output" | grep -qi "PRIORITY_MARKER_PROJECT_VERSION\|PRIORITY_MARKER_PERSONAL_VERSION"; then
echo " [FAIL] superpowers: prefix did not force superpowers version"
exit 1
else
echo " [WARN] Could not verify priority marker in output"
fi
# Test 5: Test explicit project: prefix
echo ""
echo "Test 5: Testing project: prefix forces project version..."
cd "$HOME" # Run from outside project but with project: prefix
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load project:priority-test specifically. Show me the exact content." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
}
# Note: This may fail since we're not in the project directory
# The project: prefix only works when in a project context
if echo "$output" | grep -qi "not found\|error"; then
echo " [PASS] project: prefix correctly fails when not in project context"
else
echo " [INFO] project: prefix behavior outside project context may vary"
fi
echo ""
echo "=== All priority tests passed ==="

View File

@@ -0,0 +1,440 @@
#!/usr/bin/env bash
# Test: Skills Core Library
# Tests the skills-core.js library functions directly via Node.js
# Does not require OpenCode - tests pure library functionality
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "=== Test: Skills Core Library ==="
# Source setup to create isolated environment
source "$SCRIPT_DIR/setup.sh"
# Trap to cleanup on exit
trap cleanup_test_env EXIT
# Test 1: Test extractFrontmatter function
echo "Test 1: Testing extractFrontmatter..."
# Create test file with frontmatter
test_skill_dir="$TEST_HOME/test-skill"
mkdir -p "$test_skill_dir"
cat > "$test_skill_dir/SKILL.md" <<'EOF'
---
name: test-skill
description: A test skill for unit testing
---
# Test Skill Content
This is the content.
EOF
# Run Node.js test using inline function (avoids ESM path resolution issues in test env)
result=$(node -e "
const path = require('path');
const fs = require('fs');
// Inline the extractFrontmatter function for testing
function extractFrontmatter(filePath) {
try {
const content = fs.readFileSync(filePath, 'utf8');
const lines = content.split('\n');
let inFrontmatter = false;
let name = '';
let description = '';
for (const line of lines) {
if (line.trim() === '---') {
if (inFrontmatter) break;
inFrontmatter = true;
continue;
}
if (inFrontmatter) {
const match = line.match(/^(\w+):\s*(.*)$/);
if (match) {
const [, key, value] = match;
if (key === 'name') name = value.trim();
if (key === 'description') description = value.trim();
}
}
}
return { name, description };
} catch (error) {
return { name: '', description: '' };
}
}
const result = extractFrontmatter('$TEST_HOME/test-skill/SKILL.md');
console.log(JSON.stringify(result));
" 2>&1)
if echo "$result" | grep -q '"name":"test-skill"'; then
echo " [PASS] extractFrontmatter parses name correctly"
else
echo " [FAIL] extractFrontmatter did not parse name"
echo " Result: $result"
exit 1
fi
if echo "$result" | grep -q '"description":"A test skill for unit testing"'; then
echo " [PASS] extractFrontmatter parses description correctly"
else
echo " [FAIL] extractFrontmatter did not parse description"
exit 1
fi
# Test 2: Test stripFrontmatter function
echo ""
echo "Test 2: Testing stripFrontmatter..."
result=$(node -e "
const fs = require('fs');
function stripFrontmatter(content) {
const lines = content.split('\n');
let inFrontmatter = false;
let frontmatterEnded = false;
const contentLines = [];
for (const line of lines) {
if (line.trim() === '---') {
if (inFrontmatter) {
frontmatterEnded = true;
continue;
}
inFrontmatter = true;
continue;
}
if (frontmatterEnded || !inFrontmatter) {
contentLines.push(line);
}
}
return contentLines.join('\n').trim();
}
const content = fs.readFileSync('$TEST_HOME/test-skill/SKILL.md', 'utf8');
const stripped = stripFrontmatter(content);
console.log(stripped);
" 2>&1)
if echo "$result" | grep -q "# Test Skill Content"; then
echo " [PASS] stripFrontmatter preserves content"
else
echo " [FAIL] stripFrontmatter did not preserve content"
echo " Result: $result"
exit 1
fi
if ! echo "$result" | grep -q "name: test-skill"; then
echo " [PASS] stripFrontmatter removes frontmatter"
else
echo " [FAIL] stripFrontmatter did not remove frontmatter"
exit 1
fi
# Test 3: Test findSkillsInDir function
echo ""
echo "Test 3: Testing findSkillsInDir..."
# Create multiple test skills
mkdir -p "$TEST_HOME/skills-dir/skill-a"
mkdir -p "$TEST_HOME/skills-dir/skill-b"
mkdir -p "$TEST_HOME/skills-dir/nested/skill-c"
cat > "$TEST_HOME/skills-dir/skill-a/SKILL.md" <<'EOF'
---
name: skill-a
description: First skill
---
# Skill A
EOF
cat > "$TEST_HOME/skills-dir/skill-b/SKILL.md" <<'EOF'
---
name: skill-b
description: Second skill
---
# Skill B
EOF
cat > "$TEST_HOME/skills-dir/nested/skill-c/SKILL.md" <<'EOF'
---
name: skill-c
description: Nested skill
---
# Skill C
EOF
result=$(node -e "
const fs = require('fs');
const path = require('path');
function extractFrontmatter(filePath) {
try {
const content = fs.readFileSync(filePath, 'utf8');
const lines = content.split('\n');
let inFrontmatter = false;
let name = '';
let description = '';
for (const line of lines) {
if (line.trim() === '---') {
if (inFrontmatter) break;
inFrontmatter = true;
continue;
}
if (inFrontmatter) {
const match = line.match(/^(\w+):\s*(.*)$/);
if (match) {
const [, key, value] = match;
if (key === 'name') name = value.trim();
if (key === 'description') description = value.trim();
}
}
}
return { name, description };
} catch (error) {
return { name: '', description: '' };
}
}
function findSkillsInDir(dir, sourceType, maxDepth = 3) {
const skills = [];
if (!fs.existsSync(dir)) return skills;
function recurse(currentDir, depth) {
if (depth > maxDepth) return;
const entries = fs.readdirSync(currentDir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentDir, entry.name);
if (entry.isDirectory()) {
const skillFile = path.join(fullPath, 'SKILL.md');
if (fs.existsSync(skillFile)) {
const { name, description } = extractFrontmatter(skillFile);
skills.push({
path: fullPath,
skillFile: skillFile,
name: name || entry.name,
description: description || '',
sourceType: sourceType
});
}
recurse(fullPath, depth + 1);
}
}
}
recurse(dir, 0);
return skills;
}
const skills = findSkillsInDir('$TEST_HOME/skills-dir', 'test', 3);
console.log(JSON.stringify(skills, null, 2));
" 2>&1)
skill_count=$(echo "$result" | grep -c '"name":' || true)
if [ "$skill_count" -ge 3 ]; then
echo " [PASS] findSkillsInDir found all skills (found $skill_count)"
else
echo " [FAIL] findSkillsInDir did not find all skills (expected 3, found $skill_count)"
echo " Result: $result"
exit 1
fi
if echo "$result" | grep -q '"name": "skill-c"'; then
echo " [PASS] findSkillsInDir found nested skills"
else
echo " [FAIL] findSkillsInDir did not find nested skill"
exit 1
fi
# Test 4: Test resolveSkillPath function
echo ""
echo "Test 4: Testing resolveSkillPath..."
# Create skills in personal and superpowers locations for testing
mkdir -p "$TEST_HOME/personal-skills/shared-skill"
mkdir -p "$TEST_HOME/superpowers-skills/shared-skill"
mkdir -p "$TEST_HOME/superpowers-skills/unique-skill"
cat > "$TEST_HOME/personal-skills/shared-skill/SKILL.md" <<'EOF'
---
name: shared-skill
description: Personal version
---
# Personal Shared
EOF
cat > "$TEST_HOME/superpowers-skills/shared-skill/SKILL.md" <<'EOF'
---
name: shared-skill
description: Superpowers version
---
# Superpowers Shared
EOF
cat > "$TEST_HOME/superpowers-skills/unique-skill/SKILL.md" <<'EOF'
---
name: unique-skill
description: Only in superpowers
---
# Unique
EOF
result=$(node -e "
const fs = require('fs');
const path = require('path');
function resolveSkillPath(skillName, superpowersDir, personalDir) {
const forceSuperpowers = skillName.startsWith('superpowers:');
const actualSkillName = forceSuperpowers ? skillName.replace(/^superpowers:/, '') : skillName;
if (!forceSuperpowers && personalDir) {
const personalPath = path.join(personalDir, actualSkillName);
const personalSkillFile = path.join(personalPath, 'SKILL.md');
if (fs.existsSync(personalSkillFile)) {
return {
skillFile: personalSkillFile,
sourceType: 'personal',
skillPath: actualSkillName
};
}
}
if (superpowersDir) {
const superpowersPath = path.join(superpowersDir, actualSkillName);
const superpowersSkillFile = path.join(superpowersPath, 'SKILL.md');
if (fs.existsSync(superpowersSkillFile)) {
return {
skillFile: superpowersSkillFile,
sourceType: 'superpowers',
skillPath: actualSkillName
};
}
}
return null;
}
const superpowersDir = '$TEST_HOME/superpowers-skills';
const personalDir = '$TEST_HOME/personal-skills';
// Test 1: Shared skill should resolve to personal
const shared = resolveSkillPath('shared-skill', superpowersDir, personalDir);
console.log('SHARED:', JSON.stringify(shared));
// Test 2: superpowers: prefix should force superpowers
const forced = resolveSkillPath('superpowers:shared-skill', superpowersDir, personalDir);
console.log('FORCED:', JSON.stringify(forced));
// Test 3: Unique skill should resolve to superpowers
const unique = resolveSkillPath('unique-skill', superpowersDir, personalDir);
console.log('UNIQUE:', JSON.stringify(unique));
// Test 4: Non-existent skill
const notfound = resolveSkillPath('not-a-skill', superpowersDir, personalDir);
console.log('NOTFOUND:', JSON.stringify(notfound));
" 2>&1)
if echo "$result" | grep -q 'SHARED:.*"sourceType":"personal"'; then
echo " [PASS] Personal skills shadow superpowers skills"
else
echo " [FAIL] Personal skills not shadowing correctly"
echo " Result: $result"
exit 1
fi
if echo "$result" | grep -q 'FORCED:.*"sourceType":"superpowers"'; then
echo " [PASS] superpowers: prefix forces superpowers resolution"
else
echo " [FAIL] superpowers: prefix not working"
exit 1
fi
if echo "$result" | grep -q 'UNIQUE:.*"sourceType":"superpowers"'; then
echo " [PASS] Unique superpowers skills are found"
else
echo " [FAIL] Unique superpowers skills not found"
exit 1
fi
if echo "$result" | grep -q 'NOTFOUND: null'; then
echo " [PASS] Non-existent skills return null"
else
echo " [FAIL] Non-existent skills should return null"
exit 1
fi
# Test 5: Test checkForUpdates function
echo ""
echo "Test 5: Testing checkForUpdates..."
# Create a test git repo
mkdir -p "$TEST_HOME/test-repo"
cd "$TEST_HOME/test-repo"
git init --quiet
git config user.email "test@test.com"
git config user.name "Test"
echo "test" > file.txt
git add file.txt
git commit -m "initial" --quiet
cd "$SCRIPT_DIR"
# Test checkForUpdates on repo without remote (should return false, not error)
result=$(node -e "
const { execSync } = require('child_process');
function checkForUpdates(repoDir) {
try {
const output = execSync('git fetch origin && git status --porcelain=v1 --branch', {
cwd: repoDir,
timeout: 3000,
encoding: 'utf8',
stdio: 'pipe'
});
const statusLines = output.split('\n');
for (const line of statusLines) {
if (line.startsWith('## ') && line.includes('[behind ')) {
return true;
}
}
return false;
} catch (error) {
return false;
}
}
// Test 1: Repo without remote should return false (graceful error handling)
const result1 = checkForUpdates('$TEST_HOME/test-repo');
console.log('NO_REMOTE:', result1);
// Test 2: Non-existent directory should return false
const result2 = checkForUpdates('$TEST_HOME/nonexistent');
console.log('NONEXISTENT:', result2);
// Test 3: Non-git directory should return false
const result3 = checkForUpdates('$TEST_HOME');
console.log('NOT_GIT:', result3);
" 2>&1)
if echo "$result" | grep -q 'NO_REMOTE: false'; then
echo " [PASS] checkForUpdates handles repo without remote gracefully"
else
echo " [FAIL] checkForUpdates should return false for repo without remote"
echo " Result: $result"
exit 1
fi
if echo "$result" | grep -q 'NONEXISTENT: false'; then
echo " [PASS] checkForUpdates handles non-existent directory"
else
echo " [FAIL] checkForUpdates should return false for non-existent directory"
exit 1
fi
if echo "$result" | grep -q 'NOT_GIT: false'; then
echo " [PASS] checkForUpdates handles non-git directory"
else
echo " [FAIL] checkForUpdates should return false for non-git directory"
exit 1
fi
echo ""
echo "=== All skills-core library tests passed ==="

View File

@@ -0,0 +1,104 @@
#!/usr/bin/env bash
# Test: Tools Functionality
# Verifies that use_skill and find_skills tools work correctly
# NOTE: These tests require OpenCode to be installed and configured
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "=== Test: Tools Functionality ==="
# Source setup to create isolated environment
source "$SCRIPT_DIR/setup.sh"
# Trap to cleanup on exit
trap cleanup_test_env EXIT
# Check if opencode is available
if ! command -v opencode &> /dev/null; then
echo " [SKIP] OpenCode not installed - skipping integration tests"
echo " To run these tests, install OpenCode: https://opencode.ai"
exit 0
fi
# Test 1: Test find_skills tool via direct invocation
echo "Test 1: Testing find_skills tool..."
echo " Running opencode with find_skills request..."
# Use timeout to prevent hanging, capture both stdout and stderr
output=$(timeout 60s opencode run --print-logs "Use the find_skills tool to list available skills. Just call the tool and show me the raw output." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
echo " [WARN] OpenCode returned non-zero exit code: $exit_code"
}
# Check for expected patterns in output
if echo "$output" | grep -qi "superpowers:brainstorming\|superpowers:using-superpowers\|Available skills"; then
echo " [PASS] find_skills tool discovered superpowers skills"
else
echo " [FAIL] find_skills did not return expected skills"
echo " Output was:"
echo "$output" | head -50
exit 1
fi
# Check if personal test skill was found
if echo "$output" | grep -qi "personal-test"; then
echo " [PASS] find_skills found personal test skill"
else
echo " [WARN] personal test skill not found in output (may be ok if tool returned subset)"
fi
# Test 2: Test use_skill tool
echo ""
echo "Test 2: Testing use_skill tool..."
echo " Running opencode with use_skill request..."
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load the personal-test skill and show me what you get." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
echo " [WARN] OpenCode returned non-zero exit code: $exit_code"
}
# Check for the skill marker we embedded
if echo "$output" | grep -qi "PERSONAL_SKILL_MARKER_12345\|Personal Test Skill\|Launching skill"; then
echo " [PASS] use_skill loaded personal-test skill content"
else
echo " [FAIL] use_skill did not load personal-test skill correctly"
echo " Output was:"
echo "$output" | head -50
exit 1
fi
# Test 3: Test use_skill with superpowers: prefix
echo ""
echo "Test 3: Testing use_skill with superpowers: prefix..."
echo " Running opencode with superpowers:brainstorming skill..."
output=$(timeout 60s opencode run --print-logs "Use the use_skill tool to load superpowers:brainstorming and tell me the first few lines of what you received." 2>&1) || {
exit_code=$?
if [ $exit_code -eq 124 ]; then
echo " [FAIL] OpenCode timed out after 60s"
exit 1
fi
echo " [WARN] OpenCode returned non-zero exit code: $exit_code"
}
# Check for expected content from brainstorming skill
if echo "$output" | grep -qi "brainstorming\|Launching skill\|skill.*loaded"; then
echo " [PASS] use_skill loaded superpowers:brainstorming skill"
else
echo " [FAIL] use_skill did not load superpowers:brainstorming correctly"
echo " Output was:"
echo "$output" | head -50
exit 1
fi
echo ""
echo "=== All tools tests passed ==="

View File

@@ -0,0 +1,8 @@
I have 4 independent test failures happening in different modules:
1. tests/auth/login.test.ts - "should redirect after login" is failing
2. tests/api/users.test.ts - "should return user list" returns 500
3. tests/components/Button.test.tsx - snapshot mismatch
4. tests/utils/date.test.ts - timezone handling broken
These are unrelated issues in different parts of the codebase. Can you investigate all of them?

View File

@@ -0,0 +1 @@
I have a plan document at docs/plans/2024-01-15-auth-system.md that needs to be executed. Please implement it.

View File

@@ -0,0 +1,3 @@
I just finished implementing the user authentication feature. All the code is committed. Can you review the changes before I merge to main?
The commits are between abc123 and def456.

View File

@@ -0,0 +1,11 @@
The tests are failing with this error:
```
FAIL src/utils/parser.test.ts
● Parser should handle nested objects
TypeError: Cannot read property 'value' of undefined
at parse (src/utils/parser.ts:42:18)
at Object.<anonymous> (src/utils/parser.test.ts:28:20)
```
Can you figure out what's going wrong and fix it?

View File

@@ -0,0 +1,7 @@
I need to add a new feature to validate email addresses. It should:
- Check that there's an @ symbol
- Check that there's at least one character before the @
- Check that there's a dot in the domain part
- Return true/false
Can you implement this?

View File

@@ -0,0 +1,10 @@
Here's the spec for our new authentication system:
Requirements:
- Users can register with email/password
- Users can log in and receive a JWT token
- Protected routes require valid JWT
- Tokens expire after 24 hours
- Support password reset via email
We need to implement this. There are multiple steps involved - user model, auth routes, middleware, email service integration.

View File

@@ -0,0 +1,60 @@
#!/bin/bash
# Run all skill triggering tests
# Usage: ./run-all.sh
set -eo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROMPTS_DIR="$SCRIPT_DIR/prompts"
SKILLS=(
"systematic-debugging"
"test-driven-development"
"writing-plans"
"dispatching-parallel-agents"
"executing-plans"
"requesting-code-review"
)
echo "=== Running Skill Triggering Tests ==="
echo ""
PASSED=0
FAILED=0
RESULTS=()
for skill in "${SKILLS[@]}"; do
prompt_file="$PROMPTS_DIR/${skill}.txt"
if [ ! -f "$prompt_file" ]; then
echo "⚠️ SKIP: No prompt file for $skill"
continue
fi
echo "Testing: $skill"
if "$SCRIPT_DIR/run-test.sh" "$skill" "$prompt_file" 3 2>&1 | tee /tmp/skill-test-$skill.log; then
PASSED=$((PASSED + 1))
RESULTS+=("✅ $skill")
else
FAILED=$((FAILED + 1))
RESULTS+=("❌ $skill")
fi
echo ""
echo "---"
echo ""
done
echo ""
echo "=== Summary ==="
for result in "${RESULTS[@]}"; do
echo " $result"
done
echo ""
echo "Passed: $PASSED"
echo "Failed: $FAILED"
if [ $FAILED -gt 0 ]; then
exit 1
fi

View File

@@ -0,0 +1,88 @@
#!/bin/bash
# Test skill triggering with naive prompts
# Usage: ./run-test.sh <skill-name> <prompt-file>
#
# Tests whether Claude triggers a skill based on a natural prompt
# (without explicitly mentioning the skill)
set -e
SKILL_NAME="$1"
PROMPT_FILE="$2"
MAX_TURNS="${3:-3}"
if [ -z "$SKILL_NAME" ] || [ -z "$PROMPT_FILE" ]; then
echo "Usage: $0 <skill-name> <prompt-file> [max-turns]"
echo "Example: $0 systematic-debugging ./test-prompts/debugging.txt"
exit 1
fi
# Get the directory where this script lives (should be tests/skill-triggering)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the superpowers plugin root (two levels up from tests/skill-triggering)
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/skill-triggering/${SKILL_NAME}"
mkdir -p "$OUTPUT_DIR"
# Read prompt from file
PROMPT=$(cat "$PROMPT_FILE")
echo "=== Skill Triggering Test ==="
echo "Skill: $SKILL_NAME"
echo "Prompt file: $PROMPT_FILE"
echo "Max turns: $MAX_TURNS"
echo "Output dir: $OUTPUT_DIR"
echo ""
# Copy prompt for reference
cp "$PROMPT_FILE" "$OUTPUT_DIR/prompt.txt"
# Run Claude
LOG_FILE="$OUTPUT_DIR/claude-output.json"
cd "$OUTPUT_DIR"
echo "Plugin dir: $PLUGIN_DIR"
echo "Running claude -p with naive prompt..."
timeout 300 claude -p "$PROMPT" \
--plugin-dir "$PLUGIN_DIR" \
--dangerously-skip-permissions \
--max-turns "$MAX_TURNS" \
--output-format stream-json \
> "$LOG_FILE" 2>&1 || true
echo ""
echo "=== Results ==="
# Check if skill was triggered (look for Skill tool invocation)
# In stream-json, tool invocations have "name":"Skill" (not "tool":"Skill")
# Match either "skill":"skillname" or "skill":"namespace:skillname"
SKILL_PATTERN='"skill":"([^"]*:)?'"${SKILL_NAME}"'"'
if grep -q '"name":"Skill"' "$LOG_FILE" && grep -qE "$SKILL_PATTERN" "$LOG_FILE"; then
echo "✅ PASS: Skill '$SKILL_NAME' was triggered"
TRIGGERED=true
else
echo "❌ FAIL: Skill '$SKILL_NAME' was NOT triggered"
TRIGGERED=false
fi
# Show what skills WERE triggered
echo ""
echo "Skills triggered in this run:"
grep -o '"skill":"[^"]*"' "$LOG_FILE" 2>/dev/null | sort -u || echo " (none)"
# Show first assistant message
echo ""
echo "First assistant response (truncated):"
grep '"type":"assistant"' "$LOG_FILE" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"
echo ""
echo "Full log: $LOG_FILE"
echo "Timestamp: $TIMESTAMP"
if [ "$TRIGGERED" = "true" ]; then
exit 0
else
exit 1
fi

View File

@@ -0,0 +1,81 @@
# Go Fractals CLI - Design
## Overview
A command-line tool that generates ASCII art fractals. Supports two fractal types with configurable output.
## Usage
```bash
# Sierpinski triangle
fractals sierpinski --size 32 --depth 5
# Mandelbrot set
fractals mandelbrot --width 80 --height 24 --iterations 100
# Custom character
fractals sierpinski --size 16 --char '#'
# Help
fractals --help
fractals sierpinski --help
```
## Commands
### `sierpinski`
Generates a Sierpinski triangle using recursive subdivision.
Flags:
- `--size` (default: 32) - Width of the triangle base in characters
- `--depth` (default: 5) - Recursion depth
- `--char` (default: '*') - Character to use for filled points
Output: Triangle printed to stdout, one line per row.
### `mandelbrot`
Renders the Mandelbrot set as ASCII art. Maps iteration count to characters.
Flags:
- `--width` (default: 80) - Output width in characters
- `--height` (default: 24) - Output height in characters
- `--iterations` (default: 100) - Maximum iterations for escape calculation
- `--char` (default: gradient) - Single character, or omit for gradient " .:-=+*#%@"
Output: Rectangle printed to stdout.
## Architecture
```
cmd/
fractals/
main.go # Entry point, CLI setup
internal/
sierpinski/
sierpinski.go # Algorithm
sierpinski_test.go
mandelbrot/
mandelbrot.go # Algorithm
mandelbrot_test.go
cli/
root.go # Root command, help
sierpinski.go # Sierpinski subcommand
mandelbrot.go # Mandelbrot subcommand
```
## Dependencies
- Go 1.21+
- `github.com/spf13/cobra` for CLI
## Acceptance Criteria
1. `fractals --help` shows usage
2. `fractals sierpinski` outputs a recognizable triangle
3. `fractals mandelbrot` outputs a recognizable Mandelbrot set
4. `--size`, `--width`, `--height`, `--depth`, `--iterations` flags work
5. `--char` customizes output character
6. Invalid inputs produce clear error messages
7. All tests pass

View File

@@ -0,0 +1,172 @@
# Go Fractals CLI - Implementation Plan
Execute this plan using the `superpowers:subagent-driven-development` skill.
## Context
Building a CLI tool that generates ASCII fractals. See `design.md` for full specification.
## Tasks
### Task 1: Project Setup
Create the Go module and directory structure.
**Do:**
- Initialize `go.mod` with module name `github.com/superpowers-test/fractals`
- Create directory structure: `cmd/fractals/`, `internal/sierpinski/`, `internal/mandelbrot/`, `internal/cli/`
- Create minimal `cmd/fractals/main.go` that prints "fractals cli"
- Add `github.com/spf13/cobra` dependency
**Verify:**
- `go build ./cmd/fractals` succeeds
- `./fractals` prints "fractals cli"
---
### Task 2: CLI Framework with Help
Set up Cobra root command with help output.
**Do:**
- Create `internal/cli/root.go` with root command
- Configure help text showing available subcommands
- Wire root command into `main.go`
**Verify:**
- `./fractals --help` shows usage with "sierpinski" and "mandelbrot" listed as available commands
- `./fractals` (no args) shows help
---
### Task 3: Sierpinski Algorithm
Implement the Sierpinski triangle generation algorithm.
**Do:**
- Create `internal/sierpinski/sierpinski.go`
- Implement `Generate(size, depth int, char rune) []string` that returns lines of the triangle
- Use a recursive midpoint subdivision algorithm (a rough sketch follows after this task)
- Create `internal/sierpinski/sierpinski_test.go` with tests:
- Small triangle (size=4, depth=2) matches expected output
- Size=1 returns single character
- Depth=0 returns filled triangle
**Verify:**
- `go test ./internal/sierpinski/...` passes
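As a point of reference only, here is a minimal sketch of what `Generate` might look like; it assumes `size` is a power of two so the midpoint subdivision aligns cleanly, and the implementing agent is free to structure the real code differently.
```go
package sierpinski

import "strings"

// Generate returns the triangle as text lines: size rows tall and
// 2*size-1 columns wide, built by recursive midpoint subdivision.
func Generate(size, depth int, char rune) []string {
    width := 2*size - 1
    grid := make([][]rune, size)
    for i := range grid {
        grid[i] = make([]rune, width)
        for j := range grid[i] {
            grid[i][j] = ' '
        }
    }
    draw(grid, size-1, 0, size, depth, char)
    lines := make([]string, size)
    for i, row := range grid {
        lines[i] = strings.TrimRight(string(row), " ")
    }
    return lines
}

// draw fills a solid triangle at depth 0, otherwise subdivides into three
// half-size triangles (top, bottom-left, bottom-right), leaving the middle open.
func draw(grid [][]rune, apexX, apexY, size, depth int, char rune) {
    if depth == 0 || size <= 1 {
        for row := 0; row < size; row++ {
            for col := apexX - row; col <= apexX+row; col++ {
                grid[apexY+row][col] = char
            }
        }
        return
    }
    half := size / 2
    draw(grid, apexX, apexY, half, depth-1, char)           // top
    draw(grid, apexX-half, apexY+half, half, depth-1, char) // bottom-left
    draw(grid, apexX+half, apexY+half, half, depth-1, char) // bottom-right
}
```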
---
### Task 4: Sierpinski CLI Integration
Wire the Sierpinski algorithm to a CLI subcommand.
**Do:**
- Create `internal/cli/sierpinski.go` with `sierpinski` subcommand (a Cobra wiring sketch follows after this task)
- Add flags: `--size` (default 32), `--depth` (default 5), `--char` (default '*')
- Call `sierpinski.Generate()` and print result to stdout
**Verify:**
- `./fractals sierpinski` outputs a triangle
- `./fractals sierpinski --size 16 --depth 3` outputs smaller triangle
- `./fractals sierpinski --help` shows flag documentation
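For orientation, the Cobra wiring could look roughly like this; `newSierpinskiCmd` and the flag help strings are illustrative assumptions rather than requirements of the plan (the root command would register it via `AddCommand`).
```go
package cli

import (
    "fmt"

    "github.com/spf13/cobra"

    "github.com/superpowers-test/fractals/internal/sierpinski"
)

// newSierpinskiCmd builds the `sierpinski` subcommand (illustrative name).
func newSierpinskiCmd() *cobra.Command {
    var size, depth int
    var char string

    cmd := &cobra.Command{
        Use:   "sierpinski",
        Short: "Print a Sierpinski triangle as ASCII art",
        RunE: func(cmd *cobra.Command, args []string) error {
            runes := []rune(char)
            if len(runes) == 0 {
                return fmt.Errorf("--char must not be empty")
            }
            for _, line := range sierpinski.Generate(size, depth, runes[0]) {
                fmt.Println(line)
            }
            return nil
        },
    }

    cmd.Flags().IntVar(&size, "size", 32, "width of the triangle base in characters")
    cmd.Flags().IntVar(&depth, "depth", 5, "recursion depth")
    cmd.Flags().StringVar(&char, "char", "*", "character to use for filled points")
    return cmd
}
```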
---
### Task 5: Mandelbrot Algorithm
Implement the Mandelbrot set ASCII renderer.
**Do:**
- Create `internal/mandelbrot/mandelbrot.go`
- Implement `Render(width, height, maxIter int, char string) []string` (an escape-time sketch follows after this task)
- Map complex plane region (-2.5 to 1.0 real, -1.0 to 1.0 imaginary) to output dimensions
- Map iteration count to character gradient " .:-=+*#%@" (or single char if provided)
- Create `internal/mandelbrot/mandelbrot_test.go` with tests:
- Output dimensions match requested width/height
- Known point inside set (0,0) maps to max-iteration character
- Known point outside set (2,0) maps to low-iteration character
**Verify:**
- `go test ./internal/mandelbrot/...` passes
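A rough sketch of the escape-time mapping described above, assuming the plane bounds and gradient from the design; treat it as an illustration rather than the required implementation.
```go
package mandelbrot

import "strings"

// gradient runs from "escapes immediately" to "never escapes".
const gradient = " .:-=+*#%@"

// Render maps the region [-2.5, 1.0] x [-1.0, 1.0] of the complex plane onto a
// width x height character grid using the escape-time algorithm. A non-empty
// char is used for points that never escape, with spaces everywhere else.
func Render(width, height, maxIter int, char string) []string {
    lines := make([]string, 0, height)
    for row := 0; row < height; row++ {
        var b strings.Builder
        for col := 0; col < width; col++ {
            re := -2.5 + float64(col)/float64(width)*3.5
            im := -1.0 + float64(row)/float64(height)*2.0
            var zr, zi float64
            iter := 0
            for zr*zr+zi*zi <= 4.0 && iter < maxIter {
                zr, zi = zr*zr-zi*zi+re, 2*zr*zi+im
                iter++
            }
            switch {
            case char != "" && iter == maxIter:
                b.WriteString(char)
            case char != "":
                b.WriteByte(' ')
            default:
                b.WriteByte(gradient[iter*(len(gradient)-1)/maxIter])
            }
        }
        lines = append(lines, b.String())
    }
    return lines
}
```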
---
### Task 6: Mandelbrot CLI Integration
Wire the Mandelbrot algorithm to a CLI subcommand.
**Do:**
- Create `internal/cli/mandelbrot.go` with `mandelbrot` subcommand
- Add flags: `--width` (default 80), `--height` (default 24), `--iterations` (default 100), `--char` (default "")
- Call `mandelbrot.Render()` and print result to stdout
**Verify:**
- `./fractals mandelbrot` outputs recognizable Mandelbrot set
- `./fractals mandelbrot --width 40 --height 12` outputs smaller version
- `./fractals mandelbrot --help` shows flag documentation
---
### Task 7: Character Set Configuration
Ensure `--char` flag works consistently across both commands.
**Do:**
- Verify Sierpinski `--char` flag passes character to algorithm
- For Mandelbrot, `--char` should use single character instead of gradient
- Add tests for custom character output
**Verify:**
- `./fractals sierpinski --char '#'` uses '#' character
- `./fractals mandelbrot --char '.'` uses '.' for all filled points
- Tests pass
---
### Task 8: Input Validation and Error Handling
Add validation for invalid inputs.
**Do:**
- Sierpinski: size must be > 0, depth must be >= 0
- Mandelbrot: width/height must be > 0, iterations must be > 0
- Return clear error messages for invalid inputs (an error-handling sketch follows after this task)
- Add tests for error cases
**Verify:**
- `./fractals sierpinski --size 0` prints error, exits non-zero
- `./fractals mandelbrot --width -1` prints error, exits non-zero
- Error messages are clear and helpful
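With Cobra, the non-zero exit usually falls out of returning a descriptive error from `RunE` and letting `main` translate a failed `Execute` into `os.Exit(1)`; the sketch below assumes a `cli.NewRootCmd` constructor, which is an illustrative name rather than something the plan mandates.
```go
// cmd/fractals/main.go (illustrative)
package main

import (
    "os"

    "github.com/superpowers-test/fractals/internal/cli"
)

func main() {
    // NewRootCmd is an assumed constructor for the root command.
    if err := cli.NewRootCmd().Execute(); err != nil {
        // Cobra has already printed the error message; exit non-zero so
        // `./fractals sierpinski --size 0` and friends fail as required.
        os.Exit(1)
    }
}
```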
---
### Task 9: Integration Tests
Add integration tests that invoke the CLI.
**Do:**
- Create `cmd/fractals/main_test.go` or `test/integration_test.go`
- Test full CLI invocation for both commands
- Verify output format and exit codes
- Test error cases return non-zero exit
**Verify:**
- `go test ./...` passes all tests including integration tests
---
### Task 10: README
Document usage and examples.
**Do:**
- Create `README.md` with:
- Project description
- Installation: `go install ./cmd/fractals`
- Usage examples for both commands
- Example output (small samples)
**Verify:**
- README accurately describes the tool
- Examples in README actually work

Some files were not shown because too many files have changed in this diff.