SuperCharge Claude Code v1.0.0 - Complete Customization Package

Features:
- 30+ Custom Skills (cognitive, development, UI/UX, autonomous agents)
- RalphLoop autonomous agent integration
- Multi-AI consultation (Qwen)
- Agent management system with sync capabilities
- Custom hooks for session management
- MCP servers integration
- Plugin marketplace setup
- Comprehensive installation script

Components:
- Skills: always-use-superpowers, ralph, brainstorming, ui-ux-pro-max, etc.
- Agents: 100+ agents across engineering, marketing, product, etc.
- Hooks: session-start-superpowers, qwen-consult, ralph-auto-trigger
- Commands: /brainstorm, /write-plan, /execute-plan
- MCP Servers: zai-mcp-server, web-search-prime, web-reader, zread
- Binaries: ralphloop wrapper

Installation: ./supercharge.sh
uroma
2026-01-22 15:35:55 +00:00
commit 7a491b1548
1013 changed files with 170070 additions and 0 deletions


@@ -0,0 +1,207 @@
# Delegation Prompt Templates
When delegating to GPT experts, use these structured templates.
## The 7-Section Format (MANDATORY)
Every delegation prompt MUST include these sections:
```
1. TASK: [One sentence—atomic, specific goal]
2. EXPECTED OUTCOME: [What success looks like]
3. CONTEXT:
- Current state: [what exists now]
- Relevant code: [paths or snippets]
- Background: [why this is needed]
4. CONSTRAINTS:
- Technical: [versions, dependencies]
- Patterns: [existing conventions to follow]
- Limitations: [what cannot change]
5. MUST DO:
- [Requirement 1]
- [Requirement 2]
6. MUST NOT DO:
- [Forbidden action 1]
- [Forbidden action 2]
7. OUTPUT FORMAT:
- [How to structure response]
```
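For concreteness, here is a hypothetical filled-in prompt following the seven sections above, written as a TypeScript template literal in the style of the tool-call examples elsewhere in this plugin. The endpoint, file paths, framework, and limits are invented purely for illustration.
```typescript
// Hypothetical example only — the endpoint, paths, and framework are invented for illustration.
const delegationPrompt = `
1. TASK: Add rate limiting to the POST /login endpoint.
2. EXPECTED OUTCOME: Repeated failed logins from one IP are throttled; existing tests still pass.
3. CONTEXT:
   - Current state: routes/auth.ts handles login with no throttling.
   - Relevant code: routes/auth.ts, middleware/
   - Background: brute-force attempts observed in production logs.
4. CONSTRAINTS:
   - Technical: Express 4.x, Node 20.
   - Patterns: follow the existing middleware registration style in app.ts.
   - Limitations: the login handler's public signature cannot change.
5. MUST DO:
   - Limit to 5 failed attempts per IP per 15 minutes.
   - Report all modified files.
6. MUST NOT DO:
   - Introduce new external services for this change.
   - Modify unrelated routes.
7. OUTPUT FORMAT:
   - Summary, files modified, verification steps.
`;
```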
---
## Expert-Specific Templates
### Architect
```markdown
TASK: [Analyze/Design/Implement] [specific system/component] for [goal].
EXPECTED OUTCOME: [Clear recommendation OR working implementation]
MODE: [Advisory / Implementation]
CONTEXT:
- Current architecture: [description]
- Relevant code:
[file paths or snippets]
- Problem/Goal: [what needs to be solved]
CONSTRAINTS:
- Must work with [existing systems]
- Cannot change [protected components]
- Performance requirements: [if applicable]
MUST DO:
- [Specific requirement]
- Provide effort estimate (Quick/Short/Medium/Large)
- [For implementation: Report all modified files]
MUST NOT DO:
- Over-engineer for hypothetical future needs
- Introduce new dependencies without justification
- [For implementation: Modify files outside scope]
OUTPUT FORMAT:
[Advisory: Bottom line → Action plan → Effort estimate]
[Implementation: Summary → Files modified → Verification]
```
### Plan Reviewer
```markdown
TASK: Review [plan name/description] for completeness and clarity.
EXPECTED OUTCOME: APPROVE/REJECT verdict with specific feedback.
CONTEXT:
- Plan to review:
[plan content]
- Goals: [what the plan is trying to achieve]
- Constraints: [timeline, resources, technical limits]
MUST DO:
- Evaluate all 4 criteria (Clarity, Verifiability, Completeness, Big Picture)
- Simulate actually doing the work to find gaps
- Provide specific improvements if rejecting
MUST NOT DO:
- Rubber-stamp without real analysis
- Provide vague feedback
- Approve plans with critical gaps
OUTPUT FORMAT:
[APPROVE / REJECT]
Justification: [explanation]
Summary: [4-criteria assessment]
[If REJECT: Top 3-5 improvements needed]
```
### Scope Analyst
```markdown
TASK: Analyze [request/feature] before planning begins.
EXPECTED OUTCOME: Clear understanding of scope, risks, and questions to resolve.
CONTEXT:
- Request: [what was asked for]
- Current state: [what exists now]
- Known constraints: [technical, business, timeline]
MUST DO:
- Classify intent (Refactoring/Build/Mid-sized/Architecture/Bug Fix/Research)
- Identify hidden requirements and ambiguities
- Surface questions that need answers before proceeding
- Assess risks and blast radius
MUST NOT DO:
- Start planning (that comes after analysis)
- Make assumptions about unclear requirements
- Skip intent classification
OUTPUT FORMAT:
Intent: [classification]
Findings: [key discoveries]
Questions: [what needs clarification]
Risks: [with mitigations]
Recommendation: [Proceed / Clarify First / Reconsider]
```
### Code Reviewer
```markdown
TASK: [Review / Review and fix] [code/PR/file] for [focus areas].
EXPECTED OUTCOME: [Issue list with verdict OR fixed code]
MODE: [Advisory / Implementation]
CONTEXT:
- Code to review:
[file paths or snippets]
- Purpose: [what this code does]
- Recent changes: [what changed, if PR review]
MUST DO:
- Prioritize: Correctness → Security → Performance → Maintainability
- Focus on issues that matter, not style nitpicks
- [For implementation: Fix issues and verify]
MUST NOT DO:
- Nitpick style (let formatters handle this)
- Flag theoretical concerns unlikely to matter
- [For implementation: Change unrelated code]
OUTPUT FORMAT:
[Advisory: Summary → Critical issues → Recommendations → Verdict]
[Implementation: Summary → Issues fixed → Files modified → Verification]
```
### Security Analyst
```markdown
TASK: [Analyze / Harden] [system/code/endpoint] for security vulnerabilities.
EXPECTED OUTCOME: [Vulnerability report OR hardened code]
MODE: [Advisory / Implementation]
CONTEXT:
- Code/system to analyze:
[file paths, architecture description]
- Assets at risk: [what's valuable]
- Threat model: [who might attack, if known]
MUST DO:
- Check OWASP Top 10 categories
- Consider authentication, authorization, input validation
- Provide practical remediation, not theoretical concerns
- [For implementation: Fix vulnerabilities and verify]
MUST NOT DO:
- Flag low-risk theoretical issues
- Provide vague "be more secure" advice
- [For implementation: Break functionality while hardening]
OUTPUT FORMAT:
[Advisory: Threat summary → Vulnerabilities → Recommendations → Risk rating]
[Implementation: Summary → Vulnerabilities fixed → Files modified → Verification]
```
---
## Quick Reference
| Expert | Advisory Output | Implementation Output |
|--------|-----------------|----------------------|
| Architect | Recommendation + plan + effort | Changes + files + verification |
| Plan Reviewer | APPROVE/REJECT + justification | Revised plan |
| Scope Analyst | Analysis + questions + risks | Refined requirements |
| Code Reviewer | Issues + verdict | Fixes + verification |
| Security Analyst | Vulnerabilities + risk rating | Hardening + verification |


@@ -0,0 +1,120 @@
# Model Selection Guidelines
GPT experts serve as specialized consultants for complex problems. Each expert has a distinct specialty but can operate in advisory or implementation mode.
## Expert Directory
| Expert | Specialty | Best For |
|--------|-----------|----------|
| **Architect** | System design | Architecture, tradeoffs, complex debugging |
| **Plan Reviewer** | Plan validation | Reviewing plans before execution |
| **Scope Analyst** | Requirements analysis | Catching ambiguities, pre-planning |
| **Code Reviewer** | Code quality | Code review, finding bugs |
| **Security Analyst** | Security | Vulnerabilities, threat modeling, hardening |
## Operating Modes
Every expert can operate in two modes:
| Mode | Sandbox | Approval | Use When |
|------|---------|----------|----------|
| **Advisory** | `read-only` | `on-request` | Analysis, recommendations, reviews |
| **Implementation** | `workspace-write` | `on-failure` | Making changes, fixing issues |
**Key principle**: The mode is determined by the task, not the expert. An Architect can implement architectural changes. A Security Analyst can fix vulnerabilities.
## Expert Details
### Architect
**Specialty**: System design, technical strategy, complex decision-making
**When to use**:
- System design decisions
- Database schema design
- API architecture
- Multi-service interactions
- After 2+ failed fix attempts
- Tradeoff analysis
**Philosophy**: Pragmatic minimalism—simplest solution that works.
**Output format**:
- Advisory: Bottom line, action plan, effort estimate
- Implementation: Summary, files modified, verification
### Plan Reviewer
**Specialty**: Plan validation, catching gaps and ambiguities
**When to use**:
- Before starting significant work
- After creating a work plan
- Before delegating to other agents
**Philosophy**: Ruthlessly critical—finds every gap before work begins.
**Output format**: APPROVE/REJECT with justification and criteria assessment
### Scope Analyst
**Specialty**: Pre-planning analysis, requirements clarification
**When to use**:
- Before planning unfamiliar work
- When requirements feel vague
- When multiple interpretations exist
- Before irreversible decisions
**Philosophy**: Surface problems before they derail work.
**Output format**: Intent classification, findings, questions, risks, recommendation
### Code Reviewer
**Specialty**: Code quality, bugs, maintainability
**When to use**:
- Before merging significant changes
- After implementing features (self-review)
- For security-sensitive changes
**Philosophy**: Review like you'll maintain it at 2 AM during an incident.
**Output format**:
- Advisory: Issues list with APPROVE/REQUEST CHANGES/REJECT
- Implementation: Issues fixed, files modified, verification
### Security Analyst
**Specialty**: Vulnerabilities, threat modeling, security hardening
**When to use**:
- Authentication/authorization changes
- Handling sensitive data
- New API endpoints
- Third-party integrations
- Periodic security audits
**Philosophy**: Attacker's mindset—find vulnerabilities before they do.
**Output format**:
- Advisory: Threat summary, vulnerabilities, risk rating
- Implementation: Vulnerabilities fixed, files modified, verification
## Codex Parameters Reference
| Parameter | Values | Notes |
|-----------|--------|-------|
| `sandbox` | `read-only`, `workspace-write` | Set based on task, not expert |
| `approval-policy` | `on-request`, `on-failure` | Advisory uses on-request, implementation uses on-failure |
| `cwd` | path | Working directory for the task |
| `developer-instructions` | string | Expert prompt file contents, injected as system instructions |
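As a rough sketch of how these parameters combine (assuming each is passed directly on the tool call; the prompt placeholder and working directory are hypothetical), an Implementation-mode delegation might look like this:
```typescript
// Hedged sketch: an Implementation-mode call combining the parameters above.
// Placeholder values are hypothetical.
mcp__codex__codex({
  prompt: "[7-section delegation prompt with full context]",
  "developer-instructions": "[contents of the expert's prompt file]",
  sandbox: "workspace-write",      // Implementation mode
  "approval-policy": "on-failure", // an Advisory call would use read-only with on-request
  cwd: "/path/to/project"
})
```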
## When NOT to Delegate
- Simple questions you can answer
- First attempt at any fix
- Trivial decisions
- Research tasks (use other tools)
- When user just wants quick info


@@ -0,0 +1,236 @@
# Model Orchestration
You have access to GPT experts via MCP tools. Use them strategically based on these guidelines.
## Available Tools
| Tool | Provider | Use For |
|------|----------|---------|
| `mcp__codex__codex` | GPT | Delegate to an expert (stateless) |
> **Note:** `codex-reply` exists but requires a session ID not currently exposed to Claude Code. Each delegation is independent—include full context in every call.
## Available Experts
| Expert | Specialty | Prompt File |
|--------|-----------|-------------|
| **Architect** | System design, tradeoffs, complex debugging | `${CLAUDE_PLUGIN_ROOT}/prompts/architect.md` |
| **Plan Reviewer** | Plan validation before execution | `${CLAUDE_PLUGIN_ROOT}/prompts/plan-reviewer.md` |
| **Scope Analyst** | Pre-planning, catching ambiguities | `${CLAUDE_PLUGIN_ROOT}/prompts/scope-analyst.md` |
| **Code Reviewer** | Code quality, bugs, security issues | `${CLAUDE_PLUGIN_ROOT}/prompts/code-reviewer.md` |
| **Security Analyst** | Vulnerabilities, threat modeling | `${CLAUDE_PLUGIN_ROOT}/prompts/security-analyst.md` |
---
## Stateless Design
**Each delegation is independent.** The expert has no memory of previous calls.
**Implications:**
- Include ALL relevant context in every delegation prompt
- For retries, include what was attempted and what failed
- Don't assume the expert remembers previous interactions
**Why:** Codex MCP returns session IDs in event notifications, but Claude Code only surfaces the final text response. Until this changes, treat each call as fresh.
---
## PROACTIVE Delegation (Check on EVERY message)
Before handling any request, check if an expert would help:
| Signal | Expert |
|--------|--------|
| Architecture/design decision | Architect |
| 2+ failed fix attempts on same issue | Architect (fresh perspective) |
| "Review this plan", "validate approach" | Plan Reviewer |
| Vague/ambiguous requirements | Scope Analyst |
| "Review this code", "find issues" | Code Reviewer |
| Security concerns, "is this secure" | Security Analyst |
**If a signal matches → delegate to the appropriate expert.**
---
## REACTIVE Delegation (Explicit User Request)
When user explicitly requests GPT/Codex:
| User Says | Action |
|-----------|--------|
| "ask GPT", "consult GPT", "ask codex" | Identify task type → route to appropriate expert |
| "ask GPT to review the architecture" | Delegate to Architect |
| "have GPT review this code" | Delegate to Code Reviewer |
| "GPT security review" | Delegate to Security Analyst |
**Always honor explicit requests.**
---
## Delegation Flow (Step-by-Step)
When delegation is triggered:
### Step 1: Identify Expert
Match the task to the appropriate expert based on triggers.
### Step 2: Read Expert Prompt
**CRITICAL**: Read the expert's prompt file to get their system instructions:
```
Read ${CLAUDE_PLUGIN_ROOT}/prompts/[expert].md
```
For example, for Architect: `Read ${CLAUDE_PLUGIN_ROOT}/prompts/architect.md`
### Step 3: Determine Mode
| Task Type | Mode | Sandbox |
|-----------|------|---------|
| Analysis, review, recommendations | Advisory | `read-only` |
| Make changes, fix issues, implement | Implementation | `workspace-write` |
### Step 4: Notify User
Always inform the user before delegating:
```
Delegating to [Expert Name]: [brief task summary]
```
### Step 5: Build Delegation Prompt
Use the 7-section format from `rules/delegation-format.md`.
**IMPORTANT:** Since each call is stateless, include FULL context:
- What the user asked for
- Relevant code/files
- Any previous attempts and their results (for retries)
### Step 6: Call the Expert
```typescript
mcp__codex__codex({
prompt: "[your 7-section delegation prompt with FULL context]",
"developer-instructions": "[contents of the expert's prompt file]",
sandbox: "[read-only or workspace-write based on mode]",
cwd: "[current working directory]"
})
```
### Step 7: Handle Response
1. **Synthesize** - Never show raw output directly
2. **Extract insights** - Key recommendations, issues, changes
3. **Apply judgment** - Experts can be wrong; evaluate critically
4. **Verify implementation** - For implementation mode, confirm changes work
---
## Retry Flow (Implementation Mode)
When implementation fails verification, retry with a NEW call including error context:
```
Attempt 1 → Verify → [Fail]
Attempt 2 (new call with: original task + what was tried + error details) → Verify → [Fail]
Attempt 3 (new call with: full history of attempts) → Verify → [Fail]
Escalate to user
```
### Retry Prompt Template
```markdown
TASK: [Original task]
PREVIOUS ATTEMPT:
- What was done: [summary of changes made]
- Error encountered: [exact error message]
- Files modified: [list]
CONTEXT:
- [Full original context]
REQUIREMENTS:
- Fix the error from the previous attempt
- [Original requirements]
```
**Key:** Each retry is a fresh call. The expert doesn't know what happened before unless you tell them.
---
## Example: Architecture Question
User: "What are the tradeoffs of Redis vs in-memory caching?"
**Step 1**: Signal matches "Architecture decision" → Architect
**Step 2**: Read `${CLAUDE_PLUGIN_ROOT}/prompts/architect.md`
**Step 3**: Advisory mode (question, not implementation) → `read-only`
**Step 4**: "Delegating to Architect: Analyze caching tradeoffs"
**Step 5-6**:
```typescript
mcp__codex__codex({
prompt: `TASK: Analyze tradeoffs between Redis and in-memory caching for [context].
EXPECTED OUTCOME: Clear recommendation with rationale.
CONTEXT: [user's situation, full details]
...`,
"developer-instructions": "[contents of architect.md]",
sandbox: "read-only"
})
```
**Step 7**: Synthesize response, add your assessment.
---
## Example: Retry After Failed Implementation
First attempt failed with "TypeError: Cannot read property 'x' of undefined"
**Retry call:**
```typescript
mcp__codex__codex({
prompt: `TASK: Add input validation to the user registration endpoint.
PREVIOUS ATTEMPT:
- Added validation middleware to routes/auth.ts
- Error: TypeError: Cannot read property 'x' of undefined at line 45
- The middleware was added but req.body was undefined
CONTEXT:
- Express 4.x application
- Body parser middleware exists in app.ts
- [relevant code snippets]
REQUIREMENTS:
- Fix the undefined req.body issue
- Ensure validation runs after body parser
- Report all files modified`,
"developer-instructions": "[contents of code-reviewer.md or architect.md]",
sandbox: "workspace-write",
cwd: "/path/to/project"
})
```
---
## Cost Awareness
- **Don't spam** - One well-structured delegation beats multiple vague ones
- **Include full context** - Saves retry costs from missing information
- **Reserve for high-value tasks** - Architecture, security, complex analysis
---
## Anti-Patterns
| Don't Do This | Do This Instead |
|---------------|-----------------|
| Delegate trivial questions | Answer directly |
| Show raw expert output | Synthesize and interpret |
| Delegate without reading prompt file | ALWAYS read and inject expert prompt |
| Skip user notification | ALWAYS notify before delegating |
| Retry without including error context | Include FULL history of what was tried |
| Assume expert remembers previous calls | Include all context in every call |


@@ -0,0 +1,147 @@
# Delegation Triggers
This file defines when to delegate to GPT experts via Codex.
## IMPORTANT: Check These Triggers on EVERY Message
You MUST scan incoming messages for delegation triggers. This is NOT optional.
**Behavior:**
1. **PROACTIVE**: On every user message, check if semantic triggers match → delegate automatically
2. **REACTIVE**: If user explicitly mentions GPT/Codex → delegate immediately
When a trigger matches:
1. Identify the appropriate expert
2. Read their prompt file from `${CLAUDE_PLUGIN_ROOT}/prompts/[expert].md`
3. Follow the delegation flow in `rules/orchestration.md`
---
## Available Experts
| Expert | Specialty | Use For |
|--------|-----------|---------|
| **Architect** | System design, tradeoffs | Architecture decisions, complex debugging |
| **Plan Reviewer** | Plan validation | Reviewing work plans before execution |
| **Scope Analyst** | Pre-planning analysis | Catching ambiguities before work starts |
| **Code Reviewer** | Code quality, bugs | Reviewing code changes, finding issues |
| **Security Analyst** | Vulnerabilities, threats | Security audits, hardening |
## Explicit Triggers (Highest Priority)
User explicitly requests delegation:
| Phrase Pattern | Expert |
|----------------|--------|
| "ask GPT", "consult GPT" | Route based on context |
| "review this architecture" | Architect |
| "review this plan" | Plan Reviewer |
| "analyze the scope" | Scope Analyst |
| "review this code" | Code Reviewer |
| "security review", "is this secure" | Security Analyst |
## Semantic Triggers (Intent Matching)
### Architecture & Design (→ Architect)
| Intent Pattern | Example |
|----------------|---------|
| "how should I structure" | "How should I structure this service?" |
| "what are the tradeoffs" | "Tradeoffs of this caching approach" |
| "should I use [A] or [B]" | "Should I use microservices or monolith?" |
| System design questions | "Design a notification system" |
| After 2+ failed fix attempts | Escalation for fresh perspective |
### Plan Validation (→ Plan Reviewer)
| Intent Pattern | Example |
|----------------|---------|
| "review this plan" | "Review my migration plan" |
| "is this plan complete" | "Is this implementation plan complete?" |
| "validate before I start" | "Validate my approach before starting" |
| Before significant work | Pre-execution validation |
### Requirements Analysis (→ Scope Analyst)
| Intent Pattern | Example |
|----------------|---------|
| "what am I missing" | "What am I missing in these requirements?" |
| "clarify the scope" | "Help clarify the scope of this feature" |
| Vague or ambiguous requests | Before planning unclear work |
| "before we start" | Pre-planning consultation |
### Code Review (→ Code Reviewer)
| Intent Pattern | Example |
|----------------|---------|
| "review this code" | "Review this PR" |
| "find issues in" | "Find issues in this implementation" |
| "what's wrong with" | "What's wrong with this function?" |
| After implementing features | Self-review before merge |
### Security (→ Security Analyst)
| Intent Pattern | Example |
|----------------|---------|
| "security implications" | "Security implications of this auth flow" |
| "is this secure" | "Is this token handling secure?" |
| "vulnerabilities in" | "Any vulnerabilities in this code?" |
| "threat model" | "Threat model for this API" |
| "harden this" | "Harden this endpoint" |
## Trigger Priority
1. **Explicit user request** - Always honor direct requests
2. **Security concerns** - When handling sensitive data/auth
3. **Architecture decisions** - System design with long-term impact
4. **Failure escalation** - After 2+ failed attempts
5. **Default: don't delegate** - If no trigger above matches, handle the request directly
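Purely as an illustration of the ordering above (triggering is semantic, done by reading the message, not by running code; the message shape and helper below are invented for this sketch), the priority can be read as a sequence of checks:
```typescript
// Illustrative only: the priority order above expressed as sequential checks.
// The Message shape and routeByContext helper are hypothetical.
interface Message {
  explicitlyRequestsGpt: boolean;
  touchesAuthOrSensitiveData: boolean;
  isArchitectureDecision: boolean;
  failedFixAttempts: number;
}

function chooseExpert(msg: Message): string | null {
  if (msg.explicitlyRequestsGpt)      return routeByContext(msg); // 1. explicit request
  if (msg.touchesAuthOrSensitiveData) return "Security Analyst";  // 2. security concerns
  if (msg.isArchitectureDecision)     return "Architect";         // 3. architecture decisions
  if (msg.failedFixAttempts >= 2)     return "Architect";         // 4. failure escalation
  return null;                                                    // 5. default: handle directly
}

function routeByContext(_msg: Message): string {
  // Placeholder: explicit requests are routed by task type
  // (architecture -> Architect, code review -> Code Reviewer, and so on).
  return "Architect";
}
```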
## When NOT to Delegate
| Situation | Reason |
|-----------|--------|
| Simple syntax questions | Answer directly |
| Direct file operations | No external insight needed |
| Trivial bug fixes | Obvious solution |
| Research/documentation | Use other tools |
| First attempt at any fix | Try yourself first |
## Advisory vs Implementation Mode
Any expert can operate in two modes:
| Mode | Sandbox | When to Use |
|------|---------|-------------|
| **Advisory** | `read-only` | Analysis, recommendations, review verdicts |
| **Implementation** | `workspace-write` | Actually making changes, fixing issues |
Set the sandbox based on what the task requires, not the expert type.
**Examples:**
```typescript
// Architect analyzing (advisory)
mcp__codex__codex({
prompt: "Analyze tradeoffs of Redis vs in-memory caching",
sandbox: "read-only"
})
// Architect implementing (implementation)
mcp__codex__codex({
prompt: "Refactor the caching layer to use Redis",
sandbox: "workspace-write"
})
// Security Analyst reviewing (advisory)
mcp__codex__codex({
prompt: "Review this auth flow for vulnerabilities",
sandbox: "read-only"
})
// Security Analyst hardening (implementation)
mcp__codex__codex({
prompt: "Fix the SQL injection vulnerability in user.ts",
sandbox: "workspace-write"
})
```