Add 260+ Claude Code skills from skills.sh

Complete collection of AI agent skills including:
- Frontend Development (Vue, React, Next.js, Three.js)
- Backend Development (NestJS, FastAPI, Node.js)
- Mobile Development (React Native, Expo)
- Testing (E2E, frontend, webapp)
- DevOps (GitHub Actions, CI/CD)
- Marketing (SEO, copywriting, analytics)
- Security (binary analysis, vulnerability scanning)
- And many more...

Synchronized from: https://skills.sh/

Co-Authored-By: Claude <noreply@anthropic.com>
Commit 07242683bf by admin, 2026-01-23 18:02:28 +00:00
3300 changed files with 1223105 additions and 0 deletions


@@ -0,0 +1,28 @@
{
  "skills": [
    {
      "name": "agent-pipeline-builder",
      "triggers": [
        "multi-agent pipeline",
        "agent pipeline",
        "multi agent workflow",
        "create pipeline",
        "build pipeline",
        "orchestrate agents",
        "agent workflow",
        "pipeline architecture",
        "sequential agents",
        "agent chain",
        "data pipeline",
        "agent orchestration",
        "multi-stage workflow",
        "agent composition",
        "pipeline pattern",
        "researcher analyzer writer",
        "funnel pattern",
        "transformation pipeline",
        "agent data flow"
      ]
    }
  ]
}


@@ -0,0 +1,357 @@
---
name: agent-pipeline-builder
description: Build multi-agent pipelines with structured data flow between agents. Use when creating workflows where each agent has a specialized role and passes output to the next agent.
allowed-tools: Write, Edit, Read, Bash, WebSearch
license: MIT
---
# Agent Pipeline Builder
Build reliable multi-agent workflows where each agent has a single, focused responsibility and outputs structured data that the next agent consumes.
## When to Use This Skill
Use this skill when:
- Building complex workflows that need multiple specialized agents
- Creating content pipelines (research → analysis → writing)
- Designing data processing flows with validation at each stage
- Implementing "funnel" patterns where broad input becomes focused output
## Pipeline Pattern
A pipeline consists of:
1. **Stage 1: Researcher/Gatherer** - Fetches raw data (WebSearch, file reading, API calls)
2. **Stage 2: Analyzer/Filter** - Processes and selects best options
3. **Stage 3: Creator/Writer** - Produces final output
Each stage:
- Has ONE job
- Outputs structured JSON (or YAML)
- Wraps output in markers (e.g., `<<<stage>>>...<<<end-stage>>>`)
- Passes data to next stage via stdin or file
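
The marker convention above can be sketched in TypeScript. The `extractStageOutput` helper below is illustrative (not part of any SDK): it pulls a stage's JSON payload out of free-form agent output so the orchestrator never has to trust surrounding prose.

```typescript
// Extract the JSON payload for a named stage from raw agent output.
// Assumes the stage wrapped its output in <<<stage>>>...<<<end-stage>>>
// markers, optionally with a JSON code fence inside the markers.
function extractStageOutput(raw: string, stage: string): unknown {
  const pattern = new RegExp(`<<<${stage}>>>([\\s\\S]*?)<<<end-${stage}>>>`);
  const match = raw.match(pattern);
  if (!match) throw new Error(`No <<<${stage}>>> block found`);
  // Strip an optional ```json fence before parsing.
  const body = match[1].replace(/```json|```/g, "").trim();
  return JSON.parse(body);
}

// Example: a researcher stage's raw output, markers and all.
const researcherRaw = [
  "Some preamble the model emitted.",
  "<<<researcher>>>",
  '{ "items": [{ "title": "Example", "url": "https://example.com" }] }',
  "<<<end-researcher>>>",
].join("\n");

console.log(JSON.stringify(extractStageOutput(researcherRaw, "researcher")));
```

The next stage can consume this parsed value directly, so malformed output fails at the boundary instead of deep inside a later prompt.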
## RalphLoop "Tackle Until Solved" Integration
For complex pipelines (3+ stages or complexity >= 5), agent-pipeline-builder automatically delegates to Ralph Orchestrator for autonomous pipeline construction and testing.
### When Ralph is Triggered
Ralph mode activates for pipelines with:
- 3 or more stages
- Complex stage patterns (external APIs, complex processing, conditional logic)
- Parallel stage execution
- User opt-in via `RALPH_AUTO=true` or `PIPELINE_USE_RALPH=true`
### Using Ralph Integration
When a complex pipeline is detected:
1. Check for Python integration module:
```bash
python3 /home/uroma/.claude/skills/agent-pipeline-builder/ralph-pipeline.py --test-complexity
```
2. If complex, delegate to Ralph:
```bash
/home/uroma/obsidian-web-interface/bin/ralphloop -i .ralph/PIPELINE.md
```
3. Monitor Ralph's progress in `.ralph/state.json`
4. On completion, use generated pipeline from `.ralph/iterations/pipeline.md`
### Manual Ralph Invocation
For explicit Ralph mode on any pipeline:
```bash
export PIPELINE_USE_RALPH=true
# or
export RALPH_AUTO=true
```
Then invoke `/agent-pipeline-builder` as normal.
### Ralph-Generated Pipeline Structure
When Ralph builds the pipeline autonomously, it creates:
```
.claude/agents/[pipeline-name]/
├── researcher.md          # Agent definition
├── analyzer.md            # Agent definition
└── writer.md              # Agent definition
scripts/
└── run-[pipeline-name].ts # Orchestration script
.ralph/
├── PIPELINE.md            # Manifest
├── state.json             # Progress tracking
└── iterations/
    └── pipeline.md        # Final generated pipeline
```
## Creating a Pipeline
### Step 1: Define Pipeline Manifest
Create a `pipeline.md` file:
```markdown
# Pipeline: [Name]
## Stages
1. researcher - Finds/fetches raw data
2. analyzer - Processes and selects
3. writer - Creates final output
## Data Format
All stages use JSON with markers: `<<<stage-name>>>...<<<end-stage-name>>>`
```
### Step 2: Create Agent Definitions
For each stage, create an agent file `.claude/agents/[pipeline-name]/[stage-name].md`:
````markdown
---
name: researcher
description: What this agent does
model: haiku # or sonnet, opus
---
You are a [role] agent.

## CRITICAL: NO EXPLANATION - JUST ACTION
DO NOT explain what you will do. Just USE tools immediately, then output.

## Instructions
1. Use [specific tool] to get data
2. Output JSON in the exact format below
3. Wrap in markers as specified

## Output Format
<<<researcher>>>
```json
{
  "data": [...]
}
```
<<<end-researcher>>>
````
### Step 3: Implement Pipeline Script
Create a script that orchestrates the agents:
```typescript
// scripts/run-pipeline.ts
import { runAgent } from '@anthropic-ai/claude-agent-sdk';

async function runPipeline() {
  // Stage 1: Researcher
  const research = await runAgent('researcher', {
    context: { topic: 'AI news' }
  });

  // Stage 2: Analyzer (uses research output)
  const analysis = await runAgent('analyzer', {
    input: research,
    context: { criteria: 'impact' }
  });

  // Stage 3: Writer (uses analysis output)
  const final = await runAgent('writer', {
    input: analysis,
    context: { format: 'tweet' }
  });

  return final;
}
```
## Pipeline Best Practices
### 1. Single Responsibility
Each agent does ONE thing:
- ✓ researcher: Fetches data
- ✓ analyzer: Filters and ranks
- ✗ researcher-analyzer: Does both (too complex)
### 2. Structured Data Flow
- Use JSON or YAML for all inter-agent communication
- Define schemas upfront
- Validate output before passing to next stage
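
As a concrete example, a lightweight validator for the researcher → analyzer handoff might look like this. The `validateResearch` name and `{ items: [...] }` schema are illustrative; they match the example pipeline later in this document.

```typescript
// Hypothetical shape for the researcher stage's output items.
interface ResearchItem {
  title: string;
  summary: string;
  url: string;
}

// Validate inter-stage data before the analyzer consumes it.
// Returns the typed payload or throws with a precise reason, so a
// malformed stage output fails fast instead of propagating downstream.
function validateResearch(payload: unknown): ResearchItem[] {
  if (typeof payload !== "object" || payload === null || !Array.isArray((payload as any).items)) {
    throw new Error("researcher output must be { items: [...] }");
  }
  return (payload as { items: unknown[] }).items.map((item, i) => {
    const it = item as Record<string, unknown>;
    for (const field of ["title", "summary", "url"]) {
      if (typeof it[field] !== "string") {
        throw new Error(`items[${i}].${field} must be a string`);
      }
    }
    return it as unknown as ResearchItem;
  });
}
```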
### 3. Error Handling
- Each agent should fail gracefully
- Use fallback outputs
- Log errors for debugging
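
A graceful-failure wrapper for any stage might be sketched like this. `runStage` is a stand-in for whatever actually invokes an agent; the wrapper itself is an assumption, not part of any SDK.

```typescript
// Run a stage; if it throws (bad JSON, missing markers, tool error),
// log the failure and return a labeled fallback so downstream stages
// can still proceed or the run can end cleanly.
async function runStageWithFallback<T>(
  name: string,
  runStage: () => Promise<T>,
  fallback: T,
): Promise<{ ok: boolean; value: T }> {
  try {
    return { ok: true, value: await runStage() };
  } catch (err) {
    console.error(`[pipeline] stage "${name}" failed:`, err);
    return { ok: false, value: fallback };
  }
}
```

The `ok` flag lets the orchestrator decide whether to continue with the fallback or abort the pipeline.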
### 4. Deterministic Patterns
- Constrain agents with specific tools
- Use detailed system prompts
- Avoid open-ended requests
## Example Pipeline: AI News Tweet
### Manifest
```yaml
name: ai-news-tweet
stages:
  - researcher: Gets today's AI news
  - analyzer: Picks most impactful story
  - writer: Crafts engaging tweet
```
### Researcher Agent
````markdown
---
name: researcher
description: Finds recent AI news using WebSearch
model: haiku
---
Use WebSearch to find AI news from TODAY ONLY.

Output:
<<<researcher>>>
```json
{
  "items": [
    {
      "title": "...",
      "summary": "...",
      "url": "...",
      "published_at": "YYYY-MM-DD"
    }
  ]
}
```
<<<end-researcher>>>
````
### Analyzer Agent
```markdown
---
name: analyzer
description: Analyzes news and selects best story
model: sonnet
---
Input: Researcher output (stdin)
Select the most impactful story based on:
- Technical significance
- Broad interest
- Credibility of source
Output:
<<<analyzer>>>
```json
{
"selected": {
"title": "...",
"summary": "...",
"reasoning": "..."
}
}
```
<<<end-analyzer>>>
```
### Writer Agent
````markdown
---
name: writer
description: Writes engaging tweet
model: sonnet
---
Input: Analyzer output (stdin)

Write a tweet that:
- Hooks attention
- Conveys key insight
- Fits 280 characters
- Includes relevant hashtags

Output:
<<<writer>>>
```json
{
  "tweet": "...",
  "hashtags": ["..."]
}
```
<<<end-writer>>>
````
## Running the Pipeline
### Method 1: Sequential Script
```bash
./scripts/run-pipeline.ts
```
### Method 2: Using Task Tool
```typescript
// Launch each stage as a separate agent task
await Task('Research stage', researchPrompt, 'haiku');
await Task('Analysis stage', analysisPrompt, 'sonnet');
await Task('Writing stage', writingPrompt, 'sonnet');
```
### Method 3: Using Claude Code Skills
Create a skill that orchestrates the pipeline with proper error handling.
## Testing Pipelines
### Unit Tests
Test each agent independently:
```bash
# Test researcher
npm run test:researcher
# Test analyzer with mock data
npm run test:analyzer
# Test writer with mock analysis
npm run test:writer
```
### Integration Tests
Test full pipeline:
```bash
npm run test:pipeline
```
## Debugging Tips
1. **Enable verbose logging** - See what each agent outputs
2. **Validate JSON schemas** - Catch malformed data early
3. **Use mock inputs** - Test downstream agents independently
4. **Check marker format** - Agents must use exact markers
## Common Patterns
### Funnel Pattern
```
Many inputs → Filter → Select One → Output
```
Example: News aggregator → analyzer → best story
### Transformation Pattern
```
Input → Transform → Validate → Output
```
Example: Raw data → clean → validate → structured data
### Assembly Pattern
```
Part A + Part B → Assemble → Complete
```
Example: Research + style guide → formatted article
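
In plain code, the funnel pattern is just filter-then-select over structured items. This sketch uses hypothetical `Story` fields to mirror the news example above.

```typescript
interface Story { title: string; score: number; publishedAt: string }

// Funnel: many candidate stories in, one selected story out.
function funnel(stories: Story[], today: string): Story | null {
  const fresh = stories.filter((s) => s.publishedAt === today); // Filter
  if (fresh.length === 0) return null;
  return fresh.reduce((best, s) => (s.score > best.score ? s : best)); // Select one
}
```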


@@ -0,0 +1,350 @@
#!/usr/bin/env python3
"""
Ralph Integration for Agent Pipeline Builder
Generates pipeline manifests for Ralph Orchestrator to autonomously build and test multi-agent pipelines.
"""
import os
import sys
import json
import subprocess
from pathlib import Path
from typing import Optional, Dict, Any, List
# Configuration
RALPHLOOP_CMD = Path(__file__).parent.parent.parent.parent / "obsidian-web-interface" / "bin" / "ralphloop"
PIPELINE_THRESHOLD = 3 # Minimum number of stages to trigger Ralph
def analyze_pipeline_complexity(stages: List[Dict[str, str]]) -> int:
    """
    Analyze pipeline complexity and return estimated difficulty.
    Returns: 1-10 scale
    """
    complexity = len(stages)  # Base: one point per stage
    # Check for complex patterns
    for stage in stages:
        description = stage.get("description", "").lower()
        # External data sources (+1)
        if any(word in description for word in ["fetch", "api", "database", "web", "search"]):
            complexity += 1
        # Complex processing (+1)
        if any(word in description for word in ["analyze", "transform", "aggregate", "compute"]):
            complexity += 1
        # Conditional logic (+1)
        if any(word in description for word in ["filter", "validate", "check", "select"]):
            complexity += 1
    # Parallel stages add complexity
    stage_names = [s.get("name", "") for s in stages]
    if "parallel" in str(stage_names).lower():
        complexity += 2
    return min(10, complexity)
def create_pipeline_manifest(stages: List[Dict[str, str]], manifest_path: str = ".ralph/PIPELINE.md") -> str:
    """
    Create a Ralph-formatted pipeline manifest at manifest_path.
    Returns the path to the created manifest file.
    """
    manifest_file = Path(manifest_path)
    manifest_file.parent.mkdir(parents=True, exist_ok=True)
    # Format the pipeline for Ralph
    manifest_content = """# Pipeline: Multi-Agent Workflow

## Stages
"""
    for i, stage in enumerate(stages, 1):
        manifest_content += f"{i}. **{stage['name']}** - {stage['description']}\n"
    manifest_content += """
## Data Format
All stages use JSON with markers: `<<<stage-name>>>...<<<end-stage-name>>>`

## Task
Build a complete multi-agent pipeline with the following stages:
"""
    for stage in stages:
        manifest_content += f"""
### {stage['name']}
**Purpose:** {stage['description']}

**Agent Configuration:**
- Model: {stage.get('model', 'sonnet')}
- Allowed Tools: {', '.join(stage.get('tools', ['Read', 'Write', 'Bash']))}

**Output Format:**
<<<{stage['name']}>>>
```json
{{
  "result": "...",
  "metadata": {{...}}
}}
```
<<<end-{stage['name']}>>>
"""
    manifest_content += """
## Success Criteria
The pipeline is complete when:
- [ ] All agent definitions are created in `.claude/agents/`
- [ ] Pipeline orchestration script is implemented
- [ ] Each stage is tested independently
- [ ] End-to-end pipeline test passes
- [ ] Error handling is verified
- [ ] Documentation is complete

## Instructions
1. Create agent definition files for each stage
2. Implement the pipeline orchestration script
3. Test each stage independently with mock data
4. Run the full end-to-end pipeline
5. Verify error handling and edge cases
6. Document usage and testing procedures

When complete, add <!-- COMPLETE --> marker to this file.
Output the final pipeline to `.ralph/iterations/pipeline.md`.
"""
    manifest_file.write_text(manifest_content)
    return str(manifest_file)
def should_use_ralph(stages: List[Dict[str, str]]) -> bool:
    """
    Determine if pipeline is complex enough to warrant RalphLoop.
    """
    # Check for explicit opt-in via environment
    if os.getenv("RALPH_AUTO", "").lower() in ("true", "1", "yes"):
        return True
    if os.getenv("PIPELINE_USE_RALPH", "").lower() in ("true", "1", "yes"):
        return True
    # Check stage count
    if len(stages) >= PIPELINE_THRESHOLD:
        return True
    # Check complexity
    complexity = analyze_pipeline_complexity(stages)
    return complexity >= 5
def run_ralphloop_for_pipeline(stages: List[Dict[str, str]],
                               pipeline_name: str = "multi-agent-pipeline",
                               max_iterations: Optional[int] = None) -> Dict[str, Any]:
    """
    Run RalphLoop for autonomous pipeline construction.
    Returns a dict with:
    - success: bool
    - iterations: int
    - pipeline_path: str (path to generated pipeline)
    - state: dict (Ralph's final state)
    - error: str (if failed)
    """
    print("🔄 Delegating to RalphLoop 'Tackle Until Solved' for autonomous pipeline construction...")
    print(f"   Stages: {len(stages)}")
    print(f"   Complexity: {analyze_pipeline_complexity(stages)}/10")
    print()
    # Create pipeline manifest
    manifest_path = create_pipeline_manifest(stages)
    print(f"✅ Pipeline manifest created: {manifest_path}")
    print()
    # Check if ralphloop exists
    if not RALPHLOOP_CMD.exists():
        return {
            "success": False,
            "error": f"RalphLoop not found at {RALPHLOOP_CMD}",
            "iterations": 0,
            "pipeline_path": "",
            "state": {}
        }
    # Build command - use the manifest file as input
    cmd = [str(RALPHLOOP_CMD), "-i", manifest_path]
    # Add optional parameters
    if max_iterations:
        cmd.extend(["--max-iterations", str(max_iterations)])
    # Environment variables
    env = os.environ.copy()
    env.setdefault("RALPH_AGENT", "claude")
    env.setdefault("RALPH_MAX_ITERATIONS", str(max_iterations or 100))
    print(f"Command: {' '.join(cmd)}")
    print("=" * 60)
    print()
    # Run RalphLoop
    try:
        process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
            bufsize=1,
            env=env
        )
        # Stream output
        output_lines = []
        for line in process.stdout:
            print(line, end='', flush=True)
            output_lines.append(line)
        process.wait()
        returncode = process.returncode
        print()
        print("=" * 60)
        if returncode == 0:
            # Read final state
            state_file = Path(".ralph/state.json")
            pipeline_file = Path(".ralph/iterations/pipeline.md")
            state = {}
            if state_file.exists():
                state = json.loads(state_file.read_text())
            pipeline_path = ""
            if pipeline_file.exists():
                pipeline_path = str(pipeline_file)
            iterations = state.get("iteration", 0)
            print(f"✅ Pipeline construction completed in {iterations} iterations")
            if pipeline_path:
                print(f"   Pipeline: {pipeline_path}")
            print()
            return {
                "success": True,
                "iterations": iterations,
                "pipeline_path": pipeline_path,
                "state": state,
                "error": None
            }
        else:
            return {
                "success": False,
                "error": f"RalphLoop exited with code {returncode}",
                "iterations": 0,
                "pipeline_path": "",
                "state": {}
            }
    except KeyboardInterrupt:
        print()
        print("⚠️ RalphLoop interrupted by user")
        return {
            "success": False,
            "error": "Interrupted by user",
            "iterations": 0,
            "pipeline_path": "",
            "state": {}
        }
    except Exception as e:
        return {
            "success": False,
            "error": str(e),
            "iterations": 0,
            "pipeline_path": "",
            "state": {}
        }
def delegate_pipeline_to_ralph(stages: List[Dict[str, str]],
                               pipeline_name: str = "multi-agent-pipeline") -> Optional[str]:
    """
    Main entry point: Delegate pipeline construction to Ralph if complex.
    If Ralph is used, returns the path to the generated pipeline.
    If the pipeline is simple, returns None (caller should build directly).
    """
    if not should_use_ralph(stages):
        return None
    result = run_ralphloop_for_pipeline(stages, pipeline_name)
    if result["success"]:
        return result.get("pipeline_path", "")
    else:
        print(f"❌ RalphLoop failed: {result.get('error', 'Unknown error')}")
        print("Falling back to direct pipeline construction...")
        return None
# Example pipeline stages for testing
EXAMPLE_PIPELINE = [
    {
        "name": "researcher",
        "description": "Finds and fetches raw data from various sources",
        "model": "haiku",
        "tools": ["WebSearch", "WebFetch", "Read"]
    },
    {
        "name": "analyzer",
        "description": "Processes data and selects best options",
        "model": "sonnet",
        "tools": ["Read", "Write", "Bash"]
    },
    {
        "name": "writer",
        "description": "Creates final output from analyzed data",
        "model": "sonnet",
        "tools": ["Write", "Edit"]
    }
]

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Test Ralph pipeline integration")
    parser.add_argument("--test-complexity", action="store_true", help="Only test complexity")
    parser.add_argument("--force", action="store_true", help="Force Ralph mode")
    parser.add_argument("--example", action="store_true", help="Run with example pipeline")
    args = parser.parse_args()

    if args.test_complexity:
        complexity = analyze_pipeline_complexity(EXAMPLE_PIPELINE)
        print(f"Pipeline complexity: {complexity}/10")
        print(f"Should use Ralph: {should_use_ralph(EXAMPLE_PIPELINE)}")
    elif args.example:
        if args.force:
            os.environ["PIPELINE_USE_RALPH"] = "true"
        result = delegate_pipeline_to_ralph(EXAMPLE_PIPELINE, "example-pipeline")
        if result:
            print("\n" + "=" * 60)
            print(f"PIPELINE GENERATED: {result}")
            print("=" * 60)
        else:
            print("\nPipeline not complex enough for Ralph. Building directly...")


@@ -0,0 +1,146 @@
#!/usr/bin/env bun
/**
* Agent Pipeline Validator
*
* Validates pipeline manifest and agent definitions
* Usage: ./validate-pipeline.ts [pipeline-name]
*/
import { readFileSync, existsSync } from 'fs';
import { join } from 'path';
interface PipelineManifest {
  name: string;
  stages: Array<{ name: string; description: string }>;
  dataFormat?: string;
}

interface AgentDefinition {
  name: string;
  description: string;
  model?: string;
}

function parseFrontmatter(content: string): { frontmatter: any; content: string } {
  const match = content.match(/^---\n([\s\S]+?)\n---\n([\s\S]*)$/);
  if (!match) {
    return { frontmatter: {}, content };
  }
  const frontmatter: any = {};
  const lines = match[1].split('\n');
  for (const line of lines) {
    const [key, ...valueParts] = line.split(':');
    if (key && valueParts.length > 0) {
      const value = valueParts.join(':').trim();
      frontmatter[key.trim()] = value;
    }
  }
  return { frontmatter, content: match[2] };
}
function validateAgentFile(agentPath: string): { valid: boolean; errors: string[] } {
  const errors: string[] = [];
  if (!existsSync(agentPath)) {
    return { valid: false, errors: [`Agent file not found: ${agentPath}`] };
  }
  const content = readFileSync(agentPath, 'utf-8');
  const { frontmatter } = parseFrontmatter(content);
  // Check required fields
  if (!frontmatter.name) {
    errors.push(`Missing 'name' in frontmatter`);
  }
  if (!frontmatter.description) {
    errors.push(`Missing 'description' in frontmatter`);
  }
  // Check for output markers. Note: the pattern must allow hyphens,
  // since closing markers look like <<<end-researcher>>>.
  const markerPattern = /<<<[\w-]+>>>/g;
  const markers = content.match(markerPattern);
  if (!markers || markers.length < 2) {
    errors.push(`Missing output markers (expected <<<stage>>>...<<<end-stage>>>)`);
  }
  return { valid: errors.length === 0, errors };
}
function validatePipeline(pipelineName: string): void {
  const basePath = join(process.cwd(), '.claude', 'agents', pipelineName);
  const manifestPath = join(basePath, 'pipeline.md');
  console.log(`\n🔍 Validating pipeline: ${pipelineName}\n`);
  // Check if pipeline directory exists
  if (!existsSync(basePath)) {
    console.error(`❌ Pipeline directory not found: ${basePath}`);
    process.exit(1);
  }
  // Load and validate manifest. The simple frontmatter parser above only
  // yields scalar string values, so guard against non-array `stages`.
  let stages: string[] = [];
  if (existsSync(manifestPath)) {
    const manifestContent = readFileSync(manifestPath, 'utf-8');
    const { frontmatter } = parseFrontmatter(manifestContent);
    const raw = frontmatter.stages;
    stages = Array.isArray(raw) ? raw.map((s: any) => typeof s === 'string' ? s : s.name) : [];
  }
  // If no manifest (or no parsable stage list), auto-detect agents
  if (stages.length === 0) {
    const { readdirSync } = require('fs');
    const files = readdirSync(basePath).filter((f: string) => f.endsWith('.md') && f !== 'pipeline.md');
    stages = files.map((f: string) => f.replace('.md', ''));
  }
  console.log(`📋 Stages: ${stages.join(' → ')}\n`);
  // Validate each agent
  let hasErrors = false;
  for (const stage of stages) {
    const agentPath = join(basePath, `${stage}.md`);
    const { valid, errors } = validateAgentFile(agentPath);
    if (valid) {
      console.log(`✅ ${stage}`);
    } else {
      console.log(`❌ ${stage}`);
      for (const error of errors) {
        console.log(`   ${error}`);
      }
      hasErrors = true;
    }
  }
  // Check for scripts
  const scriptsPath = join(process.cwd(), 'scripts', `run-${pipelineName}.ts`);
  if (existsSync(scriptsPath)) {
    console.log(`\n✅ Pipeline script: ${scriptsPath}`);
  } else {
    console.log(`\n⚠️ Missing pipeline script: ${scriptsPath}`);
    console.log(`   Create this script to orchestrate the agents.`);
  }
  console.log('');
  if (hasErrors) {
    console.log('❌ Pipeline validation failed\n');
    process.exit(1);
  } else {
    console.log('✅ Pipeline validation passed!\n');
  }
}
// Main
const pipelineName = process.argv[2];
if (!pipelineName) {
  console.log('Usage: validate-pipeline.ts <pipeline-name>');
  console.log('Example: validate-pipeline.ts ai-news-tweet');
  process.exit(1);
}
validatePipeline(pipelineName);