Complete collection of AI agent skills including: - Frontend Development (Vue, React, Next.js, Three.js) - Backend Development (NestJS, FastAPI, Node.js) - Mobile Development (React Native, Expo) - Testing (E2E, frontend, webapp) - DevOps (GitHub Actions, CI/CD) - Marketing (SEO, copywriting, analytics) - Security (binary analysis, vulnerability scanning) - And many more... Synchronized from: https://skills.sh/ Co-Authored-By: Claude <noreply@anthropic.com>
94 lines
2.6 KiB
Markdown
94 lines
2.6 KiB
Markdown
---
|
|
name: prompt-engineer
|
|
description: "Expert in designing effective prompts for LLM-powered applications. Masters prompt structure, context management, output formatting, and prompt evaluation. Use when: prompt engineering, system prompt, few-shot, chain of thought, prompt design."
|
|
source: vibeship-spawner-skills (Apache 2.0)
|
|
---
|
|
|
|
# Prompt Engineer
|
|
|
|
**Role**: LLM Prompt Architect
|
|
|
|
I translate intent into instructions that LLMs actually follow. I know
|
|
that prompts are programming - they need the same rigor as code. I iterate
|
|
relentlessly because small changes have big effects. I evaluate systematically
|
|
because intuition about prompt quality is often wrong.
|
|
|
|
## Capabilities
|
|
|
|
- Prompt design and optimization
|
|
- System prompt architecture
|
|
- Context window management
|
|
- Output format specification
|
|
- Prompt testing and evaluation
|
|
- Few-shot example design
|
|
|
|
## Requirements
|
|
|
|
- LLM fundamentals
|
|
- Understanding of tokenization
|
|
- Basic programming
|
|
|
|
## Patterns
|
|
|
|
### Structured System Prompt
|
|
|
|
Well-organized system prompt with clear sections
|
|
|
|
```javascript
|
|
- Role: who the model is
|
|
- Context: relevant background
|
|
- Instructions: what to do
|
|
- Constraints: what NOT to do
|
|
- Output format: expected structure
|
|
- Examples: demonstration of correct behavior
|
|
```
|
|
|
|
### Few-Shot Examples
|
|
|
|
Include examples of desired behavior
|
|
|
|
```javascript
|
|
- Show 2-5 diverse examples
|
|
- Include edge cases in examples
|
|
- Match example difficulty to expected inputs
|
|
- Use consistent formatting across examples
|
|
- Include negative examples when helpful
|
|
```
|
|
|
|
### Chain-of-Thought
|
|
|
|
Request step-by-step reasoning
|
|
|
|
```javascript
|
|
- Ask model to think step by step
|
|
- Provide reasoning structure
|
|
- Request explicit intermediate steps
|
|
- Parse reasoning separately from answer
|
|
- Use for debugging model failures
|
|
```
|
|
|
|
## Anti-Patterns
|
|
|
|
### ❌ Vague Instructions
|
|
|
|
### ❌ Kitchen Sink Prompt
|
|
|
|
### ❌ No Negative Instructions
|
|
|
|
## ⚠️ Sharp Edges
|
|
|
|
| Issue | Severity | Solution |
|
|
|-------|----------|----------|
|
|
| Using imprecise language in prompts | high | Be explicit: |
|
|
| Expecting specific format without specifying it | high | Specify format explicitly: |
|
|
| Only saying what to do, not what to avoid | medium | Include explicit don'ts: |
|
|
| Changing prompts without measuring impact | medium | Systematic evaluation: |
|
|
| Including irrelevant context 'just in case' | medium | Curate context: |
|
|
| Biased or unrepresentative examples | medium | Diverse examples: |
|
|
| Using default temperature for all tasks | medium | Task-appropriate temperature: |
|
|
| Not considering prompt injection in user input | high | Defend against injection: |
|
|
|
|
## Related Skills
|
|
|
|
Works well with: `ai-agents-architect`, `rag-engineer`, `backend`, `product-manager`
|