diff --git a/README.md b/README.md
index 748115c..240cf2c 100644
--- a/README.md
+++ b/README.md
@@ -28,25 +28,61 @@
## Skills Index
+### AI & Automation
+| Skill | Description | Status |
+|-------|-------------|--------|
+| [Claw Setup](./skills/claw-setup/) | End-to-end AI Agent deployment (OpenClaw, NanoBot, PicoClaw, ZeroClaw) | ✅ Production Ready |
+
+### System Administration
| Skill | Description | Status |
|-------|-------------|--------|
| [RAM Optimizer](./skills/ram-optimizer/) | ZRAM-based memory compression for Linux servers | ✅ Production Ready |
-| [Secret Scanner](./skills/secret-scanner/) | Detect leaked credentials in codebases | ✅ Production Ready |
-| [Git Archaeologist](./skills/git-archaeologist/) | Analyze repository history and find bugs | ✅ Production Ready |
| [Backup Automator](./skills/backup-automator/) | Automated encrypted backups to cloud storage | ✅ Production Ready |
-| [Domain Manager](./skills/domain-manager/) | DNS management across multiple providers | ✅ Production Ready |
-| [SSL Guardian](./skills/ssl-guardian/) | SSL certificate automation and monitoring | ✅ Production Ready |
| [Log Sentinel](./skills/log-sentinel/) | Log analysis and anomaly detection | ✅ Production Ready |
+
+### Security
+| Skill | Description | Status |
+|-------|-------------|--------|
+| [Secret Scanner](./skills/secret-scanner/) | Detect leaked credentials in codebases | ✅ Production Ready |
+| [SSL Guardian](./skills/ssl-guardian/) | SSL certificate automation and monitoring | ✅ Production Ready |
+
+### Development
+| Skill | Description | Status |
+|-------|-------------|--------|
+| [Git Archaeologist](./skills/git-archaeologist/) | Analyze repository history and find bugs | ✅ Production Ready |
+
+### Infrastructure
+| Skill | Description | Status |
+|-------|-------------|--------|
+| [Domain Manager](./skills/domain-manager/) | DNS management across multiple providers | ✅ Production Ready |
+
---
## Quick Start
Each skill works with Claude Code CLI. Simply ask:
+```
+"Setup Claw AI assistant on my server"
"Run ram optimizer on my server"
"Scan this directory for leaked secrets"
"Setup automated backups to S3"
+```
+
+---
+
+## Featured: Claw Setup
+
+Professional deployment of AI Agent platforms:
+
+```
+OpenClaw  →  Full-featured, 1700+ plugins, 215K stars
+NanoBot   →  Python, 4K lines, research-ready
+PicoClaw  →  Go, <10MB, $10 hardware
+ZeroClaw  →  Rust, <5MB, 10ms startup
+```
+
+Usage: `"Setup OpenClaw on my VPS with security hardening"`
---
diff --git a/skills/claw-setup/README.md b/skills/claw-setup/README.md
new file mode 100644
index 0000000..152ea9c
--- /dev/null
+++ b/skills/claw-setup/README.md
@@ -0,0 +1,482 @@
+
+
+# Claw Setup
+
+### Professional AI Agent Deployment Made Simple
+
+**End-to-end setup of OpenClaw, NanoBot, PicoClaw, ZeroClaw, or NanoClaw with security hardening and personal customization**
+
+---
+
+
+✨ Autonomously developed by GLM 5 Advanced Coding Model
+
+⚠️ Disclaimer: Test in a test environment prior to using on any live system
+
+---
+
+
+
+## Overview
+
+Claw Setup handles the complete deployment of AI Agent platforms from the Claw family - from selection to production - with security best practices and personalized configuration through interactive brainstorming.
+
+```
+                        CLAW SETUP WORKFLOW
+
+   Phase 1          Phase 2          Phase 3           Phase 4
+
+ +---------+      +---------+      +-----------+      +--------+
+ | SELECT  | ---> | INSTALL | ---> | CUSTOMIZE | ---> | DEPLOY |
+ +---------+      +---------+      +-----------+      +--------+
+      |                |                 |                 |
+      v                v                 v                 v
+  Compare          Clone &          Brainstorm        Systemd
+  platforms        harden           your use case     & monitor
+                   security
+
+ SUPPORTED PLATFORMS
+   OpenClaw   Full-featured, 1700+ plugins, 215K stars
+   NanoBot    Python, 4K lines, research-ready
+   PicoClaw   Go, <10MB, $10 hardware
+   ZeroClaw   Rust, <5MB, 10ms startup
+   NanoClaw   TypeScript, WhatsApp focused
+```
+
+## Platform Comparison
+
+| Metric | OpenClaw | NanoBot | PicoClaw | ZeroClaw | NanoClaw |
+|--------|----------|---------|----------|----------|----------|
+| Language | TS | Python | Go | Rust | TS |
+| Memory | >1GB | ~100MB | <10MB | <5MB | ~50MB |
+| Startup | ~500s | ~30s | ~1s | <10ms | ~5s |
+| Binary Size | ~28MB | N/A | ~8MB | 3.4MB | ~15MB |
+| GitHub Stars | 215K+ | 22K | 15K | 10K | 5K |
+| Plugins | 1700+ | ~50 | ~20 | ~15 | ~10 |
+| Learning Curve | Medium | Easy | Easy | Medium | Easy |
+
+**Best for:**
+
+- **OpenClaw** → Full desktop AI, extensive integrations
+- **NanoBot** → Research, customization, Python developers
+- **PicoClaw** → Embedded, low-resource, $10 hardware
+- **ZeroClaw** → Maximum performance, security-critical
+- **NanoClaw** → WhatsApp automation, messaging bots
+
+## Decision Flowchart
+
+```
+              Need AI Agent?
+                    |
+                    v
+           Memory constrained?
+           (<1GB RAM available)
+              /           \
+            YES            NO
+             |              |
+             v              v
+       Need <10MB?     Want plugins?
+        /      \         /      \
+      YES       NO     YES       NO
+       |        |       |        |
+       v        v       v        v
+   ZeroClaw PicoClaw OpenClaw  NanoBot
+    (Rust)    (Go)    (Full)  (Python)
+```
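The same decision logic can be captured as a small shell helper, so the choice is reproducible in scripts. This is a sketch of the flowchart above, not a shipped tool:

```shell
#!/bin/sh
# choose_platform MEM_CONSTRAINED NEED_TINY WANT_PLUGINS
# Each argument is "yes" or "no"; echoes the platform the flowchart selects.
choose_platform() {
    mem_constrained=$1; need_tiny=$2; want_plugins=$3
    if [ "$mem_constrained" = "yes" ]; then
        # Under 1GB RAM: only the tiny runtimes fit
        if [ "$need_tiny" = "yes" ]; then echo "ZeroClaw"; else echo "PicoClaw"; fi
    else
        # Plenty of RAM: trade plugin ecosystem against simplicity
        if [ "$want_plugins" = "yes" ]; then echo "OpenClaw"; else echo "NanoBot"; fi
    fi
}

choose_platform yes yes no   # prints "ZeroClaw"
```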
+
+## Quick Start
+
+### Option 1: Interactive Setup (Recommended)
+```
+"Setup Claw AI assistant on my server"
+"Help me choose and install an AI agent platform"
+```
+
+### Option 2: Direct Platform Selection
+```
+"Setup OpenClaw with all security features"
+"Install ZeroClaw on my VPS"
+"Deploy NanoBot for research use"
+```
+
+## Installation Guides
+
+### OpenClaw (Full Featured)
+```bash
+# Prerequisites
+sudo apt update && sudo apt install -y nodejs npm git
+
+# Clone official repo
+git clone https://github.com/openclaw/openclaw.git
+cd openclaw
+
+# Install dependencies
+npm install
+
+# Run setup wizard
+npm run setup
+
+# Configure environment
+cp .env.example .env
+nano .env # Add your API keys
+
+# Start
+npm run start
+```
+
+### NanoBot (Python Lightweight)
+```bash
+# Quick install via pip
+pip install nanobot-ai
+
+# Initialize
+nanobot onboard
+
+# Configure
+cat > ~/.nanobot/config.json << 'CONFIG'
+{
+  "providers": {
+    "openrouter": { "apiKey": "sk-or-v1-xxx" }
+  },
+  "agents": {
+    "defaults": { "model": "anthropic/claude-opus-4-5" }
+  }
+}
+CONFIG
+
+# Start gateway
+nanobot gateway
+```
+
+### PicoClaw (Go Ultra-Light)
+```bash
+# Download latest release
+wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
+chmod +x picoclaw-linux-amd64
+sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
+
+# Create config
+mkdir -p ~/.config/picoclaw
+picoclaw config init
+
+# Start
+picoclaw gateway
+```
+
+### ZeroClaw (Rust Minimal)
+```bash
+# Download latest release
+wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
+chmod +x zeroclaw-linux-amd64
+sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
+
+# Initialize config
+zeroclaw init
+
+# Migrate from OpenClaw (optional)
+zeroclaw migrate openclaw --dry-run
+
+# Start
+zeroclaw gateway
+```
+
+## Security Hardening
+
+### 1. Secrets Management
+```bash
+# Never hardcode API keys - use environment variables
+export ANTHROPIC_API_KEY="your-key"
+export OPENROUTER_API_KEY="your-key"
+
+# Add to shell profile for persistence
+echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
+
+# Use encrypted config files
+mkdir -p ~/.config/claw
+chmod 700 ~/.config/claw
+```
+
+### 2. Network Security
+```bash
+# Bind to localhost only. In config.json, set:
+#   "server": { "host": "127.0.0.1", "port": 3000 }
+
+# Use an nginx reverse proxy for external access
+sudo certbot --nginx -d claw.yourdomain.com
+```
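A minimal reverse-proxy server block of the kind certbot manages might look like this (the domain and certificate paths are placeholders for your own):

```nginx
server {
    listen 443 ssl;
    server_name claw.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/claw.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/claw.yourdomain.com/privkey.pem;

    location / {
        # Forward to the agent bound on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```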
+
+### 3. Systemd Hardened Service
+```bash
+# /etc/systemd/system/claw.service
+[Unit]
+Description=Claw AI Assistant
+After=network.target
+
+[Service]
+Type=simple
+User=claw
+Group=claw
+WorkingDirectory=/opt/claw
+ExecStart=/usr/local/bin/claw gateway
+Restart=on-failure
+
+# Security hardening
+NoNewPrivileges=true
+PrivateTmp=true
+ProtectSystem=strict
+ProtectHome=true
+ReadWritePaths=/opt/claw/data
+# Load secrets from a root-owned file (chmod 600) rather than the unit file
+EnvironmentFile=/etc/claw/claw.env
+
+[Install]
+WantedBy=multi-user.target
+```
+
+```bash
+# Enable service
+sudo systemctl daemon-reload
+sudo systemctl enable --now claw
+```
+
+## Brainstorm Session
+
+After installation, we'll explore your needs:
+
+### Use Case Discovery
+```
+Q: What tasks should your AI handle?
+   [ ] Code assistance & development
+   [ ] Research & information gathering
+   [ ] Personal productivity (calendar, reminders)
+   [ ] Content creation & writing
+   [ ] Data analysis & visualization
+   [ ] Home automation
+   [ ] Customer support / chatbot
+   [ ] Other: _______________
+```
+
+### Model Selection
+```
+Q: Which AI model(s) to use?
+
+   [ ] Claude (Anthropic) - Best reasoning
+   [ ] GPT-4 (OpenAI) - General purpose
+   [ ] Gemini (Google) - Multimodal
+   [ ] Local models (Ollama) - Privacy-first
+   [ ] OpenRouter - Multi-model access
+```
+
+### Integration Planning
+```
+Q: Which platforms to connect?
+
+   Messaging:
+   [ ] Telegram  [ ] Discord  [ ] WhatsApp  [ ] Slack
+
+   Calendar:
+   [ ] Google  [ ] Outlook  [ ] Apple  [ ] None
+
+   Storage:
+   [ ] Local  [ ] Google Drive  [ ] Dropbox  [ ] S3
+
+   APIs:
+   [ ] Custom REST APIs
+   [ ] Webhooks
+   [ ] Database connections
+```
+
+### Agent Personality
+```
+Q: How should your agent behave?
+
+   Tone:
+   [ ] Professional  [ ] Casual  [ ] Formal  [ ] Playful
+
+   Proactivity:
+   [ ] Reactive (responds only when asked)
+   [ ] Proactive (suggests, reminds, initiates)
+
+   Memory:
+   [ ] Session only (fresh each chat)
+   [ ] Persistent (remembers everything)
+   [ ] Selective (configurable retention)
+```
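Depending on the platform, these answers typically end up in the agent configuration. The field names below are illustrative only, not a documented schema:

```json
{
  "agent": {
    "tone": "professional",
    "proactive": false,
    "memory": {
      "mode": "selective",
      "retentionDays": 30
    }
  }
}
```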
+
+## Architecture
+
+```
+                  DEPLOYED ARCHITECTURE
+
+                     +----------+
+                     | Internet |
+                     +----+-----+
+                          |
+                 +--------v--------+
+                 |   nginx/HTTPS   |
+                 | (Reverse Proxy) |
+                 +--------+--------+
+                          |
+ ......................localhost............................
+ :                        |                                :
+ :  +-----------+  +------v------+  +--------------+       :
+ :  |  Config   |  | CLAW ENGINE |  | Data Storage |       :
+ :  | ~/.config |  |  (Gateway)  |  |   ~/claw/    |       :
+ :  |   /claw   |  | Port: 3000  |  +--------------+       :
+ :  +-----------+  +------+------+                         :
+ :                        |                                :
+ :        +---------------+---------------+                :
+ :        |               |               |                :
+ :  +-----v------+  +-----v-----+   +-----v-----+          :
+ :  |  LLM APIs  |  |   Tools   |   |  Memory   |          :
+ :  | Claude/GPT |  |  Plugins  |   |  Context  |          :
+ :  +------------+  |  Skills   |   |   Store   |          :
+ :                  +-----------+   +-----------+          :
+ :..........................................................
+```
+
+## Post-Setup Checklist
+
+```
+[ ] API keys configured securely
+[ ] Network binding verified (localhost)
+[ ] Firewall configured
+[ ] SSL certificate installed (if external)
+[ ] Systemd service enabled
+[ ] Logs configured and rotating
+[ ] Backup strategy in place
+[ ] Test conversation successful
+[ ] Custom agents created
+[ ] Integrations connected
+```
+
+---
+
+Learn more about GLM 5 Advanced Coding Model
+
+
+## AI Provider Configuration
+
+### Supported Providers
+
+**Direct Providers**
+- Anthropic (Claude)
+- OpenAI (GPT-4, o1, o3)
+- Google (Gemini 2.0)
+- Mistral
+- DeepSeek
+- xAI (Grok)
+
+**Gateways & Aggregators**
+- OpenRouter (200+ models)
+- Replicate
+
+**Fast Inference**
+- Groq (ultra-fast)
+- Cerebras (fastest)
+- Together AI
+
+**Local/Self-Hosted**
+- Ollama
+- LM Studio
+- vLLM
+
+### Model Selection Options
+
+**Option A: Fetch from Provider**
+```bash
+# Automatically fetch available models
+"Fetch available models from OpenRouter"
+"Show me Groq models"
+"What models are available via OpenAI?"
+```
+
+**Option B: Custom Model Input**
+```
+"Add custom model: my-org/fine-tuned-llama"
+"Configure local Ollama model: llama3.2:70b"
+"Use fine-tuned GPT: ft:gpt-4o:org:custom"
+```
+
+### Multi-Provider Setup
+
+```json
+{
+ "providers": {
+ "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
+ "openai": { "apiKey": "${OPENAI_API_KEY}" },
+ "google": { "apiKey": "${GOOGLE_API_KEY}" },
+ "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
+ "groq": { "apiKey": "${GROQ_API_KEY}" },
+ "ollama": { "baseURL": "http://localhost:11434" }
+ },
+ "models": {
+ "default": "anthropic/claude-sonnet-4-5",
+ "fast": "groq/llama-3.3-70b-versatile",
+ "local": "ollama/llama3.2:70b"
+ }
+}
+```
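Whether a given platform expands `${VAR}` placeholders itself varies by implementation. A portable fallback is to let the shell perform the substitution when the config is written, using an unquoted heredoc delimiter:

```shell
# The unquoted EOF delimiter makes the shell substitute ${...} at write time.
export OPENROUTER_API_KEY="sk-or-demo"
mkdir -p /tmp/claw-demo
cat > /tmp/claw-demo/config.json <<EOF
{
  "providers": {
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" }
  }
}
EOF
grep -c "sk-or-demo" /tmp/claw-demo/config.json   # prints "1"
```

Note the trade-off: the key is then stored in plaintext in the file, so the `chmod 600` advice above applies.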
+
+### Provider Comparison
+
+| Provider | Best For | Speed | Cost |
+|----------|----------|-------|------|
+| Claude | Reasoning, coding | Medium | $$$ |
+| GPT-4o | General purpose | Fast | $$$ |
+| Gemini | Multimodal | Fast | $$ |
+| Groq | Fastest inference | Ultra-fast | $ |
+| OpenRouter | Model variety | Varies | $-$$$ |
+| Ollama | Privacy, free | Depends on HW | Free |
+
+---
diff --git a/skills/claw-setup/SKILL.md b/skills/claw-setup/SKILL.md
new file mode 100644
index 0000000..7064f00
--- /dev/null
+++ b/skills/claw-setup/SKILL.md
@@ -0,0 +1,464 @@
+---
+name: claw-setup
+description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform from the Claw family (OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw).
+version: 1.0.0
+---
+
+# Claw Setup Skill
+
+End-to-end professional setup of AI Agent platforms from the Claw family with security hardening and personal customization through interactive brainstorming.
+
+## Supported Platforms
+
+| Platform | Language | Memory | Startup | Best For |
+|----------|----------|--------|---------|----------|
+| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, plugin ecosystem |
+| **NanoBot** | Python | ~100MB | ~30s | Research, easy customization |
+| **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
+| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
+| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
+
+## What This Skill Does
+
+### Phase 1: Platform Selection
+- Interactive comparison of all platforms
+- Hardware requirements check
+- Use case matching
+
+### Phase 2: Secure Installation
+- Clone from official GitHub repos
+- Security hardening (secrets management, network isolation)
+- Environment configuration
+- API key setup with best practices
+
+### Phase 3: Personal Customization
+- Interactive brainstorming session
+- Custom agent templates
+- Integration setup (messaging, calendar, etc.)
+- Memory and context configuration
+
+### Phase 4: Verification & Deployment
+- Health checks
+- Test runs
+- Production deployment options
+
+## GitHub Repositories
+
+```
+OpenClaw: https://github.com/openclaw/openclaw
+NanoBot: https://github.com/HKUDS/nanobot
+PicoClaw: https://github.com/sipeed/picoclaw
+ZeroClaw: https://github.com/zeroclaw-labs/zeroclaw
+NanoClaw: https://github.com/nanoclaw/nanoclaw
+```
+
+## Usage Examples
+
+```
+"Setup OpenClaw on my server"
+"I want to install NanoBot for personal use"
+"Help me choose between ZeroClaw and PicoClaw"
+"Deploy an AI assistant with security best practices"
+"Setup Claw framework with my custom requirements"
+```
+
+## Installation Commands by Platform
+
+### OpenClaw (Full Featured)
+```bash
+# Prerequisites
+sudo apt install -y nodejs npm
+
+# Clone and setup
+git clone https://github.com/openclaw/openclaw.git
+cd openclaw
+npm install
+npm run setup
+
+# Configure
+cp .env.example .env
+# Edit .env with API keys
+
+# Run
+npm run start
+```
+
+### NanoBot (Python Lightweight)
+```bash
+# Quick install
+pip install nanobot-ai
+
+# Or from source
+git clone https://github.com/HKUDS/nanobot.git
+cd nanobot
+pip install -e .
+
+# Setup
+nanobot onboard
+nanobot gateway
+```
+
+### PicoClaw (Go Ultra-Light)
+```bash
+# Download binary
+wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
+chmod +x picoclaw-linux-amd64
+sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
+
+# Or build from source
+git clone https://github.com/sipeed/picoclaw.git
+cd picoclaw
+go build -o picoclaw
+
+# Run
+picoclaw gateway
+```
+
+### ZeroClaw (Rust Minimal)
+```bash
+# Download binary
+wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
+chmod +x zeroclaw-linux-amd64
+sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
+
+# Or from source
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+cargo build --release
+
+# Run
+zeroclaw gateway
+```
+
+## Security Hardening
+
+### Secrets Management
+```bash
+# Never commit .env files
+echo ".env" >> .gitignore
+echo "*.pem" >> .gitignore
+
+# Use environment variables
+export ANTHROPIC_API_KEY="your-key"
+export OPENROUTER_API_KEY="your-key"
+
+# Or use secret files with restricted permissions
+mkdir -p ~/.config/claw
+cat > ~/.config/claw/config.json << 'CONFIG'
+{
+ "providers": {
+ "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" }
+ }
+}
+CONFIG
+chmod 600 ~/.config/claw/config.json
+```
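The `chmod 600` step can be verified in the same shell session. `stat -c` is the GNU coreutils form, so this applies to Linux targets:

```shell
# Create a scratch config and confirm owner-only permissions
mkdir -p /tmp/claw-perms
touch /tmp/claw-perms/config.json
chmod 600 /tmp/claw-perms/config.json
stat -c '%a' /tmp/claw-perms/config.json   # prints "600"
```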
+
+### Network Security
+```nginx
+# Bind the agent to localhost only. In its config, set:
+#   "server": { "host": "127.0.0.1", "port": 3000 }
+# Then expose it externally via an nginx reverse proxy:
+server {
+    listen 443 ssl;
+    server_name claw.yourdomain.com;
+
+    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
+
+    location / {
+        proxy_pass http://127.0.0.1:3000;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+    }
+}
+```
+
+### Systemd Service
+```bash
+# /etc/systemd/system/claw.service
+[Unit]
+Description=Claw AI Assistant
+After=network.target
+
+[Service]
+Type=simple
+User=claw
+Group=claw
+WorkingDirectory=/opt/claw
+ExecStart=/usr/local/bin/claw gateway
+Restart=on-failure
+RestartSec=10
+
+# Security hardening
+NoNewPrivileges=true
+PrivateTmp=true
+ProtectSystem=strict
+ProtectHome=true
+ReadWritePaths=/opt/claw/data
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## Brainstorm Session Topics
+
+1. **Use Case Discovery**
+ - What tasks should the AI handle?
+ - Which platforms/channels to integrate?
+ - Automation vs. interactive preferences?
+
+2. **Model Selection**
+ - Claude, GPT, Gemini, or local models?
+ - Cost vs. performance tradeoffs?
+ - Privacy requirements?
+
+3. **Integration Planning**
+ - Messaging: Telegram, Discord, WhatsApp, Slack?
+ - Calendar: Google, Outlook, Apple?
+ - Storage: Local, cloud, hybrid?
+ - APIs to connect?
+
+4. **Custom Agent Design**
+ - Personality and tone?
+ - Domain expertise areas?
+ - Memory and context preferences?
+ - Proactive vs. reactive behavior?
+
+5. **Deployment Strategy**
+ - Local machine, VPS, or cloud?
+ - High availability requirements?
+ - Backup and recovery needs?
+
+## AI Provider Configuration
+
+### Supported Providers
+
+| Provider | Type | API Base | Models |
+|----------|------|----------|--------|
+| **Anthropic** | Direct | api.anthropic.com | Claude 3.5/4/Opus |
+| **OpenAI** | Direct | api.openai.com | GPT-4, GPT-4o, o1, o3 |
+| **Google** | Direct | generativelanguage.googleapis.com | Gemini 2.0/1.5 |
+| **OpenRouter** | Gateway | openrouter.ai/api | 200+ models |
+| **Together AI** | Direct | api.together.xyz | Llama, Mistral, Qwen |
+| **Groq** | Direct | api.groq.com | Llama, Mixtral (fast) |
+| **Cerebras** | Direct | api.cerebras.ai | Llama (fastest) |
+| **DeepSeek** | Direct | api.deepseek.com | DeepSeek V3/R1 |
+| **Mistral** | Direct | api.mistral.ai | Mistral, Codestral |
+| **xAI** | Direct | api.x.ai | Grok |
+| **Replicate** | Gateway | api.replicate.com | Various |
+| **Local** | Self-hosted | localhost | Ollama, LM Studio |
+
+### Fetch Available Models
+
+```bash
+# OpenRouter - List all models
+curl -s https://openrouter.ai/api/v1/models \
+ -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'
+
+# OpenAI - List models
+curl -s https://api.openai.com/v1/models \
+ -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
+
+# Anthropic - Available models (static list)
+# claude-opus-4-5-20250219
+# claude-sonnet-4-5-20250219
+# claude-3-5-sonnet-20241022
+# claude-3-5-haiku-20241022
+
+# Google Gemini
+curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | jq '.models[].name'
+
+# Groq - List models
+curl -s https://api.groq.com/openai/v1/models \
+ -H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'
+
+# Together AI
+curl -s https://api.together.xyz/v1/models \
+ -H "Authorization: Bearer $TOGETHER_API_KEY" | jq '.data[].id'
+
+# Ollama (local)
+curl -s http://localhost:11434/api/tags | jq '.models[].name'
+```
+
+### Configuration Templates
+
+#### Multi-Provider Config
+```json
+{
+ "providers": {
+ "anthropic": {
+ "apiKey": "${ANTHROPIC_API_KEY}",
+ "baseURL": "https://api.anthropic.com"
+ },
+ "openai": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "google": {
+ "apiKey": "${GOOGLE_API_KEY}",
+ "baseURL": "https://generativelanguage.googleapis.com/v1"
+ },
+ "openrouter": {
+ "apiKey": "${OPENROUTER_API_KEY}",
+ "baseURL": "https://openrouter.ai/api/v1"
+ },
+ "groq": {
+ "apiKey": "${GROQ_API_KEY}",
+ "baseURL": "https://api.groq.com/openai/v1"
+ },
+ "together": {
+ "apiKey": "${TOGETHER_API_KEY}",
+ "baseURL": "https://api.together.xyz/v1"
+ },
+ "deepseek": {
+ "apiKey": "${DEEPSEEK_API_KEY}",
+ "baseURL": "https://api.deepseek.com/v1"
+ },
+ "mistral": {
+ "apiKey": "${MISTRAL_API_KEY}",
+ "baseURL": "https://api.mistral.ai/v1"
+ },
+ "xai": {
+ "apiKey": "${XAI_API_KEY}",
+ "baseURL": "https://api.x.ai/v1"
+ },
+ "ollama": {
+ "baseURL": "http://localhost:11434/v1",
+ "apiKey": "ollama"
+ }
+ },
+ "agents": {
+ "defaults": {
+ "model": "anthropic/claude-sonnet-4-5",
+ "temperature": 0.7,
+ "maxTokens": 4096
+ }
+ }
+}
+```
+
+#### Custom Model Configuration
+```json
+{
+ "customModels": {
+ "my-fine-tuned-model": {
+ "provider": "openai",
+ "modelId": "ft:gpt-4o:my-org:custom:suffix",
+ "displayName": "My Custom GPT-4o"
+ },
+ "local-llama": {
+ "provider": "ollama",
+ "modelId": "llama3.2:70b",
+ "displayName": "Local Llama 3.2 70B"
+ },
+ "openrouter-model": {
+ "provider": "openrouter",
+ "modelId": "meta-llama/llama-3.3-70b-instruct",
+ "displayName": "Llama 3.3 70B via OpenRouter"
+ }
+ }
+}
+```
+
+### Provider Selection Flow
+
+```
+1. Ask user which providers they have API keys for:
+   [ ] Anthropic (Claude)
+   [ ] OpenAI (GPT)
+   [ ] Google (Gemini)
+   [ ] OpenRouter (Multi-model)
+   [ ] Together AI
+   [ ] Groq (Fast inference)
+   [ ] Cerebras (Fastest)
+   [ ] DeepSeek
+   [ ] Mistral
+   [ ] xAI (Grok)
+   [ ] Local (Ollama/LM Studio)
+
+2. For each selected provider:
+ - Prompt for API key
+ - Fetch available models (if API supports)
+ - Let user select or input custom model
+
+3. Generate secure configuration:
+ - Store keys in environment variables
+ - Create config.json with model selections
+ - Set up key rotation reminders
+
+4. Test connectivity:
+ - Send test prompt to each configured provider
+ - Verify response
+```
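Step 1 above can be automated by checking which key variables are already exported. A sketch; extend the argument list with whatever providers you support:

```shell
# check_keys VAR... - report which API-key variables are set in the environment
check_keys() {
    for var in "$@"; do
        if [ -n "$(printenv "$var")" ]; then
            echo "$var: set"
        else
            echo "$var: missing"
        fi
    done
}

check_keys ANTHROPIC_API_KEY OPENAI_API_KEY OPENROUTER_API_KEY
```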
+
+### Model Fetching Script
+
+```bash
+#!/bin/bash
+# fetch-models.sh - Fetch available models from providers
+
+echo "=== AI Provider Model Fetcher ==="
+
+# OpenRouter
+if [ -n "$OPENROUTER_API_KEY" ]; then
+  echo -e "\nOpenRouter Models:"
+  curl -s https://openrouter.ai/api/v1/models \
+    -H "Authorization: Bearer $OPENROUTER_API_KEY" | \
+    jq -r '.data[] | "  • \(.id) - \(.name // .id)"' | head -20
+fi
+
+# OpenAI
+if [ -n "$OPENAI_API_KEY" ]; then
+  echo -e "\nOpenAI Models:"
+  curl -s https://api.openai.com/v1/models \
+    -H "Authorization: Bearer $OPENAI_API_KEY" | \
+    jq -r '.data[] | select(.id | contains("gpt")) | "  • \(.id)"' | sort -u
+fi
+
+# Groq
+if [ -n "$GROQ_API_KEY" ]; then
+  echo -e "\nGroq Models:"
+  curl -s https://api.groq.com/openai/v1/models \
+    -H "Authorization: Bearer $GROQ_API_KEY" | \
+    jq -r '.data[].id' | sed 's/^/  • /'
+fi
+
+# Ollama (local)
+echo -e "\nOllama Models (local):"
+curl -s http://localhost:11434/api/tags 2>/dev/null | \
+  jq -r '.models[].name' | sed 's/^/  • /' || echo "  Ollama not running"
+
+# Together AI
+if [ -n "$TOGETHER_API_KEY" ]; then
+  echo -e "\nTogether AI Models:"
+  curl -s https://api.together.xyz/v1/models \
+    -H "Authorization: Bearer $TOGETHER_API_KEY" | \
+    jq -r '.data[].id' | head -20 | sed 's/^/  • /'
+fi
+
+echo -e "\n✅ Model fetch complete"
+```
+
+### Custom Model Input
+
+When user selects "Custom Model", prompt for:
+1. **Provider**: Which provider hosts this model
+2. **Model ID**: Exact model identifier
+3. **Display Name**: Friendly name for UI
+4. **Context Window**: Max tokens (optional)
+5. **Capabilities**: Text, vision, code, etc. (optional)
+
+Example custom model entry:
+```json
+{
+ "provider": "openrouter",
+ "modelId": "custom-org/my-fine-tuned-v2",
+ "displayName": "My Fine-Tuned Model v2",
+ "contextWindow": 128000,
+ "capabilities": ["text", "code"]
+}
+```
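Before writing such an entry into the config, the required fields can be checked with `jq` (already used by the fetch scripts); the `-e` flag makes the exit status reflect the boolean result:

```shell
# Prints "true" (and exits 0) only if all required fields are present
entry='{"provider":"openrouter","modelId":"custom-org/my-fine-tuned-v2","displayName":"My Fine-Tuned Model v2"}'
echo "$entry" | jq -e 'has("provider") and has("modelId") and has("displayName")'
```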
diff --git a/skills/claw-setup/scripts/fetch-models.sh b/skills/claw-setup/scripts/fetch-models.sh
new file mode 100755
index 0000000..2a67d70
--- /dev/null
+++ b/skills/claw-setup/scripts/fetch-models.sh
@@ -0,0 +1,111 @@
+#!/bin/bash
+# fetch-models.sh - Fetch available models from AI providers
+# Usage: ./fetch-models.sh [provider]
+
+set -e
+
+GREEN='\033[0;32m'
+BLUE='\033[0;34m'
+NC='\033[0m'
+
+echo -e "${BLUE}=================================================${NC}"
+echo -e "${BLUE}            AI PROVIDER MODEL FETCHER            ${NC}"
+echo -e "${BLUE}=================================================${NC}"
+
+fetch_openrouter() {
+    if [ -n "$OPENROUTER_API_KEY" ]; then
+        echo -e "\n${GREEN}OpenRouter Models:${NC}"
+        curl -s https://openrouter.ai/api/v1/models \
+            -H "Authorization: Bearer $OPENROUTER_API_KEY" | \
+            jq -r '.data[] | "  • \(.id)"' | head -30
+    else
+        echo -e "\n⚠️  OPENROUTER_API_KEY not set"
+    fi
+}
+
+fetch_openai() {
+    if [ -n "$OPENAI_API_KEY" ]; then
+        echo -e "\n${GREEN}OpenAI Models:${NC}"
+        curl -s https://api.openai.com/v1/models \
+            -H "Authorization: Bearer $OPENAI_API_KEY" | \
+            jq -r '.data[] | select(.id | test("gpt|o1|o3")) | "  • \(.id)"' | sort -u
+    else
+        echo -e "\n⚠️  OPENAI_API_KEY not set"
+    fi
+}
+
+fetch_groq() {
+    if [ -n "$GROQ_API_KEY" ]; then
+        echo -e "\n${GREEN}Groq Models:${NC}"
+        curl -s https://api.groq.com/openai/v1/models \
+            -H "Authorization: Bearer $GROQ_API_KEY" | \
+            jq -r '.data[].id' | sed 's/^/  • /'
+    else
+        echo -e "\n⚠️  GROQ_API_KEY not set"
+    fi
+}
+
+fetch_ollama() {
+    echo -e "\n${GREEN}Ollama Models (local):${NC}"
+    if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
+        curl -s http://localhost:11434/api/tags | jq -r '.models[].name' | sed 's/^/  • /'
+    else
+        echo "  ⚠️  Ollama not running on localhost:11434"
+    fi
+}
+
+fetch_together() {
+    if [ -n "$TOGETHER_API_KEY" ]; then
+        echo -e "\n${GREEN}Together AI Models:${NC}"
+        curl -s https://api.together.xyz/v1/models \
+            -H "Authorization: Bearer $TOGETHER_API_KEY" | \
+            jq -r '.data[].id' | head -20 | sed 's/^/  • /'
+    else
+        echo -e "\n⚠️  TOGETHER_API_KEY not set"
+    fi
+}
+
+fetch_anthropic() {
+    echo -e "\n${GREEN}Anthropic Models (static list):${NC}"
+    echo "  • claude-opus-4-5-20250219"
+    echo "  • claude-sonnet-4-5-20250219"
+    echo "  • claude-3-5-sonnet-20241022"
+    echo "  • claude-3-5-haiku-20241022"
+    echo "  • claude-3-opus-20240229"
+}
+
+fetch_google() {
+    if [ -n "$GOOGLE_API_KEY" ]; then
+        echo -e "\n${GREEN}Google Gemini Models:${NC}"
+        curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | \
+            jq -r '.models[].name' | sed 's|models/||' | sed 's/^/  • /'
+    else
+        echo -e "\n⚠️  GOOGLE_API_KEY not set"
+    fi
+}
+
+# Main logic
+case "${1:-all}" in
+ openrouter) fetch_openrouter ;;
+ openai) fetch_openai ;;
+ groq) fetch_groq ;;
+ ollama) fetch_ollama ;;
+ together) fetch_together ;;
+ anthropic) fetch_anthropic ;;
+ google) fetch_google ;;
+ all)
+ fetch_anthropic
+ fetch_openai
+ fetch_google
+ fetch_openrouter
+ fetch_groq
+ fetch_together
+ fetch_ollama
+ ;;
+ *)
+ echo "Usage: $0 [openrouter|openai|groq|ollama|together|anthropic|google|all]"
+ exit 1
+ ;;
+esac
+
+echo -e "\n${GREEN}✅ Model fetch complete${NC}"