feat: Add Claw Setup skill for AI Agent deployment

End-to-end professional setup of AI Agent platforms:
- OpenClaw (full-featured, 215K stars)
- NanoBot (Python, lightweight)
- PicoClaw (Go, ultra-light)
- ZeroClaw (Rust, minimal)
- NanoClaw (WhatsApp focused)

Features:
- Platform selection with comparison
- Security hardening (secrets, network, systemd)
- Interactive brainstorming for customization
- AI provider configuration with 12+ providers
- Model fetching from provider APIs
- Custom model input support

Providers supported:
Anthropic, OpenAI, Google, OpenRouter, Groq,
Cerebras, Together AI, DeepSeek, Mistral, xAI, Ollama

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude Code
2026-02-22 03:44:25 -05:00
parent b28e691e46
commit 2072e16bd1
4 changed files with 1097 additions and 4 deletions


@@ -28,25 +28,61 @@
## Skills Index
### AI & Automation
| Skill | Description | Status |
|-------|-------------|--------|
| [🦞 Claw Setup](./skills/claw-setup/) | End-to-end AI Agent deployment (OpenClaw, NanoBot, PicoClaw, ZeroClaw) | ✅ Production Ready |
### System Administration
| Skill | Description | Status |
|-------|-------------|--------|
| [🚀 RAM Optimizer](./skills/ram-optimizer/) | ZRAM-based memory compression for Linux servers | ✅ Production Ready |
| [💾 Backup Automator](./skills/backup-automator/) | Automated encrypted backups to cloud storage | ✅ Production Ready |
| [📡 Log Sentinel](./skills/log-sentinel/) | Log analysis and anomaly detection | ✅ Production Ready |
### Security
| Skill | Description | Status |
|-------|-------------|--------|
| [🔐 Secret Scanner](./skills/secret-scanner/) | Detect leaked credentials in codebases | ✅ Production Ready |
| [🔒 SSL Guardian](./skills/ssl-guardian/) | SSL certificate automation and monitoring | ✅ Production Ready |
### Development
| Skill | Description | Status |
|-------|-------------|--------|
| [🏛️ Git Archaeologist](./skills/git-archaeologist/) | Analyze repository history and find bugs | ✅ Production Ready |
### Infrastructure
| Skill | Description | Status |
|-------|-------------|--------|
| [🌐 Domain Manager](./skills/domain-manager/) | DNS management across multiple providers | ✅ Production Ready |
---
## Quick Start
Each skill works with Claude Code CLI. Simply ask:
```
"Setup Claw AI assistant on my server"
"Run ram optimizer on my server"
"Scan this directory for leaked secrets"
"Setup automated backups to S3"
```
---
## Featured: Claw Setup
Professional deployment of AI Agent platforms:
```
OpenClaw → Full-featured, 1700+ plugins, 215K stars
NanoBot → Python, 4K lines, research-ready
PicoClaw → Go, <10MB, $10 hardware
ZeroClaw → Rust, <5MB, 10ms startup
```
Usage: `"Setup OpenClaw on my VPS with security hardening"`
---

skills/claw-setup/README.md Normal file

@@ -0,0 +1,482 @@
<div align="center">
# 🦞 Claw Setup
### Professional AI Agent Deployment Made Simple
**End-to-end setup of OpenClaw, NanoBot, PicoClaw, ZeroClaw, or NanoClaw with security hardening and personal customization**
---
<p align="center">
<a href="https://z.ai/subscribe?ic=R0K78RJKNW">
<img src="https://img.shields.io/badge/Designed%20by-GLM%205%20Advanced%20Coding%20Model-blue?style=for-the-badge" alt="Designed by GLM 5">
</a>
</p>
<p align="center">
<i>✨ Autonomously developed by <a href="https://z.ai/subscribe?ic=R0K78RJKNW"><strong>GLM 5 Advanced Coding Model</strong></a></i>
</p>
<p align="center">
<b>⚠️ Disclaimer: Test in a test environment prior to using on any live system</b>
</p>
---
</div>
## Overview
Claw Setup handles the complete deployment of AI Agent platforms from the Claw family, taking you from platform selection to production with security best practices and personalized configuration through an interactive brainstorming session.
```
┌─────────────────────────────────────────────────────────────────┐
│ CLAW SETUP WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1 Phase 2 Phase 3 Phase 4 │
│ ──────── ──────── ──────── ──────── │
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ SELECT │────►│ INSTALL │────►│CUSTOMIZE│────►│ DEPLOY │ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ Compare Clone & Brainstorm Systemd │
│ platforms harden your use case & monitor │
│ security │
│ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ SUPPORTED PLATFORMS ││
│ │ ││
│ │ 🦞 OpenClaw Full-featured, 1700+ plugins, 215K stars ││
│ │ 🤖 NanoBot Python, 4K lines, research-ready ││
│ │ 🦐 PicoClaw Go, <10MB, $10 hardware ││
│ │ ⚡ ZeroClaw Rust, <5MB, 10ms startup ││
│ │ 💬 NanoClaw TypeScript, WhatsApp focused ││
│ │ ││
│ └─────────────────────────────────────────────────────────────┘│
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Platform Comparison
```
┌─────────────────────────────────────────────────────────────────┐
│ PLATFORM COMPARISON │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Metric OpenClaw NanoBot PicoClaw ZeroClaw NanoClaw │
│ ───────────────────────────────────────────────────────────── │
│ Language TS Python Go Rust TS │
│ Memory >1GB ~100MB <10MB <5MB ~50MB │
│ Startup ~500s ~30s ~1s <10ms ~5s │
│ Binary Size ~28MB N/A ~8MB 3.4MB ~15MB │
│ GitHub Stars 215K+ 22K 15K 10K 5K │
│ Plugins 1700+ ~50 ~20 ~15 ~10 │
│ Learning Medium Easy Easy Medium Easy │
│ │
│ BEST FOR: │
│ ───────── │
│ OpenClaw → Full desktop AI, extensive integrations │
│ NanoBot → Research, customization, Python developers │
│ PicoClaw → Embedded, low-resource, $10 hardware │
│ ZeroClaw → Maximum performance, security-critical │
│ NanoClaw → WhatsApp automation, messaging bots │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Decision Flowchart
```
┌─────────────────┐
│ Need AI Agent? │
└────────┬────────┘
┌───────────────────────┐
│ Memory constrained? │
│ (<1GB RAM available) │
└───────────┬───────────┘
┌─────┴─────┐
│ │
YES NO
│ │
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ Need <10MB? │ │ Want plugins? │
└──────┬───────┘ └────────┬─────────┘
┌─────┴─────┐ ┌─────┴─────┐
│ │ │ │
YES NO YES NO
│ │ │ │
▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ZeroClaw│ │PicoClaw│ │OpenClaw│ │NanoBot │
│ (Rust) │ │ (Go) │ │ (Full) │ │(Python)│
└────────┘ └────────┘ └────────┘ └────────┘
```
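The flowchart above can be encoded as a small helper for setup scripts. This is a sketch: `choose_platform` is a hypothetical function, and the 1024 MB threshold mirrors the diagram rather than any platform requirement:

```shell
# choose_platform MEM_MB TINY PLUGINS -> prints a recommended platform
# TINY/PLUGINS are "yes" or "no"; the branching follows the flowchart above
choose_platform() {
  local mem_mb="$1" need_tiny="$2" want_plugins="$3"
  if [ "$mem_mb" -lt 1024 ]; then
    # Memory constrained: pick by binary size requirement
    if [ "$need_tiny" = "yes" ]; then echo "ZeroClaw"; else echo "PicoClaw"; fi
  else
    # Plenty of memory: pick by plugin appetite
    if [ "$want_plugins" = "yes" ]; then echo "OpenClaw"; else echo "NanoBot"; fi
  fi
}

choose_platform 512 yes no    # ZeroClaw
choose_platform 4096 no yes   # OpenClaw
```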
## Quick Start
### Option 1: Interactive Setup (Recommended)
```
"Setup Claw AI assistant on my server"
"Help me choose and install an AI agent platform"
```
### Option 2: Direct Platform Selection
```
"Setup OpenClaw with all security features"
"Install ZeroClaw on my VPS"
"Deploy NanoBot for research use"
```
## Installation Guides
### OpenClaw (Full Featured)
```bash
# Prerequisites
sudo apt update && sudo apt install -y nodejs npm git
# Clone official repo
git clone https://github.com/openclaw/openclaw.git
cd openclaw
# Install dependencies
npm install
# Run setup wizard
npm run setup
# Configure environment
cp .env.example .env
nano .env # Add your API keys
# Start
npm run start
```
### NanoBot (Python Lightweight)
```bash
# Quick install via pip
pip install nanobot-ai
# Initialize
nanobot onboard
# Configure (writes ~/.nanobot/config.json)
mkdir -p ~/.nanobot
cat > ~/.nanobot/config.json << 'CONFIG'
{
  "providers": {
    "openrouter": { "apiKey": "sk-or-v1-xxx" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-opus-4-5" }
  }
}
CONFIG
chmod 600 ~/.nanobot/config.json
# Start gateway
nanobot gateway
```
### PicoClaw (Go Ultra-Light)
```bash
# Download latest release
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Create config
mkdir -p ~/.config/picoclaw
picoclaw config init
# Start
picoclaw gateway
```
### ZeroClaw (Rust Minimal)
```bash
# Download latest release
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Initialize config
zeroclaw init
# Migrate from OpenClaw (optional)
zeroclaw migrate openclaw --dry-run
# Start
zeroclaw gateway
```
## Security Hardening
### 1. Secrets Management
```bash
# Never hardcode API keys - use environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
# Add to shell profile for persistence
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
# Use encrypted config files
mkdir -p ~/.config/claw
chmod 700 ~/.config/claw
```
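Restrictive permissions are easy to get wrong, so a check like this can be dropped into a setup script. A sketch: `check_perms` is a hypothetical helper, and it assumes GNU `stat`:

```shell
# check_perms FILE: fail if the file is readable by group or others
check_perms() {
  local f="$1" mode
  mode=$(stat -c '%a' "$f") || return 1
  case "$mode" in
    *00) echo "OK: $f (mode $mode)"; return 0 ;;
    *)   echo "WARN: $f (mode $mode) is too open; run: chmod 600 $f" >&2
         return 1 ;;
  esac
}
```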
### 2. Network Security
```bash
# Bind to localhost only
# In config.json, bind the gateway to loopback:
#   "server": { "host": "127.0.0.1", "port": 3000 }
# Use nginx reverse proxy for external access
sudo certbot --nginx -d claw.yourdomain.com
```
### 3. Systemd Hardened Service
```ini
# /etc/systemd/system/claw.service
[Unit]
Description=Claw AI Assistant
After=network.target
[Service]
Type=simple
User=claw
Group=claw
WorkingDirectory=/opt/claw
ExecStart=/usr/local/bin/claw gateway
Restart=on-failure
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/claw/data
# Load secrets from a root-owned env file (path is an example; chmod 600 it)
EnvironmentFile=/etc/claw/claw.env
[Install]
WantedBy=multi-user.target
```
```bash
# Enable service
sudo systemctl daemon-reload
sudo systemctl enable --now claw
```
## Brainstorm Session
After installation, we'll explore your needs:
### 🎯 Use Case Discovery
```
Q: What tasks should your AI handle?
□ Code assistance & development
□ Research & information gathering
□ Personal productivity (calendar, reminders)
□ Content creation & writing
□ Data analysis & visualization
□ Home automation
□ Customer support / chatbot
□ Other: _______________
```
### 🤖 Model Selection
```
Q: Which AI model(s) to use?
□ Claude (Anthropic) - Best reasoning
□ GPT-4 (OpenAI) - General purpose
□ Gemini (Google) - Multimodal
□ Local models (Ollama) - Privacy-first
□ OpenRouter - Multi-model access
```
### 🔌 Integration Planning
```
Q: Which platforms to connect?
Messaging:
□ Telegram □ Discord □ WhatsApp □ Slack
Calendar:
□ Google □ Outlook □ Apple □ None
Storage:
□ Local □ Google Drive □ Dropbox □ S3
APIs:
□ Custom REST APIs
□ Webhooks
□ Database connections
```
### 🎨 Agent Personality
```
Q: How should your agent behave?
Tone: Professional □ Casual □ Formal □ Playful □
Proactivity:
□ Reactive (responds only when asked)
□ Proactive (suggests, reminds, initiates)
Memory:
□ Session only (fresh each chat)
□ Persistent (remembers everything)
□ Selective (configurable retention)
```
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ DEPLOYED ARCHITECTURE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ Internet │ │
│ └──────┬──────┘ │
│ │ │
│ ┌───────▼───────┐ │
│ │ nginx/HTTPS │ │
│ │ (Reverse │ │
│ │ Proxy) │ │
│ └───────┬───────┘ │
│ │ │
│ ┌──────────────────────────┼──────────────────────────────┐ │
│ │ localhost │ │
│ │ ┌─────────┐ ┌─────────▼────────┐ ┌────────────┐ │ │
│ │ │ Config │ │ CLAW ENGINE │ │ Data │ │ │
│ │ │ ~/.config│ │ (Gateway) │ │ Storage │ │ │
│ │ │ /claw │ │ Port: 3000 │ │ ~/claw/ │ │ │
│ │ └─────────┘ └─────────┬────────┘ └────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────┼─────────────────┐ │ │
│ │ │ │ │ │ │
│ │ ┌────▼────┐ ┌─────▼─────┐ ┌─────▼─────┐ │ │
│ │ │ LLM │ │ Tools │ │ Memory │ │ │
│ │ │ APIs │ │ Plugins │ │ Context │ │ │
│ │ │Claude/GPT│ │ Skills │ │ Store │ │ │
│ │ └─────────┘ └───────────┘ └───────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Post-Setup Checklist
```
□ API keys configured securely
□ Network binding verified (localhost)
□ Firewall configured
□ SSL certificate installed (if external)
□ Systemd service enabled
□ Logs configured and rotating
□ Backup strategy in place
□ Test conversation successful
□ Custom agents created
□ Integrations connected
```
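For the "Logs configured and rotating" item, a minimal logrotate policy might look like this. A sketch only: the path assumes the `/opt/claw/data` layout from the systemd unit above and should be adjusted to wherever your platform actually writes logs:

```
# /etc/logrotate.d/claw (hypothetical path)
/opt/claw/data/logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```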
---
<p align="center">
<a href="https://z.ai/subscribe?ic=R0K78RJKNW">Learn more about GLM 5 Advanced Coding Model</a>
</p>
## AI Provider Configuration
### Supported Providers
```
┌─────────────────────────────────────────────────────────────────┐
│ AI PROVIDER OPTIONS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Direct Providers │ Gateways & Aggregators │
│ ───────────────── │ ────────────────────── │
│ • Anthropic (Claude) │ • OpenRouter (200+ models) │
│ • OpenAI (GPT-4, o1, o3) │ • Replicate │
│ • Google (Gemini 2.0) │ │
│ • Mistral │ Fast Inference │
│ • DeepSeek │ ─────────────── │
│ • xAI (Grok) │ • Groq (ultra-fast) │
│ │ • Cerebras (fastest) │
│ Local/Self-Hosted │ • Together AI │
│ ────────────────── │ │
│ • Ollama │ │
│ • LM Studio │ │
│ • vLLM │ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### Model Selection Options
**Option A: Fetch from Provider**
```
# Automatically fetch available models
"Fetch available models from OpenRouter"
"Show me Groq models"
"What models are available via OpenAI?"
```
**Option B: Custom Model Input**
```
"Add custom model: my-org/fine-tuned-llama"
"Configure local Ollama model: llama3.2:70b"
"Use fine-tuned GPT: ft:gpt-4o:org:custom"
```
### Multi-Provider Setup
```json
{
"providers": {
"anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
"openai": { "apiKey": "${OPENAI_API_KEY}" },
"google": { "apiKey": "${GOOGLE_API_KEY}" },
"openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
"groq": { "apiKey": "${GROQ_API_KEY}" },
"ollama": { "baseURL": "http://localhost:11434" }
},
"models": {
"default": "anthropic/claude-sonnet-4-5",
"fast": "groq/llama-3.3-70b-versatile",
"local": "ollama/llama3.2:70b"
}
}
```
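A gateway honoring the role aliases above might resolve them like this. A sketch: `resolve_model` is a hypothetical helper mirroring the JSON, not part of any Claw CLI:

```shell
# resolve_model ROLE: map a role alias to the model id configured above
resolve_model() {
  case "$1" in
    default) echo "anthropic/claude-sonnet-4-5" ;;
    fast)    echo "groq/llama-3.3-70b-versatile" ;;
    local)   echo "ollama/llama3.2:70b" ;;
    *)       echo "unknown role: $1" >&2; return 1 ;;
  esac
}

resolve_model fast   # groq/llama-3.3-70b-versatile
```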
### Provider Comparison
| Provider | Best For | Speed | Cost |
|----------|----------|-------|------|
| Claude | Reasoning, coding | Medium | $$$ |
| GPT-4o | General purpose | Fast | $$$ |
| Gemini | Multimodal | Fast | $$ |
| Groq | Fastest inference | Ultra-fast | $ |
| OpenRouter | Model variety | Varies | $-$$$ |
| Ollama | Privacy, free | Depends on HW | Free |
---
<p align="center">
<a href="https://z.ai/subscribe?ic=R0K78RJKNW">Learn more about GLM 5 Advanced Coding Model</a>
</p>

skills/claw-setup/SKILL.md Normal file

@@ -0,0 +1,464 @@
---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform from the Claw family (OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw).
version: 1.0.0
---
# Claw Setup Skill
End-to-end professional setup of AI Agent platforms from the Claw family with security hardening and personal customization through interactive brainstorming.
## Supported Platforms
| Platform | Language | Memory | Startup | Best For |
|----------|----------|--------|---------|----------|
| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, plugin ecosystem |
| **NanoBot** | Python | ~100MB | ~30s | Research, easy customization |
| **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
## What This Skill Does
### Phase 1: Platform Selection
- Interactive comparison of all platforms
- Hardware requirements check
- Use case matching
### Phase 2: Secure Installation
- Clone from official GitHub repos
- Security hardening (secrets management, network isolation)
- Environment configuration
- API key setup with best practices
### Phase 3: Personal Customization
- Interactive brainstorming session
- Custom agent templates
- Integration setup (messaging, calendar, etc.)
- Memory and context configuration
### Phase 4: Verification & Deployment
- Health checks
- Test runs
- Production deployment options
## GitHub Repositories
```
OpenClaw: https://github.com/openclaw/openclaw
NanoBot: https://github.com/HKUDS/nanobot
PicoClaw: https://github.com/sipeed/picoclaw
ZeroClaw: https://github.com/zeroclaw-labs/zeroclaw
NanoClaw: https://github.com/nanoclaw/nanoclaw
```
## Usage Examples
```
"Setup OpenClaw on my server"
"I want to install NanoBot for personal use"
"Help me choose between ZeroClaw and PicoClaw"
"Deploy an AI assistant with security best practices"
"Setup Claw framework with my custom requirements"
```
## Installation Commands by Platform
### OpenClaw (Full Featured)
```bash
# Prerequisites
sudo apt install -y nodejs npm
# Clone and setup
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
npm run setup
# Configure
cp .env.example .env
# Edit .env with API keys
# Run
npm run start
```
### NanoBot (Python Lightweight)
```bash
# Quick install
pip install nanobot-ai
# Or from source
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
# Setup
nanobot onboard
nanobot gateway
```
### PicoClaw (Go Ultra-Light)
```bash
# Download binary
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Or build from source
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
go build -o picoclaw
# Run
picoclaw gateway
```
### ZeroClaw (Rust Minimal)
```bash
# Download binary
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Or from source
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
cargo build --release
# Run
zeroclaw gateway
```
## Security Hardening
### Secrets Management
```bash
# Never commit .env files
echo ".env" >> .gitignore
echo "*.pem" >> .gitignore
# Use environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
# Or use secret files with restricted permissions
mkdir -p ~/.config/claw
cat > ~/.config/claw/config.json << 'CONFIG'
{
"providers": {
"openrouter": { "apiKey": "${OPENROUTER_API_KEY}" }
}
}
CONFIG
chmod 600 ~/.config/claw/config.json
```
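The `${OPENROUTER_API_KEY}` placeholder above only works if the platform expands it itself. If it does not, a render step can substitute values at deploy time. A sketch: `render_config` is a hypothetical helper using GNU `sed`, and keys containing `|` or `&` would need extra escaping:

```shell
# render_config TEMPLATE OUT: replace ${VAR} placeholders with current
# environment values for the listed providers, then lock down permissions
render_config() {
  local tpl="$1" out="$2" var
  cp "$tpl" "$out"
  for var in ANTHROPIC_API_KEY OPENROUTER_API_KEY; do
    if [ -n "${!var}" ]; then
      sed -i "s|\${$var}|${!var}|g" "$out"
    fi
  done
  chmod 600 "$out"
}
```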
### Network Security
```nginx
# Bind to localhost only
# In config, set:
# "server": { "host": "127.0.0.1", "port": 3000 }
# Use reverse proxy for external access
# nginx example:
server {
listen 443 ssl;
server_name claw.yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
```
### Systemd Service
```ini
# /etc/systemd/system/claw.service
[Unit]
Description=Claw AI Assistant
After=network.target
[Service]
Type=simple
User=claw
Group=claw
WorkingDirectory=/opt/claw
ExecStart=/usr/local/bin/claw gateway
Restart=on-failure
RestartSec=10
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/claw/data
[Install]
WantedBy=multi-user.target
```
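API keys should reach the service through an `EnvironmentFile` rather than `Environment=` lines in the unit, where `systemctl show` would expose them. A sketch, assuming `/etc/claw/claw.env` as the path:

```
# /etc/claw/claw.env (example path; root-owned, chmod 600)
ANTHROPIC_API_KEY=your-key
OPENROUTER_API_KEY=your-key
```

Then add `EnvironmentFile=/etc/claw/claw.env` under `[Service]` in the unit above.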
## Brainstorm Session Topics
1. **Use Case Discovery**
- What tasks should the AI handle?
- Which platforms/channels to integrate?
- Automation vs. interactive preferences?
2. **Model Selection**
- Claude, GPT, Gemini, or local models?
- Cost vs. performance tradeoffs?
- Privacy requirements?
3. **Integration Planning**
- Messaging: Telegram, Discord, WhatsApp, Slack?
- Calendar: Google, Outlook, Apple?
- Storage: Local, cloud, hybrid?
- APIs to connect?
4. **Custom Agent Design**
- Personality and tone?
- Domain expertise areas?
- Memory and context preferences?
- Proactive vs. reactive behavior?
5. **Deployment Strategy**
- Local machine, VPS, or cloud?
- High availability requirements?
- Backup and recovery needs?
## AI Provider Configuration
### Supported Providers
| Provider | Type | API Base | Models |
|----------|------|----------|--------|
| **Anthropic** | Direct | api.anthropic.com | Claude 3.5/4/Opus |
| **OpenAI** | Direct | api.openai.com | GPT-4, GPT-4o, o1, o3 |
| **Google** | Direct | generativelanguage.googleapis.com | Gemini 2.0/1.5 |
| **OpenRouter** | Gateway | openrouter.ai/api | 200+ models |
| **Together AI** | Direct | api.together.xyz | Llama, Mistral, Qwen |
| **Groq** | Direct | api.groq.com | Llama, Mixtral (fast) |
| **Cerebras** | Direct | api.cerebras.ai | Llama (fastest) |
| **DeepSeek** | Direct | api.deepseek.com | DeepSeek V3/R1 |
| **Mistral** | Direct | api.mistral.ai | Mistral, Codestral |
| **xAI** | Direct | api.x.ai | Grok |
| **Replicate** | Gateway | api.replicate.com | Various |
| **Local** | Self-hosted | localhost | Ollama, LM Studio |
### Fetch Available Models
```bash
# OpenRouter - List all models
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'
# OpenAI - List models
curl -s https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
# Anthropic - Available models (static list)
# claude-opus-4-5-20250219
# claude-sonnet-4-5-20250219
# claude-3-5-sonnet-20241022
# claude-3-5-haiku-20241022
# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | jq '.models[].name'
# Groq - List models
curl -s https://api.groq.com/openai/v1/models \
-H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'
# Together AI
curl -s https://api.together.xyz/v1/models \
-H "Authorization: Bearer $TOGETHER_API_KEY" | jq '.data[].id'
# Ollama (local)
curl -s http://localhost:11434/api/tags | jq '.models[].name'
```
### Configuration Templates
#### Multi-Provider Config
```json
{
"providers": {
"anthropic": {
"apiKey": "${ANTHROPIC_API_KEY}",
"baseURL": "https://api.anthropic.com"
},
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseURL": "https://api.openai.com/v1"
},
"google": {
"apiKey": "${GOOGLE_API_KEY}",
"baseURL": "https://generativelanguage.googleapis.com/v1"
},
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseURL": "https://openrouter.ai/api/v1"
},
"groq": {
"apiKey": "${GROQ_API_KEY}",
"baseURL": "https://api.groq.com/openai/v1"
},
"together": {
"apiKey": "${TOGETHER_API_KEY}",
"baseURL": "https://api.together.xyz/v1"
},
"deepseek": {
"apiKey": "${DEEPSEEK_API_KEY}",
"baseURL": "https://api.deepseek.com/v1"
},
"mistral": {
"apiKey": "${MISTRAL_API_KEY}",
"baseURL": "https://api.mistral.ai/v1"
},
"xai": {
"apiKey": "${XAI_API_KEY}",
"baseURL": "https://api.x.ai/v1"
},
"ollama": {
"baseURL": "http://localhost:11434/v1",
"apiKey": "ollama"
}
},
"agents": {
"defaults": {
"model": "anthropic/claude-sonnet-4-5",
"temperature": 0.7,
"maxTokens": 4096
}
}
}
```
#### Custom Model Configuration
```json
{
"customModels": {
"my-fine-tuned-model": {
"provider": "openai",
"modelId": "ft:gpt-4o:my-org:custom:suffix",
"displayName": "My Custom GPT-4o"
},
"local-llama": {
"provider": "ollama",
"modelId": "llama3.2:70b",
"displayName": "Local Llama 3.2 70B"
},
"openrouter-model": {
"provider": "openrouter",
"modelId": "meta-llama/llama-3.3-70b-instruct",
"displayName": "Llama 3.3 70B via OpenRouter"
}
}
}
```
### Provider Selection Flow
```
1. Ask user which providers they have API keys for:
□ Anthropic (Claude)
□ OpenAI (GPT)
□ Google (Gemini)
□ OpenRouter (Multi-model)
□ Together AI
□ Groq (Fast inference)
□ Cerebras (Fastest)
□ DeepSeek
□ Mistral
□ xAI (Grok)
□ Local (Ollama/LM Studio)
2. For each selected provider:
- Prompt for API key
- Fetch available models (if API supports)
- Let user select or input custom model
3. Generate secure configuration:
- Store keys in environment variables
- Create config.json with model selections
- Set up key rotation reminders
4. Test connectivity:
- Send test prompt to each configured provider
- Verify response
```
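Step 3 ("generate secure configuration") can be sketched as a function that emits a providers block only for keys actually present in the environment. `emit_providers` is a hypothetical helper; the emitted shape follows the config templates above, with keys left as `${VAR}` placeholders:

```shell
# emit_providers: print a JSON providers object for every provider whose
# API key environment variable is set
emit_providers() {
  local first=1 p var
  printf '{\n  "providers": {\n'
  for p in ANTHROPIC OPENAI OPENROUTER GROQ; do
    var="${p}_API_KEY"
    if [ -n "${!var}" ]; then
      [ "$first" -eq 0 ] && printf ',\n'
      # Provider name lowercased; key stays a ${VAR} placeholder
      printf '    "%s": { "apiKey": "${%s}" }' \
        "$(printf '%s' "$p" | tr 'A-Z' 'a-z')" "$var"
      first=0
    fi
  done
  printf '\n  }\n}\n'
}
```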
### Model Fetching Script
```bash
#!/bin/bash
# fetch-models.sh - Fetch available models from providers
echo "=== AI Provider Model Fetcher ==="
# OpenRouter
if [ -n "$OPENROUTER_API_KEY" ]; then
echo -e "\n📦 OpenRouter Models:"
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $OPENROUTER_API_KEY" | \
jq -r '.data[] | " • \(.id) - \(.name // .id)"' | head -20
fi
# OpenAI
if [ -n "$OPENAI_API_KEY" ]; then
echo -e "\n📦 OpenAI Models:"
curl -s https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY" | \
jq -r '.data[] | select(.id | contains("gpt")) | " • \(.id)"' | sort -u
fi
# Groq
if [ -n "$GROQ_API_KEY" ]; then
echo -e "\n📦 Groq Models:"
curl -s https://api.groq.com/openai/v1/models \
-H "Authorization: Bearer $GROQ_API_KEY" | \
jq -r '.data[].id' | sed 's/^/ • /'
fi
# Ollama (local)
echo -e "\n📦 Ollama Models (local):"
curl -s http://localhost:11434/api/tags 2>/dev/null | \
jq -r '.models[].name' | sed 's/^/ • /' || echo " Ollama not running"
# Together AI
if [ -n "$TOGETHER_API_KEY" ]; then
echo -e "\n📦 Together AI Models:"
curl -s https://api.together.xyz/v1/models \
-H "Authorization: Bearer $TOGETHER_API_KEY" | \
jq -r '.data[].id' | head -20 | sed 's/^/ • /'
fi
echo -e "\n✅ Model fetch complete"
```
### Custom Model Input
When the user selects "Custom Model", prompt for:
1. **Provider**: Which provider hosts this model
2. **Model ID**: Exact model identifier
3. **Display Name**: Friendly name for UI
4. **Context Window**: Max tokens (optional)
5. **Capabilities**: Text, vision, code, etc. (optional)
Example custom model entry:
```json
{
"provider": "openrouter",
"modelId": "custom-org/my-fine-tuned-v2",
"displayName": "My Fine-Tuned Model v2",
"contextWindow": 128000,
"capabilities": ["text", "code"]
}
```
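Before writing the entry into config.json, a quick sanity check on the required fields helps catch typos. A sketch: `validate_custom_model` is a hypothetical helper, using plain `grep` so it works without `jq`:

```shell
# validate_custom_model FILE: check that the required fields are present
validate_custom_model() {
  local f="$1" field
  for field in provider modelId displayName; do
    if ! grep -q "\"$field\"" "$f"; then
      echo "missing required field: $field" >&2
      return 1
    fi
  done
  echo "ok: $f"
}
```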


@@ -0,0 +1,111 @@
#!/bin/bash
# fetch-models.sh - Fetch available models from AI providers
# Usage: ./fetch-models.sh [provider]
set -e
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m'
echo -e "${BLUE}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ AI PROVIDER MODEL FETCHER ║${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════════════════════════════╝${NC}"
fetch_openrouter() {
if [ -n "$OPENROUTER_API_KEY" ]; then
echo -e "\n${GREEN}📦 OpenRouter Models:${NC}"
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $OPENROUTER_API_KEY" | \
jq -r '.data[] | " • \(.id)"' | head -30
else
echo -e "\n⚠ OPENROUTER_API_KEY not set"
fi
}
fetch_openai() {
if [ -n "$OPENAI_API_KEY" ]; then
echo -e "\n${GREEN}📦 OpenAI Models:${NC}"
curl -s https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY" | \
jq -r '.data[] | select(.id | test("gpt|o1|o3")) | " • \(.id)"' | sort -u
else
echo -e "\n⚠ OPENAI_API_KEY not set"
fi
}
fetch_groq() {
if [ -n "$GROQ_API_KEY" ]; then
echo -e "\n${GREEN}📦 Groq Models:${NC}"
curl -s https://api.groq.com/openai/v1/models \
-H "Authorization: Bearer $GROQ_API_KEY" | \
jq -r '.data[].id' | sed 's/^/ • /'
else
echo -e "\n⚠ GROQ_API_KEY not set"
fi
}
fetch_ollama() {
echo -e "\n${GREEN}📦 Ollama Models (local):${NC}"
if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
curl -s http://localhost:11434/api/tags | jq -r '.models[].name' | sed 's/^/ • /'
else
echo " ⚠️ Ollama not running on localhost:11434"
fi
}
fetch_together() {
if [ -n "$TOGETHER_API_KEY" ]; then
echo -e "\n${GREEN}📦 Together AI Models:${NC}"
curl -s https://api.together.xyz/v1/models \
-H "Authorization: Bearer $TOGETHER_API_KEY" | \
jq -r '.data[].id' | head -20 | sed 's/^/ • /'
else
echo -e "\n⚠ TOGETHER_API_KEY not set"
fi
}
fetch_anthropic() {
echo -e "\n${GREEN}📦 Anthropic Models (static list):${NC}"
echo " • claude-opus-4-5-20250219"
echo " • claude-sonnet-4-5-20250219"
echo " • claude-3-5-sonnet-20241022"
echo " • claude-3-5-haiku-20241022"
echo " • claude-3-opus-20240229"
}
fetch_google() {
if [ -n "$GOOGLE_API_KEY" ]; then
echo -e "\n${GREEN}📦 Google Gemini Models:${NC}"
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | \
jq -r '.models[].name' | sed 's|models/||' | sed 's/^/ • /'
else
echo -e "\n⚠ GOOGLE_API_KEY not set"
fi
}
# Main logic
case "${1:-all}" in
openrouter) fetch_openrouter ;;
openai) fetch_openai ;;
groq) fetch_groq ;;
ollama) fetch_ollama ;;
together) fetch_together ;;
anthropic) fetch_anthropic ;;
google) fetch_google ;;
all)
fetch_anthropic
fetch_openai
fetch_google
fetch_openrouter
fetch_groq
fetch_together
fetch_ollama
;;
*)
echo "Usage: $0 [openrouter|openai|groq|ollama|together|anthropic|google|all]"
exit 1
;;
esac
echo -e "\n${GREEN}✅ Model fetch complete${NC}"