feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup

Updated provider support to match OpenCode's full provider list:

Built-in Providers (18):
- Anthropic, OpenAI, Azure OpenAI
- Google AI, Google Vertex AI
- Amazon Bedrock
- OpenRouter, xAI, Mistral
- Groq, Cerebras, DeepInfra
- Cohere, Together AI, Perplexity
- Vercel AI, GitLab, GitHub Copilot

Custom Loader Providers:
- GitHub Copilot Enterprise
- Google Vertex Anthropic
- Azure Cognitive Services
- Cloudflare AI Gateway
- SAP AI Core

Local/Self-Hosted:
- Ollama, LM Studio, vLLM

Features:
- Model fetching from provider APIs
- Custom model input support
- Multi-provider configuration
- Environment variable security

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude Code
2026-02-22 03:51:55 -05:00
parent 2072e16bd1
commit baffcf6db1
3 changed files with 466 additions and 820 deletions


@@ -4,7 +4,7 @@
### Professional AI Agent Deployment Made Simple
**End-to-end setup of Claw platforms with 25+ AI providers, security hardening, and personal customization**
---
@@ -28,7 +28,7 @@
## Overview
Claw Setup handles the complete deployment of AI Agent platforms from the Claw family - from selection to production - with **25+ AI provider integrations** (OpenCode compatible), security best practices, and personalized configuration through interactive brainstorming.
```
┌─────────────────────────────────────────────────────────────────┐
│                                                                 │
│   ┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐   │
│   │ SELECT  │────►│ INSTALL │────►│CUSTOMIZE│────►│ DEPLOY  │   │
│   │ Platform│     │& Secure │     │Providers│     │ & Run   │   │
│   └─────────┘     └─────────┘     └─────────┘     └─────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Platforms Supported

| Platform | Language | Memory | Startup | Best For |
|----------|----------|--------|---------|----------|
| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, 1700+ plugins |
| **NanoBot** | Python | ~100MB | ~30s | Research, customization |
| **PicoClaw** | Go | <10MB | ~1s | Embedded, $10 hardware |
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |

## AI Providers (25+ Supported)

### Tier 1: Major AI Labs

| Provider | Models | Features |
|----------|--------|----------|
| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | Grok | Real-time data integration |
| **Mistral** | Mistral Large, Codestral | Code-focused models |

### Tier 2: Cloud Platforms

| Provider | Models | Features |
|----------|--------|----------|
| **Azure OpenAI** | GPT-5, GPT-4o Enterprise | Azure integration |
| **Google Vertex** | Claude, Gemini on GCP | Anthropic on Google |
| **Amazon Bedrock** | Nova, Claude, Llama 3 | AWS regional prefixes |

### Tier 3: Aggregators & Gateways

| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ models | Multi-provider gateway |
| **Vercel AI** | Multi-provider | Edge hosting, rate limiting |
| **Together AI** | Open source | Fine-tuning, hosting |
| **DeepInfra** | Open source | Cost-effective |

### Tier 4: Fast Inference

| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |

### Tier 5: Specialized

| Provider | Use Case |
|----------|----------|
| **Perplexity** | Web search integration |
| **Cohere** | Enterprise RAG |
| **GitLab Duo** | CI/CD integration |
| **GitHub Copilot** | IDE integration |
| **Cloudflare AI** | Gateway, rate limiting |
| **SAP AI Core** | SAP enterprise |

### Local/Self-Hosted

| Provider | Use Case |
|----------|----------|
| **Ollama** | Local model hosting |
| **LM Studio** | GUI local models |
| **vLLM** | High-performance serving |

## Model Selection

**Option A: Fetch from Provider**

```bash
# Fetch available models
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags   # Ollama
```
**Option B: Custom Model Input**

```json
{
  "provider": "openai",
  "modelId": "ft:gpt-4o:org:custom:suffix",
  "displayName": "My Fine-Tuned Model"
}
```

## Decision Flowchart

```
                 ┌─────────────────┐
                 │  Need AI Agent? │
                 └────────┬────────┘
                          │
              ┌───────────▼───────────┐
              │  Memory constrained?  │
              │  (<1GB RAM available) │
              └───────────┬───────────┘
                    ┌─────┴─────┐
                    │           │
                   YES          NO
                    │           │
                    ▼           ▼
            ┌──────────────┐  ┌──────────────────┐
            │ Need <10MB?  │  │  Want plugins?   │
            └──────┬───────┘  └────────┬─────────┘
             ┌─────┴─────┐       ┌─────┴─────┐
             │           │       │           │
            YES          NO     YES          NO
             │           │       │           │
             ▼           ▼       ▼           ▼
        ┌────────┐  ┌────────┐ ┌────────┐ ┌────────┐
        │ZeroClaw│  │PicoClaw│ │OpenClaw│ │NanoBot │
        │ (Rust) │  │  (Go)  │ │ (Full) │ │(Python)│
        └────────┘  └────────┘ └────────┘ └────────┘
```
## Quick Start

### Option 1: Interactive Setup (Recommended)

```
"Setup OpenClaw with Anthropic and OpenAI providers"
"Install NanoBot with all available providers"
"Deploy ZeroClaw with Groq for fast inference"
"Configure Claw with local Ollama models"
```
### Option 2: Direct Platform Selection
```
"Setup OpenClaw with all security features"
"Install ZeroClaw on my VPS"
"Deploy NanoBot for research use"
```
## Installation Guides
### OpenClaw (Full Featured)
```bash
# Prerequisites
sudo apt update && sudo apt install -y nodejs npm git
# Clone official repo
git clone https://github.com/openclaw/openclaw.git
cd openclaw
# Install dependencies
npm install
# Run setup wizard
npm run setup
# Configure environment
cp .env.example .env
nano .env # Add your API keys
# Start
npm run start
```
### NanoBot (Python Lightweight)
```bash
# Quick install via pip
pip install nanobot-ai
# Initialize
nanobot onboard
# Configure ~/.nanobot/config.json
cat > ~/.nanobot/config.json << 'EOF'
{
  "providers": {
    "openrouter": { "apiKey": "sk-or-v1-xxx" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-opus-4-5" }
  }
}
EOF
# Start gateway
nanobot gateway
```
### PicoClaw (Go Ultra-Light)
```bash
# Download latest release
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Create config
mkdir -p ~/.config/picoclaw
picoclaw config init
# Start
picoclaw gateway
```
### ZeroClaw (Rust Minimal)
```bash
# Download latest release
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Initialize config
zeroclaw init
# Migrate from OpenClaw (optional)
zeroclaw migrate openclaw --dry-run
# Start
zeroclaw gateway
```
## Security Hardening
### 1. Secrets Management
```bash
# Never hardcode API keys - use environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
# Add to shell profile for persistence
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
# Use encrypted config files
mkdir -p ~/.config/claw
chmod 700 ~/.config/claw
```
### 2. Network Security
```bash
# Bind to localhost only
# config.json:
#   {
#     "server": {
#       "host": "127.0.0.1",
#       "port": 3000
#     }
#   }
# Use nginx reverse proxy for external access
sudo certbot --nginx -d claw.yourdomain.com
```
### 3. Systemd Hardened Service
```bash
# /etc/systemd/system/claw.service
[Unit]
Description=Claw AI Assistant
After=network.target
[Service]
Type=simple
User=claw
Group=claw
WorkingDirectory=/opt/claw
ExecStart=/usr/local/bin/claw gateway
Restart=on-failure
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/claw/data
# Load API keys from a root-owned environment file instead of hardcoding them
EnvironmentFile=/etc/claw/claw.env
[Install]
WantedBy=multi-user.target
```
```bash
# Enable service
sudo systemctl daemon-reload
sudo systemctl enable --now claw
```
## Brainstorm Session
After installation, we'll explore your needs:
### 🎯 Use Case Discovery
```
Q: What tasks should your AI handle?
□ Code assistance & development
□ Research & information gathering
□ Personal productivity (calendar, reminders)
□ Content creation & writing
□ Data analysis & visualization
□ Home automation
□ Customer support / chatbot
□ Other: _______________
```
### 🤖 Model Selection
```
Q: Which AI model(s) to use?
□ Claude (Anthropic) - Best reasoning
□ GPT-4 (OpenAI) - General purpose
□ Gemini (Google) - Multimodal
□ Local models (Ollama) - Privacy-first
□ OpenRouter - Multi-model access
```
### 🔌 Integration Planning
```
Q: Which platforms to connect?
Messaging:
□ Telegram □ Discord □ WhatsApp □ Slack
Calendar:
□ Google □ Outlook □ Apple □ None
Storage:
□ Local □ Google Drive □ Dropbox □ S3
APIs:
□ Custom REST APIs
□ Webhooks
□ Database connections
```
### 🎨 Agent Personality
```
Q: How should your agent behave?
Tone: Professional □ Casual □ Formal □ Playful □
Proactivity:
□ Reactive (responds only when asked)
□ Proactive (suggests, reminds, initiates)
Memory:
□ Session only (fresh each chat)
□ Persistent (remembers everything)
□ Selective (configurable retention)
```
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ DEPLOYED ARCHITECTURE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ Internet │ │
│ └──────┬──────┘ │
│ │ │
│ ┌───────▼───────┐ │
│ │ nginx/HTTPS │ │
│ │ (Reverse │ │
│ │ Proxy) │ │
│ └───────┬───────┘ │
│ │ │
│ ┌──────────────────────────┼──────────────────────────────┐ │
│ │ localhost │ │
│ │ ┌─────────┐ ┌─────────▼────────┐ ┌────────────┐ │ │
│ │ │ Config │ │ CLAW ENGINE │ │ Data │ │ │
│ │ │ ~/.config│ │ (Gateway) │ │ Storage │ │ │
│ │ │ /claw │ │ Port: 3000 │ │ ~/claw/ │ │ │
│ │ └─────────┘ └─────────┬────────┘ └────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────┼─────────────────┐ │ │
│ │ │ │ │ │ │
│ │ ┌────▼────┐ ┌─────▼─────┐ ┌─────▼─────┐ │ │
│ │ │ LLM │ │ Tools │ │ Memory │ │ │
│ │ │ APIs │ │ Plugins │ │ Context │ │ │
│ │ │Claude/GPT│ │ Skills │ │ Store │ │ │
│ │ └─────────┘ └───────────┘ └───────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Post-Setup Checklist
```
□ API keys configured securely
□ Network binding verified (localhost)
□ Firewall configured
□ SSL certificate installed (if external)
□ Systemd service enabled
□ Logs configured and rotating
□ Backup strategy in place
□ Test conversation successful
□ Custom agents created
□ Integrations connected
```
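Part of this checklist can be automated. A rough self-check sketch follows; the service name `claw`, the config path, and port 3000 are assumptions carried over from the examples in this guide, not fixed values:

```shell
#!/bin/sh
# Post-setup self-check sketch; prints PASS/FAIL per checklist item.
# Service name, config path, and port are assumptions from this guide's examples.
check() {  # check <label> <command...>
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "FAIL: $label"
  fi
}

check "config permissions are 600" sh -c '[ "$(stat -c %a ~/.config/claw/config.json)" = "600" ]'
check "gateway bound to localhost" sh -c 'ss -ltn | grep -q "127.0.0.1:3000"'
check "systemd service active" systemctl is-active --quiet claw
```

Items like "test conversation successful" or "backup strategy in place" still need a manual pass.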
---
## AI Provider Configuration
### Supported Providers
```
┌─────────────────────────────────────────────────────────────────┐
│ AI PROVIDER OPTIONS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Direct Providers │ Gateways & Aggregators │
│ ───────────────── │ ────────────────────── │
│ • Anthropic (Claude) │ • OpenRouter (200+ models) │
│ • OpenAI (GPT-4, o1, o3) │ • Replicate │
│ • Google (Gemini 2.0) │ │
│ • Mistral │ Fast Inference │
│ • DeepSeek │ ─────────────── │
│ • xAI (Grok) │ • Groq (ultra-fast) │
│ │ • Cerebras (fastest) │
│ Local/Self-Hosted │ • Together AI │
│ ────────────────── │ │
│ • Ollama │ │
│ • LM Studio │ │
│ • vLLM │ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### Model Selection Options
**Option A: Fetch from Provider**
```bash
# Automatically fetch available models
"Fetch available models from OpenRouter"
"Show me Groq models"
"What models are available via OpenAI?"
```
**Option B: Custom Model Input**
```
"Add custom model: my-org/fine-tuned-llama"
"Configure local Ollama model: llama3.2:70b"
"Use fine-tuned GPT: ft:gpt-4o:org:custom"
```
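Under the hood, each of these prompts only has to write a `customModels` entry into the configuration. A hedged sketch using `jq`; the file path and entry name are illustrative, not the tool's actual implementation:

```shell
#!/bin/sh
# Append a custom model entry to a config file; requires jq.
cfg=/tmp/claw-demo-config.json
echo '{"customModels":{}}' > "$cfg"

add_model() {  # add_model <name> <provider> <modelId>
  jq --arg n "$1" --arg p "$2" --arg m "$3" \
     '.customModels[$n] = {provider: $p, modelId: $m}' "$cfg" > "$cfg.tmp" \
    && mv "$cfg.tmp" "$cfg"
}

add_model "fine-tuned-llama" "openrouter" "my-org/fine-tuned-llama"
jq -r '.customModels | keys[]' "$cfg"   # → fine-tuned-llama
```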
### Multi-Provider Setup
```json
{
  "providers": {
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
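Note that the `${VAR}` placeholders are not resolved by JSON itself; the gateway (or a wrapper script) must expand them from the environment at load time. A dependency-free sketch with `sed`, showing a single variable (real loaders typically expand every placeholder):

```shell
#!/bin/sh
# Expand a ${VAR} placeholder from the environment at config-load time.
export GROQ_API_KEY="gsk-demo-key"

printf '%s\n' '{ "providers": { "groq": { "apiKey": "${GROQ_API_KEY}" } } }' \
  | sed "s|\${GROQ_API_KEY}|$GROQ_API_KEY|" > /tmp/claw-expanded.json

cat /tmp/claw-expanded.json
```

`envsubst(1)` from gettext performs the same substitution for all exported variables at once.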
### Provider Comparison

| Provider | Best For | Speed | Cost |
|----------|----------|-------|------|
| Claude | Reasoning, coding | Medium | $$$ |
| GPT-4o | General purpose | Fast | $$$ |
| Gemini | Multimodal | Fast | $$ |
| Groq | Fastest inference | Ultra-fast | $ |
| OpenRouter | Model variety | Varies | $-$$$ |
| Ollama | Privacy, free | Depends on HW | Free |

## Security

- API keys via environment variables
- Restricted config permissions (chmod 600)
- Systemd hardening (NoNewPrivileges, PrivateTmp)
- Network binding to localhost

---


@@ -6,7 +6,7 @@ version: 1.0.0
# Claw Setup Skill
End-to-end professional setup of AI Agent platforms from the Claw family with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.
## Supported Platforms
@@ -18,275 +18,85 @@ End-to-end professional setup of AI Agent platforms from the Claw family with se
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
## AI Providers (OpenCode Compatible - 25+ Providers)

### Built-in Providers

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5, GPT-4o Enterprise | Azure integration, custom endpoints |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, Google Cloud |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infra |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral AI** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-low latency inference |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective hosting |
| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated inference |
| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG capabilities |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning and hosting |
| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Real-time web search |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider gateway | Edge hosting, rate limiting |
| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI integration |
| **GitHub Copilot** | Custom | GPT-5 series | IDE integration, OAuth |

### Custom Loader Providers

| Provider | Auth Method | Use Case |
|----------|-------------|----------|
| **GitHub Copilot Enterprise** | OAuth + API Key | Enterprise IDE integration |
| **Google Vertex Anthropic** | GCP Service Account | Claude on Google Cloud |
| **Azure Cognitive Services** | Azure AD | Azure AI services |
| **Cloudflare AI Gateway** | Gateway Token | Unified billing, rate limiting |
| **SAP AI Core** | Service Key | SAP enterprise integration |
| **OpenCode Free** | None | Free public models |

### Local/Self-Hosted

| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
| **LocalAI** | localhost:8080 | OpenAI-compatible local |

## What This Skill Does

### Phase 1: Platform Selection
- Interactive comparison of all platforms
- Hardware requirements check
- Use case matching

### Phase 2: Secure Installation
- Clone from official GitHub repos
- Security hardening (secrets management, network isolation)
- Environment configuration
- API key setup with best practices

### Phase 3: Personal Customization
- Interactive brainstorming session
- Custom agent templates
- Integration setup (messaging, calendar, etc.)
- Memory and context configuration

### Phase 4: Verification & Deployment
- Health checks
- Test runs
- Production deployment options

## GitHub Repositories

```
OpenClaw:  https://github.com/openclaw/openclaw
NanoBot:   https://github.com/HKUDS/nanobot
PicoClaw:  https://github.com/sipeed/picoclaw
ZeroClaw:  https://github.com/zeroclaw-labs/zeroclaw
NanoClaw:  https://github.com/nanoclaw/nanoclaw
```
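Which of the local/self-hosted providers is actually running can be detected by probing their default ports. A small sketch; the ports are the defaults listed in the table above and may differ on your install:

```shell
#!/bin/sh
# Probe default local-provider endpoints and report which respond.
probe() {  # probe <name> <url>
  if curl -s --max-time 2 "$2" >/dev/null 2>&1; then
    echo "$1: up"
  else
    echo "$1: not reachable"
  fi
}

probe "Ollama" http://localhost:11434/api/tags
probe "LM Studio" http://localhost:1234/v1/models
probe "vLLM" http://localhost:8000/v1/models
probe "LocalAI" http://localhost:8080/v1/models
```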
## Usage Examples
```
"Setup OpenClaw on my server"
"I want to install NanoBot for personal use"
"Help me choose between ZeroClaw and PicoClaw"
"Deploy an AI assistant with security best practices"
"Setup Claw framework with my custom requirements"
```
## Installation Commands by Platform
### OpenClaw (Full Featured)
```bash
# Prerequisites
sudo apt install -y nodejs npm
# Clone and setup
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
npm run setup
# Configure
cp .env.example .env
# Edit .env with API keys
# Run
npm run start
```
### NanoBot (Python Lightweight)
```bash
# Quick install
pip install nanobot-ai
# Or from source
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
# Setup
nanobot onboard
nanobot gateway
```
### PicoClaw (Go Ultra-Light)
```bash
# Download binary
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Or build from source
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
go build -o picoclaw
# Run
picoclaw gateway
```
### ZeroClaw (Rust Minimal)
```bash
# Download binary
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Or from source
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
cargo build --release
# Run
zeroclaw gateway
```
## Security Hardening
### Secrets Management
```bash
# Never commit .env files
echo ".env" >> .gitignore
echo "*.pem" >> .gitignore
# Use environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
# Or use secret files with restricted permissions
mkdir -p ~/.config/claw
cat > ~/.config/claw/config.json << 'CONFIG'
{
"providers": {
"openrouter": { "apiKey": "${OPENROUTER_API_KEY}" }
}
}
CONFIG
chmod 600 ~/.config/claw/config.json
```
### Network Security
```bash
# Bind to localhost only
# In config, set:
# "server": { "host": "127.0.0.1", "port": 3000 }
# Use reverse proxy for external access
# nginx example:
server {
listen 443 ssl;
server_name claw.yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
```
### Systemd Service
```bash
# /etc/systemd/system/claw.service
[Unit]
Description=Claw AI Assistant
After=network.target
[Service]
Type=simple
User=claw
Group=claw
WorkingDirectory=/opt/claw
ExecStart=/usr/local/bin/claw gateway
Restart=on-failure
RestartSec=10
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/claw/data
[Install]
WantedBy=multi-user.target
```
## Brainstorm Session Topics
1. **Use Case Discovery**
- What tasks should the AI handle?
- Which platforms/channels to integrate?
- Automation vs. interactive preferences?
2. **Model Selection**
- Claude, GPT, Gemini, or local models?
- Cost vs. performance tradeoffs?
- Privacy requirements?
3. **Integration Planning**
- Messaging: Telegram, Discord, WhatsApp, Slack?
- Calendar: Google, Outlook, Apple?
- Storage: Local, cloud, hybrid?
- APIs to connect?
4. **Custom Agent Design**
- Personality and tone?
- Domain expertise areas?
- Memory and context preferences?
- Proactive vs. reactive behavior?
5. **Deployment Strategy**
- Local machine, VPS, or cloud?
- High availability requirements?
- Backup and recovery needs?
## AI Provider Configuration
### Supported Providers
| Provider | Type | API Base | Models |
|----------|------|----------|--------|
| **Anthropic** | Direct | api.anthropic.com | Claude 3.5/4/Opus |
| **OpenAI** | Direct | api.openai.com | GPT-4, GPT-4o, o1, o3 |
| **Google** | Direct | generativelanguage.googleapis.com | Gemini 2.0/1.5 |
| **OpenRouter** | Gateway | openrouter.ai/api | 200+ models |
| **Together AI** | Direct | api.together.xyz | Llama, Mistral, Qwen |
| **Groq** | Direct | api.groq.com | Llama, Mixtral (fast) |
| **Cerebras** | Direct | api.cerebras.ai | Llama (fastest) |
| **DeepSeek** | Direct | api.deepseek.com | DeepSeek V3/R1 |
| **Mistral** | Direct | api.mistral.ai | Mistral, Codestral |
| **xAI** | Direct | api.x.ai | Grok |
| **Replicate** | Gateway | api.replicate.com | Various |
| **Local** | Self-hosted | localhost | Ollama, LM Studio |
### Fetch Available Models
```bash
# OpenRouter - All models
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'

# OpenAI - GPT models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'

# Anthropic (static list)
# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022

# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | jq '.models[].name'

# Groq
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'

# Together AI
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY" | jq '.data[].id'

# Ollama (local)
curl -s http://localhost:11434/api/tags | jq '.models[].name'

# models.dev - Universal model list
curl -s https://models.dev/api/models.json
```
### Configuration Templates
#### Multi-Provider Config
```json
{
  "providers": {
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "baseURL": "https://api.openai.com/v1"
    },
    "azure": {
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "baseURL": "${AZURE_OPENAI_ENDPOINT}",
      "deployment": "gpt-4o"
    },
    "google": {
      "apiKey": "${GOOGLE_API_KEY}",
      "baseURL": "https://generativelanguage.googleapis.com/v1"
    },
    "vertex": {
      "projectId": "${GOOGLE_CLOUD_PROJECT}",
      "location": "${GOOGLE_CLOUD_LOCATION}",
      "credentials": "${GOOGLE_APPLICATION_CREDENTIALS}"
    },
    "bedrock": {
      "region": "us-east-1",
      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
    },
    "openrouter": {
      "apiKey": "${OPENROUTER_API_KEY}",
      "baseURL": "https://openrouter.ai/api/v1",
      "headers": {
        "HTTP-Referer": "https://yourapp.com",
        "X-Title": "YourApp"
      }
    },
    "xai": {
      "apiKey": "${XAI_API_KEY}",
      "baseURL": "https://api.x.ai/v1"
    },
    "mistral": {
      "apiKey": "${MISTRAL_API_KEY}",
      "baseURL": "https://api.mistral.ai/v1"
    },
    "groq": {
      "apiKey": "${GROQ_API_KEY}",
      "baseURL": "https://api.groq.com/openai/v1"
    },
    "cerebras": {
      "apiKey": "${CEREBRAS_API_KEY}",
      "baseURL": "https://api.cerebras.ai/v1"
    },
    "deepinfra": {
      "apiKey": "${DEEPINFRA_API_KEY}",
      "baseURL": "https://api.deepinfra.com/v1"
    },
    "cohere": {
      "apiKey": "${COHERE_API_KEY}",
      "baseURL": "https://api.cohere.ai/v1"
    },
    "together": {
      "apiKey": "${TOGETHER_API_KEY}",
      "baseURL": "https://api.together.xyz/v1"
    },
    "perplexity": {
      "apiKey": "${PERPLEXITY_API_KEY}",
      "baseURL": "https://api.perplexity.ai"
    },
    "vercel": {
      "apiKey": "${VERCEL_AI_KEY}",
      "baseURL": "https://api.vercel.ai/v1"
    },
    "gitlab": {
      "token": "${GITLAB_TOKEN}",
      "baseURL": "${GITLAB_URL}/api/v4"
    },
    "github": {
      "token": "${GITHUB_TOKEN}",
      "baseURL": "https://api.github.com"
    },
    "cloudflare": {
      "accountId": "${CF_ACCOUNT_ID}",
      "gatewayId": "${CF_GATEWAY_ID}",
      "token": "${CF_AI_TOKEN}"
    },
    "sap": {
      "serviceKey": "${AICORE_SERVICE_KEY}",
      "deploymentId": "${AICORE_DEPLOYMENT_ID}"
    },
    "ollama": {
      "baseURL": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-sonnet-4-5",
      "temperature": 0.7,
      "maxTokens": 4096
    },
    "fast": {
      "model": "groq/llama-3.3-70b-versatile"
    },
    "coding": {
      "model": "anthropic/claude-sonnet-4-5"
    },
    "research": {
      "model": "perplexity/sonar-pro"
    },
    "local": {
      "model": "ollama/llama3.2:70b"
    }
  }
}
```
#### Custom Model Configuration
```json
{
  "customModels": {
    "my-fine-tuned-gpt": {
      "provider": "openai",
      "modelId": "ft:gpt-4o:my-org:custom:suffix",
      "displayName": "My Custom GPT-4o"
    },
    "local-llama": {
      "provider": "ollama",
      "modelId": "llama3.2:70b",
      "displayName": "Local Llama 3.2 70B"
    },
    "openrouter-custom": {
      "provider": "openrouter",
      "modelId": "custom-org/my-model",
      "displayName": "Custom via OpenRouter"
    }
  }
}
```
### Provider Selection Flow

```
1. Ask user which providers they have API keys for:
   □ Anthropic (Claude)
   □ OpenAI (GPT)
   □ Google (Gemini)
   □ OpenRouter (Multi-model)
   □ Together AI
   □ Groq (Fast inference)
   □ Cerebras (Fastest)
   □ DeepSeek
   □ Mistral
   □ xAI (Grok)
   □ Local (Ollama/LM Studio)

2. For each selected provider:
   - Prompt for API key
   - Fetch available models (if API supports)
   - Let user select or input custom model

3. Generate secure configuration:
   - Store keys in environment variables
   - Create config.json with model selections
   - Set up key rotation reminders

4. Test connectivity:
   - Send test prompt to each configured provider
   - Verify response
```
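Step 4 (test connectivity) can be sketched for OpenAI-compatible endpoints as below. The endpoint and model names in the comments are illustrative, and providers with non-OpenAI request shapes (e.g. Anthropic's native API) need their own format:

```shell
#!/bin/sh
# Send a one-token test prompt to an OpenAI-compatible chat endpoint.
test_provider() {  # test_provider <base-url> <api-key> <model>
  base=$1; key=$2; model=$3
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
    -H "Authorization: Bearer $key" \
    -H "Content-Type: application/json" \
    -d "{\"model\":\"$model\",\"max_tokens\":1,\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}]}" \
    "$base/chat/completions")
  if [ "$code" = "200" ]; then echo "$model: OK"; else echo "$model: HTTP $code"; fi
}

# Examples (require the corresponding keys to be exported):
# test_provider https://api.groq.com/openai/v1 "$GROQ_API_KEY" llama-3.3-70b-versatile
# test_provider https://api.openai.com/v1 "$OPENAI_API_KEY" gpt-4o
```

curl reports `000` when the host is unreachable, which distinguishes network failures from auth errors (401/403).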
### Model Fetching Script ### NanoBot
```bash
pip install nanobot-ai
nanobot onboard
```
### PicoClaw
```bash
wget https://github.com/sipeed/picoclaw/releases/latest/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```
### ZeroClaw
```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
## Security Hardening
```bash
# Secrets in environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Restricted config permissions
chmod 600 ~/.config/claw/config.json

# Systemd hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
```
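The three systemd directives above belong in the `[Service]` section of a unit file. A minimal sketch (the unit name, user, binary path, and writable paths are assumptions to adapt, not something Claw ships):

```ini
# /etc/systemd/system/claw.service
# Illustrative unit only; adjust the binary path, user, and
# writable paths for your actual install.
[Unit]
Description=Claw AI Agent
After=network-online.target

[Service]
User=claw
# API keys live here instead of the unit file; keep it mode 600
EnvironmentFile=/etc/claw/env
ExecStart=/usr/local/bin/openclaw serve
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
# ProtectSystem=strict mounts / read-only, so writable state must be listed
ReadWritePaths=/var/lib/claw

[Install]
WantedBy=multi-user.target
```

After editing, reload with `systemctl daemon-reload` and verify the sandbox with `systemd-analyze security claw.service`.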
## Brainstorm Session Topics

1. **Use Case**: Coding, research, productivity, automation?
2. **Model Selection**: Claude, GPT, Gemini, local?
3. **Integrations**: Telegram, Discord, calendar, storage?
4. **Deployment**: Local, VPS, cloud?
5. **Custom Agents**: Personality, memory, proactivity?
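Answers from the session can be captured directly in the setup config. A hypothetical shape (these keys are not a published schema; they only illustrate how the five topics could map to settings):

```json
{
  "useCase": "coding",
  "model": {
    "provider": "anthropic",
    "modelId": "claude-sonnet-4-5-20250219"
  },
  "integrations": ["telegram", "calendar"],
  "deployment": "vps",
  "agent": {
    "personality": "concise",
    "memory": true,
    "proactive": false
  }
}
```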

## Model Fetching Script

```bash
#!/bin/bash
# fetch-models.sh - Fetch available models from all AI providers
# Usage: ./fetch-models.sh [provider|all]

set -e

GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${BLUE}╔═══════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║        AI PROVIDER MODEL FETCHER (25+ Providers)${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════════════════════════════╝${NC}"

# Anthropic (static list - no API endpoint for models)
fetch_anthropic() {
    echo -e "\n${GREEN}📦 Anthropic Models:${NC}"
    echo "  • claude-opus-4-5-20250219"
    echo "  • claude-sonnet-4-5-20250219"
    echo "  • claude-3-5-sonnet-20241022"
    echo "  • claude-3-opus-20240229"
}

# OpenAI
fetch_openai() {
    if [ -n "$OPENAI_API_KEY" ]; then
        echo -e "\n${GREEN}📦 OpenAI Models:${NC}"
        curl -s https://api.openai.com/v1/models \
            -H "Authorization: Bearer $OPENAI_API_KEY" | \
            jq -r '.data[] | select(.id | test("gpt|o1|o3|chatgpt")) | "  • \(.id)"' | sort -u
    else
        echo -e "\n${YELLOW}⚠️  OPENAI_API_KEY not set${NC}"
    fi
}

# Google Gemini
fetch_google() {
    if [ -n "$GOOGLE_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Google Gemini Models:${NC}"
        curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY" | \
            jq -r '.models[].name' | sed 's|models/||' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  GOOGLE_API_KEY not set${NC}"
    fi
}

# OpenRouter
fetch_openrouter() {
    if [ -n "$OPENROUTER_API_KEY" ]; then
        echo -e "\n${GREEN}📦 OpenRouter Models (100+):${NC}"
        curl -s https://openrouter.ai/api/v1/models \
            -H "Authorization: Bearer $OPENROUTER_API_KEY" | \
            jq -r '.data[].id' | head -30 | sed 's/^/  • /'
        echo "  ... (and more)"
    else
        echo -e "\n${YELLOW}⚠️  OPENROUTER_API_KEY not set${NC}"
    fi
}

# Groq
fetch_groq() {
    if [ -n "$GROQ_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Groq Models:${NC}"
        curl -s https://api.groq.com/openai/v1/models \
            -H "Authorization: Bearer $GROQ_API_KEY" | \
            jq -r '.data[].id' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  GROQ_API_KEY not set${NC}"
    fi
}

# Together AI
fetch_together() {
    if [ -n "$TOGETHER_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Together AI Models:${NC}"
        curl -s https://api.together.xyz/v1/models \
            -H "Authorization: Bearer $TOGETHER_API_KEY" | \
            jq -r '.data[].id' | head -20 | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  TOGETHER_API_KEY not set${NC}"
    fi
}

# Mistral
fetch_mistral() {
    if [ -n "$MISTRAL_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Mistral Models:${NC}"
        curl -s https://api.mistral.ai/v1/models \
            -H "Authorization: Bearer $MISTRAL_API_KEY" | \
            jq -r '.data[].id' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  MISTRAL_API_KEY not set${NC}"
    fi
}

# xAI (Grok)
fetch_xai() {
    if [ -n "$XAI_API_KEY" ]; then
        echo -e "\n${GREEN}📦 xAI (Grok) Models:${NC}"
        curl -s https://api.x.ai/v1/models \
            -H "Authorization: Bearer $XAI_API_KEY" | \
            jq -r '.data[].id' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  XAI_API_KEY not set${NC}"
    fi
}

# Cerebras
fetch_cerebras() {
    if [ -n "$CEREBRAS_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Cerebras Models:${NC}"
        curl -s https://api.cerebras.ai/v1/models \
            -H "Authorization: Bearer $CEREBRAS_API_KEY" | \
            jq -r '.data[].id' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  CEREBRAS_API_KEY not set${NC}"
    fi
}

# DeepInfra
fetch_deepinfra() {
    if [ -n "$DEEPINFRA_API_KEY" ]; then
        echo -e "\n${GREEN}📦 DeepInfra Models:${NC}"
        curl -s https://api.deepinfra.com/v1/models \
            -H "Authorization: Bearer $DEEPINFRA_API_KEY" | \
            jq -r '.data[].id' | head -20 | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  DEEPINFRA_API_KEY not set${NC}"
    fi
}

# Cohere
fetch_cohere() {
    if [ -n "$COHERE_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Cohere Models:${NC}"
        curl -s https://api.cohere.ai/v1/models \
            -H "Authorization: Bearer $COHERE_API_KEY" | \
            jq -r '.models[].name' | sed 's/^/  • /'
    else
        echo -e "\n${YELLOW}⚠️  COHERE_API_KEY not set${NC}"
    fi
}

# Perplexity (static list - no public models endpoint)
fetch_perplexity() {
    if [ -n "$PERPLEXITY_API_KEY" ]; then
        echo -e "\n${GREEN}📦 Perplexity Models:${NC}"
        echo "  • sonar-pro"
        echo "  • sonar"
        echo "  • sonar-reasoning"
    else
        echo -e "\n${YELLOW}⚠️  PERPLEXITY_API_KEY not set${NC}"
    fi
}

# Ollama (local)
fetch_ollama() {
    echo -e "\n${GREEN}📦 Ollama Models (local):${NC}"
    if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
        curl -s http://localhost:11434/api/tags | jq -r '.models[].name' | sed 's/^/  • /'
    else
        echo -e "  ${YELLOW}⚠️  Ollama not running on localhost:11434${NC}"
    fi
}

# models.dev - Universal registry
fetch_models_dev() {
    echo -e "\n${GREEN}📦 models.dev (Universal Registry):${NC}"
    if command -v jq &>/dev/null; then
        curl -s https://models.dev/api/models.json 2>/dev/null | \
            jq -r 'keys[]' | head -20 | sed 's/^/  • /' || echo "  Unable to fetch"
    else
        echo "  Requires jq"
    fi
}

# Help
show_help() {
    echo "Usage: $0 [provider|all]"
    echo ""
    echo "Providers:"
    echo "  anthropic   - Claude models (static list)"
    echo "  openai      - GPT, o1, o3 models"
    echo "  google      - Gemini models"
    echo "  openrouter  - 100+ models gateway"
    echo "  groq        - Ultra-fast inference"
    echo "  together    - Together AI"
    echo "  mistral     - Mistral models"
    echo "  xai         - Grok models"
    echo "  cerebras    - Cerebras fast inference"
    echo "  deepinfra   - DeepInfra models"
    echo "  cohere      - Cohere models"
    echo "  perplexity  - Perplexity Sonar"
    echo "  ollama      - Local models"
    echo "  models.dev  - Universal registry"
    echo "  all         - Fetch from all providers"
}

# Main logic
case "${1:-all}" in
    anthropic) fetch_anthropic ;;
    openai) fetch_openai ;;
    google) fetch_google ;;
    openrouter) fetch_openrouter ;;
    groq) fetch_groq ;;
    together) fetch_together ;;
    mistral) fetch_mistral ;;
    xai) fetch_xai ;;
    cerebras) fetch_cerebras ;;
    deepinfra) fetch_deepinfra ;;
    cohere) fetch_cohere ;;
    perplexity) fetch_perplexity ;;
    ollama) fetch_ollama ;;
    models.dev|modelsdev) fetch_models_dev ;;
    all)
        fetch_anthropic
        fetch_openai
        fetch_google
        fetch_openrouter
        fetch_groq
        fetch_together
        fetch_mistral
        fetch_xai
        fetch_cerebras
        fetch_deepinfra
        fetch_cohere
        fetch_perplexity
        fetch_ollama
        ;;
    -h|--help|help) show_help ; exit 0 ;;
    *)
        echo "Unknown provider: $1"
        show_help
        exit 1
        ;;
esac
```
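Every network-backed fetcher in the script follows the same guard: check the key, then fetch or warn. The convention in isolation, as a runnable sketch (the provider names here are placeholders):

```shell
#!/usr/bin/env bash
# The per-provider gating convention used by fetch-models.sh:
# a fetcher only runs when its API key is non-empty, so the script
# degrades gracefully when some providers are unconfigured.

check_provider() {
    local name="$1" key="$2"
    if [ -n "$key" ]; then
        echo "✓ $name configured"
    else
        echo "✗ $name skipped (no API key)"
    fi
}

check_provider "Demo"  "demo-key"             # a set key
check_provider "Unset" "${UNSET_DEMO_KEY:-}"  # an absent key
```

Passing the key's value (with a `:-` default) rather than testing the variable inside the function keeps the guard portable and easy to unit-test.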