feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup

Updated provider support to match OpenCode's full provider list:

Built-in Providers (18):
- Anthropic, OpenAI, Azure OpenAI
- Google AI, Google Vertex AI
- Amazon Bedrock
- OpenRouter, xAI, Mistral
- Groq, Cerebras, DeepInfra
- Cohere, Together AI, Perplexity
- Vercel AI, GitLab, GitHub Copilot

Custom Loader Providers:
- GitHub Copilot Enterprise
- Google Vertex Anthropic
- Azure Cognitive Services
- Cloudflare AI Gateway
- SAP AI Core

Local/Self-Hosted:
- Ollama, LM Studio, vLLM

Features:
- Model fetching from provider APIs
- Custom model input support
- Multi-provider configuration
- Environment variable security

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude Code
2026-02-22 03:51:55 -05:00
parent 2072e16bd1
commit baffcf6db1
3 changed files with 466 additions and 820 deletions


@@ -4,7 +4,7 @@
### Professional AI Agent Deployment Made Simple
**End-to-end setup of OpenClaw, NanoBot, PicoClaw, ZeroClaw, or NanoClaw with security hardening and personal customization**
**End-to-end setup of Claw platforms with 25+ AI providers, security hardening, and personal customization**
---
@@ -28,7 +28,7 @@
## Overview
Claw Setup handles the complete deployment of AI Agent platforms from the Claw family - from selection to production - with security best practices and personalized configuration through interactive brainstorming.
Claw Setup handles complete deployment of AI Agent platforms with **25+ AI provider integrations** (OpenCode compatible).
```
┌─────────────────────────────────────────────────────────────────┐
@@ -40,411 +40,106 @@ Claw Setup handles the complete deployment of AI Agent platforms from the Claw f
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ SELECT │────►│ INSTALL │────►│CUSTOMIZE│────►│ DEPLOY │ │
│ │ Platform│ │& Secure │ │Providers│ │ & Run │ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ Compare Clone & Brainstorm Systemd │
│ platforms harden your use case & monitor │
│ security │
│ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ SUPPORTED PLATFORMS ││
│ │ ││
│ │ 🦞 OpenClaw Full-featured, 1700+ plugins, 215K stars ││
│ │ 🤖 NanoBot Python, 4K lines, research-ready ││
│ │ 🦐 PicoClaw Go, <10MB, $10 hardware ││
│ │ ⚡ ZeroClaw Rust, <5MB, 10ms startup ││
│ │ 💬 NanoClaw TypeScript, WhatsApp focused ││
│ │ ││
│ └─────────────────────────────────────────────────────────────┘│
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Platform Comparison
## Platforms Supported
```
┌─────────────────────────────────────────────────────────────────┐
│ PLATFORM COMPARISON │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Metric OpenClaw NanoBot PicoClaw ZeroClaw NanoClaw │
│ ───────────────────────────────────────────────────────────── │
│ Language TS Python Go Rust TS │
│ Memory >1GB ~100MB <10MB <5MB ~50MB │
│ Startup ~500s ~30s ~1s <10ms ~5s │
│ Binary Size ~28MB N/A ~8MB 3.4MB ~15MB │
│ GitHub Stars 215K+ 22K 15K 10K 5K │
│ Plugins 1700+ ~50 ~20 ~15 ~10 │
│ Learning Medium Easy Easy Medium Easy │
│ │
│ BEST FOR: │
│ ───────── │
│ OpenClaw → Full desktop AI, extensive integrations │
│ NanoBot → Research, customization, Python developers │
│ PicoClaw → Embedded, low-resource, $10 hardware │
│ ZeroClaw → Maximum performance, security-critical │
│ NanoClaw → WhatsApp automation, messaging bots │
│ │
└─────────────────────────────────────────────────────────────────┘
```
| Platform | Language | Memory | Startup | Best For |
|----------|----------|--------|---------|----------|
| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, 1700+ plugins |
| **NanoBot** | Python | ~100MB | ~30s | Research, customization |
| **PicoClaw** | Go | <10MB | ~1s | Embedded, $10 hardware |
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
## AI Providers (25+ Supported)
### Tier 1: Major AI Labs
| Provider | Models | Features |
|----------|--------|----------|
| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | Grok | Real-time data integration |
| **Mistral** | Mistral Large, Codestral | Code-focused models |
### Tier 2: Cloud Platforms
| Provider | Models | Features |
|----------|--------|----------|
| **Azure OpenAI** | GPT-5, GPT-4o Enterprise | Azure integration |
| **Google Vertex** | Claude, Gemini on GCP | Anthropic on Google |
| **Amazon Bedrock** | Nova, Claude, Llama 3 | AWS regional prefixes |
### Tier 3: Aggregators & Gateways
| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ models | Multi-provider gateway |
| **Vercel AI** | Multi-provider | Edge hosting, rate limiting |
| **Together AI** | Open source | Fine-tuning, hosting |
| **DeepInfra** | Open source | Cost-effective |
### Tier 4: Fast Inference
| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |
### Tier 5: Specialized
| Provider | Use Case |
|----------|----------|
| **Perplexity** | Web search integration |
| **Cohere** | Enterprise RAG |
| **GitLab Duo** | CI/CD integration |
| **GitHub Copilot** | IDE integration |
| **Cloudflare AI** | Gateway, rate limiting |
| **SAP AI Core** | SAP enterprise |
### Local/Self-Hosted
| Provider | Use Case |
|----------|----------|
| **Ollama** | Local model hosting |
| **LM Studio** | GUI local models |
| **vLLM** | High-performance serving |
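Whether a local backend is actually reachable can be probed against each tool's documented default port (11434 for Ollama, 1234 for LM Studio, 8000 for vLLM's OpenAI-compatible server); a sketch, assuming default ports:

```shell
# Probe default local endpoints; prints up/down per provider.
# Adjust the URLs if you run the servers on non-default ports.
for ep in "ollama http://localhost:11434/api/tags" \
          "lm-studio http://localhost:1234/v1/models" \
          "vllm http://localhost:8000/v1/models"; do
  set -- $ep   # split "name url" into $1 and $2
  curl -fsS --max-time 2 "$2" >/dev/null 2>&1 && echo "$1: up" || echo "$1: down"
done
```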
## Model Selection
**Option A: Fetch from Provider**
```bash
# Fetch available models
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags # Ollama
```
**Option B: Custom Model Input**
```json
{
  "provider": "openai",
  "modelId": "ft:gpt-4o:org:custom:suffix",
  "displayName": "My Fine-Tuned Model"
}
```
## Decision Flowchart
```
                 ┌─────────────────┐
                 │  Need AI Agent? │
                 └────────┬────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  Memory constrained?  │
              │  (<1GB RAM available) │
              └───────────┬───────────┘
                    ┌─────┴─────┐
                    │           │
                   YES          NO
                    │           │
                    ▼           ▼
           ┌──────────────┐ ┌──────────────────┐
           │  Need <10MB? │ │   Want plugins?  │
           └──────┬───────┘ └────────┬─────────┘
             ┌────┴────┐        ┌────┴────┐
             │         │        │         │
            YES        NO      YES        NO
             │         │        │         │
             ▼         ▼        ▼         ▼
        ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
        │ZeroClaw│ │PicoClaw│ │OpenClaw│ │NanoBot │
        │ (Rust) │ │ (Go)   │ │ (Full) │ │(Python)│
        └────────┘ └────────┘ └────────┘ └────────┘
```
## Quick Start
### Option 1: Interactive Setup (Recommended)
```
"Setup Claw AI assistant on my server"
"Help me choose and install an AI agent platform"
"Setup OpenClaw with Anthropic and OpenAI providers"
"Install NanoBot with all available providers"
"Deploy ZeroClaw with Groq for fast inference"
"Configure Claw with local Ollama models"
```
### Option 2: Direct Platform Selection
```
"Setup OpenClaw with all security features"
"Install ZeroClaw on my VPS"
"Deploy NanoBot for research use"
```
## Installation Guides
### OpenClaw (Full Featured)
```bash
# Prerequisites
sudo apt update && sudo apt install -y nodejs npm git
# Clone official repo
git clone https://github.com/openclaw/openclaw.git
cd openclaw
# Install dependencies
npm install
# Run setup wizard
npm run setup
# Configure environment
cp .env.example .env
nano .env # Add your API keys
# Start
npm run start
```
### NanoBot (Python Lightweight)
```bash
# Quick install via pip
pip install nanobot-ai
# Initialize
nanobot onboard
# Write the config (~/.nanobot/config.json)
cat > ~/.nanobot/config.json <<'EOF'
{
  "providers": {
    "openrouter": { "apiKey": "sk-or-v1-xxx" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-opus-4-5" }
  }
}
EOF
# Start gateway
nanobot gateway
```
### PicoClaw (Go Ultra-Light)
```bash
# Download latest release
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Create config
mkdir -p ~/.config/picoclaw
picoclaw config init
# Start
picoclaw gateway
```
### ZeroClaw (Rust Minimal)
```bash
# Download latest release
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Initialize config
zeroclaw init
# Migrate from OpenClaw (optional)
zeroclaw migrate openclaw --dry-run
# Start
zeroclaw gateway
```
## Security Hardening
### 1. Secrets Management
```bash
# Never hardcode API keys - use environment variables
export ANTHROPIC_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
# Add to shell profile for persistence
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
# Use encrypted config files
mkdir -p ~/.config/claw
chmod 700 ~/.config/claw
```
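Instead of appending keys to `~/.bashrc` (world-readable on some setups, and easy to leak in dotfile repos), a dedicated env file with owner-only permissions works across all the Claw platforms; a sketch, where the file name and location are a convention, not a Claw requirement:

```shell
# Keep secrets in a dedicated, owner-only env file.
umask 077   # files created below are created owner-only
mkdir -p ~/.config/claw
cat > ~/.config/claw/secrets.env <<'EOF'
ANTHROPIC_API_KEY=your-key
OPENROUTER_API_KEY=your-key
EOF
chmod 600 ~/.config/claw/secrets.env
# Load into the current shell before starting the gateway.
set -a; . ~/.config/claw/secrets.env; set +a
```

The same file can later be referenced from a systemd unit via `EnvironmentFile=`, so the keys never appear in the unit file or process list.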
### 2. Network Security
```bash
# Bind to localhost only
# config.json:
cat > config.json <<'EOF'
{
  "server": {
    "host": "127.0.0.1",
    "port": 3000
  }
}
EOF
# Use nginx reverse proxy for external access
sudo certbot --nginx -d claw.yourdomain.com
```
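To confirm the bind address actually took effect, listening sockets can be checked with `ss` (iproute2); port 3000 is taken from the example config:

```shell
# Print any non-loopback listener on port 3000; empty output means only
# localhost is exposed.
command -v ss >/dev/null && \
  ss -tln | awk '$4 ~ /:3000$/ && $4 !~ /^127\.0\.0\.1:/ {print "EXPOSED: " $4}' || true
```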
### 3. Systemd Hardened Service
```ini
# /etc/systemd/system/claw.service
[Unit]
Description=Claw AI Assistant
After=network.target
[Service]
Type=simple
User=claw
Group=claw
WorkingDirectory=/opt/claw
ExecStart=/usr/local/bin/claw gateway
Restart=on-failure
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/claw/data
# Load keys from a root-owned env file (chmod 600) instead of inlining them
EnvironmentFile=/etc/claw/claw.env
[Install]
WantedBy=multi-user.target
```
```bash
# Enable service
sudo systemctl daemon-reload
sudo systemctl enable --now claw
```
## Brainstorm Session
After installation, we'll explore your needs:
### 🎯 Use Case Discovery
```
Q: What tasks should your AI handle?
□ Code assistance & development
□ Research & information gathering
□ Personal productivity (calendar, reminders)
□ Content creation & writing
□ Data analysis & visualization
□ Home automation
□ Customer support / chatbot
□ Other: _______________
```
### 🤖 Model Selection
```
Q: Which AI model(s) to use?
□ Claude (Anthropic) - Best reasoning
□ GPT-4 (OpenAI) - General purpose
□ Gemini (Google) - Multimodal
□ Local models (Ollama) - Privacy-first
□ OpenRouter - Multi-model access
```
### 🔌 Integration Planning
```
Q: Which platforms to connect?
Messaging:
□ Telegram □ Discord □ WhatsApp □ Slack
Calendar:
□ Google □ Outlook □ Apple □ None
Storage:
□ Local □ Google Drive □ Dropbox □ S3
APIs:
□ Custom REST APIs
□ Webhooks
□ Database connections
```
### 🎨 Agent Personality
```
Q: How should your agent behave?
Tone: Professional □ Casual □ Formal □ Playful □
Proactivity:
□ Reactive (responds only when asked)
□ Proactive (suggests, reminds, initiates)
Memory:
□ Session only (fresh each chat)
□ Persistent (remembers everything)
□ Selective (configurable retention)
```
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ DEPLOYED ARCHITECTURE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ Internet │ │
│ └──────┬──────┘ │
│ │ │
│ ┌───────▼───────┐ │
│ │ nginx/HTTPS │ │
│ │ (Reverse │ │
│ │ Proxy) │ │
│ └───────┬───────┘ │
│ │ │
│ ┌──────────────────────────┼──────────────────────────────┐ │
│ │ localhost │ │
│ │ ┌─────────┐ ┌─────────▼────────┐ ┌────────────┐ │ │
│ │ │ Config │ │ CLAW ENGINE │ │ Data │ │ │
│ │ │ ~/.config│ │ (Gateway) │ │ Storage │ │ │
│ │ │ /claw │ │ Port: 3000 │ │ ~/claw/ │ │ │
│ │ └─────────┘ └─────────┬────────┘ └────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────┼─────────────────┐ │ │
│ │ │ │ │ │ │
│ │ ┌────▼────┐ ┌─────▼─────┐ ┌─────▼─────┐ │ │
│ │ │ LLM │ │ Tools │ │ Memory │ │ │
│ │ │ APIs │ │ Plugins │ │ Context │ │ │
│ │ │Claude/GPT│ │ Skills │ │ Store │ │ │
│ │ └─────────┘ └───────────┘ └───────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Post-Setup Checklist
```
□ API keys configured securely
□ Network binding verified (localhost)
□ Firewall configured
□ SSL certificate installed (if external)
□ Systemd service enabled
□ Logs configured and rotating
□ Backup strategy in place
□ Test conversation successful
□ Custom agents created
□ Integrations connected
```
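A few of these items can be verified automatically; a sketch where the service name, port, and health path are assumptions carried over from the examples above:

```shell
# check: run a command silently, report PASS/FAIL with the command shown.
check() { if "$@" >/dev/null 2>&1; then echo "PASS: $*"; else echo "FAIL: $*"; fi; }
check systemctl is-enabled claw                        # systemd service enabled
check curl -fsS --max-time 2 http://127.0.0.1:3000     # gateway answering locally
check test -O "$HOME/.config/claw"                     # config dir owned by you
```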
---
## AI Provider Configuration
### Supported Providers
```
┌─────────────────────────────────────────────────────────────────┐
│ AI PROVIDER OPTIONS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Direct Providers │ Gateways & Aggregators │
│ ───────────────── │ ────────────────────── │
│ • Anthropic (Claude) │ • OpenRouter (200+ models) │
│ • OpenAI (GPT-4, o1, o3) │ • Replicate │
│ • Google (Gemini 2.0) │ │
│ • Mistral │ Fast Inference │
│ • DeepSeek │ ─────────────── │
│ • xAI (Grok) │ • Groq (ultra-fast) │
│ │ • Cerebras (fastest) │
│ Local/Self-Hosted │ • Together AI │
│ ────────────────── │ │
│ • Ollama │ │
│ • LM Studio │ │
│ • vLLM │ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### Model Selection Options
**Option A: Fetch from Provider**
```bash
# Automatically fetch available models
"Fetch available models from OpenRouter"
"Show me Groq models"
"What models are available via OpenAI?"
```
**Option B: Custom Model Input**
```
"Add custom model: my-org/fine-tuned-llama"
"Configure local Ollama model: llama3.2:70b"
"Use fine-tuned GPT: ft:gpt-4o:org:custom"
```
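The custom-model requests above can also be scripted, assuming the platform keeps custom models in a JSON file; the path and schema below are illustrative, not a documented Claw interface:

```shell
# Append a custom model entry (same shape as the Option B JSON example).
CONFIG="$HOME/.config/claw/models.json"   # hypothetical path
mkdir -p "$(dirname "$CONFIG")"
[ -s "$CONFIG" ] || printf '[]' > "$CONFIG"
jq '. += [{"provider":"openai","modelId":"ft:gpt-4o:org:custom:suffix","displayName":"My Fine-Tuned Model"}]' \
   "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"
```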
### Multi-Provider Setup
## Configuration Example
```json
{
  "providers": {
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
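Before starting the gateway, a config like this can be sanity-checked with `jq`; the config path is illustrative:

```shell
# Verify the config parses and has the two required top-level sections.
CONFIG="${CLAW_CONFIG:-$HOME/.config/claw/config.json}"   # hypothetical path
if [ -f "$CONFIG" ] && jq -e '.providers and .agents' "$CONFIG" >/dev/null 2>&1; then
  echo "config OK, providers: $(jq -r '.providers | keys | join(", ")' "$CONFIG")"
else
  echo "config missing or invalid: $CONFIG"
fi
```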
### Provider Comparison
| Provider | Best For | Speed | Cost |
|----------|----------|-------|------|
| Claude | Reasoning, coding | Medium | $$$ |
| GPT-4o | General purpose | Fast | $$$ |
| Gemini | Multimodal | Fast | $$ |
| Groq | Fastest inference | Ultra-fast | $ |
| OpenRouter | Model variety | Varies | $-$$$ |
| Ollama | Privacy, free | Depends on HW | Free |
## Security
- API keys via environment variables
- Restricted config permissions (chmod 600)
- Systemd hardening (NoNewPrivileges, PrivateTmp)
- Network binding to localhost
---