Commit baffcf6db1: feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 03:51:55 -05:00


# 🦞 Claw Setup

**Professional AI Agent Deployment Made Simple**

End-to-end setup of Claw platforms with 25+ AI providers, security hardening, and personal customization.

Designed by GLM 5. Autonomously developed by the GLM 5 Advanced Coding Model.

> ⚠️ **Disclaimer:** Test in a non-production environment before deploying to any live system.

## Overview

Claw Setup handles complete deployment of AI agent platforms with 25+ AI provider integrations (OpenCode-compatible).

```
┌─────────────────────────────────────────────────────────────────┐
│                    CLAW SETUP WORKFLOW                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Phase 1          Phase 2          Phase 3          Phase 4    │
│  ────────         ────────         ────────         ────────   │
│                                                                 │
│  ┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐  │
│  │ SELECT  │────►│ INSTALL │────►│CUSTOMIZE│────►│ DEPLOY  │  │
│  │ Platform│     │& Secure │     │Providers│     │ & Run   │  │
│  └─────────┘     └─────────┘     └─────────┘     └─────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Platforms Supported

| Platform | Language   | Memory | Startup | Best For                     |
|----------|------------|--------|---------|------------------------------|
| OpenClaw | TypeScript | >1GB   | ~500s   | Full-featured, 1700+ plugins |
| NanoBot  | Python     | ~100MB | ~30s    | Research, customization      |
| PicoClaw | Go         | <10MB  | ~1s     | Embedded, $10 hardware       |
| ZeroClaw | Rust       | <5MB   | <10ms   | Maximum performance          |
| NanoClaw | TypeScript | ~50MB  | ~5s     | WhatsApp integration         |

## AI Providers (25+ Supported)

### Tier 1: Major AI Labs

| Provider  | Models                   | Features                            |
|-----------|--------------------------|-------------------------------------|
| Anthropic | Claude 3.5/4/Opus        | Extended thinking, PDF support      |
| OpenAI    | GPT-4o, o1, o3, GPT-5    | Function calling, structured output |
| Google AI | Gemini 2.5, Gemini 3 Pro | Multimodal, long context            |
| xAI       | Grok                     | Real-time data integration          |
| Mistral   | Mistral Large, Codestral | Code-focused models                 |

### Tier 2: Cloud Platforms

| Provider       | Models                | Features                     |
|----------------|-----------------------|------------------------------|
| Azure OpenAI   | GPT-5, GPT-4o         | Enterprise Azure integration |
| Google Vertex  | Claude, Gemini on GCP | Anthropic on Google          |
| Amazon Bedrock | Nova, Claude, Llama 3 | AWS regional prefixes        |

### Tier 3: Aggregators & Gateways

| Provider    | Models         | Features                    |
|-------------|----------------|-----------------------------|
| OpenRouter  | 100+ models    | Multi-provider gateway      |
| Vercel AI   | Multi-provider | Edge hosting, rate limiting |
| Together AI | Open source    | Fine-tuning, hosting        |
| DeepInfra   | Open source    | Cost-effective              |

### Tier 4: Fast Inference

| Provider | Speed      | Models           |
|----------|------------|------------------|
| Groq     | Ultra-fast | Llama 3, Mixtral |
| Cerebras | Fastest    | Llama 3 variants |

### Tier 5: Specialized

| Provider       | Use Case                  |
|----------------|---------------------------|
| Perplexity     | Web search integration    |
| Cohere         | Enterprise RAG            |
| GitLab Duo     | CI/CD integration         |
| GitHub Copilot | IDE integration           |
| Cloudflare     | AI Gateway, rate limiting |
| SAP AI Core    | SAP enterprise            |

### Local/Self-Hosted

| Provider  | Use Case                 |
|-----------|--------------------------|
| Ollama    | Local model hosting      |
| LM Studio | GUI for local models     |
| vLLM      | High-performance serving |

## Model Selection

### Option A: Fetch from Provider

```sh
# Fetch available models
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags  # Ollama
```
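The OpenRouter and Groq endpoints above return an OpenAI-style model list, which is what the `jq '.data[].id'` filter relies on. The same extraction can be sketched in Python, assuming that `{"data": [{"id": ...}]}` response shape; `extract_model_ids` is a hypothetical helper, not part of Claw Setup:

```python
import json

def extract_model_ids(payload: str) -> list[str]:
    """Pull model IDs out of an OpenAI/OpenRouter-style /models response.

    Assumes the {"data": [{"id": ...}, ...]} shape that the
    jq filter '.data[].id' above relies on.
    """
    body = json.loads(payload)
    return [entry["id"] for entry in body.get("data", [])]

# Example with a trimmed-down response body:
sample = '{"data": [{"id": "meta-llama/llama-3.3-70b"}, {"id": "mistralai/mistral-large"}]}'
print(extract_model_ids(sample))
```

The Ollama endpoint (`/api/tags`) uses a different shape and would need its own parser.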

### Option B: Custom Model Input

```json
{
  "provider": "openai",
  "modelId": "ft:gpt-4o:org:custom:suffix",
  "displayName": "My Fine-Tuned Model"
}
```
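Custom entries like the one above can be sanity-checked before they are written into a config. A minimal sketch: the field names come from the JSON example, but `validate_custom_model` and its rules are illustrative assumptions, not Claw Setup's actual validation logic:

```python
REQUIRED_FIELDS = ("provider", "modelId")  # displayName is optional

def validate_custom_model(entry: dict) -> list[str]:
    """Return a list of problems with a custom model entry (empty = OK).

    Hypothetical helper; field names follow the JSON example above.
    """
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not entry.get(f)]
    model_id = entry.get("modelId", "")
    # The example uses an OpenAI fine-tune ID of the form ft:<base>:<org>:<name>:<suffix>
    if model_id.startswith("ft:") and model_id.count(":") < 3:
        problems.append("fine-tune IDs look like ft:<base>:<org>:<name>:<suffix>")
    return problems

entry = {
    "provider": "openai",
    "modelId": "ft:gpt-4o:org:custom:suffix",
    "displayName": "My Fine-Tuned Model",
}
print(validate_custom_model(entry))  # → []
```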

## Quick Start

Example prompts:

- "Setup OpenClaw with Anthropic and OpenAI providers"
- "Install NanoBot with all available providers"
- "Deploy ZeroClaw with Groq for fast inference"
- "Configure Claw with local Ollama models"

## Configuration Example

```json
{
  "providers": {
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
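The `${VAR}` placeholders above keep API keys out of the config file itself. How such placeholders can be resolved at load time is sketched below, assuming this exact `${NAME}` syntax; `expand_env` is a hypothetical helper, and failing loudly on a missing variable is this sketch's choice, not necessarily Claw's behavior:

```python
import os
import re

_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(repl, value)

os.environ["GROQ_API_KEY"] = "gsk-demo"  # demo value only
print(expand_env("${GROQ_API_KEY}"))  # → gsk-demo
```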

## Security

- API keys via environment variables
- Restricted config permissions (`chmod 600`)
- Systemd hardening (`NoNewPrivileges`, `PrivateTmp`)
- Network binding to localhost
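The hardening points above can be collected into a systemd unit. This is an illustrative sketch only: the unit name, binary path, env-file location, and `--host` flag are assumptions, not Claw defaults; `NoNewPrivileges`, `PrivateTmp`, `ProtectSystem`, and `ProtectHome` are standard systemd directives:

```ini
# /etc/systemd/system/claw.service : illustrative sketch, paths are assumptions
[Unit]
Description=Claw AI agent platform
After=network.target

[Service]
# API keys live in an env file with chmod 600, not in the unit itself
EnvironmentFile=/etc/claw/claw.env
ExecStart=/usr/local/bin/openclaw --host 127.0.0.1
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
```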

Learn more about GLM 5 Advanced Coding Model