docs: Comprehensive claw-setup skill documentation

Added complete documentation covering all features:

FEATURES DOCUMENTED:
1. FREE Qwen OAuth Cross-Platform Import
   - 2,000 requests/day free tier
   - Works with ALL Claw platforms
   - Platform-specific import guides

2. 25+ OpenCode-Compatible AI Providers
   - Tier 1: FREE (Qwen OAuth)
   - Tier 2: Major Labs (Anthropic, OpenAI, Google, xAI, Mistral)
   - Tier 3: Cloud (Azure, Bedrock, Vertex)
   - Tier 4: Gateways (OpenRouter 100+, Together AI)
   - Tier 5: Fast (Groq, Cerebras)
   - Tier 6: Specialized (Perplexity, Cohere, GitLab)
   - Tier 7: Local (Ollama, LM Studio, vLLM)

3. Customization Options
   - Model selection (fetch or custom)
   - Security hardening
   - Interactive brainstorming
   - Multi-provider configuration

4. Installation Guides
   - All 6 platforms with step-by-step instructions

5. Configuration Examples
   - Multi-provider setup
   - Environment variables
   - Custom models

6. Usage Examples
   - Basic, advanced, and provider-specific

7. Troubleshooting
   - Common issues and solutions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Claude Code
2026-02-22 04:22:37 -05:00
Unverified
parent 4299e9dce4
commit cf0d6489d1

View File

@@ -2,9 +2,9 @@
# 🦞 Claw Setup # 🦞 Claw Setup
### Cross-Platform AI Agent Deployment with 25+ Providers + FREE Qwen OAuth ### The Ultimate AI Agent Deployment Skill
**Use ANY AI provider with ANY Claw platform - including FREE Qwen tier!** **Setup ANY Claw platform with 25+ AI providers + FREE Qwen OAuth + Full Customization**
--- ---
@@ -26,211 +26,493 @@
</div> </div>
## ⭐ Two Powerful Features ## Table of Contents
1. [Features Overview](#-features-overview)
2. [Supported Platforms](#-supported-platforms)
3. [FREE Qwen OAuth Import](#-feature-1-free-qwen-oauth-import)
4. [25+ AI Providers](#-feature-2-25-ai-providers)
5. [Customization Options](#-customization-options)
6. [Installation Guides](#-installation-guides)
7. [Configuration Examples](#-configuration-examples)
8. [Usage Examples](#-usage-examples)
---
## 🎯 Features Overview
``` ```
┌─────────────────────────────────────────────────────────────────┐ ┌─────────────────────────────────────────────────────────────────┐
│ CLAW SETUP FEATURES │ │ CLAW SETUP FEATURES │
├─────────────────────────────────────────────────────────────────┤ ├─────────────────────────────────────────────────────────────────┤
│ │ │ │
│ FEATURE 1: FREE Qwen OAuth Cross-Platform Import FEATURE 1: FREE Qwen OAuth Cross-Platform Import │
─────────────────────────────────────────────── • 2,000 requests/day FREE
✅ FREE: 2,000 requests/day, 60 req/min • Works with ALL Claw platforms
✅ Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw • Qwen3-Coder model (coding-optimized)
✅ Model: Qwen3-Coder (coding-optimized) • Browser OAuth - no API key needed
│ ✅ No API key needed - browser OAuth │
│ │ │ │
│ FEATURE 2: 25+ OpenCode-Compatible AI Providers FEATURE 2: 25+ OpenCode-Compatible AI Providers │
───────────────────────────────────────────── • All major AI labs
✅ All major AI labs: Anthropic, OpenAI, Google, xAI • Cloud platforms (Azure, AWS, GCP)
✅ Cloud platforms: Azure, AWS Bedrock, Google Vertex • Fast inference (Groq, Cerebras)
✅ Fast inference: Groq (ultra-fast), Cerebras (fastest) • Gateways (OpenRouter: 100+ models)
✅ Gateways: OpenRouter (100+ models), Together AI • Local models (Ollama, LM Studio)
✅ Local: Ollama, LM Studio, vLLM
│ ✅ FEATURE 3: Full Customization │
│ • Model selection (fetch or custom) │
│ • Security hardening │
│ • Interactive brainstorming │
│ • Multi-provider configuration │
│ │ │ │
└─────────────────────────────────────────────────────────────────┘ └─────────────────────────────────────────────────────────────────┘
``` ```
--- ---
## Platforms Supported ## 🦀 Supported Platforms
| Platform | Qwen OAuth | All Providers | Memory | Best For | | Platform | Language | Memory | Startup | Qwen OAuth | All Providers | Best For |
|----------|------------|---------------|--------|----------| |----------|----------|--------|---------|------------|---------------|----------|
| **Qwen Code** | ✅ Native | ✅ | ~200MB | FREE coding | | **Qwen Code** | TypeScript | ~200MB | ~5s | ✅ Native | ✅ | FREE coding |
| **OpenClaw** | ✅ Import | ✅ | >1GB | Full-featured | | **OpenClaw** | TypeScript | >1GB | ~500s | ✅ Import | ✅ | Full-featured, 1700+ plugins |
| **NanoBot** | ✅ Import | ✅ | ~100MB | Research | | **NanoBot** | Python | ~100MB | ~30s | ✅ Import | ✅ | Research, Python devs |
| **PicoClaw** | ✅ Import | ✅ | <10MB | Embedded | | **PicoClaw** | Go | <10MB | ~1s | ✅ Import | ✅ | Embedded, $10 hardware |
| **ZeroClaw** | ✅ Import | ✅ | <5MB | Performance | | **ZeroClaw** | Rust | <5MB | <10ms | ✅ Import | ✅ | Maximum performance |
| **NanoClaw** | TypeScript | ~50MB | ~5s | ✅ Import | ✅ | WhatsApp integration |
### Platform Selection Guide
```
┌─────────────────┐
│ Need AI Agent? │
└────────┬────────┘
┌───────────────────────┐
│ Want FREE tier? │
└───────────┬───────────┘
┌─────┴─────┐
YES NO
│ │
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ ⭐ Qwen Code │ │ Memory limited? │
│ OAuth FREE │ └────────┬─────────┘
│ 2000/day │ ┌─────┴─────┐
└──────────────┘ YES NO
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ZeroClaw/ │ │OpenClaw │
│PicoClaw │ │(Full) │
└──────────┘ └──────────┘
```
--- ---
# FEATURE 1: FREE Qwen OAuth Import ## ⭐ FEATURE 1: FREE Qwen OAuth Import
## Quick Start (FREE) ### What You Get
| Metric | Value |
|--------|-------|
| **Requests/day** | 2,000 |
| **Requests/minute** | 60 |
| **Cost** | **FREE** |
| **Model** | Qwen3-Coder (coding-optimized) |
| **Auth** | Browser OAuth via qwen.ai |
### Quick Start
```bash ```bash
# 1. Install Qwen Code # Step 1: Install Qwen Code
npm install -g @qwen-code/qwen-code@latest npm install -g @qwen-code/qwen-code@latest
# 2. Get FREE OAuth (2,000 req/day) # Step 2: Get FREE OAuth
qwen qwen
/auth # Select "Qwen OAuth" → Browser login /auth # Select "Qwen OAuth" → Browser login with qwen.ai
# 3. Import to ANY platform # Step 3: Import to ANY platform
source ~/.qwen/.env && openclaw source ~/.qwen/.env && openclaw
source ~/.qwen/.env && nanobot gateway source ~/.qwen/.env && nanobot gateway
source ~/.qwen/.env && picoclaw gateway source ~/.qwen/.env && picoclaw gateway
source ~/.qwen/.env && zeroclaw gateway source ~/.qwen/.env && zeroclaw gateway
``` ```
## Free Tier Limits ### Platform-Specific Import
| Metric | Limit | #### OpenClaw + FREE Qwen
|--------|-------| ```bash
| Requests/day | **2,000** | git clone https://github.com/openclaw/openclaw.git
| Requests/minute | 60 | cd openclaw && npm install
| Cost | **FREE** |
export OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"
npm run start
```
#### NanoBot + FREE Qwen
```bash
pip install nanobot-ai
cat > ~/.nanobot/config.json << CONFIG
{
"providers": {
"qwen": {
"apiKey": "$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')",
"baseURL": "https://api.qwen.ai/v1"
}
},
"agents": { "defaults": { "model": "qwen/qwen3-coder-plus" } }
}
CONFIG
nanobot gateway
```
#### ZeroClaw + FREE Qwen
```bash
# Install ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Import Qwen OAuth
export OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
export OPENAI_MODEL="qwen3-coder-plus"
zeroclaw gateway
```
--- ---
# FEATURE 2: 25+ AI Providers ## 🤖 FEATURE 2: 25+ AI Providers
## FREE Tier ### Tier 1: FREE
| Provider | Free | Model | How to Get | | Provider | Free Tier | Model | Setup |
|----------|------|-------|------------| |----------|-----------|-------|-------|
| **Qwen OAuth** | ✅ 2K/day | Qwen3-Coder | `qwen && /auth` | | **Qwen OAuth** | ✅ 2,000/day | Qwen3-Coder | `qwen && /auth` |
## Major AI Labs ### Tier 2: Major AI Labs
| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
### Tier 3: Cloud Platforms
| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials |
### Tier 4: Aggregators & Gateways
| Provider | Models | Features | | Provider | Models | Features |
|----------|--------|----------| |----------|--------|----------|
| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF | | **OpenRouter** | 100+ models | Multi-provider gateway |
| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling | | **Together AI** | Open source | Fine-tuning, hosting |
| **Google AI** | Gemini 2.5, 3 Pro | Multimodal | | **DeepInfra** | Open source | Cost-effective |
| **xAI** | Grok | Real-time data | | **Vercel AI** | Multi-provider | Edge hosting |
| **Mistral** | Large, Codestral | Code-focused |
## Cloud Platforms ### Tier 5: Fast Inference
| Provider | Models | Use Case |
|----------|--------|----------|
| **Azure OpenAI** | GPT-5 Enterprise | Azure integration |
| **Google Vertex** | Claude, Gemini | GCP infrastructure |
| **Amazon Bedrock** | Nova, Claude, Llama | AWS integration |
## Fast Inference
| Provider | Speed | Models | | Provider | Speed | Models |
|----------|-------|--------| |----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral | | **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants | | **Cerebras** | Fastest | Llama 3 variants |
## Gateways (100+ Models) ### Tier 6: Specialized
| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ | Multi-provider gateway |
| **Together AI** | Open source | Fine-tuning |
| **Vercel AI** | Multi | Edge hosting |
## Local/Self-Hosted
| Provider | Use Case | | Provider | Use Case |
|----------|----------| |----------|----------|
| **Ollama** | Local models | | **Perplexity** | Web search integration |
| **LM Studio** | GUI local | | **Cohere** | Enterprise RAG |
| **vLLM** | High-performance | | **GitLab Duo** | CI/CD AI integration |
| **GitHub Copilot** | IDE integration |
### Tier 7: Local/Self-Hosted
| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
--- ---
# Multi-Provider Configuration ## 🎨 Customization Options
### 1. Model Selection
**Option A: Fetch from Provider**
```bash
# Use included script
./scripts/fetch-models.sh openrouter
./scripts/fetch-models.sh groq
./scripts/fetch-models.sh ollama
# Or manually
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $KEY" | jq '.data[].id'
```
**Option B: Custom Model Input**
```json
{
"customModels": {
"my-fine-tuned": {
"provider": "openai",
"modelId": "ft:gpt-4o:org:custom:suffix",
"displayName": "My Custom Model"
},
"local-llama": {
"provider": "ollama",
"modelId": "llama3.2:70b",
"displayName": "Local Llama 3.2 70B"
}
}
}
```
### 2. Security Hardening
```bash
# Environment variables (never hardcode keys)
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
# Restricted config permissions
chmod 600 ~/.config/claw/config.json
chmod 600 ~/.qwen/settings.json
# Systemd hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
```
### 3. Interactive Brainstorming
After installation, customize with brainstorming:
| Topic | Questions |
|-------|-----------|
| **Use Case** | Coding, research, productivity, automation? |
| **Model Selection** | Claude, GPT, Gemini, Qwen, local? |
| **Integrations** | Telegram, Discord, calendar, storage? |
| **Deployment** | Local, VPS, cloud? |
| **Agent Personality** | Tone, memory, proactivity? |
---
## 📦 Installation Guides
### Qwen Code (Native FREE OAuth)
```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth # Select Qwen OAuth
```
### OpenClaw
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```
### NanoBot
```bash
pip install nanobot-ai
nanobot onboard
nanobot gateway
```
### PicoClaw
```bash
wget https://github.com/sipeed/picoclaw/releases/latest/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
picoclaw gateway
```
### ZeroClaw
```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
zeroclaw gateway
```
---
## ⚙️ Configuration Examples
### Multi-Provider Setup
```json ```json
{ {
"providers": { "providers": {
"qwen": { "type": "oauth", "free": true, "limit": 2000 }, "qwen": {
"anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" }, "type": "oauth",
"openai": { "apiKey": "${OPENAI_API_KEY}" }, "free": true,
"google": { "apiKey": "${GOOGLE_API_KEY}" }, "daily_limit": 2000,
"groq": { "apiKey": "${GROQ_API_KEY}" }, "model": "qwen3-coder-plus"
"openrouter": { "apiKey": "${OPENROUTER_API_KEY}" }, },
"ollama": { "baseURL": "http://localhost:11434" } "anthropic": {
"apiKey": "${ANTHROPIC_API_KEY}",
"baseURL": "https://api.anthropic.com"
},
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseURL": "https://api.openai.com/v1"
},
"google": {
"apiKey": "${GOOGLE_API_KEY}"
},
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseURL": "https://openrouter.ai/api/v1"
},
"groq": {
"apiKey": "${GROQ_API_KEY}",
"baseURL": "https://api.groq.com/openai/v1"
},
"ollama": {
"baseURL": "http://localhost:11434/v1"
}
}, },
"agents": { "agents": {
"free": { "model": "qwen/qwen3-coder-plus" }, "free": {
"premium": { "model": "anthropic/claude-sonnet-4-5" }, "model": "qwen/qwen3-coder-plus"
"fast": { "model": "groq/llama-3.3-70b-versatile" }, },
"local": { "model": "ollama/llama3.2:70b" } "premium": {
"model": "anthropic/claude-sonnet-4-5"
},
"fast": {
"model": "groq/llama-3.3-70b-versatile"
},
"local": {
"model": "ollama/llama3.2:70b"
}
} }
} }
``` ```
--- ### Environment Variables
# Quick Setup Examples
## Option 1: FREE Only
```bash ```bash
# Get FREE Qwen OAuth # ~/.qwen/.env or ~/.config/claw/.env
npm install -g @qwen-code/qwen-code@latest
qwen && /auth
# Use with any platform # Qwen OAuth (FREE - from qwen && /auth)
source ~/.qwen/.env && openclaw OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
``` OPENAI_BASE_URL=https://api.qwen.ai/v1
OPENAI_MODEL=qwen3-coder-plus
## Option 2: With API Keys # Or use paid providers
```bash ANTHROPIC_API_KEY=sk-ant-xxx
# Configure providers OPENAI_API_KEY=sk-xxx
export ANTHROPIC_API_KEY="your-key" GOOGLE_API_KEY=xxx
export OPENAI_API_KEY="your-key" GROQ_API_KEY=gsk_xxx
export GOOGLE_API_KEY="your-key" OPENROUTER_API_KEY=sk-or-xxx
export GROQ_API_KEY="your-key" MISTRAL_API_KEY=xxx
XAI_API_KEY=xxx
COHERE_API_KEY=xxx
PERPLEXITY_API_KEY=xxx
CEREBRAS_API_KEY=xxx
TOGETHER_API_KEY=xxx
DEEPINFRA_API_KEY=xxx
# Or use OpenRouter for 100+ models # Cloud providers
export OPENROUTER_API_KEY="your-key" AZURE_OPENAI_API_KEY=xxx
export OPENAI_API_KEY="$OPENROUTER_API_KEY" AZURE_OPENAI_ENDPOINT=https://xxx.openai.azure.com/
export OPENAI_BASE_URL="https://openrouter.ai/api/v1" AWS_ACCESS_KEY_ID=xxx
``` AWS_SECRET_ACCESS_KEY=xxx
GOOGLE_CLOUD_PROJECT=my-project
## Option 3: Local Models GOOGLE_CLOUD_LOCATION=us-central1
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:70b
# Use with Claw platforms
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3.2:70b"
``` ```
--- ---
# Fetch Available Models ## 💬 Usage Examples
```bash
# Use included script
./scripts/fetch-models.sh all
# Or manually
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags
```
---
# Usage Examples
### Basic Usage
``` ```
"Setup OpenClaw with FREE Qwen OAuth" "Setup OpenClaw with FREE Qwen OAuth"
"Configure NanoBot with Anthropic and OpenAI" "Install NanoBot with all AI providers"
"Import Qwen OAuth to ZeroClaw" "Configure ZeroClaw with Groq for fast inference"
"Fetch available models from OpenRouter" ```
"Setup Claw with all 25+ providers"
"Add custom fine-tuned model" ### Advanced Usage
```
"Setup Claw with Anthropic, OpenAI, and FREE Qwen fallback"
"Fetch available models from OpenRouter and let me choose"
"Configure PicoClaw with my custom fine-tuned model"
"Import Qwen OAuth to use with OpenClaw"
"Setup Claw platform with security hardening"
```
### Provider-Specific
```
"Configure Claw with Anthropic Claude 4"
"Setup Claw with OpenAI GPT-5"
"Use Google Gemini 3 Pro with OpenClaw"
"Setup local Ollama models with Claw"
"Configure OpenRouter gateway for 100+ models"
```
---
## 📁 Files in This Skill
```
skills/claw-setup/
├── SKILL.md # Skill definition (this file's source)
├── README.md # This documentation
└── scripts/
├── import-qwen-oauth.sh # Import FREE Qwen OAuth to any platform
└── fetch-models.sh # Fetch models from all providers
```
---
## 🔧 Troubleshooting
### Qwen OAuth Token Not Found
```bash
# Re-authenticate
qwen && /auth # Select Qwen OAuth
# Check token location
ls ~/.qwen/
find ~/.qwen -name "*.json"
```
### Token Expired
```bash
# Tokens auto-refresh in Qwen Code
qwen -p "refresh"
# Re-export
source ~/.qwen/.env
```
### API Errors
```bash
# Verify token is valid
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
https://api.qwen.ai/v1/models
# Check rate limits
# FREE tier: 60 req/min, 2000/day
``` ```
--- ---