# 🦞 Claw Setup

**Cross-Platform AI Agent Deployment with 25+ Providers + FREE Qwen OAuth**

Use ANY AI provider with ANY Claw platform, including the FREE Qwen tier!

✨ Autonomously developed by GLM 5 Advanced Coding Model

⚠️ **Disclaimer:** Test in a test environment before using this on any live system.
## ⭐ Two Powerful Features

```
┌─────────────────────────────────────────────────────────────────┐
│                      CLAW SETUP FEATURES                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: FREE Qwen OAuth Cross-Platform Import               │
│  ───────────────────────────────────────────────               │
│  ✅ FREE: 2,000 requests/day, 60 req/min                        │
│  ✅ Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw           │
│  ✅ Model: Qwen3-Coder (coding-optimized)                       │
│  ✅ No API key needed - browser OAuth                           │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  ─────────────────────────────────────────────                 │
│  ✅ All major AI labs: Anthropic, OpenAI, Google, xAI           │
│  ✅ Cloud platforms: Azure, AWS Bedrock, Google Vertex          │
│  ✅ Fast inference: Groq (ultra-fast), Cerebras (fastest)       │
│  ✅ Gateways: OpenRouter (100+ models), Together AI             │
│  ✅ Local: Ollama, LM Studio, vLLM                              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Platforms Supported
| Platform | Qwen OAuth | All Providers | Memory | Best For |
|---|---|---|---|---|
| Qwen Code | ✅ Native | ✅ | ~200MB | FREE coding |
| OpenClaw | ✅ Import | ✅ | >1GB | Full-featured |
| NanoBot | ✅ Import | ✅ | ~100MB | Research |
| PicoClaw | ✅ Import | ✅ | <10MB | Embedded |
| ZeroClaw | ✅ Import | ✅ | <5MB | Performance |
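The table above can be read as a decision rule on memory budget. A minimal sketch of that rule as a shell helper (the exact thresholds are assumptions derived from the table's approximate footprints, not documented cutoffs):

```bash
# suggest_platform MB - suggest a Claw platform for a given memory budget in MB,
# following the footprints in the table above (thresholds are assumptions).
suggest_platform() {
  local mb=$1
  if   [ "$mb" -lt 5 ];   then echo "ZeroClaw"    # <5MB
  elif [ "$mb" -lt 10 ];  then echo "PicoClaw"    # <10MB
  elif [ "$mb" -le 100 ]; then echo "NanoBot"     # ~100MB
  elif [ "$mb" -le 200 ]; then echo "Qwen Code"   # ~200MB
  else                         echo "OpenClaw"    # >1GB, full-featured
  fi
}
```

For example, `suggest_platform 8` prints `PicoClaw`, while a multi-gigabyte budget lands on `OpenClaw`.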
## FEATURE 1: FREE Qwen OAuth Import

### Quick Start (FREE)
```bash
# 1. Install Qwen Code
npm install -g @qwen-code/qwen-code@latest

# 2. Get FREE OAuth (2,000 req/day)
qwen
# inside the qwen CLI, run /auth, select "Qwen OAuth", and log in via browser

# 3. Import to ANY platform
source ~/.qwen/.env && openclaw
source ~/.qwen/.env && nanobot gateway
source ~/.qwen/.env && picoclaw gateway
source ~/.qwen/.env && zeroclaw gateway
```
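If the OAuth import silently fails, the platform will start without credentials. A small guard that checks the expected variables are non-empty before launching can catch this early; the specific variable names are assumptions (the contents of `~/.qwen/.env` are not spelled out here), so substitute whatever names that file actually exports:

```bash
# require_env VAR... - fail (with a message per variable) if any named
# environment variable is unset or empty.
require_env() {
  local var val missing=0
  for var in "$@"; do
    eval "val=\${$var:-}"          # indirect lookup of the variable's value
    if [ -z "$val" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return $missing
}

# Hypothetical usage (variable names assumed):
# source ~/.qwen/.env && require_env OPENAI_API_KEY OPENAI_BASE_URL && openclaw
```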
### Free Tier Limits
| Metric | Limit |
|---|---|
| Requests/day | 2,000 |
| Requests/minute | 60 |
| Cost | FREE |
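At 60 requests/minute, batch jobs should pace themselves at roughly one call per second to avoid tripping the limit. A minimal client-side sketch (a simple fixed-delay wrapper, not an official rate-limit mechanism):

```bash
# throttle SECONDS CMD [ARGS...] - run a command, then pause, so a loop of
# throttled calls stays under the free tier's 60 req/min cap (use ~1s).
throttle() {
  local interval=$1; shift
  "$@"
  sleep "$interval"
}

# Example loop, keeping well under 60 req/min:
# for f in src/*.py; do throttle 1 qwen "review $f"; done
```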
## FEATURE 2: 25+ AI Providers

### FREE Tier
| Provider | Free | Model | How to Get |
|---|---|---|---|
| Qwen OAuth | ✅ 2K/day | Qwen3-Coder | Run `qwen`, then `/auth` |
### Major AI Labs
| Provider | Models | Features |
|---|---|---|
| Anthropic | Claude 3.5/4/Opus | Extended thinking, PDF |
| OpenAI | GPT-4o, o1, o3, GPT-5 | Function calling |
| Google AI | Gemini 2.5, 3 Pro | Multimodal |
| xAI | Grok | Real-time data |
| Mistral | Large, Codestral | Code-focused |
### Cloud Platforms
| Provider | Models | Use Case |
|---|---|---|
| Azure OpenAI | GPT-5 Enterprise | Azure integration |
| Google Vertex | Claude, Gemini | GCP infrastructure |
| Amazon Bedrock | Nova, Claude, Llama | AWS integration |
### Fast Inference
| Provider | Speed | Models |
|---|---|---|
| Groq | Ultra-fast | Llama 3, Mixtral |
| Cerebras | Fastest | Llama 3 variants |
### Gateways (100+ Models)
| Provider | Models | Features |
|---|---|---|
| OpenRouter | 100+ | Multi-provider gateway |
| Together AI | Open source | Fine-tuning |
| Vercel AI | Multi | Edge hosting |
### Local/Self-Hosted
| Provider | Use Case |
|---|---|
| Ollama | Local models |
| LM Studio | GUI local |
| vLLM | High-performance |
## Multi-Provider Configuration

```json
{
  "providers": {
    "qwen": { "type": "oauth", "free": true, "limit": 2000 },
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "free": { "model": "qwen/qwen3-coder-plus" },
    "premium": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
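The `agents` block above maps tier names to `provider/model` IDs. For scripting, that same mapping can be sketched as a shell helper (a hand-written mirror of the config, not an API of any Claw platform):

```bash
# pick_model TIER - echo the provider/model ID for a tier name,
# mirroring the "agents" block of the configuration above.
pick_model() {
  case "$1" in
    free)    echo "qwen/qwen3-coder-plus" ;;
    premium) echo "anthropic/claude-sonnet-4-5" ;;
    fast)    echo "groq/llama-3.3-70b-versatile" ;;
    local)   echo "ollama/llama3.2:70b" ;;
    *)       echo "unknown tier: $1" >&2; return 1 ;;
  esac
}
```

For example, `pick_model fast` prints `groq/llama-3.3-70b-versatile`, which a wrapper script could pass to whichever platform it launches.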
## Quick Setup Examples

### Option 1: FREE Only

```bash
# Get FREE Qwen OAuth
npm install -g @qwen-code/qwen-code@latest
qwen   # then run /auth inside the CLI

# Use with any platform
source ~/.qwen/.env && openclaw
```
### Option 2: With API Keys

```bash
# Configure providers
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export GOOGLE_API_KEY="your-key"
export GROQ_API_KEY="your-key"

# Or use OpenRouter for 100+ models
export OPENROUTER_API_KEY="your-key"
export OPENAI_API_KEY="$OPENROUTER_API_KEY"
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
```
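The OpenRouter passthrough above is a two-line switch (alias the key, point the base URL at OpenRouter). Wrapping it in a function makes it easy to toggle per shell session:

```bash
# use_openrouter - route OpenAI-compatible clients through OpenRouter by
# aliasing the key and base URL, as shown above. Requires OPENROUTER_API_KEY.
use_openrouter() {
  export OPENAI_API_KEY="$OPENROUTER_API_KEY"
  export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
}

# Usage:
# export OPENROUTER_API_KEY="your-key"
# use_openrouter && openclaw
```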
### Option 3: Local Models

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:70b

# Use with Claw platforms
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3.2:70b"
```
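Ollama's OpenAI-compatible endpoint lives under `/v1` on port 11434 by default. A tiny helper to build that base URL for non-default hosts or ports (the host/port parameters are illustrative; only the default `localhost:11434` is from this document):

```bash
# ollama_base_url [HOST] [PORT] - build the OpenAI-compatible base URL
# for an Ollama server; defaults match the configuration shown above.
ollama_base_url() {
  local host=${1:-localhost} port=${2:-11434}
  echo "http://${host}:${port}/v1"
}

# Example: point at Ollama on another machine
# export OPENAI_BASE_URL="$(ollama_base_url gpu-box 11434)"
```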
### Fetch Available Models

```bash
# Use included script
./scripts/fetch-models.sh all

# Or manually
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags
```
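With 100+ models behind a gateway, the fetched list usually needs narrowing. A small filter over a newline-separated ID list (such as the `jq '.data[].id'` output above) is enough for interactive use; this helper is illustrative, not part of the included script:

```bash
# filter_models PATTERN - keep only model IDs (one per line on stdin)
# whose name matches PATTERN, case-insensitively.
filter_models() {
  grep -i -- "$1"
}

# Example: list only Qwen coder models from OpenRouter
# curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" \
#   | jq -r '.data[].id' | filter_models qwen
```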
## Usage Examples

- "Setup OpenClaw with FREE Qwen OAuth"
- "Configure NanoBot with Anthropic and OpenAI"
- "Import Qwen OAuth to ZeroClaw"
- "Fetch available models from OpenRouter"
- "Setup Claw with all 25+ providers"
- "Add custom fine-tuned model"