docs: Comprehensive documentation for 25+ providers + Qwen OAuth

Restructured documentation to highlight both key features:

FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
- 2,000 requests/day free tier
- Works with ALL Claw platforms
- Browser OAuth via qwen.ai
- Model: Qwen3-Coder

FEATURE 2: 25+ OpenCode-Compatible Providers
- Major AI Labs: Anthropic, OpenAI, Google, xAI, Mistral
- Cloud Platforms: Azure, AWS Bedrock, Google Vertex
- Fast Inference: Groq, Cerebras
- Gateways: OpenRouter (100+ models), Together AI
- Local: Ollama, LM Studio, vLLM

Provider Tiers:
1. FREE: Qwen OAuth
2. Major Labs: Anthropic, OpenAI, Google, xAI, Mistral
3. Cloud: Azure, Bedrock, Vertex
4. Fast: Groq, Cerebras
5. Gateways: OpenRouter, Together AI, Vercel
6. Specialized: Perplexity, Cohere, GitLab, GitHub
7. Local: Ollama, LM Studio, vLLM

Platforms with full support:
- Qwen Code (native OAuth)
- OpenClaw, NanoBot, PicoClaw, ZeroClaw (import OAuth)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude Code
2026-02-22 04:19:29 -05:00
parent 46ed77201c
commit 4299e9dce4
3 changed files with 468 additions and 446 deletions

---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "import qwen oauth", "use free qwen tier", "configure AI providers", "add openai provider", "AI agent setup", or mentions setting up AI platforms.
version: 1.3.0
---
# Claw Setup Skill
End-to-end professional setup of AI Agent platforms with **25+ OpenCode-compatible providers** and **FREE Qwen OAuth cross-platform import**.
## ⭐ Two Key Features
```
┌─────────────────────────────────────────────────────────────────┐
│                       CLAW SETUP FEATURES                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)             │
│  • FREE: 2,000 requests/day, 60 req/min                         │
│  • Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw  │
│  • Model: Qwen3-Coder (optimized for coding)                    │
│  • Auth: Browser OAuth via qwen.ai                              │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  • Major AI labs: Anthropic, OpenAI, Google, xAI, Mistral       │
│  • Cloud platforms: Azure, AWS Bedrock, Google Vertex           │
│  • Fast inference: Groq, Cerebras                               │
│  • Gateways: OpenRouter (100+ models), Together AI              │
│  • Local: Ollama, LM Studio, vLLM                               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Supported Platforms
| Platform | Language | Memory | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ✅ Native | ✅ | FREE coding |
| **OpenClaw** | TypeScript | >1GB | ✅ Import | ✅ | Full-featured |
| **NanoBot** | Python | ~100MB | ✅ Import | ✅ | Research |
| **PicoClaw** | Go | <10MB | ✅ Import | ✅ | Embedded |
| **ZeroClaw** | Rust | <5MB | ✅ Import | ✅ | Performance |
| **NanoClaw** | TypeScript | ~50MB | ✅ Import | ✅ | WhatsApp |
---
# FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
## Get FREE Qwen OAuth
### Install Qwen Code
```bash
# Install Qwen Code
npm install -g @qwen-code/qwen-code@latest
```
### Authenticate with FREE OAuth
```bash
qwen
# In the Qwen Code session:
/auth   # Select "Qwen OAuth" → browser login with your qwen.ai account
# FREE tier: 2,000 requests/day, 60 req/min
```
### Extract OAuth Token
```bash
# OAuth token is stored in:
ls -la ~/.qwen/
# View token file
cat ~/.qwen/settings.json
# Or find OAuth credentials
find ~/.qwen -name "*.json" -exec cat {} \;
```
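The token file name and JSON key can vary between Qwen Code versions, so rather than hard-coding one path, a small helper can try the likely key names. This is a sketch: the key names `.access_token`/`.token`/`.accessToken` are guesses covering common credential layouts, not a documented format.

```shell
# Pull the first usable access token out of a credentials directory.
# Key names are assumptions; adjust to whatever your ~/.qwen actually contains.
extract_token() {
  local f tok
  for f in "$1"/*.json; do
    tok=$(jq -r '.access_token // .token // .accessToken // empty' "$f" 2>/dev/null || true)
    if [ -n "$tok" ]; then
      echo "$tok"
      return 0
    fi
  done
  return 1  # no token found in any JSON file
}

# Usage: QWEN_TOKEN=$(extract_token ~/.qwen)
```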
## Import to Any Platform
```bash
# Extract the token (written by /auth)
export QWEN_TOKEN=$(jq -r '.access_token' ~/.qwen/oauth-token.json)
# Use with any OpenAI-compatible platform
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1" # Qwen API endpoint
export OPENAI_MODEL="qwen3-coder-plus"
```
### Method B: Use Alibaba Cloud DashScope (Alternative)
If you have an Alibaba Cloud API key (paid):
```bash
# For China users
export OPENAI_API_KEY="your-dashscope-api-key"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# For International users
export OPENAI_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
# For US users
export OPENAI_BASE_URL="https://dashscope-us.aliyuncs.com/compatible-mode/v1"
```
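Since only the base URL differs between these deployments, a tiny selector keeps the region choice in one place. A sketch: the region labels (`oauth`/`cn`/`intl`/`us`) are my own shorthand, while the URLs come from the exports above.

```shell
# Map a region label to the matching OpenAI-compatible base URL.
qwen_base_url() {
  case "$1" in
    oauth) echo "https://api.qwen.ai/v1" ;;
    cn)    echo "https://dashscope.aliyuncs.com/compatible-mode/v1" ;;
    intl)  echo "https://dashscope-intl.aliyuncs.com/compatible-mode/v1" ;;
    us)    echo "https://dashscope-us.aliyuncs.com/compatible-mode/v1" ;;
    *)     echo "unknown region: $1" >&2; return 1 ;;
  esac
}

# Usage: export OPENAI_BASE_URL=$(qwen_base_url intl)
```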
```bash
# Configure any platform via OpenAI-compatible env vars
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"

# Or persist in a .env file
cat > .env << ENVEOF
OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth-token.json)
OPENAI_BASE_URL=https://api.qwen.ai/v1
OPENAI_MODEL=qwen3-coder-plus
ENVEOF

# Use with any platform!
openclaw    # OpenClaw with FREE Qwen
nanobot     # NanoBot with FREE Qwen
picoclaw    # PicoClaw with FREE Qwen
zeroclaw    # ZeroClaw with FREE Qwen
```
---
# FEATURE 2: 25+ OpenCode-Compatible AI Providers
## Tier 1: FREE
| Provider | Free Tier | Model | Setup |
|----------|-----------|-------|-------|
| **Qwen OAuth** | 2,000/day | Qwen3-Coder | `qwen && /auth` |
## Tier 2: Major AI Labs
| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
## Tier 3: Cloud Platforms
| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration, custom endpoints |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infrastructure |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |
## Tier 4: Aggregators & Gateways
| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting, rate limiting |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning, hosting |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source | Cost-effective hosting |
## Tier 5: Fast Inference
| Provider | SDK Package | Speed | Models |
|----------|-------------|-------|--------|
| **Groq** | `@ai-sdk/groq` | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | `@ai-sdk/cerebras` | Fastest | Llama 3 variants |
## Tier 6: Specialized
| Provider | SDK Package | Use Case |
|----------|-------------|----------|
| **Perplexity** | `@ai-sdk/perplexity` | Web search integration |
| **Cohere** | `@ai-sdk/cohere` | Enterprise RAG |
| **GitLab Duo** | `@gitlab/gitlab-ai-provider` | CI/CD AI integration |
| **GitHub Copilot** | Custom | IDE integration |
## Tier 7: Local/Self-Hosted
| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
| **LocalAI** | localhost:8080 | OpenAI-compatible local |
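All four local servers expose an OpenAI-style `/v1/models` route in their compatibility modes (exact paths can vary by version), so one probe can tell you which are running. A sketch using only the ports from the table above:

```shell
# Return 0 if an OpenAI-compatible server answers at host:port.
# curl flags: -s silent, -f fail on HTTP >= 400, -m 2 two-second timeout.
probe_local() {
  curl -sf -m 2 "http://$1/v1/models" > /dev/null
}

for hp in localhost:11434 localhost:1234 localhost:8000 localhost:8080; do
  if probe_local "$hp"; then
    echo "UP   $hp"
  else
    echo "down $hp"
  fi
done
```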
---
# Multi-Provider Configuration
## Full Configuration Example
```json
{
  "providers": {
    "qwen_oauth": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000,
      "model": "qwen3-coder-plus"
    },
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "baseURL": "https://api.anthropic.com"
    },
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "baseURL": "https://api.openai.com/v1"
    },
    "google": {
      "apiKey": "${GOOGLE_API_KEY}",
      "baseURL": "https://generativelanguage.googleapis.com/v1"
    },
    "azure": {
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "baseURL": "${AZURE_OPENAI_ENDPOINT}"
    },
    "vertex": {
      "projectId": "${GOOGLE_CLOUD_PROJECT}",
      "location": "${GOOGLE_CLOUD_LOCATION}"
    },
    "bedrock": {
      "region": "us-east-1",
      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
    },
    "openrouter": {
      "apiKey": "${OPENROUTER_API_KEY}",
      "baseURL": "https://openrouter.ai/api/v1"
    },
    "xai": {
      "apiKey": "${XAI_API_KEY}",
      "baseURL": "https://api.x.ai/v1"
    },
    "mistral": {
      "apiKey": "${MISTRAL_API_KEY}",
      "baseURL": "https://api.mistral.ai/v1"
    },
    "groq": {
      "apiKey": "${GROQ_API_KEY}",
      "baseURL": "https://api.groq.com/openai/v1"
    },
    "cerebras": {
      "apiKey": "${CEREBRAS_API_KEY}",
      "baseURL": "https://api.cerebras.ai/v1"
    },
    "deepinfra": {
      "apiKey": "${DEEPINFRA_API_KEY}",
      "baseURL": "https://api.deepinfra.com/v1"
    },
    "cohere": {
      "apiKey": "${COHERE_API_KEY}",
      "baseURL": "https://api.cohere.ai/v1"
    },
    "together": {
      "apiKey": "${TOGETHER_API_KEY}",
      "baseURL": "https://api.together.xyz/v1"
    },
    "perplexity": {
      "apiKey": "${PERPLEXITY_API_KEY}",
      "baseURL": "https://api.perplexity.ai"
    },
    "ollama": {
      "baseURL": "http://localhost:11434/v1",
      "apiKey": "ollama"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-sonnet-4-5",
      "temperature": 0.7
    },
    "free": {
      "model": "qwen/qwen3-coder-plus"
    },
    "fast": {
      "model": "groq/llama-3.3-70b-versatile"
    },
    "local": {
      "model": "ollama/llama3.2:70b"
    }
  }
}
```
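Placeholders like `${ANTHROPIC_API_KEY}` only work if the variable is actually set when the platform expands them. A quick bash check catches missing ones before startup; this is a sketch, and the `config.json` path plus the `${VAR}` expansion convention are assumptions taken from the example above, not a platform CLI.

```shell
# List configured providers, and warn about ${VARS} unset in this shell.
check_providers() {
  local file=$1 var
  jq -r '.providers | keys[]' "$file"
  # Find every ${UPPER_CASE} placeholder; test it via bash indirect expansion.
  for var in $(grep -o '\${[A-Z_]*}' "$file" | tr -d '${}' | sort -u || true); do
    [ -n "${!var:-}" ] || echo "unset: $var" >&2
  done
}

# Usage: check_providers config.json
```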
## Fetch Available Models
```bash
# OpenRouter - All 100+ models
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'
# OpenAI - GPT models
curl -s https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
# Groq - Fast inference models
curl -s https://api.groq.com/openai/v1/models \
-H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'
# Ollama - Local models
curl -s http://localhost:11434/api/tags | jq '.models[].name'
# Anthropic (static list)
# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022
# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"
```
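The per-provider commands above are all the same call with a different host, so a generic helper covers any OpenAI-compatible endpoint:

```shell
# list_models BASE_URL [API_KEY] — query any OpenAI-compatible /models endpoint.
list_models() {
  local base=${1%/} key=${2:-}   # strip a trailing slash from the base URL
  if [ -n "$key" ]; then
    curl -s -H "Authorization: Bearer $key" "$base/models"
  else
    curl -s "$base/models"
  fi | jq -r '.data[].id'
}

# Usage:
#   list_models https://api.groq.com/openai/v1 "$GROQ_API_KEY"
#   list_models http://localhost:11434/v1
```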
---
# Platform Installation
## Qwen Code (Native FREE OAuth)
```bash
npm install -g @qwen-code/qwen-code@latest
qwen   # then run /auth inside the session and select "Qwen OAuth"
```
## OpenClaw
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```
## NanoBot
```bash
pip install nanobot-ai && nanobot onboard
```
## PicoClaw
```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```
## ZeroClaw
```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
---
# Automation Script: Import Qwen OAuth
```bash
#!/bin/bash
# import-qwen-oauth.sh - Import Qwen OAuth to any platform
set -e
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ QWEN OAUTH CROSS-PLATFORM IMPORTER ║"
echo "╚═══════════════════════════════════════════════════════════════╝"
# Check if Qwen Code is authenticated
if [ ! -d ~/.qwen ]; then
echo "❌ Qwen Code not authenticated. Run: qwen && /auth"
exit 1
fi
# Find and extract token
TOKEN_FILE=$(find ~/.qwen -name "*.json" -type f | head -1)
if [ -z "$TOKEN_FILE" ]; then
echo "❌ No OAuth token found in ~/.qwen/"
exit 1
fi
# Extract access token
QWEN_TOKEN=$(jq -r '.access_token // .token // .accessToken' "$TOKEN_FILE" 2>/dev/null)
if [ -z "$QWEN_TOKEN" ] || [ "$QWEN_TOKEN" = "null" ]; then
echo "❌ Could not extract token from $TOKEN_FILE"
echo " Try re-authenticating: qwen && /auth"
exit 1
fi
echo "✅ Found Qwen OAuth token"
echo ""
# Export for current session
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# Also save to .env for persistence
cat > ~/.qwen/.env << ENVEOF
OPENAI_API_KEY=$QWEN_TOKEN
OPENAI_BASE_URL=https://api.qwen.ai/v1
OPENAI_MODEL=qwen3-coder-plus
ENVEOF
echo "✅ Environment variables set:"
echo " OPENAI_API_KEY=***${QWEN_TOKEN: -8}"
echo " OPENAI_BASE_URL=https://api.qwen.ai/v1"
echo " OPENAI_MODEL=qwen3-coder-plus"
echo ""
echo "✅ Saved to ~/.qwen/.env for persistence"
echo ""
echo "Usage for other platforms:"
echo " source ~/.qwen/.env && openclaw"
echo " source ~/.qwen/.env && nanobot gateway"
echo " source ~/.qwen/.env && picoclaw gateway"
echo " source ~/.qwen/.env && zeroclaw gateway"
```
## Qwen API Endpoints
| Endpoint | Region | Type | Use Case |
|----------|--------|------|----------|
| `https://api.qwen.ai/v1` | Global | OAuth | FREE tier with OAuth token |
| `https://dashscope.aliyuncs.com/compatible-mode/v1` | China | API Key | Alibaba Cloud paid |
| `https://dashscope-intl.aliyuncs.com/compatible-mode/v1` | International | API Key | Alibaba Cloud paid |
| `https://dashscope-us.aliyuncs.com/compatible-mode/v1` | US | API Key | Alibaba Cloud paid |
| `https://api-inference.modelscope.cn/v1` | China | API Key | ModelScope (free tier) |
## Qwen Models Available
| Model | Context | Best For |
|-------|---------|----------|
| `qwen3-coder-plus` | 128K | General coding (recommended) |
| `qwen3-coder-next` | 128K | Latest features |
| `qwen3.5-plus` | 128K | General purpose |
| `Qwen/Qwen3-Coder-480B-A35B-Instruct` | 128K | ModelScope |
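Beyond listing models, here is what a minimal chat call looks like against the FREE endpoint; building the body with `jq -n` guarantees valid JSON. The endpoint and model come from the tables above, while the OpenAI-style response shape (`.choices[0].message.content`) is an assumption.

```shell
# Build a chat-completions request body safely with jq.
body=$(jq -n --arg model "qwen3-coder-plus" --arg prompt "Write a haiku about code." \
  '{model: $model, messages: [{role: "user", content: $prompt}]}')

# Then POST it (requires a valid token in OPENAI_API_KEY):
# curl -s https://api.qwen.ai/v1/chat/completions \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$body" | jq -r '.choices[0].message.content'
```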
# Usage Examples
```
"Setup OpenClaw with FREE Qwen OAuth"
"Configure NanoBot with all AI providers"
"Import Qwen OAuth to ZeroClaw"
"Fetch available models from OpenRouter"
"Setup Claw with Anthropic and OpenAI providers"
"Add custom model to my Claw setup"
```
## Troubleshooting
### Token Not Found
```bash
# Re-authenticate with Qwen Code
qwen
/auth   # Select "Qwen OAuth"

# Check token location
ls -la ~/.qwen/
find ~/.qwen -name "*.json"
```
### Token Expired
```bash
# Tokens auto-refresh in Qwen Code
# Just run any command in qwen to refresh
qwen -p "hello"
# Then re-export
source ~/.qwen/.env
```
### API Errors
```bash
# Verify the token is valid
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.qwen.ai/v1/models

# Check rate limits (FREE tier: 60 req/min, 2,000/day)
```
---
# Automation Scripts
See the `scripts/` directory:
- `import-qwen-oauth.sh` - Import FREE Qwen OAuth to any platform
- `fetch-models.sh` - Fetch available models from all providers