🦞 Claw Setup
The Ultimate AI Agent Deployment Skill
Set up ANY Claw platform with 25+ AI providers, FREE model providers like OpenRouter and Qwen via OAuth, and full customization
✨ Autonomously developed by GLM 5 Advanced Coding Model
⚠️ Disclaimer: Test in a test environment prior to using on any live system
Table of Contents
- Features Overview
- Supported Platforms
- FREE Qwen OAuth Import
- 25+ AI Providers
- Customization Options
- Installation Guides
- Configuration Examples
- Usage Examples
🎯 Features Overview
┌─────────────────────────────────────────────────────────────────┐
│ CLAW SETUP FEATURES │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ✅ FEATURE 1: FREE Qwen OAuth Cross-Platform Import │
│ • 2,000 requests/day FREE │
│ • Works with ALL Claw platforms │
│ • Qwen3-Coder model (coding-optimized) │
│ • Browser OAuth - no API key needed │
│ │
│ ✅ FEATURE 2: 25+ OpenCode-Compatible AI Providers │
│ • All major AI labs │
│ • Cloud platforms (Azure, AWS, GCP) │
│ • Fast inference (Groq, Cerebras) │
│ • Gateways (OpenRouter: 100+ models) │
│ • Local models (Ollama, LM Studio) │
│ │
│ ✅ FEATURE 3: Full Customization │
│ • Model selection (fetch or custom) │
│ • Security hardening │
│ • Interactive brainstorming │
│ • Multi-provider configuration │
│ │
└─────────────────────────────────────────────────────────────────┘
🦀 Supported Platforms
| Platform | Language | Memory | Startup | Qwen OAuth | All Providers | Best For |
|---|---|---|---|---|---|---|
| Qwen Code | TypeScript | ~200MB | ~5s | ✅ Native | ✅ | FREE coding |
| ZeroClaw | Rust | <5MB | <10ms | ✅ Native | ✅ | Maximum performance |
| OpenClaw | TypeScript | >1GB | ~500s | ✅ Full | ✅ | Full-featured, 1700+ plugins |
| NanoBot | Python | ~100MB | ~30s | ✅ Full | ✅ | Research, Python devs |
| PicoClaw | Go | <10MB | ~1s | ✅ Full | ✅ | Embedded, $10 hardware |
| NanoClaw | TypeScript | ~50MB | ~5s | ✅ Full | ✅ | WhatsApp integration |
Platform Selection Guide
┌─────────────────┐
│ Need AI Agent? │
└────────┬────────┘
│
▼
┌───────────────────────┐
│ Want FREE tier? │
└───────────┬───────────┘
┌─────┴─────┐
YES NO
│ │
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ ⭐ Qwen Code │ │ Memory limited? │
│ OAuth FREE │ └────────┬─────────┘
│ 2000/day │ ┌─────┴─────┐
└──────────────┘ YES NO
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ZeroClaw/ │ │OpenClaw │
│PicoClaw │ │(Full) │
└──────────┘ └──────────┘
⭐ FEATURE 1: FREE Qwen OAuth Import
What You Get
| Metric | Value |
|---|---|
| Requests/day | 2,000 |
| Requests/minute | 60 |
| Cost | FREE |
| Model | coder-model (qwen3-coder-plus) |
| Auth | Browser OAuth via qwen.ai |
| Default | ✅ Recommended default provider |
Quick Start
# Step 1: Install Qwen Code CLI
npm install -g @qwen-code/qwen-code@latest
# Step 2: Get FREE OAuth (opens browser for login)
qwen --auth-type qwen-oauth -p "test"
# Credentials saved to: ~/.qwen/oauth_creds.json
# Step 3: Import to ANY platform
# ZeroClaw (native provider - auto token refresh)
mkdir -p ~/.zeroclaw
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
EOF
# Other platforms (OpenAI-compatible)
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
Platform-Specific Import
OpenClaw + FREE Qwen (OpenAI-Compatible)
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install
# Extract token from Qwen OAuth credentials
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
npm run start
NanoBot + FREE Qwen (OpenAI-Compatible)
pip install nanobot-ai
# Extract token and configure
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
nanobot gateway
PicoClaw + FREE Qwen (OpenAI-Compatible)
# Extract token and set environment
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
picoclaw gateway
NanoClaw + FREE Qwen (OpenAI-Compatible)
# Extract token and set environment
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
nanoclaw
ZeroClaw + FREE Qwen (NATIVE Provider)
# Install ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# ZeroClaw has NATIVE qwen-oauth provider support!
# First, get OAuth credentials via Qwen Code:
qwen   # then run /auth inside the CLI and select Qwen OAuth
       # → creates ~/.qwen/oauth_creds.json
# Configure ZeroClaw to use native qwen-oauth provider
mkdir -p ~/.zeroclaw
cat > ~/.zeroclaw/config.toml << CONFIG
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
default_temperature = 0.7
CONFIG
# ZeroClaw reads ~/.qwen/oauth_creds.json directly with auto token refresh!
zeroclaw gateway
Qwen OAuth Integration - SAME Experience on ALL Platforms
┌─────────────────────────────────────────────────────────────────────────────┐
│ QWEN OAUTH - UNIFIED EXPERIENCE ACROSS ALL PLATFORMS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ALL PLATFORMS NOW HAVE: │
│ ──────────────────────── │
│ ✅ FREE: 2,000 requests/day, 60 req/min │
│ ✅ Model: coder-model (qwen3-coder-plus) │
│ ✅ Auto Token Refresh (via refresh_token) │
│ ✅ Same credentials file: ~/.qwen/oauth_creds.json │
│ ✅ Same API endpoint: dashscope.aliyuncs.com/compatible-mode/v1 │
│ │
│ IMPLEMENTATION: │
│ ┌─────────────┬──────────────────────────────────────────────────┐ │
│ │ Platform │ How It Works │ │
│ ├─────────────┼──────────────────────────────────────────────────┤ │
│ │ ZeroClaw │ Native "qwen-oauth" provider (built-in) │ │
│ │ OpenClaw │ OpenAI-compatible + auto-refresh script │ │
│ │ NanoBot │ OpenAI-compatible + auto-refresh script │ │
│ │ PicoClaw │ OpenAI-compatible + auto-refresh script │ │
│ │ NanoClaw │ OpenAI-compatible + auto-refresh script │ │
│ └─────────────┴──────────────────────────────────────────────────┘ │
│ │
│ RESULT: User experience is IDENTICAL across all platforms! │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
OAuth Credentials Structure
Qwen Code stores OAuth credentials in ~/.qwen/oauth_creds.json:
{
"access_token": "pIFwnvSC3fQPG0i5waDbozvUNEWE4w9x...",
"refresh_token": "9Fm_Ob-c8_WAT_3QvgGwVGfgoNfAdP...",
"token_type": "Bearer",
"resource_url": "portal.qwen.ai",
"expiry_date": 1771774796531
}
| Field | Purpose |
|---|---|
| `access_token` | Used for API authentication |
| `refresh_token` | Used to get a new access_token when expired |
| `expiry_date` | Unix timestamp (milliseconds) when the access_token expires |
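The `expiry_date` field makes it easy to check freshness before making a request. A minimal sketch, assuming the millisecond-timestamp layout shown above (`qwen_token_valid` is a hypothetical helper name, not part of any platform):

```shell
# Hypothetical helper: succeeds when the cached access token is still
# more than 5 minutes away from expiry (expiry_date is in milliseconds).
qwen_token_valid() {
  creds="${1:-$HOME/.qwen/oauth_creds.json}"
  expiry_ms=$(jq -r '.expiry_date' "$creds") || return 2
  now_ms=$(( $(date +%s) * 1000 ))
  [ "$now_ms" -lt $(( expiry_ms - 300000 )) ]
}
```

Usage: `qwen_token_valid || ./scripts/qwen-token-refresh.sh`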
Auto Token Refresh for ALL Platforms
# Check token status
./scripts/qwen-token-refresh.sh --status
# Refresh if expired (5 min buffer)
./scripts/qwen-token-refresh.sh
# Run as background daemon
./scripts/qwen-token-refresh.sh --daemon
# Install as systemd service (auto-start)
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
The refresh script:
- Checks token expiry every 5 minutes
- Refreshes automatically when < 5 min remaining
- Updates `~/.qwen/oauth_creds.json` and `~/.qwen/.env`
- Works for ALL platforms (OpenClaw, NanoBot, PicoClaw, NanoClaw)
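The update step such a script performs can be sketched as a jq merge: overlay the token endpoint's JSON response onto the cached file so `access_token` and `expiry_date` change while untouched fields survive (`merge_token_response` is an illustrative name, not the actual script's internals):

```shell
# Sketch: merge a token-refresh response into the cached credentials
# file, keeping any fields the response does not mention.
merge_token_response() {
  creds="$1" response="$2"
  jq -s '.[0] * .[1]' "$creds" "$response" > "${creds}.tmp" &&
    mv "${creds}.tmp" "$creds"
}
```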
API Endpoints
| Endpoint | URL |
|---|---|
| Auth (Browser) | https://portal.qwen.ai |
| Token Refresh | https://chat.qwen.ai/api/v1/oauth2/token |
| API Base | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Chat Completions | /chat/completions |
Available Models (FREE Tier)
| Model | Best For |
|---|---|
| `qwen3-coder-plus` | Coding (recommended) |
| `qwen3-coder-flash` | Fast coding |
| `qwen-max` | Complex tasks |
🤖 FEATURE 2: 25+ AI Providers
Tier 1: FREE
| Provider | Free Tier | Model | Setup |
|---|---|---|---|
| Qwen OAuth | ✅ 2,000/day | Qwen3-Coder | `qwen`, then `/auth` |
Tier 2: Major AI Labs
| Provider | SDK Package | Key Models | Features |
|---|---|---|---|
| Anthropic | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| OpenAI | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| Google AI | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| xAI | `@ai-sdk/xai` | Grok models | Real-time data integration |
| Mistral | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
Tier 3: Cloud Platforms
| Provider | SDK Package | Models | Features |
|---|---|---|---|
| Azure OpenAI | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| Google Vertex | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google |
| Amazon Bedrock | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials |
Tier 4: Aggregators & Gateways
| Provider | Models | Features |
|---|---|---|
| OpenRouter | 100+ models | Multi-provider gateway |
| Together AI | Open source | Fine-tuning, hosting |
| DeepInfra | Open source | Cost-effective |
| Vercel AI | Multi-provider | Edge hosting |
Tier 5: Fast Inference
| Provider | Speed | Models |
|---|---|---|
| Groq | Ultra-fast | Llama 3, Mixtral |
| Cerebras | Fastest | Llama 3 variants |
Tier 6: Specialized
| Provider | Use Case |
|---|---|
| Perplexity | Web search integration |
| Cohere | Enterprise RAG |
| GitLab Duo | CI/CD AI integration |
| GitHub Copilot | IDE integration |
Tier 7: Local/Self-Hosted
| Provider | Base URL | Use Case |
|---|---|---|
| Ollama | localhost:11434 | Local model hosting |
| LM Studio | localhost:1234 | GUI local models |
| vLLM | localhost:8000 | High-performance serving |
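All three expose OpenAI-compatible endpoints, so the same three environment variables used for Qwen above can point a platform at a local server instead. A sketch for Ollama (the key is a dummy value, since local servers don't validate it, and the model name is whatever you have pulled locally):

```shell
# Point any OpenAI-compatible Claw platform at a local Ollama server.
export OPENAI_API_KEY="ollama"
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3.2"
```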
🎨 Customization Options
1. Model Selection
Option A: Fetch from Provider
# Use included script
./scripts/fetch-models.sh openrouter
./scripts/fetch-models.sh groq
./scripts/fetch-models.sh ollama
# Or manually
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $KEY" | jq '.data[].id'
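To narrow the OpenRouter catalogue to free models, a jq filter over the same response works. This assumes the `/models` payload carries a `pricing` object with string prices; check the raw response if the field shape differs:

```shell
# Filter helper: keep only model IDs whose prompt price is "0".
# Pipe the /models response into it, e.g.:
#   curl -s https://openrouter.ai/api/v1/models | free_models
free_models() {
  jq -r '.data[] | select(.pricing.prompt == "0") | .id'
}
```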
Option B: Custom Model Input
{
"customModels": {
"my-fine-tuned": {
"provider": "openai",
"modelId": "ft:gpt-4o:org:custom:suffix",
"displayName": "My Custom Model"
},
"local-llama": {
"provider": "ollama",
"modelId": "llama3.2:70b",
"displayName": "Local Llama 3.2 70B"
}
}
}
2. Security Hardening
# Environment variables (never hardcode keys)
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
# Restricted config permissions
chmod 600 ~/.config/claw/config.json
chmod 600 ~/.qwen/settings.json
# Systemd hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
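Those directives belong in the `[Service]` section of a unit file. A sketch of how they might sit in a user service for the refresh daemon (unit name and paths are illustrative; some sandboxing options require a recent systemd when used in user units):

```ini
# ~/.config/systemd/user/qwen-token-refresh.service (illustrative)
[Unit]
Description=Qwen OAuth token auto-refresh

[Service]
ExecStart=%h/scripts/qwen-token-refresh.sh --daemon
Restart=on-failure
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict

[Install]
WantedBy=default.target
```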
3. Interactive Brainstorming
After installation, customize with brainstorming:
| Topic | Questions |
|---|---|
| Use Case | Coding, research, productivity, automation? |
| Model Selection | Claude, GPT, Gemini, Qwen, local? |
| Integrations | Telegram, Discord, calendar, storage? |
| Deployment | Local, VPS, cloud? |
| Agent Personality | Tone, memory, proactivity? |
📦 Installation Guides
Qwen Code (Native FREE OAuth)
npm install -g @qwen-code/qwen-code@latest
qwen
/auth # Select Qwen OAuth
OpenClaw
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
NanoBot
pip install nanobot-ai
nanobot onboard
nanobot gateway
PicoClaw
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
picoclaw gateway
ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
zeroclaw gateway
⚙️ Configuration Examples
Multi-Provider Setup
{
"providers": {
"qwen": {
"type": "oauth",
"free": true,
"daily_limit": 2000,
"model": "qwen3-coder-plus"
},
"anthropic": {
"apiKey": "${ANTHROPIC_API_KEY}",
"baseURL": "https://api.anthropic.com"
},
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseURL": "https://api.openai.com/v1"
},
"google": {
"apiKey": "${GOOGLE_API_KEY}"
},
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseURL": "https://openrouter.ai/api/v1"
},
"groq": {
"apiKey": "${GROQ_API_KEY}",
"baseURL": "https://api.groq.com/openai/v1"
},
"ollama": {
"baseURL": "http://localhost:11434/v1"
}
},
"agents": {
"free": {
"model": "qwen/qwen3-coder-plus"
},
"premium": {
"model": "anthropic/claude-sonnet-4-5"
},
"fast": {
"model": "groq/llama-3.3-70b-versatile"
},
"local": {
"model": "ollama/llama3.2:70b"
}
}
}
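Before a platform loads a config like this, it is worth validating it. A small sketch (the path and the `.agents.free.model` key mirror the example above; adjust to your layout):

```shell
# Validate: file parses as JSON and the "free" agent declares a model.
check_claw_config() {
  cfg="${1:-$HOME/.config/claw/config.json}"
  jq empty "$cfg" && jq -e '.agents.free.model' "$cfg" > /dev/null
}
```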
Environment Variables
# ~/.qwen/.env or ~/.config/claw/.env
# Qwen OAuth (FREE - from qwen --auth-type qwen-oauth)
# Credentials stored in: ~/.qwen/oauth_creds.json
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# Or use paid providers
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
GOOGLE_API_KEY=xxx
GROQ_API_KEY=gsk_xxx
OPENROUTER_API_KEY=sk-or-xxx
MISTRAL_API_KEY=xxx
XAI_API_KEY=xxx
COHERE_API_KEY=xxx
PERPLEXITY_API_KEY=xxx
CEREBRAS_API_KEY=xxx
TOGETHER_API_KEY=xxx
DEEPINFRA_API_KEY=xxx
# Cloud providers
AZURE_OPENAI_API_KEY=xxx
AZURE_OPENAI_ENDPOINT=https://xxx.openai.azure.com/
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=xxx
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_LOCATION=us-central1
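Lines like `ANTHROPIC_API_KEY=sk-ant-xxx` only take effect once exported. One common way to load the whole file into the current shell is `set -a`, which marks every variable defined while it is active for export (`load_env` is an illustrative helper name):

```shell
# Source a .env file and export everything it defines.
load_env() {
  set -a
  . "${1:-$HOME/.config/claw/.env}"
  set +a
}
```

Usage: `load_env ~/.qwen/.env`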
💬 Usage Examples
Basic Usage
"Set up OpenClaw with FREE Qwen OAuth"
"Install NanoBot with all AI providers"
"Configure ZeroClaw with Groq for fast inference"
Advanced Usage
"Set up Claw with Anthropic, OpenAI, and FREE Qwen fallback"
"Fetch available models from OpenRouter and let me choose"
"Configure PicoClaw with my custom fine-tuned model"
"Import Qwen OAuth to use with OpenClaw"
"Set up a Claw platform with security hardening"
Provider-Specific
"Configure Claw with Anthropic Claude 4"
"Set up Claw with OpenAI GPT-5"
"Use Google Gemini 3 Pro with OpenClaw"
"Set up local Ollama models with Claw"
"Configure OpenRouter gateway for 100+ models"
📁 Files in This Skill
skills/claw-setup/
├── SKILL.md # Skill definition (this file's source)
├── README.md # This documentation
└── scripts/
├── import-qwen-oauth.sh # Import FREE Qwen OAuth to any platform
├── qwen-token-refresh.sh # Auto-refresh tokens (daemon/systemd)
└── fetch-models.sh # Fetch models from all providers
🔧 Troubleshooting
Qwen OAuth Token Not Found
# Re-authenticate
qwen   # then run /auth inside the CLI and select Qwen OAuth
# Check token location
ls ~/.qwen/
find ~/.qwen -name "*.json"
Token Expired
# Option 1: Use auto-refresh script
./scripts/qwen-token-refresh.sh
# Option 2: Manual re-auth
qwen --auth-type qwen-oauth -p "test"
source ~/.qwen/.env
# Option 3: Install systemd service for auto-refresh
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
API Errors
# Verify token is valid
QWEN_TOKEN=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
curl -X POST "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions" \
-H "Authorization: Bearer $QWEN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "Hello"}]}'
# Check rate limits
# FREE tier: 60 req/min, 2000/day
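The completion response from the curl check above is standard OpenAI-format JSON, so a one-line jq filter pulls out just the assistant reply, falling back to the error message when the API returned an error object instead:

```shell
# Extract the assistant text from a /chat/completions response;
# fall back to the error message, then to a fixed marker string.
extract_reply() {
  jq -r '.choices[0].message.content // .error.message // "unexpected response"'
}
```

Usage: append `| extract_reply` to the curl command above.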