| name | description | version |
|---|---|---|
| claw-setup | Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "import qwen oauth", "use free qwen tier", "configure AI providers", "add openai provider", "AI agent setup", or mentions setting up AI platforms. | 2.0.0 |
# Claw Setup Skill

End-to-end professional setup of AI Agent platforms with 25+ OpenCode-compatible providers and FREE Qwen OAuth cross-platform import.

## Two Key Features
```
┌─────────────────────────────────────────────────────────────────┐
│                      CLAW SETUP FEATURES                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)             │
│  ───────────────────────────────────────────────────            │
│  • FREE: 2,000 requests/day, 60 req/min                         │
│  • Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw            │
│  • Model: coder-model (qwen3-coder-plus)                        │
│  • Auth: Browser OAuth via qwen.ai                              │
│  • Token refresh: Automatic (ALL platforms)                     │
│  • DEFAULT: Recommended as primary provider                     │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  ─────────────────────────────────────────────────              │
│  • All major AI labs: Anthropic, OpenAI, Google, xAI, Mistral   │
│  • Cloud platforms: Azure, AWS Bedrock, Google Vertex           │
│  • Fast inference: Groq, Cerebras                               │
│  • Gateways: OpenRouter (100+ models), Together AI              │
│  • Local: Ollama, LM Studio, vLLM                               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Supported Platforms
| Platform | Language | Memory | Qwen OAuth | All Providers | Best For |
|---|---|---|---|---|---|
| Qwen Code | TypeScript | ~200MB | ✅ Native | ✅ | FREE coding |
| ZeroClaw | Rust | <5MB | ✅ Native | ✅ | Max performance |
| OpenClaw | TypeScript | >1GB | ✅ Full | ✅ | Full-featured |
| NanoBot | Python | ~100MB | ✅ Full | ✅ | Research |
| PicoClaw | Go | <10MB | ✅ Full | ✅ | Embedded |
| NanoClaw | TypeScript | ~50MB | ✅ Full | ✅ | |
All platforms have an IDENTICAL Qwen OAuth experience:

- FREE tier: 2,000 requests/day
- Auto token refresh
- Same credentials file: `~/.qwen/oauth_creds.json`
- Same API: `dashscope.aliyuncs.com`
## FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)

### Get FREE Qwen OAuth

```bash
# Install Qwen Code CLI
npm install -g @qwen-code/qwen-code@latest

# Authenticate (FREE - opens browser)
qwen --auth-type qwen-oauth -p "test"

# FREE: 2,000 requests/day, 60 req/min
# Credentials saved to: ~/.qwen/oauth_creds.json
```
### Credentials File Structure

```json
{
  "access_token": "your-access-token",
  "refresh_token": "your-refresh-token",
  "token_type": "Bearer",
  "resource_url": "portal.qwen.ai",
  "expiry_date": 1771774796531
}
```
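Note that `expiry_date` is a Unix timestamp in milliseconds. A quick way to check how long the saved token remains valid (a sketch assuming `jq` is installed; the helper name `qwen_token_ttl` is ours, not part of any platform):

```shell
# Print seconds until the saved Qwen token expires (negative = already expired).
# Defaults to the standard credentials path; pass another path as $1 to override.
qwen_token_ttl() {
  local creds="${1:-$HOME/.qwen/oauth_creds.json}"
  local expiry_ms now_ms
  expiry_ms=$(jq -r '.expiry_date' "$creds")   # milliseconds since epoch
  now_ms=$(( $(date +%s) * 1000 ))             # date gives seconds; convert to ms
  echo $(( (expiry_ms - now_ms) / 1000 ))
}
```

Running `qwen_token_ttl` prints a positive number while the token is live.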
### API Endpoints
| Endpoint | URL |
|---|---|
| Auth (Browser) | https://portal.qwen.ai |
| Token Refresh | https://chat.qwen.ai/api/v1/oauth2/token |
| API Base | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Chat Completions | /chat/completions |
### Available Models (FREE Tier)

| Model | Best For |
|---|---|
| coder-model (qwen3-coder-plus) | Coding (DEFAULT) |
| qwen3-coder-flash | Fast coding |
| qwen-max | Complex tasks |
### Set as Default Provider

```bash
# ZeroClaw - Native qwen-oauth as default
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "coder-model"  # qwen3-coder-plus
EOF

# Other platforms - Set environment variables
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="coder-model"  # qwen3-coder-plus
```
### Import Methods

#### ZeroClaw (Native Provider - Auto Token Refresh)

ZeroClaw has a built-in qwen-oauth provider that handles token refresh automatically:

```bash
# Configure ZeroClaw to use the native qwen-oauth provider
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
default_temperature = 0.7
EOF

# ZeroClaw automatically:
# - Reads ~/.qwen/oauth_creds.json
# - Refreshes expired tokens using refresh_token
# - Uses the correct DashScope API endpoint
zeroclaw agent -m "Hello!"
```
#### Other Platforms (OpenAI-Compatible Import)

For OpenClaw, NanoBot, PicoClaw, and NanoClaw, use the OpenAI-compatible endpoint:

```bash
# Extract token
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"

# Use with any platform
openclaw   # OpenClaw with FREE Qwen
nanobot    # NanoBot with FREE Qwen
picoclaw   # PicoClaw with FREE Qwen
nanoclaw   # NanoClaw with FREE Qwen
```
### Auto Token Refresh (ALL Platforms)

Use the included refresh script to refresh expired tokens automatically:

```bash
# Check token status
./scripts/qwen-token-refresh.sh --status

# Refresh if expired
./scripts/qwen-token-refresh.sh

# Run as background daemon (checks every 5 min)
./scripts/qwen-token-refresh.sh --daemon

# Install as systemd service (auto-start on boot)
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
```
#### How Auto-Refresh Works

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                          AUTO TOKEN REFRESH FLOW                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  1. Check expiry_date in ~/.qwen/oauth_creds.json                           │
│  2. If expired (< 5 min buffer):                                            │
│     POST https://chat.qwen.ai/api/v1/oauth2/token                           │
│     Body: grant_type=refresh_token&refresh_token=xxx                        │
│  3. Response: { access_token, refresh_token, expires_in }                   │
│  4. Update ~/.qwen/oauth_creds.json with new tokens                         │
│  5. Update ~/.qwen/.env with new OPENAI_API_KEY                             │
│  6. Platforms using source ~/.qwen/.env get fresh token                     │
│                                                                             │
│  Systemd service: Runs every 5 minutes, refreshes when needed               │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
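Steps 2-4 of this flow can be sketched with `curl` and `jq`. Endpoint and field names are taken from the diagram above; treat this as an illustrative sketch, not the shipped refresh script:

```shell
CREDS_FILE="$HOME/.qwen/oauth_creds.json"

# Step 2: exchange the refresh_token via a form-encoded POST
RESPONSE=$(curl -s -X POST "https://chat.qwen.ai/api/v1/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=refresh_token&refresh_token=$(jq -r '.refresh_token' "$CREDS_FILE")")

# Steps 3-4: merge the new tokens into the credentials file,
# converting expires_in (seconds from now) to an absolute expiry_date (ms)
NOW_MS=$(( $(date +%s) * 1000 ))
jq --argjson now "$NOW_MS" --argjson resp "$RESPONSE" \
   '. + {access_token: $resp.access_token,
         refresh_token: $resp.refresh_token,
         expiry_date: ($now + $resp.expires_in * 1000)}' \
   "$CREDS_FILE" > "$CREDS_FILE.tmp" && mv "$CREDS_FILE.tmp" "$CREDS_FILE"
```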
## FEATURE 2: 25+ OpenCode-Compatible AI Providers

### Tier 1: FREE Tier

| Provider | Free Tier | Model | Setup |
|---|---|---|---|
| Qwen OAuth | 2,000/day | Qwen3-Coder | `qwen --auth-type qwen-oauth` |
### Tier 2: Major AI Labs

| Provider | SDK Package | Key Models | Features |
|---|---|---|---|
| Anthropic | @ai-sdk/anthropic | Claude 3.5/4/Opus | Extended thinking, PDF support |
| OpenAI | @ai-sdk/openai | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| Google AI | @ai-sdk/google | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| xAI | @ai-sdk/xai | Grok models | Real-time data integration |
| Mistral | @ai-sdk/mistral | Mistral Large, Codestral | Code-focused models |
### Tier 3: Cloud Platforms

| Provider | SDK Package | Models | Features |
|---|---|---|---|
| Azure OpenAI | @ai-sdk/azure | GPT-5 Enterprise | Azure integration, custom endpoints |
| Google Vertex | @ai-sdk/google-vertex | Claude, Gemini on GCP | Anthropic on Google infrastructure |
| Amazon Bedrock | @ai-sdk/amazon-bedrock | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |
### Tier 4: Aggregators & Gateways

| Provider | SDK Package | Models | Features |
|---|---|---|---|
| OpenRouter | @openrouter/ai-sdk-provider | 100+ models | Multi-provider gateway |
| Vercel AI | @ai-sdk/vercel | Multi-provider | Edge hosting, rate limiting |
| Together AI | @ai-sdk/togetherai | Open source models | Fine-tuning, hosting |
| DeepInfra | @ai-sdk/deepinfra | Open source | Cost-effective hosting |
### Tier 5: Fast Inference

| Provider | SDK Package | Speed | Models |
|---|---|---|---|
| Groq | @ai-sdk/groq | Ultra-fast | Llama 3, Mixtral |
| Cerebras | @ai-sdk/cerebras | Fastest | Llama 3 variants |
### Tier 6: Specialized

| Provider | SDK Package | Use Case |
|---|---|---|
| Perplexity | @ai-sdk/perplexity | Web search integration |
| Cohere | @ai-sdk/cohere | Enterprise RAG |
| GitLab Duo | @gitlab/gitlab-ai-provider | CI/CD AI integration |
| GitHub Copilot | Custom | IDE integration |
### Tier 7: Local/Self-Hosted
| Provider | Base URL | Use Case |
|---|---|---|
| Ollama | localhost:11434 | Local model hosting |
| LM Studio | localhost:1234 | GUI local models |
| vLLM | localhost:8000 | High-performance serving |
| LocalAI | localhost:8080 | OpenAI-compatible local |
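These local servers expose the same OpenAI-compatible surface, so they slot into the same environment variables used throughout this skill. For example, pointing a platform at a locally running Ollama (the model name here is just an example of a locally pulled model):

```shell
# Ollama serves an OpenAI-compatible API under /v1
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # Ollama ignores the key, but clients expect one to be set
export OPENAI_MODEL="llama3"     # example: any model you have pulled locally
```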
## Quick Import Script

```bash
#!/bin/bash
# Usage: source import-qwen-oauth.sh [platform]
CREDS_FILE="$HOME/.qwen/oauth_creds.json"
PLATFORM="${1:-zeroclaw}"
QWEN_TOKEN=$(jq -r '.access_token' "$CREDS_FILE")

case "$PLATFORM" in
  zeroclaw)
    # Native provider - just update the config
    sed -i 's/^default_provider = .*/default_provider = "qwen-oauth"/' ~/.zeroclaw/config.toml
    sed -i 's/^default_model = .*/default_model = "qwen3-coder-plus"/' ~/.zeroclaw/config.toml
    ;;
  *)
    # OpenAI-compatible endpoint for all other platforms
    export OPENAI_API_KEY="$QWEN_TOKEN"
    export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
    ;;
esac
```
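As the usage comment notes, the script must be sourced, not executed: the `export` lines in the `*)` branch only affect the current shell when sourced. A standalone demonstration of why:

```shell
# Exports from an executed script live in a child process and vanish;
# exports from a sourced script persist in the current shell.
printf 'export DEMO_VAR=hello\n' > /tmp/demo-env.sh
chmod +x /tmp/demo-env.sh

/tmp/demo-env.sh                 # executed: runs in a subshell
echo "${DEMO_VAR:-unset}"        # prints "unset"

source /tmp/demo-env.sh          # sourced: runs in this shell
echo "$DEMO_VAR"                 # prints "hello"
```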
## Platform Installation

### Qwen Code (Native FREE OAuth)

```bash
npm install -g @qwen-code/qwen-code@latest
qwen --auth-type qwen-oauth -p "test"
```

### OpenClaw

```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```

### NanoBot

```bash
pip install nanobot-ai && nanobot onboard
```

### PicoClaw

```bash
# Note: GitHub latest-release asset URLs use the /releases/latest/download/ path
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```

### ZeroClaw

```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
## Usage Examples

- "Setup OpenClaw with FREE Qwen OAuth"
- "Configure NanoBot with all AI providers"
- "Import Qwen OAuth to ZeroClaw"
- "Fetch available models from OpenRouter"
- "Setup Claw with Anthropic and OpenAI providers"
- "Add custom model to my Claw setup"
## Troubleshooting

### Token Expired

```bash
# ZeroClaw: automatic refresh (no action needed)
# Other platforms: re-authenticate and re-export
qwen --auth-type qwen-oauth -p "test"
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
```

### API Test

```bash
QWEN_TOKEN=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
curl -X POST "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions" \
  -H "Authorization: Bearer $QWEN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "Hello"}]}'
```