Claude Code 4299e9dce4 docs: Comprehensive documentation for 25+ providers + Qwen OAuth
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 04:19:29 -05:00


---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "import qwen oauth", "use free qwen tier", "configure AI providers", "add openai provider", "AI agent setup", or mentions setting up AI platforms.
version: 1.3.0
---

# Claw Setup Skill

End-to-end setup of AI agent platforms, covering 25+ OpenCode-compatible providers and FREE cross-platform import of Qwen OAuth credentials.

## Two Key Features

```
┌─────────────────────────────────────────────────────────────────┐
│                    CLAW SETUP FEATURES                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)             │
│  ───────────────────────────────────────────────────            │
│  • FREE: 2,000 requests/day, 60 req/min                         │
│  • Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw            │
│  • Model: Qwen3-Coder (optimized for coding)                    │
│  • Auth: Browser OAuth via qwen.ai                              │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  ─────────────────────────────────────────────────              │
│  • All major AI labs: Anthropic, OpenAI, Google, xAI, Mistral   │
│  • Cloud platforms: Azure, AWS Bedrock, Google Vertex           │
│  • Fast inference: Groq, Cerebras                               │
│  • Gateways: OpenRouter (100+ models), Together AI              │
│  • Local: Ollama, LM Studio, vLLM                               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Supported Platforms

| Platform | Language | Memory | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|------------|---------------|----------|
| Qwen Code | TypeScript | ~200MB | Native | ✅ | FREE coding |
| OpenClaw | TypeScript | >1GB | Import | ✅ | Full-featured |
| NanoBot | Python | ~100MB | Import | ✅ | Research |
| PicoClaw | Go | <10MB | Import | ✅ | Embedded |
| ZeroClaw | Rust | <5MB | Import | ✅ | Performance |
| NanoClaw | TypeScript | ~50MB | Import | ✅ | WhatsApp |

## FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)

### Get FREE Qwen OAuth

```bash
# Install Qwen Code
npm install -g @qwen-code/qwen-code@latest

# Authenticate (FREE)
qwen
/auth  # Select "Qwen OAuth" → Browser login with qwen.ai

# FREE tier: 2,000 requests/day, 60 req/min
```

### Import to Any Platform

```bash
# Extract the access token from the Qwen Code credential file
export QWEN_TOKEN=$(jq -r '.access_token' ~/.qwen/oauth-token.json)

# Expose it through the OpenAI-compatible variables every platform reads
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"

# Use with any platform!
openclaw      # OpenClaw with FREE Qwen
nanobot       # NanoBot with FREE Qwen
picoclaw      # PicoClaw with FREE Qwen
zeroclaw      # ZeroClaw with FREE Qwen
```
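The export sequence above can be wrapped in a small helper that fails fast when the credential file is missing. A minimal sketch, assuming the token-file layout shown above; `use_qwen` and the `QWEN_TOKEN_FILE` override are hypothetical names, not part of any platform:

```bash
# use_qwen: export Qwen OAuth credentials as OpenAI-compatible env vars.
# Hypothetical helper; assumes the ~/.qwen/oauth-token.json layout above.
use_qwen() {
  token_file="${QWEN_TOKEN_FILE:-$HOME/.qwen/oauth-token.json}"
  if [ ! -f "$token_file" ]; then
    echo "error: $token_file not found; run 'qwen' and /auth first" >&2
    return 1
  fi
  OPENAI_API_KEY="$(jq -r '.access_token' "$token_file")" || return 1
  export OPENAI_API_KEY
  export OPENAI_BASE_URL="https://api.qwen.ai/v1"
  export OPENAI_MODEL="qwen3-coder-plus"
}
```

Source it once per shell and every OpenAI-compatible platform started from that shell picks up the free tier.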

## FEATURE 2: 25+ OpenCode-Compatible AI Providers

### Tier 1: FREE

| Provider | Free Tier | Model | Setup |
|----------|-----------|-------|-------|
| Qwen OAuth | 2,000 requests/day | Qwen3-Coder | `qwen && /auth` |

### Tier 2: Major AI Labs

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| Anthropic | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| OpenAI | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| Google AI | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| xAI | `@ai-sdk/xai` | Grok models | Real-time data integration |
| Mistral | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |

### Tier 3: Cloud Platforms

| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| Azure OpenAI | `@ai-sdk/azure` | GPT-5 | Enterprise Azure integration, custom endpoints |
| Google Vertex | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infrastructure |
| Amazon Bedrock | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |

### Tier 4: Aggregators & Gateways

| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| OpenRouter | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| Vercel AI | `@ai-sdk/vercel` | Multi-provider | Edge hosting, rate limiting |
| Together AI | `@ai-sdk/togetherai` | Open source models | Fine-tuning, hosting |
| DeepInfra | `@ai-sdk/deepinfra` | Open source | Cost-effective hosting |

### Tier 5: Fast Inference

| Provider | SDK Package | Speed | Models |
|----------|-------------|-------|--------|
| Groq | `@ai-sdk/groq` | Ultra-fast | Llama 3, Mixtral |
| Cerebras | `@ai-sdk/cerebras` | Fastest | Llama 3 variants |

### Tier 6: Specialized

| Provider | SDK Package | Use Case |
|----------|-------------|----------|
| Perplexity | `@ai-sdk/perplexity` | Web search integration |
| Cohere | `@ai-sdk/cohere` | Enterprise RAG |
| GitLab Duo | `@gitlab/gitlab-ai-provider` | CI/CD AI integration |
| GitHub Copilot | Custom | IDE integration |

### Tier 7: Local/Self-Hosted

| Provider | Base URL | Use Case |
|----------|----------|----------|
| Ollama | `localhost:11434` | Local model hosting |
| LM Studio | `localhost:1234` | GUI local models |
| vLLM | `localhost:8000` | High-performance serving |
| LocalAI | `localhost:8080` | OpenAI-compatible local |
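All four local servers speak the OpenAI-compatible protocol on a known port, so a quick probe shows which backends are actually running before you point a platform at them. A minimal sketch; `local_up` is a hypothetical helper, and the `/v1/models` route is assumed from the OpenAI-compatible contract:

```bash
# local_up: succeed if an OpenAI-compatible server answers on the given port.
local_up() {
  curl -s -o /dev/null --max-time 2 "http://localhost:$1/v1/models"
}

if local_up 11434; then echo "Ollama is up";    else echo "Ollama not running"; fi
if local_up 1234;  then echo "LM Studio is up"; else echo "LM Studio not running"; fi
if local_up 8000;  then echo "vLLM is up";      else echo "vLLM not running"; fi
if local_up 8080;  then echo "LocalAI is up";   else echo "LocalAI not running"; fi
```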

## Multi-Provider Configuration

### Full Configuration Example

```json
{
  "providers": {
    "qwen_oauth": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000,
      "model": "qwen3-coder-plus"
    },
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "baseURL": "https://api.anthropic.com"
    },
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "baseURL": "https://api.openai.com/v1"
    },
    "google": {
      "apiKey": "${GOOGLE_API_KEY}",
      "baseURL": "https://generativelanguage.googleapis.com/v1"
    },
    "azure": {
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "baseURL": "${AZURE_OPENAI_ENDPOINT}"
    },
    "vertex": {
      "projectId": "${GOOGLE_CLOUD_PROJECT}",
      "location": "${GOOGLE_CLOUD_LOCATION}"
    },
    "bedrock": {
      "region": "us-east-1",
      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
    },
    "openrouter": {
      "apiKey": "${OPENROUTER_API_KEY}",
      "baseURL": "https://openrouter.ai/api/v1"
    },
    "xai": {
      "apiKey": "${XAI_API_KEY}",
      "baseURL": "https://api.x.ai/v1"
    },
    "mistral": {
      "apiKey": "${MISTRAL_API_KEY}",
      "baseURL": "https://api.mistral.ai/v1"
    },
    "groq": {
      "apiKey": "${GROQ_API_KEY}",
      "baseURL": "https://api.groq.com/openai/v1"
    },
    "cerebras": {
      "apiKey": "${CEREBRAS_API_KEY}",
      "baseURL": "https://api.cerebras.ai/v1"
    },
    "deepinfra": {
      "apiKey": "${DEEPINFRA_API_KEY}",
      "baseURL": "https://api.deepinfra.com/v1"
    },
    "cohere": {
      "apiKey": "${COHERE_API_KEY}",
      "baseURL": "https://api.cohere.ai/v1"
    },
    "together": {
      "apiKey": "${TOGETHER_API_KEY}",
      "baseURL": "https://api.together.xyz/v1"
    },
    "perplexity": {
      "apiKey": "${PERPLEXITY_API_KEY}",
      "baseURL": "https://api.perplexity.ai"
    },
    "ollama": {
      "baseURL": "http://localhost:11434/v1",
      "apiKey": "ollama"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-sonnet-4-5",
      "temperature": 0.7
    },
    "free": {
      "model": "qwen/qwen3-coder-plus"
    },
    "fast": {
      "model": "groq/llama-3.3-70b-versatile"
    },
    "local": {
      "model": "ollama/llama3.3:70b"
    }
  }
}
```
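Since every secret in the example is a `${VAR}` placeholder, a preflight check can confirm the file parses as JSON and that each referenced variable is actually exported before launch. A minimal sketch, assuming the config is saved as a standalone JSON file; `check_config` is a hypothetical helper, not a platform command:

```bash
# check_config: verify a provider config parses as JSON and report any
# ${VAR} placeholders that are not set in the current environment.
check_config() {
  config="$1"
  python3 -m json.tool "$config" > /dev/null || {
    echo "invalid JSON: $config" >&2
    return 1
  }
  # Collect unique ${VAR} placeholders, then test each against the environment.
  for var in $(grep -o '\${[A-Z_][A-Z0-9_]*}' "$config" | tr -d '${}' | sort -u); do
    eval "val=\${$var:-}"
    [ -n "$val" ] || echo "warning: $var is not set"
  done
}
```

Run it as `check_config claw.json` (the file name is an assumption) before starting any platform.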

### Fetch Available Models

```bash
# OpenRouter - All 100+ models
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'

# OpenAI - GPT models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'

# Groq - Fast inference models
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'

# Ollama - Local models
curl -s http://localhost:11434/api/tags | jq '.models[].name'

# Anthropic (static list)
# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022

# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"
```
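The per-provider curl calls above follow one pattern for every OpenAI-compatible endpoint, so they can be collapsed into a single helper that skips providers whose key is not exported. A minimal sketch; `fetch_models` is a hypothetical name:

```bash
# fetch_models: list model IDs from any OpenAI-compatible /models endpoint.
# Usage: fetch_models <base_url> <API_KEY_ENV_NAME>
fetch_models() {
  base_url="$1"
  key_var="$2"
  # Look up the key indirectly so the caller passes the env var's *name*.
  eval "key=\${$key_var:-}"
  if [ -z "$key" ]; then
    echo "skipping $base_url: $key_var not set" >&2
    return 1
  fi
  curl -s "$base_url/models" -H "Authorization: Bearer $key" | jq -r '.data[].id'
}

# Examples (each needs its key exported first):
# fetch_models https://openrouter.ai/api/v1   OPENROUTER_API_KEY
# fetch_models https://api.groq.com/openai/v1 GROQ_API_KEY
```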

## Platform Installation

### Qwen Code (Native FREE OAuth)

```bash
npm install -g @qwen-code/qwen-code@latest
qwen   # then run /auth inside the session and select "Qwen OAuth"
```

### OpenClaw

```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```

### NanoBot

```bash
pip install nanobot-ai && nanobot onboard
```

### PicoClaw

```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```

### ZeroClaw

```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
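After installing, a quick probe confirms which platform binaries actually landed on `PATH`. A minimal sketch; `claw_doctor` is a hypothetical helper name, and it deliberately uses only `command -v` rather than assuming any `--version` flag exists:

```bash
# claw_doctor: report which Claw platform binaries are on PATH.
claw_doctor() {
  for bin in qwen openclaw nanobot picoclaw zeroclaw; do
    if command -v "$bin" > /dev/null 2>&1; then
      echo "$bin: installed"
    else
      echo "$bin: not found"
    fi
  done
}
```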

## Usage Examples

- "Setup OpenClaw with FREE Qwen OAuth"
- "Configure NanoBot with all AI providers"
- "Import Qwen OAuth to ZeroClaw"
- "Fetch available models from OpenRouter"
- "Setup Claw with Anthropic and OpenAI providers"
- "Add custom model to my Claw setup"

## Automation Scripts

See the `scripts/` directory:

- `import-qwen-oauth.sh` - Import FREE Qwen OAuth to any platform
- `fetch-models.sh` - Fetch available models from all providers