# 📦 Claw Setup
### The Ultimate AI Agent Deployment Skill
**Setup ANY Claw platform with 25+ AI providers + FREE Qwen OAuth + Full Customization**
---
✨ Autonomously developed by GLM 5 Advanced Coding Model
⚠️ **Disclaimer:** Test in a non-production environment before using on any live system
---
## Table of Contents
1. [Features Overview](#-features-overview)
2. [Supported Platforms](#-supported-platforms)
3. [FREE Qwen OAuth Import](#-feature-1-free-qwen-oauth-import)
4. [25+ AI Providers](#-feature-2-25-ai-providers)
5. [Customization Options](#-customization-options)
6. [Installation Guides](#-installation-guides)
7. [Configuration Examples](#-configuration-examples)
8. [Usage Examples](#-usage-examples)
---
## 🎯 Features Overview
```
CLAW SETUP FEATURES

FEATURE 1: FREE Qwen OAuth Cross-Platform Import
  • 2,000 requests/day FREE
  • Works with ALL Claw platforms
  • Qwen3-Coder model (coding-optimized)
  • Browser OAuth - no API key needed

FEATURE 2: 25+ OpenCode-Compatible AI Providers
  • All major AI labs
  • Cloud platforms (Azure, AWS, GCP)
  • Fast inference (Groq, Cerebras)
  • Gateways (OpenRouter: 100+ models)
  • Local models (Ollama, LM Studio)

FEATURE 3: Full Customization
  • Model selection (fetch or custom)
  • Security hardening
  • Interactive brainstorming
  • Multi-provider configuration
```
---
## 📦 Supported Platforms
| Platform | Language | Memory | Startup | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|---------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ~5s | ✅ Native | ✅ | FREE coding |
| **OpenClaw** | TypeScript | >1GB | ~500s | ✅ Import | ✅ | Full-featured, 1700+ plugins |
| **NanoBot** | Python | ~100MB | ~30s | ✅ Import | ✅ | Research, Python devs |
| **PicoClaw** | Go | <10MB | ~1s | ✅ Import | ✅ | Embedded, $10 hardware |
| **ZeroClaw** | Rust | <5MB | <10ms | ✅ Native | ✅ | Maximum performance |
| **NanoClaw** | TypeScript | ~50MB | ~5s | ✅ Import | ✅ | WhatsApp integration |
### Platform Selection Guide
```
 Need an AI agent?
         │
         ▼
  Want a FREE tier?
    ┌──────┴──────┐
   YES            NO
    │              │
    ▼              ▼
 Qwen Code    Memory limited?
 OAuth FREE    ┌──────┴──────┐
 2,000/day    YES            NO
               │              │
               ▼              ▼
          ZeroClaw /      OpenClaw
          PicoClaw        (Full)
```
---
## ⚡ FEATURE 1: FREE Qwen OAuth Import
### What You Get
| Metric | Value |
|--------|-------|
| **Requests/day** | 2,000 |
| **Requests/minute** | 60 |
| **Cost** | **FREE** |
| **Model** | Qwen3-Coder (coding-optimized) |
| **Auth** | Browser OAuth via qwen.ai |
### Quick Start
```bash
# Step 1: Install Qwen Code CLI
npm install -g @qwen-code/qwen-code@latest
# Step 2: Get FREE OAuth (opens browser for login)
qwen --auth-type qwen-oauth -p "test"
# Credentials saved to: ~/.qwen/oauth_creds.json
# Step 3: Import to ANY platform
# ZeroClaw (native provider - auto token refresh)
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
EOF
# Other platforms (OpenAI-compatible)
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
```
### Platform-Specific Import
#### OpenClaw + FREE Qwen (OpenAI-Compatible)
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install
# Extract token from Qwen OAuth credentials
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
npm run start
```
#### NanoBot + FREE Qwen (OpenAI-Compatible)
```bash
pip install nanobot-ai
# Extract token and configure
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
nanobot gateway
```
#### PicoClaw + FREE Qwen (OpenAI-Compatible)
```bash
# Extract token and set environment
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
picoclaw gateway
```
#### NanoClaw + FREE Qwen (OpenAI-Compatible)
```bash
# Extract token and set environment
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
nanoclaw
```
#### ZeroClaw + FREE Qwen (NATIVE Provider)
```bash
# Install ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# ZeroClaw has NATIVE qwen-oauth provider support!
# First, get OAuth credentials via Qwen Code:
qwen   # then run /auth inside the CLI and select Qwen OAuth -> creates ~/.qwen/oauth_creds.json
# Configure ZeroClaw to use native qwen-oauth provider
cat > ~/.zeroclaw/config.toml << CONFIG
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
default_temperature = 0.7
CONFIG
# ZeroClaw reads ~/.qwen/oauth_creds.json directly with auto token refresh!
zeroclaw gateway
```
### Understanding Qwen OAuth Import Methods
**Method 1: Native provider (ZeroClaw only)**
- ZeroClaw has a built-in `qwen-oauth` provider
- Reads `~/.qwen/oauth_creds.json` directly
- Automatic token refresh using `refresh_token`
- Tracks `expiry_date` and refreshes when needed
- Configuration: `default_provider = "qwen-oauth"`

**Method 2: OpenAI-compatible (all other platforms)**
- Treats the Qwen API as an OpenAI-compatible endpoint
- Extract `access_token` and use it as `OPENAI_API_KEY`
- Set `OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1`
- Manual re-export needed when the token expires

**Comparison:**

| Feature | Native | OpenAI-Compatible |
|---------|--------|-------------------|
| Token refresh | ✅ Automatic | ❌ Manual |
| Token expiry | ✅ Handled | ⚠️ Re-export needed |
| Platforms | ZeroClaw only | All others |
| Credential source | `~/.qwen/oauth_creds.json` | Env vars |
### OAuth Credentials Structure
Qwen Code stores OAuth credentials in `~/.qwen/oauth_creds.json`:
```json
{
"access_token": "pIFwnvSC3fQPG0i5waDbozvUNEWE4w9x...",
"refresh_token": "9Fm_Ob-c8_WAT_3QvgGwVGfgoNfAdP...",
"token_type": "Bearer",
"resource_url": "portal.qwen.ai",
"expiry_date": 1771774796531
}
```
| Field | Purpose |
|-------|---------|
| `access_token` | Used for API authentication |
| `refresh_token` | Used to get new access_token when expired |
| `expiry_date` | Unix timestamp when access_token expires |
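Since `expiry_date` is a millisecond Unix timestamp, a small shell helper (hypothetical, not part of any platform) can check whether the saved token is still valid before re-exporting it:

```shell
# Returns success (0) when the access token in a Qwen-style creds file
# has already expired; expiry_date is milliseconds since the epoch.
creds_expired() {
  local file="$1" expiry now_ms
  expiry=$(jq -r '.expiry_date' "$file")
  now_ms=$(( $(date +%s) * 1000 ))
  [ "$now_ms" -ge "$expiry" ]
}

# Example: only re-export while the token is still valid
# if ! creds_expired ~/.qwen/oauth_creds.json; then
#   export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
# fi
```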
### API Endpoints
| Endpoint | URL |
|----------|-----|
| **API Base** | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| **Chat Completions** | `https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions` |
| **Token Refresh** | `https://chat.qwen.ai/api/v1/oauth2/token` |
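Qwen Code refreshes tokens automatically; for the OpenAI-compatible path you could in principle call the refresh endpoint above yourself. The exact parameters Qwen expects (for example a client id) are an assumption here — this sketch only shows the standard OAuth2 request shape:

```shell
# Build a standard OAuth2 refresh_token request body from a creds file.
refresh_body() {
  printf 'grant_type=refresh_token&refresh_token=%s' \
    "$(jq -r '.refresh_token' "$1")"
}

# Then POST it to the token endpoint (untested sketch):
# curl -s -X POST "https://chat.qwen.ai/api/v1/oauth2/token" \
#   -H "Content-Type: application/x-www-form-urlencoded" \
#   -d "$(refresh_body ~/.qwen/oauth_creds.json)"
```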
### Available Models (FREE Tier)
| Model | Best For |
|-------|----------|
| `qwen3-coder-plus` | Coding (recommended) |
| `qwen3-coder-flash` | Fast coding |
| `qwen-max` | Complex tasks |
---
## 🤖 FEATURE 2: 25+ AI Providers
### Tier 1: FREE
| Provider | Free Tier | Model | Setup |
|----------|-----------|-------|-------|
| **Qwen OAuth** | ✅ 2,000/day | Qwen3-Coder | `qwen`, then `/auth` |
### Tier 2: Major AI Labs
| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
### Tier 3: Cloud Platforms
| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials |
### Tier 4: Aggregators & Gateways
| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ models | Multi-provider gateway |
| **Together AI** | Open source | Fine-tuning, hosting |
| **DeepInfra** | Open source | Cost-effective |
| **Vercel AI** | Multi-provider | Edge hosting |
### Tier 5: Fast Inference
| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |
### Tier 6: Specialized
| Provider | Use Case |
|----------|----------|
| **Perplexity** | Web search integration |
| **Cohere** | Enterprise RAG |
| **GitLab Duo** | CI/CD AI integration |
| **GitHub Copilot** | IDE integration |
### Tier 7: Local/Self-Hosted
| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
---
## 🎨 Customization Options
### 1. Model Selection
**Option A: Fetch from Provider**
```bash
# Use included script
./scripts/fetch-models.sh openrouter
./scripts/fetch-models.sh groq
./scripts/fetch-models.sh ollama
# Or manually
curl -s https://openrouter.ai/api/v1/models \
-H "Authorization: Bearer $KEY" | jq '.data[].id'
```
**Option B: Custom Model Input**
```json
{
"customModels": {
"my-fine-tuned": {
"provider": "openai",
"modelId": "ft:gpt-4o:org:custom:suffix",
"displayName": "My Custom Model"
},
"local-llama": {
"provider": "ollama",
"modelId": "llama3.2:70b",
"displayName": "Local Llama 3.2 70B"
}
}
}
```
### 2. Security Hardening
```bash
# Environment variables (never hardcode keys)
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
# Restricted config permissions
chmod 600 ~/.config/claw/config.json
chmod 600 ~/.qwen/settings.json
# Systemd hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
```
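The systemd directives above belong in the `[Service]` section of a unit file. A minimal sketch — the unit name, description, and binary path are placeholders, not part of any platform's install:

```ini
# /etc/systemd/system/claw-agent.service (hypothetical unit name)
[Unit]
Description=Claw agent gateway

[Service]
ExecStart=/usr/local/bin/zeroclaw gateway
Restart=on-failure
# Hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now claw-agent`.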
### 3. Interactive Brainstorming
After installation, customize with brainstorming:
| Topic | Questions |
|-------|-----------|
| **Use Case** | Coding, research, productivity, automation? |
| **Model Selection** | Claude, GPT, Gemini, Qwen, local? |
| **Integrations** | Telegram, Discord, calendar, storage? |
| **Deployment** | Local, VPS, cloud? |
| **Agent Personality** | Tone, memory, proactivity? |
---
## 📦 Installation Guides
### Qwen Code (Native FREE OAuth)
```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth # Select Qwen OAuth
```
### OpenClaw
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```
### NanoBot
```bash
pip install nanobot-ai
nanobot onboard
nanobot gateway
```
### PicoClaw
```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
picoclaw gateway
```
### ZeroClaw
```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
zeroclaw gateway
```
---
## ⚙️ Configuration Examples
### Multi-Provider Setup
```json
{
"providers": {
"qwen": {
"type": "oauth",
"free": true,
"daily_limit": 2000,
"model": "qwen3-coder-plus"
},
"anthropic": {
"apiKey": "${ANTHROPIC_API_KEY}",
"baseURL": "https://api.anthropic.com"
},
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseURL": "https://api.openai.com/v1"
},
"google": {
"apiKey": "${GOOGLE_API_KEY}"
},
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseURL": "https://openrouter.ai/api/v1"
},
"groq": {
"apiKey": "${GROQ_API_KEY}",
"baseURL": "https://api.groq.com/openai/v1"
},
"ollama": {
"baseURL": "http://localhost:11434/v1"
}
},
"agents": {
"free": {
"model": "qwen/qwen3-coder-plus"
},
"premium": {
"model": "anthropic/claude-sonnet-4-5"
},
"fast": {
"model": "groq/llama-3.3-70b-versatile"
},
"local": {
"model": "ollama/llama3.2:70b"
}
}
}
```
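Given a multi-provider config like the one above, a small helper can resolve an agent alias to its model string. The config path, and the helper itself, are assumptions of this sketch, not a documented CLI feature:

```shell
# Resolve an agent alias (e.g. "free", "premium") to its model string
# from a claw-style config file using jq; prints nothing if the agent
# is not defined.
resolve_agent() {
  local config="$1" agent="$2"
  jq -r --arg a "$agent" '.agents[$a].model // empty' "$config"
}

# Example, against the config above:
# resolve_agent ~/.config/claw/config.json free   # -> qwen/qwen3-coder-plus
```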
### Environment Variables
```bash
# ~/.qwen/.env or ~/.config/claw/.env
# Qwen OAuth (FREE - from qwen --auth-type qwen-oauth)
# Credentials stored in: ~/.qwen/oauth_creds.json
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# Or use paid providers
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
GOOGLE_API_KEY=xxx
GROQ_API_KEY=gsk_xxx
OPENROUTER_API_KEY=sk-or-xxx
MISTRAL_API_KEY=xxx
XAI_API_KEY=xxx
COHERE_API_KEY=xxx
PERPLEXITY_API_KEY=xxx
CEREBRAS_API_KEY=xxx
TOGETHER_API_KEY=xxx
DEEPINFRA_API_KEY=xxx
# Cloud providers
AZURE_OPENAI_API_KEY=xxx
AZURE_OPENAI_ENDPOINT=https://xxx.openai.azure.com/
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=xxx
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_LOCATION=us-central1
```
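With this many optional keys, it is easy to miss one. A small bash helper (hypothetical, for convenience only) reports which provider keys are actually set in the current environment:

```shell
# Report, for each variable name given, whether it is set and non-empty
# in the current environment. Uses bash indirect expansion (${!var}).
check_keys() {
  local var
  for var in "$@"; do
    if [ -n "${!var:-}" ]; then
      echo "$var: set"
    else
      echo "$var: missing"
    fi
  done
}

# Example:
# check_keys ANTHROPIC_API_KEY OPENAI_API_KEY GROQ_API_KEY OPENROUTER_API_KEY
```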
---
## 💬 Usage Examples
### Basic Usage
```
"Setup OpenClaw with FREE Qwen OAuth"
"Install NanoBot with all AI providers"
"Configure ZeroClaw with Groq for fast inference"
```
### Advanced Usage
```
"Setup Claw with Anthropic, OpenAI, and FREE Qwen fallback"
"Fetch available models from OpenRouter and let me choose"
"Configure PicoClaw with my custom fine-tuned model"
"Import Qwen OAuth to use with OpenClaw"
"Setup Claw platform with security hardening"
```
### Provider-Specific
```
"Configure Claw with Anthropic Claude 4"
"Setup Claw with OpenAI GPT-5"
"Use Google Gemini 3 Pro with OpenClaw"
"Setup local Ollama models with Claw"
"Configure OpenRouter gateway for 100+ models"
```
---
## 📁 Files in This Skill
```
skills/claw-setup/
├── SKILL.md                 # Skill definition (this file's source)
├── README.md                # This documentation
└── scripts/
    ├── import-qwen-oauth.sh # Import FREE Qwen OAuth to any platform
    └── fetch-models.sh      # Fetch models from all providers
```
---
## 🔧 Troubleshooting
### Qwen OAuth Token Not Found
```bash
# Re-authenticate
qwen   # then run /auth inside the CLI and select Qwen OAuth
# Check token location
ls ~/.qwen/
find ~/.qwen -name "*.json"
```
### Token Expired
```bash
# Tokens auto-refresh in Qwen Code
qwen -p "refresh"
# Re-export
source ~/.qwen/.env
```
### API Errors
```bash
# Verify token is valid
QWEN_TOKEN=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
curl -X POST "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions" \
-H "Authorization: Bearer $QWEN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "Hello"}]}'
# Check rate limits
# FREE tier: 60 req/min, 2000/day
```
---