<div align="center">

# 🦞 Claw Setup

### The Ultimate AI Agent Deployment Skill

**Setup ANY Claw platform with 25+ AI providers + FREE Qwen OAuth + Full Customization**

---

<p align="center">
<a href="https://z.ai/subscribe?ic=R0K78RJKNW">
<img src="https://img.shields.io/badge/Designed%20by-GLM%205%20Advanced%20Coding%20Model-blue?style=for-the-badge" alt="Designed by GLM 5">
</a>
</p>

<p align="center">
<i>✨ Autonomously developed by <a href="https://z.ai/subscribe?ic=R0K78RJKNW"><strong>GLM 5 Advanced Coding Model</strong></a></i>
</p>

<p align="center">
<b>⚠️ Disclaimer: Test in a test environment prior to using on any live system</b>
</p>

---

</div>
## Table of Contents

1. [Features Overview](#-features-overview)
2. [Supported Platforms](#-supported-platforms)
3. [FREE Qwen OAuth Import](#-feature-1-free-qwen-oauth-import)
4. [25+ AI Providers](#-feature-2-25-ai-providers)
5. [Customization Options](#-customization-options)
6. [Installation Guides](#-installation-guides)
7. [Configuration Examples](#-configuration-examples)
8. [Usage Examples](#-usage-examples)

---
## 🎯 Features Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                     CLAW SETUP FEATURES                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ✅ FEATURE 1: FREE Qwen OAuth Cross-Platform Import            │
│     • 2,000 requests/day FREE                                   │
│     • Works with ALL Claw platforms                             │
│     • Qwen3-Coder model (coding-optimized)                      │
│     • Browser OAuth - no API key needed                         │
│                                                                 │
│  ✅ FEATURE 2: 25+ OpenCode-Compatible AI Providers             │
│     • All major AI labs                                         │
│     • Cloud platforms (Azure, AWS, GCP)                         │
│     • Fast inference (Groq, Cerebras)                           │
│     • Gateways (OpenRouter: 100+ models)                        │
│     • Local models (Ollama, LM Studio)                          │
│                                                                 │
│  ✅ FEATURE 3: Full Customization                               │
│     • Model selection (fetch or custom)                         │
│     • Security hardening                                        │
│     • Interactive brainstorming                                 │
│     • Multi-provider configuration                              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

---
## 🦀 Supported Platforms

| Platform | Language | Memory | Startup | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|---------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ~5s | ✅ Native | ✅ | FREE coding |
| **OpenClaw** | TypeScript | >1GB | ~500s | ✅ Import | ✅ | Full-featured, 1700+ plugins |
| **NanoBot** | Python | ~100MB | ~30s | ✅ Import | ✅ | Research, Python devs |
| **PicoClaw** | Go | <10MB | ~1s | ✅ Import | ✅ | Embedded, $10 hardware |
| **ZeroClaw** | Rust | <5MB | <10ms | ✅ Native | ✅ | Maximum performance |
| **NanoClaw** | TypeScript | ~50MB | ~5s | ✅ Import | ✅ | WhatsApp integration |

### Platform Selection Guide

```
              ┌─────────────────┐
              │ Need AI Agent?  │
              └────────┬────────┘
                       │
                       ▼
           ┌───────────────────────┐
           │    Want FREE tier?    │
           └───────────┬───────────┘
                 ┌─────┴─────┐
                YES          NO
                 │            │
                 ▼            ▼
        ┌──────────────┐  ┌──────────────────┐
        │ ⭐ Qwen Code │  │ Memory limited?  │
        │  OAuth FREE  │  └────────┬─────────┘
        │   2000/day   │     ┌─────┴─────┐
        └──────────────┘    YES          NO
                             │            │
                             ▼            ▼
                       ┌──────────┐  ┌──────────┐
                       │ZeroClaw/ │  │OpenClaw  │
                       │PicoClaw  │  │(Full)    │
                       └──────────┘  └──────────┘
```

---
## ⭐ FEATURE 1: FREE Qwen OAuth Import

### What You Get

| Metric | Value |
|--------|-------|
| **Requests/day** | 2,000 |
| **Requests/minute** | 60 |
| **Cost** | **FREE** |
| **Model** | Qwen3-Coder (coding-optimized) |
| **Auth** | Browser OAuth via qwen.ai |

### Quick Start

```bash
# Step 1: Install Qwen Code CLI
npm install -g @qwen-code/qwen-code@latest

# Step 2: Get FREE OAuth (opens browser for login)
qwen --auth-type qwen-oauth -p "test"
# Credentials saved to: ~/.qwen/oauth_creds.json

# Step 3: Import to ANY platform

# ZeroClaw (native provider - auto token refresh)
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
EOF

# Other platforms (OpenAI-compatible)
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
```
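The two export lines above can be wrapped in a small helper that snapshots the current token into a reusable env file (`write_qwen_env` is a hypothetical name, not a shipped command; the `~/.qwen/.env` path is the one sourced later in this guide):

```shell
# Hypothetical helper: capture the current access token into ~/.qwen/.env
# so any shell can `source` it. Re-run after each token refresh, since the
# file stores the token value, not a live reference to oauth_creds.json.
write_qwen_env() {
  token=$(jq -r '.access_token' "$HOME/.qwen/oauth_creds.json")
  cat > "$HOME/.qwen/.env" <<EOF
export OPENAI_API_KEY="$token"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
EOF
}
# usage: write_qwen_env && source ~/.qwen/.env
```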

### Platform-Specific Import

#### OpenClaw + FREE Qwen (OpenAI-Compatible)
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install

# Extract token from Qwen OAuth credentials
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"

npm run start
```

#### NanoBot + FREE Qwen (OpenAI-Compatible)
```bash
pip install nanobot-ai

# Extract token and configure
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"

nanobot gateway
```

#### PicoClaw + FREE Qwen (OpenAI-Compatible)
```bash
# Extract token and set environment
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"

picoclaw gateway
```

#### NanoClaw + FREE Qwen (OpenAI-Compatible)
```bash
# Extract token and set environment
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"

nanoclaw
```

#### ZeroClaw + FREE Qwen (NATIVE Provider)
```bash
# Install ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw

# ZeroClaw has NATIVE qwen-oauth provider support!
# First, get OAuth credentials via Qwen Code:
qwen   # inside the CLI, run /auth and select Qwen OAuth → creates ~/.qwen/oauth_creds.json

# Configure ZeroClaw to use the native qwen-oauth provider
mkdir -p ~/.zeroclaw
cat > ~/.zeroclaw/config.toml << CONFIG
default_provider = "qwen-oauth"
default_model = "qwen3-coder-plus"
default_temperature = 0.7
CONFIG

# ZeroClaw reads ~/.qwen/oauth_creds.json directly with auto token refresh!
zeroclaw gateway
```
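Before launching the gateway, a quick pre-flight check confirms both pieces are in place. This is a sketch using the paths and config key shown above; `check_zeroclaw_setup` is an illustrative helper name, not a ZeroClaw command:

```shell
# Pre-flight check (sketch): verify OAuth creds and ZeroClaw config exist
check_zeroclaw_setup() {
  if [ ! -f "$HOME/.qwen/oauth_creds.json" ]; then
    echo "missing OAuth creds: run qwen, then /auth"
    return 1
  fi
  if ! grep -q 'default_provider = "qwen-oauth"' "$HOME/.zeroclaw/config.toml" 2>/dev/null; then
    echo "config.toml does not select qwen-oauth"
    return 1
  fi
  echo "ready"
}
# usage: check_zeroclaw_setup && zeroclaw gateway
```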

### Understanding Qwen OAuth Import Methods

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                         QWEN OAUTH IMPORT METHODS                           │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  METHOD 1: Native Provider (ZeroClaw ONLY)                                  │
│  ─────────────────────────────────────────────────────                      │
│  • ZeroClaw has built-in "qwen-oauth" provider                              │
│  • Reads ~/.qwen/oauth_creds.json directly                                  │
│  • Automatic token refresh using refresh_token                              │
│  • Tracks expiry_date and refreshes when needed                             │
│  • Configuration: default_provider = "qwen-oauth"                           │
│                                                                             │
│  METHOD 2: OpenAI-Compatible (All Other Platforms)                          │
│  ─────────────────────────────────────────────────────                      │
│  • Treats Qwen API as OpenAI-compatible endpoint                            │
│  • Extract access_token and use as OPENAI_API_KEY                           │
│  • Set OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1    │
│  • Manual re-export needed when token expires                               │
│                                                                             │
│  COMPARISON:                                                                │
│  ┌───────────────┬──────────────────────────┬─────────────────────┐         │
│  │ Feature       │ Native                   │ OpenAI-Compatible   │         │
│  ├───────────────┼──────────────────────────┼─────────────────────┤         │
│  │ Token Refresh │ ✅ Automatic             │ ❌ Manual           │         │
│  │ Token Expiry  │ ✅ Handled               │ ⚠️ Re-export needed │         │
│  │ Platforms     │ ZeroClaw only            │ All others          │         │
│  │ Config File   │ ~/.qwen/oauth_creds.json │ env vars            │         │
│  └───────────────┴──────────────────────────┴─────────────────────┘         │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

### OAuth Credentials Structure

Qwen Code stores OAuth credentials in `~/.qwen/oauth_creds.json`:

```json
{
  "access_token": "pIFwnvSC3fQPG0i5waDbozvUNEWE4w9x...",
  "refresh_token": "9Fm_Ob-c8_WAT_3QvgGwVGfgoNfAdP...",
  "token_type": "Bearer",
  "resource_url": "portal.qwen.ai",
  "expiry_date": 1771774796531
}
```

| Field | Purpose |
|-------|---------|
| `access_token` | Used for API authentication |
| `refresh_token` | Used to get a new access_token when expired |
| `expiry_date` | Unix timestamp (milliseconds) when the access_token expires |
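The 13-digit `expiry_date` above is a millisecond timestamp, so a short shell check (a sketch; assumes `jq` is installed) tells whether the cached token is still usable before you export it:

```shell
# Return success (0) while the cached access token is still valid.
# expiry_date is in milliseconds, so compare against epoch seconds * 1000.
is_token_valid() {
  expiry_ms=$(jq -r '.expiry_date' "$1")
  now_ms=$(( $(date +%s) * 1000 ))
  [ "$now_ms" -lt "$expiry_ms" ]
}
# usage: is_token_valid ~/.qwen/oauth_creds.json || echo "token expired - re-export"
```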

### API Endpoints

| Endpoint | URL |
|----------|-----|
| **API Base** | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| **Chat Completions** | `https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions` |
| **Token Refresh** | `https://chat.qwen.ai/api/v1/oauth2/token` |

### Available Models (FREE Tier)

| Model | Best For |
|-------|----------|
| `qwen3-coder-plus` | Coding (recommended) |
| `qwen3-coder-flash` | Fast coding |
| `qwen-max` | Complex tasks |

---
## 🤖 FEATURE 2: 25+ AI Providers

### Tier 1: FREE

| Provider | Free Tier | Model | Setup |
|----------|-----------|-------|-------|
| **Qwen OAuth** | ✅ 2,000/day | Qwen3-Coder | `qwen`, then `/auth` |

### Tier 2: Major AI Labs

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |

### Tier 3: Cloud Platforms

| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials |

### Tier 4: Aggregators & Gateways

| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ models | Multi-provider gateway |
| **Together AI** | Open source | Fine-tuning, hosting |
| **DeepInfra** | Open source | Cost-effective |
| **Vercel AI** | Multi-provider | Edge hosting |

### Tier 5: Fast Inference

| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |

### Tier 6: Specialized

| Provider | Use Case |
|----------|----------|
| **Perplexity** | Web search integration |
| **Cohere** | Enterprise RAG |
| **GitLab Duo** | CI/CD AI integration |
| **GitHub Copilot** | IDE integration |

### Tier 7: Local/Self-Hosted

| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |

---
## 🎨 Customization Options

### 1. Model Selection

**Option A: Fetch from Provider**
```bash
# Use included script
./scripts/fetch-models.sh openrouter
./scripts/fetch-models.sh groq
./scripts/fetch-models.sh ollama

# Or manually
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $KEY" | jq -r '.data[].id'
```

**Option B: Custom Model Input**
```json
{
  "customModels": {
    "my-fine-tuned": {
      "provider": "openai",
      "modelId": "ft:gpt-4o:org:custom:suffix",
      "displayName": "My Custom Model"
    },
    "local-llama": {
      "provider": "ollama",
      "modelId": "llama3.2:70b",
      "displayName": "Local Llama 3.2 70B"
    }
  }
}
```

### 2. Security Hardening

```bash
# Environment variables (never hardcode keys)
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Restricted config permissions
chmod 600 ~/.config/claw/config.json
chmod 600 ~/.qwen/settings.json

# Systemd hardening (directives for the [Service] section of your unit file)
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
```
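The three hardening directives above slot into a unit file like the following sketch. The service name, user, env-file path, and `ExecStart` binary are illustrative assumptions, not shipped defaults:

```ini
# /etc/systemd/system/claw.service — illustrative sketch
[Unit]
Description=Claw AI agent gateway
After=network-online.target

[Service]
User=claw
EnvironmentFile=/etc/claw/claw.env
ExecStart=/usr/local/bin/zeroclaw gateway
Restart=on-failure
# Hardening directives from above
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl daemon-reload && sudo systemctl enable --now claw`.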

### 3. Interactive Brainstorming

After installation, customize with brainstorming:

| Topic | Questions |
|-------|-----------|
| **Use Case** | Coding, research, productivity, automation? |
| **Model Selection** | Claude, GPT, Gemini, Qwen, local? |
| **Integrations** | Telegram, Discord, calendar, storage? |
| **Deployment** | Local, VPS, cloud? |
| **Agent Personality** | Tone, memory, proactivity? |

---
## 📦 Installation Guides

### Qwen Code (Native FREE OAuth)
```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth   # inside the Qwen CLI: select Qwen OAuth
```

### OpenClaw
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```

### NanoBot
```bash
pip install nanobot-ai
nanobot onboard
nanobot gateway
```

### PicoClaw
```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
picoclaw gateway
```

### ZeroClaw
```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
zeroclaw gateway
```

---
## ⚙️ Configuration Examples

### Multi-Provider Setup

```json
{
  "providers": {
    "qwen": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000,
      "model": "qwen3-coder-plus"
    },
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "baseURL": "https://api.anthropic.com"
    },
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "baseURL": "https://api.openai.com/v1"
    },
    "google": {
      "apiKey": "${GOOGLE_API_KEY}"
    },
    "openrouter": {
      "apiKey": "${OPENROUTER_API_KEY}",
      "baseURL": "https://openrouter.ai/api/v1"
    },
    "groq": {
      "apiKey": "${GROQ_API_KEY}",
      "baseURL": "https://api.groq.com/openai/v1"
    },
    "ollama": {
      "baseURL": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "free": {
      "model": "qwen/qwen3-coder-plus"
    },
    "premium": {
      "model": "anthropic/claude-sonnet-4-5"
    },
    "fast": {
      "model": "groq/llama-3.3-70b-versatile"
    },
    "local": {
      "model": "ollama/llama3.2:70b"
    }
  }
}
```

### Environment Variables

```bash
# ~/.qwen/.env or ~/.config/claw/.env

# Qwen OAuth (FREE - from qwen --auth-type qwen-oauth)
# Credentials stored in: ~/.qwen/oauth_creds.json
export OPENAI_API_KEY=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"

# Or use paid providers
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
GOOGLE_API_KEY=xxx
GROQ_API_KEY=gsk_xxx
OPENROUTER_API_KEY=sk-or-xxx
MISTRAL_API_KEY=xxx
XAI_API_KEY=xxx
COHERE_API_KEY=xxx
PERPLEXITY_API_KEY=xxx
CEREBRAS_API_KEY=xxx
TOGETHER_API_KEY=xxx
DEEPINFRA_API_KEY=xxx

# Cloud providers
AZURE_OPENAI_API_KEY=xxx
AZURE_OPENAI_ENDPOINT=https://xxx.openai.azure.com/
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=xxx
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_LOCATION=us-central1
```
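Note that the bare `KEY=value` lines above are plain assignments: they only reach a child process if each line gets an `export` prefix or the file is sourced with `allexport` enabled. A minimal loader using the latter:

```shell
# Load an env file so every assignment in it is exported to child processes
load_env() {
  set -a          # auto-export every variable assigned while sourcing
  . "$1"
  set +a
}
# usage: load_env ~/.config/claw/.env
```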

---

## 💬 Usage Examples

### Basic Usage
```
"Setup OpenClaw with FREE Qwen OAuth"
"Install NanoBot with all AI providers"
"Configure ZeroClaw with Groq for fast inference"
```

### Advanced Usage
```
"Setup Claw with Anthropic, OpenAI, and FREE Qwen fallback"
"Fetch available models from OpenRouter and let me choose"
"Configure PicoClaw with my custom fine-tuned model"
"Import Qwen OAuth to use with OpenClaw"
"Setup Claw platform with security hardening"
```

### Provider-Specific
```
"Configure Claw with Anthropic Claude 4"
"Setup Claw with OpenAI GPT-5"
"Use Google Gemini 3 Pro with OpenClaw"
"Setup local Ollama models with Claw"
"Configure OpenRouter gateway for 100+ models"
```

---
## 📁 Files in This Skill

```
skills/claw-setup/
├── SKILL.md                     # Skill definition (this file's source)
├── README.md                    # This documentation
└── scripts/
    ├── import-qwen-oauth.sh     # Import FREE Qwen OAuth to any platform
    └── fetch-models.sh          # Fetch models from all providers
```

---
## 🔧 Troubleshooting

### Qwen OAuth Token Not Found
```bash
# Re-authenticate: run qwen, then /auth inside the CLI and select Qwen OAuth
qwen

# Check token location
ls ~/.qwen/
find ~/.qwen -name "*.json"
```

### Token Expired
```bash
# Qwen Code refreshes tokens itself; running any prompt updates oauth_creds.json
qwen -p "refresh"

# Then re-export the new token for OpenAI-compatible platforms
source ~/.qwen/.env
```

### API Errors
```bash
# Verify token is valid
QWEN_TOKEN=$(jq -r '.access_token' ~/.qwen/oauth_creds.json)
curl -X POST "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions" \
  -H "Authorization: Bearer $QWEN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "Hello"}]}'

# Check rate limits
# FREE tier: 60 req/min, 2000/day
```
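When that curl fails, the HTTP status code usually distinguishes an expired token from a rate limit. A small helper (a sketch; the mapping assumes standard HTTP semantics rather than Qwen-specific documentation):

```shell
# Map an HTTP status code to a likely cause (standard HTTP semantics)
explain_status() {
  case "$1" in
    200) echo "ok" ;;
    401) echo "token invalid or expired - re-authenticate" ;;
    429) echo "rate limited - FREE tier allows 60 req/min, 2000/day" ;;
    *)   echo "unexpected status $1" ;;
  esac
}
# usage:
# status=$(curl -s -o /tmp/qwen_resp.json -w '%{http_code}' -X POST "$URL" ...)
# explain_status "$status"
```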

---

<p align="center">
<a href="https://z.ai/subscribe?ic=R0K78RJKNW">Learn more about GLM 5 Advanced Coding Model</a>
</p>