docs: Comprehensive documentation for 25+ providers + Qwen OAuth
Restructured documentation to highlight both key features:

FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
- 2,000 requests/day free tier
- Works with ALL Claw platforms
- Browser OAuth via qwen.ai
- Model: Qwen3-Coder

FEATURE 2: 25+ OpenCode-Compatible Providers
- Major AI labs: Anthropic, OpenAI, Google, xAI, Mistral
- Cloud platforms: Azure, AWS Bedrock, Google Vertex
- Fast inference: Groq, Cerebras
- Gateways: OpenRouter (100+ models), Together AI
- Local: Ollama, LM Studio, vLLM

Provider tiers:
1. FREE: Qwen OAuth
2. Major labs: Anthropic, OpenAI, Google, xAI, Mistral
3. Cloud: Azure, Bedrock, Vertex
4. Fast: Groq, Cerebras
5. Gateways: OpenRouter, Together AI, Vercel
6. Specialized: Perplexity, Cohere, GitLab, GitHub
7. Local: Ollama, LM Studio, vLLM

Platforms with full support:
- Qwen Code (native OAuth)
- OpenClaw, NanoBot, PicoClaw, ZeroClaw (import OAuth)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
66
README.md
@@ -26,12 +26,12 @@
</div>

## Skills Index (8 Skills)

### AI & Automation

| Skill | Description | Status |
|-------|-------------|--------|
| [🦞 Claw Setup](./skills/claw-setup/) | AI Agent deployment + **25+ providers** + **FREE Qwen OAuth** | ✅ Production Ready |

### System Administration

| Skill | Description | Status |
@@ -58,40 +58,44 @@
---

## ⭐ Featured: Claw Setup

### Two Powerful Features

**1. FREE Qwen OAuth (2,000 req/day)**

- Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw
- Model: Qwen3-Coder (coding-optimized)
- No API key needed

**2. 25+ OpenCode-Compatible Providers**

- Major labs: Anthropic, OpenAI, Google, xAI, Mistral
- Cloud: Azure, AWS Bedrock, Google Vertex
- Fast: Groq, Cerebras
- Gateways: OpenRouter (100+ models)
- Local: Ollama, LM Studio

### Quick Start

```bash
# FREE Qwen OAuth
npm install -g @qwen-code/qwen-code@latest
qwen && /auth

# Import to any platform
source ~/.qwen/.env && openclaw
source ~/.qwen/.env && nanobot gateway
source ~/.qwen/.env && zeroclaw gateway
```

### Platforms Supported

| Platform | Qwen OAuth | All Providers | Memory |
|----------|------------|---------------|--------|
| Qwen Code | ✅ Native | ✅ | ~200MB |
| OpenClaw | ✅ Import | ✅ | >1GB |
| NanoBot | ✅ Import | ✅ | ~100MB |
| PicoClaw | ✅ Import | ✅ | <10MB |
| ZeroClaw | ✅ Import | ✅ | <5MB |

---
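The quick start above relies on `source ~/.qwen/.env`. If your install doesn't write that file, it can be derived from the OAuth token store. A minimal sketch, assuming the token lives in `~/.qwen/oauth-token.json` with an `access_token` field (both the path and the field name are assumptions about the Qwen Code layout; the demo runs against a stand-in file):

```shell
#!/bin/sh
# Sketch only: token path and JSON field are assumed; adjust for your install.
make_qwen_env() {
  # $1 = path to oauth-token.json, $2 = output .env path
  token=$(sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p' "$1")
  [ -n "$token" ] || { echo "no access_token in $1" >&2; return 1; }
  cat > "$2" <<EOF
export OPENAI_API_KEY=$token
export OPENAI_BASE_URL=https://api.qwen.ai/v1
export OPENAI_MODEL=qwen3-coder-plus
EOF
}

# Demo against a stand-in token file.
# Real use: make_qwen_env ~/.qwen/oauth-token.json ~/.qwen/.env
dir=$(mktemp -d)
printf '{"access_token":"demo-token"}' > "$dir/oauth-token.json"
make_qwen_env "$dir/oauth-token.json" "$dir/.env"
cat "$dir/.env"
```

After that, `source ~/.qwen/.env && openclaw` works as shown above.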
@@ -99,7 +103,7 @@
```
"Setup OpenClaw with FREE Qwen OAuth"
"Configure Claw with all 25+ providers"
"Run ram optimizer on my server"
"Scan this directory for leaked secrets"
```
@@ -2,9 +2,9 @@
# 🦞 Claw Setup

### Cross-Platform AI Agent Deployment with 25+ Providers + FREE Qwen OAuth

**Use ANY AI provider with ANY Claw platform - including FREE Qwen tier!**

---
@@ -26,192 +26,211 @@
</div>

## ⭐ Two Powerful Features

```
┌─────────────────────────────────────────────────────────────────┐
│                       CLAW SETUP FEATURES                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: FREE Qwen OAuth Cross-Platform Import               │
│  ────────────────────────────────────────────────               │
│  ✅ FREE: 2,000 requests/day, 60 req/min                        │
│  ✅ Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw           │
│  ✅ Model: Qwen3-Coder (coding-optimized)                       │
│  ✅ No API key needed - browser OAuth                           │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  ───────────────────────────────────────────────                │
│  ✅ All major AI labs: Anthropic, OpenAI, Google, xAI           │
│  ✅ Cloud platforms: Azure, AWS Bedrock, Google Vertex          │
│  ✅ Fast inference: Groq (ultra-fast), Cerebras (fastest)       │
│  ✅ Gateways: OpenRouter (100+ models), Together AI             │
│  ✅ Local: Ollama, LM Studio, vLLM                              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
---

## Platforms Supported

| Platform | Qwen OAuth | All Providers | Memory | Best For |
|----------|------------|---------------|--------|----------|
| **Qwen Code** | ✅ Native | ✅ | ~200MB | FREE coding |
| **OpenClaw** | ✅ Import | ✅ | >1GB | Full-featured |
| **NanoBot** | ✅ Import | ✅ | ~100MB | Research |
| **PicoClaw** | ✅ Import | ✅ | <10MB | Embedded |
| **ZeroClaw** | ✅ Import | ✅ | <5MB | Performance |

---

# FEATURE 1: FREE Qwen OAuth Import

## Quick Start (FREE)

```bash
# 1. Install Qwen Code
npm install -g @qwen-code/qwen-code@latest

# 2. Get FREE OAuth (2,000 req/day)
qwen
/auth   # Select "Qwen OAuth" → Browser login

# 3. Import to ANY platform
source ~/.qwen/.env && openclaw
source ~/.qwen/.env && nanobot gateway
source ~/.qwen/.env && picoclaw gateway
source ~/.qwen/.env && zeroclaw gateway
```

## Free Tier Limits

| Metric | Limit |
|--------|-------|
| Requests/day | **2,000** |
| Requests/minute | 60 |
| Cost | **FREE** |
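The per-minute cap is the one batch jobs trip over. Pacing a loop at roughly one request per second keeps it safely under 60 req/min. A rough sketch (the `call_api` stub is a placeholder for a real request, not part of any platform):

```shell
#!/bin/sh
# Pace a batch at ~1 request/second, i.e. under the 60 req/min cap.
call_api() { echo "request $1"; }   # stand-in for a real curl call

paced_batch() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    call_api "$i"
    [ "$i" -lt "$n" ] && sleep 1    # pause between calls, not after the last
    i=$((i + 1))
  done
}

paced_batch 3
```

For the daily cap, the same idea applies at a coarser granularity: count requests and stop (or switch providers) at 2,000.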
---

# FEATURE 2: 25+ AI Providers

## FREE Tier

| Provider | Free | Model | How to Get |
|----------|------|-------|------------|
| **Qwen OAuth** | ✅ 2K/day | Qwen3-Coder | `qwen && /auth` |

## Major AI Labs

| Provider | Models | Features |
|----------|--------|----------|
| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF |
| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling |
| **Google AI** | Gemini 2.5, 3 Pro | Multimodal |
| **xAI** | Grok | Real-time data |
| **Mistral** | Large, Codestral | Code-focused |

## Cloud Platforms

| Provider | Models | Use Case |
|----------|--------|----------|
| **Azure OpenAI** | GPT-5 Enterprise | Azure integration |
| **Google Vertex** | Claude, Gemini | GCP infrastructure |
| **Amazon Bedrock** | Nova, Claude, Llama | AWS integration |

## Fast Inference

| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |

## Gateways (100+ Models)

| Provider | Models | Features |
|----------|--------|----------|
| **OpenRouter** | 100+ | Multi-provider gateway |
| **Together AI** | Open source | Fine-tuning |
| **Vercel AI** | Multi | Edge hosting |

## Local/Self-Hosted

| Provider | Use Case |
|----------|----------|
| **Ollama** | Local models |
| **LM Studio** | GUI local |
| **vLLM** | High-performance |

---
# Multi-Provider Configuration

```json
{
  "providers": {
    "qwen": { "type": "oauth", "free": true, "limit": 2000 },
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "free": { "model": "qwen/qwen3-coder-plus" },
    "premium": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
---

# Quick Setup Examples

## Option 1: FREE Only

```bash
# Get FREE Qwen OAuth
npm install -g @qwen-code/qwen-code@latest
qwen && /auth

# Use with any platform
source ~/.qwen/.env && openclaw
```

## Option 2: With API Keys

```bash
# Configure providers
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export GOOGLE_API_KEY="your-key"
export GROQ_API_KEY="your-key"

# Or use OpenRouter for 100+ models
export OPENROUTER_API_KEY="your-key"
export OPENAI_API_KEY="$OPENROUTER_API_KEY"
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
```

## Option 3: Local Models

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:70b

# Use with Claw platforms
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3.2:70b"
```
---

# Fetch Available Models

```bash
# Use included script
./scripts/fetch-models.sh all

# Or manually
curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
curl -s http://localhost:11434/api/tags
```
---

# Usage Examples

```
"Setup OpenClaw with FREE Qwen OAuth"
"Configure NanoBot with Anthropic and OpenAI"
"Import Qwen OAuth to ZeroClaw"
"Fetch available models from OpenRouter"
"Setup Claw with all 25+ providers"
"Add custom fine-tuned model"
```

---
@@ -1,321 +1,320 @@
---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "import qwen oauth", "use free qwen tier", "configure AI providers", "add openai provider", "AI agent setup", or mentions setting up AI platforms.
version: 1.3.0
---

# Claw Setup Skill

End-to-end professional setup of AI Agent platforms with **25+ OpenCode-compatible providers** and **FREE Qwen OAuth cross-platform import**.

## ⭐ Two Key Features

```
┌─────────────────────────────────────────────────────────────────┐
│                       CLAW SETUP FEATURES                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)             │
│  ──────────────────────────────────────────────────             │
│  • FREE: 2,000 requests/day, 60 req/min                         │
│  • Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw            │
│  • Model: Qwen3-Coder (optimized for coding)                    │
│  • Auth: Browser OAuth via qwen.ai                              │
│                                                                 │
│  FEATURE 2: 25+ OpenCode-Compatible AI Providers                │
│  ───────────────────────────────────────────────                │
│  • All major AI labs: Anthropic, OpenAI, Google, xAI, Mistral   │
│  • Cloud platforms: Azure, AWS Bedrock, Google Vertex           │
│  • Fast inference: Groq, Cerebras                               │
│  • Gateways: OpenRouter (100+ models), Together AI              │
│  • Local: Ollama, LM Studio, vLLM                               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Supported Platforms

| Platform | Language | Memory | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ✅ Native | ✅ | FREE coding |
| **OpenClaw** | TypeScript | >1GB | ✅ Import | ✅ | Full-featured |
| **NanoBot** | Python | ~100MB | ✅ Import | ✅ | Research |
| **PicoClaw** | Go | <10MB | ✅ Import | ✅ | Embedded |
| **ZeroClaw** | Rust | <5MB | ✅ Import | ✅ | Performance |
| **NanoClaw** | TypeScript | ~50MB | ✅ Import | ✅ | WhatsApp |
---

# FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)

## Get FREE Qwen OAuth

```bash
# Install Qwen Code
npm install -g @qwen-code/qwen-code@latest

# Authenticate (FREE)
qwen
/auth   # Select "Qwen OAuth" → Browser login with qwen.ai
# FREE: 2,000 requests/day, 60 req/min
```

## Import to Any Platform

```bash
# Extract token
export QWEN_TOKEN=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')

# Configure for any platform
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"

# Use with any platform!
openclaw    # OpenClaw with FREE Qwen
nanobot     # NanoBot with FREE Qwen
picoclaw    # PicoClaw with FREE Qwen
zeroclaw    # ZeroClaw with FREE Qwen
```
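Before launching a platform, it's worth confirming all three variables actually made it into the environment (an empty `jq` result silently exports an empty key). A small sanity check, sketched with the variable names exported above (demo values are placeholders):

```shell
#!/bin/sh
# Fail fast if any OpenAI-compatible variable is unset or empty.
check_env() {
  for v in OPENAI_API_KEY OPENAI_BASE_URL OPENAI_MODEL; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      return 1
    fi
  done
  echo "env OK: $OPENAI_MODEL via $OPENAI_BASE_URL"
}

# Demo with placeholder values; in real use, run right after the exports above.
export OPENAI_API_KEY="demo" OPENAI_BASE_URL="https://api.qwen.ai/v1" OPENAI_MODEL="qwen3-coder-plus"
check_env
```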
---

# FEATURE 2: 25+ OpenCode-Compatible AI Providers

## Tier 1: FREE Tier

| Provider | Free Tier | Model | Setup |
|----------|-----------|-------|-------|
| **Qwen OAuth** | 2,000/day | Qwen3-Coder | `qwen && /auth` |

## Tier 2: Major AI Labs

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |

## Tier 3: Cloud Platforms

| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration, custom endpoints |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infrastructure |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |

## Tier 4: Aggregators & Gateways

| Provider | SDK Package | Models | Features |
|----------|-------------|--------|----------|
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting, rate limiting |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning, hosting |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source | Cost-effective hosting |

## Tier 5: Fast Inference

| Provider | SDK Package | Speed | Models |
|----------|-------------|-------|--------|
| **Groq** | `@ai-sdk/groq` | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | `@ai-sdk/cerebras` | Fastest | Llama 3 variants |

## Tier 6: Specialized

| Provider | SDK Package | Use Case |
|----------|-------------|----------|
| **Perplexity** | `@ai-sdk/perplexity` | Web search integration |
| **Cohere** | `@ai-sdk/cohere` | Enterprise RAG |
| **GitLab Duo** | `@gitlab/gitlab-ai-provider` | CI/CD AI integration |
| **GitHub Copilot** | Custom | IDE integration |

## Tier 7: Local/Self-Hosted

| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
| **LocalAI** | localhost:8080 | OpenAI-compatible local |
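Many of the hosted providers above also expose OpenAI-compatible endpoints, so switching is largely a matter of swapping two environment variables. A sketch (base URLs from the tables above; the `use_provider` helper name is made up for illustration):

```shell
#!/bin/sh
# Point the OpenAI-compatible env vars at a chosen provider; key as $2.
use_provider() {
  case "$1" in
    groq)       OPENAI_BASE_URL="https://api.groq.com/openai/v1" ;;
    openrouter) OPENAI_BASE_URL="https://openrouter.ai/api/v1" ;;
    together)   OPENAI_BASE_URL="https://api.together.xyz/v1" ;;
    ollama)     OPENAI_BASE_URL="http://localhost:11434/v1" ;;
    *)          echo "unknown provider: $1" >&2; return 1 ;;
  esac
  export OPENAI_BASE_URL
  [ -n "${2:-}" ] && export OPENAI_API_KEY="$2"
  echo "using $1 at $OPENAI_BASE_URL"
}

use_provider ollama
```

The non-OpenAI-compatible providers (Vertex, Bedrock) need their own credentials per the tier tables above.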
||||||
|
---

# Multi-Provider Configuration

## Full Configuration Example

```json
{
  "providers": {
    "qwen_oauth": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000,
      "model": "qwen3-coder-plus"
    },
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "baseURL": "https://api.anthropic.com"
    },
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "baseURL": "https://api.openai.com/v1"
    },
    "google": {
      "apiKey": "${GOOGLE_API_KEY}",
      "baseURL": "https://generativelanguage.googleapis.com/v1"
    },
    "azure": {
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "baseURL": "${AZURE_OPENAI_ENDPOINT}"
    },
    "vertex": {
      "projectId": "${GOOGLE_CLOUD_PROJECT}",
      "location": "${GOOGLE_CLOUD_LOCATION}"
    },
    "bedrock": {
      "region": "us-east-1",
      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
    },
    "openrouter": {
      "apiKey": "${OPENROUTER_API_KEY}",
      "baseURL": "https://openrouter.ai/api/v1"
    },
    "xai": {
      "apiKey": "${XAI_API_KEY}",
      "baseURL": "https://api.x.ai/v1"
    },
    "mistral": {
      "apiKey": "${MISTRAL_API_KEY}",
      "baseURL": "https://api.mistral.ai/v1"
    },
    "groq": {
      "apiKey": "${GROQ_API_KEY}",
      "baseURL": "https://api.groq.com/openai/v1"
    },
    "cerebras": {
      "apiKey": "${CEREBRAS_API_KEY}",
      "baseURL": "https://api.cerebras.ai/v1"
    },
    "deepinfra": {
      "apiKey": "${DEEPINFRA_API_KEY}",
      "baseURL": "https://api.deepinfra.com/v1"
    },
    "cohere": {
      "apiKey": "${COHERE_API_KEY}",
      "baseURL": "https://api.cohere.ai/v1"
    },
    "together": {
      "apiKey": "${TOGETHER_API_KEY}",
      "baseURL": "https://api.together.xyz/v1"
    },
    "perplexity": {
      "apiKey": "${PERPLEXITY_API_KEY}",
      "baseURL": "https://api.perplexity.ai"
    },
    "ollama": {
      "baseURL": "http://localhost:11434/v1",
      "apiKey": "ollama"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-sonnet-4-5",
      "temperature": 0.7
    },
    "free": {
      "model": "qwen/qwen3-coder-plus"
    },
    "fast": {
      "model": "groq/llama-3.3-70b-versatile"
    },
    "local": {
      "model": "ollama/llama3.2:70b"
    }
  }
}
```
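The `${VAR}` placeholders above are resolved from the environment when the config is loaded. A minimal sketch of that expansion step in Python (the `load_config` helper name is illustrative, not part of any Claw API; unset variables are left empty here, though a real loader might raise instead):

```python
import json
import os
import re
import tempfile


def load_config(path: str) -> dict:
    """Read a JSON config and expand ${NAME} references from the environment."""
    raw = open(path).read()
    expanded = re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), raw)
    return json.loads(expanded)


if __name__ == "__main__":
    # Demo with a throwaway file and a fake key.
    os.environ["GROQ_API_KEY"] = "gsk_demo"
    path = tempfile.mktemp(suffix=".json")
    with open(path, "w") as f:
        f.write('{"providers": {"groq": {"apiKey": "${GROQ_API_KEY}"}}}')
    print(load_config(path)["providers"]["groq"]["apiKey"])  # gsk_demo
```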

## Fetch Available Models

```bash
# OpenRouter - All 100+ models
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'

# OpenAI - GPT models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'

# Groq - Fast inference models
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[].id'

# Ollama - Local models
curl -s http://localhost:11434/api/tags | jq '.models[].name'

# Anthropic (static list)
# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022

# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"
```
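The OpenAI-style endpoints above all return the same `{"data": [{"id": ...}]}` envelope, so the `jq '.data[].id'` filter has a direct equivalent in any language. A sketch in Python against a made-up payload (the model ids shown are placeholders, not a live listing):

```python
def model_ids(payload: dict) -> list:
    """Equivalent of jq '.data[].id' for an OpenAI-style /models response."""
    return [entry["id"] for entry in payload.get("data", [])]


# A made-up response in the shape OpenAI-compatible servers return.
sample = {
    "object": "list",
    "data": [
        {"id": "llama-3.3-70b-versatile", "object": "model"},
        {"id": "qwen3-coder-plus", "object": "model"},
    ],
}

print(model_ids(sample))  # ['llama-3.3-70b-versatile', 'qwen3-coder-plus']
```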

---

# Platform Installation

## Qwen Code (Native FREE OAuth)

```bash
npm install -g @qwen-code/qwen-code@latest
qwen && /auth
```

## OpenClaw

```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```

## NanoBot

```bash
pip install nanobot-ai && nanobot onboard
```

## PicoClaw

```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```

## ZeroClaw

```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
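PicoClaw and ZeroClaw can reuse the FREE Qwen OAuth credential by reading the token that Qwen Code caches on disk. A minimal sketch of that extraction in Python (the `~/.qwen/oauth-token.json` filename and its key names vary between Qwen Code versions, so several common keys are tried):

```python
import json
import tempfile
from pathlib import Path


def read_qwen_token(token_file: Path):
    """Return the cached Qwen OAuth access token, trying common key names."""
    data = json.loads(token_file.read_text())
    for key in ("access_token", "token", "accessToken"):
        if data.get(key):
            return data[key]
    return None


if __name__ == "__main__":
    # Demo against a throwaway file; real tokens live under ~/.qwen/.
    demo = Path(tempfile.mktemp(suffix=".json"))
    demo.write_text(json.dumps({"access_token": "qw-demo-123"}))
    print(read_qwen_token(demo))  # qw-demo-123
```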

---

# Usage Examples

```
"Setup OpenClaw with FREE Qwen OAuth"
"Configure NanoBot with all AI providers"
"Import Qwen OAuth to ZeroClaw"
"Fetch available models from OpenRouter"
"Setup Claw with Anthropic and OpenAI providers"
"Add custom model to my Claw setup"
```

---

# Automation Scripts

See `scripts/` directory:

- `import-qwen-oauth.sh` - Import FREE Qwen OAuth to any platform
- `fetch-models.sh` - Fetch available models from all providers
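Since the FREE Qwen tier is capped at 2,000 requests per day, long-running agents benefit from a local counter checked before each call. A rough sketch (the limit constant matches the tier described above; the in-memory storage scheme is illustrative, a real agent would persist the count):

```python
from datetime import date

DAILY_LIMIT = 2000  # FREE Qwen OAuth tier


class QuotaGuard:
    """In-memory request counter that resets at each day boundary."""

    def __init__(self, limit: int = DAILY_LIMIT):
        self.limit = limit
        self.day = date.today()
        self.used = 0

    def try_acquire(self) -> bool:
        """Count one request and return True if still under today's limit."""
        today = date.today()
        if today != self.day:
            self.day, self.used = today, 0
        if self.used >= self.limit:
            return False
        self.used += 1
        return True


guard = QuotaGuard(limit=2)
print([guard.try_acquire() for _ in range(3)])  # [True, True, False]
```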
|
|||||||
Reference in New Issue
Block a user