feat: Add Qwen OAuth cross-platform import for ALL Claw platforms

Key Feature: Use FREE Qwen tier (2,000 req/day) with ANY platform!

How it works:
1. Get Qwen OAuth: qwen && /auth (FREE)
2. Extract token from ~/.qwen/
3. Configure any platform with token

Supported platforms:
- OpenClaw 
- NanoBot 
- PicoClaw 
- ZeroClaw 
- NanoClaw 

Configuration:
  export OPENAI_API_KEY="$QWEN_TOKEN"
  export OPENAI_BASE_URL="https://api.qwen.ai/v1"
  export OPENAI_MODEL="qwen3-coder-plus"

Added:
- import-qwen-oauth.sh script for automation
- Cross-platform configuration examples
- Qwen API endpoints reference
- Troubleshooting guide

Free tier: 2,000 requests/day, 60 requests/minute

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Claude Code
2026-02-22 04:05:18 -05:00
parent 7a5c60f227
commit 46ed77201c
4 changed files with 561 additions and 342 deletions


---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "import qwen oauth", "use free qwen tier", "AI agent setup", or mentions setting up AI platforms with free providers.
version: 1.2.0
---
# Claw Setup Skill
End-to-end professional setup of AI Agent platforms with **cross-platform Qwen OAuth import** - use the FREE Qwen tier with ANY Claw platform!
## ⭐ Key Feature: Qwen OAuth Import
**Use Qwen's FREE tier (2,000 req/day) with ANY platform:**
```
┌────────────────────────────────────────────────────────────┐
│              QWEN OAUTH CROSS-PLATFORM IMPORT              │
├────────────────────────────────────────────────────────────┤
│                                                            │
│   Qwen Code CLI                    Other Platforms         │
│   ─────────────                    ───────────────         │
│                                                            │
│  ┌─────────────┐                  ┌─────────────┐          │
│  │   qwen.ai   │                  │  OpenClaw   │          │
│  │ OAuth Login │───────┬─────────►│  NanoBot    │          │
│  │ FREE 2K/day │       │          │  PicoClaw   │          │
│  └─────────────┘       │          │  ZeroClaw   │          │
│         │              │          │  NanoClaw   │          │
│         ▼              │          └─────────────┘          │
│  ┌─────────────┐       │                                   │
│  │  ~/.qwen/   │       │  Export OAuth as OpenAI-          │
│  │ OAuth Token │───────┘  compatible API configuration     │
│  └─────────────┘                                           │
│                                                            │
└────────────────────────────────────────────────────────────┘
```
## Supported Platforms
| Platform | Language | Memory | Qwen OAuth | Best For |
|----------|----------|--------|------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ✅ Native | Coding, FREE tier |
| **OpenClaw** | TypeScript | >1GB | ✅ Importable | Full-featured |
| **NanoBot** | Python | ~100MB | ✅ Importable | Research |
| **PicoClaw** | Go | <10MB | ✅ Importable | Embedded |
| **ZeroClaw** | Rust | <5MB | ✅ Importable | Performance |
| **NanoClaw** | TypeScript | ~50MB | ✅ Importable | WhatsApp |
## Step 1: Get Qwen OAuth Token (FREE)
**Special: Free 2,000 requests/day with Qwen OAuth!**
| Feature | Details |
|---------|---------|
| **Model** | Qwen3-Coder (coder-model) |
| **Free Tier** | 2,000 requests/day via OAuth |
| **Auth** | qwen.ai account (browser OAuth) |
| **GitHub** | https://github.com/QwenLM/qwen-code |
| **License** | Apache 2.0 |
### Install Qwen Code
```bash
# NPM (recommended)
npm install -g @qwen-code/qwen-code@latest

# Homebrew (macOS, Linux)
brew install qwen-code

# Or from source
git clone https://github.com/QwenLM/qwen-code.git
cd qwen-code
npm install
npm run build
```
### Authenticate with FREE OAuth
```bash
# Start interactive mode
qwen

# In Qwen Code session:
/auth

# Select "Qwen OAuth"
# Browser opens -> Sign in with qwen.ai account
# FREE: 2,000 requests/day, 60 req/min
```
### Qwen Code Features
- **Free OAuth Tier**: 2,000 requests/day, no API key needed
- **Qwen3-Coder Model**: Optimized for coding tasks
- **OpenAI-Compatible**: Works with any OpenAI-compatible API
- **IDE Integration**: VS Code, Zed, JetBrains
- **Headless Mode**: For CI/CD automation
- **TypeScript SDK**: Build custom integrations
### Configuration
```json
// ~/.qwen/settings.json
{
  "model": "qwen3-coder-480b",
  "temperature": 0.7,
  "maxTokens": 4096
}
```
## AI Providers (25+ Supported)
### Built-in Providers
| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Qwen OAuth** | Free tier | Qwen3-Coder | **2,000 free req/day** |
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling |
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, 3 Pro | Multimodal |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Google Cloud |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS integration |
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **xAI** | `@ai-sdk/xai` | Grok | Real-time data |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused |
| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-fast inference |
| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective |
| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning |
| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Web search |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting |
| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI |
| **GitHub Copilot** | Custom | GPT-5 series | IDE integration |
### Local/Self-Hosted
| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
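The same `OPENAI_*` pattern used for the Qwen import also points a platform at a local backend. As a sketch (the model name is illustrative), Ollama serves an OpenAI-compatible API under `/v1`:

```shell
# Point any OpenAI-compatible platform at a local Ollama instance.
# Ollama ignores the API key, but most clients require one to be set.
export OPENAI_API_KEY="ollama"                      # placeholder value
export OPENAI_BASE_URL="http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
export OPENAI_MODEL="llama3.2"                      # any locally pulled model
```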
## Platform Selection Guide
```
            ┌─────────────────┐
            │  Need AI Agent? │
            └────────┬────────┘
                     ▼
         ┌───────────────────────┐
         │    Want FREE tier?    │
         └───────────┬───────────┘
               ┌─────┴─────┐
              YES         NO
               │           │
               ▼           ▼
        ┌──────────────┐  ┌─────────────────────┐
        │  Qwen Code   │  │ Memory constrained? │
        │ (OAuth FREE) │  └──────────┬──────────┘
        │   2000/day   │        ┌────┴────┐
        └──────────────┘       YES       NO
                                │         │
                                ▼         ▼
                           ┌──────────┐  ┌──────────┐
                           │ZeroClaw/ │  │ OpenClaw │
                           │ PicoClaw │  │  (Full)  │
                           └──────────┘  └──────────┘
```
### Extract OAuth Token
```bash
# OAuth token is stored in:
ls -la ~/.qwen/
# View token file
cat ~/.qwen/settings.json
# Or find OAuth credentials
find ~/.qwen -name "*.json" -exec cat {} \;
```
## Step 2: Configure Any Platform with Qwen
### Method A: Use OAuth Token Directly
After authenticating with Qwen Code, extract and use the token:
```bash
# Token location (after /auth)
QWEN_TOKEN=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
# Use with any OpenAI-compatible platform
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1" # Qwen API endpoint
export OPENAI_MODEL="qwen3-coder-plus"
```
### Method B: Use Alibaba Cloud DashScope (Alternative)
If you have Alibaba Cloud API key (paid):
```bash
# For China users
export OPENAI_API_KEY="your-dashscope-api-key"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# For International users
export OPENAI_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
# For US users
export OPENAI_BASE_URL="https://dashscope-us.aliyuncs.com/compatible-mode/v1"
```
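The regional endpoints above can be wrapped in a small helper so scripts pick the right base URL automatically; `dashscope_url` is an illustrative name, and the URLs are the ones listed above:

```shell
# Map a region code to its DashScope OpenAI-compatible base URL.
dashscope_url() {
  case "$1" in
    cn)   echo "https://dashscope.aliyuncs.com/compatible-mode/v1" ;;
    intl) echo "https://dashscope-intl.aliyuncs.com/compatible-mode/v1" ;;
    us)   echo "https://dashscope-us.aliyuncs.com/compatible-mode/v1" ;;
    *)    echo "unknown region: $1" >&2; return 1 ;;
  esac
}

export OPENAI_BASE_URL="$(dashscope_url intl)"
```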
## Step 3: Platform-Specific Configuration
### OpenClaw with Qwen OAuth
```bash
# Install OpenClaw
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
# Configure with Qwen
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# Or in .env file
cat > .env << ENVEOF
OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
OPENAI_BASE_URL=https://api.qwen.ai/v1
OPENAI_MODEL=qwen3-coder-plus
ENVEOF
# Start OpenClaw
npm run start
```
### NanoBot with Qwen OAuth
```bash
# Install NanoBot
pip install nanobot-ai

# Configure
mkdir -p ~/.nanobot
cat > ~/.nanobot/config.json << 'CONFIG'
{
  "providers": {
    "qwen": {
      "apiKey": "${QWEN_TOKEN}",
      "baseURL": "https://api.qwen.ai/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "qwen/qwen3-coder-plus"
    }
  }
}
CONFIG

# Export token and run
export QWEN_TOKEN=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
nanobot gateway
```
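One subtlety in the NanoBot config above: the quoted heredoc delimiter (`<< 'CONFIG'`) writes `${QWEN_TOKEN}` into the file literally, leaving expansion to the platform at runtime (an assumption of that config); an unquoted delimiter would substitute the shell's value at write time. A minimal demonstration:

```shell
QWEN_TOKEN="demo-token"

# Quoted delimiter: no expansion, the placeholder survives verbatim.
cat << 'EOF'
literal: ${QWEN_TOKEN}
EOF

# Unquoted delimiter: the shell expands the variable while writing.
cat << EOF
expanded: ${QWEN_TOKEN}
EOF
```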
### PicoClaw with Qwen OAuth
```bash
# Install PicoClaw
wget https://github.com/sipeed/picoclaw/releases/latest/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64
sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
# Configure with environment variables
export OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"
# Run
picoclaw gateway
```
### ZeroClaw with Qwen OAuth
```bash
# Install ZeroClaw
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64
sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
# Configure
export OPENAI_API_KEY=$(cat ~/.qwen/oauth-token.json | jq -r '.access_token')
export OPENAI_PROVIDER="openai"
export OPENAI_MODEL="qwen3-coder-plus"
# Run
zeroclaw gateway
```
## Automation Script: Import Qwen OAuth
```bash
#!/bin/bash
# import-qwen-oauth.sh - Import Qwen OAuth to any platform
set -e

echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║              QWEN OAUTH CROSS-PLATFORM IMPORTER               ║"
echo "╚═══════════════════════════════════════════════════════════════╝"

# Check if Qwen Code is authenticated
if [ ! -d ~/.qwen ]; then
  echo "❌ Qwen Code not authenticated. Run: qwen && /auth"
  exit 1
fi

# Find and extract token
TOKEN_FILE=$(find ~/.qwen -name "*.json" -type f | head -1)
if [ -z "$TOKEN_FILE" ]; then
  echo "❌ No OAuth token found in ~/.qwen/"
  exit 1
fi

# Extract access token (field name varies between versions)
QWEN_TOKEN=$(jq -r '.access_token // .token // .accessToken' "$TOKEN_FILE" 2>/dev/null)
if [ -z "$QWEN_TOKEN" ] || [ "$QWEN_TOKEN" = "null" ]; then
  echo "❌ Could not extract token from $TOKEN_FILE"
  echo "   Try re-authenticating: qwen && /auth"
  exit 1
fi
echo "✅ Found Qwen OAuth token"
echo ""

# Export for current session
export OPENAI_API_KEY="$QWEN_TOKEN"
export OPENAI_BASE_URL="https://api.qwen.ai/v1"
export OPENAI_MODEL="qwen3-coder-plus"

# Also save to .env for persistence (readable only by the owner)
cat > ~/.qwen/.env << ENVEOF
OPENAI_API_KEY=$QWEN_TOKEN
OPENAI_BASE_URL=https://api.qwen.ai/v1
OPENAI_MODEL=qwen3-coder-plus
ENVEOF
chmod 600 ~/.qwen/.env

echo "✅ Environment variables set:"
echo "   OPENAI_API_KEY=***${QWEN_TOKEN: -8}"
echo "   OPENAI_BASE_URL=https://api.qwen.ai/v1"
echo "   OPENAI_MODEL=qwen3-coder-plus"
echo ""
echo "✅ Saved to ~/.qwen/.env for persistence"
echo ""
echo "Usage for other platforms:"
echo "  source ~/.qwen/.env && openclaw"
echo "  source ~/.qwen/.env && nanobot gateway"
echo "  source ~/.qwen/.env && picoclaw gateway"
echo "  source ~/.qwen/.env && zeroclaw gateway"
```
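The importer never echoes the full token: bash's `${QWEN_TOKEN: -8}` expansion (the space before `-8` is required, otherwise it becomes a default-value expansion) keeps only the last eight characters. As a standalone sketch (`mask_token` is an illustrative name):

```shell
# Print a token with all but the last 8 characters masked.
mask_token() {
  printf '***%s\n' "${1: -8}"
}

mask_token "sk-abcdefghijklmnop"   # -> ***ijklmnop
```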
## Qwen API Endpoints
| Endpoint | Region | Type | Use Case |
|----------|--------|------|----------|
| `https://api.qwen.ai/v1` | Global | OAuth | FREE tier with OAuth token |
| `https://dashscope.aliyuncs.com/compatible-mode/v1` | China | API Key | Alibaba Cloud paid |
| `https://dashscope-intl.aliyuncs.com/compatible-mode/v1` | International | API Key | Alibaba Cloud paid |
| `https://dashscope-us.aliyuncs.com/compatible-mode/v1` | US | API Key | Alibaba Cloud paid |
| `https://api-inference.modelscope.cn/v1` | China | API Key | ModelScope (free tier) |
## Qwen Models Available
| Model | Context | Best For |
|-------|---------|----------|
| `qwen3-coder-plus` | 128K | General coding (recommended) |
| `qwen3-coder-next` | 128K | Latest features |
| `qwen3.5-plus` | 128K | General purpose |
| `Qwen/Qwen3-Coder-480B-A35B-Instruct` | 128K | Via ModelScope |
## Usage Examples
```
"Setup OpenClaw with Qwen OAuth free tier"
"Import Qwen OAuth to NanoBot"
"Configure PicoClaw with free Qwen3-Coder"
"Use Qwen free tier with ZeroClaw"
```
## Troubleshooting
### Token Not Found
```bash
# Re-authenticate with Qwen Code
qwen
/auth # Select Qwen OAuth
# Check token location
ls -la ~/.qwen/
find ~/.qwen -name "*.json"
```
### Token Expired
```bash
# Tokens auto-refresh in Qwen Code
# Just run any command in qwen to refresh
qwen -p "hello"
# Then re-export
source ~/.qwen/.env
```
### API Errors
```bash
# Verify token is valid
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.qwen.ai/v1/models
# Check rate limits (FREE tier: 60 req/min, 2000/day)
```
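If the token is valid, `/v1/models` should return the usual OpenAI-compatible shape, `{"data": [{"id": ...}, ...]}` (an assumption based on Qwen's OpenAI compatibility, not a documented guarantee); the model IDs can then be pulled out with `jq`:

```shell
# Extract model IDs from an OpenAI-compatible /v1/models response.
list_model_ids() {
  jq -r '.data[].id'
}

# Example with a canned response; pipe `curl -s .../v1/models` in practice.
echo '{"data":[{"id":"qwen3-coder-plus"},{"id":"qwen3-coder-next"}]}' | list_model_ids
```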
## 25+ Other AI Providers
See full list in README.md - Anthropic, OpenAI, Google, xAI, Mistral, Groq, Cerebras, etc.