feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)
New platform option with no API key required: Qwen Code

Features:
- FREE OAuth tier: 2,000 requests/day
- Model: Qwen3-Coder (coder-model)
- Auth: Browser OAuth via qwen.ai
- GitHub: https://github.com/QwenLM/qwen-code

Installation:
  npm install -g @qwen-code/qwen-code@latest
  qwen
  /auth   # Select Qwen OAuth

Platform comparison updated:
- Qwen Code: FREE, ~200MB, coding-optimized
- OpenClaw: Full-featured, 1700+ plugins
- NanoBot: Python, research
- PicoClaw: Go, <10MB
- ZeroClaw: Rust, <5MB

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

### Professional AI Agent Deployment Made Simple
**End-to-end setup of Claw platforms + Qwen Code FREE tier with 25+ AI providers**
---
## Overview
Claw Setup handles complete deployment of AI Agent platforms with **Qwen Code FREE tier** and **25+ AI provider integrations**.
```
┌─────────────────────────────────────────────────────────────────┐
│                       PLATFORMS SUPPORTED                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ⭐ FREE TIER                                                   │
│  ───────────                                                    │
│  🤖 Qwen Code  TypeScript   ~200MB   FREE OAuth                 │
│     • 2,000 requests/day FREE                                   │
│     • Qwen3-Coder model                                         │
│     • No API key needed                                         │
│                                                                 │
│  🦞 FULL-FEATURED                                               │
│  ───────────────                                                │
│  OpenClaw     TypeScript   >1GB     1700+ plugins               │
│  NanoBot      Python       ~100MB   Research-ready              │
│  PicoClaw     Go           <10MB    $10 hardware                │
│  ZeroClaw     Rust         <5MB     10ms startup                │
│  NanoClaw     TypeScript   ~50MB    WhatsApp                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## ⭐ Qwen Code (FREE OAuth Tier)

**Special: 2,000 FREE requests/day - No API key needed!**

| Feature | Details |
|---------|---------|
| **Model** | Qwen3-Coder (coder-model) |
| **Free Tier** | 2,000 requests/day |
| **Auth** | Browser OAuth via qwen.ai |
| **GitHub** | https://github.com/QwenLM/qwen-code |

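As a quick capacity check, the 2,000 requests/day quota works out to a steady hourly budget (plain integer arithmetic, nothing Qwen-specific):

```shell
# Daily free-tier quota spread evenly over 24 hours (integer floor)
requests_per_day=2000
echo "$((requests_per_day / 24)) requests/hour"   # → 83 requests/hour
```

Anything sustained above that rate will exhaust the quota before the day rolls over.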
### Quick Start
```bash
# Install
npm install -g @qwen-code/qwen-code@latest

# Start
qwen

# Authenticate (FREE)
/auth
# Select "Qwen OAuth" -> Browser opens -> Sign in with qwen.ai
```

### Features
- ✅ **FREE**: 2,000 requests/day
- ✅ **No API Key**: Browser OAuth authentication
- ✅ **Qwen3-Coder**: Optimized for coding
- ✅ **OpenAI-Compatible**: Works with other APIs too
- ✅ **IDE Integration**: VS Code, Zed, JetBrains
- ✅ **Headless Mode**: CI/CD automation

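The headless mode suggests a CI hook; a minimal sketch, assuming the CLI accepts a prompt via a `-p` flag (verify against `qwen --help` — the flag and the file names here are illustrative, not taken from the Qwen Code docs):

```shell
# Hypothetical CI step: ask the agent to review a diff non-interactively
prompt="Summarize risky changes in this diff"
qwen -p "$prompt" < changes.diff > review.txt
```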
## Platform Comparison
| Platform | Memory | Startup | Free? | Best For |
|----------|--------|---------|-------|----------|
| **Qwen Code** | ~200MB | ~5s | ✅ 2K/day | **Coding, FREE tier** |
| OpenClaw | >1GB | ~500s | ❌ | Full-featured |
| NanoBot | ~100MB | ~30s | ❌ | Research |
| PicoClaw | <10MB | ~1s | ❌ | Embedded |
| ZeroClaw | <5MB | <10ms | ❌ | Performance |
| NanoClaw | ~50MB | ~5s | ❌ | WhatsApp |

## Decision Flowchart
```
        ┌─────────────────┐
        │ Need AI Agent?  │
        └────────┬────────┘
                 │
                 ▼
      ┌───────────────────────┐
      │    Want FREE tier?    │
      └───────────┬───────────┘
            ┌─────┴─────┐
            │           │
           YES          NO
            │           │
            ▼           ▼
   ┌──────────────┐ ┌──────────────────┐
   │ ⭐ Qwen Code │ │ Memory limited?  │
   │  OAuth FREE  │ └────────┬─────────┘
   │  2000/day    │     ┌────┴─────┐
   └──────────────┘     │          │
                       YES         NO
                        │          │
                        ▼          ▼
                  ┌──────────┐ ┌──────────┐
                  │ZeroClaw/ │ │OpenClaw  │
                  │PicoClaw  │ │(Full)    │
                  └──────────┘ └──────────┘
```

## AI Providers (25+ Supported)
### Tier 1: FREE
| Provider | Free Tier | Models |
|----------|-----------|--------|
| **Qwen OAuth** | 2,000/day | Qwen3-Coder |

### Tier 2: Major AI Labs
| Provider | Models | Features |
|----------|--------|----------|
| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF support |
| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
| **Google AI** | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
| **xAI** | Grok | Real-time data integration |
| **Mistral** | Mistral Large, Codestral | Code-focused models |

### Tier 3: Fast Inference
| Provider | Speed | Models |
|----------|-------|--------|
| **Groq** | Ultra-fast | Llama 3, Mixtral |
| **Cerebras** | Fastest | Llama 3 variants |

### Tier 4: Gateways & Local
| Provider | Type | Models |
|----------|------|--------|
| **OpenRouter** | Gateway | 100+ models |
| **Together AI** | Hosting | Open source |
| **Ollama** | Local | Self-hosted |
| **LM Studio** | Local | GUI self-hosted |

## Quick Start Examples

### Option 1: FREE Qwen Code
```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth   # Select Qwen OAuth
```

### Option 2: With Your Own API Keys
```bash
# Configure providers
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export GOOGLE_API_KEY="your-key"

# Or use OpenRouter for 100+ models
export OPENROUTER_API_KEY="your-key"
```

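With `OPENROUTER_API_KEY` exported, you can check which model IDs the key unlocks; OpenRouter exposes a public models endpoint (requires `curl` and `jq`):

```shell
# Fetch model IDs available through OpenRouter
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq -r '.data[].id'
```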
### Option 3: Local Models
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull llama3.2:70b

# Use with Claw platforms (verify Ollama is serving)
curl -s http://localhost:11434/api/tags
```

## Configuration Example

```json
{
  "providers": {
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```

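Because the config leans on `${VAR}` expansion, a missing export only fails at runtime. A small sketch that scans a config file (the path `claw.json` is an assumption) and flags unset variables:

```shell
# Flag any ${UPPER_CASE} placeholder in the config whose variable is not exported
for var in $(grep -o '\${[A-Z_]*}' claw.json | tr -d '${}' | sort -u); do
  [ -n "$(printenv "$var")" ] || echo "missing: $var"
done
```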
## Security
- API keys via environment variables
- Restricted config permissions (chmod 600)
- Systemd hardening (NoNewPrivileges, PrivateTmp)
- Network binding to localhost

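The systemd bullets above can be sketched as a unit file; a minimal example assuming a `claw` binary and an env file at `/etc/claw/env` (both names illustrative):

```ini
# /etc/systemd/system/claw.service — hardening sketch
[Service]
ExecStart=/usr/local/bin/claw serve --host 127.0.0.1
EnvironmentFile=/etc/claw/env
NoNewPrivileges=true
PrivateTmp=true
```

Keep `/etc/claw/env` at mode 600 so API keys stay out of the unit file and the process list.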

## Usage Examples

```
"Setup Qwen Code with free OAuth tier"
"Install OpenClaw with Anthropic provider"
"Configure Claw with all free options"
"Setup ZeroClaw with Groq for fast inference"
"Fetch available models from OpenRouter"
```

---