feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)

New platform option with no API key required: Qwen Code

Features:
- FREE OAuth tier: 2,000 requests/day
- Model: Qwen3-Coder (coder-model)
- Auth: Browser OAuth via qwen.ai
- GitHub: https://github.com/QwenLM/qwen-code

Installation:
    npm install -g @qwen-code/qwen-code@latest
    qwen
    /auth  # Select Qwen OAuth

Platform comparison updated:
- Qwen Code: FREE, ~200MB, coding-optimized
- OpenClaw: Full-featured, 1700+ plugins
- NanoBot: Python, research
- PicoClaw: Go, <10MB
- ZeroClaw: Rust, <5MB

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "qwen-code", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform.
version: 1.1.0
---

# Claw Setup Skill

End-to-end professional setup of AI Agent platforms with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.

## Supported Platforms

| Platform | Language | Size | Startup | Best For |
|----------|----------|------|---------|----------|
| **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
| **Qwen Code** | TypeScript | ~200MB | ~5s | **FREE OAuth tier, Qwen3-Coder** |

## Qwen Code (FREE OAuth Tier) ⭐

**Special: Free 2,000 requests/day with Qwen OAuth!**

| Feature | Details |
|---------|---------|
| **Model** | Qwen3-Coder (coder-model) |
| **Free Tier** | 2,000 requests/day via OAuth |
| **Auth** | qwen.ai account (browser OAuth) |
| **GitHub** | https://github.com/QwenLM/qwen-code |
| **License** | Apache 2.0 |
### Installation

```bash
# NPM (recommended)
npm install -g @qwen-code/qwen-code@latest

# Homebrew (macOS, Linux)
brew install qwen-code

# Or from source
git clone https://github.com/QwenLM/qwen-code.git
cd qwen-code
npm install
npm run build
```

### Quick Start

```bash
# Start interactive mode
qwen

# In session, authenticate with free OAuth
/auth
# Select "Qwen OAuth" -> browser opens -> sign in with qwen.ai

# Or use an OpenAI-compatible API
export OPENAI_API_KEY="your-key"
export OPENAI_MODEL="qwen3-coder"
qwen
```

### Qwen Code Features

- **Free OAuth Tier**: 2,000 requests/day, no API key needed
- **Qwen3-Coder Model**: Optimized for coding tasks
- **OpenAI-Compatible**: Works with any OpenAI-compatible API
- **IDE Integration**: VS Code, Zed, JetBrains
- **Headless Mode**: For CI/CD automation
- **TypeScript SDK**: Build custom integrations
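Headless mode can be scripted from CI. A minimal sketch of building such an invocation follows; the `-m`/`-p` flags are assumptions modeled on Gemini-CLI-style tools, so verify the real flags with `qwen --help` before relying on this:

```python
def headless_review(prompt: str, model: str = "qwen3-coder") -> list[str]:
    """Build an (assumed) non-interactive qwen invocation for CI scripts."""
    # -m/--model and -p/--prompt are assumed flag names, not confirmed here.
    return ["qwen", "-m", model, "-p", prompt]

cmd = headless_review("List TODO comments in src/ and rank by severity")
# To actually run it in CI:
#   import subprocess
#   subprocess.run(cmd, capture_output=True, text=True, check=True)
```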

### Configuration

```json
// ~/.qwen/settings.json
{
  "model": "qwen3-coder-480b",
  "temperature": 0.7,
  "maxTokens": 4096
}
```
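A loader for this settings file would typically merge user values over defaults. An illustrative sketch (the default values mirror the example above; this loader is not part of Qwen Code itself):

```python
import json
from pathlib import Path

# Defaults taken from the example settings above.
DEFAULTS = {"model": "qwen3-coder-480b", "temperature": 0.7, "maxTokens": 4096}

def load_settings(path: Path = Path.home() / ".qwen" / "settings.json") -> dict:
    """Merge the user's settings file over DEFAULTS; missing file -> defaults."""
    try:
        user = json.loads(path.read_text())
    except FileNotFoundError:
        user = {}
    return {**DEFAULTS, **user}
```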

## AI Providers (25+ Supported)

### Built-in Providers

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Qwen OAuth** | Free tier | Qwen3-Coder | **2,000 free req/day** |
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling |
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, 3 Pro | Multimodal |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Google Cloud |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS integration |
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **xAI** | `@ai-sdk/xai` | Grok | Real-time data |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused |
| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-fast inference |
| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective |
| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning |
| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Web search |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting |
| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI |
| **GitHub Copilot** | Custom | GPT-5 series | IDE integration |

### Custom Loader Providers

| Provider | Auth Method | Use Case |
|----------|-------------|----------|
| **GitHub Copilot Enterprise** | OAuth + API Key | Enterprise IDE integration |
| **Google Vertex Anthropic** | GCP Service Account | Claude on Google Cloud |
| **Azure Cognitive Services** | Azure AD | Azure AI services |
| **Cloudflare AI Gateway** | Gateway Token | Unified billing, rate limiting |
| **SAP AI Core** | Service Key | SAP enterprise integration |
| **OpenCode Free** | None | Free public models |
### Local/Self-Hosted

| Provider | Endpoint | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |
| **LocalAI** | localhost:8080 | OpenAI-compatible local |
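To check which of these local backends is actually running, one can probe the ports from the table. An illustrative sketch:

```python
import socket

# Ports from the Local/Self-Hosted table above.
LOCAL_BACKENDS = {
    "Ollama": 11434,
    "LM Studio": 1234,
    "vLLM": 8000,
    "LocalAI": 8080,
}

def is_up(port: int, host: str = "127.0.0.1", timeout: float = 0.25) -> bool:
    """True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

running = {name: is_up(port) for name, port in LOCAL_BACKENDS.items()}
```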

## Fetch Available Models

```bash
# OpenRouter - All models
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'

# OpenAI - GPT models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'

# Anthropic (static list)
# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022

# Google Gemini
curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"

# Groq
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"

# Together AI
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY"

# Ollama (local)
curl -s http://localhost:11434/api/tags

# models.dev - Universal model list
curl -s https://models.dev/api/models.json
```

## Custom Model Support

```json
{
  "customModels": {
    "my-fine-tuned-gpt": {
      "provider": "openai",
      "modelId": "ft:gpt-4o:my-org:custom:suffix",
      "displayName": "My Custom GPT-4o"
    },
    "local-llama": {
      "provider": "ollama",
      "modelId": "llama3.2:70b",
      "displayName": "Local Llama 3.2 70B"
    },
    "openrouter-custom": {
      "provider": "openrouter",
      "modelId": "custom-org/my-model",
      "displayName": "Custom via OpenRouter"
    }
  }
}
```

## Platform Selection Guide

```
          ┌─────────────────┐
          │  Need AI Agent? │
          └────────┬────────┘
                   │
                   ▼
        ┌───────────────────────┐
        │    Want FREE tier?    │
        └───────────┬───────────┘
              ┌─────┴─────┐
              │           │
             YES          NO
              │           │
              ▼           ▼
      ┌──────────────┐  ┌───────────────────┐
      │  Qwen Code   │  │Memory constrained?│
      │ (OAuth FREE) │  └─────────┬─────────┘
      │   2000/day   │      ┌─────┴─────┐
      └──────────────┘      │           │
                           YES          NO
                            │           │
                            ▼           ▼
                      ┌──────────┐  ┌──────────┐
                      │ZeroClaw/ │  │ OpenClaw │
                      │ PicoClaw │  │  (Full)  │
                      └──────────┘  └──────────┘
```
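The flowchart above reduces to two questions, so it can be encoded directly. A small illustrative helper:

```python
def choose_platform(want_free_tier: bool, memory_constrained: bool = False) -> str:
    """Encode the platform-selection flowchart above."""
    if want_free_tier:
        return "Qwen Code"          # OAuth free tier, 2,000 requests/day
    if memory_constrained:
        return "ZeroClaw/PicoClaw"  # Rust <5MB / Go <10MB
    return "OpenClaw"               # full-featured
```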

## Installation Commands

### Qwen Code (FREE)

```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth  # Select Qwen OAuth for free tier
```
### OpenClaw

```bash
git clone https://github.com/openclaw/openclaw.git
```

### ZeroClaw

```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```
## Multi-Provider Configuration

```json
{
  "providers": {
    "qwen": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000
    },
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "qwen/qwen3-coder" },
    "premium": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.2:70b" }
  }
}
```
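The `${VAR}` placeholders in configs like this are typically expanded from the environment at load time. A minimal sketch of such a resolver (illustrative only; not the actual loader of any Claw platform):

```python
import os
import re

# Matches ${UPPER_SNAKE_CASE} placeholders as used in the config above.
_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_env(value: str, env=None) -> str:
    """Replace ${VAR} with the environment value; leave unknown vars intact."""
    env = os.environ if env is None else env
    return _PLACEHOLDER.sub(lambda m: env.get(m.group(1), m.group(0)), value)
```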

## Security Hardening

```bash
# Environment variables for API keys
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Qwen OAuth - no key needed, browser auth

# Restricted config permissions
chmod 600 ~/.qwen/settings.json
chmod 600 ~/.config/claw/config.json

# Systemd hardening (service unit settings, not shell commands)
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
```
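The `chmod 600` hardening can be verified programmatically. An illustrative check:

```python
import os
import stat

def is_private(path: str) -> bool:
    """True if the file grants no group/other permissions (e.g. mode 600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0
```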

## Brainstorm Session Topics

1. **Platform Selection**: Free tier vs paid, features needed
2. **Provider Selection**: Which AI providers to configure
3. **Model Selection**: Fetch available models or enter custom model IDs
4. **Integrations**: Messaging, calendar, storage
5. **Deployment**: Local, VPS, cloud
6. **Custom Agents**: Personality, memory, proactivity