---
name: claw-setup
description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "qwen-code", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform.
version: 1.1.0
---

# Claw Setup Skill

End-to-end professional setup of AI agent platforms with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.

## Supported Platforms

| Platform | Language | Memory | Startup | Best For |
|----------|----------|--------|---------|----------|
| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, plugin ecosystem |
| **NanoBot** | Python | ~100MB | ~30s | Research, easy customization |
| **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
| **Qwen Code** | TypeScript | ~200MB | ~5s | **FREE OAuth tier, Qwen3-Coder** |

## Qwen Code (FREE OAuth Tier) ⭐

**Special: Free 2,000 requests/day with Qwen OAuth!**

| Feature | Details |
|---------|---------|
| **Model** | Qwen3-Coder (coder-model) |
| **Free Tier** | 2,000 requests/day via OAuth |
| **Auth** | qwen.ai account (browser OAuth) |
| **GitHub** | https://github.com/QwenLM/qwen-code |
| **License** | Apache 2.0 |

### Installation

```bash
# NPM (recommended)
npm install -g @qwen-code/qwen-code@latest

# Homebrew (macOS, Linux)
brew install qwen-code

# Or from source
git clone https://github.com/QwenLM/qwen-code.git
cd qwen-code
npm install
npm run build
```

### Quick Start

```bash
# Start interactive mode
qwen

# In session, authenticate with the free OAuth tier
/auth
# Select "Qwen OAuth" -> browser opens -> sign in with qwen.ai

# Or use an OpenAI-compatible API
export OPENAI_BASE_URL="https://your-endpoint/v1"  # your provider's OpenAI-compatible endpoint
export OPENAI_API_KEY="your-key"
export OPENAI_MODEL="qwen3-coder"
qwen
```

### Qwen Code Features

- **Free OAuth Tier**: 2,000 requests/day, no API key needed
- **Qwen3-Coder Model**: Optimized for coding tasks
- **OpenAI-Compatible**: Works with any OpenAI-compatible API
- **IDE Integration**: VS Code, Zed, JetBrains
- **Headless Mode**: For CI/CD automation
- **TypeScript SDK**: Build custom integrations

### Configuration

`~/.qwen/settings.json`:

```json
{
  "model": "qwen3-coder-480b",
  "temperature": 0.7,
  "maxTokens": 4096
}
```

## AI Providers (25+ Supported)

### Built-in Providers

| Provider | SDK Package | Key Models | Features |
|----------|-------------|------------|----------|
| **Qwen OAuth** | Free tier | Qwen3-Coder | **2,000 free req/day** |
| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking |
| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling |
| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
| **Google AI** | `@ai-sdk/google` | Gemini 2.5, 3 Pro | Multimodal |
| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Google Cloud |
| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS integration |
| **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
| **xAI** | `@ai-sdk/xai` | Grok | Real-time data |
| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused |
| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-fast inference |
| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated |
| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective |
| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG |
| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning |
| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Web search |
| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting |
| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI |
| **GitHub Copilot** | Custom | GPT-5 series | IDE integration |

### Local/Self-Hosted

| Provider | Base URL | Use Case |
|----------|----------|----------|
| **Ollama** | localhost:11434 | Local model hosting |
| **LM Studio** | localhost:1234 | GUI local models |
| **vLLM** | localhost:8000 | High-performance serving |

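Before pointing an agent at a local server, it helps to confirm it is answering. Ollama serves its own `/api/tags` model-list route, while LM Studio and vLLM expose OpenAI-compatible `/v1/models`. A small stdlib probe:

```python
import urllib.request

# Ollama uses its native API; LM Studio and vLLM serve OpenAI-compatible routes.
HEALTH_URLS = {
    "ollama": "http://localhost:11434/api/tags",
    "lmstudio": "http://localhost:1234/v1/models",
    "vllm": "http://localhost:8000/v1/models",
}

def is_up(provider: str, timeout: float = 2.0) -> bool:
    """Return True if the local server answers its model-list endpoint."""
    try:
        with urllib.request.urlopen(HEALTH_URLS[provider], timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, etc.
        return False
```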
## Platform Selection Guide

```
        ┌─────────────────┐
        │  Need AI Agent? │
        └────────┬────────┘
                 │
                 ▼
      ┌───────────────────────┐
      │    Want FREE tier?    │
      └───────────┬───────────┘
            ┌─────┴─────┐
            │           │
           YES          NO
            │           │
            ▼           ▼
   ┌──────────────┐ ┌─────────────────────┐
   │  Qwen Code   │ │ Memory constrained? │
   │ (OAuth FREE) │ └──────────┬──────────┘
   │   2000/day   │      ┌─────┴─────┐
   └──────────────┘      │           │
                        YES          NO
                         │           │
                         ▼           ▼
                   ┌──────────┐ ┌──────────┐
                   │ZeroClaw/ │ │ OpenClaw │
                   │PicoClaw  │ │ (Full)   │
                   └──────────┘ └──────────┘
```

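The same decision tree can be expressed as a function, using the platform names from the comparison table above:

```python
def choose_platform(want_free_tier: bool, memory_constrained: bool = False) -> str:
    """Encode the selection flowchart as code."""
    if want_free_tier:
        return "Qwen Code"          # OAuth free tier, 2,000 requests/day
    if memory_constrained:
        return "ZeroClaw/PicoClaw"  # <5MB / <10MB binaries
    return "OpenClaw"               # full-featured, needs >1GB
```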
## Installation Commands

### Qwen Code (FREE)

```bash
npm install -g @qwen-code/qwen-code@latest
qwen
/auth   # inside the qwen session: select Qwen OAuth for the free tier
```

### OpenClaw

```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm run setup
```

### NanoBot

```bash
pip install nanobot-ai
nanobot onboard
```

### PicoClaw

```bash
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw-linux-amd64
chmod +x picoclaw-linux-amd64 && sudo mv picoclaw-linux-amd64 /usr/local/bin/picoclaw
```

### ZeroClaw

```bash
wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-linux-amd64
chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
```

## Multi-Provider Configuration

```json
{
  "providers": {
    "qwen": {
      "type": "oauth",
      "free": true,
      "daily_limit": 2000
    },
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "openai": { "apiKey": "${OPENAI_API_KEY}" },
    "google": { "apiKey": "${GOOGLE_API_KEY}" },
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "groq": { "apiKey": "${GROQ_API_KEY}" },
    "ollama": { "baseURL": "http://localhost:11434" }
  },
  "agents": {
    "defaults": { "model": "qwen/qwen3-coder" },
    "premium": { "model": "anthropic/claude-sonnet-4-5" },
    "fast": { "model": "groq/llama-3.3-70b-versatile" },
    "local": { "model": "ollama/llama3.3:70b" }
  }
}
```

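One way to act on the `agents` tiers above is a small router that drops off the free Qwen tier once its 2,000 requests/day quota is spent. This is a sketch: the `used_today` counter is a hypothetical stand-in for whatever usage tracking your platform actually exposes.

```python
# Agent tiers copied from the config above.
AGENTS = {
    "defaults": "qwen/qwen3-coder",
    "premium": "anthropic/claude-sonnet-4-5",
    "fast": "groq/llama-3.3-70b-versatile",
    "local": "ollama/llama3.3:70b",
}
QWEN_DAILY_LIMIT = 2000  # free OAuth tier quota

def pick_model(tier: str = "defaults", used_today: int = 0) -> str:
    """Return the tier's model, falling back to the local model once
    the free Qwen quota is exhausted (so the day stays zero-cost)."""
    model = AGENTS[tier]
    if model.startswith("qwen/") and used_today >= QWEN_DAILY_LIMIT:
        return AGENTS["local"]
    return model
```

Falling back to `local` rather than `premium` keeps the default path free; swap the fallback tier if you prefer quality over cost.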
## Security Hardening

```bash
# Environment variables for API keys
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Qwen OAuth - no key needed, browser auth

# Restrict config file permissions to the owner
chmod 600 ~/.qwen/settings.json
chmod 600 ~/.config/claw/config.json
```

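The `chmod 600` steps above can also be verified programmatically. A small audit helper (the example file list is just the two configs shown above):

```python
import os
import stat
from pathlib import Path

def is_locked_down(path: Path) -> bool:
    """True if the file exists with owner-only permissions (mode 600),
    matching the chmod commands above."""
    if not path.exists():
        return False
    return stat.S_IMODE(os.stat(path).st_mode) == 0o600

def audit(paths: list[Path]) -> list[Path]:
    """Return config files that are missing or too permissive."""
    return [p for p in paths if not is_locked_down(p)]

# Example:
# audit([Path.home() / ".qwen" / "settings.json",
#        Path.home() / ".config" / "claw" / "config.json"])
```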
## Brainstorm Session Topics

1. **Platform Selection**: Free tier vs. paid, features needed
2. **Provider Selection**: Which AI providers to configure
3. **Model Selection**: Fetch models or input custom
4. **Integrations**: Messaging, calendar, storage
5. **Deployment**: Local, VPS, cloud
6. **Custom Agents**: Personality, memory, proactivity