feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)

New platform option with no API key required: Qwen Code

Features:
- FREE OAuth tier: 2,000 requests/day
- Model: Qwen3-Coder (coder-model)
- Auth: Browser OAuth via qwen.ai
- GitHub: https://github.com/QwenLM/qwen-code

Installation:
  npm install -g @qwen-code/qwen-code@latest
  qwen
  /auth   # Select Qwen OAuth

Platform comparison updated:
- Qwen Code: FREE, ~200MB, coding-optimized
- OpenClaw: Full-featured, 1700+ plugins
- NanoBot: Python, research
- PicoClaw: Go, <10MB
- ZeroClaw: Rust, <5MB

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
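The 2,000-requests/day figure above translates into a pacing budget when the free tier backs automation. A small illustrative helper (the quota number comes from this commit message; the helper itself is not part of Qwen Code):

```shell
# Illustrative only: derive an hourly pacing budget from the free-tier quota.
DAILY_LIMIT=2000          # Qwen OAuth free tier, per this commit
ACTIVE_HOURS=8            # assumed working window, not a Qwen Code setting
echo "per-hour budget: $((DAILY_LIMIT / ACTIVE_HOURS)) requests"
# prints: per-hour budget: 250 requests
```

Spread over a full 24-hour day the same quota allows roughly 83 requests/hour.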
This commit is contained in:

README.md (+53 −53)
@@ -4,7 +4,7 @@
 
 ### Curated Collection of Custom Skills for Claude Code CLI
 
-**Automate system administration, security, and development workflows**
+**Automate system administration, security, and AI agent deployment**
 
 ---
 
@@ -31,7 +31,7 @@
 ### AI & Automation
 | Skill | Description | Status |
 |-------|-------------|--------|
-| [🦞 Claw Setup](./skills/claw-setup/) | End-to-end AI Agent deployment (OpenClaw, NanoBot, PicoClaw, ZeroClaw) | ✅ Production Ready |
+| [🦞 Claw Setup](./skills/claw-setup/) | AI Agent deployment (OpenClaw, NanoBot, **Qwen Code FREE**) | ✅ Production Ready |
 
 ### System Administration
 | Skill | Description | Status |
@@ -58,12 +58,40 @@
 
 ---
 
+## Featured: Claw Setup with FREE Qwen Code
+
+**⭐ Qwen Code: 2,000 FREE requests/day via OAuth - No API key needed!**
+
+```bash
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth for free tier
+```
+
+### Platforms Supported
+
+| Platform | Free? | Memory | Best For |
+|----------|-------|--------|----------|
+| **Qwen Code** | ✅ 2K/day | ~200MB | Coding, FREE tier |
+| OpenClaw | ❌ | >1GB | Full-featured, plugins |
+| NanoBot | ❌ | ~100MB | Research, Python |
+| PicoClaw | ❌ | <10MB | Embedded, $10 HW |
+| ZeroClaw | ❌ | <5MB | Performance, Rust |
+
+### 25+ AI Providers
+
+**FREE:** Qwen OAuth
+
+**Paid:** Anthropic, OpenAI, Google, xAI, Mistral, Groq, Cerebras, OpenRouter, Together AI, Cohere, Perplexity, and more
+
+**Local:** Ollama, LM Studio, vLLM
+
+---
+
 ## Quick Start
 
-Each skill works with Claude Code CLI. Simply ask:
 
 ```
-"Setup Claw AI assistant on my server"
+"Setup Qwen Code with free OAuth tier"
 "Run ram optimizer on my server"
 "Scan this directory for leaked secrets"
 "Setup automated backups to S3"
@@ -71,21 +99,6 @@ Each skill works with Claude Code CLI. Simply ask:
 
 ---
 
-## Featured: Claw Setup
-
-Professional deployment of AI Agent platforms:
-
-```
-OpenClaw → Full-featured, 1700+ plugins, 215K stars
-NanoBot  → Python, 4K lines, research-ready
-PicoClaw → Go, <10MB, $10 hardware
-ZeroClaw → Rust, <5MB, 10ms startup
-```
-
-Usage: `"Setup OpenClaw on my VPS with security hardening"`
-
----
-
 <p align="center">
 <a href="https://z.ai/subscribe?ic=R0K78RJKNW">Learn more about GLM 5 Advanced Coding Model</a>
 </p>
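The FREE/Paid/Local provider split added above implies a simple selection order. A hedged sketch of that fallback logic (the function name and model ids are illustrative; no Claw platform ships this helper):

```shell
# Illustrative fallback: use a paid key if one is exported,
# otherwise default to the free Qwen OAuth tier.
pick_model() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "anthropic/claude-sonnet-4-5"   # paid, key required
  elif [ -n "${OPENROUTER_API_KEY:-}" ]; then
    echo "openrouter/auto"               # paid gateway
  else
    echo "qwen/qwen3-coder"              # FREE OAuth tier, no key
  fi
}
unset ANTHROPIC_API_KEY OPENROUTER_API_KEY
pick_model   # prints qwen/qwen3-coder when no keys are set
```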
@@ -4,7 +4,7 @@
 
 ### Professional AI Agent Deployment Made Simple
 
-**End-to-end setup of Claw platforms with 25+ AI providers, security hardening, and personal customization**
+**End-to-end setup of Claw platforms + Qwen Code FREE tier with 25+ AI providers**
 
 ---
 
@@ -28,143 +28,173 @@
 
 ## Overview
 
-Claw Setup handles complete deployment of AI Agent platforms with **25+ AI provider integrations** (OpenCode compatible).
+Claw Setup handles complete deployment of AI Agent platforms with **Qwen Code FREE tier** and **25+ AI provider integrations**.
 
 ```
 ┌─────────────────────────────────────────────────────────────────┐
-│                      CLAW SETUP WORKFLOW                        │
+│                      PLATFORMS SUPPORTED                        │
 ├─────────────────────────────────────────────────────────────────┤
 │                                                                 │
-│   Phase 1       Phase 2       Phase 3       Phase 4             │
-│   ────────      ────────      ────────      ────────            │
+│   ⭐ FREE TIER                                                  │
+│   ───────────                                                   │
+│   🤖 Qwen Code    TypeScript    ~200MB     FREE OAuth           │
+│      • 2,000 requests/day FREE                                  │
+│      • Qwen3-Coder model                                        │
+│      • No API key needed                                        │
 │                                                                 │
-│   ┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐   │
-│   │ SELECT  │────►│ INSTALL │────►│CUSTOMIZE│────►│ DEPLOY  │   │
-│   │ Platform│     │& Secure │     │Providers│     │ & Run   │   │
-│   └─────────┘     └─────────┘     └─────────┘     └─────────┘   │
+│   🦞 FULL-FEATURED                                              │
+│   ───────────────                                               │
+│   OpenClaw    TypeScript    >1GB      1700+ plugins             │
+│   NanoBot     Python        ~100MB    Research-ready            │
+│   PicoClaw    Go            <10MB     $10 hardware              │
+│   ZeroClaw    Rust          <5MB      10ms startup              │
+│   NanoClaw    TypeScript    ~50MB     WhatsApp                  │
 │                                                                 │
 └─────────────────────────────────────────────────────────────────┘
 ```
 
-## Platforms Supported
+## ⭐ Qwen Code (FREE OAuth Tier)
 
-| Platform | Language | Memory | Startup | Best For |
-|----------|----------|--------|---------|----------|
-| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, 1700+ plugins |
-| **NanoBot** | Python | ~100MB | ~30s | Research, customization |
-| **PicoClaw** | Go | <10MB | ~1s | Embedded, $10 hardware |
-| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance |
-| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
+**Special: 2,000 FREE requests/day - No API key needed!**
+
+| Feature | Details |
+|---------|---------|
+| **Model** | Qwen3-Coder (coder-model) |
+| **Free Tier** | 2,000 requests/day |
+| **Auth** | Browser OAuth via qwen.ai |
+| **GitHub** | https://github.com/QwenLM/qwen-code |
+
+### Quick Start
+```bash
+# Install
+npm install -g @qwen-code/qwen-code@latest
+
+# Start
+qwen
+
+# Authenticate (FREE)
+/auth
+# Select "Qwen OAuth" -> Browser opens -> Sign in with qwen.ai
+```
+
+### Features
+- ✅ **FREE**: 2,000 requests/day
+- ✅ **No API Key**: Browser OAuth authentication
+- ✅ **Qwen3-Coder**: Optimized for coding
+- ✅ **OpenAI-Compatible**: Works with other APIs too
+- ✅ **IDE Integration**: VS Code, Zed, JetBrains
+- ✅ **Headless Mode**: CI/CD automation
+
+## Platform Comparison
+
+| Platform | Memory | Startup | Free? | Best For |
+|----------|--------|---------|-------|----------|
+| **Qwen Code** | ~200MB | ~5s | ✅ 2K/day | **Coding, FREE tier** |
+| OpenClaw | >1GB | ~500s | ❌ | Full-featured |
+| NanoBot | ~100MB | ~30s | ❌ | Research |
+| PicoClaw | <10MB | ~1s | ❌ | Embedded |
+| ZeroClaw | <5MB | <10ms | ❌ | Performance |
+
+## Decision Flowchart
+
+```
+         ┌─────────────────┐
+         │  Need AI Agent? │
+         └────────┬────────┘
+                  │
+                  ▼
+      ┌───────────────────────┐
+      │    Want FREE tier?    │
+      └───────────┬───────────┘
+            ┌─────┴─────┐
+            │           │
+           YES          NO
+            │           │
+            ▼           ▼
+   ┌──────────────┐  ┌──────────────────┐
+   │ ⭐ Qwen Code │  │  Memory limited? │
+   │  OAuth FREE  │  └────────┬─────────┘
+   │   2000/day   │     ┌─────┴─────┐
+   └──────────────┘     │           │
+                       YES          NO
+                        │           │
+                        ▼           ▼
+                  ┌──────────┐  ┌──────────┐
+                  │ZeroClaw/ │  │OpenClaw  │
+                  │PicoClaw  │  │(Full)    │
+                  └──────────┘  └──────────┘
+```
 
 ## AI Providers (25+ Supported)
 
-### Tier 1: Major AI Labs
+### Tier 1: FREE
+
+| Provider | Free Tier | Models |
+|----------|-----------|--------|
+| **Qwen OAuth** | 2,000/day | Qwen3-Coder |
+
+### Tier 2: Major AI Labs
 | Provider | Models | Features |
 |----------|--------|----------|
-| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF support |
-| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
-| **Google AI** | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
-| **xAI** | Grok | Real-time data integration |
-| **Mistral** | Mistral Large, Codestral | Code-focused models |
+| Anthropic | Claude 3.5/4/Opus | Extended thinking |
+| OpenAI | GPT-4o, o1, o3, GPT-5 | Function calling |
+| Google AI | Gemini 2.5, 3 Pro | Multimodal |
+| xAI | Grok | Real-time data |
+| Mistral | Large, Codestral | Code-focused |
 
-### Tier 2: Cloud Platforms
-
-| Provider | Models | Features |
-|----------|--------|----------|
-| **Azure OpenAI** | GPT-5, GPT-4o Enterprise | Azure integration |
-| **Google Vertex** | Claude, Gemini on GCP | Anthropic on Google |
-| **Amazon Bedrock** | Nova, Claude, Llama 3 | AWS regional prefixes |
-
-### Tier 3: Aggregators & Gateways
-
-| Provider | Models | Features |
-|----------|--------|----------|
-| **OpenRouter** | 100+ models | Multi-provider gateway |
-| **Vercel AI** | Multi-provider | Edge hosting, rate limiting |
-| **Together AI** | Open source | Fine-tuning, hosting |
-| **DeepInfra** | Open source | Cost-effective |
-
-### Tier 4: Fast Inference
-
+### Tier 3: Fast Inference
 | Provider | Speed | Models |
 |----------|-------|--------|
-| **Groq** | Ultra-fast | Llama 3, Mixtral |
-| **Cerebras** | Fastest | Llama 3 variants |
+| Groq | Ultra-fast | Llama 3, Mixtral |
+| Cerebras | Fastest | Llama 3 variants |
 
-### Tier 5: Specialized
+### Tier 4: Gateways & Local
 
-| Provider | Use Case |
-|----------|----------|
-| **Perplexity** | Web search integration |
-| **Cohere** | Enterprise RAG |
-| **GitLab Duo** | CI/CD integration |
-| **GitHub Copilot** | IDE integration |
-| **Cloudflare AI** | Gateway, rate limiting |
-| **SAP AI Core** | SAP enterprise |
-
-### Local/Self-Hosted
-
-| Provider | Use Case |
-|----------|----------|
-| **Ollama** | Local model hosting |
-| **LM Studio** | GUI local models |
-| **vLLM** | High-performance serving |
-
-## Model Selection
-
-**Option A: Fetch from Provider**
+| Provider | Type | Models |
+|----------|------|--------|
+| OpenRouter | Gateway | 100+ models |
+| Together AI | Hosting | Open source |
+| Ollama | Local | Self-hosted |
+| LM Studio | Local | GUI self-hosted |
+
+## Quick Start Examples
+
+### Option 1: FREE Qwen Code
 ```bash
-# Fetch available models
-curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
-curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
-curl -s http://localhost:11434/api/tags  # Ollama
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth
 ```
 
-**Option B: Custom Model Input**
-```json
-{
-  "provider": "openai",
-  "modelId": "ft:gpt-4o:org:custom:suffix",
-  "displayName": "My Fine-Tuned Model"
-}
+### Option 2: With Your Own API Keys
+```bash
+# Configure providers
+export ANTHROPIC_API_KEY="your-key"
+export OPENAI_API_KEY="your-key"
+export GOOGLE_API_KEY="your-key"
+
+# Or use OpenRouter for 100+ models
+export OPENROUTER_API_KEY="your-key"
 ```
 
-## Quick Start
+### Option 3: Local Models
+```bash
+# Install Ollama
+curl -fsSL https://ollama.com/install.sh | sh
 
-```
-"Setup OpenClaw with Anthropic and OpenAI providers"
-"Install NanoBot with all available providers"
-"Deploy ZeroClaw with Groq for fast inference"
-"Configure Claw with local Ollama models"
+# Pull model
+ollama pull llama3.2:70b
+
+# Use with Claw platforms
 ```
 
-## Configuration Example
+## Usage Examples
 
-```json
-{
-  "providers": {
-    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
-    "openai": { "apiKey": "${OPENAI_API_KEY}" },
-    "google": { "apiKey": "${GOOGLE_API_KEY}" },
-    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
-    "groq": { "apiKey": "${GROQ_API_KEY}" },
-    "ollama": { "baseURL": "http://localhost:11434" }
-  },
-  "agents": {
-    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
-    "fast": { "model": "groq/llama-3.3-70b-versatile" },
-    "local": { "model": "ollama/llama3.2:70b" }
-  }
-}
 ```
-
-## Security
-
-- API keys via environment variables
-- Restricted config permissions (chmod 600)
-- Systemd hardening (NoNewPrivileges, PrivateTmp)
-- Network binding to localhost
+"Setup Qwen Code with free OAuth tier"
+"Install OpenClaw with Anthropic provider"
+"Configure Claw with all free options"
+"Setup ZeroClaw with Groq for fast inference"
+"Fetch available models from OpenRouter"
+```
 
 ---
 
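The "Fetch available models from OpenRouter" example added above ends in a JSON model list. The parsing step can be sketched offline (the sample response is inlined rather than fetched, and python3 stands in for jq; both are illustrative, not part of the skill):

```shell
# Offline sketch: OpenRouter-style /models endpoints return {"data":[{"id": ...}]}.
# The response below is a hand-made sample, not live API output.
response='{"data":[{"id":"qwen/qwen3-coder"},{"id":"anthropic/claude-sonnet-4-5"}]}'
printf '%s' "$response" | python3 -c '
import json, sys
for model in json.load(sys.stdin)["data"]:
    print(model["id"])
'
# prints:
# qwen/qwen3-coder
# anthropic/claude-sonnet-4-5
```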
@@ -1,12 +1,12 @@
 ---
 name: claw-setup
-description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform from the Claw family (OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw).
+description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "qwen-code", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform.
-version: 1.0.0
+version: 1.1.0
 ---
 
 # Claw Setup Skill
 
-End-to-end professional setup of AI Agent platforms from the Claw family with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.
+End-to-end professional setup of AI Agent platforms with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.
 
 ## Supported Platforms
 
@@ -17,42 +17,93 @@ End-to-end professional setup of AI Agent platforms from the Claw family with se
 | **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
 | **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
 | **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
+| **Qwen Code** | TypeScript | ~200MB | ~5s | **FREE OAuth tier, Qwen3-Coder** |
 
-## AI Providers (OpenCode Compatible - 25+ Providers)
+## Qwen Code (FREE OAuth Tier) ⭐
 
+**Special: Free 2,000 requests/day with Qwen OAuth!**
+
+| Feature | Details |
+|---------|---------|
+| **Model** | Qwen3-Coder (coder-model) |
+| **Free Tier** | 2,000 requests/day via OAuth |
+| **Auth** | qwen.ai account (browser OAuth) |
+| **GitHub** | https://github.com/QwenLM/qwen-code |
+| **License** | Apache 2.0 |
+
+### Installation
+```bash
+# NPM (recommended)
+npm install -g @qwen-code/qwen-code@latest
+
+# Homebrew (macOS, Linux)
+brew install qwen-code
+
+# Or from source
+git clone https://github.com/QwenLM/qwen-code.git
+cd qwen-code
+npm install
+npm run build
+```
+
+### Quick Start
+```bash
+# Start interactive mode
+qwen
+
+# In session, authenticate with free OAuth
+/auth
+# Select "Qwen OAuth" -> browser opens -> sign in with qwen.ai
+
+# Or use OpenAI-compatible API
+export OPENAI_API_KEY="your-key"
+export OPENAI_MODEL="qwen3-coder"
+qwen
+```
+
+### Qwen Code Features
+- **Free OAuth Tier**: 2,000 requests/day, no API key needed
+- **Qwen3-Coder Model**: Optimized for coding tasks
+- **OpenAI-Compatible**: Works with any OpenAI-compatible API
+- **IDE Integration**: VS Code, Zed, JetBrains
+- **Headless Mode**: For CI/CD automation
+- **TypeScript SDK**: Build custom integrations
+
+### Configuration
+```json
+// ~/.qwen/settings.json
+{
+  "model": "qwen3-coder-480b",
+  "temperature": 0.7,
+  "maxTokens": 4096
+}
+```
+
+## AI Providers (25+ Supported)
+
 ### Built-in Providers
 
 | Provider | SDK Package | Key Models | Features |
 |----------|-------------|------------|----------|
-| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
-| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
-| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5, GPT-4o Enterprise | Azure integration, custom endpoints |
-| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, Google Cloud |
-| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infra |
-| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |
+| **Qwen OAuth** | Free tier | Qwen3-Coder | **2,000 free req/day** |
+| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking |
+| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling |
+| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
+| **Google AI** | `@ai-sdk/google` | Gemini 2.5, 3 Pro | Multimodal |
+| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Google Cloud |
+| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS integration |
 | **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
-| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
-| **Mistral AI** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
-| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-low latency inference |
-| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective hosting |
-| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated inference |
-| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG capabilities |
-| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning and hosting |
-| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Real-time web search |
-| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider gateway | Edge hosting, rate limiting |
-| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI integration |
-| **GitHub Copilot** | Custom | GPT-5 series | IDE integration, OAuth |
+| **xAI** | `@ai-sdk/xai` | Grok | Real-time data |
+| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused |
+| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-fast inference |
+| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated |
+| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective |
+| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG |
+| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning |
+| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Web search |
+| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting |
+| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI |
+| **GitHub Copilot** | Custom | GPT-5 series | IDE integration |
 
-### Custom Loader Providers
-
-| Provider | Auth Method | Use Case |
-|----------|-------------|----------|
-| **GitHub Copilot Enterprise** | OAuth + API Key | Enterprise IDE integration |
-| **Google Vertex Anthropic** | GCP Service Account | Claude on Google Cloud |
-| **Azure Cognitive Services** | Azure AD | Azure AI services |
-| **Cloudflare AI Gateway** | Gateway Token | Unified billing, rate limiting |
-| **SAP AI Core** | Service Key | SAP enterprise integration |
-| **OpenCode Free** | None | Free public models |
-
 ### Local/Self-Hosted
 
@@ -61,186 +112,46 @@ End-to-end professional setup of AI Agent platforms from the Claw family with se
 | **Ollama** | localhost:11434 | Local model hosting |
 | **LM Studio** | localhost:1234 | GUI local models |
 | **vLLM** | localhost:8000 | High-performance serving |
-| **LocalAI** | localhost:8080 | OpenAI-compatible local |
 
-## Fetch Available Models
+## Platform Selection Guide
 
-```bash
-# OpenRouter - All models
-curl -s https://openrouter.ai/api/v1/models \
-  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'
-
-# OpenAI - GPT models
-curl -s https://api.openai.com/v1/models \
-  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
-
-# Anthropic (static list)
-# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022
-
-# Google Gemini
-curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"
-
-# Groq
-curl -s https://api.groq.com/openai/v1/models \
-  -H "Authorization: Bearer $GROQ_API_KEY"
-
-# Together AI
-curl -s https://api.together.xyz/v1/models \
-  -H "Authorization: Bearer $TOGETHER_API_KEY"
-
-# Ollama (local)
-curl -s http://localhost:11434/api/tags
-
-# models.dev - Universal model list
-curl -s https://models.dev/api/models.json
 ```
-
-## Multi-Provider Configuration
-
-```json
-{
-  "providers": {
-    "anthropic": {
-      "apiKey": "${ANTHROPIC_API_KEY}",
-      "baseURL": "https://api.anthropic.com"
-    },
-    "openai": {
-      "apiKey": "${OPENAI_API_KEY}",
-      "baseURL": "https://api.openai.com/v1"
-    },
-    "azure": {
-      "apiKey": "${AZURE_OPENAI_API_KEY}",
-      "baseURL": "${AZURE_OPENAI_ENDPOINT}",
-      "deployment": "gpt-4o"
-    },
-    "google": {
-      "apiKey": "${GOOGLE_API_KEY}",
-      "baseURL": "https://generativelanguage.googleapis.com/v1"
-    },
-    "vertex": {
-      "projectId": "${GOOGLE_CLOUD_PROJECT}",
-      "location": "${GOOGLE_CLOUD_LOCATION}",
-      "credentials": "${GOOGLE_APPLICATION_CREDENTIALS}"
-    },
-    "bedrock": {
-      "region": "us-east-1",
-      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
-      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
-    },
-    "openrouter": {
-      "apiKey": "${OPENROUTER_API_KEY}",
-      "baseURL": "https://openrouter.ai/api/v1",
-      "headers": {
-        "HTTP-Referer": "https://yourapp.com",
-        "X-Title": "YourApp"
-      }
-    },
-    "xai": {
-      "apiKey": "${XAI_API_KEY}",
-      "baseURL": "https://api.x.ai/v1"
-    },
-    "mistral": {
-      "apiKey": "${MISTRAL_API_KEY}",
-      "baseURL": "https://api.mistral.ai/v1"
-    },
-    "groq": {
-      "apiKey": "${GROQ_API_KEY}",
-      "baseURL": "https://api.groq.com/openai/v1"
-    },
-    "cerebras": {
-      "apiKey": "${CEREBRAS_API_KEY}",
-      "baseURL": "https://api.cerebras.ai/v1"
-    },
-    "deepinfra": {
-      "apiKey": "${DEEPINFRA_API_KEY}",
-      "baseURL": "https://api.deepinfra.com/v1"
-    },
-    "cohere": {
-      "apiKey": "${COHERE_API_KEY}",
-      "baseURL": "https://api.cohere.ai/v1"
-    },
-    "together": {
-      "apiKey": "${TOGETHER_API_KEY}",
-      "baseURL": "https://api.together.xyz/v1"
-    },
-    "perplexity": {
-      "apiKey": "${PERPLEXITY_API_KEY}",
-      "baseURL": "https://api.perplexity.ai"
-    },
-    "vercel": {
-      "apiKey": "${VERCEL_AI_KEY}",
-      "baseURL": "https://api.vercel.ai/v1"
-    },
-    "gitlab": {
-      "token": "${GITLAB_TOKEN}",
-      "baseURL": "${GITLAB_URL}/api/v4"
-    },
-    "github": {
-      "token": "${GITHUB_TOKEN}",
-      "baseURL": "https://api.github.com"
-    },
-    "cloudflare": {
-      "accountId": "${CF_ACCOUNT_ID}",
-      "gatewayId": "${CF_GATEWAY_ID}",
-      "token": "${CF_AI_TOKEN}"
-    },
-    "sap": {
-      "serviceKey": "${AICORE_SERVICE_KEY}",
-      "deploymentId": "${AICORE_DEPLOYMENT_ID}"
-    },
-    "ollama": {
-      "baseURL": "http://localhost:11434/v1",
-      "apiKey": "ollama"
-    }
-  },
-  "agents": {
-    "defaults": {
-      "model": "anthropic/claude-sonnet-4-5",
-      "temperature": 0.7,
-      "maxTokens": 4096
-    },
-    "fast": {
-      "model": "groq/llama-3.3-70b-versatile"
-    },
-    "coding": {
-      "model": "anthropic/claude-sonnet-4-5"
-    },
-    "research": {
-      "model": "perplexity/sonar-pro"
-    },
-    "local": {
-      "model": "ollama/llama3.2:70b"
-    }
-  }
-}
-```
-
-## Custom Model Support
-
-```json
-{
-  "customModels": {
-    "my-fine-tuned-gpt": {
-      "provider": "openai",
-      "modelId": "ft:gpt-4o:my-org:custom:suffix",
-      "displayName": "My Custom GPT-4o"
-    },
-    "local-llama": {
-      "provider": "ollama",
-      "modelId": "llama3.2:70b",
-      "displayName": "Local Llama 3.2 70B"
-    },
-    "openrouter-custom": {
-      "provider": "openrouter",
-      "modelId": "custom-org/my-model",
-      "displayName": "Custom via OpenRouter"
-    }
-  }
-}
+         ┌─────────────────┐
+         │  Need AI Agent? │
+         └────────┬────────┘
+                  │
+                  ▼
+      ┌───────────────────────┐
+      │    Want FREE tier?    │
+      └───────────┬───────────┘
+            ┌─────┴─────┐
+            │           │
+           YES          NO
+            │           │
+            ▼           ▼
+   ┌──────────────┐  ┌──────────────────────┐
+   │  Qwen Code   │  │  Memory constrained? │
+   │ (OAuth FREE) │  └──────────┬───────────┘
+   │   2000/day   │       ┌─────┴─────┐
+   └──────────────┘       │           │
+                         YES          NO
+                          │           │
+                          ▼           ▼
+                    ┌──────────┐  ┌──────────┐
+                    │ZeroClaw/ │  │OpenClaw  │
+                    │PicoClaw  │  │(Full)    │
+                    └──────────┘  └──────────┘
 ```
 
 ## Installation Commands
 
+### Qwen Code (FREE)
+```bash
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth for free tier
+```
+
 ### OpenClaw
 ```bash
 git clone https://github.com/openclaw/openclaw.git
@@ -265,26 +176,51 @@ wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-am
 chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
 ```
 
+## Multi-Provider Configuration
+
+```json
+{
+  "providers": {
+    "qwen": {
+      "type": "oauth",
+      "free": true,
+      "daily_limit": 2000
+    },
+    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
+    "openai": { "apiKey": "${OPENAI_API_KEY}" },
+    "google": { "apiKey": "${GOOGLE_API_KEY}" },
+    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
+    "groq": { "apiKey": "${GROQ_API_KEY}" },
+    "ollama": { "baseURL": "http://localhost:11434" }
+  },
+  "agents": {
+    "defaults": { "model": "qwen/qwen3-coder" },
+    "premium": { "model": "anthropic/claude-sonnet-4-5" },
+    "fast": { "model": "groq/llama-3.3-70b-versatile" },
+    "local": { "model": "ollama/llama3.2:70b" }
+  }
+}
+```
+
 ## Security Hardening
 
 ```bash
-# Secrets in environment variables
+# Environment variables for API keys
 export ANTHROPIC_API_KEY="your-key"
 export OPENAI_API_KEY="your-key"
 
-# Restricted config permissions
-chmod 600 ~/.config/claw/config.json
+# Qwen OAuth - no key needed, browser auth
 
-# Systemd hardening
-NoNewPrivileges=true
-PrivateTmp=true
-ProtectSystem=strict
+# Restricted config
+chmod 600 ~/.qwen/settings.json
+chmod 600 ~/.config/claw/config.json
 ```
 
 ## Brainstorm Session Topics
 
-1. **Use Case**: Coding, research, productivity, automation?
-2. **Model Selection**: Claude, GPT, Gemini, local?
-3. **Integrations**: Telegram, Discord, calendar, storage?
-4. **Deployment**: Local, VPS, cloud?
-5. **Custom Agents**: Personality, memory, proactivity?
+1. **Platform Selection**: Free tier vs paid, features needed
+2. **Provider Selection**: Which AI providers to configure
3. **Model Selection**: Fetch models or input custom
+4. **Integrations**: Messaging, calendar, storage
+5. **Deployment**: Local, VPS, cloud
+6. **Custom Agents**: Personality, memory, proactivity
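The `chmod 600` hardening above can be verified mechanically. A sketch using a temporary file as a stand-in for `~/.qwen/settings.json` or `~/.config/claw/config.json` (the check itself is illustrative, not part of the skill):

```shell
# Sketch: confirm a config file is owner-only (0600), as the hardening step requires.
cfg="$(mktemp)"          # stand-in for the real config path
chmod 600 "$cfg"
# GNU stat first, BSD/macOS stat as fallback
perms="$(stat -c '%a' "$cfg" 2>/dev/null || stat -f '%Lp' "$cfg")"
echo "perms: $perms"
[ "$perms" = "600" ] && echo "OK: owner-only access"
rm -f "$cfg"
```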