diff --git a/README.md b/README.md
index 240cf2c..8769c72 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 ### Curated Collection of Custom Skills for Claude Code CLI
 
-**Automate system administration, security, and development workflows**
+**Automate system administration, security, and AI agent deployment**
 
 ---
 
@@ -31,7 +31,7 @@
 ### AI & Automation
 | Skill | Description | Status |
 |-------|-------------|--------|
-| [🦞 Claw Setup](./skills/claw-setup/) | End-to-end AI Agent deployment (OpenClaw, NanoBot, PicoClaw, ZeroClaw) | ✅ Production Ready |
+| [🦞 Claw Setup](./skills/claw-setup/) | AI Agent deployment (OpenClaw, NanoBot, **Qwen Code FREE**) | ✅ Production Ready |
 
 ### System Administration
 | Skill | Description | Status |
@@ -58,12 +58,40 @@
 
 ---
 
+## Featured: Claw Setup with FREE Qwen Code
+
+**⭐ Qwen Code: 2,000 FREE requests/day via OAuth - No API key needed!**
+
+```bash
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth for free tier
+```
+
+### Platforms Supported
+
+| Platform | Free? | Memory | Best For |
+|----------|-------|--------|----------|
+| **Qwen Code** | ✅ 2K/day | ~200MB | Coding, FREE tier |
+| OpenClaw | ❌ | >1GB | Full-featured, plugins |
+| NanoBot | ❌ | ~100MB | Research, Python |
+| PicoClaw | ❌ | <10MB | Embedded, $10 HW |
+| ZeroClaw | ❌ | <5MB | Performance, Rust |
+
+### 25+ AI Providers
+
+**FREE:** Qwen OAuth
+
+**Paid:** Anthropic, OpenAI, Google, xAI, Mistral, Groq, Cerebras, OpenRouter, Together AI, Cohere, Perplexity, and more
+
+**Local:** Ollama, LM Studio, vLLM
+
+---
+
 ## Quick Start
 
-Each skill works with Claude Code CLI. Simply ask:
-
 ```
-"Setup Claw AI assistant on my server"
+"Setup Qwen Code with free OAuth tier"
 "Run ram optimizer on my server"
 "Scan this directory for leaked secrets"
 "Setup automated backups to S3"
@@ -71,21 +99,6 @@ Each skill works with Claude Code CLI. Simply ask:
 
 ---
 
-## Featured: Claw Setup
-
-Professional deployment of AI Agent platforms:
-
-```
-OpenClaw → Full-featured, 1700+ plugins, 215K stars
-NanoBot  → Python, 4K lines, research-ready
-PicoClaw → Go, <10MB, $10 hardware
-ZeroClaw → Rust, <5MB, 10ms startup
-```
-
-Usage: `"Setup OpenClaw on my VPS with security hardening"`
-
----
-
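The platform table and decision flow added to the README can be expressed as a small, testable sketch. Everything here is illustrative: `pick_platform` and the `PLATFORMS` dict are hypothetical names, not part of any shipped CLI, and the memory figures are transcribed from the comparison table (">1GB" rounded to 1024 MB for the sake of the example).

```python
# Hypothetical helper mirroring the README's decision flowchart:
# free tier first, then the heaviest platform that fits the memory budget.
# Footprints are approximations taken from the comparison table.

PLATFORMS = {
    "Qwen Code": {"free": True,  "memory_mb": 200},
    "OpenClaw":  {"free": False, "memory_mb": 1024},  # ">1GB" rounded down
    "NanoBot":   {"free": False, "memory_mb": 100},
    "PicoClaw":  {"free": False, "memory_mb": 10},
    "ZeroClaw":  {"free": False, "memory_mb": 5},
}

def pick_platform(want_free: bool, memory_budget_mb: int) -> str:
    """Apply the flowchart: free tier wins outright, otherwise prefer
    the most full-featured (heaviest) platform that fits the budget."""
    if want_free:
        return "Qwen Code"
    fitting = [(spec["memory_mb"], name)
               for name, spec in PLATFORMS.items()
               if spec["memory_mb"] <= memory_budget_mb]
    if not fitting:
        raise ValueError("no platform fits the memory budget")
    return max(fitting)[1]  # heaviest footprint that still fits

print(pick_platform(True, 512))    # Qwen Code
print(pick_platform(False, 2048))  # OpenClaw
print(pick_platform(False, 8))     # ZeroClaw
```

Treating "heaviest that fits" as a proxy for "most full-featured" matches the flowchart's YES/NO memory branch, but it is a simplification of the prose guidance.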
diff --git a/skills/claw-setup/README.md b/skills/claw-setup/README.md
index 06a1b20..1893e0f 100644
--- a/skills/claw-setup/README.md
+++ b/skills/claw-setup/README.md
@@ -4,7 +4,7 @@
 ### Professional AI Agent Deployment Made Simple
 
-**End-to-end setup of Claw platforms with 25+ AI providers, security hardening, and personal customization**
+**End-to-end setup of Claw platforms + Qwen Code FREE tier with 25+ AI providers**
 
 ---
 
@@ -28,143 +28,173 @@
 ## Overview
 
-Claw Setup handles complete deployment of AI Agent platforms with **25+ AI provider integrations** (OpenCode compatible).
+Claw Setup handles complete deployment of AI Agent platforms with **Qwen Code FREE tier** and **25+ AI provider integrations**.
 
 ```
 ┌─────────────────────────────────────────────────────────────────┐
-│                      CLAW SETUP WORKFLOW                        │
+│                      PLATFORMS SUPPORTED                        │
 ├─────────────────────────────────────────────────────────────────┤
 │                                                                 │
-│   Phase 1       Phase 2       Phase 3       Phase 4             │
-│   ───────       ───────       ───────       ───────             │
+│   ⭐ FREE TIER                                                  │
+│   ───────────                                                   │
+│   🤖 Qwen Code    TypeScript    ~200MB    FREE OAuth            │
+│      • 2,000 requests/day FREE                                  │
+│      • Qwen3-Coder model                                        │
+│      • No API key needed                                        │
 │                                                                 │
-│   ┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐         │
-│   │ SELECT  │──►│ INSTALL │──►│CUSTOMIZE│──►│ DEPLOY  │         │
-│   │ Platform│   │& Secure │   │Providers│   │ & Run   │         │
-│   └─────────┘   └─────────┘   └─────────┘   └─────────┘         │
+│   🦞 FULL-FEATURED                                              │
+│   ───────────────                                               │
+│   OpenClaw    TypeScript    >1GB      1700+ plugins             │
+│   NanoBot     Python        ~100MB    Research-ready            │
+│   PicoClaw    Go            <10MB     $10 hardware              │
+│   ZeroClaw    Rust          <5MB      10ms startup              │
+│   NanoClaw    TypeScript    ~50MB     WhatsApp                  │
 │                                                                 │
 └─────────────────────────────────────────────────────────────────┘
 ```
 
-## Platforms Supported
+## ⭐ Qwen Code (FREE OAuth Tier)
 
-| Platform | Language | Memory | Startup | Best For |
-|----------|----------|--------|---------|----------|
-| **OpenClaw** | TypeScript | >1GB | ~500s | Full-featured, 1700+ plugins |
-| **NanoBot** | Python | ~100MB | ~30s | Research, customization |
-| **PicoClaw** | Go | <10MB | ~1s | Embedded, $10 hardware |
-| **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance |
-| **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
+**Special: 2,000 FREE requests/day - No API key needed!**
+
+| Feature | Details |
+|---------|---------|
+| **Model** | Qwen3-Coder (coder-model) |
+| **Free Tier** | 2,000 requests/day |
+| **Auth** | Browser OAuth via qwen.ai |
+| **GitHub** | https://github.com/QwenLM/qwen-code |
+
+### Quick Start
+```bash
+# Install
+npm install -g @qwen-code/qwen-code@latest
+
+# Start
+qwen
+
+# Authenticate (FREE)
+/auth
+# Select "Qwen OAuth" -> Browser opens -> Sign in with qwen.ai
+```
+
+### Features
+- ✅ **FREE**: 2,000 requests/day
+- ✅ **No API Key**: Browser OAuth authentication
+- ✅ **Qwen3-Coder**: Optimized for coding
+- ✅ **OpenAI-Compatible**: Works with other APIs too
+- ✅ **IDE Integration**: VS Code, Zed, JetBrains
+- ✅ **Headless Mode**: CI/CD automation
+
+## Platform Comparison
+
+| Platform | Memory | Startup | Free? | Best For |
+|----------|--------|---------|-------|----------|
+| **Qwen Code** | ~200MB | ~5s | ✅ 2K/day | **Coding, FREE tier** |
+| OpenClaw | >1GB | ~500s | ❌ | Full-featured |
+| NanoBot | ~100MB | ~30s | ❌ | Research |
+| PicoClaw | <10MB | ~1s | ❌ | Embedded |
+| ZeroClaw | <5MB | <10ms | ❌ | Performance |
+
+## Decision Flowchart
+
+```
+                ┌───────────────────┐
+                │  Need AI Agent?   │
+                └─────────┬─────────┘
+                          │
+                          ▼
+              ┌───────────────────────┐
+              │    Want FREE tier?    │
+              └───────────┬───────────┘
+                    ┌─────┴─────┐
+                    │           │
+                   YES          NO
+                    │           │
+                    ▼           ▼
+            ┌──────────────┐  ┌──────────────────┐
+            │ ⭐ Qwen Code │  │ Memory limited?  │
+            │  OAuth FREE  │  └────────┬─────────┘
+            │  2000/day    │      ┌────┴────┐
+            └──────────────┘      │         │
+                                 YES        NO
+                                  │         │
+                                  ▼         ▼
+                             ┌──────────┐ ┌──────────┐
+                             │ZeroClaw/ │ │OpenClaw  │
+                             │PicoClaw  │ │(Full)    │
+                             └──────────┘ └──────────┘
+```
 
 ## AI Providers (25+ Supported)
 
-### Tier 1: Major AI Labs
+### Tier 1: FREE
+
+| Provider | Free Tier | Models |
+|----------|-----------|--------|
+| **Qwen OAuth** | 2,000/day | Qwen3-Coder |
+
+### Tier 2: Major AI Labs
 
 | Provider | Models | Features |
 |----------|--------|----------|
-| **Anthropic** | Claude 3.5/4/Opus | Extended thinking, PDF support |
-| **OpenAI** | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
-| **Google AI** | Gemini 2.5, Gemini 3 Pro | Multimodal, long context |
-| **xAI** | Grok | Real-time data integration |
-| **Mistral** | Mistral Large, Codestral | Code-focused models |
-
-### Tier 2: Cloud Platforms
-
-| Provider | Models | Features |
-|----------|--------|----------|
-| **Azure OpenAI** | GPT-5, GPT-4o Enterprise | Azure integration |
-| **Google Vertex** | Claude, Gemini on GCP | Anthropic on Google |
-| **Amazon Bedrock** | Nova, Claude, Llama 3 | AWS regional prefixes |
-
-### Tier 3: Aggregators & Gateways
-
-| Provider | Models | Features |
-|----------|--------|----------|
-| **OpenRouter** | 100+ models | Multi-provider gateway |
-| **Vercel AI** | Multi-provider | Edge hosting, rate limiting |
-| **Together AI** | Open source | Fine-tuning, hosting |
-| **DeepInfra** | Open source | Cost-effective |
-
-### Tier 4: Fast Inference
+| Anthropic | Claude 3.5/4/Opus | Extended thinking |
+| OpenAI | GPT-4o, o1, o3, GPT-5 | Function calling |
+| Google AI | Gemini 2.5, 3 Pro | Multimodal |
+| xAI | Grok | Real-time data |
+| Mistral | Large, Codestral | Code-focused |
+
+### Tier 3: Fast Inference
 
 | Provider | Speed | Models |
 |----------|-------|--------|
-| **Groq** | Ultra-fast | Llama 3, Mixtral |
-| **Cerebras** | Fastest | Llama 3 variants |
+| Groq | Ultra-fast | Llama 3, Mixtral |
+| Cerebras | Fastest | Llama 3 variants |
 
-### Tier 5: Specialized
+### Tier 4: Gateways & Local
 
-| Provider | Use Case |
-|----------|----------|
-| **Perplexity** | Web search integration |
-| **Cohere** | Enterprise RAG |
-| **GitLab Duo** | CI/CD integration |
-| **GitHub Copilot** | IDE integration |
-| **Cloudflare AI** | Gateway, rate limiting |
-| **SAP AI Core** | SAP enterprise |
+| Provider | Type | Models |
+|----------|------|--------|
+| OpenRouter | Gateway | 100+ models |
+| Together AI | Hosting | Open source |
+| Ollama | Local | Self-hosted |
+| LM Studio | Local | GUI self-hosted |
 
-### Local/Self-Hosted
-
-| Provider | Use Case |
-|----------|----------|
-| **Ollama** | Local model hosting |
-| **LM Studio** | GUI local models |
-| **vLLM** | High-performance serving |
-
-## Model Selection
+## Quick Start Examples
 
-**Option A: Fetch from Provider**
+### Option 1: FREE Qwen Code
 ```bash
-# Fetch available models
-curl -s https://openrouter.ai/api/v1/models -H "Authorization: Bearer $KEY" | jq '.data[].id'
-curl -s https://api.groq.com/openai/v1/models -H "Authorization: Bearer $KEY"
-curl -s http://localhost:11434/api/tags  # Ollama
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth
 ```
 
-**Option B: Custom Model Input**
-```json
-{
-  "provider": "openai",
-  "modelId": "ft:gpt-4o:org:custom:suffix",
-  "displayName": "My Fine-Tuned Model"
-}
+### Option 2: With Your Own API Keys
+```bash
+# Configure providers
+export ANTHROPIC_API_KEY="your-key"
+export OPENAI_API_KEY="your-key"
+export GOOGLE_API_KEY="your-key"
+
+# Or use OpenRouter for 100+ models
+export OPENROUTER_API_KEY="your-key"
 ```
 
-## Quick Start
-
-```
-"Setup OpenClaw with Anthropic and OpenAI providers"
-"Install NanoBot with all available providers"
-"Deploy ZeroClaw with Groq for fast inference"
-"Configure Claw with local Ollama models"
+### Option 3: Local Models
+```bash
+# Install Ollama
+curl -fsSL https://ollama.com/install.sh | sh
+
+# Pull model
+ollama pull llama3.2:70b
+
+# Use with Claw platforms
 ```
 
-## Configuration Example
+## Usage Examples
 
-```json
-{
-  "providers": {
-    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
-    "openai": { "apiKey": "${OPENAI_API_KEY}" },
-    "google": { "apiKey": "${GOOGLE_API_KEY}" },
-    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
-    "groq": { "apiKey": "${GROQ_API_KEY}" },
-    "ollama": { "baseURL": "http://localhost:11434" }
-  },
-  "agents": {
-    "defaults": { "model": "anthropic/claude-sonnet-4-5" },
-    "fast": { "model": "groq/llama-3.3-70b-versatile" },
-    "local": { "model": "ollama/llama3.2:70b" }
-  }
-}
 ```
-
-## Security
-
-- API keys via environment variables
-- Restricted config permissions (chmod 600)
-- Systemd hardening (NoNewPrivileges, PrivateTmp)
-- Network binding to localhost
+"Setup Qwen Code with free OAuth tier"
+"Install OpenClaw with Anthropic provider"
+"Configure Claw with all free options"
+"Setup ZeroClaw with Groq for fast inference"
+"Fetch available models from OpenRouter"
+```
 
 ---
diff --git a/skills/claw-setup/SKILL.md b/skills/claw-setup/SKILL.md
index 1bf1237..acc4924 100644
--- a/skills/claw-setup/SKILL.md
+++ b/skills/claw-setup/SKILL.md
@@ -1,12 +1,12 @@
 ---
 name: claw-setup
-description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform from the Claw family (OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw).
-version: 1.0.0
+description: Use this skill when the user asks to "setup openclaw", "install nanobot", "deploy zeroclaw", "configure picoclaw", "setup qwen code", "qwen-code", "AI agent setup", "personal AI assistant", "claw framework", or mentions setting up any AI agent/assistant platform.
+version: 1.1.0
 ---
 
 # Claw Setup Skill
 
-End-to-end professional setup of AI Agent platforms from the Claw family with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.
+End-to-end professional setup of AI Agent platforms with security hardening, multi-provider configuration, and personal customization through interactive brainstorming.
 
 ## Supported Platforms
 
@@ -17,42 +17,93 @@ End-to-end professional setup of AI Agent platforms from the Claw family with se
 | **PicoClaw** | Go | <10MB | ~1s | Low-resource, embedded |
 | **ZeroClaw** | Rust | <5MB | <10ms | Maximum performance, security |
 | **NanoClaw** | TypeScript | ~50MB | ~5s | WhatsApp integration |
+| **Qwen Code** | TypeScript | ~200MB | ~5s | **FREE OAuth tier, Qwen3-Coder** |
 
-## AI Providers (OpenCode Compatible - 25+ Providers)
+## Qwen Code (FREE OAuth Tier) ⭐
+
+**Special: Free 2,000 requests/day with Qwen OAuth!**
+
+| Feature | Details |
+|---------|---------|
+| **Model** | Qwen3-Coder (coder-model) |
+| **Free Tier** | 2,000 requests/day via OAuth |
+| **Auth** | qwen.ai account (browser OAuth) |
+| **GitHub** | https://github.com/QwenLM/qwen-code |
+| **License** | Apache 2.0 |
+
+### Installation
+```bash
+# NPM (recommended)
+npm install -g @qwen-code/qwen-code@latest
+
+# Homebrew (macOS, Linux)
+brew install qwen-code
+
+# Or from source
+git clone https://github.com/QwenLM/qwen-code.git
+cd qwen-code
+npm install
+npm run build
+```
+
+### Quick Start
+```bash
+# Start interactive mode
+qwen
+
+# In session, authenticate with free OAuth
+/auth
+# Select "Qwen OAuth" -> browser opens -> sign in with qwen.ai
+
+# Or use OpenAI-compatible API
+export OPENAI_API_KEY="your-key"
+export OPENAI_MODEL="qwen3-coder"
+qwen
+```
+
+### Qwen Code Features
+- **Free OAuth Tier**: 2,000 requests/day, no API key needed
+- **Qwen3-Coder Model**: Optimized for coding tasks
+- **OpenAI-Compatible**: Works with any OpenAI-compatible API
+- **IDE Integration**: VS Code, Zed, JetBrains
+- **Headless Mode**: For CI/CD automation
+- **TypeScript SDK**: Build custom integrations
+
+### Configuration
+```json
+// ~/.qwen/settings.json
+{
+  "model": "qwen3-coder-480b",
+  "temperature": 0.7,
+  "maxTokens": 4096
+}
+```
+
+## AI Providers (25+ Supported)
 
 ### Built-in Providers
 
 | Provider | SDK Package | Key Models | Features |
 |----------|-------------|------------|----------|
-| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking, PDF support |
-| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling, structured output |
-| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5, GPT-4o Enterprise | Azure integration, custom endpoints |
-| **Google AI** | `@ai-sdk/google` | Gemini 2.5, Gemini 3 Pro | Multimodal, Google Cloud |
-| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Anthropic on Google infra |
-| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS credentials, regional prefixes |
+| **Qwen OAuth** | Free tier | Qwen3-Coder | **2,000 free req/day** |
+| **Anthropic** | `@ai-sdk/anthropic` | Claude 3.5/4/Opus | Extended thinking |
+| **OpenAI** | `@ai-sdk/openai` | GPT-4o, o1, o3, GPT-5 | Function calling |
+| **Azure OpenAI** | `@ai-sdk/azure` | GPT-5 Enterprise | Azure integration |
+| **Google AI** | `@ai-sdk/google` | Gemini 2.5, 3 Pro | Multimodal |
+| **Google Vertex** | `@ai-sdk/google-vertex` | Claude, Gemini on GCP | Google Cloud |
+| **Amazon Bedrock** | `@ai-sdk/amazon-bedrock` | Nova, Claude, Llama 3 | AWS integration |
 | **OpenRouter** | `@openrouter/ai-sdk-provider` | 100+ models | Multi-provider gateway |
-| **xAI** | `@ai-sdk/xai` | Grok models | Real-time data integration |
-| **Mistral AI** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused models |
-| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-low latency inference |
-| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective hosting |
-| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated inference |
-| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG capabilities |
-| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning and hosting |
-| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Real-time web search |
-| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider gateway | Edge hosting, rate limiting |
-| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI integration |
-| **GitHub Copilot** | Custom | GPT-5 series | IDE integration, OAuth |
-
-### Custom Loader Providers
-
-| Provider | Auth Method | Use Case |
-|----------|-------------|----------|
-| **GitHub Copilot Enterprise** | OAuth + API Key | Enterprise IDE integration |
-| **Google Vertex Anthropic** | GCP Service Account | Claude on Google Cloud |
-| **Azure Cognitive Services** | Azure AD | Azure AI services |
-| **Cloudflare AI Gateway** | Gateway Token | Unified billing, rate limiting |
-| **SAP AI Core** | Service Key | SAP enterprise integration |
-| **OpenCode Free** | None | Free public models |
+| **xAI** | `@ai-sdk/xai` | Grok | Real-time data |
+| **Mistral** | `@ai-sdk/mistral` | Mistral Large, Codestral | Code-focused |
+| **Groq** | `@ai-sdk/groq` | Llama 3, Mixtral | Ultra-fast inference |
+| **Cerebras** | `@ai-sdk/cerebras` | Llama 3 variants | Hardware-accelerated |
+| **DeepInfra** | `@ai-sdk/deepinfra` | Open source models | Cost-effective |
+| **Cohere** | `@ai-sdk/cohere` | Command R+, Embed | Enterprise RAG |
+| **Together AI** | `@ai-sdk/togetherai` | Open source models | Fine-tuning |
+| **Perplexity** | `@ai-sdk/perplexity` | Sonar models | Web search |
+| **Vercel AI** | `@ai-sdk/vercel` | Multi-provider | Edge hosting |
+| **GitLab** | `@gitlab/gitlab-ai-provider` | GitLab Duo | CI/CD AI |
+| **GitHub Copilot** | Custom | GPT-5 series | IDE integration |
 
 ### Local/Self-Hosted
 
@@ -61,186 +112,46 @@ End-to-end professional setup of AI Agent platforms from the Claw family with se
 | **Ollama** | localhost:11434 | Local model hosting |
 | **LM Studio** | localhost:1234 | GUI local models |
 | **vLLM** | localhost:8000 | High-performance serving |
-| **LocalAI** | localhost:8080 | OpenAI-compatible local |
 
-## Fetch Available Models
+## Platform Selection Guide
 
-```bash
-# OpenRouter - All models
-curl -s https://openrouter.ai/api/v1/models \
-  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data[].id'
-
-# OpenAI - GPT models
-curl -s https://api.openai.com/v1/models \
-  -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
-
-# Anthropic (static list)
-# claude-opus-4-5-20250219, claude-sonnet-4-5-20250219, claude-3-5-sonnet-20241022
-
-# Google Gemini
-curl -s "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"
-
-# Groq
-curl -s https://api.groq.com/openai/v1/models \
-  -H "Authorization: Bearer $GROQ_API_KEY"
-
-# Together AI
-curl -s https://api.together.xyz/v1/models \
-  -H "Authorization: Bearer $TOGETHER_API_KEY"
-
-# Ollama (local)
-curl -s http://localhost:11434/api/tags
-
-# models.dev - Universal model list
-curl -s https://models.dev/api/models.json
 ```
-
-## Multi-Provider Configuration
-
-```json
-{
-  "providers": {
-    "anthropic": {
-      "apiKey": "${ANTHROPIC_API_KEY}",
-      "baseURL": "https://api.anthropic.com"
-    },
-    "openai": {
-      "apiKey": "${OPENAI_API_KEY}",
-      "baseURL": "https://api.openai.com/v1"
-    },
-    "azure": {
-      "apiKey": "${AZURE_OPENAI_API_KEY}",
-      "baseURL": "${AZURE_OPENAI_ENDPOINT}",
-      "deployment": "gpt-4o"
-    },
-    "google": {
-      "apiKey": "${GOOGLE_API_KEY}",
-      "baseURL": "https://generativelanguage.googleapis.com/v1"
-    },
-    "vertex": {
-      "projectId": "${GOOGLE_CLOUD_PROJECT}",
-      "location": "${GOOGLE_CLOUD_LOCATION}",
-      "credentials": "${GOOGLE_APPLICATION_CREDENTIALS}"
-    },
-    "bedrock": {
-      "region": "us-east-1",
-      "accessKeyId": "${AWS_ACCESS_KEY_ID}",
-      "secretAccessKey": "${AWS_SECRET_ACCESS_KEY}"
-    },
-    "openrouter": {
-      "apiKey": "${OPENROUTER_API_KEY}",
-      "baseURL": "https://openrouter.ai/api/v1",
-      "headers": {
-        "HTTP-Referer": "https://yourapp.com",
-        "X-Title": "YourApp"
-      }
-    },
-    "xai": {
-      "apiKey": "${XAI_API_KEY}",
-      "baseURL": "https://api.x.ai/v1"
-    },
-    "mistral": {
-      "apiKey": "${MISTRAL_API_KEY}",
-      "baseURL": "https://api.mistral.ai/v1"
-    },
-    "groq": {
-      "apiKey": "${GROQ_API_KEY}",
-      "baseURL": "https://api.groq.com/openai/v1"
-    },
-    "cerebras": {
-      "apiKey": "${CEREBRAS_API_KEY}",
-      "baseURL": "https://api.cerebras.ai/v1"
-    },
-    "deepinfra": {
-      "apiKey": "${DEEPINFRA_API_KEY}",
-      "baseURL": "https://api.deepinfra.com/v1"
-    },
-    "cohere": {
-      "apiKey": "${COHERE_API_KEY}",
-      "baseURL": "https://api.cohere.ai/v1"
-    },
-    "together": {
-      "apiKey": "${TOGETHER_API_KEY}",
-      "baseURL": "https://api.together.xyz/v1"
-    },
-    "perplexity": {
-      "apiKey": "${PERPLEXITY_API_KEY}",
-      "baseURL": "https://api.perplexity.ai"
-    },
-    "vercel": {
-      "apiKey": "${VERCEL_AI_KEY}",
-      "baseURL": "https://api.vercel.ai/v1"
-    },
-    "gitlab": {
-      "token": "${GITLAB_TOKEN}",
-      "baseURL": "${GITLAB_URL}/api/v4"
-    },
-    "github": {
-      "token": "${GITHUB_TOKEN}",
-      "baseURL": "https://api.github.com"
-    },
-    "cloudflare": {
-      "accountId": "${CF_ACCOUNT_ID}",
-      "gatewayId": "${CF_GATEWAY_ID}",
-      "token": "${CF_AI_TOKEN}"
-    },
-    "sap": {
-      "serviceKey": "${AICORE_SERVICE_KEY}",
-      "deploymentId": "${AICORE_DEPLOYMENT_ID}"
-    },
-    "ollama": {
-      "baseURL": "http://localhost:11434/v1",
-      "apiKey": "ollama"
-    }
-  },
-  "agents": {
-    "defaults": {
-      "model": "anthropic/claude-sonnet-4-5",
-      "temperature": 0.7,
-      "maxTokens": 4096
-    },
-    "fast": {
-      "model": "groq/llama-3.3-70b-versatile"
-    },
-    "coding": {
-      "model": "anthropic/claude-sonnet-4-5"
-    },
-    "research": {
-      "model": "perplexity/sonar-pro"
-    },
-    "local": {
-      "model": "ollama/llama3.2:70b"
-    }
-  }
-}
-```
-
-## Custom Model Support
-
-```json
-{
-  "customModels": {
-    "my-fine-tuned-gpt": {
-      "provider": "openai",
-      "modelId": "ft:gpt-4o:my-org:custom:suffix",
-      "displayName": "My Custom GPT-4o"
-    },
-    "local-llama": {
-      "provider": "ollama",
-      "modelId": "llama3.2:70b",
-      "displayName": "Local Llama 3.2 70B"
-    },
-    "openrouter-custom": {
-      "provider": "openrouter",
-      "modelId": "custom-org/my-model",
-      "displayName": "Custom via OpenRouter"
-    }
-  }
-}
+                ┌───────────────────┐
+                │  Need AI Agent?   │
+                └─────────┬─────────┘
+                          │
+                          ▼
+              ┌───────────────────────┐
+              │    Want FREE tier?    │
+              └───────────┬───────────┘
+                    ┌─────┴─────┐
+                    │           │
+                   YES          NO
+                    │           │
+                    ▼           ▼
+            ┌──────────────┐  ┌─────────────────────┐
+            │  Qwen Code   │  │ Memory constrained? │
+            │ (OAuth FREE) │  └──────────┬──────────┘
+            │  2000/day    │        ┌────┴────┐
+            └──────────────┘        │         │
+                                   YES        NO
+                                    │         │
+                                    ▼         ▼
+                               ┌──────────┐ ┌──────────┐
+                               │ZeroClaw/ │ │OpenClaw  │
+                               │PicoClaw  │ │(Full)    │
+                               └──────────┘ └──────────┘
 ```
 
 ## Installation Commands
 
+### Qwen Code (FREE)
+```bash
+npm install -g @qwen-code/qwen-code@latest
+qwen
+/auth  # Select Qwen OAuth for free tier
+```
+
 ### OpenClaw
 ```bash
 git clone https://github.com/openclaw/openclaw.git
@@ -265,26 +176,51 @@
 wget https://github.com/zeroclaw-labs/zeroclaw/releases/latest/zeroclaw-linux-amd64
 chmod +x zeroclaw-linux-amd64 && sudo mv zeroclaw-linux-amd64 /usr/local/bin/zeroclaw
 ```
 
+## Multi-Provider Configuration
+
+```json
+{
+  "providers": {
+    "qwen": {
+      "type": "oauth",
+      "free": true,
+      "daily_limit": 2000
+    },
+    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
+    "openai": { "apiKey": "${OPENAI_API_KEY}" },
+    "google": { "apiKey": "${GOOGLE_API_KEY}" },
+    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
+    "groq": { "apiKey": "${GROQ_API_KEY}" },
+    "ollama": { "baseURL": "http://localhost:11434" }
+  },
+  "agents": {
+    "defaults": { "model": "qwen/qwen3-coder" },
+    "premium": { "model": "anthropic/claude-sonnet-4-5" },
+    "fast": { "model": "groq/llama-3.3-70b-versatile" },
+    "local": { "model": "ollama/llama3.2:70b" }
+  }
+}
+```
+
 ## Security Hardening
 
 ```bash
-# Secrets in environment variables
+# Environment variables for API keys
 export ANTHROPIC_API_KEY="your-key"
 export OPENAI_API_KEY="your-key"
 
-# Restricted config permissions
-chmod 600 ~/.config/claw/config.json
+# Qwen OAuth - no key needed, browser auth
 
-# Systemd hardening
-NoNewPrivileges=true
-PrivateTmp=true
-ProtectSystem=strict
+# Restricted config
+chmod 600 ~/.qwen/settings.json
+chmod 600 ~/.config/claw/config.json
 ```
 
 ## Brainstorm Session Topics
 
-1. **Use Case**: Coding, research, productivity, automation?
-2. **Model Selection**: Claude, GPT, Gemini, local?
-3. **Integrations**: Telegram, Discord, calendar, storage?
-4. **Deployment**: Local, VPS, cloud?
-5. **Custom Agents**: Personality, memory, proactivity?
+1. **Platform Selection**: Free tier vs paid, features needed
+2. **Provider Selection**: Which AI providers to configure
+3. **Model Selection**: Fetch models or input custom
+4. **Integrations**: Messaging, calendar, storage
+5. **Deployment**: Local, VPS, cloud
+6. **Custom Agents**: Personality, memory, proactivity
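The multi-provider config in SKILL.md wires agents to providers through `provider/model` strings, which makes a small consistency check possible. This is a hedged sketch: `validate_agents` is a hypothetical helper (not part of any Claw platform), shown only to illustrate that every agent's model reference should name a provider that actually appears in the `providers` block.

```python
# Hypothetical sanity check for the "provider/model" wiring used in the
# multi-provider configuration example. It flags agents whose model string
# references a provider that is not configured.

def validate_agents(config: dict) -> list[str]:
    providers = set(config.get("providers", {}))
    problems = []
    for agent, spec in config.get("agents", {}).items():
        model = spec.get("model", "")
        provider = model.split("/", 1)[0]  # "groq/llama-3.3-..." -> "groq"
        if provider not in providers:
            problems.append(
                f"agent '{agent}' references unknown provider '{provider}'"
            )
    return problems

config = {
    "providers": {"qwen": {"type": "oauth"}, "groq": {"apiKey": "gsk-demo"}},
    "agents": {
        "defaults": {"model": "qwen/qwen3-coder"},
        "fast": {"model": "groq/llama-3.3-70b-versatile"},
        "premium": {"model": "anthropic/claude-sonnet-4-5"},  # not configured
    },
}
print(validate_agents(config))
# -> ["agent 'premium' references unknown provider 'anthropic'"]
```

Running a check like this before starting an agent catches the most common wiring mistake (a model pointing at a provider whose key was never configured) early, instead of at first request time.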