Commit Graph

9 Commits

  • docs: Comprehensive claw-setup skill documentation
    Added complete documentation covering all features:
    
    FEATURES DOCUMENTED:
    1. FREE Qwen OAuth Cross-Platform Import
       - 2,000 requests/day free tier
       - Works with ALL Claw platforms
       - Platform-specific import guides
    
    2. 25+ OpenCode-Compatible AI Providers
       - Tier 1: FREE (Qwen OAuth)
       - Tier 2: Major Labs (Anthropic, OpenAI, Google, xAI, Mistral)
       - Tier 3: Cloud (Azure, Bedrock, Vertex)
       - Tier 4: Gateways (OpenRouter 100+, Together AI)
       - Tier 5: Fast (Groq, Cerebras)
       - Tier 6: Specialized (Perplexity, Cohere, GitLab)
       - Tier 7: Local (Ollama, LM Studio, vLLM)
    
    3. Customization Options
       - Model selection (fetch or custom)
       - Security hardening
       - Interactive brainstorming
       - Multi-provider configuration
    
    4. Installation Guides
       - All 6 platforms with step-by-step instructions
    
    5. Configuration Examples
       - Multi-provider setup
       - Environment variables
       - Custom models
    
    6. Usage Examples
       - Basic, advanced, and provider-specific
    
    7. Troubleshooting
       - Common issues and solutions
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: Comprehensive documentation for 25+ providers + Qwen OAuth
    Restructured documentation to highlight both key features:
    
    FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
    - 2,000 requests/day free tier
    - Works with ALL Claw platforms
    - Browser OAuth via qwen.ai
    - Model: Qwen3-Coder
    
    FEATURE 2: 25+ OpenCode-Compatible Providers
    - Major AI Labs: Anthropic, OpenAI, Google, xAI, Mistral
    - Cloud Platforms: Azure, AWS Bedrock, Google Vertex
    - Fast Inference: Groq, Cerebras
    - Gateways: OpenRouter (100+ models), Together AI
    - Local: Ollama, LM Studio, vLLM
    
    Provider Tiers:
    1. FREE: Qwen OAuth
    2. Major Labs: Anthropic, OpenAI, Google, xAI, Mistral
    3. Cloud: Azure, Bedrock, Vertex
    4. Fast: Groq, Cerebras
    5. Gateways: OpenRouter, Together AI, Vercel
    6. Specialized: Perplexity, Cohere, GitLab, GitHub
    7. Local: Ollama, LM Studio, vLLM
    
    Platforms with full support:
    - Qwen Code (native OAuth)
    - OpenClaw, NanoBot, PicoClaw, ZeroClaw (import OAuth)
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Qwen OAuth cross-platform import for ALL Claw platforms
    Key Feature: Use FREE Qwen tier (2,000 req/day) with ANY platform!
    
    How it works:
    1. Get Qwen OAuth: qwen && /auth (FREE)
    2. Extract token from ~/.qwen/
    3. Configure any platform with token
    
    Supported platforms:
    - OpenClaw
    - NanoBot
    - PicoClaw
    - ZeroClaw
    - NanoClaw
    
    Configuration:
      export OPENAI_API_KEY="$QWEN_TOKEN"
      export OPENAI_BASE_URL="https://api.qwen.ai/v1"
      export OPENAI_MODEL="qwen3-coder-plus"
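
    Step 2 above (extracting the token from ~/.qwen/) could be automated along
    these lines. Note this is a sketch: the credentials filename
    (`oauth_creds.json`) and JSON field (`access_token`) are assumptions, not
    confirmed by this repo; check what `qwen` actually writes under ~/.qwen/.

```shell
# Sketch only: read a cached Qwen OAuth token and export it for
# OpenAI-compatible clients. File name and JSON key are assumptions.
CRED_FILE="$HOME/.qwen/oauth_creds.json"
if [ -f "$CRED_FILE" ]; then
  QWEN_TOKEN="$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["access_token"])' "$CRED_FILE")"
  export OPENAI_API_KEY="$QWEN_TOKEN"
  export OPENAI_BASE_URL="https://api.qwen.ai/v1"
  export OPENAI_MODEL="qwen3-coder-plus"
fi
```

    If the file is missing the script is a no-op, so it is safe to source
    from a shell profile.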
    
    Added:
    - import-qwen-oauth.sh script for automation
    - Cross-platform configuration examples
    - Qwen API endpoints reference
    - Troubleshooting guide
    
    Free tier: 2,000 requests/day, 60 requests/minute
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)
    New platform option with no API key required:
    
    Qwen Code Features:
    - FREE OAuth tier: 2,000 requests/day
    - Model: Qwen3-Coder (coder-model)
    - Auth: Browser OAuth via qwen.ai
    - GitHub: https://github.com/QwenLM/qwen-code
    
    Installation:
      npm install -g @qwen-code/qwen-code@latest
      qwen
      /auth  # Select Qwen OAuth
    
    Platform comparison updated:
    - Qwen Code: FREE, ~200MB, coding-optimized
    - OpenClaw: Full-featured, 1700+ plugins
    - NanoBot: Python, research
    - PicoClaw: Go, <10MB
    - ZeroClaw: Rust, <5MB
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup
    Updated provider support to match OpenCode's full provider list:
    
    Built-in Providers (18):
    - Anthropic, OpenAI, Azure OpenAI
    - Google AI, Google Vertex AI
    - Amazon Bedrock
    - OpenRouter, xAI, Mistral
    - Groq, Cerebras, DeepInfra
    - Cohere, Together AI, Perplexity
    - Vercel AI, GitLab, GitHub Copilot
    
    Custom Loader Providers:
    - GitHub Copilot Enterprise
    - Google Vertex Anthropic
    - Azure Cognitive Services
    - Cloudflare AI Gateway
    - SAP AI Core
    
    Local/Self-Hosted:
    - Ollama, LM Studio, vLLM
    
    Features:
    - Model fetching from provider APIs
    - Custom model input support
    - Multi-provider configuration
    - Environment variable security
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Claw Setup skill for AI Agent deployment
    End-to-end professional setup of AI Agent platforms:
    - OpenClaw (full-featured, 215K stars)
    - NanoBot (Python, lightweight)
    - PicoClaw (Go, ultra-light)
    - ZeroClaw (Rust, minimal)
    - NanoClaw (WhatsApp focused)
    
    Features:
    - Platform selection with comparison
    - Security hardening (secrets, network, systemd)
    - Interactive brainstorming for customization
    - AI provider configuration with 12+ providers
    - Model fetching from provider APIs
    - Custom model input support
    
    Providers supported:
    Anthropic, OpenAI, Google, OpenRouter, Groq,
    Cerebras, Together AI, DeepSeek, Mistral, xAI, Ollama
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add 6 new Claude Code skills
    Skills added:
    - 🔐 Secret Scanner: Detect leaked credentials in codebases
    - 🏛️ Git Archaeologist: Analyze git history, find bugs
    - 💾 Backup Automator: Automated encrypted cloud backups
    - 🌐 Domain Manager: Unified DNS management
    - 🔒 SSL Guardian: Certificate automation and monitoring
    - 📡 Log Sentinel: Log analysis and anomaly detection
    
    All skills include:
    - SKILL.md with trigger patterns
    - README.md with documentation
    - GLM 5 attribution and disclaimer
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: Add hero section with GLM 5 attribution
    Added prominent hero section linking to GLM 5 Advanced Coding Model
    at https://z.ai/subscribe?ic=R0K78RJKNW
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add RAM Optimizer skill with ZRAM compression
    - ZRAM-based memory compression for Linux servers
    - 2-3x effective memory increase without hardware upgrades
    - KSM (Kernel Samepage Merging) for memory deduplication
    - Sysctl optimizations for low-memory systems
    - Supports Ubuntu/Debian/Fedora/Arch Linux
    - Works on local machines and remote SSH servers
    
    Observed performance gains (workload-dependent):
    - Effective memory: +137% on average
    - Swap I/O latency: -90% (disk swap replaced by in-RAM swap)
    - OOM events: eliminated in testing
    - SSD disk wear: -95%
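
    The ZRAM and KSM setup described above can be sketched with the standard
    Linux sysfs interfaces. This is illustrative, not the skill's actual
    script: the device name `zram0`, the `zstd` algorithm, and the 4G size
    are assumptions, and everything here requires root.

```shell
# Sketch: compressed swap in RAM via ZRAM, plus KSM page deduplication.
sudo modprobe zram                                     # creates /dev/zram0 on most kernels
echo zstd | sudo tee /sys/block/zram0/comp_algorithm   # if zstd is compiled in
echo 4G   | sudo tee /sys/block/zram0/disksize         # uncompressed capacity
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0                          # higher priority than disk swap
echo 1 | sudo tee /sys/kernel/mm/ksm/run               # enable KSM deduplication
```

    Giving the ZRAM device a higher swap priority than any disk swap is what
    moves swap I/O from disk to RAM, which is where the latency and SSD-wear
    reductions come from.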
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>