21 Commits

  • Remove outdated specific models, point to live catalog
    - Remove all hardcoded model name examples from documentation
    - Replace outdated model table with live catalog guidance
    - Add comprehensive list of major providers (current as of 2025)
    - Highlight catalog features: filters, pricing, context length, etc.
    - Update example commands to remove specific model references
    - Emphasize always checking https://openrouter.ai/models for current models
    - Add note that models are added/updated regularly
    - Keep documentation future-proof by referencing live catalog
  • Add interactive model selection to OpenRouter skill
    - Ask user which OpenRouter model they want to use
    - Provide link to OpenRouter model catalog (https://openrouter.ai/models)
    - Guide users to browse, click, and copy model name
    - Store selected model in ~/.claude/settings.json
    - Add popular model recommendations table
    - Document model change process (3 options)
    - Add model-specific troubleshooting (model not found, context length)
    - Expand supported models section with examples from multiple providers
    - Include free tier models (meta-llama with :free suffix)
    - Add model name accuracy notes (suffixes like :beta, :free)
    - Update example commands to include model selection scenarios
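A minimal sketch of how the selected model could be persisted to ~/.claude/settings.json (the env.ANTHROPIC_MODEL key and the use of jq are assumptions about the skill's implementation, not confirmed by these commits):

```shell
# Persist a chosen OpenRouter model into Claude Code's settings file.
# Browse https://openrouter.ai/models and copy the exact model name.
set_model() {  # usage: set_model <provider/model-name> <settings-file>
  tmp=$(mktemp)
  # Merge the model into the JSON without clobbering other settings.
  jq --arg m "$1" '.env.ANTHROPIC_MODEL = $m' "$2" > "$tmp" && mv "$tmp" "$2"
}
```

For example: set_model "provider/model-name" "$HOME/.claude/settings.json" — a placeholder name, per the commit's advice to always pick from the live catalog.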
  • Update OpenRouter Config skill with complete documentation
    - Add official OpenRouter documentation source link
    - Include API endpoint (https://openrouter.ai/api)
    - Add all required environment variables with explanations
    - Document provider priority (Anthropic 1P, i.e. first-party)
    - Add detailed benefits: failover, budget controls, analytics
    - Include verification steps with /status command
    - Add troubleshooting section
    - Document advanced features: statusline, GitHub Action, Agent SDK
    - Add security and privacy notes
    - Include important notes about ANTHROPIC_API_KEY being empty
    - Reference official docs for up-to-date information
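A hedged sketch of the environment this configuration produces (ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are Claude Code's standard override variables; the exact base-URL path is an assumption — verify against the official OpenRouter docs referenced above):

```shell
# Route Claude Code through OpenRouter instead of the Anthropic API.
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"   # endpoint named in this commit
export ANTHROPIC_AUTH_TOKEN="${OPENROUTER_API_KEY:-}"   # your OpenRouter key
export ANTHROPIC_API_KEY=""                             # must stay empty, per the note above
```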
  • Add OpenRouter Config skill
    - Add README.md with user-facing documentation
    - Add SKILL.md with skill metadata and instructions
    - Configure OpenRouter as AI provider for Claude Code
    - Support arcee-ai/trinity-mini:free model by default
  • docs: clarify unified Qwen OAuth experience across ALL platforms
    - All platforms now have IDENTICAL Qwen OAuth integration
    - ZeroClaw: native provider (built-in)
    - Others: OpenAI-compatible + auto-refresh script
    - Same result: FREE tier, auto refresh, same credentials file
    - Updated platform table to show "Full" support (not just "Import")
    
    User experience is now identical regardless of platform choice.
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: add auto token refresh for ALL platforms
    New qwen-token-refresh.sh script provides automatic token refresh
    for OpenClaw, NanoBot, PicoClaw, NanoClaw (ZeroClaw has native support).
    
    Features:
    - Check token status and expiry
    - Auto-refresh when < 5 min remaining
    - Background daemon mode (5 min intervals)
    - Systemd service installation
    - Updates both oauth_creds.json and .env file
    
    Usage:
      ./scripts/qwen-token-refresh.sh --status   # Check status
      ./scripts/qwen-token-refresh.sh            # Refresh if needed
      ./scripts/qwen-token-refresh.sh --daemon   # Background daemon
      ./scripts/qwen-token-refresh.sh --install  # Systemd service
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
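The "< 5 min remaining" check above can be sketched as follows (an expiry_date field in milliseconds matches Qwen Code's oauth_creds.json layout, but treat the field name as an assumption):

```shell
# Report how many seconds remain before the OAuth token expires.
seconds_remaining() {  # usage: seconds_remaining <oauth_creds.json>
  expiry_ms=$(jq -r '.expiry_date' "$1")
  now_ms=$(( $(date +%s) * 1000 ))
  echo $(( (expiry_ms - now_ms) / 1000 ))
}

# True when fewer than 5 minutes remain, i.e. a refresh is due.
needs_refresh() {
  [ "$(seconds_remaining "$1")" -lt 300 ]
}
```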
  • docs: add Qwen OAuth auth URL (portal.qwen.ai)
    - Add browser auth URL for re-authentication
    - Clarify auth vs refresh vs API endpoints
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: set qwen as default provider with coder-model
    - Mark Qwen OAuth as recommended default provider
    - Update model reference to coder-model (qwen3-coder-plus)
    - Add default provider setup instructions
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: use correct DashScope API endpoint for Qwen OAuth
    Based on ZeroClaw implementation study:
    - Change API endpoint from api.qwen.ai to dashscope.aliyuncs.com/compatible-mode/v1
    - Update credentials file reference to oauth_creds.json
    - Add ZeroClaw native qwen-oauth provider documentation
    - Add API endpoints and models reference table
    - Update import script with correct endpoint and platform support
    - Add PicoClaw and NanoClaw platform configurations
    
    Key findings from ZeroClaw binary:
    - Native qwen-oauth provider with auto token refresh
    - Uses DashScope OpenAI-compatible endpoint
    - Reads ~/.qwen/oauth_creds.json directly
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
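With those findings, importing the credentials into any OpenAI-compatible platform reduces to roughly this (the access_token field name mirrors Qwen Code's oauth_creds.json; treat it as an assumption):

```shell
# Point an OpenAI-compatible Claw platform at Qwen's DashScope endpoint.
CREDS="$HOME/.qwen/oauth_creds.json"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
[ -f "$CREDS" ] && export OPENAI_API_KEY="$(jq -r '.access_token' "$CREDS")"
export OPENAI_MODEL="qwen3-coder-plus"
```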
  • docs: clarify ZeroClaw native qwen-oauth vs OpenAI-compatible import
    - Document ZeroClaw's native qwen-oauth provider with auto token refresh
    - Explain two import methods: Native vs OpenAI-compatible
    - Add OAuth credentials structure documentation
    - Add comparison table showing feature differences
    - Update platform table to show ZeroClaw has Native (not Import) OAuth support
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: Comprehensive claw-setup skill documentation
    Added complete documentation covering all features:
    
    FEATURES DOCUMENTED:
    1. FREE Qwen OAuth Cross-Platform Import
       - 2,000 requests/day free tier
       - Works with ALL Claw platforms
       - Platform-specific import guides
    
    2. 25+ OpenCode-Compatible AI Providers
       - Tier 1: FREE (Qwen OAuth)
       - Tier 2: Major Labs (Anthropic, OpenAI, Google, xAI, Mistral)
       - Tier 3: Cloud (Azure, Bedrock, Vertex)
       - Tier 4: Gateways (OpenRouter 100+, Together AI)
       - Tier 5: Fast (Groq, Cerebras)
       - Tier 6: Specialized (Perplexity, Cohere, GitLab)
       - Tier 7: Local (Ollama, LM Studio, vLLM)
    
    3. Customization Options
       - Model selection (fetch or custom)
       - Security hardening
       - Interactive brainstorming
       - Multi-provider configuration
    
    4. Installation Guides
       - All 6 platforms with step-by-step instructions
    
    5. Configuration Examples
       - Multi-provider setup
       - Environment variables
       - Custom models
    
    6. Usage Examples
       - Basic, advanced, and provider-specific
    
    7. Troubleshooting
       - Common issues and solutions
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: Comprehensive documentation for 25+ providers + Qwen OAuth
    Restructured documentation to highlight both key features:
    
    FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
    - 2,000 requests/day free tier
    - Works with ALL Claw platforms
    - Browser OAuth via qwen.ai
    - Model: Qwen3-Coder
    
    FEATURE 2: 25+ OpenCode-Compatible Providers
    - Major AI Labs: Anthropic, OpenAI, Google, xAI, Mistral
    - Cloud Platforms: Azure, AWS Bedrock, Google Vertex
    - Fast Inference: Groq, Cerebras
    - Gateways: OpenRouter (100+ models), Together AI
    - Local: Ollama, LM Studio, vLLM
    
    Provider Tiers:
    1. FREE: Qwen OAuth
    2. Major Labs: Anthropic, OpenAI, Google, xAI, Mistral
    3. Cloud: Azure, Bedrock, Vertex
    4. Fast: Groq, Cerebras
    5. Gateways: OpenRouter, Together AI, Vercel
    6. Specialized: Perplexity, Cohere, GitLab, GitHub
    7. Local: Ollama, LM Studio, vLLM
    
    Platforms with full support:
    - Qwen Code (native OAuth)
    - OpenClaw, NanoBot, PicoClaw, ZeroClaw (import OAuth)
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Qwen OAuth cross-platform import for ALL Claw platforms
    Key Feature: Use FREE Qwen tier (2,000 req/day) with ANY platform!
    
    How it works:
    1. Get Qwen OAuth: run qwen, then /auth inside the CLI (FREE)
    2. Extract token from ~/.qwen/
    3. Configure any platform with token
    
    Supported platforms:
    - OpenClaw
    - NanoBot
    - PicoClaw
    - ZeroClaw
    - NanoClaw
    
    Configuration:
      export OPENAI_API_KEY="$QWEN_TOKEN"
      export OPENAI_BASE_URL="https://api.qwen.ai/v1"
      export OPENAI_MODEL="qwen3-coder-plus"
    
    Added:
    - import-qwen-oauth.sh script for automation
    - Cross-platform configuration examples
    - Qwen API endpoints reference
    - Troubleshooting guide
    
    Free tier: 2,000 requests/day, 60 requests/minute
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)
    New platform option with no API key required:
    
    Qwen Code Features:
    - FREE OAuth tier: 2,000 requests/day
    - Model: Qwen3-Coder (coder-model)
    - Auth: Browser OAuth via qwen.ai
    - GitHub: https://github.com/QwenLM/qwen-code
    
    Installation:
      npm install -g @qwen-code/qwen-code@latest
      qwen
      /auth  # Select Qwen OAuth
    
    Platform comparison updated:
    - Qwen Code: FREE, ~200MB, coding-optimized
    - OpenClaw: Full-featured, 1700+ plugins
    - NanoBot: Python, research
    - PicoClaw: Go, <10MB
    - ZeroClaw: Rust, <5MB
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup
    Updated provider support to match OpenCode's full provider list:
    
    Built-in Providers (18):
    - Anthropic, OpenAI, Azure OpenAI
    - Google AI, Google Vertex AI
    - Amazon Bedrock
    - OpenRouter, xAI, Mistral
    - Groq, Cerebras, DeepInfra
    - Cohere, Together AI, Perplexity
    - Vercel AI, GitLab, GitHub Copilot
    
    Custom Loader Providers:
    - GitHub Copilot Enterprise
    - Google Vertex Anthropic
    - Azure Cognitive Services
    - Cloudflare AI Gateway
    - SAP AI Core
    
    Local/Self-Hosted:
    - Ollama, LM Studio, vLLM
    
    Features:
    - Model fetching from provider APIs
    - Custom model input support
    - Multi-provider configuration
    - Environment variable security
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Claw Setup skill for AI Agent deployment
    End-to-end professional setup of AI Agent platforms:
    - OpenClaw (full-featured, 215K stars)
    - NanoBot (Python, lightweight)
    - PicoClaw (Go, ultra-light)
    - ZeroClaw (Rust, minimal)
    - NanoClaw (WhatsApp focused)
    
    Features:
    - Platform selection with comparison
    - Security hardening (secrets, network, systemd)
    - Interactive brainstorming for customization
    - AI provider configuration with 12+ providers
    - Model fetching from provider APIs
    - Custom model input support
    
    Providers supported:
    Anthropic, OpenAI, Google, OpenRouter, Groq,
    Cerebras, Together AI, DeepSeek, Mistral, xAI, Ollama
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add 6 new Claude Code skills
    Skills added:
    - 🔐 Secret Scanner: Detect leaked credentials in codebases
    - 🏛️ Git Archaeologist: Analyze git history, find bugs
    - 💾 Backup Automator: Automated encrypted cloud backups
    - 🌐 Domain Manager: Unified DNS management
    - 🔒 SSL Guardian: Certificate automation and monitoring
    - 📡 Log Sentinel: Log analysis and anomaly detection
    
    All skills include:
    - SKILL.md with trigger patterns
    - README.md with documentation
    - GLM 5 attribution and disclaimer
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: Add hero section with GLM 5 attribution
    Added prominent hero section linking to GLM 5 Advanced Coding Model
    at https://z.ai/subscribe?ic=R0K78RJKNW
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add RAM Optimizer skill with ZRAM compression
    - ZRAM-based memory compression for Linux servers
    - 2-3x effective memory increase without hardware upgrades
    - KSM (Kernel Samepage Merging) for memory deduplication
    - Sysctl optimizations for low-memory systems
    - Supports Ubuntu/Debian/Fedora/Arch Linux
    - Works on local machines and remote SSH servers
    
    Performance gains:
    - Effective memory: +137% average increase
    - Swap I/O latency: -90% (swap served from RAM instead of disk)
    - OOM events: Eliminated
    - SSD disk wear: -95%
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
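An illustrative sketch of the ZRAM setup described above (requires root on Linux; the half-of-RAM sizing, zstd algorithm, and swap priority are common defaults assumed here, not necessarily the skill's exact script):

```shell
# Size the ZRAM device at half of physical RAM (a common heuristic).
zram_size_kb() {  # usage: zram_size_kb [memtotal_kb]; defaults to /proc/meminfo
  if [ -n "${1:-}" ]; then
    echo $(( $1 / 2 ))
  else
    awk '/^MemTotal:/ {print int($2 / 2)}' /proc/meminfo
  fi
}

# Create and enable a compressed swap device backed by RAM (run as root).
setup_zram() {
  modprobe zram num_devices=1
  echo zstd > /sys/block/zram0/comp_algorithm   # compression algorithm
  echo "$(zram_size_kb)K" > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0                      # prefer ZRAM over disk swap
}
```

Because compressed pages live in RAM, swap reads no longer touch the disk, which is where the latency and SSD-wear reductions come from.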