Commit Graph

9 Commits

  • feat: add auto token refresh for ALL platforms
    New qwen-token-refresh.sh script provides automatic token refresh
    for OpenClaw, NanoBot, PicoClaw, NanoClaw (ZeroClaw has native support).
    
    Features:
    - Check token status and expiry
    - Auto-refresh when < 5 min remaining
    - Background daemon mode (5 min intervals)
    - Systemd service installation
    - Updates both oauth_creds.json and .env file
    
    Usage:
      ./scripts/qwen-token-refresh.sh --status   # Check status
      ./scripts/qwen-token-refresh.sh            # Refresh if needed
      ./scripts/qwen-token-refresh.sh --daemon   # Background daemon
      ./scripts/qwen-token-refresh.sh --install  # Systemd service
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
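A minimal sketch of the "refresh when under 5 minutes remain" check described above. The `expiry_date` field name and its Unix-millisecond unit inside oauth_creds.json are assumptions for illustration, not confirmed by this log; the demo runs against a throwaway creds file.

```shell
#!/usr/bin/env sh
# Sketch of the expiry check; field name "expiry_date" (Unix ms) is assumed.
needs_refresh() {                        # $1 = path to oauth_creds.json
  expiry_ms=$(jq -r '.expiry_date' "$1")
  now_ms=$(( $(date +%s) * 1000 ))
  # true (exit 0) when fewer than 5 minutes remain
  [ $(( expiry_ms - now_ms )) -lt $(( 5 * 60 * 1000 )) ]
}

# Demo against a temporary creds file that expires in 10 minutes:
tmp=$(mktemp)
printf '{"expiry_date": %s}\n' $(( ( $(date +%s) + 600 ) * 1000 )) > "$tmp"
if needs_refresh "$tmp"; then echo "needs refresh"; else echo "still valid"; fi   # prints "still valid"
rm -f "$tmp"
```

In daemon mode the same check would simply run in a loop on the 5-minute interval the commit mentions.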
  • docs: add Qwen OAuth auth URL (portal.qwen.ai)
    - Add browser auth URL for re-authentication
    - Clarify auth vs refresh vs API endpoints
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: set qwen as default provider with coder-model
    - Mark Qwen OAuth as recommended default provider
    - Update model reference to coder-model (qwen3-coder-plus)
    - Add default provider setup instructions
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: use correct DashScope API endpoint for Qwen OAuth
    Based on ZeroClaw implementation study:
    - Change API endpoint from api.qwen.ai to dashscope.aliyuncs.com/compatible-mode/v1
    - Update credentials file reference to oauth_creds.json
    - Add ZeroClaw native qwen-oauth provider documentation
    - Add API endpoints and models reference table
    - Update import script with correct endpoint and platform support
    - Add PicoClaw and NanoClaw platform configurations
    
    Key findings from ZeroClaw binary:
    - Native qwen-oauth provider with auto token refresh
    - Uses DashScope OpenAI-compatible endpoint
    - Reads ~/.qwen/oauth_creds.json directly
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
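A smoke test against the endpoint this commit switches to might look like the following. The live call is commented out because it needs a valid OAuth token; the `access_token` field name is an assumption, and the model/message payload is illustrative.

```shell
#!/usr/bin/env sh
# The DashScope OpenAI-compatible endpoint adopted by this commit:
BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
PAYLOAD='{"model": "qwen3-coder-plus", "messages": [{"role": "user", "content": "ping"}]}'

# Live call (requires a valid token; "access_token" field name is assumed):
# QWEN_TOKEN=$(jq -r '.access_token' "$HOME/.qwen/oauth_creds.json")
# curl -s "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer $QWEN_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

echo "$PAYLOAD" | jq -e '.model' >/dev/null && echo "payload ok"   # prints "payload ok"
```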
  • docs: Comprehensive documentation for 25+ providers + Qwen OAuth
    Restructured documentation to highlight both key features:
    
    FEATURE 1: Qwen OAuth Cross-Platform Import (FREE)
    - 2,000 requests/day free tier
    - Works with ALL Claw platforms
    - Browser OAuth via qwen.ai
    - Model: Qwen3-Coder
    
    FEATURE 2: 25+ OpenCode-Compatible Providers
    - Major AI Labs: Anthropic, OpenAI, Google, xAI, Mistral
    - Cloud Platforms: Azure, AWS Bedrock, Google Vertex
    - Fast Inference: Groq, Cerebras
    - Gateways: OpenRouter (100+ models), Together AI
    - Local: Ollama, LM Studio, vLLM
    
    Provider Tiers:
    1. FREE: Qwen OAuth
    2. Major Labs: Anthropic, OpenAI, Google, xAI, Mistral
    3. Cloud: Azure, Bedrock, Vertex
    4. Fast: Groq, Cerebras
    5. Gateways: OpenRouter, Together AI, Vercel
    6. Specialized: Perplexity, Cohere, GitLab, GitHub
    7. Local: Ollama, LM Studio, vLLM
    
    Platforms with full support:
    - Qwen Code (native OAuth)
    - OpenClaw, NanoBot, PicoClaw, ZeroClaw, NanoClaw (import OAuth)
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add Qwen OAuth cross-platform import for ALL Claw platforms
    Key Feature: Use FREE Qwen tier (2,000 req/day) with ANY platform!
    
    How it works:
    1. Get Qwen OAuth: qwen && /auth (FREE)
    2. Extract token from ~/.qwen/
    3. Configure any platform with token
    
    Supported platforms:
    - OpenClaw
    - NanoBot
    - PicoClaw
    - ZeroClaw
    - NanoClaw
    
    Configuration:
      export OPENAI_API_KEY="$QWEN_TOKEN"
      export OPENAI_BASE_URL="https://api.qwen.ai/v1"
      export OPENAI_MODEL="qwen3-coder-plus"
    
    Added:
    - import-qwen-oauth.sh script for automation
    - Cross-platform configuration examples
    - Qwen API endpoints reference
    - Troubleshooting guide
    
    Free tier: 2,000 requests/day, 60 requests/minute
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
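The extract-and-configure steps above can be sketched as a small shell function. The `access_token` field name inside ~/.qwen/oauth_creds.json is an assumption, and note that a later commit in this log replaces the api.qwen.ai base URL with the DashScope compatible-mode endpoint.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of import-qwen-oauth.sh's core step:
# read the OAuth token and expose it via OpenAI-style variables.
import_qwen_token() {                    # $1 = path to oauth_creds.json
  OPENAI_API_KEY=$(jq -r '.access_token' "$1") || return 1   # field name assumed
  export OPENAI_API_KEY
  export OPENAI_BASE_URL="https://api.qwen.ai/v1"
  export OPENAI_MODEL="qwen3-coder-plus"
}

# Demo with a throwaway creds file:
tmp=$(mktemp)
echo '{"access_token": "tok-example"}' > "$tmp"
import_qwen_token "$tmp" && echo "OPENAI_API_KEY=$OPENAI_API_KEY"   # prints "OPENAI_API_KEY=tok-example"
rm -f "$tmp"
```

Any platform that speaks the OpenAI-compatible protocol then picks the token up from the environment.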
  • feat: Add Qwen Code with FREE OAuth tier (2,000 requests/day)
    New platform option with no API key required:
    
    Qwen Code Features:
    - FREE OAuth tier: 2,000 requests/day
    - Model: Qwen3-Coder (coder-model)
    - Auth: Browser OAuth via qwen.ai
    - GitHub: https://github.com/QwenLM/qwen-code
    
    Installation:
      npm install -g @qwen-code/qwen-code@latest
      qwen
      /auth  # Select Qwen OAuth
    
    Platform comparison updated:
    - Qwen Code: FREE, ~200MB, coding-optimized
    - OpenClaw: Full-featured, 1700+ plugins
    - NanoBot: Python, research
    - PicoClaw: Go, <10MB
    - ZeroClaw: Rust, <5MB
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: Add all 25+ OpenCode-compatible AI providers to Claw Setup
    Updated provider support to match OpenCode's full provider list:
    
    Built-in Providers (18):
    - Anthropic, OpenAI, Azure OpenAI
    - Google AI, Google Vertex AI
    - Amazon Bedrock
    - OpenRouter, xAI, Mistral
    - Groq, Cerebras, DeepInfra
    - Cohere, Together AI, Perplexity
    - Vercel AI, GitLab, GitHub Copilot
    
    Custom Loader Providers:
    - GitHub Copilot Enterprise
    - Google Vertex Anthropic
    - Azure Cognitive Services
    - Cloudflare AI Gateway
    - SAP AI Core
    
    Local/Self-Hosted:
    - Ollama, LM Studio, vLLM
    
    Features:
    - Model fetching from provider APIs
    - Custom model input support
    - Multi-provider configuration
    - Environment variable security
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
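One common way the "environment variable security" item is realized is a permission-restricted env file sourced at startup. The file name and variable values below are placeholders, not taken from this repository.

```shell
#!/usr/bin/env sh
# Illustrative only: keep provider keys out of shell history and world-readable files.
ENV_FILE="./claw-example.env"            # file name is a placeholder
umask 077                                # files created below are owner-only

cat > "$ENV_FILE" <<'EOF'
ANTHROPIC_API_KEY=replace-me
OPENAI_API_KEY=replace-me
EOF
chmod 600 "$ENV_FILE"

# Load the variables without echoing their values:
set -a; . "$ENV_FILE"; set +a
rm -f "$ENV_FILE"
```

Keeping keys in one owner-only file also makes multi-provider setups easier to rotate: swap the file, restart the service.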
  • feat: Add Claw Setup skill for AI Agent deployment
    End-to-end professional setup of AI Agent platforms:
    - OpenClaw (full-featured, 215K stars)
    - NanoBot (Python, lightweight)
    - PicoClaw (Go, ultra-light)
    - ZeroClaw (Rust, minimal)
    - NanoClaw (WhatsApp-focused)
    
    Features:
    - Platform selection with comparison
    - Security hardening (secrets, network, systemd)
    - Interactive brainstorming for customization
    - AI provider configuration with 12+ providers
    - Model fetching from provider APIs
    - Custom model input support
    
    Providers supported:
    Anthropic, OpenAI, Google, OpenRouter, Groq,
    Cerebras, Together AI, DeepSeek, Mistral, xAI, Ollama
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>