- Remove all hardcoded model name examples from documentation
- Replace outdated model table with live catalog guidance
- Add comprehensive list of major providers (current as of 2025)
- Highlight catalog features: filters, pricing, context length, etc.
- Update example commands to remove specific model references
- Emphasize always checking https://openrouter.ai/models for current models
- Add note that models are added/updated regularly
- Keep documentation future-proof by referencing live catalog
- Ask user which OpenRouter model they want to use
- Provide link to OpenRouter model catalog (https://openrouter.ai/models)
- Guide users to browse, click, and copy model name
- Store selected model in ~/.claude/settings.json
- Add popular model recommendations table
- Document model change process (3 options)
- Add model-specific troubleshooting (model not found, context length)
- Expand supported models section with examples from multiple providers
- Include free tier models (meta-llama with :free suffix)
- Add model name accuracy notes (suffixes like :beta, :free)
- Update example commands to include model selection scenarios
- Add official OpenRouter documentation source link
- Include API endpoint (https://openrouter.ai/api)
- Add all required environment variables with explanations
- Document provider priority (Anthropic 1P)
- Add detailed benefits: failover, budget controls, analytics
- Include verification steps with /status command
- Add troubleshooting section
- Document advanced features: statusline, GitHub Action, Agent SDK
- Add security and privacy notes
- Include important notes about ANTHROPIC_API_KEY being empty
- Reference official docs for up-to-date information
- Add README.md with user-facing documentation
- Add SKILL.md with skill metadata and instructions
- Configure OpenRouter as AI provider for Claude Code
- Support arcee-ai/trinity-mini:free model by default
- All platforms now have IDENTICAL Qwen OAuth integration
- ZeroClaw: native provider (built-in)
- Others: OpenAI-compatible + auto-refresh script
- Same result: FREE tier, auto refresh, same credentials file
- Updated platform table to show "Full" support (not just "Import")
User experience is now identical regardless of platform choice.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New qwen-token-refresh.sh script provides automatic token refresh
for OpenClaw, NanoBot, PicoClaw, NanoClaw (ZeroClaw has native support).
Features:
- Check token status and expiry
- Auto-refresh when < 5 min remaining
- Background daemon mode (5 min intervals)
- Systemd service installation
- Updates both oauth_creds.json and .env file
Usage:
./scripts/qwen-token-refresh.sh --status # Check status
./scripts/qwen-token-refresh.sh # Refresh if needed
./scripts/qwen-token-refresh.sh --daemon # Background daemon
./scripts/qwen-token-refresh.sh --install # Systemd service
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Mark Qwen OAuth as recommended default provider
- Update model reference to coder-model (qwen3-coder-plus)
- Add default provider setup instructions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Based on ZeroClaw implementation study:
- Change API endpoint from api.qwen.ai to dashscope.aliyuncs.com/compatible-mode/v1
- Update credentials file reference to oauth_creds.json
- Add ZeroClaw native qwen-oauth provider documentation
- Add API endpoints and models reference table
- Update import script with correct endpoint and platform support
- Add PicoClaw and NanoClaw platform configurations
Key findings from ZeroClaw binary:
- Native qwen-oauth provider with auto token refresh
- Uses DashScope OpenAI-compatible endpoint
- Reads ~/.qwen/oauth_creds.json directly
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Document ZeroClaw's native qwen-oauth provider with auto token refresh
- Explain two import methods: Native vs OpenAI-compatible
- Add OAuth credentials structure documentation
- Add comparison table showing feature differences
- Update platform table to show ZeroClaw has Native (not Import) OAuth support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- ZRAM-based memory compression for Linux servers
- 2-3x effective memory increase without hardware upgrades
- KSM (Kernel Samepage Merging) for memory deduplication
- Sysctl optimizations for low-memory systems
- Supports Ubuntu/Debian/Fedora/Arch Linux
- Works on local machines and remote SSH servers
Performance gains:
- Effective memory: +137% average increase
- Swap I/O latency: -90% (disk to RAM)
- OOM events: Eliminated
- SSD disk wear: -95%
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>