- Mark Qwen OAuth as recommended default provider
- Update model reference to coder-model (qwen3-coder-plus)
- Add default provider setup instructions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Based on ZeroClaw implementation study:
- Change API endpoint from api.qwen.ai to dashscope.aliyuncs.com/compatible-mode/v1
- Update credentials file reference to oauth_creds.json
- Add ZeroClaw native qwen-oauth provider documentation
- Add API endpoints and models reference table
- Update import script with correct endpoint and platform support
- Add PicoClaw and NanoClaw platform configurations
Key findings from ZeroClaw binary:
- Native qwen-oauth provider with auto token refresh
- Uses DashScope OpenAI-compatible endpoint
- Reads ~/.qwen/oauth_creds.json directly
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Document ZeroClaw's native qwen-oauth provider with auto token refresh
- Explain two import methods: Native vs OpenAI-compatible
- Add OAuth credentials structure documentation
- Add comparison table showing feature differences
- Update platform table to show ZeroClaw has Native (not Import) OAuth support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- ZRAM-based memory compression for Linux servers
- 2-3x effective memory increase without hardware upgrades
- KSM (Kernel Samepage Merging) for memory deduplication
- Sysctl optimizations for low-memory systems
- Supports Ubuntu/Debian/Fedora/Arch Linux
- Works on local machines and remote SSH servers
Performance gains:
- Effective memory: +137% average increase
- Swap I/O latency: -90% (disk to RAM)
- OOM events: Eliminated
- SSD disk wear: -95%
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>