Compare commits

...

10 Commits

  • Remove outdated specific models, point to live catalog
    - Remove all hardcoded model name examples from documentation
    - Replace outdated model table with live catalog guidance
    - Add comprehensive list of major providers (current as of 2025)
    - Highlight catalog features: filters, pricing, context length, etc.
    - Update example commands to remove specific model references
    - Emphasize always checking https://openrouter.ai/models for current models
    - Add note that models are added/updated regularly
    - Keep documentation future-proof by referencing live catalog
  • Add interactive model selection to OpenRouter skill
    - Ask user which OpenRouter model they want to use
    - Provide link to OpenRouter model catalog (https://openrouter.ai/models)
    - Guide users to browse, click, and copy model name
    - Store selected model in ~/.claude/settings.json
    - Add popular model recommendations table
    - Document model change process (3 options)
    - Add model-specific troubleshooting (model not found, context length)
    - Expand supported models section with examples from multiple providers
    - Include free tier models (meta-llama with :free suffix)
    - Add model name accuracy notes (suffixes like :beta, :free)
    - Update example commands to include model selection scenarios
  • Update OpenRouter Config skill with complete documentation
    - Add official OpenRouter documentation source link
    - Include API endpoint (https://openrouter.ai/api)
    - Add all required environment variables with explanations
    - Document provider priority (Anthropic 1P)
    - Add detailed benefits: failover, budget controls, analytics
    - Include verification steps with /status command
    - Add troubleshooting section
    - Document advanced features: statusline, GitHub Action, Agent SDK
    - Add security and privacy notes
    - Include important notes about ANTHROPIC_API_KEY being empty
    - Reference official docs for up-to-date information
  • Add OpenRouter Config skill
    - Add README.md with user-facing documentation
    - Add SKILL.md with skill metadata and instructions
    - Configure OpenRouter as AI provider for Claude Code
    - Support arcee-ai/trinity-mini:free model by default
  • docs: clarify unified Qwen OAuth experience across ALL platforms
    - All platforms now have IDENTICAL Qwen OAuth integration
    - ZeroClaw: native provider (built-in)
    - Others: OpenAI-compatible + auto-refresh script
    - Same result: FREE tier, auto refresh, same credentials file
    - Updated platform table to show "Full" support (not just "Import")
    
    User experience is now identical regardless of platform choice.
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • feat: add auto token refresh for ALL platforms
    New qwen-token-refresh.sh script provides automatic token refresh
    for OpenClaw, NanoBot, PicoClaw, NanoClaw (ZeroClaw has native support).
    
    Features:
    - Check token status and expiry
    - Auto-refresh when < 5 min remaining
    - Background daemon mode (5 min intervals)
    - Systemd service installation
    - Updates both oauth_creds.json and .env file
    
    Usage:
      ./scripts/qwen-token-refresh.sh --status   # Check status
      ./scripts/qwen-token-refresh.sh            # Refresh if needed
      ./scripts/qwen-token-refresh.sh --daemon   # Background daemon
      ./scripts/qwen-token-refresh.sh --install  # Systemd service
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: add Qwen OAuth auth URL (portal.qwen.ai)
    - Add browser auth URL for re-authentication
    - Clarify auth vs refresh vs API endpoints
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  • docs: set qwen as default provider with coder-model
    - Mark Qwen OAuth as recommended default provider
    - Update model reference to coder-model (qwen3-coder-plus)
    - Add default provider setup instructions
    
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
7 changed files with 1130 additions and 48 deletions

View File

@@ -26,12 +26,13 @@
</div>
-## Skills Index (8 Skills)
## Skills Index (9 Skills)
### AI & Automation
| Skill | Description | Status |
|-------|-------------|--------|
| [🦞 Claw Setup](./skills/claw-setup/) | AI Agent deployment + **25+ providers** + **FREE Qwen OAuth** | ✅ Production Ready |
| [🔀 OpenRouter Config](./skills/openrouter-config/) | Configure OpenRouter as AI provider for Claude Code | ✅ Production Ready |
### System Administration
| Skill | Description | Status |

View File

@@ -4,7 +4,7 @@
### The Ultimate AI Agent Deployment Skill
-**Setup ANY Claw platform with 25+ AI providers + FREE Qwen OAuth + Full Customization**
**Setup ANY Claw platform with 25+ AI providers + FREE model providers like OpenRouter and Qwen via OAuth + Full Customization**
---
@@ -75,11 +75,11 @@
| Platform | Language | Memory | Startup | Qwen OAuth | All Providers | Best For |
|----------|----------|--------|---------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ~5s | ✅ Native | ✅ | FREE coding |
-| **OpenClaw** | TypeScript | >1GB | ~500s | ✅ Import | ✅ | Full-featured, 1700+ plugins |
-| **NanoBot** | Python | ~100MB | ~30s | ✅ Import | ✅ | Research, Python devs |
-| **PicoClaw** | Go | <10MB | ~1s | ✅ Import | ✅ | Embedded, $10 hardware |
| **ZeroClaw** | Rust | <5MB | <10ms | ✅ Native | ✅ | Maximum performance |
-| **NanoClaw** | TypeScript | ~50MB | ~5s | ✅ Import | ✅ | WhatsApp integration |
| **OpenClaw** | TypeScript | >1GB | ~500s | ✅ Full | ✅ | Full-featured, 1700+ plugins |
| **NanoBot** | Python | ~100MB | ~30s | ✅ Full | ✅ | Research, Python devs |
| **PicoClaw** | Go | <10MB | ~1s | ✅ Full | ✅ | Embedded, $10 hardware |
| **NanoClaw** | TypeScript | ~50MB | ~5s | ✅ Full | ✅ | WhatsApp integration |
### Platform Selection Guide
@@ -120,8 +120,9 @@
| **Requests/day** | 2,000 |
| **Requests/minute** | 60 |
| **Cost** | **FREE** |
-| **Model** | Qwen3-Coder (coding-optimized) |
| **Model** | `coder-model` (qwen3-coder-plus) |
| **Auth** | Browser OAuth via qwen.ai |
| **Default** | ✅ Recommended default provider |
### Quick Start
@@ -212,37 +213,33 @@ CONFIG
zeroclaw gateway
```
-### Understanding Qwen OAuth Import Methods
### Qwen OAuth Integration - SAME Experience on ALL Platforms
```
-┌─────────────────────────────────────────────────────────────────────────────┐
-│ QWEN OAUTH IMPORT METHODS                                                   │
-├─────────────────────────────────────────────────────────────────────────────┤
-│                                                                             │
-│ METHOD 1: Native Provider (ZeroClaw ONLY)                                   │
-│ ─────────────────────────────────────────────────────                       │
-│ • ZeroClaw has built-in "qwen-oauth" provider                               │
-│ • Reads ~/.qwen/oauth_creds.json directly                                   │
-│ • Automatic token refresh using refresh_token                               │
-│ • Tracks expiry_date and refreshes when needed                              │
-│ • Configuration: default_provider = "qwen-oauth"                            │
-│                                                                             │
-│ METHOD 2: OpenAI-Compatible (All Other Platforms)                           │
-│ ─────────────────────────────────────────────────────                       │
-│ • Treats Qwen API as OpenAI-compatible endpoint                             │
-│ • Extract access_token and use as OPENAI_API_KEY                            │
-│ • Set OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1     │
-│ • Manual re-export needed when token expires                                │
-│                                                                             │
-│ COMPARISON:                                                                 │
-│ ┌─────────────────┬────────────────┬─────────────────────┐                  │
-│ │ Feature         │ Native         │ OpenAI-Compatible   │                  │
-│ ├─────────────────┼────────────────┼─────────────────────┤                  │
-│ │ Token Refresh   │ ✅ Automatic   │ ❌ Manual           │                  │
-│ │ Token Expiry    │ ✅ Handled     │ ⚠️ Re-export needed │                  │
-│ │ Platforms       │ ZeroClaw only  │ All others          │                  │
-│ │ Config File     │ ~/.qwen/oauth_creds.json │ env vars  │                  │
-│ └─────────────────┴────────────────┴─────────────────────┘                  │
-│                                                                             │
-└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ QWEN OAUTH - UNIFIED EXPERIENCE ACROSS ALL PLATFORMS                        │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│ ALL PLATFORMS NOW HAVE:                                                     │
│ ────────────────────────                                                    │
│ ✅ FREE: 2,000 requests/day, 60 req/min                                     │
│ ✅ Model: coder-model (qwen3-coder-plus)                                    │
│ ✅ Auto Token Refresh (via refresh_token)                                   │
│ ✅ Same credentials file: ~/.qwen/oauth_creds.json                          │
│ ✅ Same API endpoint: dashscope.aliyuncs.com/compatible-mode/v1             │
│                                                                             │
│ IMPLEMENTATION:                                                             │
│ ┌─────────────┬──────────────────────────────────────────────────┐          │
│ │ Platform    │ How It Works                                     │          │
│ ├─────────────┼──────────────────────────────────────────────────┤          │
│ │ ZeroClaw    │ Native "qwen-oauth" provider (built-in)          │          │
│ │ OpenClaw    │ OpenAI-compatible + auto-refresh script          │          │
│ │ NanoBot     │ OpenAI-compatible + auto-refresh script          │          │
│ │ PicoClaw    │ OpenAI-compatible + auto-refresh script          │          │
│ │ NanoClaw    │ OpenAI-compatible + auto-refresh script          │          │
│ └─────────────┴──────────────────────────────────────────────────┘          │
│                                                                             │
│ RESULT: User experience is IDENTICAL across all platforms!                  │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
@@ -267,13 +264,37 @@ Qwen Code stores OAuth credentials in `~/.qwen/oauth_creds.json`:
| `refresh_token` | Used to get new access_token when expired |
| `expiry_date` | Unix timestamp when access_token expires |
### Auto Token Refresh for ALL Platforms
```bash
# Check token status
./scripts/qwen-token-refresh.sh --status
# Refresh if expired (5 min buffer)
./scripts/qwen-token-refresh.sh
# Run as background daemon
./scripts/qwen-token-refresh.sh --daemon
# Install as systemd service (auto-start)
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
```
The refresh script:
- Checks token expiry every 5 minutes
- Refreshes automatically when < 5 min remaining
- Updates `~/.qwen/oauth_creds.json` and `~/.qwen/.env`
- Works for ALL platforms (OpenClaw, NanoBot, PicoClaw, NanoClaw)
### API Endpoints
| Endpoint | URL |
|----------|-----|
| **Auth (Browser)** | `https://portal.qwen.ai` |
| **Token Refresh** | `https://chat.qwen.ai/api/v1/oauth2/token` |
| **API Base** | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
-| **Chat Completions** | `https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions` |
| **Chat Completions** | `/chat/completions` |
### Available Models (FREE Tier)
@@ -577,6 +598,7 @@ skills/claw-setup/
├── README.md                # This documentation
└── scripts/
    ├── import-qwen-oauth.sh   # Import FREE Qwen OAuth to any platform
    ├── qwen-token-refresh.sh  # Auto-refresh tokens (daemon/systemd)
    └── fetch-models.sh        # Fetch models from all providers
```
@@ -596,11 +618,16 @@ find ~/.qwen -name "*.json"
### Token Expired
```bash
-# Tokens auto-refresh in Qwen Code
-qwen -p "refresh"
-# Re-export
# Option 1: Use auto-refresh script
./scripts/qwen-token-refresh.sh

# Option 2: Manual re-auth
qwen --auth-type qwen-oauth -p "test"
source ~/.qwen/.env

# Option 3: Install systemd service for auto-refresh
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
```
### API Errors

View File

@@ -19,9 +19,10 @@ End-to-end professional setup of AI Agent platforms with **25+ OpenCode-compatib
│ ───────────────────────────────────────────────────         │
│ • FREE: 2,000 requests/day, 60 req/min                      │
│ • Works with: OpenClaw, NanoBot, PicoClaw, ZeroClaw         │
-│ • Model: Qwen3-Coder (optimized for coding)                 │
│ • Model: coder-model (qwen3-coder-plus)                     │
│ • Auth: Browser OAuth via qwen.ai                           │
-│ • Token refresh: Automatic (ZeroClaw) / Manual (others)     │
│ • Token refresh: Automatic (ALL platforms)                  │
│ • DEFAULT: Recommended as primary provider                  │
│                                                             │
│ FEATURE 2: 25+ OpenCode-Compatible AI Providers             │
│ ─────────────────────────────────────────────────           │
@@ -40,10 +41,16 @@ End-to-end professional setup of AI Agent platforms with **25+ OpenCode-compatib
|----------|----------|--------|------------|---------------|----------|
| **Qwen Code** | TypeScript | ~200MB | ✅ Native | ✅ | FREE coding |
| **ZeroClaw** | Rust | <5MB | ✅ Native | ✅ | Max performance |
-| **PicoClaw** | Go | <10MB | ✅ Import | ✅ | Embedded |
-| **OpenClaw** | TypeScript | >1GB | ✅ Import | ✅ | Full-featured |
-| **NanoBot** | Python | ~100MB | ✅ Import | ✅ | Research |
-| **NanoClaw** | TypeScript | ~50MB | ✅ Import | ✅ | WhatsApp |
| **OpenClaw** | TypeScript | >1GB | ✅ Full | ✅ | Full-featured |
| **NanoBot** | Python | ~100MB | ✅ Full | ✅ | Research |
| **PicoClaw** | Go | <10MB | ✅ Full | ✅ | Embedded |
| **NanoClaw** | TypeScript | ~50MB | ✅ Full | ✅ | WhatsApp |
**All platforms have IDENTICAL Qwen OAuth experience:**
- FREE tier: 2,000 requests/day
- Auto token refresh
- Same credentials: `~/.qwen/oauth_creds.json`
- Same API: `dashscope.aliyuncs.com`
---
@@ -78,18 +85,34 @@ qwen --auth-type qwen-oauth -p "test"
| Endpoint | URL |
|----------|-----|
| **Auth (Browser)** | `https://portal.qwen.ai` |
| **Token Refresh** | `https://chat.qwen.ai/api/v1/oauth2/token` |
| **API Base** | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| **Chat Completions** | `/chat/completions` |
## Available Models (FREE Tier)
| Model | Best For |
|-------|----------|
-| `qwen3-coder-plus` | Coding (recommended) |
| `coder-model` (qwen3-coder-plus) | Coding (DEFAULT) |
| `qwen3-coder-flash` | Fast coding |
| `qwen-max` | Complex tasks |
## Set as Default Provider
```bash
# ZeroClaw - Native qwen-oauth as default
cat > ~/.zeroclaw/config.toml << EOF
default_provider = "qwen-oauth"
default_model = "coder-model" # qwen3-coder-plus
EOF
# Other platforms - Set environment variables
export OPENAI_API_KEY=$(cat ~/.qwen/oauth_creds.json | jq -r '.access_token')
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="coder-model" # qwen3-coder-plus
```
## Import Methods
### ZeroClaw (Native Provider - Auto Token Refresh)
@@ -128,6 +151,46 @@ picoclaw # PicoClaw with FREE Qwen
nanoclaw # NanoClaw with FREE Qwen
```
## Auto Token Refresh (ALL Platforms)
Use the included refresh script to automatically refresh expired tokens:
```bash
# Check token status
./scripts/qwen-token-refresh.sh --status
# Refresh if expired
./scripts/qwen-token-refresh.sh
# Run as background daemon (checks every 5 min)
./scripts/qwen-token-refresh.sh --daemon
# Install as systemd service (auto-start on boot)
./scripts/qwen-token-refresh.sh --install
systemctl --user enable --now qwen-token-refresh
```
### How Auto-Refresh Works
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ AUTO TOKEN REFRESH FLOW │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. Check expiry_date in ~/.qwen/oauth_creds.json │
│ 2. If expired (< 5 min buffer): │
│ POST https://chat.qwen.ai/api/v1/oauth2/token │
│ Body: grant_type=refresh_token&refresh_token=xxx │
│ 3. Response: { access_token, refresh_token, expires_in } │
│ 4. Update ~/.qwen/oauth_creds.json with new tokens │
│ 5. Update ~/.qwen/.env with new OPENAI_API_KEY │
│ 6. Platforms using source ~/.qwen/.env get fresh token │
│ │
│ Systemd service: Runs every 5 minutes, refreshes when needed │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
```
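The refresh call in steps 2-3 of the flow above can be sketched as a standalone command. The token value below is a placeholder; the request is built but not sent (uncomment the `curl` line to actually hit the endpoint):

```shell
# Sketch of the token refresh request; refresh_token value is a placeholder
REFRESH_URL="https://chat.qwen.ai/api/v1/oauth2/token"
refresh_token="rt-example"   # in practice: jq -r '.refresh_token' ~/.qwen/oauth_creds.json
body="grant_type=refresh_token&refresh_token=$refresh_token"
echo "POST $REFRESH_URL"
echo "body: $body"
# curl -s -X POST "$REFRESH_URL" -H "Content-Type: application/x-www-form-urlencoded" -d "$body"
```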
---
# FEATURE 2: 25+ OpenCode-Compatible AI Providers

View File

@@ -199,4 +199,10 @@ echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "FREE Qwen OAuth: 2,000 requests/day, 60 req/min"
echo "API Endpoint: https://dashscope.aliyuncs.com/compatible-mode/v1"
echo ""
echo "🔄 AUTO TOKEN REFRESH (for non-ZeroClaw platforms):"
echo " ./scripts/qwen-token-refresh.sh --status # Check status"
echo " ./scripts/qwen-token-refresh.sh # Refresh if needed"
echo " ./scripts/qwen-token-refresh.sh --daemon # Background daemon"
echo " ./scripts/qwen-token-refresh.sh --install # Systemd service"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

View File

@@ -0,0 +1,335 @@
#!/bin/bash
# qwen-token-refresh.sh - Auto-refresh Qwen OAuth token
# Usage:
# ./qwen-token-refresh.sh # Check and refresh if expired
# ./qwen-token-refresh.sh --daemon # Run as background daemon
# ./qwen-token-refresh.sh --status # Check token status
#
# This enables auto token refresh for ALL platforms:
# OpenClaw, NanoBot, PicoClaw, NanoClaw
#
# For ZeroClaw, this is NOT needed (native support).
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
CREDS_FILE="$HOME/.qwen/oauth_creds.json"
ENV_FILE="$HOME/.qwen/.env"
REFRESH_URL="https://chat.qwen.ai/api/v1/oauth2/token"
API_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
# Token refresh buffer (refresh 5 minutes before expiry)
REFRESH_BUFFER_SECONDS=300
check_dependencies() {
if ! command -v jq &> /dev/null; then
echo -e "${RED}Error: jq is required${NC}"
echo "Install with: sudo apt install jq"
exit 1
fi
if ! command -v curl &> /dev/null; then
echo -e "${RED}Error: curl is required${NC}"
exit 1
fi
}
get_token_status() {
if [ ! -f "$CREDS_FILE" ]; then
echo "NOT_FOUND"
return
fi
local expiry_date=$(cat "$CREDS_FILE" | jq -r '.expiry_date // 0')
local current_time=$(date +%s)000
if [ "$expiry_date" -eq 0 ]; then
echo "UNKNOWN"
return
fi
local expiry_seconds=$((expiry_date / 1000))
local current_seconds=$((current_time / 1000))
local remaining=$((expiry_seconds - current_seconds))
if [ "$remaining" -lt 0 ]; then
echo "EXPIRED"
elif [ "$remaining" -lt "$REFRESH_BUFFER_SECONDS" ]; then
echo "EXPIRING_SOON"
else
echo "VALID"
fi
}
format_time_remaining() {
local expiry_date=$1
local expiry_seconds=$((expiry_date / 1000))
local current_seconds=$(date +%s)
local remaining=$((expiry_seconds - current_seconds))
if [ "$remaining" -lt 0 ]; then
echo "expired"
elif [ "$remaining" -lt 60 ]; then
echo "${remaining}s"
elif [ "$remaining" -lt 3600 ]; then
echo "$((remaining / 60))m"
else
echo "$((remaining / 3600))h $(((remaining % 3600) / 60))m"
fi
}
refresh_token() {
local refresh_token=$(cat "$CREDS_FILE" | jq -r '.refresh_token // empty')
if [ -z "$refresh_token" ] || [ "$refresh_token" = "null" ]; then
echo -e "${RED}Error: No refresh_token found in credentials${NC}"
echo "Please re-authenticate: qwen --auth-type qwen-oauth -p 'test'"
return 1
fi
echo -e "${BLUE}Refreshing token...${NC}"
local response=$(curl -s -X POST "$REFRESH_URL" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token" \
-d "refresh_token=$refresh_token" \
2>/dev/null)
if [ -z "$response" ]; then
echo -e "${RED}Error: Empty response from refresh endpoint${NC}"
return 1
fi
local new_access_token=$(echo "$response" | jq -r '.access_token // empty')
local new_refresh_token=$(echo "$response" | jq -r '.refresh_token // empty')
local expires_in=$(echo "$response" | jq -r '.expires_in // 3600')
if [ -z "$new_access_token" ] || [ "$new_access_token" = "null" ]; then
echo -e "${RED}Error: Failed to get new access_token${NC}"
echo "Response: $response"
echo ""
echo "Please re-authenticate: qwen --auth-type qwen-oauth -p 'test'"
return 1
fi
# Calculate new expiry date (current time + expires_in, in milliseconds)
local current_ms=$(date +%s)000
local expires_in_ms=$((expires_in * 1000))
local new_expiry=$((current_ms + expires_in_ms))
# Update credentials file
local resource_url=$(cat "$CREDS_FILE" | jq -r '.resource_url // "portal.qwen.ai"')
cat > "$CREDS_FILE" << EOF
{
"access_token": "$new_access_token",
"refresh_token": "${new_refresh_token:-$refresh_token}",
"token_type": "Bearer",
"resource_url": "$resource_url",
"expiry_date": $new_expiry
}
EOF
chmod 600 "$CREDS_FILE"
# Update .env file for other platforms
mkdir -p "$HOME/.qwen"
cat > "$ENV_FILE" << EOF
# Qwen OAuth Configuration (auto-refreshed: $(date))
export OPENAI_API_KEY="$new_access_token"
export OPENAI_BASE_URL="$API_BASE_URL"
export OPENAI_MODEL="coder-model"
EOF
chmod 600 "$ENV_FILE"
echo -e "${GREEN}✅ Token refreshed successfully!${NC}"
echo " New token expires in: $(format_time_remaining $new_expiry)"
echo ""
echo "Environment updated. Run to apply:"
echo " source ~/.qwen/.env"
return 0
}
show_status() {
echo "🦞 Qwen OAuth Token Status"
echo "==========================="
echo ""
if [ ! -f "$CREDS_FILE" ]; then
echo -e "${RED}❌ Credentials file not found${NC}"
echo " Expected: $CREDS_FILE"
echo ""
echo "To authenticate, run:"
echo " qwen --auth-type qwen-oauth -p 'test'"
exit 1
fi
local status=$(get_token_status)
local expiry_date=$(cat "$CREDS_FILE" | jq -r '.expiry_date // 0')
local access_token=$(cat "$CREDS_FILE" | jq -r '.access_token // ""')
echo "Credentials: $CREDS_FILE"
echo "Token: ${access_token:0:20}...${access_token: -10}"
echo ""
case "$status" in
VALID)
echo -e "Status: ${GREEN}✅ VALID${NC}"
echo "Expires in: $(format_time_remaining $expiry_date)"
;;
EXPIRING_SOON)
echo -e "Status: ${YELLOW}⚠️ EXPIRING SOON${NC}"
echo "Expires in: $(format_time_remaining $expiry_date)"
echo ""
echo "Run refresh: $0"
;;
EXPIRED)
echo -e "Status: ${RED}❌ EXPIRED${NC}"
echo ""
echo "Run refresh: $0"
;;
UNKNOWN)
echo -e "Status: ${YELLOW}⚠️ UNKNOWN${NC}"
echo "Expiry date not found in credentials"
;;
esac
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "FREE tier: 2,000 requests/day, 60 req/min"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
}
run_daemon() {
echo "🦞 Qwen Token Refresh Daemon"
echo "============================="
echo ""
echo "Starting background refresh service..."
echo "Check interval: 5 minutes"
echo "Refresh buffer: ${REFRESH_BUFFER_SECONDS}s before expiry"
echo ""
echo "Press Ctrl+C to stop"
echo ""
while true; do
local status=$(get_token_status)
case "$status" in
EXPIRED|EXPIRING_SOON)
echo "$(date '+%Y-%m-%d %H:%M:%S') - Token needs refresh ($status)"
if refresh_token > /dev/null 2>&1; then
echo "$(date '+%Y-%m-%d %H:%M:%S') - ✅ Token refreshed"
else
echo "$(date '+%Y-%m-%d %H:%M:%S') - ❌ Refresh failed"
fi
;;
VALID)
local expiry=$(cat "$CREDS_FILE" | jq -r '.expiry_date // 0')
echo "$(date '+%Y-%m-%d %H:%M:%S') - Token valid ($(format_time_remaining $expiry) remaining)"
;;
*)
echo "$(date '+%Y-%m-%d %H:%M:%S') - Token status: $status"
;;
esac
sleep 300 # Check every 5 minutes
done
}
install_systemd_service() {
local service_dir="$HOME/.config/systemd/user"
local service_file="$service_dir/qwen-token-refresh.service"
local bin_dir="$HOME/.local/bin"
mkdir -p "$service_dir" "$bin_dir"
# Install this script at the path the unit's ExecStart references
cp "$0" "$bin_dir/qwen-token-refresh.sh"
chmod +x "$bin_dir/qwen-token-refresh.sh"
cat > "$service_file" << 'EOF'
[Unit]
Description=Qwen OAuth Token Auto-Refresh
After=network.target
[Service]
Type=simple
ExecStart=%h/.local/bin/qwen-token-refresh.sh --daemon
Restart=always
RestartSec=60
[Install]
WantedBy=default.target
EOF
echo -e "${GREEN}✅ Systemd service installed${NC}"
echo ""
echo "To enable and start:"
echo " systemctl --user daemon-reload"
echo " systemctl --user enable qwen-token-refresh"
echo " systemctl --user start qwen-token-refresh"
echo ""
echo "To check status:"
echo " systemctl --user status qwen-token-refresh"
}
# Main
check_dependencies
case "${1:-}" in
--status|-s)
show_status
;;
--daemon|-d)
run_daemon
;;
--install|-i)
install_systemd_service
;;
--help|-h)
echo "Usage: $0 [OPTION]"
echo ""
echo "Options:"
echo " (none) Check and refresh token if expired"
echo " --status Show token status"
echo " --daemon Run as background refresh daemon"
echo " --install Install as systemd user service"
echo " --help Show this help"
echo ""
echo "Examples:"
echo " $0 # Refresh if needed"
echo " $0 --status # Check status"
echo " $0 --daemon # Run background daemon"
echo " $0 --install # Install systemd service"
;;
*)
# Default: check and refresh if needed
if [ ! -f "$CREDS_FILE" ]; then
echo -e "${RED}Error: Credentials not found${NC}"
echo "Run: qwen --auth-type qwen-oauth -p 'test'"
exit 1
fi
status=$(get_token_status)  # no "local": this runs at top level, outside any function
case "$status" in
VALID)
expiry=$(cat "$CREDS_FILE" | jq -r '.expiry_date // 0')
echo -e "${GREEN}✅ Token is valid${NC} ($(format_time_remaining $expiry) remaining)"
echo ""
echo "To force refresh anyway, delete ~/.qwen/oauth_creds.json and re-auth"
;;
EXPIRED|EXPIRING_SOON)
echo -e "${YELLOW}Token needs refresh ($status)${NC}"
refresh_token
;;
*)
echo -e "${YELLOW}Unknown token status, attempting refresh...${NC}"
refresh_token
;;
esac
;;
esac
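The expiry arithmetic the script relies on (millisecond `expiry_date` from the credentials file vs. second-resolution `date +%s`) is easy to get wrong; a minimal standalone sketch of the same check, using a fabricated credentials blob:

```shell
# Fabricated sample; real values live in ~/.qwen/oauth_creds.json
creds='{"access_token":"example","refresh_token":"example","expiry_date":1900000000000}'
# Extract expiry_date with sed for portability (the real script uses jq)
expiry_ms=$(printf '%s' "$creds" | sed -n 's/.*"expiry_date":\([0-9]*\).*/\1/p')
expiry_s=$(( expiry_ms / 1000 ))          # milliseconds -> seconds
remaining=$(( expiry_s - $(date +%s) ))
# Same 5-minute (300 s) buffer as REFRESH_BUFFER_SECONDS above
if [ "$remaining" -lt 300 ]; then echo "NEEDS_REFRESH"; else echo "VALID"; fi
```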

View File

@@ -0,0 +1,315 @@
# OpenRouter Config Skill
## Description
This skill helps users configure and enable OpenRouter as the AI provider in Claude Code. It helps you select your preferred OpenRouter model, sets up the API key, configures Anthropic-compatible API endpoints, and ensures Claude Code uses OpenRouter exclusively, with provider failover for reliability.
## Documentation Source
Based on official OpenRouter documentation: https://openrouter.ai/docs/guides/claude-code-integration
## Why Use OpenRouter with Claude Code?
### 1. Provider Failover for High Availability
Anthropic's API occasionally experiences outages or rate limiting. When you route Claude Code through OpenRouter, your requests automatically fail over between multiple Anthropic providers. If one provider is unavailable or rate-limited, OpenRouter seamlessly routes to another, keeping your coding sessions uninterrupted.
### 2. Access to 100+ Models
OpenRouter gives you access to models from multiple providers including:
- **Anthropic**: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- **OpenAI**: GPT-4, GPT-4 Turbo, GPT-3.5
- **Google**: Gemini Pro, Gemini Flash
- **Meta**: Llama 2, Llama 3
- **Mistral**: Mixtral, Mistral Large
- **And many more**: xAI, Perplexity, Cohere, etc.
### 3. Organizational Budget Controls
For teams and organizations, OpenRouter provides centralized budget management. You can set spending limits, allocate credits across team members, and prevent unexpected cost overruns.
### 4. Usage Visibility and Analytics
OpenRouter gives you complete visibility into how Claude Code is being used across your team. Track usage patterns, monitor costs in real-time, and understand which projects or team members are consuming the most resources. All of this data is available in your [OpenRouter Activity Dashboard](https://openrouter.ai/dashboard/activity).
## Usage
To use this skill, simply ask Claude Code to configure OpenRouter. The skill will:
1. **Ask you which OpenRouter model you want to use**
2. Provide a link to browse available models
3. Guide you to copy the model name from OpenRouter's model catalog
4. Set up your OpenRouter API key
5. Configure Anthropic-compatible environment variables
6. Ensure Claude Code uses OpenRouter exclusively
7. Set up proper provider priority
8. Verify the configuration is working
## Prerequisites
- An OpenRouter API key (get one from https://openrouter.ai/keys)
- Claude Code installed
## Model Selection
### Step 1: Browse Available Models
Visit the OpenRouter models catalog to choose your preferred model:
- **OpenRouter Models**: https://openrouter.ai/models
### Step 2: Find and Copy Your Model
Browse through the available models and click on any model to see:
- Model name (click to copy)
- Pricing (input/output tokens)
- Context length
- Features (function calling, vision, etc.)
- Provider information
**Model Selection Tips:**
**Visit the live model catalog**: https://openrouter.ai/models
On the model catalog page, you can:
- **Filter by provider**: Anthropic, OpenAI, Google, Meta, Mistral, DeepSeek, Qwen, Perplexity, and more
- **Compare pricing**: See input/output token costs for each model
- **Check context length**: View max tokens supported
- **Filter by features**: Function calling, vision, free tier, etc.
- **View popularity**: See what other developers are using
- **Click to copy**: Easily copy the exact model ID you want to use
**To select a model:**
1. Visit https://openrouter.ai/models
2. Browse and filter models by your needs (provider, features, price, etc.)
3. Click on any model to see detailed information
4. Click the model name to copy it to your clipboard
5. Paste the model name when prompted by this skill
### Step 3: Configure Your Chosen Model
The skill will ask you to paste the model name you selected. You can also change the model later by editing your Claude Code settings.
## Configuration Steps
### Step 1: Install Claude Code (if not already installed)
```bash
# macOS, Linux, WSL:
curl -fsSL https://claude.ai/install.sh | bash
# Windows PowerShell:
irm https://claude.ai/install.ps1 | iex
```
### Step 2: Configure Environment Variables
The skill will help you set these environment variables in your shell profile:
```bash
# Add to ~/.zshrc (or ~/.bashrc for Bash, ~/.config/fish/config.fish for Fish)
export OPENROUTER_API_KEY="<your-openrouter-api-key>"
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY="" # Important: Must be explicitly empty
```
**Important Notes:**
- **Do NOT** put these in a project-level `.env` file - the native Claude Code installer does not read standard `.env` files
- **Must explicitly blank out** `ANTHROPIC_API_KEY` to prevent conflicts
- If previously logged in to Claude Code with Anthropic, run `/logout` in a Claude Code session to clear cached credentials
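After adding the exports, a quick shell check can confirm the variables are in the expected state before launching Claude Code. The key below is a hypothetical placeholder:

```shell
# Hypothetical key for illustration; put your real key in ~/.zshrc or ~/.bashrc
export OPENROUTER_API_KEY="sk-or-v1-example"
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY=""   # must be explicitly empty

# Auth token present, base URL pointed at OpenRouter, direct API key blanked
if [ -n "$ANTHROPIC_AUTH_TOKEN" ] && [ -z "$ANTHROPIC_API_KEY" ] \
   && [ "$ANTHROPIC_BASE_URL" = "https://openrouter.ai/api" ]; then
  echo "openrouter env OK"
else
  echo "env misconfigured" >&2
fi
```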
### Step 3: Configure Your Model
The skill will add your chosen model to your Claude Code settings at `~/.claude/settings.json`:
```json
{
"provider": {
"baseUrl": "https://openrouter.ai/api",
"apiKey": "${ANTHROPIC_AUTH_TOKEN}",
"defaultModel": "your-chosen-model-name"
}
}
```
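To change the model later without re-running the skill, `defaultModel` can be rewritten in place with `jq`. This is a sketch assuming the settings schema shown above; it demonstrates on a temp file, so point `SETTINGS` at `~/.claude/settings.json` for real use:

```shell
# Demo on a temp copy; model name is a placeholder, not a real model ID
SETTINGS=$(mktemp)
echo '{"provider":{"baseUrl":"https://openrouter.ai/api","defaultModel":"old-model"}}' > "$SETTINGS"
NEW_MODEL="your-chosen-model-name"   # paste the exact ID copied from openrouter.ai/models
# jq rewrites just the defaultModel field, leaving the rest of the JSON intact
jq --arg m "$NEW_MODEL" '.provider.defaultModel = $m' "$SETTINGS" > "$SETTINGS.tmp" \
  && mv "$SETTINGS.tmp" "$SETTINGS"
cat "$SETTINGS"
```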
### Step 4: Start Claude Code
```bash
cd /path/to/your/project
claude
```
### Step 5: Verify Configuration
Run the `/status` command inside Claude Code:
```
/status
```
Expected output:
```
Auth token: ANTHROPIC_AUTH_TOKEN
Anthropic base URL: https://openrouter.ai/api
```
You can also check the [OpenRouter Activity Dashboard](https://openrouter.ai/dashboard/activity) to see your requests appearing in real-time.
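You can also sanity-check the key outside Claude Code. This sketch assumes OpenRouter's public model listing endpoint and its key-metadata endpoint (`/api/v1/auth/key`, per OpenRouter's API docs at the time of writing):

```bash
# Check connectivity and key validity directly against OpenRouter.
# /api/v1/models is public; /api/v1/auth/key returns metadata
# (usage, limits) for the key supplied in the Authorization header.
curl -fsS https://openrouter.ai/api/v1/models | head -c 200; echo
curl -fsS https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $OPENROUTER_API_KEY"
```

If the second call returns an auth error, the key itself is the problem rather than your Claude Code configuration.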
## API Configuration
### Base URL
```
https://openrouter.ai/api
```
### Authentication
Use your OpenRouter API key as the auth token.
### Model Selection
Choose your model from the OpenRouter model catalog:
- **Browse Models**: https://openrouter.ai/models
- Click on any model to see details and copy the model name
- Paste the model name when prompted by this skill
### Provider Priority
**Important**: Claude Code with OpenRouter is only guaranteed to work with the Anthropic first-party provider. For maximum compatibility, we recommend setting **Anthropic 1P** as the top priority provider when using Claude Code.
## Supported Models
OpenRouter provides **100+ models** from leading AI providers. The catalog is constantly updated with new models.
**Always check the live catalog for the most current models:**
- **OpenRouter Models**: https://openrouter.ai/models
**Major Providers Available:**
- **Anthropic**: Latest Claude models
- **OpenAI**: GPT-4 series, o1, o3
- **Google**: Gemini models
- **Meta**: Llama family
- **Mistral**: Mistral Large, Pixtral
- **DeepSeek**: DeepSeek Chat, R1
- **Qwen**: Qwen 2.5, Qwen Coder
- **Perplexity**: Sonar models with web search
- **And many more**: xAI, Cohere, etc.
**Model Features You Can Filter By:**
- ✅ Free tier availability
- ✅ Vision/multimodal capabilities
- ✅ Function calling/tool use
- ✅ Long context windows (up to 1M+ tokens)
- ✅ Coding specialization
- ✅ Web search integration
- ✅ Price range
**Note:** Models are added and updated regularly. Always visit https://openrouter.ai/models to see the complete, up-to-date catalog with real-time pricing and availability.
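The same catalog is available programmatically. If you prefer filtering from the terminal, this sketch (assuming `curl` and `jq`) shows the idea; the sample payload mirrors the `{"data": [{"id": ...}]}` shape the endpoint returns:

```bash
# Live query: list every model ID from the public catalog endpoint.
#   curl -fsS https://openrouter.ai/api/v1/models | jq -r '.data[].id'
# Offline demo of the same jq filter, selecting free-tier variants
# (IDs below are made-up examples, not real models):
echo '{"data":[{"id":"example/model-a"},{"id":"example/model-b:free"}]}' \
  | jq -r '.data[].id | select(endswith(":free"))'
```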
## How It Works
### Direct Connection
When you set `ANTHROPIC_BASE_URL` to `https://openrouter.ai/api`, Claude Code speaks its native protocol directly to OpenRouter. No local proxy server is required.
### Anthropic Skin
OpenRouter's "Anthropic Skin" behaves exactly like the Anthropic API. It handles model mapping and passes through advanced features like "Thinking" blocks and native tool use.
### Model Routing
When you specify a model name from the OpenRouter catalog, OpenRouter routes your request to the appropriate provider automatically, handling all API differences transparently.
### Billing
You are billed using your OpenRouter credits. Usage (including reasoning tokens) appears in your OpenRouter dashboard.
## Security
The skill writes your API key to your shell profile and keeps that file's permissions restrictive so other users on the machine cannot read it.
**Privacy Note**: OpenRouter does not log your source code prompts unless you explicitly opt-in to prompt logging in your account settings. See their [Privacy Policy](https://openrouter.ai/privacy) for details.
## Changing Models
You can change your model at any time by:
### Option 1: Update Claude Code Settings
Edit `~/.claude/settings.json`:
```json
{
  "provider": {
    "baseUrl": "https://openrouter.ai/api",
    "apiKey": "${ANTHROPIC_AUTH_TOKEN}",
    "defaultModel": "new-model-name"
  }
}
```
### Option 2: Use the Skill Again
Run the configuration process again and select a different model when prompted.
### Option 3: Per-Session Model Selection
In Claude Code, you can sometimes specify models inline:
```
/model openrouter:your-model-name
```
## Advanced Features
### Cost Tracking Statusline
You can add a custom statusline to Claude Code that tracks your OpenRouter API costs in real-time. The statusline displays provider, model, cumulative cost, and cache discounts for your session.
Download the statusline scripts from the [openrouter-examples repository](https://github.com/openrouter/openrouter-examples), make them executable, and add the following to your `~/.claude/settings.json`:
```json
{
  "statusLine": {
    "type": "command",
    "command": "/path/to/statusline.sh"
  }
}
```
### GitHub Action Integration
You can use OpenRouter with the official [Claude Code GitHub Action](https://github.com/anthropics/claude-code-action). To adapt the example workflow for OpenRouter, make two changes to the action step:
1. Pass your OpenRouter API key via `anthropic_api_key` (store it as a GitHub secret named `OPENROUTER_API_KEY`)
2. Set the `ANTHROPIC_BASE_URL` environment variable to `https://openrouter.ai/api`
Example:
```yaml
- name: Run Claude Code
  uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.OPENROUTER_API_KEY }}
  env:
    ANTHROPIC_BASE_URL: https://openrouter.ai/api
```
### Agent SDK Integration
The [Anthropic Agent SDK](https://github.com/anthropics/anthropic-sdk-typescript) lets you build AI agents programmatically using Python or TypeScript. Since the Agent SDK uses Claude Code as its runtime, you can connect it to OpenRouter using the same environment variables described above.
## Troubleshooting
### Model Not Found Errors
- Verify the model name is copied exactly from OpenRouter's model catalog
- Check that the model is currently available on OpenRouter
- Some models may have different naming conventions (e.g., with `:free`, `:beta` suffixes)
### Auth Errors
- Ensure `ANTHROPIC_API_KEY` is set to an empty string (`""`)
- If it is unset (null), Claude Code might fall back to its default behavior and try to authenticate with Anthropic servers
### Context Length Errors
- If you hit context limits, consider switching to a model with a larger context window
- Break your task into smaller chunks or start a new session
- Check the model's context limit at https://openrouter.ai/models
### Previous Anthropic Login
- If you were previously logged in to Claude Code with Anthropic, run `/logout` in a Claude Code session to clear cached credentials before the OpenRouter configuration takes effect
### Connection Issues
- Verify your OpenRouter API key is correct
- Check that environment variables are set in your shell profile (not just in current session)
- Ensure you've restarted your terminal after updating your shell profile
- Run `/status` to verify configuration
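Most connection problems come down to one of the four variables being wrong in the current shell. A quick diagnostic sketch:

```bash
# Check all four variables in the current shell before launching claude.
ok=1
[ -n "$OPENROUTER_API_KEY" ] || { echo "OPENROUTER_API_KEY is missing"; ok=0; }
[ "$ANTHROPIC_BASE_URL" = "https://openrouter.ai/api" ] \
  || { echo "ANTHROPIC_BASE_URL is not https://openrouter.ai/api"; ok=0; }
[ -n "$ANTHROPIC_AUTH_TOKEN" ] || { echo "ANTHROPIC_AUTH_TOKEN is missing"; ok=0; }
{ [ -n "${ANTHROPIC_API_KEY+x}" ] && [ -z "$ANTHROPIC_API_KEY" ]; } \
  || { echo "ANTHROPIC_API_KEY must be set to an empty string"; ok=0; }
if [ "$ok" -eq 1 ]; then echo "environment looks correct"; else echo "fix the items above"; fi
```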
## Related Skills
- API Key Management
- Provider Configuration
- Model Selection
- Cost Tracking
## Example Commands
- "Configure OpenRouter with my API key"
- "Set up OpenRouter as my AI provider for Claude Code"
- "Enable OpenRouter in Claude Code"
- "Configure OpenRouter for Claude Code"
- "Help me select and configure an OpenRouter model"
- "Show me the available models on OpenRouter"
- "I want to browse OpenRouter models and pick one"
## Notes
- This skill is designed specifically for Claude Code
- It requires write access to your shell profile (~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)
- The skill configures Claude Code to use OpenRouter exclusively, with Anthropic 1P as the top-priority provider
- You can browse and select from 100+ models at https://openrouter.ai/models
- You can change your model at any time by editing `~/.claude/settings.json` or running this skill again
- You can revert to using Anthropic directly by removing these environment variables and running `/logout`
- Always refer to the [official OpenRouter documentation](https://openrouter.ai/docs/guides/claude-code-integration) for the most up-to-date information


@@ -0,0 +1,335 @@
---
name: openrouter-config
description: Use this skill when the user asks to "configure OpenRouter", "set up OpenRouter API key", "enable OpenRouter as AI provider", "configure OpenRouter in Claude Code", "select an OpenRouter model", "use a specific model with OpenRouter", "connect Claude Code to OpenRouter", or mentions configuring OpenRouter for Claude Code.
version: 1.0.0
---
# OpenRouter Config Skill
Helps users configure and enable OpenRouter as the AI provider for Claude Code. It guides them through selecting a preferred model from the OpenRouter catalog, setting up the API key, configuring Anthropic-compatible environment variables, and ensuring proper provider priority and failover.
## Documentation Source
Official OpenRouter Claude Code Integration Guide: https://openrouter.ai/docs/guides/claude-code-integration
## What It Does
1. **Model Selection**: Prompts user to choose a model from OpenRouter's 100+ model catalog
2. **Model Catalog Guidance**: Provides link to https://openrouter.ai/models and instructions to copy model name
3. **API Key Setup**: Securely configures OpenRouter API key
4. **Environment Configuration**: Sets up Anthropic-compatible environment variables
5. **Model Configuration**: Stores chosen model in Claude Code settings
6. **Provider Priority**: Ensures Anthropic 1P is set as top priority provider
7. **API Endpoint Configuration**: Configures `https://openrouter.ai/api` as the base URL
8. **Configuration Verification**: Validates that the configuration is working correctly
9. **Failover Setup**: Enables automatic provider failover for high availability
## Key Benefits
### 1. Provider Failover for High Availability
Anthropic's API occasionally experiences outages or rate limiting. OpenRouter automatically fails over between multiple Anthropic providers, keeping coding sessions uninterrupted.
### 2. Access to 100+ Models
Choose from models by Anthropic, OpenAI, Google, Meta, Mistral, xAI, and many more providers. Browse the complete catalog at https://openrouter.ai/models
### 3. Organizational Budget Controls
Centralized budget management with spending limits, credit allocation, and cost overrun prevention for teams.
### 4. Usage Visibility and Analytics
Complete visibility into Claude Code usage across teams, including usage patterns, real-time cost monitoring, and resource consumption tracking via the [OpenRouter Activity Dashboard](https://openrouter.ai/dashboard/activity).
## Model Selection Process
### Step 1: Ask for Model Preference
When the skill starts, it will ask:
> "Which OpenRouter model would you like to use with Claude Code?"
### Step 2: Provide Model Catalog
The skill will provide a link to browse available models:
- **OpenRouter Models**: https://openrouter.ai/models
### Step 3: Guide User Selection
The skill will instruct the user to:
1. Visit https://openrouter.ai/models
2. Browse through the available models
3. Click on a model to see details (pricing, context length, features)
4. Copy the model name (click the model name to copy it)
5. Paste the model name back into the chat
### Step 4: Store Model Configuration
The skill will configure the chosen model in Claude Code settings at `~/.claude/settings.json`:
```json
{
  "provider": {
    "baseUrl": "https://openrouter.ai/api",
    "apiKey": "${ANTHROPIC_AUTH_TOKEN}",
    "defaultModel": "user-selected-model-name"
  }
}
```
## Model Selection Guidance
**Always use the live model catalog**: https://openrouter.ai/models
The OpenRouter catalog shows:
- **All current models** from every provider
- **Real-time pricing** for input/output tokens
- **Context limits** for each model
- **Available features**: vision, function calling, free tier, etc.
- **Provider information**: Which provider hosts each model
**How to find and select a model:**
1. Visit https://openrouter.ai/models
2. Use filters to narrow down by provider, features, price range
3. Click on any model to see detailed information
4. Click the model name to copy it to clipboard
5. Paste the exact model name when the skill prompts you
**The catalog is always current** - models are added and updated regularly, so checking the live catalog ensures you see the latest available models.
## Quick Start
### Environment Variables to Configure
```bash
export OPENROUTER_API_KEY="<your-openrouter-api-key>"
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY="" # Important: Must be explicitly empty
```
### Shell Profile Locations
Add to one of these files:
- `~/.bashrc` for Bash
- `~/.zshrc` for Zsh
- `~/.config/fish/config.fish` for Fish
**Important**: Do NOT add these to a project-level `.env` file - Claude Code's native installer does not read standard `.env` files.
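A sketch of how the skill might append these lines idempotently, guarding against duplicate entries on re-runs (the target file is Zsh's here; adjust for your shell, and replace the placeholder key):

```bash
# Append the OpenRouter exports to ~/.zshrc only if not already present.
profile="$HOME/.zshrc"
if ! grep -q 'ANTHROPIC_BASE_URL="https://openrouter.ai/api"' "$profile" 2>/dev/null; then
  {
    echo ''
    echo '# OpenRouter configuration for Claude Code'
    echo 'export OPENROUTER_API_KEY="<your-openrouter-api-key>"'
    echo 'export ANTHROPIC_BASE_URL="https://openrouter.ai/api"'
    echo 'export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"'
    echo 'export ANTHROPIC_API_KEY=""'
  } >> "$profile"
fi
```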
### Verification
Run `/status` inside Claude Code to verify:
```
/status
```
Expected output:
```
Auth token: ANTHROPIC_AUTH_TOKEN
Anthropic base URL: https://openrouter.ai/api
```
## API Configuration
### Base URL
```
https://openrouter.ai/api
```
### Authentication
OpenRouter API key (get from https://openrouter.ai/keys)
### Model Selection
- Browse models at: https://openrouter.ai/models
- Click on any model to see details and copy the model name
- Paste the model name when prompted by the skill
- Model is stored in `~/.claude/settings.json` as the default model
### Provider Priority
**Critical**: Set **Anthropic 1P** as the top priority provider for maximum compatibility with Claude Code.
## Usage
```
"Configure OpenRouter with my API key"
"Set up OpenRouter as my AI provider"
"Enable OpenRouter in Claude Code"
"Configure OpenRouter for Claude Code"
"Connect Claude Code to OpenRouter"
"Help me select and configure an OpenRouter model"
"Show me the available models on OpenRouter"
"I want to browse OpenRouter models and configure one"
```
## How It Works
### Direct Connection
Claude Code speaks its native protocol directly to OpenRouter when `ANTHROPIC_BASE_URL` is set to `https://openrouter.ai/api`. No local proxy server is required.
### Anthropic Skin
OpenRouter's "Anthropic Skin" behaves exactly like the Anthropic API, handling model mapping and passing through advanced features like "Thinking" blocks and native tool use.
### Model Routing
When a user selects a model from the OpenRouter catalog, OpenRouter routes all requests to that specific model, handling provider differences transparently.
### Billing
Billed using OpenRouter credits. Usage (including reasoning tokens) appears in the OpenRouter dashboard.
## Important Notes
### Previous Anthropic Login
If previously logged in to Claude Code with Anthropic, run `/logout` in a Claude Code session to clear cached credentials before OpenRouter configuration takes effect.
### Explicit Empty API Key
`ANTHROPIC_API_KEY` must be set to an empty string (`""`), not unset. If unset (null), Claude Code may fall back to default behavior and authenticate with Anthropic servers directly.
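This subtlety can be checked mechanically. A small sketch using POSIX parameter expansion:

```bash
# "Set to empty" and "unset" behave differently: ${VAR+set} expands to
# "set" only when the variable exists, regardless of its value.
unset ANTHROPIC_API_KEY
echo "before: ${ANTHROPIC_API_KEY+set}"   # expansion is empty: unset
export ANTHROPIC_API_KEY=""
echo "after:  ${ANTHROPIC_API_KEY+set}"   # expansion is "set": empty but defined
```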
### Shell Profile Persistence
Environment variables should be added to the user's shell profile for persistence across sessions.
### Model Name Accuracy
Model names must be copied exactly from OpenRouter's model catalog (https://openrouter.ai/models). Include any suffixes like `:free`, `:beta`, etc.
### Claude Code Models
When using Claude Code through OpenRouter, users can access any of the 100+ models available, including Anthropic's Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku, as well as models from OpenAI, Google, Meta, Mistral, and more.
## Advanced Features
### Cost Tracking Statusline
Download statusline scripts from [openrouter-examples repository](https://github.com/openrouter/openrouter-examples) and add to `~/.claude/settings.json`:
```json
{
  "statusLine": {
    "type": "command",
    "command": "/path/to/statusline.sh"
  }
}
```
### GitHub Action Integration
```yaml
- name: Run Claude Code
  uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.OPENROUTER_API_KEY }}
  env:
    ANTHROPIC_BASE_URL: https://openrouter.ai/api
```
### Agent SDK Integration
The Anthropic Agent SDK uses the same environment variables for OpenRouter integration.
## Security Notes
- API keys are stored in shell environment with appropriate file permissions
- OpenRouter does not log source code prompts unless you explicitly opt-in to prompt logging
- See [OpenRouter Privacy Policy](https://openrouter.ai/privacy) for details
## Troubleshooting
### Model Not Found Errors
- Verify the model name is copied exactly from OpenRouter's model catalog
- Check that the model is currently available
- Some models may have different naming conventions (e.g., with `:free`, `:beta` suffixes)
- Check https://openrouter.ai/models for current availability
### Auth Errors
- Ensure `ANTHROPIC_API_KEY` is set to an empty string (`""`)
- Check that environment variables are set in shell profile (not just current session)
- Restart terminal after updating shell profile
### Context Length Errors
- Switch to a model with a larger context window
- Check model context limits at https://openrouter.ai/models
- Break tasks into smaller chunks
- Start a new session
### Connection Issues
- Verify OpenRouter API key is correct
- Check all environment variables are set correctly
- Run `/status` to verify configuration
- Check the [OpenRouter Activity Dashboard](https://openrouter.ai/dashboard/activity) to confirm requests are arriving
### Previous Credentials
- Run `/logout` to clear cached Anthropic credentials
- Restart Claude Code after environment variable changes
## Example Workflow
1. User requests: "Configure OpenRouter for Claude Code"
2. Skill asks: "Which OpenRouter model would you like to use?"
3. Skill provides link: https://openrouter.ai/models
4. User browses models, clicks on one, and copies the model name
5. User pastes model name back to the skill
6. Skill prompts for OpenRouter API key
7. Skill adds environment variables to shell profile
8. Skill configures chosen model in `~/.claude/settings.json`
9. Skill reminds user to restart terminal
10. User restarts terminal and runs `claude`
11. User runs `/status` to verify configuration
12. User can now use Claude Code with their chosen OpenRouter model
## Changing Models
Users can change their model at any time by:
### Option 1: Update Claude Code Settings
Edit `~/.claude/settings.json`:
```json
{
  "provider": {
    "baseUrl": "https://openrouter.ai/api",
    "apiKey": "${ANTHROPIC_AUTH_TOKEN}",
    "defaultModel": "new-model-name"
  }
}
```
### Option 2: Run the Skill Again
Re-run the configuration process and select a different model when prompted.
## Configuration File Location
Environment variables are added to shell profile:
- `~/.bashrc`, `~/.zshrc`, or `~/.config/fish/config.fish`
Claude Code settings (includes model configuration):
- `~/.claude/settings.json`
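To confirm both pieces are in place, a quick check (a sketch; assumes `jq` for JSON validation):

```bash
# Confirm the exports landed in a shell profile and settings.json parses.
for f in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.config/fish/config.fish"; do
  [ -f "$f" ] && grep -n 'ANTHROPIC_BASE_URL' "$f" && echo "found in $f"
done
if jq . "$HOME/.claude/settings.json" >/dev/null 2>&1; then
  echo "settings.json is valid JSON"
else
  echo "settings.json is missing or malformed"
fi
```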
## Model Catalog
The complete OpenRouter model catalog is available at:
- **Browse Models**: https://openrouter.ai/models
Users can filter by:
- Provider (Anthropic, OpenAI, Google, Meta, etc.)
- Features (function calling, vision, etc.)
- Context length
- Price range
- Free tier availability
## Supported Models
OpenRouter provides **100+ models** from leading AI providers. The catalog is constantly updated with new models.
**Always check the live catalog for the most current models:**
- **Browse Models**: https://openrouter.ai/models
**Major Providers Available:**
- **Anthropic**: Latest Claude models
- **OpenAI**: GPT-4 series, o1, o3
- **Google**: Gemini models
- **Meta**: Llama family
- **Mistral**: Mistral Large, Pixtral
- **DeepSeek**: DeepSeek Chat, R1
- **Qwen**: Qwen 2.5, Qwen Coder
- **Perplexity**: Sonar models with web search
- **And many more**: xAI, Cohere, etc.
**Model Features You Can Filter By:**
- ✅ Free tier availability
- ✅ Vision/multimodal capabilities
- ✅ Function calling/tool use
- ✅ Long context windows (up to 1M+ tokens)
- ✅ Coding specialization
- ✅ Web search integration
- ✅ Price range
**Note:** Models are added and updated regularly. Always visit https://openrouter.ai/models to see the complete, up-to-date catalog with real-time pricing and availability.
## Related Resources
- [OpenRouter Claude Code Integration Guide](https://openrouter.ai/docs/guides/claude-code-integration)
- [OpenRouter Model Catalog](https://openrouter.ai/models)
- [OpenRouter Activity Dashboard](https://openrouter.ai/dashboard/activity)
- [OpenRouter API Keys](https://openrouter.ai/keys)
- [OpenRouter Examples Repository](https://github.com/openrouter/openrouter-examples)
- [Anthropic Agent SDK](https://github.com/anthropics/anthropic-sdk-typescript)