- Add sendStreamingMessage() to message-sender.js with typing indicators
- Enable stream: true in chatWithAI() with SSE parsing
- Replace all ctx.reply() calls with sendStreamingMessage()
- Real-time text streaming with 50ms delay between chunks
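A minimal sketch of what sendStreamingMessage() could look like, assuming the bot edits one Telegram message incrementally. The chunkText helper, the injected editMessage callback, and the default chunk size are assumptions; only the 50ms delay comes from the commit. The real message-sender.js would wrap ctx (e.g. editMessageText) rather than take a callback:

```typescript
const CHUNK_DELAY_MS = 50;

// Split text into fixed-size chunks for incremental message edits.
function chunkText(text: string, size: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// editMessage is injected so the loop stays testable; in the bot it
// would call Telegram's editMessageText on the placeholder message.
async function sendStreamingMessage(
  text: string,
  editMessage: (partial: string) => Promise<void>,
  chunkSize = 64,
): Promise<void> {
  let shown = "";
  for (const chunk of chunkText(text, chunkSize)) {
    shown += chunk;
    await editMessage(shown);
    await new Promise((resolve) => setTimeout(resolve, CHUNK_DELAY_MS));
  }
}
```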
- Add promotional banner after title with ROK78RJKNW code
- Include direct link to z.ai/subscribe with invite code
- Positioned directly after the title for visibility
- Encourages Z.AI usage for AI-powered coding
- Remove .env file with real API keys and bot token
- Add .env.example with placeholder values
- Verify .env is in .gitignore
- Repository is now safe to share with others
Environment variables:
- ZAI_API_KEY: Z.AI API key
- TELEGRAM_BOT_TOKEN: Telegram bot token
- TELEGRAM_ALLOWED_USERS: Allowed Telegram user IDs
- ZCODE_WEBHOOK_URL: Optional webhook URL
- ZCODE_PORT: Bot port (default: 3001)
Users must create their own .env file from .env.example
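The committed .env.example might look like this (all values are placeholders):

```shell
# .env.example — copy to .env and fill in real values
ZAI_API_KEY=your-zai-api-key
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
TELEGRAM_ALLOWED_USERS=123456789,987654321
ZCODE_WEBHOOK_URL=
ZCODE_PORT=3001
```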
- Add RTK utility module (src/utils/rtk.js)
- Integrate RTK into BashTool for all bash commands
- Integrate RTK into GitTool for git operations
- Initialize RTK on bot startup
- Support 60+ command types (git, npm, cargo, pytest, docker, etc.)
- Track and report token savings per command
- Graceful fallback when RTK is not available
Expected savings: 60-90% token reduction for supported commands
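A sketch of the graceful-fallback logic, assuming RTK works as a command prefix. The function names are hypothetical and the command list is a small subset of the 60+ supported types; the real src/utils/rtk.js may differ:

```typescript
// Subset of RTK-supported programs, per the commit message.
const RTK_COMMANDS = new Set(["git", "npm", "cargo", "pytest", "docker"]);

// Check whether the command's leading program is RTK-supported.
function rtkSupports(command: string): boolean {
  const program = command.trim().split(/\s+/)[0] ?? "";
  return RTK_COMMANDS.has(program);
}

// Prefix with rtk when it is installed and the command is supported;
// otherwise run the command unchanged (graceful fallback).
function wrapWithRtk(command: string, rtkAvailable: boolean): string {
  if (rtkAvailable && rtkSupports(command)) {
    return `rtk ${command}`;
  }
  return command;
}
```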
toolOrchestration.ts used `export { getMaxToolUseConcurrency } from
'./toolConcurrency.js'` (a re-export) but then called the function
directly in runToolsConcurrently(). A re-export forwards the binding to
importers without creating a local binding, so the call failed with a
ReferenceError at runtime, visible as "getMaxToolUseConcurrency is not
defined" in --print mode. TypeScript also flagged this as TS2304 on main.
Fix: replace the re-export with an explicit import plus a separate export
statement, so the name is both locally callable and publicly exported.
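The before/after pattern in toolOrchestration.ts:

```typescript
// Before (broken): a re-export forwards the binding to importers but
// creates no local binding, so calling it in this file throws.
//   export { getMaxToolUseConcurrency } from './toolConcurrency.js';

// After (fixed): import for local use, then export separately.
import { getMaxToolUseConcurrency } from './toolConcurrency.js';
export { getMaxToolUseConcurrency };
```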
ripgrep.ts fell through to the vendor binary path on non-bundled (npm)
installs where the vendor/ directory is not shipped alongside dist/cli.mjs,
producing "spawn .../vendor/ripgrep/x64-linux/rg ENOENT". Fix: check
whether the vendor binary exists before returning the builtin config;
fall back to system rg if it is absent.
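The check might look like this sketch. The path layout is taken from the ENOENT error message; the function name and parameters are hypothetical:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Prefer the bundled vendor binary only if it actually shipped with
// this install; npm installs may lack vendor/ next to dist/cli.mjs.
function resolveRgPath(distDir: string, platform: string): string {
  const vendorRg = join(distDir, "vendor", "ripgrep", platform, "rg");
  if (existsSync(vendorRg)) {
    return vendorRg;
  }
  // Fall back to whatever `rg` is on PATH.
  return "rg";
}
```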
- Add new 'firepass' provider type alongside anthropic, openai, openrouter
- FirePass uses Fireworks AI's endpoint for the Kimi K2.5 Turbo model
- Subscription billing model ($7/week) with 256K context window
- Anthropic API compatible (uses Anthropic SDK with custom baseURL)
Changes:
- providers.ts: Add firepass detection and base URL handling
- auth.ts: Add FirePass API key management (FIREPASS_API_KEY or FIREWORKS_API_KEY)
- config.ts: Add firepassApiKey and firepass auth provider
- client.ts: Add firepass client creation with custom baseURL
- http.ts: Add firepass auth headers
- modelStrings.ts: Return Kimi K2.5 Turbo model ID for firepass
- model.ts: Add Kimi display name handling and default model logic
- modelOptions.ts: Simplify model picker for firepass (Kimi K2.5 Turbo only)
- status.tsx: Display FirePass in status bar
- login.tsx: Add FirePass option to provider selection
- FirepassLoginFlow.tsx: New component for FirePass login flow
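The key lookup described for auth.ts can be sketched as follows. The env-var names come from the commit; the function name and precedence order (provider-specific first) are assumptions:

```typescript
// Resolve the FirePass credential, preferring the provider-specific
// variable and falling back to the generic Fireworks one.
function resolveFirepassKey(
  env: Record<string, string | undefined>,
): string | undefined {
  return env.FIREPASS_API_KEY ?? env.FIREWORKS_API_KEY;
}

// client.ts would then pass the key to the Anthropic SDK with the
// FirePass baseURL, e.g. new Anthropic({ apiKey, baseURL }).
```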
Usage:
1. Run /login and select "FirePass"
2. Enter your Fireworks API key
3. Model picker shows Kimi K2.5 Turbo