- Pidfile lock prevents duplicate instances (auto-kills stale PIDs)
- EADDRINUSE retry: kills the process holding the port, retries up to 3x with a 1.5s delay
- releasePidfile() on graceful shutdown
- Added fs/path imports needed by pidfile utilities
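Roughly, the pidfile guard could look like this (the PIDFILE location and acquirePidfile are illustrative names; releasePidfile matches the hook above):

```ts
import fs from 'fs';
import path from 'path';

// Assumed location; the real path may differ.
const PIDFILE = path.join(process.cwd(), 'data', 'bot.pid');

// Claim the pidfile, killing any stale/duplicate instance recorded in it.
export function acquirePidfile(): void {
  if (fs.existsSync(PIDFILE)) {
    const oldPid = Number(fs.readFileSync(PIDFILE, 'utf8').trim());
    if (oldPid && oldPid !== process.pid) {
      try {
        process.kill(oldPid, 'SIGTERM'); // stale or duplicate instance
      } catch {
        // ESRCH: the old process is already gone, the pidfile was stale
      }
    }
  }
  fs.mkdirSync(path.dirname(PIDFILE), { recursive: true });
  fs.writeFileSync(PIDFILE, String(process.pid));
}

// Called on graceful shutdown.
export function releasePidfile(): void {
  try {
    fs.unlinkSync(PIDFILE);
  } catch {
    // already removed
  }
}
```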
- ConversationStore: per-chat JSON files in data/, survives restarts
- 6000 token budget per chat context (fits ~20-30 exchanges)
- Auto-trims old messages, always includes most recent
- Wired into message handler: loads history before AI call, saves after
- /reset command to clear chat history per chat
- Cross-session, cross-model, cross-chat isolation
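The trimming logic, as a sketch (names and the 4-chars-per-token estimate are assumptions, not the actual implementation):

```ts
interface StoredMessage { role: 'user' | 'assistant'; content: string }

const TOKEN_BUDGET = 6000;
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the newest messages whose combined estimate fits the budget;
// the most recent message always survives even if it alone exceeds it.
function trimToBudget(history: StoredMessage[]): StoredMessage[] {
  const kept: StoredMessage[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (kept.length > 0 && used + cost > TOKEN_BUDGET) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```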
When streaming produces a long response that exceeds Telegram's edit limit,
the final HTML edit silently fails, leaving the user with raw ** markers.
Now the draft message is deleted and fresh formatted message(s) are sent via
sendFormatted, which handles HTML conversion, splitting, and fallback.
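A minimal sketch of the new finalization path (a Telegraf-style ctx is assumed; only sendFormatted comes from the change above):

```ts
declare function sendFormatted(ctx: any, text: string): Promise<void>;

async function finalizeStream(ctx: any, draftMessageId: number, fullText: string) {
  // Drop the plain-text draft instead of editing it in place; an oversized
  // edit would fail and leave the raw ** markers behind.
  await ctx.telegram.deleteMessage(ctx.chat.id, draftMessageId).catch(() => {});
  // sendFormatted converts markdown to HTML, splits long output into
  // multiple messages, and falls back to stripped plain text if HTML is rejected.
  await sendFormatted(ctx, fullText);
}
```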
- New memory.js: JSON-backed MemoryStore with 5 categories (lesson, pattern, preference, discovery, gotcha)
- Memory injected into system prompt — bot sees past learnings every session
- Curiosity engine: auto-detects errors/fixes, corrections, successful patterns, new tool discoveries
- New commands: /memory (stats), /remember (save), /recall (search), /forget (delete)
- Runs AFTER response delivery — zero latency impact
- 500-entry memory cap with smart eviction (keeps gotchas/lessons, evicts old discoveries)
- data/ directory gitignored (memory is local to each deployment)
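The eviction policy, sketched (field names and the exact priority order are assumptions):

```ts
type MemoryCategory = 'lesson' | 'pattern' | 'preference' | 'discovery' | 'gotcha';

interface MemoryEntry {
  category: MemoryCategory;
  text: string;
  createdAt: number; // epoch ms
}

const MEMORY_CAP = 500;

// Lower number = evicted first: old discoveries go before gotchas/lessons.
const EVICTION_PRIORITY: Record<MemoryCategory, number> = {
  discovery: 0,
  pattern: 1,
  preference: 2,
  lesson: 3,
  gotcha: 4,
};

function evictIfNeeded(entries: MemoryEntry[]): MemoryEntry[] {
  if (entries.length <= MEMORY_CAP) return entries;
  const excess = entries.length - MEMORY_CAP;
  // Rank most-evictable first: low-priority categories, oldest within each.
  const ranked = [...entries].sort(
    (a, b) =>
      EVICTION_PRIORITY[a.category] - EVICTION_PRIORITY[b.category] ||
      a.createdAt - b.createdAt,
  );
  // Drop the excess, then restore chronological order for the survivors.
  return ranked.slice(excess).sort((a, b) => a.createdAt - b.createdAt);
}
```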
- start.sh: use dirname instead of hardcoded path
- src/zcode.js: remove hardcoded chat_id fallback
- src/utils/rtk.js: use 'rtk' from PATH instead of hardcoded binary path
- src/telegram-bot.ts: use process.cwd() instead of hardcoded path
- TELEGRAM_SETUP.md: replace token/chat_id with placeholders
- QUICKSTART.md: sanitize all references
- SERVICE_MAP.md: use relative paths instead of absolute
- Add markdownToHtml() converter: **bold**, *italic*, code blocks, links, headings, quotes, lists
- StreamConsumer: intermediate edits stay plain text, FINAL message gets full HTML formatting
- sendFormatted() now uses HTML parse_mode with fallback to stripped plain text
- stripMarkdown() for plain-text fallback (no raw syntax chars)
- All Telegram sends now use HTML instead of legacy Markdown mode
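Roughly how the HTML-with-fallback send path fits together (the splitter and ctx shape are assumptions; markdownToHtml/stripMarkdown are the helpers above):

```ts
declare function markdownToHtml(md: string): string;
declare function stripMarkdown(md: string): string;

async function sendFormatted(ctx: any, text: string): Promise<void> {
  try {
    for (const chunk of splitForTelegram(markdownToHtml(text))) {
      await ctx.reply(chunk, { parse_mode: 'HTML' });
    }
  } catch {
    // Telegram rejected the HTML entities: resend as plain text with no raw syntax.
    for (const chunk of splitForTelegram(stripMarkdown(text))) {
      await ctx.reply(chunk);
    }
  }
}

// Naive splitter for this sketch; Telegram caps a single message at 4096 chars.
function splitForTelegram(text: string, limit = 4096): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += limit) chunks.push(text.slice(i, i + limit));
  return chunks;
}
```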
- Remove SSE streaming from chatWithAI()
- Keep sendStreamingMessage() for chunked delivery
- Self-correction loops remain active
- Messages are still delivered in chunks with a typing indicator
- Add sendStreamingMessage() to message-sender.js with typing indicators
- Enable stream: true in chatWithAI() with SSE parsing
- Replace all ctx.reply() calls with sendStreamingMessage()
- Real-time text streaming with 50ms delay between chunks
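Sketch of the chunked delivery loop (a Telegraf-style ctx and an async iterable of SSE text chunks are assumed):

```ts
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendStreamingMessage(ctx: any, chunks: AsyncIterable<string>): Promise<string> {
  await ctx.sendChatAction('typing'); // show the typing indicator while the stream runs
  let draft: { message_id: number } | null = null;
  let buffer = '';
  for await (const chunk of chunks) {
    if (!chunk) continue;
    buffer += chunk;
    if (!draft) {
      draft = await ctx.reply(buffer); // first chunk creates the draft message
    } else {
      await ctx.telegram.editMessageText(ctx.chat.id, draft.message_id, undefined, buffer);
    }
    await sleep(50); // pace edits to stay under Telegram rate limits
  }
  return buffer;
}
```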
- Add promotional banner after title with ROK78RJKNW code
- Include direct link to z.ai/subscribe with invite code
- Positioned prominently for visibility
- Encourages Z.AI usage for AI-powered coding
- Remove .env file with real API keys and bot token
- Add .env.example with placeholder values
- Verify .env is in .gitignore
- Repository is now safe to share with others
Environment variables:
- ZAI_API_KEY: Z.AI API key
- TELEGRAM_BOT_TOKEN: Telegram bot token
- TELEGRAM_ALLOWED_USERS: Allowed Telegram user IDs
- ZCODE_WEBHOOK_URL: Optional webhook URL
- ZCODE_PORT: Bot port (default: 3001)
Users must create their own .env file from .env.example
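A sketch of how the bot might read these (names match the list above; the comma-separated parsing of TELEGRAM_ALLOWED_USERS and the defaults are assumptions):

```ts
const config = {
  zaiApiKey: process.env.ZAI_API_KEY ?? '',
  telegramBotToken: process.env.TELEGRAM_BOT_TOKEN ?? '',
  // Assumed comma-separated list of numeric user IDs.
  allowedUsers: (process.env.TELEGRAM_ALLOWED_USERS ?? '')
    .split(',')
    .map((id) => id.trim())
    .filter(Boolean),
  webhookUrl: process.env.ZCODE_WEBHOOK_URL, // optional
  port: Number(process.env.ZCODE_PORT ?? 3001),
};

if (!config.zaiApiKey || !config.telegramBotToken) {
  throw new Error('Missing ZAI_API_KEY or TELEGRAM_BOT_TOKEN; copy .env.example to .env first.');
}
```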
- Add RTK utility module (src/utils/rtk.js)
- Integrate RTK into BashTool for all bash commands
- Integrate RTK into GitTool for git operations
- Initialize RTK on bot startup
- Support 60+ command types (git, npm, cargo, pytest, docker, etc.)
- Track and report token savings per command
- Graceful fallback when RTK is not available
Expected savings: 60-90% token reduction for supported commands
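The graceful-fallback idea, sketched (function names, the --version probe, and the `rtk <command>` wrapping syntax are all assumptions about the real rtk CLI):

```ts
import { execFile } from 'child_process';
import { promisify } from 'util';

const execFileAsync = promisify(execFile);
let rtkAvailable = false;

// Probe once at startup; if rtk is missing, commands run unmodified.
export async function initRtk(): Promise<void> {
  try {
    await execFileAsync('rtk', ['--version']); // assumed probe flag
    rtkAvailable = true;
  } catch {
    rtkAvailable = false;
  }
}

// Wrap a shell command with rtk when available, otherwise pass it through.
export function wrapWithRtk(command: string): string {
  return rtkAvailable ? `rtk ${command}` : command;
}
```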
toolOrchestration.ts used `export { getMaxToolUseConcurrency } from
'./toolConcurrency.js'` (a re-export) but then called the function
directly in runToolsConcurrently(). A re-export does not bring the name
into the module's own scope, causing a ReferenceError at runtime —
visible as "getMaxToolUseConcurrency is not defined" in --print mode.
TypeScript also flagged this as TS2304 on main.
Fix: replace the re-export with an explicit import + a separate export
statement so the name is both locally callable and publicly exported.
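In module terms the change amounts to:

```ts
// Before: re-export only; the name never enters this module's scope.
export { getMaxToolUseConcurrency } from './toolConcurrency.js';

// After: import for local use, then export separately.
import { getMaxToolUseConcurrency } from './toolConcurrency.js';
export { getMaxToolUseConcurrency };
```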
ripgrep.ts fell through to the vendor binary path on non-bundled (npm)
installs where the vendor/ directory is not shipped alongside dist/cli.mjs,
producing "spawn .../vendor/ripgrep/x64-linux/rg ENOENT". Fix: check
whether the vendor binary exists before returning the builtin config;
fall back to system rg if it is absent.
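Sketch of the check (resolveVendorRgPath stands in for the real path-resolution helper in ripgrep.ts):

```ts
import fs from 'fs';

declare function resolveVendorRgPath(): string; // e.g. <install>/vendor/ripgrep/<arch>/rg

function resolveRipgrepCommand(): string {
  const vendorRg = resolveVendorRgPath();
  if (fs.existsSync(vendorRg)) {
    return vendorRg; // bundled install: vendor binary ships next to dist/cli.mjs
  }
  return 'rg'; // npm install without vendor/: fall back to system ripgrep
}
```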
- Add new 'firepass' provider type alongside anthropic, openai, openrouter
- FirePass uses Fireworks AI's endpoint for Kimi K2.5 Turbo model
- Subscription billing model ($7/week) with 256K context window
- Anthropic API compatible (uses Anthropic SDK with custom baseURL)
Changes:
- providers.ts: Add firepass detection and base URL handling
- auth.ts: Add FirePass API key management (FIREPASS_API_KEY or FIREWORKS_API_KEY)
- config.ts: Add firepassApiKey and firepass auth provider
- client.ts: Add firepass client creation with custom baseURL (see sketch below)
- http.ts: Add firepass auth headers
- modelStrings.ts: Return Kimi K2.5 Turbo model ID for firepass
- model.ts: Add Kimi display name handling and default model logic
- modelOptions.ts: Simplify model picker for firepass (Kimi K2.5 Turbo only)
- status.tsx: Display FirePass in status bar
- login.tsx: Add FirePass option to provider selection
- FirepassLoginFlow.tsx: New component for FirePass login flow
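As a rough sketch of the client.ts piece (the base URL is a placeholder, not the real Fireworks endpoint; the key lookup mirrors the auth.ts item):

```ts
import Anthropic from '@anthropic-ai/sdk';

function createFirepassClient(): Anthropic {
  const apiKey = process.env.FIREPASS_API_KEY ?? process.env.FIREWORKS_API_KEY;
  if (!apiKey) {
    throw new Error('FirePass selected but no FIREPASS_API_KEY / FIREWORKS_API_KEY is set');
  }
  // FirePass is Anthropic API compatible, so the Anthropic SDK is reused
  // with a custom baseURL (placeholder below; the real one lives in providers.ts).
  return new Anthropic({ apiKey, baseURL: 'https://example.invalid/firepass' });
}
```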
Usage:
1. Run /login and select "FirePass"
2. Enter your Fireworks API key
3. Model picker shows Kimi K2.5 Turbo