- New memory.js: JSON-backed MemoryStore with 5 categories (lesson, pattern, preference, discovery, gotcha)
- Memory injected into system prompt — bot sees past learnings every session
- Curiosity engine: auto-detects error/fix pairs, user corrections, successful patterns, and new tool discoveries
- New commands: /memory (stats), /remember (save), /recall (search), /forget (delete)
- Curiosity engine runs AFTER response delivery — zero latency impact
- 500 memory cap with smart eviction (keeps gotchas/lessons, evicts old discoveries)
- data/ directory gitignored (memory is local to each deployment)
- Add markdownToHtml() converter: **bold**, *italic*, code blocks, links, headings, quotes, lists
- StreamConsumer: intermediate edits stay plain text, FINAL message gets full HTML formatting
- sendFormatted() now uses HTML parse_mode with fallback to stripped plain text
- stripMarkdown() for plain-text fallback (no raw syntax chars)
- All Telegram sends now use HTML instead of legacy Markdown mode
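The converter and its plain-text fallback might look like the following. A minimal sketch covering only a subset of the listed syntax (bold, italic, inline code); the real markdownToHtml() also handles code blocks, links, headings, quotes, and lists. Note that HTML-escaping must happen before tags are inserted, or Telegram's HTML parse_mode will reject user text containing `<` or `&`.

```javascript
// Escape characters that Telegram's HTML parse_mode treats specially.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Sketch of markdownToHtml(): escape first, then convert a subset of syntax.
function markdownToHtml(md) {
  return escapeHtml(md)
    .replace(/\*\*(.+?)\*\*/g, '<b>$1</b>')    // **bold**
    .replace(/\*(.+?)\*/g, '<i>$1</i>')        // *italic*
    .replace(/`([^`]+)`/g, '<code>$1</code>'); // `inline code`
}

// Sketch of stripMarkdown(): same patterns, but drop the syntax characters
// entirely, for the plain-text fallback path.
function stripMarkdown(md) {
  return md
    .replace(/\*\*(.+?)\*\*/g, '$1')
    .replace(/\*(.+?)\*/g, '$1')
    .replace(/`([^`]+)`/g, '$1');
}
```

sendFormatted() can then try the HTML version first and retry with stripMarkdown() output if Telegram rejects the markup.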
- Removed SSE streaming from chatWithAI()
- Keep sendStreamingMessage() for chunked delivery
- Self-correction loops still active
- Messages are now delivered in chunks with a typing indicator
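The chunked delivery path could be sketched as below. This is an assumption-laden sketch, not the real message-sender.js: the 4096-character limit is Telegram's documented message cap, but the helper names and the Telegraf-style `ctx` calls are illustrative.

```javascript
const CHUNK_SIZE = 4096; // Telegram's per-message length limit

// Split text into fixed-size chunks for sequential delivery.
function splitIntoChunks(text, size = CHUNK_SIZE) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical sendStreamingMessage(): show a typing indicator,
// send each chunk, and pause briefly between sends.
async function sendStreamingMessage(ctx, text) {
  for (const chunk of splitIntoChunks(text)) {
    await ctx.telegram.sendChatAction(ctx.chat.id, 'typing');
    await ctx.reply(chunk);
    await new Promise(resolve => setTimeout(resolve, 50));
  }
}
```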
- Add sendStreamingMessage() to message-sender.js with typing indicators
- Enable stream: true in chatWithAI() with SSE parsing
- Replace all ctx.reply() calls with sendStreamingMessage()
- Real-time text streaming with 50ms delay between chunks
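The SSE parsing enabled by `stream: true` might look like this sketch. It assumes an OpenAI-style stream of `data: {json}` lines ending with `data: [DONE]`; the helper names are illustrative, not the actual chatWithAI() internals.

```javascript
// Extract the text delta from one SSE line, or null for
// non-data lines and the [DONE] sentinel.
function parseSSELine(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice(6).trim();
  if (payload === '[DONE]') return null;
  const event = JSON.parse(payload);
  return event.choices?.[0]?.delta?.content ?? '';
}

// Join all deltas from a raw SSE body into the full response text.
function collectDeltas(sseText) {
  return sseText
    .split('\n')
    .map(parseSSELine)
    .filter(delta => delta !== null)
    .join('');
}
```

In a live stream, each non-null delta would be forwarded to sendStreamingMessage() as it arrives rather than joined at the end.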
- Add RTK utility module (src/utils/rtk.js)
- Integrate RTK into BashTool for all bash commands
- Integrate RTK into GitTool for git operations
- Initialize RTK on bot startup
- Support 60+ command types (git, npm, cargo, pytest, docker, etc.)
- Track and report token savings per command
- Graceful fallback when RTK is not available
Expected savings: 60-90% token reduction for supported commands