Commit Graph

18 Commits

  • fix: bulletproof command handler + auto-restart + README overhaul
    - sendStreamingMessage: replaced broken simulated streaming with reliable
      HTML send + stripped plain text fallback (was silently failing)
    - Added global unhandledRejection guard (catches async errors that
      sequentialize middleware would swallow)
    - restart.sh: auto-restart loop on crash (3s delay) instead of bare node
    - README: comprehensive update with self-learning memory, curiosity engine,
      memory architecture diagram, updated command table, updated comparison
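The global guard mentioned above can be sketched as follows. This is a minimal illustration, not the bot's actual code: the handler name and the decision to log-and-continue (leaving a real crash to the restart.sh loop) are assumptions.

```javascript
// Install a process-wide guard so async errors rejected inside middleware
// (e.g. grammY's sequentialize) are logged instead of silently swallowed.
function installRejectionGuard(log = console.error) {
  const handler = (reason) => {
    const message = reason instanceof Error ? reason.stack : String(reason);
    log(`[unhandledRejection] ${message}`);
    // Deliberately do NOT exit here: the restart.sh loop handles real
    // crashes; a logged rejection keeps the bot running.
  };
  process.on('unhandledRejection', handler);
  return handler; // returned so callers/tests can remove it
}
```

Returning the handler keeps the guard removable, which matters if the bot is restarted in-process.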
  • feat: persistent self-learning memory + curiosity engine
    - New memory.js: JSON-backed MemoryStore with 5 categories (lesson, pattern, preference, discovery, gotcha)
    - Memory injected into the system prompt so the bot sees past learnings every session
    - Curiosity engine: auto-detects errors/fixes, corrections, successful patterns, new tool discoveries
    - New commands: /memory (stats), /remember (save), /recall (search), /forget (delete)
    - Runs AFTER response delivery, so there is zero latency impact
    - 500 memory cap with smart eviction (keeps gotchas/lessons, evicts old discoveries)
    - data/ directory gitignored (memory is local to each deployment)
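The "smart eviction" described above can be sketched as a pure function. The names (`evict`, `PROTECTED`) and the exact tie-breaking are illustrative assumptions; the real memory.js store is JSON-backed and may differ in detail.

```javascript
// Categories that survive eviction (gotchas/lessons are kept; old
// discoveries go first), per the 500-entry cap described in the commit.
const PROTECTED = new Set(['gotcha', 'lesson']);

// entries: [{ category, text, ts }]. Drops the oldest non-protected
// entries until the cap is met. If protected entries alone exceed the
// cap, the result may still be over it; a real store would handle that.
function evict(entries, cap = 500) {
  if (entries.length <= cap) return entries.slice();
  const byAge = [...entries].sort((a, b) => a.ts - b.ts); // oldest first
  let toDrop = entries.length - cap;
  return byAge.filter((e) => {
    if (toDrop > 0 && !PROTECTED.has(e.category)) {
      toDrop--;
      return false; // evicted
    }
    return true;
  });
}
```

Keeping eviction as a pure function over the entry array makes the cap policy easy to unit-test independently of the JSON persistence.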
  • fix: beautiful Telegram formatting via HTML (no more raw **)
    - Add markdownToHtml() converter: **bold**, *italic*, code blocks, links, headings, quotes, lists
    - StreamConsumer: intermediate edits stay plain text, FINAL message gets full HTML formatting
    - sendFormatted() now uses HTML parse_mode with fallback to stripped plain text
    - stripMarkdown() for plain-text fallback (no raw syntax chars)
    - All Telegram sends now use HTML instead of legacy Markdown mode
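The converter pair above can be illustrated with a cut-down sketch. The real markdownToHtml() also handles code blocks, links, headings, quotes and lists; this minimal version covers only bold, italic, and inline code, and the regexes are assumptions about the approach.

```javascript
// Telegram HTML parse_mode requires &, <, > to be escaped first,
// otherwise user text like "x<y" breaks the markup.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function markdownToHtml(md) {
  return escapeHtml(md)
    .replace(/\*\*(.+?)\*\*/g, '<b>$1</b>')    // **bold**
    .replace(/\*(.+?)\*/g, '<i>$1</i>')        // *italic*
    .replace(/`([^`]+)`/g, '<code>$1</code>'); // `inline code`
}

// Fallback when Telegram rejects the HTML: send plain text with the raw
// syntax characters stripped, so users never see literal ** again.
function stripMarkdown(md) {
  return md.replace(/\*\*|\*|`/g, '');
}
```

Ordering matters: bold must be replaced before italic so `**x**` is not consumed by the single-asterisk rule.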
  • feat: real-time SSE streaming via StreamConsumer (adapted from Hermes Agent)
    - StreamConsumer class: queued token buffer → rate-limited editMessageText loop
    - Adaptive flood control backoff (3 strikes → fallback to plain send)
    - Cursor indicator (▉) during typing, stripped on completion
    - chatWithAI now supports onDelta callback for SSE token streaming
    - Uses native fetch() for SSE (Node 18+), falls back to non-streaming on error
    - Message handler wires StreamConsumer into the chat pipeline
    - Graceful fallback: if streaming fails entirely, sends as plain message
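The queued-buffer → rate-limited-edit idea can be sketched like this. The interval, method names, and the injected `edit` function are assumptions; the real StreamConsumer adds flood-control backoff and plain-send fallback on repeated failures.

```javascript
// Buffer SSE tokens and flush them to an injected edit function on a
// timer, so editMessageText is never called once per token.
class StreamConsumer {
  constructor(edit, intervalMs = 1000) {
    this.edit = edit; // e.g. (text) => api.editMessageText(chatId, msgId, text)
    this.intervalMs = intervalMs;
    this.buffer = '';
    this.sent = '';
    this.timer = null;
  }
  onDelta(token) {
    this.buffer += token;
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.intervalMs);
    }
  }
  async flush() {
    this.timer = null;
    if (this.buffer === this.sent) return; // nothing new to show
    this.sent = this.buffer;
    await this.edit(this.sent + '▉'); // cursor indicator while typing
  }
  async finish() {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    await this.edit(this.buffer); // final edit: cursor stripped
    return this.buffer;
  }
}
```

Injecting `edit` keeps the class testable and lets the caller decide how to react to Telegram flood-control errors.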
  • fix: revert streaming to prevent webhook errors
    - Removed SSE streaming from chatWithAI()
    - Keep sendStreamingMessage() for chunked delivery
    - Self-correction loops still active
    - Messages will be delivered in chunks with typing indicator
  • feat: fully enable self-correction loops
    - Import withSelfCorrection from self-correction.js
    - Wrap chatWithAI() with self-correction wrapper
    - Add /selfcorrection command to show status
    - Update /start to mention self-correction and streaming
    - Self-correction: 2 retries + exponential backoff + auto-simplification
    - Triggers: error responses, rate limits, timeouts, 5xx errors
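The retry-with-backoff shape described above might look like the sketch below. `simplify()` and the parameter names are illustrative stand-ins, not the actual self-correction.js API.

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Wrap a chat call with up to `retries` retries, exponential backoff,
// and prompt simplification on the final attempt.
function withSelfCorrection(chatFn, { retries = 2, baseDelayMs = 500 } = {}) {
  return async function corrected(prompt) {
    let lastErr;
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        // Auto-simplify only on the last retry, as a final fallback.
        const p = attempt === retries && attempt > 0 ? simplify(prompt) : prompt;
        return await chatFn(p);
      } catch (err) {
        lastErr = err; // rate limits, timeouts, 5xx all land here
        if (attempt < retries) await sleep(baseDelayMs * 2 ** attempt);
      }
    }
    throw lastErr;
  };
}

// Illustrative simplification: truncate oversized prompts.
function simplify(prompt) {
  return prompt.length > 2000 ? prompt.slice(0, 2000) : prompt;
}
```

Wrapping at the function boundary means chatWithAI() itself needs no retry logic, matching the "wrap" wording in the commit.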
  • feat: enable streaming responses like OpenClaw
    - Add sendStreamingMessage() to message-sender.js with typing indicators
    - Enable stream: true in chatWithAI() with SSE parsing
    - Replace all ctx.reply() calls with sendStreamingMessage()
    - Real-time text streaming with 50ms delay between chunks
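The chunked delivery with a 50ms gap can be sketched as follows. The 4096-character limit is Telegram's per-message cap; `chunkText` is an assumed helper name.

```javascript
// Split a long reply into message-sized pieces.
function chunkText(text, size = 4096) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Send each chunk through an injected send function (e.g. ctx.reply),
// pacing them with a short delay so they arrive in order and readably.
async function sendStreamingMessage(send, text, { size = 4096, delayMs = 50 } = {}) {
  for (const chunk of chunkText(text, size)) {
    await send(chunk);
    await new Promise((r) => setTimeout(r, delayMs));
  }
}
```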
  • feat: full service exposure with grammy bot + claudegram patterns
    - Rewrote bot/index.js using grammy (@grammyjs/auto-retry + runner)
    - Added deduplication.js (adapted from claudegram)
    - Added request-queue.js (per-chat sequential processing)
    - Added message-sender.js (chunking + Markdown fallback)
    - Wired all JS-shim services: tools, skills, agents, config, RTK
    - Added function calling support to ZAIProvider.chat()
    - Added dynamic command routing (tools, skills, agents, model, stats)
    - Added per-agent delegation commands (/agent_coder, /agent_architect, etc.)
    - Added dedup + queue patterns from claudegram's battle-tested codebase
    - Updated zcode.js to pass agents to initBot()
    - Updated README feature comparison table to reflect real capabilities
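The per-chat sequential processing named above is essentially a promise chain per chat id. This sketch is adapted in spirit from the request-queue.js described in the commit; names are illustrative.

```javascript
// Each chat id gets its own promise chain: messages from one chat run
// strictly in order, while different chats are processed concurrently.
class RequestQueue {
  constructor() {
    this.tails = new Map(); // chatId -> tail promise of that chat's chain
  }
  enqueue(chatId, task) {
    const tail = this.tails.get(chatId) || Promise.resolve();
    // .catch keeps the chain alive even if a previous task failed.
    const next = tail.catch(() => {}).then(() => task());
    this.tails.set(chatId, next);
    return next;
  }
}
```

Swallowing prior failures in the chain (the empty `.catch`) is a deliberate trade-off: one bad message must not wedge a chat's queue forever.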
  • feat: Add RTK (Rust Token Killer) integration for token optimization
    - Add RTK utility module (src/utils/rtk.js)
    - Integrate RTK into BashTool for all bash commands
    - Integrate RTK into GitTool for git operations
    - Initialize RTK on bot startup
    - Support 60+ command types (git, npm, cargo, pytest, docker, etc.)
    - Track and report token savings per command
    - Graceful fallback when RTK is not available
    
    Expected savings: 60-90% token reduction for supported commands
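The graceful-fallback pattern above can be sketched as a pure command rewriter. The mapping entries, the `rtk` prefix form, and the availability flag are all assumptions for illustration; the real module covers 60+ command types and tracks savings.

```javascript
// Illustrative subset of the supported-command mapping.
const RTK_MAP = new Map([
  ['git status', 'rtk git status'],
  ['npm test', 'rtk npm test'],
]);

// Rewrite a command to its token-optimized RTK form when the binary is
// available; otherwise (or for unmapped commands) run it unchanged.
function wrapWithRtk(command, rtkAvailable) {
  if (!rtkAvailable) return command; // graceful fallback: RTK missing
  return RTK_MAP.get(command) || command; // unmapped commands pass through
}
```

Because the fallback path returns the original command verbatim, BashTool and GitTool behave identically whether or not RTK is installed.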
  • feat: Complete zCode CLI X with Telegram bot integration
    - Add full Telegram bot functionality with Z.AI API integration
    - Implement 4 tools: Bash, FileEdit, WebSearch, Git
    - Add 3 agents: Code Reviewer, Architect, DevOps Engineer
    - Add 6 skills for common coding tasks
    - Add systemd service file for 24/7 operation
    - Add nginx configuration for HTTPS webhook
    - Add comprehensive documentation
    - Implement WebSocket server for real-time updates
    - Add logging system with Winston
    - Add environment validation
    
    🤖 zCode CLI X - Agentic coder with Z.AI + Telegram integration