fix: bulletproof command handler + auto-restart + README overhaul
- sendStreamingMessage: replaced broken simulated streaming with a reliable HTML send plus a stripped plain-text fallback (was silently failing)
- Added global unhandledRejection guard (catches async errors that the sequentialize middleware would swallow)
- restart.sh: auto-restart loop on crash (3s delay) instead of bare node
- README: comprehensive update with self-learning memory, curiosity engine, memory architecture diagram, updated command table, updated comparison
README.md · 139
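The unhandledRejection guard from the commit message can be sketched as follows. The handler shape and injectable logger are assumptions for illustration; the shipped guard lives in the bot entry point and logs via Winston.

```javascript
// Minimal sketch of the global guard: log the rejection, keep the process
// alive instead of letting Node crash (names here are assumptions).
function installRejectionGuard(log = console.error) {
  const handler = (reason) => {
    const msg = reason instanceof Error ? reason.stack : String(reason);
    log(`[unhandledRejection] ${msg}`);
    // Deliberately no process.exit(): the bot keeps serving other chats.
  };
  process.on('unhandledRejection', handler);
  return handler; // returned so it can be invoked or removed in tests
}
```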
@@ -1,6 +1,6 @@
# zCode CLI X

-Agentic coding assistant with **Z.AI + Telegram integration** — autonomous code execution with real-time streaming, self-correction loops, and RTK token optimization.
+Agentic coding assistant with **Z.AI + Telegram integration** — autonomous code execution with real-time streaming, self-correction loops, persistent self-learning memory, and RTK token optimization.

> 💡 **Get 10% OFF Z.AI** — Use code **ROK78RJKNW** at [z.ai/subscribe](https://z.ai/subscribe?ic=ROK78RJKNW) for the Coding Plan

@@ -13,6 +13,51 @@ Agentic coding assistant with **Z.AI + Telegram integration** — autonomous cod

- **🧠 Agent System**: Code Reviewer, System Architect, DevOps Engineer
- **📚 Skills System**: Pre-built skills for common tasks

### 🧠 Self-Learning Memory

- **Persistent across sessions**: JSON-backed memory store survives restarts
- **5 categories**: `lesson`, `pattern`, `preference`, `discovery`, `gotcha`
- **Auto-injected into system prompt**: AI knows what it learned before — every conversation builds on the last
- **Smart eviction**: Max 500 memories with priority-based eviction (old discoveries first, lessons/gotchas kept)
- **Deduplication**: Same memory won't be stored twice — access count increments instead
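The deduplication behavior described above can be sketched as a small store. Field and method names beyond `remember()` (which the README documents) are assumptions:

```javascript
// Hedged sketch: a duplicate memory bumps its access count instead of
// being stored twice; internal fields (entries, accessCount) are assumed.
class MemoryStore {
  constructor(limit = 500) {
    this.limit = limit;   // max 500 memories before eviction kicks in
    this.entries = [];    // { id, category, text, accessCount }
    this.nextId = 1;
  }
  remember(category, text) {
    const dup = this.entries.find(
      (e) => e.category === category && e.text === text
    );
    if (dup) {
      dup.accessCount += 1; // duplicate: increment, don't re-store
      return dup;
    }
    const entry = { id: this.nextId++, category, text, accessCount: 1 };
    this.entries.push(entry);
    return entry;
  }
}
```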
### 🔬 Curiosity Engine

The bot doesn't just respond — it **learns from every interaction**. After each response, an asynchronous analysis pass runs:

```
User message + AI response
            │
            ▼
   ┌─────────────────┐
   │ Pattern Detector │ ← runs AFTER delivery (zero latency)
   └────────┬────────┘
            │
    ┌───────┼───────┬──────────┬──────────┐
    ▼       ▼       ▼          ▼          ▼
  Error    User   Successful First-time  New API
  + Fix  Correct   Complex     Tool       Quirk
    │       │      Solution    Usage      Found
    ▼       ▼       ▼          ▼          ▼
  gotcha  lesson  pattern   discovery  discovery
```

**What triggers learning:**

| Trigger | Category | Example |
|---|---|---|
| Error with fix found | `gotcha` | `ENOENT: no such file → use absolute paths` |
| User says "wrong" or "fix" | `lesson` | `Correction on "npm install": use --legacy-peer-deps` |
| Complex successful solution | `pattern` | `Solution for "deploy to VPS": 12-step process with SSH` |
| First tool usage works | `discovery` | `Bash tool works for shell commands on this server` |
| New API quirk discovered | `discovery` | `Z.AI SSE sends empty data lines between chunks` |
| Repeated user preference | `preference` | `User always wants TypeScript over JavaScript` |
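The trigger-to-category routing in the table above can be sketched as a classifier. The thresholds and field names here are illustrative assumptions, not the real detector's logic:

```javascript
// Hedged sketch of trigger → category routing (fields are assumed).
function classifyInteraction({ userMessage = '', aiResponse = '', errorFixed = false, firstToolUse = false }) {
  if (errorFixed) return 'gotcha';                           // error + fix found
  if (/\b(wrong|fix)\b/i.test(userMessage)) return 'lesson'; // user correction
  if (firstToolUse) return 'discovery';                      // first tool usage works
  if (aiResponse.length > 1000) return 'pattern';            // complex successful solution
  return null;                                               // nothing learnable
}
```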
**Commands:**

| Command | Description |
|---|---|
| `/memory` | View memory stats + recent memories |
| `/remember <text>` | Manually save a memory (auto-detects category) |
| `/recall <query>` | Search memories by keyword |
| `/forget <id>` | Delete a specific memory |

### Streaming & Formatting

- **⚡ Real-time SSE Streaming**: Token-by-token delivery via `StreamConsumer` — adapted from [Hermes Agent's GatewayStreamConsumer](https://github.com/nousresearch/hermes-agent)
  - Queued token buffer → rate-limited `editMessageText` loop (1s base interval)
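The queued-buffer, rate-limited edit loop can be sketched like this. The real `StreamConsumer` lives in `message-sender.js`; the injected `editMessageText` callback and tick shape are assumptions:

```javascript
// Sketch: tokens accumulate in a buffer; a timer flushes at most one
// Telegram edit per interval, plus a final flush on stop().
function createStreamConsumer(editMessageText, intervalMs = 1000) {
  let buffer = '';
  let lastSent = '';
  const timer = setInterval(async () => {
    if (buffer === lastSent) return; // no new tokens since the last edit
    lastSent = buffer;               // snapshot before the async call
    await editMessageText(lastSent); // one rate-limited edit per tick
  }, intervalMs);
  return {
    push(token) { buffer += token; }, // tokens arrive faster than edits go out
    async stop() {                    // flush whatever remains, then stop
      clearInterval(timer);
      if (buffer !== lastSent) await editMessageText(buffer);
    },
  };
}
```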
@@ -29,6 +74,7 @@ Agentic coding assistant with **Z.AI + Telegram integration** — autonomous cod

  - Triggers: API errors, rate limits, timeouts, 5xx server errors
  - Auto-simplification: prompts simplified on retry to avoid recurring errors
  - Full logging of all retry attempts with reason tracking
- **🔁 Auto-Restart**: Process supervisor restarts the bot on crash (3s delay)
- **🛡️ RTK (Rust Token Killer)**: Token optimization for supported commands
  - 60-90% savings on git, npm, cargo, pytest, docker, and more
  - Active tracking stats via `getTrackingStats()`

@@ -39,6 +85,7 @@ Agentic coding assistant with **Z.AI + Telegram integration** — autonomous cod

- **📋 Request Queue**: Per-chat sequential processing (no race conditions)
- **🔌 MCP Protocol**: Full MCP client + server management
- **⏰ Cron Scheduling**: 1s interval, task locking, auto-recovery
- **🛡️ Unhandled rejection guard**: Catches any async error that slips through
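The self-correction behavior above (2 retries, exponential backoff, prompt simplification on retry) can be sketched as a wrapper. The `run`/`simplify` signatures are assumptions; the real logic is in `self-correction.js`:

```javascript
// Sketch: retry a failing call up to `retries` times, doubling the delay
// each attempt and simplifying the prompt to avoid the same failure.
async function withSelfCorrection(run, prompt, opts = {}) {
  const { retries = 2, baseDelayMs = 500, simplify = (p) => p } = opts;
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await run(prompt, attempt);
    } catch (err) {
      lastError = err;           // the bot logs each attempt with its reason
      prompt = simplify(prompt); // simplified on retry
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```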
## 📦 Installation

@@ -79,10 +126,10 @@ node bin/zcode.js

### Run as Telegram Bot (24/7)

```bash
-node bin/zcode.js --bot
+node bin/zcode.js --no-cli
```

-### Run as systemd service
+### Run as systemd service (recommended)

```ini
# /etc/systemd/system/zcode.service

@@ -94,7 +141,7 @@ After=network.target

Type=simple
User=<your-user>
WorkingDirectory=/path/to/zcode-cli-x
-ExecStart=/usr/bin/node bin/zcode.js --bot
+ExecStart=/usr/bin/node bin/zcode.js --no-cli
Restart=always
RestartSec=5

@@ -107,6 +154,12 @@ sudo systemctl enable zcode
sudo systemctl start zcode
```

### Quick restart (no systemd)

```bash
bash restart.sh
```
## 🤖 Telegram Bot Commands

| Command | Description |

@@ -117,6 +170,10 @@ sudo systemctl start zcode

| `/agents` | List agent roles |
| `/model <name>` | Switch AI model |
| `/stats` | System & RTK stats |
| `/memory` | 🧠 Persistent memory stats |
| `/remember <text>` | 📝 Save to memory |
| `/recall <query>` | 🔍 Search memory |
| `/forget <id>` | 🗑 Delete a memory |
| `/selfcorrection` | Self-correction status |
| `/bash <cmd>` | Execute shell command |
| `/web <query>` | Search the web |

@@ -189,6 +246,13 @@ User Message

┌──────────────┐
│ Telegram API │
│ (HTML mode)  │
└──────┬───────┘
       │
       ▼
┌──────────────┐
│ 🧠 Self-     │ ← async, zero latency
│   Learning   │   extracts patterns
│   Engine     │   stores to memory.json
└──────────────┘
```

@@ -200,8 +264,9 @@ zcode-cli-x/

│   └── zcode.js               # CLI entry point
├── src/
│   ├── bot/
-│   │   ├── index.js           # Telegram bot (grammy + SSE streaming)
+│   │   ├── index.js           # Telegram bot (grammy + SSE streaming + memory)
│   │   ├── message-sender.js  # StreamConsumer + markdownToHtml converter
│   │   ├── memory.js          # Persistent self-learning memory store
│   │   ├── deduplication.js   # Message deduplication (60s TTL)
│   │   ├── request-queue.js   # Per-chat request queuing
│   │   ├── delivery-hub.js    # Multi-channel delivery

@@ -222,6 +287,9 @@ zcode-cli-x/

│   ├── logger.js              # Winston logger
│   ├── env.js                 # Environment validation
│   └── rtk.js                 # RTK (Rust Token Killer) integration
├── data/
│   └── memory.json            # Persistent memory (auto-created, gitignored)
├── logs/                      # Runtime logs (gitignored)
├── .env                       # Configuration
└── package.json
```
@@ -231,10 +299,12 @@ zcode-cli-x/

1. **Message Reception**: Telegram webhook → grammy handler
2. **Deduplication**: `deduplication.js` (60s TTL, prevents double-processing)
3. **Request Queue**: `request-queue.js` (per-chat sequential processing)
-4. **Self-Correction**: `self-correction.js` (2 retries + exponential backoff + auto-simplification)
-5. **AI Chat + Streaming**: `chatWithAI()` → SSE stream → `StreamConsumer` → real-time edits
-6. **Formatting**: `markdownToHtml()` converts AI markdown → Telegram HTML
-7. **Final Delivery**: `editMessageText` with HTML parse_mode (or fallback to stripped plain text)
+4. **Memory Injection**: Memory context injected into system prompt
+5. **Self-Correction**: `self-correction.js` (2 retries + exponential backoff + auto-simplification)
+6. **AI Chat + Streaming**: `chatWithAI()` → SSE stream → `StreamConsumer` → real-time edits
+7. **Formatting**: `markdownToHtml()` converts AI markdown → Telegram HTML
+8. **Final Delivery**: `editMessageText` with HTML parse_mode (or fallback to stripped plain text)
+9. **Self-Learning**: `selfLearn()` analyzes interaction → extracts patterns → saves to memory.json
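The final-delivery step above (HTML edit with stripped plain-text fallback) can be sketched as follows. The injected `api.editMessageText` mirrors grammy's call shape, but its wiring here is an assumption:

```javascript
// Sketch: try the edit with parse_mode HTML first; if Telegram rejects the
// markup, resend with tags stripped and no parse_mode.
function stripTags(html) {
  return html.replace(/<[^>]+>/g, ''); // crude tag strip for the fallback
}

async function deliverFinal(api, chatId, messageId, html) {
  try {
    return await api.editMessageText(chatId, messageId, html, { parse_mode: 'HTML' });
  } catch (_err) {
    // HTML was rejected (e.g. unbalanced tags): fall back to plain text
    return await api.editMessageText(chatId, messageId, stripTags(html), {});
  }
}
```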
### StreamConsumer Pipeline

@@ -262,9 +332,48 @@ Z.AI API (SSE)

┌──────────────┐
│ editMessage  │ ← final message with parse_mode: 'HTML'
│ Text()       │   fallback: stripped plain text (no raw **)
└──────────────┘
       │
       ▼ (async, after delivery)
┌──────────────┐
│ selfLearn()  │ ← pattern detector extracts learnable insights
│              │   saves to data/memory.json
└──────────────┘
```

### Memory System Architecture

```
┌──────────────────────────────────────────────┐
│                data/memory.json              │
│ (persistent, survives restarts, gitignored)  │
└──────────────────┬───────────────────────────┘
                   │
         ┌─────────┴─────────┐
         │    MemoryStore    │
         │    (singleton)    │
         └─────────┬─────────┘
                   │
 ┌──────────────┼──────────────┬──────────────┬──────────────┐
 ▼              ▼              ▼              ▼              ▼
📖 lesson   🔧 pattern    👤 preference  💡 discovery   ⚠️ gotcha
"Always     "For deploy:  "User prefers  "Z.AI SSE      "ENOENT →
 use abs     use scp..."   TS over JS"    sends empty    use absolute
 paths"                                   data lines"    paths"
 │              │              │              │              │
 └──────────────┴──────────────┴──────────────┴──────────────┘
                   │
                   ▼
buildContextSummary() → injected into system prompt
recall(query)         → search memories
remember(cat, text)   → save new memory
forget(id)            → delete memory
```

**Priority in system prompt:** gotchas > lessons > patterns > preferences > discoveries

**Eviction policy:** When memory exceeds 500 entries, old single-access discoveries are evicted first. Lessons and gotchas are never evicted unless all else fails.
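The priority order stated above can be sketched as a sort-and-cap over the stored memories. `buildContextSummary` is the name from the diagram; the implementation is an assumption:

```javascript
// Sketch: rank memories gotcha > lesson > pattern > preference > discovery,
// cap the count, and render one bullet per memory for the system prompt.
const PRIORITY = ['gotcha', 'lesson', 'pattern', 'preference', 'discovery'];

function buildContextSummary(memories, maxItems = 20) {
  return [...memories]
    .sort((a, b) => PRIORITY.indexOf(a.category) - PRIORITY.indexOf(b.category))
    .slice(0, maxItems)                        // cap what goes into the prompt
    .map((m) => `- [${m.category}] ${m.text}`)
    .join('\n');
}
```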
## 📊 Feature Comparison

| Feature | zCode CLI X | Hermes Agent | better-clawd |

@@ -274,6 +383,10 @@ Z.AI API (SSE)

| Sub-agents | ✅ Multi-agent (swarm) | ✅ delegate_task + batch | ❌ Single agent only |
| Agent roles | ✅ Code Reviewer, Architect, DevOps | ✅ Agent Registry (10+ roles) | ❌ Fixed single role |
| Self-correction loops | ✅ 2 retries + backoff + auto-simplification | ✅ Agent self-correction skill | ❌ None |
| **Intelligence** | | | |
| Persistent memory | ✅ JSON-backed, 5 categories, auto-learn | ✅ Cross-session memory | ❌ None |
| Self-learning / curiosity | ✅ Pattern detector + auto-extraction | ✅ Knowledge + memory tools | ❌ None |
| Memory-injected prompts | ✅ Every conversation uses past lessons | ✅ Memory injected | ❌ None |
| **Streaming** | | | |
| Real-time SSE streaming | ✅ StreamConsumer (edit-in-place) | ✅ GatewayStreamConsumer | ❌ None |
| Telegram HTML formatting | ✅ markdownToHtml + fallback | ✅ Native HTML support | ❌ None |

@@ -293,19 +406,19 @@ Z.AI API (SSE)

| **Infrastructure** | | | |
| Model routing | ✅ Multi-provider | ✅ Multi-provider routing | ❌ Single model |
| Context compression | ✅ Compact pipeline | ✅ lean-ctx MCP (90% savings) | ❌ None |
| Memory persistence | ✅ Session memory | ✅ Cross-session memory | ❌ None |
| Auto-restart | ✅ Process supervisor | ✅ systemd managed | ❌ None |
| Cron scheduling | ✅ 1s interval, jitter, locks | ✅ Cron jobs with delivery | ❌ None |

### Summary

-- **zCode CLI X** — Lightweight agentic coder focused on Telegram + Z.AI. Real-time SSE streaming, self-correction loops, RTK optimization, and beautiful HTML formatting. Ideal for quick coding tasks via Telegram.
+- **zCode CLI X** — Lightweight agentic coder focused on Telegram + Z.AI. Real-time SSE streaming, self-correction loops, persistent self-learning memory with curiosity engine, RTK optimization, and beautiful HTML formatting. Gets smarter with every conversation. Ideal for quick coding tasks via Telegram.
- **Hermes Agent** — Full-stack AI assistant platform. Best for complex multi-agent workflows, scheduled automation, and cross-platform deployment. 500+ skills, MCP ecosystem, deepest toolset.
- **better-clawd** — Minimal Claude Code clone. Useful as a lightweight reference but lacks agentic depth.

## 🔗 Integrations

- **Z.AI API**: GLM-5.1 model (Coding Plan) with SSE streaming
-- **Telegram Bot API**: grammy + auto-retry + runner + webhook
+- **Telegram Bot API**: grammy + auto-retry + sequentialize + webhook
- **Discord.js v14**: Discord bot with GatewayIntentBits
- **Express.js**: HTTP server for webhook handling
- **Winston**: Structured logging

@@ -316,7 +429,7 @@ Z.AI API (SSE)

Contributions welcome! Based on:
- [better-clawd](https://github.com/x1xhlol/better-clawd.git) — Claude Code clone
-- [Hermes Agent](https://hermes-agent.nousresearch.com) — AI assistant platform (streaming architecture credit)
+- [Hermes Agent](https://hermes-agent.nousresearch.com) — AI assistant platform (streaming architecture + memory system credit)

---