# Spark Intelligence Integration Guide for QwenClaw

## πŸš€ Why Spark Intelligence?

**Spark Intelligence transforms QwenClaw from a stateless executor into a learning system that:**

- βœ… **Remembers** what worked and what didn't
- βœ… **Warns** before repeating mistakes
- βœ… **Promotes** validated wisdom automatically
- βœ… **Adapts** to your specific workflows
- βœ… **Improves** continuously through outcome tracking

---

## πŸ“¦ Installation

### Step 1: Install Spark Intelligence

**Windows (PowerShell):**

```powershell
irm https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.ps1 | iex
```

**Mac/Linux:**

```bash
curl -fsSL https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.sh | bash
```

**Manual Install:**

```bash
git clone https://github.com/vibeforge1111/vibeship-spark-intelligence
cd vibeship-spark-intelligence
python -m venv .venv
.venv\Scripts\activate           # Windows
# or: source .venv/bin/activate  # Mac/Linux
python -m pip install -e ".[services]"
```

### Step 2: Verify Installation

```bash
python -m spark.cli health
python -m spark.cli up
python -m spark.cli learnings
```

---

## πŸ”— Integration with QwenClaw

### Architecture

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                       QwenClaw Session                       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                 Spark Event Capture (Hooks)                  β”‚
β”‚   - PreToolUse   - PostToolUse   - UserPromptSubmit          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                 Spark Intelligence Pipeline                  β”‚
β”‚  Capture β†’ Distill β†’ Transform β†’ Store β†’ Act β†’ Guard β†’ Learn β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚          Pre-Tool Advisory (Before QwenClaw Acts)            β”‚
β”‚  BLOCK (0.95+)  |  WARNING (0.80-0.95)  |  NOTE (0.48-0.80)  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

### Configuration

Create `~/.spark/config.yaml`:

```yaml
spark:
  enabled: true
  session_id: qwenclaw-${timestamp}

  hooks:
    pre_tool_use: true
    post_tool_use: true
    user_prompt: true

  advisory:
    enabled: true
    min_score: 0.48
    cooldown_seconds: 300
    authority_levels:
      block: 0.95
      warning: 0.80
      note: 0.48

  memory:
    auto_capture: true
    min_importance: 0.55
    importance_boosts:
      causal_language: 0.15
      quantitative_data: 0.30
      technical_specificity: 0.15

  observatory:
    enabled: true
    sync_interval_seconds: 120
    vault_path: ~/Documents/Obsidian Vault/Spark-Intelligence-Observatory
```

### Start Spark + QwenClaw

```bash
# Terminal 1: Start Spark pipeline
python -m spark.cli up

# Terminal 2: Start QwenClaw
qwenclaw start

# Terminal 3: Send tasks
qwenclaw send "Refactor the authentication module"
```

---

## 🧠 How Spark Improves QwenClaw

### 1. Pre-Tool Advisory Guidance

**Before QwenClaw executes a tool**, Spark surfaces relevant lessons:

#### BLOCK Example (0.95+ score)

```
⚠️ BLOCKED: Spark advisory

Action: rm -rf ./node_modules
Reason: This command will delete critical dependencies.
        Last 3 executions resulted in 2+ hour rebuild times.
Alternative: npm clean-install

Confidence: 0.97 | Validated: 12 times
```

#### WARNING Example (0.80-0.95 score)

```
⚠️ WARNING: Spark advisory

Action: Edit file without reading
File: src/config/database.ts
Pattern: This pattern failed 4 times in the last 24 hours.
         Missing context caused incorrect modifications.
Suggestion: Read the file first, then edit.

Reliability: 0.91 | Validated: 8 times
```

#### NOTE Example (0.48-0.80 score)

```
ℹ️ NOTE: Spark advisory

User Preference: Always use --no-cache flag for Docker builds
Context: Prevents stale layer caching issues
Captured from: Session #4521, 2 days ago
```

### 2. Anti-Pattern Detection

Spark identifies and corrects problematic workflows:

| Pattern | Detection | Correction |
|---------|-----------|------------|
| Edit without Read | File modified without prior read | Suggests reading first |
| Recurring Command Failures | Same command fails 3+ times | Suggests alternatives |
| Missing Tests | Code committed without tests | Reminds about the testing policy |
| Hardcoded Secrets | Secrets detected in code | Blocks and warns |

### 3. Memory Capture with Intelligence

**Automatic Importance Scoring (0.0-1.0):**

| Score | Action | Example |
|-------|--------|---------|
| β‰₯0.65 | Auto-save | "Remember: always use --no-cache for Docker" |
| 0.55-0.65 | Suggest | "I prefer TypeScript over JavaScript" |
| <0.55 | Ignore | Generic statements, noise |

**Signals that boost importance:**

- Causal language: "because", "leads to" (+0.15-0.30)
- Quantitative data: "reduced from 4.2s to 1.6s" (+0.30)
- Technical specificity: real tools, libraries, patterns (+0.15-0.30)

### 4. Auto-Promotion to Project Files

High-reliability insights are automatically promoted to:

**CLAUDE.md** - Wisdom, reasoning, context insights:

```markdown
## Docker Best Practices
- Always use `--no-cache` flag for production builds
- Validated: 12 times | Reliability: 0.96
```

**AGENTS.md** - Meta-learning, self-awareness:

```markdown
## Project Preferences
- Prefer TypeScript over JavaScript for large projects
- Test-first development required for core modules
```

**SOUL.md** - Communication preferences, user understanding:

```markdown
## User Communication Style
- Prefers concise explanations with code examples
- Values performance metrics and quantitative data
```

### 5. EIDOS Episodic Intelligence

Extracts structured rules from experience:

| Type | Description | Example |
|------|-------------|---------|
| **Heuristics** | General rules of thumb | "Always test before deploying" |
| **Sharp Edges** | Things to watch out for | "API rate limits hit at 100 req/min" |
| **Anti-Patterns** | What not to do | "Don't edit config without backup" |
| **Playbooks** | Proven approaches | "Database migration checklist" |
| **Policies** | Enforced constraints | "Must have tests for core modules" |

---

## πŸ“Š Obsidian Observatory

Spark auto-generates **465+ markdown pages** with live Dataview queries:

### Generate Observatory

```bash
python scripts/generate_observatory.py --force --verbose
```

**Vault Location:** `~/Documents/Obsidian Vault/Spark-Intelligence-Observatory`

### What's Included

- **Pipeline Health** - 12-stage pipeline detail pages with metrics
- **Cognitive Insights** - Stored insights with reliability scores
- **EIDOS Episodes** - Pattern distillations and heuristics
- **Advisory Decisions** - Pre-tool guidance history
- **Explorer Views** - Real-time data exploration
- **Canvas View** - Spatial pipeline visualization

### Auto-Sync

The Observatory syncs every **120 seconds** while the pipeline is running.
---

## πŸ“ˆ Measurable Outcomes

### Advisory Source Effectiveness

| Source | What It Provides | Effectiveness |
|--------|------------------|---------------|
| **Cognitive** | Validated session insights | ~62% (dominant) |
| **Bank** | User memory banks | ~10% |
| **EIDOS** | Pattern distillations | ~5% |
| **Baseline** | Static rules | ~5% |
| **Trigger** | Event-specific rules | ~5% |
| **Semantic** | BM25 + embedding retrieval | ~3% |

### Timeline to Value

| Time | What Happens |
|------|--------------|
| **Hour 1** | Spark starts capturing events |
| **Hours 2-4** | Patterns emerge (tool effectiveness, error patterns) |
| **Days 1-2** | Insights get promoted to project files |
| **Week 1+** | Advisory goes live with pre-tool guidance |

---

## πŸ”§ CLI Commands

### Spark Commands

```bash
# Start pipeline
python -m spark.cli up

# Stop pipeline
python -m spark.cli down

# Check status
python -m spark.cli status

# View learnings
python -m spark.cli learnings

# View advisories
python -m spark.cli advisories

# Promote insight manually
python -m spark.cli promote

# Health check
python -m spark.cli health
```

### QwenClaw Commands

```bash
# Start QwenClaw
qwenclaw start

# Send task (Spark captures automatically)
qwenclaw send "Refactor the authentication module"

# Check status
qwenclaw status
```

---

## 🎯 Best Practices

### 1. Let Spark Learn Naturally

Just use QwenClaw normally. Spark captures and learns in the background.

### 2. Provide Explicit Feedback

Tell Spark what to remember:

```
"Remember: always use --force for this legacy package"
"I prefer yarn over npm in this project"
"Test files should be in __tests__ directory"
```

### 3. Review Advisories

Pay attention to pre-tool warnings. They're based on validated patterns.

### 4. Check the Observatory

Review the Obsidian vault weekly to understand what Spark has learned.

### 5. Promote High-Value Insights

Manually promote insights that are immediately valuable:

```bash
python -m spark.cli promote
```

---

## 🚨 Troubleshooting

### Spark Not Capturing Events

**Check:**

```bash
python -m spark.cli health
python -m spark.cli status
```

**Solution:**

- Ensure the Spark pipeline is running: `python -m spark.cli up`
- Verify hooks are enabled in the config
- Check that the QwenClaw session ID matches

### Advisories Not Surfacing

**Check:**

```bash
python -m spark.cli advisories
```

**Solution:**

- Verify the advisory `min_score` in the config (default: 0.48)
- Check the cooldown period (default: 300 seconds)
- Ensure insights have been validated (5+ times)

### Observatory Not Syncing

**Check:**

```bash
python scripts/generate_observatory.py --verbose
```

**Solution:**

- Verify the Obsidian vault path in the config
- Ensure the vault exists
- Check the sync interval (default: 120 seconds)

---

## πŸ“š Resources

- **Spark Docs:** https://spark.vibeship.co
- **GitHub:** https://github.com/vibeforge1111/vibeship-spark-intelligence
- **Onboarding:** `docs/SPARK_ONBOARDING_COMPLETE.md`
- **Quickstart:** `docs/QUICKSTART.md`
- **Obsidian Guide:** `docs/OBSIDIAN_OBSERVATORY_GUIDE.md`

---

## ✨ Summary

**Spark Intelligence + QwenClaw = Self-Evolving AI Assistant**

| Without Spark | With Spark |
|---------------|------------|
| Stateless execution | Continuous learning |
| Repeats mistakes | Warns before errors |
| No memory | Captures preferences |
| Static behavior | Evolves over time |
| No observability | Full Obsidian vault |

**Install Spark today and transform QwenClaw into a learning system!** 🧠✨
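As a closing illustration, the memory-capture scoring described earlier (auto-save at β‰₯0.65, suggest at 0.55-0.65, ignore below 0.55) can be sketched as a small scorer. The boost values mirror the `memory.importance_boosts` config; the detection heuristics (keyword and regex checks) are invented for this example and are not Spark's real implementation:

```python
import re

# Hypothetical sketch of importance scoring -- not Spark's real implementation.
# Boost values mirror memory.importance_boosts in ~/.spark/config.yaml.

CAUSAL_WORDS = ("because", "leads to", "results in", "caused")

def importance_score(text: str, base: float = 0.40) -> float:
    """Combine a base score with the signal boosts described above."""
    score = base
    lowered = text.lower()
    if any(word in lowered for word in CAUSAL_WORDS):
        score += 0.15  # causal_language boost
    if re.search(r"\d+(\.\d+)?\s*(ms|s|%|req/min)", lowered):
        score += 0.30  # quantitative_data boost (e.g. "4.2s", "100 req/min")
    return min(score, 1.0)

def capture_action(score: float) -> str:
    """Apply the auto-save / suggest / ignore thresholds from the table."""
    if score >= 0.65:
        return "auto-save"
    if score >= 0.55:
        return "suggest"
    return "ignore"

msg = "Switching to --no-cache reduced build time from 4.2s to 1.6s"
print(capture_action(importance_score(msg)))  # auto-save
```

A message with only causal language ("always add tests because regressions are costly") lands at 0.55 and would be *suggested* rather than auto-saved, matching the middle row of the scoring table.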