# Spark Intelligence Skill for QwenClaw

## Overview

- **Name:** spark-intelligence
- **Source:** https://github.com/vibeforge1111/vibeship-spark-intelligence
- **Website:** https://spark.vibeship.co
Spark Intelligence is a self-evolving AI companion that transforms QwenClaw into a learning system that remembers, adapts, and improves continuously.
## What Spark Does

Spark closes the intelligence loop for QwenClaw:

```
QwenClaw Session → Spark Captures Events → Pipeline Filters Noise →
Quality Gate Scores Insights → Storage → Advisory Delivery →
Pre-Tool Guidance → Outcomes Feed Back → System Evolves
```

## Key Capabilities
| Capability | Description |
|---|---|
| Pre-Tool Advisory | Surfaces warnings/notes BEFORE QwenClaw executes tools |
| Memory Capture | Automatically captures important user preferences and patterns |
| Anti-Pattern Detection | Identifies recurring mistakes (e.g., "edit without read") |
| Auto-Promotion | Validated insights are promoted to CLAUDE.md, AGENTS.md, SOUL.md |
| EIDOS Loop | Prediction → outcome → evaluation for continuous learning |
| Domain Chips | Pluggable expertise modules for specialized domains |
| Obsidian Observatory | 465+ auto-generated markdown pages with live queries |
## Installation

### Prerequisites

- Python 3.10+
- pip
- Git

### Windows One-Command Install

```powershell
irm https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.ps1 | iex
```

### Mac/Linux One-Command Install

```bash
curl -fsSL https://raw.githubusercontent.com/vibeforge1111/vibeship-spark-intelligence/main/install.sh | bash
```

### Manual Install

```bash
git clone https://github.com/vibeforge1111/vibeship-spark-intelligence
cd vibeship-spark-intelligence
python -m venv .venv && source .venv/bin/activate   # Mac/Linux
# or: .venv\Scripts\activate                        # Windows
python -m pip install -e ".[services]"   # quotes keep shells like zsh from globbing the extras
```

### Verify Installation

```bash
python -m spark.cli health
python -m spark.cli learnings
python -m spark.cli up
```
## Integration with QwenClaw

### Step 1: Install Spark Intelligence

Run the installation command above.

### Step 2: Configure QwenClaw Session Hook

Add to QwenClaw's session initialization:

```javascript
// In qwenclaw.js or session config
const sparkConfig = {
  enabled: true,
  sessionId: `qwenclaw-${Date.now()}`,
  hooks: {
    preToolUse: true,
    postToolUse: true,
    userPrompt: true,
  },
};
```

### Step 3: Enable Event Capture

With the hooks configured, Spark captures QwenClaw events once the pipeline is running:

```bash
# Start Spark pipeline
python -m spark.cli up

# Start QwenClaw
qwenclaw start
```

### Step 4: Generate Obsidian Observatory (Optional)

```bash
python scripts/generate_observatory.py --force --verbose
```

Vault location: `~/Documents/Obsidian Vault/Spark-Intelligence-Observatory`
## Advisory Authority Levels

Spark provides pre-tool guidance with three authority levels:

| Level | Score | Behavior |
|---|---|---|
| BLOCK | ≥0.95 | Prevents the action entirely |
| WARNING | 0.80–<0.95 | Prominent caution before action |
| NOTE | 0.48–<0.80 | Included in context for awareness |
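As a rough sketch, the thresholds from the table above could be applied like this. The function name `authority_level` is illustrative, not part of the Spark API:

```python
from typing import Optional

def authority_level(score: float) -> Optional[str]:
    """Map an advisory confidence score to an authority level
    using the thresholds from the table above (illustrative)."""
    if score >= 0.95:
        return "BLOCK"    # prevents the action entirely
    if score >= 0.80:
        return "WARNING"  # prominent caution before action
    if score >= 0.48:
        return "NOTE"     # included in context for awareness
    return None           # below min_score: no advisory surfaced
```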
### Examples

**BLOCK:**

```
⚠️ BLOCKED: Spark advisory
This command will delete the production database.
Last 3 executions resulted in data loss.
Confidence: 0.97 | Validated: 12 times
```

**WARNING:**

```
⚠️ WARNING: Spark advisory
You're editing this file without reading it first.
This pattern failed 4 times in the last 24 hours.
Consider: Read the file first, then edit.
```

**NOTE:**

```
ℹ️ NOTE: Spark advisory
User prefers `--no-cache` flag for Docker builds.
Captured from session #4521.
```
## Memory Capture with Intelligence

### Automatic Importance Scoring (0.0–1.0)

| Score | Action | Example Triggers |
|---|---|---|
| ≥0.65 | Auto-save | "remember this", quantitative data |
| 0.55–<0.65 | Suggest | "I prefer", design constraints |
| <0.55 | Ignore | Generic statements, noise |
### Signals That Boost Importance

- Causal language: "because", "leads to" (+0.15 to +0.30)
- Quantitative data: "reduced from 4.2s to 1.6s" (+0.30)
- Technical specificity: real tools, libraries, patterns (+0.15 to +0.30)
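A minimal sketch of how such signal boosts could combine into an importance score. The base score, regexes, and boost values here are illustrative assumptions, not Spark's actual scorer:

```python
import re

# Hypothetical signal detectors; Spark's real patterns are not published here.
CAUSAL = re.compile(r"\b(because|leads to|results in)\b", re.IGNORECASE)
QUANT = re.compile(r"\d+(\.\d+)?\s*(s|ms|%|req/min)")

def importance(text: str, base: float = 0.40) -> float:
    """Combine additive signal boosts into a 0.0-1.0 importance score."""
    score = base
    if "remember" in text.lower():
        score += 0.25   # explicit memory request
    if CAUSAL.search(text):
        score += 0.20   # causal language boost
    if QUANT.search(text):
        score += 0.30   # quantitative data boost
    return min(score, 1.0)
```

With the thresholds from the table above, "Remember: …" crosses the auto-save line, while a generic statement stays below the suggest threshold.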
### Example Captures

```
User: "Remember: always use --no-cache when building Docker images"
→ Spark: Captured (score: 0.82)
→ Promoted to: CLAUDE.md

User: "I prefer TypeScript over JavaScript for large projects"
→ Spark: Captured (score: 0.68)
→ Promoted to: AGENTS.md

User: "The build time reduced from 4.2s to 1.6s after caching"
→ Spark: Captured (score: 0.91)
→ Promoted to: EIDOS pattern
```
## Quality Pipeline

Every observation passes through rigorous gates:

```
Event → Importance Scoring → Meta-Ralph Quality Gate →
Cognitive Storage → Validation Loop → Promotion Decision
```

### Meta-Ralph Quality Scores (0–12)

Each insight is scored on:

- Actionability (can you act on it?)
- Novelty (genuine insight vs. the obvious)
- Reasoning (explicit causal explanation)
- Specificity (context-specific vs. generic)
- Outcome-Linked (validated by results)
### Promotion Criteria

- **Track 1 (Reliability):** reliability ≥80% AND validated ≥5 times
- **Track 2 (Confidence):** confidence ≥95% AND age ≥6 hours AND validated ≥5 times

Contradicted insights lose reliability automatically.
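The two tracks can be expressed as a single predicate. `should_promote` is a hypothetical name for illustration, not a Spark API:

```python
def should_promote(reliability: float, confidence: float,
                   validations: int, age_hours: float) -> bool:
    """Promotion check per the two tracks above: an insight promotes
    if it satisfies either track (sketch, not Spark's implementation)."""
    track1 = reliability >= 0.80 and validations >= 5
    track2 = confidence >= 0.95 and age_hours >= 6 and validations >= 5
    return track1 or track2
```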
## EIDOS Episodic Intelligence

EIDOS extracts structured rules from experience:
| Type | Description | Example |
|---|---|---|
| Heuristics | General rules of thumb | "Always test before deploying" |
| Sharp Edges | Things to watch out for | "API rate limits hit at 100 req/min" |
| Anti-Patterns | What not to do | "Don't edit config without backup" |
| Playbooks | Proven approaches | "Database migration checklist" |
| Policies | Enforced constraints | "Must have tests for core modules" |
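One way such rules might be represented in code, as a sketch only: the `EidosRule` class and its field names are assumptions, not Spark's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EidosRule:
    """Hypothetical record for one extracted EIDOS rule."""
    kind: str          # "heuristic" | "sharp_edge" | "anti_pattern" | "playbook" | "policy"
    statement: str     # the rule itself, e.g. "Always test before deploying"
    reliability: float = 0.0   # fraction of validations that confirmed the rule
    validations: int = 0       # how many times the rule has been checked
```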
## Usage in QwenClaw

### Basic Usage

```bash
# Start Spark pipeline
python -m spark.cli up

# Start QwenClaw (Spark captures automatically)
qwenclaw start

# Send a task
qwenclaw send "Refactor the authentication module"
```

### Check Spark Status

```bash
python -m spark.cli status
python -m spark.cli learnings
```

### View Advisory History

```bash
python -m spark.cli advisories
```

### Promote Insights Manually

```bash
python -m spark.cli promote <insight-id>
```
## Obsidian Observatory

Spark auto-generates 465+ markdown pages with live Dataview queries.

### What's Included

- **Pipeline Health** – 12-stage pipeline detail pages
- **Cognitive Insights** – stored insights with reliability scores
- **EIDOS Episodes** – pattern distillations
- **Advisory Decisions** – pre-tool guidance history
- **Explorer Views** – real-time data exploration
- **Canvas View** – spatial pipeline visualization

### Auto-Sync

The Observatory syncs every 120 seconds while the pipeline is running.
## Measurable Outcomes

### Advisory Source Effectiveness
| Source | What It Provides | Effectiveness |
|---|---|---|
| Cognitive | Validated session insights | ~62% (dominant) |
| Bank | User memory banks | ~10% |
| EIDOS | Pattern distillations | ~5% |
| Baseline | Static rules | ~5% |
| Trigger | Event-specific rules | ~5% |
| Semantic | BM25 + embedding retrieval | ~3% |
### Timeline to Value
| Time | What Happens |
|---|---|
| Hour 1 | Spark starts capturing events |
| Hour 2-4 | Patterns emerge (tool effectiveness, error patterns) |
| Day 1-2 | Insights get promoted to project files |
| Week 1+ | Advisory goes live with pre-tool guidance |
## Integration Examples

### Example 1: Preventing Recurring Errors

```
QwenClaw: About to run: npm install

Spark: ⚠️ WARNING
Last 3 times you ran `npm install` without --legacy-peer-deps,
it failed with ERESOLVE errors.
Suggestion: Use `npm install --legacy-peer-deps`
Reliability: 0.94 | Validated: 8 times
```

### Example 2: Auto-Promoting Best Practices

```
User: "Remember: always run tests before committing"
Spark: Captured (score: 0.78)

→ After 5 successful validations:

Promoted to CLAUDE.md:
"## Testing Policy
Always run tests before committing changes.
Validated: 12 times | Reliability: 0.96"
```

### Example 3: Domain-Specific Expertise

```
Domain Chip: React Development

Spark Advisory:
ℹ️ NOTE
In this project, useEffect dependencies are managed
with eslint-plugin-react-hooks.
Missing dependencies auto-fixed 23 times.
Reliability: 0.89
```
## Configuration

### spark.config.yaml

```yaml
spark:
  enabled: true
  session_id: qwenclaw-${timestamp}

  hooks:
    pre_tool_use: true
    post_tool_use: true
    user_prompt: true

  advisory:
    enabled: true
    min_score: 0.48
    cooldown_seconds: 300

  memory:
    auto_capture: true
    min_importance: 0.55

  observatory:
    enabled: true
    sync_interval_seconds: 120
```
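A small sketch of how a setting like `advisory.cooldown_seconds` could be enforced, so the same advisory is not re-surfaced within the cooldown window. `AdvisoryCooldown` is a hypothetical helper, not part of Spark:

```python
import time
from typing import Optional

class AdvisoryCooldown:
    """Suppress repeats of the same advisory within a cooldown window."""

    def __init__(self, cooldown_seconds: float = 300):
        self.cooldown = cooldown_seconds
        self._last: dict = {}   # advisory id -> last time it was surfaced

    def allow(self, advisory_id: str, now: Optional[float] = None) -> bool:
        """Return True (and record the time) if the advisory may surface."""
        now = time.monotonic() if now is None else now
        last = self._last.get(advisory_id)
        if last is not None and now - last < self.cooldown:
            return False        # still cooling down; do not re-surface
        self._last[advisory_id] = now
        return True
```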
## Best Practices

1. **Let Spark learn naturally.** Just use QwenClaw normally; Spark captures and learns in the background.
2. **Review advisories.** Pay attention to pre-tool warnings; they're based on validated patterns.
3. **Provide explicit feedback.** Tell Spark what to remember:
   - "Remember: always use --force for this legacy package"
   - "I prefer yarn over npm in this project"
4. **Check the Observatory.** Review the Obsidian vault to understand what Spark has learned.
5. **Promote high-value insights.** Manually promote insights that are immediately valuable.
## Skill Metadata

```yaml
name: spark-intelligence
version: 1.0.0
category: automation
description: >-
  Self-evolving AI companion that captures, distills, and delivers
  actionable insights from QwenClaw sessions
author: Vibeship (https://github.com/vibeforge1111/vibeship-spark-intelligence)
license: MIT
tags:
  - learning
  - memory
  - advisory
  - self-improving
  - local-first
  - obsidian
```
## Resources

- Website: https://spark.vibeship.co
- GitHub: https://github.com/vibeforge1111/vibeship-spark-intelligence
- Onboarding: docs/SPARK_ONBOARDING_COMPLETE.md
- Quickstart: docs/QUICKSTART.md
- Obsidian Guide: docs/OBSIDIAN_OBSERVATORY_GUIDE.md
Spark Intelligence transforms QwenClaw from a stateless executor into a learning system! 🧠✨