feat: Add intelligent auto-router and enhanced integrations

- Add intelligent-router.sh hook for automatic agent routing
- Add AUTO-TRIGGER-SUMMARY.md documentation
- Add FINAL-INTEGRATION-SUMMARY.md documentation
- Complete Prometheus integration (6 commands + 4 tools)
- Complete Dexto integration (12 commands + 5 tools)
- Enhance Ralph with access to all agents
- Fix /clawd command (removed disable-model-invocation)
- Update hooks.json to v5 with intelligent routing
- 291 total skills now available
- All 21 commands with automatic routing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: admin
Date: 2026-01-28 00:27:56 +04:00
parent 3b128ba3bd
commit b52318eeae
1724 changed files with 351216 additions and 0 deletions


@@ -0,0 +1,22 @@
# Configuration Examples

This folder contains examples of agents with different configurations. These examples demonstrate how to configure and set up various agents to handle different use cases.

You can plug these configuration files directly into your local setup and try them out to see the power of different AI agents!

## Available Examples

### `linear-task-manager.yml`

A task management agent that integrates with Linear's official MCP server to help you manage issues, projects, and team collaboration through natural language commands. Features include:

- Create, update, and search Linear issues
- Manage project status and tracking
- Add comments and collaborate with team members
- Handle task assignments and priority management

**Setup**: Requires Linear workspace authentication when first connecting.

### Other Examples

- `email_slack.yml` - Email and Slack integration
- `notion.yml` - Notion workspace management
- `ollama.yml` - Local LLM integration
- `website_designer.yml` - Web design assistance
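As the comments in `ollama.yml` show, an example is launched by pointing the `dexto` CLI at a config file; the same pattern works for any file in this folder. A minimal sketch (the actual `dexto` invocation is commented out so the snippet runs even without the CLI installed):

```shell
CONFIG=./linear-task-manager.yml      # any example config in this folder
# dexto --agent "$CONFIG"             # launches the agent (requires the dexto CLI)
echo "would run: dexto --agent $CONFIG"
```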


@@ -0,0 +1,26 @@
# Email to Slack Automation Configuration
# This agent monitors emails and posts summaries to Slack
mcpServers:
  gmail:
    type: sse
    url: "composio-url"
  slack:
    type: stdio
    command: "npx"
    args:
      - -y
      - "@modelcontextprotocol/server-slack"
    env:
      SLACK_BOT_TOKEN: "slack-bot-token"
      SLACK_TEAM_ID: "slack-team-id"

# System prompt - defines the agent's behavior for email processing
systemPrompt: |
  Prompt the user to provide the information needed to answer their question or to identify them on Slack.
  Also let them know that they can directly update the systemPrompt in this YAML file if they prefer.

# LLM configuration
llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY
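The `apiKey: $OPENAI_API_KEY` line references an environment variable rather than embedding the secret in the file. How Dexto resolves such `$VAR` references is an assumption here; `os.path.expandvars` is just a minimal illustration of the idea:

```python
import os

# Hypothetical secret value, set for illustration only
os.environ["OPENAI_API_KEY"] = "sk-example-123"

raw = "apiKey: $OPENAI_API_KEY"       # as written in the YAML
resolved = os.path.expandvars(raw)    # substitute from the environment
print(resolved)                       # apiKey: sk-example-123
```

Keeping secrets in the environment means the config file can be committed and shared safely.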


@@ -0,0 +1,58 @@
# Linear Task Management Agent
# This agent integrates with Linear's MCP server to manage tasks, issues, and projects
# through natural language commands.
systemPrompt: |
  You are a Linear Task Management Agent specialized in helping users manage their Linear workspace efficiently. You have access to Linear's official MCP server that allows you to:

  ## Your Capabilities
  - **Issue Management**: Find, create, update, and manage Linear issues
  - **Project Tracking**: Access and manage Linear projects and their status
  - **Team Collaboration**: View team activity, assign tasks, and track progress
  - **Comment Management**: Add comments to issues and participate in discussions
  - **Status Updates**: Update issue status, priority, and labels
  - **Search & Filter**: Find specific issues, projects, or team members

  ## How You Should Behave
  - Always confirm destructive actions (deleting, major status changes) before proceeding
  - Provide clear summaries when listing multiple issues or projects
  - Use natural language to explain Linear concepts when needed
  - Be proactive in suggesting task organization and workflow improvements
  - When creating issues, ask for essential details if not provided (title, description, priority)
  - Offer to set up logical task relationships (dependencies, sub-tasks) when appropriate

  ## Usage Examples
  - "Create a new issue for fixing the login bug with high priority"
  - "Show me all open issues assigned to me"
  - "Update the API documentation task to in progress"
  - "Find all issues related to the mobile app project"
  - "Add a comment to issue #123 about the testing results"
  - "What's the status of our current sprint?"

mcpServers:
  linear:
    type: stdio
    command: npx
    args:
      - -y
      - mcp-remote
      - https://mcp.linear.app/sse
    connectionMode: strict
    # Note: Linear MCP requires authentication through your Linear workspace.
    # You'll need to authenticate when first connecting.

toolConfirmation:
  mode: auto-approve
  allowedToolsStorage: memory

llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY

storage:
  cache:
    type: in-memory
  database:
    type: sqlite
    path: .dexto/database/linear-task-manager.db
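The `mcp-remote` bridge in this config speaks the Model Context Protocol over stdio and forwards it to Linear's SSE endpoint. MCP messages are JSON-RPC 2.0; a sketch of the `initialize` request an MCP client sends on connect (field values are illustrative assumptions, not Dexto's actual handshake):

```python
import json

# JSON-RPC 2.0 "initialize" request - the first message an MCP client sends
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Each message travels as one line of JSON over the server's stdin
line = json.dumps(request)
print(line)
```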


@@ -0,0 +1,31 @@
# Refer to https://github.com/makenotion/notion-mcp-server for information on how to use the Notion MCP server
# mcpServers:
#   notion:
#     type: sse
#     url: "get-url-from-composio-or-any-other-provider"

# System prompt configuration - defines the agent's behavior and instructions
systemPrompt: |
  You are a helpful Notion AI assistant. Your primary goals are to:
  1. Help users organize and manage their Notion workspace effectively
  2. Assist with creating, editing, and organizing pages and databases
  3. Provide guidance on Notion features and best practices
  4. Help users find and retrieve information from their Notion workspace

  When interacting with users:
  - Ask clarifying questions to understand their specific needs
  - Provide step-by-step instructions when explaining complex tasks
  - Suggest relevant Notion templates or structures when appropriate
  - Explain the reasoning behind your recommendations

  If you need additional information to help the user:
  - Ask for specific details about their Notion workspace
  - Request clarification about their goals or requirements
  - Inquire about their current Notion setup and experience level

  Remember to be concise, clear, and focus on practical solutions.

llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY


@@ -0,0 +1,67 @@
# Describes the MCP servers to use
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - .
  playwright:
    type: stdio
    command: npx
    args:
      - -y
      - "@playwright/mcp@latest"
  # hf:
  #   type: stdio
  #   command: npx
  #   args:
  #     - -y
  #     - "@llmindset/mcp-hfspace"

# System prompt configuration - defines the agent's behavior and instructions
systemPrompt:
  contributors:
    - id: primary
      type: static
      priority: 0
      content: |
        You are a helpful AI assistant with access to tools.
        Use these tools when appropriate to answer user queries.
        You can use multiple tools in sequence to solve complex problems.
        After each tool result, determine if you need more information or can provide a final answer.
    - id: date
      type: dynamic
      priority: 10
      source: date
      enabled: true

# First, start the Ollama server:
#   ollama run gemma3n:e2b
# Then run the following command to start the agent (the same command also serves the web UI):
#   dexto --agent <path_to_ollama.yml>
llm:
  provider: openai-compatible
  model: gemma3n:e2b
  baseURL: http://localhost:11434/v1
  apiKey: $OPENAI_API_KEY  # Ollama ignores the key, but OpenAI-compatible clients require a value
  maxInputTokens: 32768

# Storage configuration - a two-tier architecture: cache (fast, ephemeral) and database (persistent, reliable).
# In-memory cache with a file-based database (good for development with persistence):
# storage:
#   cache:
#     type: in-memory
#   database:
#     type: sqlite
#     path: ./data/dexto.db

## To use Google Gemini, replace the llm section with the configuration below.
## Similar for Anthropic, Groq, etc.
# llm:
#   provider: google
#   model: gemini-2.0-flash
#   apiKey: $GOOGLE_GENERATIVE_AI_API_KEY
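Because the provider is `openai-compatible`, the agent presumably POSTs standard chat-completions requests to `baseURL` plus `/chat/completions`. A sketch of the URL and payload Ollama's OpenAI-compatible endpoint would receive for this config (constructed locally so the snippet runs without a live server; the message content is illustrative):

```python
import json

BASE_URL = "http://localhost:11434/v1"  # baseURL from the config above
url = f"{BASE_URL}/chat/completions"    # standard OpenAI-style route

payload = {
    "model": "gemma3n:e2b",             # model from the config above
    "messages": [
        {"role": "system", "content": "You are a helpful AI assistant with access to tools."},
        {"role": "user", "content": "Hello!"},
    ],
}

print(url)
print(json.dumps(payload)[:72])
```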


@@ -0,0 +1,34 @@
# Describes the MCP servers to use
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - .
  playwright:
    type: stdio
    command: npx
    args:
      - -y
      - "@playwright/mcp@latest"

# System prompt configuration - defines the agent's behavior and instructions.
# You can update the system prompt to change the behavior of the LLM.
systemPrompt: |
  You are a professional website developer. You design beautiful, aesthetic websites.
  Use these tools when appropriate to answer user queries.
  You can use multiple tools in sequence to solve complex problems.
  After each tool result, determine if you need more information or can provide a final answer.
  When building a website, do this in a separate folder to keep it separate from the rest of the code.
  The website should look clean, professional, modern, and elegant.
  It should be visually appealing. Carefully consider the color scheme, font choices, and layout. I like non-white backgrounds.
  It should be responsive and mobile-friendly, and feel like a professional website.
  After you are done building it, open it up in the browser.

# Describes the LLM configuration
llm:
  provider: openai
  model: gpt-5
  apiKey: $OPENAI_API_KEY