feat: Add intelligent auto-router and enhanced integrations

- Add intelligent-router.sh hook for automatic agent routing
- Add AUTO-TRIGGER-SUMMARY.md documentation
- Add FINAL-INTEGRATION-SUMMARY.md documentation
- Complete Prometheus integration (6 commands + 4 tools)
- Complete Dexto integration (12 commands + 5 tools)
- Enhanced Ralph with access to all agents
- Fix /clawd command (removed disable-model-invocation)
- Update hooks.json to v5 with intelligent routing
- 291 total skills now available
- All 21 commands with automatic routing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: admin
Date: 2026-01-28 00:27:56 +04:00
parent 3b128ba3bd
commit b52318eeae
1724 changed files with 351216 additions and 0 deletions


@@ -0,0 +1,89 @@
# Dexto + LangChain Example
This example demonstrates how Dexto's orchestration layer can integrate existing agents from other frameworks (such as LangChain or LangGraph) via the Model Context Protocol (MCP), enabling seamless multi-agent workflows.
## Architecture
```mermaid
graph TD
A[Dexto Orchestrator] --> B[Filesystem Tools]
A --> C[Puppeteer Tools]
A --> D[LangChain Agent]
style A fill:#4f46e5,stroke:#312e81,stroke-width:2px,color:#fff
style B fill:#10b981,stroke:#065f46,stroke-width:1px,color:#fff
style C fill:#f59e0b,stroke:#92400e,stroke-width:1px,color:#fff
style D fill:#8b5cf6,stroke:#5b21b6,stroke-width:1px,color:#fff
```
## How to Think About Multi-Agent Integration
When building multi-agent systems, you often have agents built in different frameworks. Here's how to approach this with Dexto:
1. **Start with what you have**: You may already have agents in LangChain, LangGraph, AutoGen, or other frameworks
2. **Use MCP as the bridge**: Instead of rebuilding agents or writing custom adapters, expose each existing agent as an MCP tool
3. **Let Dexto orchestrate**: Dexto can then coordinate between your existing agents and other tools/subsystems
4. **Build incrementally**: Add more agents and frameworks as needed; MCP keeps each addition straightforward
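Step 2 above, wrapping an existing agent as an MCP tool, boils down to a small adapter around the agent's entry point. Here is a minimal, self-contained sketch of that pattern; the `Agent` interface, `wrapAgentAsTool` helper, and stub agent are hypothetical illustrations only, and a real integration would use the `@modelcontextprotocol/sdk` server classes instead:

```typescript
// Hypothetical minimal interfaces to illustrate the adapter pattern;
// a real integration would register this handler via the MCP SDK.
interface Agent {
  run(input: string): Promise<string>;
}

interface McpToolResult {
  content: { type: 'text'; text: string }[];
}

// Wrap any agent's run() method as an MCP-style tool handler.
function wrapAgentAsTool(agent: Agent) {
  return async ({ message }: { message: string }): Promise<McpToolResult> => {
    try {
      const text = await agent.run(message);
      return { content: [{ type: 'text', text }] };
    } catch (err) {
      const msg = err instanceof Error ? err.message : String(err);
      return { content: [{ type: 'text', text: `Error: ${msg}` }] };
    }
  };
}

// Example: a stub agent standing in for a LangChain agent.
const stubAgent: Agent = { run: async (input) => `echo: ${input}` };
const handler = wrapAgentAsTool(stubAgent);
handler({ message: 'hello' }).then((r) => console.log(r.content[0].text)); // → "echo: hello"
```

Because the adapter only depends on a `run(input) → output` surface, the same wrapper works for agents from any framework.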
## Quick Setup
```bash
# Install dependencies
cd examples/dexto-langchain-integration/langchain-agent
npm install
npm run build
# Set API key
export OPENAI_API_KEY="your_openai_api_key_here"
# Test integration (run from repository root)
cd ../../..
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Analyze the sentiment of this review: 'I absolutely love this product! The quality is amazing and the customer service was outstanding. Best purchase I've made this year.'"
# Note: Agent file paths in the YAML config are resolved relative to the current working directory
```
## What You Can Do
**Dexto orchestrates between:**
- **Filesystem**: Read/write files
- **Puppeteer**: Web browsing and interaction
- **LangChain Agent**: Text summarization, translation, sentiment analysis
**Example workflows:**
```bash
# Text summarization
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Summarize this article: Artificial intelligence has transformed how we work, with tools like ChatGPT and GitHub Copilot becoming essential for developers. These AI assistants help write code, debug issues, and even design entire applications. The impact extends beyond coding - AI is reshaping customer service, content creation, and decision-making processes across industries."
# Translation
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Translate this text to Spanish: The weather is beautiful today and I'm going to the park to enjoy the sunshine."
# Sentiment Analysis
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Analyze the sentiment of this customer review: 'I absolutely love this product! The quality is amazing and the customer service was outstanding. Best purchase I've made this year.'"
# Multi-step: Read file → Summarize → Save
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Read README.md, summarize it, save the summary"
# Complex: Web scrape → Sentiment Analysis → Save
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Search for customer reviews about our product, analyze the sentiment, save as sentiment_report.md"
```
## How It Works
1. **Dexto Orchestrator**: Manages and supervises all subsystems and workflows
2. **LangChain MCP Agent**: Wraps existing LangChain agent as a Dexto subsystem
3. **Configuration**: Registers LangChain alongside filesystem and puppeteer tools
## Extending
**Add agents from other frameworks:**
1. Wrap more agents into an MCP Server
2. Add to Dexto configuration
3. Dexto orchestrates between all agents and subsystems
**Add capabilities to existing agents:**
1. Extend your external agent capabilities
2. Register new tools/methods
3. Dexto accesses via MCP integration
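The "Register new tools/methods" step can be pictured as extending a tool registry. The sketch below is a self-contained illustration of the pattern only; the real server registers tools with `registerTool` from `@modelcontextprotocol/sdk`, and `translate_with_langchain_agent` is a hypothetical name, not a tool this example ships:

```typescript
// A tiny in-memory tool registry illustrating the extension pattern;
// the real server would call server.registerTool(...) from the MCP SDK.
type ToolHandler = (args: Record<string, string>) => Promise<string>;

class ToolRegistry {
  private tools = new Map<string, ToolHandler>();

  register(name: string, handler: ToolHandler): void {
    this.tools.set(name, handler);
  }

  async call(name: string, args: Record<string, string>): Promise<string> {
    const handler = this.tools.get(name);
    if (!handler) throw new Error(`Unknown tool: ${name}`);
    return handler(args);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }
}

// Register the existing tool plus a hypothetical new capability.
const registry = new ToolRegistry();
registry.register('chat_with_langchain_agent', async ({ message }) => `agent saw: ${message}`);
registry.register('translate_with_langchain_agent', async ({ text, target }) =>
  `[${target ?? 'English'}] ${text}`);

registry.call('translate_with_langchain_agent', { text: 'hola', target: 'English' })
  .then(console.log); // → "[English] hola"
```

Once a new tool name is registered on the MCP server, Dexto discovers it through the existing `mcpServers` entry with no orchestrator-side changes.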
This demonstrates how to think about Dexto as your orchestration layer for multi-agent systems: start with your existing agents, use MCP to connect them, and let Dexto handle the coordination.


@@ -0,0 +1,89 @@
# Dexto Agent Configuration with External LangChain Framework Integration
# This demonstrates how to connect a self-contained LangChain agent to Dexto via MCP
# System prompt that explains the agent's capabilities including LangChain integration
systemPrompt:
contributors:
- id: primary
type: static
priority: 0
content: |
You are a Dexto AI agent with access to a complete LangChain agent via MCP.
You can orchestrate tasks across different AI frameworks and tools.
## Your Capabilities
**Core Dexto Tools:**
- File system operations (read, write, list files)
- Web browsing and interaction via Puppeteer
- General AI assistance and task coordination
**LangChain Agent Integration:**
- `chat_with_langchain_agent`: Interact with a complete LangChain agent that has its own internal tools and reasoning capabilities
The LangChain agent can handle:
- Text summarization and content analysis
- Language translation between different languages
- Sentiment analysis and emotion detection
## Usage Examples
**Basic LangChain interaction:**
- "Use the LangChain agent to summarize this article about AI trends"
- "Ask the LangChain agent to translate this text to Spanish"
- "Have the LangChain agent analyze the sentiment of this customer review"
**Multi-framework orchestration:**
- "Read the README.md file, then use the LangChain agent to summarize it"
- "Search the web for news about AI, then have the LangChain agent translate it to Spanish"
- "Use the LangChain agent to analyze sentiment of customer feedback, then save the report"
**Complex workflows:**
- "Use the LangChain agent to summarize this document, then save it as a report"
- "Have the LangChain agent analyze sentiment of this text, then translate the analysis to Spanish"
The LangChain agent handles its own internal reasoning and tool selection, so you can simply send it natural language requests and it will figure out what to do.
- id: date
type: dynamic
priority: 10
source: date
enabled: true
# MCP Server configurations
mcpServers:
# Standard Dexto tools
filesystem:
type: stdio
command: npx
args:
- -y
- "@modelcontextprotocol/server-filesystem"
- .
connectionMode: strict
playwright:
type: stdio
command: npx
args:
- "-y"
- "@playwright/mcp@latest"
connectionMode: lenient
# External LangChain agent integration
langchain:
type: stdio
command: node
args:
- "${{dexto.agent_dir}}/langchain-agent/dist/mcp-server.js"
env:
OPENAI_API_KEY: $OPENAI_API_KEY
timeout: 30000
connectionMode: strict
# LLM configuration for Dexto agent
llm:
provider: openai
model: gpt-5-mini
apiKey: $OPENAI_API_KEY
temperature: 0.7


@@ -0,0 +1,155 @@
#!/usr/bin/env node
/* eslint-env node */
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
interface AgentTools {
summarize: (input: string | { text: string }) => Promise<string>;
translate: (input: string | { text: string; target_language?: string }) => Promise<string>;
analyze: (input: string | { text: string }) => Promise<string>;
}
export class LangChainAgent {
private llm: ChatOpenAI;
private tools: AgentTools;
constructor() {
this.llm = new ChatOpenAI({
model: 'gpt-5-mini',
temperature: 0.7,
});
// Bound handlers, available for direct invocation; run() itself delegates
// capability selection to the LLM prompt rather than calling these directly.
this.tools = {
summarize: this.summarize.bind(this),
translate: this.translate.bind(this),
analyze: this.analyze.bind(this),
};
}
async run(input: string): Promise<string> {
try {
console.error(
`LangChain Agent received: ${input.substring(0, 100)}${input.length > 100 ? '...' : ''}`
);
const prompt = PromptTemplate.fromTemplate(`
You are a helpful AI assistant with three core capabilities:
**Core Tools:**
- summarize: Create concise summaries of text, articles, or documents
- translate: Translate text between different languages
- analyze: Perform sentiment analysis on text to understand emotions and tone
User input: {user_input}
Based on the user's request, determine which tool would be most helpful:
- summarize: For creating summaries of text, articles, or documents
- translate: For translating text between languages
- analyze: For performing sentiment analysis on text to understand emotions and tone
Provide a helpful response that addresses the user's needs.
`);
const chain = prompt.pipe(this.llm);
const result = await chain.invoke({ user_input: input });
const content =
typeof result.content === 'string' ? result.content : String(result.content);
console.error(
`LangChain Agent response: ${content.substring(0, 100)}${content.length > 100 ? '...' : ''}`
);
return content;
} catch (error: any) {
console.error(`LangChain Agent error: ${error.message}`);
return `I encountered an error: ${error.message}`;
}
}
private async summarize(input: string | { text: string }): Promise<string> {
const summaryPrompt = PromptTemplate.fromTemplate(`
Please create a concise summary of the following text:
Text: {text}
Provide a clear, well-structured summary that captures the key points and main ideas.
`);
const chain = summaryPrompt.pipe(this.llm);
const result = await chain.invoke({
text: typeof input === 'string' ? input : input.text,
});
return result.content as string;
}
private async translate(
input: string | { text: string; target_language?: string }
): Promise<string> {
const translatePrompt = PromptTemplate.fromTemplate(`
Please translate the following text:
Text: {text}
Target Language: {target_language}
Provide an accurate translation that maintains the original meaning and tone.
`);
const chain = translatePrompt.pipe(this.llm);
const result = await chain.invoke({
text: typeof input === 'string' ? input : input.text,
target_language:
typeof input === 'string' ? 'English' : input.target_language || 'English',
});
return result.content as string;
}
private async analyze(input: string | { text: string }): Promise<string> {
const analyzePrompt = PromptTemplate.fromTemplate(`
Please perform sentiment analysis on the following text:
Text: {text}
Provide a comprehensive sentiment analysis covering:
1. **Overall Sentiment**: Positive, Negative, or Neutral
2. **Sentiment Score**: Rate from 1-10 (1=very negative, 10=very positive)
3. **Key Emotions**: Identify specific emotions present (e.g., joy, anger, sadness, excitement)
4. **Confidence Level**: How confident are you in this analysis?
5. **Key Phrases**: Highlight specific phrases that influenced the sentiment
6. **Context**: Any contextual factors that might affect interpretation
Be specific and provide clear reasoning for your analysis.
`);
const chain = analyzePrompt.pipe(this.llm);
const result = await chain.invoke({
text: typeof input === 'string' ? input : input.text,
});
return result.content as string;
}
}
// For direct testing
if (import.meta.url === `file://${process.argv[1]}`) {
const agent = new LangChainAgent();
console.log('LangChain Agent Test Mode');
console.log('Type your message (or "quit" to exit):');
process.stdin.setEncoding('utf8');
process.stdin.on('data', async (data) => {
const input = data.toString().trim();
if (input.toLowerCase() === 'quit') {
process.exit(0);
}
try {
const response = await agent.run(input);
console.log('\nAgent Response:', response);
} catch (error: any) {
console.error('Error:', error.message);
}
console.log('\nType your message (or "quit" to exit):');
});
}


@@ -0,0 +1,77 @@
#!/usr/bin/env node
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import { LangChainAgent } from './agent.js';
class LangChainMCPServer {
private server: McpServer;
private agent: LangChainAgent;
constructor() {
this.server = new McpServer({
name: 'langchain-agent',
version: '1.0.0',
});
this.agent = new LangChainAgent();
this.registerTools();
}
private registerTools(): void {
this.server.registerTool(
'chat_with_langchain_agent',
{
description:
'Chat with a helpful LangChain agent that can summarize text, translate languages, and perform sentiment analysis.',
inputSchema: {
// Cannot use zod object here due to type incompatibility with MCP SDK
message: z
.string()
.describe(
'The message to send to the LangChain agent. The agent will use its own reasoning to determine which internal tools to use.'
),
},
},
async ({ message }: { message: string }) => {
try {
console.error(`MCP Server: Forwarding message to LangChain agent`);
const response = await this.agent.run(message);
console.error(`MCP Server: Received response from LangChain agent`);
return {
content: [
{
type: 'text',
text: response,
},
],
};
} catch (error: any) {
console.error(`MCP Server error: ${error.message}`);
return {
content: [
{
type: 'text',
text: `Error communicating with LangChain agent: ${error.message}`,
},
],
};
}
}
);
}
async start(): Promise<void> {
const transport = new StdioServerTransport();
await this.server.connect(transport);
console.error('LangChain Agent MCP Server started and ready for connections');
}
}
// Start the server
const server = new LangChainMCPServer();
server.start().catch(console.error);

File diff suppressed because it is too large


@@ -0,0 +1,33 @@
{
"name": "langchain-agent-example",
"version": "1.0.0",
"description": "Self-contained LangChain agent wrapped in MCP server",
"type": "module",
"main": "dist/mcp-server.js",
"scripts": {
"build": "tsc",
"start": "npm run build && node dist/mcp-server.js",
"agent": "npm run build && node dist/agent.js",
"dev": "tsc --watch & node --watch dist/mcp-server.js"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.2",
"@langchain/openai": "^0.6.7",
"@langchain/core": "^0.3.80",
"langchain": "^0.3.37",
"zod": "^3.22.4"
},
"devDependencies": {
"@types/node": "^20.0.0",
"typescript": "^5.0.0"
},
"keywords": [
"langchain",
"mcp",
"agent",
"ai",
"model-context-protocol"
],
"author": "Dexto Team",
"license": "MIT"
}


@@ -0,0 +1,27 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "node",
"outDir": "./dist",
"rootDir": "./",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"allowSyntheticDefaultImports": true,
"resolveJsonModule": true,
"types": ["node"]
},
"include": [
"*.ts",
"*.js"
],
"exclude": [
"node_modules",
"dist"
]
}