feat: Add intelligent auto-router and enhanced integrations
- Add intelligent-router.sh hook for automatic agent routing
- Add AUTO-TRIGGER-SUMMARY.md documentation
- Add FINAL-INTEGRATION-SUMMARY.md documentation
- Complete Prometheus integration (6 commands + 4 tools)
- Complete Dexto integration (12 commands + 5 tools)
- Enhanced Ralph with access to all agents
- Fix /clawd command (removed disable-model-invocation)
- Update hooks.json to v5 with intelligent routing
- 291 total skills now available
- All 21 commands with automatic routing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
130
dexto/examples/README.md
Normal file
@@ -0,0 +1,130 @@
# Dexto Examples

This directory contains example code and configurations demonstrating how to use Dexto in various contexts.

## Code Examples

### Basic Agent Usage (`basic-agent-example.ts`)

The simplest example of how to use the Dexto Agent SDK. Shows:
- Creating an agent with minimal configuration
- Starting and stopping the agent
- Creating a session
- Using `generate()` for request/response interactions
- Token usage tracking

Run it with:
```bash
npx tsx basic-agent-example.ts
```

### LangChain Integration (`dexto-langchain-integration/`)

Shows how to integrate Dexto with LangChain, useful if you're already using LangChain in your project.

### Agent Manager (`agent-manager-example/`)

Demonstrates using the AgentManager API for managing multiple agents programmatically.

### Agent Delegation (`agent-delegation/`)

Shows a pattern for implementing a multi-agent coordinator/specialist architecture where one agent delegates tasks to specialized agents.

### Demo Server (`resources-demo-server/`)

A simple HTTP server example demonstrating Dexto's resource authorization flow.

## Agent Configuration Examples

See the `/agents/` directory for YAML configuration examples covering different use cases.

## How to Use These Examples

1. **Copy an example** to your project or workspace
2. **Customize** the configuration for your needs
3. **Install dependencies** if the example has a `package.json`
4. **Follow the README** for setup and running instructions

Each example is self-contained and can be run independently.

## Platform Integration Examples

These examples show how to integrate DextoAgent with different messaging platforms. They are **reference implementations** that you can customize and extend for your own use cases.

### Discord Bot (`discord-bot/`)

A complete Discord bot integration using discord.js and the Discord Gateway API.

**Features:**
- Responds to messages in DMs and server channels
- Support for the `!ask` command prefix in channels
- Image attachment processing
- Rate limiting per user
- Persistent per-user conversation sessions
- Tool call notifications
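Per-user rate limiting is platform-agnostic. A minimal sketch of the idea — a fixed-window counter keyed by user ID; the class and method names here are illustrative, not the bot's actual code:

```typescript
// Hypothetical sketch of per-user rate limiting (fixed-window counter).
// Not the bot's actual implementation - see discord-bot/ for that.
class PerUserRateLimiter {
    private windows = new Map<string, { start: number; count: number }>();

    constructor(private maxRequests: number, private windowMs: number) {}

    /** Returns true if the user may make a request right now. */
    allow(userId: string, now: number = Date.now()): boolean {
        const win = this.windows.get(userId);
        if (!win || now - win.start >= this.windowMs) {
            // First request, or the previous window has expired: start fresh
            this.windows.set(userId, { start: now, count: 1 });
            return true;
        }
        if (win.count < this.maxRequests) {
            win.count++;
            return true;
        }
        return false; // over the limit for this window
    }
}
```

A message handler would call `allow(message.author.id)` and drop the message (or reply with a "slow down" notice) when it returns false.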
**Quick Start:**
```bash
cd discord-bot
pnpm install
cp .env.example .env
# Add your DISCORD_BOT_TOKEN to .env
pnpm start
```

**See:** [`discord-bot/README.md`](./discord-bot/README.md) for detailed setup and usage instructions.

### Telegram Bot (`telegram-bot/`)

A complete Telegram bot integration using grammy and the Telegram Bot API.

**Features:**
- Responds to messages in DMs and group chats
- Support for `/ask` command and `/start` menu
- Image attachment processing
- Inline query support (use bot username in any chat)
- Session reset button
- Concurrency control for inline queries
- Persistent per-user conversation sessions
- Tool call notifications
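Concurrency control for inline queries can be as simple as a counting semaphore. A minimal sketch (illustrative names, not the bot's actual code):

```typescript
// Hypothetical sketch of concurrency control (counting semaphore).
// Not the bot's actual implementation - see telegram-bot/ for that.
class Semaphore {
    constructor(private slots: number) {}

    /** Try to claim a slot; returns false when all slots are busy. */
    tryAcquire(): boolean {
        if (this.slots > 0) {
            this.slots--;
            return true;
        }
        return false;
    }

    /** Give the slot back once the inline query has been answered. */
    release(): void {
        this.slots++;
    }
}
```

An inline-query handler would call `tryAcquire()` before invoking the agent, call `release()` in a `finally` block, and answer with a "busy, try again" result when acquisition fails.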
**Quick Start:**
```bash
cd telegram-bot
pnpm install
cp .env.example .env
# Add your TELEGRAM_BOT_TOKEN to .env
pnpm start
```

**See:** [`telegram-bot/README.md`](./telegram-bot/README.md) for detailed setup and usage instructions.

## Building Your Own Integration

To build your own platform integration:

1. **Start with a reference implementation** - Use `discord-bot` or `telegram-bot` as a template
2. **Adapt the bot.ts** - Replace platform-specific code with your target platform's SDK
3. **Keep the pattern** - Receive a pre-initialized DextoAgent and implement platform-specific I/O
4. **Reuse the config** - Use the same agent-config.yml pattern for configuration
5. **Add main.ts** - Create a standalone runner that initializes the agent and starts your bot

The key pattern is:
```typescript
export function startMyBot(agent: DextoAgent) {
    // Platform-specific setup
    // Use agent.run() to process user input
    // Use agent.agentEventBus to listen for events
    // Return your platform's client/connection object
}
```

## Documentation

- [Dexto Documentation](https://dexto.dev)
- [DextoAgent API](https://docs.dexto.dev)
- [Configuration Reference](../agents/examples/README.md)

## License

MIT
121
dexto/examples/agent-delegation/coordinator-agent.yml
Normal file
@@ -0,0 +1,121 @@
# Coordinator Agent - Delegates tasks to specialist agents
agentId: task-coordinator

# Agent Card for A2A Protocol
agentCard:
  name: "Task Coordinator"
  description: "Intelligent coordinator that delegates specialized tasks to expert agents. Orchestrates multi-agent workflows."
  url: "http://localhost:3000"
  version: "1.0.0"
  skills:
    - id: "task-delegation"
      name: "Task Delegation"
      description: "Intelligently route tasks to specialized agents based on their capabilities"
      tags: ["coordination", "delegation", "orchestration"]
      examples:
        - "Delegate data analysis tasks"
        - "Coordinate multi-agent workflows"
    - id: "result-synthesis"
      name: "Result Synthesis"
      description: "Combine results from multiple agents into coherent responses"
      tags: ["synthesis", "aggregation", "coordination"]

# LLM Configuration
llm:
  provider: anthropic
  model: claude-sonnet-4-5-20250929
  apiKey: ${ANTHROPIC_API_KEY}

# Internal Tools - Enable delegation
internalTools:
  - delegate_to_url

# System Prompt
systemPrompt:
  contributors:
    - id: primary
      type: static
      priority: 0
      content: |
        You are a Task Coordinator agent. Your role is to:

        1. Understand user requests and identify when specialized help is needed
        2. Delegate tasks to specialist agents using the delegate_to_url tool
        3. Manage stateful conversations with specialists using sessionId
        4. Synthesize results from specialists into clear responses

        Available Specialist Agents:
        - Data Analyzer (http://localhost:3001):
          * Analyzes data and identifies trends
          * Generates statistical insights
          * Creates comprehensive reports
          * Use for: data analysis, trend identification, statistical insights

        IMPORTANT - Session Management for Multi-Turn Conversations:

        The delegate_to_url tool supports STATEFUL conversations:

        1. FIRST delegation to an agent:
           - Call tool with: {url: "http://localhost:3001", message: "your task"}
           - Tool returns: {success: true, sessionId: "delegation-xxx", response: "..."}
           - REMEMBER this sessionId!

        2. FOLLOW-UP delegations to SAME agent:
           - Call tool with: {url: "http://localhost:3001", message: "follow-up question", sessionId: "delegation-xxx"}
           - Use the SAME sessionId from step 1
           - The agent REMEMBERS the previous conversation

        3. NEW conversation with SAME agent:
           - Don't provide sessionId (or use a new one)
           - Starts fresh conversation

        Example multi-turn delegation:
        ```
        // First delegation
        delegate_to_url({
          url: "http://localhost:3001",
          message: "Analyze Q4 sales data: Revenue $2.5M, Growth 35%"
        })
        → Returns: {sessionId: "delegation-abc123", response: "Analysis..."}

        // Follow-up (remembers previous analysis)
        delegate_to_url({
          url: "http://localhost:3001",
          message: "What was the most important factor you identified?",
          sessionId: "delegation-abc123"  ← SAME sessionId
        })
        → Agent remembers the Q4 analysis and can answer specifically
        ```

        BEST PRACTICE: Track sessionIds for each specialist agent you work with so you can maintain context across multiple user questions.
    - id: date
      type: dynamic
      priority: 10
      source: date
      enabled: true

# Session configuration
sessions:
  sessionTTL: 3600000 # 1 hour
  maxSessions: 100

# Storage
storage:
  cache:
    type: in-memory
  database:
    type: sqlite
  blob:
    type: in-memory

# Tool confirmation
toolConfirmation:
  mode: auto-approve
  timeout: 120000

# Logging
logger:
  level: info
  transports:
    - type: console
      colorize: true
85
dexto/examples/agent-delegation/specialist-agent.yml
Normal file
@@ -0,0 +1,85 @@
# Specialist Agent - Receives delegated tasks and processes them
agentId: data-analyzer-specialist

# Agent Card for A2A Protocol
agentCard:
  name: "Data Analyzer"
  description: "Specialized agent for analyzing data, generating insights, and creating reports. Excellent at statistical analysis and data visualization."
  url: "http://localhost:3001"
  version: "1.0.0"
  skills:
    - id: "data-analysis"
      name: "Data Analysis"
      description: "Analyze datasets, identify trends, and generate statistical insights"
      tags: ["data", "analysis", "statistics", "trends"]
      examples:
        - "Analyze sales trends for Q4"
        - "Find correlations in customer data"
        - "Generate summary statistics"
    - id: "report-generation"
      name: "Report Generation"
      description: "Create comprehensive reports with insights and recommendations"
      tags: ["reporting", "documentation", "insights"]
      examples:
        - "Generate quarterly report"
        - "Summarize key findings"

# LLM Configuration
llm:
  provider: anthropic
  model: claude-sonnet-4-5-20250929
  apiKey: ${ANTHROPIC_API_KEY}

# System Prompt
systemPrompt:
  contributors:
    - id: primary
      type: static
      priority: 0
      content: |
        You are a Data Analyzer specialist agent. Your role is to:

        1. Analyze data and identify patterns/trends
        2. Provide statistical insights
        3. Generate clear, actionable reports

        When you receive a delegation request, focus on:
        - Understanding the data or question thoroughly
        - Providing specific, quantitative insights
        - Being concise but comprehensive

        Always structure your responses with:
        - Summary of findings
        - Key insights (3-5 bullet points)
        - Recommendations
    - id: date
      type: dynamic
      priority: 10
      source: date
      enabled: true

# Session configuration
sessions:
  sessionTTL: 3600000 # 1 hour
  maxSessions: 100

# Storage
storage:
  cache:
    type: in-memory
  database:
    type: sqlite
  blob:
    type: in-memory

# Tool confirmation
toolConfirmation:
  mode: auto-approve
  timeout: 120000

# Logging
logger:
  level: info
  transports:
    - type: console
      colorize: true
185
dexto/examples/agent-delegation/test.sh
Executable file
@@ -0,0 +1,185 @@
#!/bin/bash

# Agent Delegation Test - Validates delegate_to_url internal tool
#
# This test proves:
# 1. Specialist agent starts and exposes A2A JSON-RPC endpoint
# 2. Direct A2A delegation works (send message, get response)
# 3. Multi-turn stateful conversations work (3 turns, same sessionId)
# 4. Agent remembers context across follow-up questions
#
# Files needed:
# - specialist-agent.yml (agent that receives delegated tasks)
# - coordinator-agent.yml (agent with delegate_to_url tool - not used in this test)
# - test.sh (this file)
#
# Usage: cd examples/agent-delegation && ./test.sh
# Requires: ANTHROPIC_API_KEY in .env file at project root

set -e

# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"

# Cleanup function
cleanup() {
    echo ""
    echo "🧹 Cleaning up..."
    if [ -n "$SPECIALIST_PID" ]; then
        kill $SPECIALIST_PID 2>/dev/null || true
        wait $SPECIALIST_PID 2>/dev/null || true
    fi
    rm -f /tmp/turn*.json /tmp/specialist-stateful.log 2>/dev/null || true
}

# Trap cleanup on exit
trap cleanup EXIT INT TERM

# Load env
if [ -f ../../.env ]; then
    export $(grep -v '^#' ../../.env | grep -v '^$' | xargs) 2>/dev/null || true
fi

echo ""
echo "🔄 Testing Stateful Delegation (Conversation Resumption)"
echo "═══════════════════════════════════════════════════════"
echo ""

# Start specialist
echo "📡 Starting Specialist Agent (port 3001)..."
PORT=3001 node ../../packages/cli/dist/index.js --mode server --agent specialist-agent.yml > /tmp/specialist-stateful.log 2>&1 &
SPECIALIST_PID=$!

# Wait for ready
READY=false
for i in {1..30}; do
    if curl -s http://localhost:3001/health > /dev/null 2>&1; then
        echo "✅ Specialist ready!"
        READY=true
        break
    fi
    sleep 1
done

if [ "$READY" = false ]; then
    echo "❌ Failed to start specialist agent"
    cat /tmp/specialist-stateful.log 2>/dev/null || echo "No logs available"
    exit 1
fi

echo ""
echo "🧪 Test: Multi-Turn Conversation via A2A"
echo "───────────────────────────────────────────────────"
echo ""

# Generate unique session ID for this test
SESSION_ID="test-session-$(date +%s)"
echo "📝 Using session ID: $SESSION_ID"
echo ""

# Turn 1: Initial analysis
echo "💬 Turn 1: Ask specialist to analyze data..."
cat > /tmp/turn1.json << EOF
{
  "jsonrpc": "2.0",
  "id": "turn1",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Analyze these Q4 metrics: Revenue \$2.5M (+35%), 1200 customers, 87% retention. What are the top 3 insights?"}],
      "messageId": "msg-1",
      "taskId": "$SESSION_ID",
      "kind": "message"
    },
    "configuration": {"blocking": true}
  }
}
EOF

RESPONSE1=$(curl -s -X POST http://localhost:3001/jsonrpc -H "Content-Type: application/json" -d @/tmp/turn1.json)
if echo "$RESPONSE1" | jq -e '.error' > /dev/null 2>&1; then
    echo "❌ Turn 1 failed:"
    echo "$RESPONSE1" | jq '.'
    exit 1
fi
echo "$RESPONSE1" | jq -r '.result.history[-1].parts[0].text' | head -15
echo ""
echo "✅ Turn 1 completed"
echo ""

# Turn 2: Follow-up question using SAME session
echo "💬 Turn 2: Ask follow-up question (same session)..."
sleep 1
cat > /tmp/turn2.json << EOF
{
  "jsonrpc": "2.0",
  "id": "turn2",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Which of those 3 insights is most important and why?"}],
      "messageId": "msg-2",
      "taskId": "$SESSION_ID",
      "kind": "message"
    },
    "configuration": {"blocking": true}
  }
}
EOF

RESPONSE2=$(curl -s -X POST http://localhost:3001/jsonrpc -H "Content-Type: application/json" -d @/tmp/turn2.json)
if echo "$RESPONSE2" | jq -e '.error' > /dev/null 2>&1; then
    echo "❌ Turn 2 failed:"
    echo "$RESPONSE2" | jq '.'
    exit 1
fi
echo "$RESPONSE2" | jq -r '.result.history[-1].parts[0].text' | head -20
echo ""
echo "✅ Turn 2 completed"
echo ""

# Turn 3: Another follow-up
echo "💬 Turn 3: Ask another follow-up (same session)..."
sleep 1
cat > /tmp/turn3.json << EOF
{
  "jsonrpc": "2.0",
  "id": "turn3",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Based on our discussion, what should be the #1 priority for Q1?"}],
      "messageId": "msg-3",
      "taskId": "$SESSION_ID",
      "kind": "message"
    },
    "configuration": {"blocking": true}
  }
}
EOF

RESPONSE3=$(curl -s -X POST http://localhost:3001/jsonrpc -H "Content-Type: application/json" -d @/tmp/turn3.json)
if echo "$RESPONSE3" | jq -e '.error' > /dev/null 2>&1; then
    echo "❌ Turn 3 failed:"
    echo "$RESPONSE3" | jq '.'
    exit 1
fi
echo "$RESPONSE3" | jq -r '.result.history[-1].parts[0].text' | head -15
echo ""
echo "✅ Turn 3 completed"
echo ""

echo ""
echo "✅ Stateful Conversation Test Complete!"
echo "═══════════════════════════════════════════════════════"
echo ""
echo "Validation:"
echo "  ✅ 3 messages sent to same session"
echo "  ✅ Agent remembered context across turns"
echo "  ✅ Follow-up questions worked without re-stating context"
echo "  ✅ Session ID: $SESSION_ID maintained throughout"
echo ""
13
dexto/examples/agent-manager-example/agents/coding-agent.yml
Normal file
@@ -0,0 +1,13 @@
systemPrompt: |
  You are an expert coding assistant. You help developers write clean,
  efficient code and explain programming concepts clearly.

  When providing code examples:
  - Use clear variable names
  - Add brief comments for complex logic
  - Follow best practices for the language

llm:
  provider: openai
  model: gpt-4o-mini
  apiKey: $OPENAI_API_KEY
20
dexto/examples/agent-manager-example/agents/registry.json
Normal file
@@ -0,0 +1,20 @@
{
  "agents": [
    {
      "id": "coding-agent",
      "name": "Coding Assistant",
      "description": "Expert coding assistant for development tasks",
      "configPath": "./coding-agent.yml",
      "author": "Dexto Team",
      "tags": ["coding", "development"]
    },
    {
      "id": "support-agent",
      "name": "Support Assistant",
      "description": "Friendly customer support agent",
      "configPath": "./support-agent.yml",
      "author": "Dexto Team",
      "tags": ["support", "customer-service"]
    }
  ]
}
@@ -0,0 +1,14 @@
systemPrompt: |
  You are a friendly and helpful customer support agent. Your goal is to
  help users resolve their issues quickly and leave them satisfied.

  Guidelines:
  - Be warm and empathetic
  - Ask clarifying questions when needed
  - Provide clear, step-by-step solutions
  - Always thank users for their patience

llm:
  provider: openai
  model: gpt-4o-mini
  apiKey: $OPENAI_API_KEY
82
dexto/examples/agent-manager-example/main.ts
Normal file
@@ -0,0 +1,82 @@
/**
 * AgentManager Example
 *
 * This example demonstrates how to use AgentManager to:
 * - Load agents from a registry file
 * - List available agents with metadata
 * - Create and use agents by ID
 *
 * Run with: npx tsx examples/agent-manager-example/main.ts
 */
import 'dotenv/config';
import path from 'path';
import { AgentManager } from '@dexto/agent-management';

const registryPath = path.join(import.meta.dirname, 'agents/registry.json');

async function main() {
    console.log('=== AgentManager Example ===\n');

    // Initialize the manager with a registry file
    const manager = new AgentManager(registryPath);
    await manager.loadRegistry();

    // List all available agents
    console.log('Available agents:');
    const agents = manager.listAgents();
    for (const agent of agents) {
        console.log(`  - ${agent.name} (${agent.id})`);
        console.log(`    ${agent.description}`);
        if (agent.tags?.length) {
            console.log(`    Tags: ${agent.tags.join(', ')}`);
        }
        console.log();
    }

    // Check if a specific agent exists
    const agentId = 'coding-agent';
    if (!manager.hasAgent(agentId)) {
        console.error(`Agent '${agentId}' not found in registry`);
        process.exit(1);
    }

    // Load and use the coding agent
    console.log(`Loading '${agentId}'...`);
    const codingAgent = await manager.loadAgent(agentId);
    await codingAgent.start();

    const session = await codingAgent.createSession();

    console.log('\nAsking the coding agent a question...\n');
    const response = await codingAgent.generate(
        'Write a TypeScript function that checks if a string is a palindrome.',
        session.id
    );

    console.log('Response:');
    console.log(response.content);
    console.log(`\n(Used ${response.usage.totalTokens} tokens)`);

    await codingAgent.stop();

    // Demonstrate switching to a different agent
    console.log('\n--- Switching to support agent ---\n');

    const supportAgent = await manager.loadAgent('support-agent');
    await supportAgent.start();

    const supportSession = await supportAgent.createSession();
    const supportResponse = await supportAgent.generate(
        "Hi, I'm having trouble logging into my account. Can you help?",
        supportSession.id
    );

    console.log('Response:');
    console.log(supportResponse.content);

    await supportAgent.stop();

    console.log('\n✅ Done!');
}

main().catch(console.error);
43
dexto/examples/basic-agent-example.ts
Normal file
@@ -0,0 +1,43 @@
/**
 * Basic Dexto Agent SDK Example
 *
 * This example demonstrates the simplest way to use the Dexto Agent SDK
 * to create an AI agent and have a conversation.
 *
 * Run with: npx tsx examples/basic-agent-example.ts
 */
import 'dotenv/config';
import { DextoAgent } from '@dexto/core';

// Create agent with minimal configuration
const agent = new DextoAgent({
    systemPrompt: 'You are a helpful AI assistant.',
    llm: {
        provider: 'openai',
        model: 'gpt-5-mini',
        apiKey: process.env.OPENAI_API_KEY || '',
    },
});

await agent.start();

// Create a session for the conversation
const session = await agent.createSession();

// Use generate() for simple request/response
console.log('Asking a question...\n');
const response = await agent.generate('What is TypeScript and why is it useful?', session.id);
console.log(response.content);
console.log(`\n(Used ${response.usage.totalTokens} tokens)\n`);

// Conversations maintain context within a session
console.log('---\nAsking for a haiku...\n');
const haiku = await agent.generate('Write a haiku about TypeScript', session.id);
console.log(haiku.content);

console.log('\n---\nAsking to make it funnier...\n');
const funnier = await agent.generate('Make it funnier', session.id);
console.log(funnier.content);

await agent.stop();
console.log('\n✅ Done!');
89
dexto/examples/dexto-langchain-integration/README.md
Normal file
@@ -0,0 +1,89 @@
# Dexto + LangChain Example

This example demonstrates how Dexto's orchestration layer can integrate existing agents from other frameworks (like LangChain, LangGraph, etc.) via the Model Context Protocol (MCP), enabling seamless multi-agent workflows.

## Architecture

```mermaid
graph TD
    A[Dexto Orchestrator] --> B[Filesystem Tools]
    A --> C[Puppeteer Tools]
    A --> D[LangChain Agent]

    style A fill:#4f46e5,stroke:#312e81,stroke-width:2px,color:#fff
    style B fill:#10b981,stroke:#065f46,stroke-width:1px,color:#fff
    style C fill:#f59e0b,stroke:#92400e,stroke-width:1px,color:#fff
    style D fill:#8b5cf6,stroke:#5b21b6,stroke-width:1px,color:#fff
```

## How to Think About Multi-Agent Integration

When building multi-agent systems, you often have agents built in different frameworks. Here's how to approach this with Dexto:

1. **Start with what you have**: You may already have agents in LangChain, LangGraph, AutoGen, or other frameworks
2. **Use MCP as the bridge**: Instead of rebuilding or creating custom adapters, wrap your existing agents with MCP as a tool
3. **Let Dexto orchestrate**: Dexto can then coordinate between your existing agents and other tools/subsystems
4. **Build incrementally**: Add more agents and frameworks as needed - MCP makes it straightforward
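In Dexto terms, step 2 boils down to registering the wrapped agent as an MCP server in the agent's YAML config, the same way this example registers its LangChain agent. A minimal sketch — the server name and script path are placeholders for whatever serves your wrapped agent over stdio:

```yaml
# Sketch: register an MCP-wrapped external agent as a Dexto tool server.
# "my-framework-agent" and the script path are placeholders, not real files.
mcpServers:
  my-framework-agent:
    type: stdio
    command: node
    args:
      - "./my-agent/dist/mcp-server.js"
```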
## Quick Setup

```bash
# Install dependencies
cd examples/dexto-langchain-integration/langchain-agent
npm install
npm run build

# Set API key
export OPENAI_API_KEY="your_openai_api_key_here"

# Test integration (run from repository root)
cd ../../..
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Analyze the sentiment of this review: 'I absolutely love this product! The quality is amazing and the customer service was outstanding. Best purchase I've made this year.'"

# Note: Agent file paths in the YAML config are resolved relative to the current working directory
```

## What You Can Do

**Dexto orchestrates between:**
- **Filesystem**: Read/write files
- **Puppeteer**: Web browsing and interaction
- **LangChain Agent**: Text summarization, translation, sentiment analysis

**Example workflows:**
```bash
# Text summarization
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Summarize this article: Artificial intelligence has transformed how we work, with tools like ChatGPT and GitHub Copilot becoming essential for developers. These AI assistants help write code, debug issues, and even design entire applications. The impact extends beyond coding - AI is reshaping customer service, content creation, and decision-making processes across industries."

# Translation
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Translate this text to Spanish: The weather is beautiful today and I'm going to the park to enjoy the sunshine."

# Sentiment analysis
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Analyze the sentiment of this customer review: 'I absolutely love this product! The quality is amazing and the customer service was outstanding. Best purchase I've made this year.'"

# Multi-step: Read file → Summarize → Save
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Read README.md, summarize it, save the summary"

# Complex: Web scrape → Sentiment analysis → Save
dexto --agent ./examples/dexto-langchain-integration/dexto-agent-with-langchain.yml "Search for customer reviews about our product, analyze the sentiment, save as sentiment_report.md"
```

## How It Works

1. **Dexto Orchestrator**: Manages and supervises all subsystems and workflows
2. **LangChain MCP Agent**: Wraps the existing LangChain agent as a Dexto subsystem
3. **Configuration**: Registers LangChain alongside the filesystem and puppeteer tools

## Extending

**Add agents from other frameworks:**
1. Wrap more agents into an MCP server
2. Add them to the Dexto configuration
3. Dexto orchestrates between all agents and subsystems

**Add capabilities to existing agents:**
1. Extend your external agent's capabilities
2. Register new tools/methods
3. Dexto accesses them via the MCP integration

This demonstrates how to think about Dexto as your orchestration layer for multi-agent systems: start with your existing agents, use MCP to connect them, and let Dexto handle the coordination.
@@ -0,0 +1,89 @@
# Dexto Agent Configuration with External LangChain Framework Integration
# This demonstrates how to connect a self-contained LangChain agent to Dexto via MCP

# System prompt that explains the agent's capabilities including LangChain integration
systemPrompt:
  contributors:
    - id: primary
      type: static
      priority: 0
      content: |
        You are a Dexto AI agent with access to a complete LangChain agent via MCP.
        You can orchestrate tasks across different AI frameworks and tools.

        ## Your Capabilities

        **Core Dexto Tools:**
        - File system operations (read, write, list files)
        - Web browsing and interaction via Puppeteer
        - General AI assistance and task coordination

        **LangChain Agent Integration:**
        - `chat_with_langchain_agent`: Interact with a complete LangChain agent that has its own internal tools and reasoning capabilities

        The LangChain agent can handle:
        - Text summarization and content analysis
        - Language translation between different languages
        - Sentiment analysis and emotion detection

        ## Usage Examples

        **Basic LangChain interaction:**
        - "Use the LangChain agent to summarize this article about AI trends"
        - "Ask the LangChain agent to translate this text to Spanish"
        - "Have the LangChain agent analyze the sentiment of this customer review"

        **Multi-framework orchestration:**
        - "Read the README.md file, then use the LangChain agent to summarize it"
        - "Search the web for news about AI, then have the LangChain agent translate it to Spanish"
        - "Use the LangChain agent to analyze sentiment of customer feedback, then save the report"

        **Complex workflows:**
        - "Use the LangChain agent to summarize this document, then save it as a report"
        - "Have the LangChain agent analyze sentiment of this text, then translate the analysis to Spanish"

        The LangChain agent handles its own internal reasoning and tool selection, so you can simply send it natural language requests and it will figure out what to do.

    - id: date
      type: dynamic
      priority: 10
      source: date
      enabled: true

# MCP Server configurations
mcpServers:
  # Standard Dexto tools
  filesystem:
    type: stdio
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - .
    connectionMode: strict

  playwright:
    type: stdio
    command: npx
    args:
      - "-y"
      - "@playwright/mcp@latest"
    connectionMode: lenient

  # External LangChain agent integration
  langchain:
    type: stdio
    command: node
    args:
      - "${{dexto.agent_dir}}/langchain-agent/dist/mcp-server.js"
    env:
      OPENAI_API_KEY: $OPENAI_API_KEY
    timeout: 30000
    connectionMode: strict

# LLM configuration for Dexto agent
llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY
  temperature: 0.7
@@ -0,0 +1,155 @@
#!/usr/bin/env node
/* eslint-env node */

import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';

interface AgentTools {
    summarize: (input: string | { text: string }) => Promise<string>;
    translate: (input: string | { text: string; target_language?: string }) => Promise<string>;
    analyze: (input: string | { text: string }) => Promise<string>;
}

export class LangChainAgent {
    private llm: ChatOpenAI;
    private tools: AgentTools;

    constructor() {
        this.llm = new ChatOpenAI({
            model: 'gpt-5-mini',
            temperature: 0.7,
        });

        this.tools = {
            summarize: this.summarize.bind(this),
            translate: this.translate.bind(this),
            analyze: this.analyze.bind(this),
        };
    }

    async run(input: string): Promise<string> {
        try {
            console.error(
                `LangChain Agent received: ${input.substring(0, 100)}${input.length > 100 ? '...' : ''}`
            );

            const prompt = PromptTemplate.fromTemplate(`
You are a helpful AI assistant with three core capabilities:

**Core Tools:**
- summarize: Create concise summaries of text, articles, or documents
- translate: Translate text between different languages
- analyze: Perform sentiment analysis on text to understand emotions and tone

User input: {user_input}

Based on the user's request, determine which tool would be most helpful:
- summarize: For creating summaries of text, articles, or documents
- translate: For translating text between languages
- analyze: For performing sentiment analysis on text to understand emotions and tone

Provide a helpful response that addresses the user's needs.
`);

            const chain = prompt.pipe(this.llm);
            const result = await chain.invoke({ user_input: input });

            const content =
                typeof result.content === 'string' ? result.content : String(result.content);
            console.error(
                `LangChain Agent response: ${content.substring(0, 100)}${content.length > 100 ? '...' : ''}`
            );

            return content;
        } catch (error: any) {
            console.error(`LangChain Agent error: ${error.message}`);
            return `I encountered an error: ${error.message}`;
        }
    }

    private async summarize(input: string | { text: string }): Promise<string> {
        const summaryPrompt = PromptTemplate.fromTemplate(`
Please create a concise summary of the following text:

Text: {text}

Provide a clear, well-structured summary that captures the key points and main ideas.
`);

        const chain = summaryPrompt.pipe(this.llm);
        const result = await chain.invoke({
            text: typeof input === 'string' ? input : input.text,
        });
        return result.content as string;
    }

    private async translate(
        input: string | { text: string; target_language?: string }
    ): Promise<string> {
        const translatePrompt = PromptTemplate.fromTemplate(`
Please translate the following text:

Text: {text}
Target Language: {target_language}

Provide an accurate translation that maintains the original meaning and tone.
`);

        const chain = translatePrompt.pipe(this.llm);
        const result = await chain.invoke({
            text: typeof input === 'string' ? input : input.text,
            target_language:
                typeof input === 'string' ? 'English' : input.target_language || 'English',
        });
        return result.content as string;
    }

    private async analyze(input: string | { text: string }): Promise<string> {
        const analyzePrompt = PromptTemplate.fromTemplate(`
Please perform sentiment analysis on the following text:

Text: {text}

Provide a comprehensive sentiment analysis covering:
1. **Overall Sentiment**: Positive, Negative, or Neutral
2. **Sentiment Score**: Rate from 1-10 (1=very negative, 10=very positive)
3. **Key Emotions**: Identify specific emotions present (e.g., joy, anger, sadness, excitement)
4. **Confidence Level**: How confident are you in this analysis?
5. **Key Phrases**: Highlight specific phrases that influenced the sentiment
6. **Context**: Any contextual factors that might affect interpretation

Be specific and provide clear reasoning for your analysis.
`);

        const chain = analyzePrompt.pipe(this.llm);
        const result = await chain.invoke({
            text: typeof input === 'string' ? input : input.text,
        });
        return result.content as string;
    }
}

// For direct testing
if (import.meta.url === `file://${process.argv[1]}`) {
    const agent = new LangChainAgent();

    console.log('LangChain Agent Test Mode');
    console.log('Type your message (or "quit" to exit):');

    process.stdin.setEncoding('utf8');
    process.stdin.on('data', async (data) => {
        const input = data.toString().trim();
        if (input.toLowerCase() === 'quit') {
            process.exit(0);
        }

        try {
            const response = await agent.run(input);
            console.log('\nAgent Response:', response);
        } catch (error: any) {
            console.error('Error:', error.message);
        }

        console.log('\nType your message (or "quit" to exit):');
    });
}
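Each tool method above normalizes its `string | { text: string }` input with the same inline ternary. A minimal sketch of how that pattern could be factored into shared helpers (the helper names are hypothetical, not part of the example code):

```typescript
// Hypothetical helpers mirroring the normalization pattern used by
// summarize/translate/analyze: accept either a bare string or an object
// carrying the text (plus optional extras).
type TextInput = string | { text: string; target_language?: string };

function normalizeText(input: TextInput): string {
    return typeof input === 'string' ? input : input.text;
}

function normalizeTargetLanguage(input: TextInput): string {
    // Falls back to English, matching the translate() default above.
    return typeof input === 'string' ? 'English' : input.target_language || 'English';
}

console.log(normalizeText('hello'));          // "hello"
console.log(normalizeText({ text: 'hola' })); // "hola"
```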
@@ -0,0 +1,77 @@
#!/usr/bin/env node

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import { LangChainAgent } from './agent.js';

class LangChainMCPServer {
    private server: McpServer;
    private agent: LangChainAgent;

    constructor() {
        this.server = new McpServer({
            name: 'langchain-agent',
            version: '1.0.0',
        });

        this.agent = new LangChainAgent();
        this.registerTools();
    }

    private registerTools(): void {
        this.server.registerTool(
            'chat_with_langchain_agent',
            {
                description:
                    'Chat with a helpful LangChain agent that can summarize text, translate languages, and perform sentiment analysis.',
                inputSchema: {
                    // Cannot use zod object here due to type incompatibility with MCP SDK
                    message: z
                        .string()
                        .describe(
                            'The message to send to the LangChain agent. The agent will use its own reasoning to determine which internal tools to use.'
                        ),
                },
            },
            async ({ message }: { message: string }) => {
                try {
                    console.error(`MCP Server: Forwarding message to LangChain agent`);

                    const response = await this.agent.run(message);

                    console.error(`MCP Server: Received response from LangChain agent`);

                    return {
                        content: [
                            {
                                type: 'text',
                                text: response,
                            },
                        ],
                    };
                } catch (error: any) {
                    console.error(`MCP Server error: ${error.message}`);
                    return {
                        content: [
                            {
                                type: 'text',
                                text: `Error communicating with LangChain agent: ${error.message}`,
                            },
                        ],
                    };
                }
            }
        );
    }

    async start(): Promise<void> {
        const transport = new StdioServerTransport();
        await this.server.connect(transport);
        console.error('LangChain Agent MCP Server started and ready for connections');
    }
}

// Start the server
const server = new LangChainMCPServer();
server.start().catch(console.error);
1677  dexto/examples/dexto-langchain-integration/langchain-agent/package-lock.json  (generated)  Normal file
File diff suppressed because it is too large.
@@ -0,0 +1,33 @@
{
    "name": "langchain-agent-example",
    "version": "1.0.0",
    "description": "Self-contained LangChain agent wrapped in MCP server",
    "type": "module",
    "main": "dist/mcp-server.js",
    "scripts": {
        "build": "tsc",
        "start": "npm run build && node dist/mcp-server.js",
        "agent": "npm run build && node dist/agent.js",
        "dev": "tsc --watch & node --watch dist/mcp-server.js"
    },
    "dependencies": {
        "@modelcontextprotocol/sdk": "^1.25.2",
        "@langchain/openai": "^0.6.7",
        "@langchain/core": "^0.3.80",
        "langchain": "^0.3.37",
        "zod": "^3.22.4"
    },
    "devDependencies": {
        "@types/node": "^20.0.0",
        "typescript": "^5.0.0"
    },
    "keywords": [
        "langchain",
        "mcp",
        "agent",
        "ai",
        "model-context-protocol"
    ],
    "author": "Dexto Team",
    "license": "MIT"
}
@@ -0,0 +1,27 @@
{
    "compilerOptions": {
        "target": "ES2022",
        "module": "ESNext",
        "moduleResolution": "node",
        "outDir": "./dist",
        "rootDir": "./",
        "strict": true,
        "esModuleInterop": true,
        "skipLibCheck": true,
        "forceConsistentCasingInFileNames": true,
        "declaration": true,
        "declarationMap": true,
        "sourceMap": true,
        "allowSyntheticDefaultImports": true,
        "resolveJsonModule": true,
        "types": ["node"]
    },
    "include": [
        "*.ts",
        "*.js"
    ],
    "exclude": [
        "node_modules",
        "dist"
    ]
}
19  dexto/examples/discord-bot/.env.example  Normal file
@@ -0,0 +1,19 @@
# Discord Bot Token
# Get this from the Discord Developer Portal: https://discord.com/developers/applications
DISCORD_BOT_TOKEN=your_discord_bot_token_here

# LLM API Key
# Required: Set this to your OpenAI API key
# Get one at: https://platform.openai.com/account/api-keys
OPENAI_API_KEY=your_openai_api_key_here

# Alternative LLM providers (uncomment one and use it in agent-config.yml)
# ANTHROPIC_API_KEY=your_anthropic_api_key_here
# GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key_here

# Rate limiting settings (optional)
# Enable/disable rate limiting per user (default: true)
DISCORD_RATE_LIMIT_ENABLED=true

# Cooldown in seconds between messages from the same user (default: 5)
DISCORD_RATE_LIMIT_SECONDS=5
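The bot parses these rate-limit variables at startup and falls back to safe defaults when they are missing or malformed. The same logic as a standalone sketch:

```typescript
// Defensive parsing of the rate-limit env vars, mirroring the defaults
// documented above (enabled: true, cooldown: 5 seconds).
function parseRateLimit(env: Record<string, string | undefined>): {
    enabled: boolean;
    cooldownSeconds: number;
} {
    const enabled = env.DISCORD_RATE_LIMIT_ENABLED?.toLowerCase() !== 'false'; // default-on
    let cooldownSeconds = Number(env.DISCORD_RATE_LIMIT_SECONDS ?? 5);
    if (Number.isNaN(cooldownSeconds) || cooldownSeconds < 0) {
        cooldownSeconds = 5; // fall back to a safe value
    }
    return { enabled, cooldownSeconds };
}

console.log(parseRateLimit({}));                                    // { enabled: true, cooldownSeconds: 5 }
console.log(parseRateLimit({ DISCORD_RATE_LIMIT_SECONDS: 'abc' })); // malformed value falls back to 5
```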
249  dexto/examples/discord-bot/README.md  Normal file
@@ -0,0 +1,249 @@
# Discord Bot Example

This is a **reference implementation** showing how to integrate DextoAgent with Discord using discord.js. It demonstrates:
- Connecting to Discord's WebSocket API
- Processing messages and commands
- Handling image attachments
- Managing per-user conversation sessions
- Integrating tool calls with Discord messages

## ⚠️ Important: This is a Reference Implementation

This example is provided to show how to build Discord integrations with Dexto. While it works, it's not a production-ready bot and may lack:
- Advanced error recovery and retry logic
- Comprehensive logging and monitoring
- Scalability features for large deployments
- Advanced permission management

Use this as a foundation to build your own customized Discord bot!

## Quick Start

### 1. Get Your Discord Bot Token

1. Go to the [Discord Developer Portal](https://discord.com/developers/applications)
2. Click "New Application" and give it a name
3. In the sidebar, navigate to **Bot** → click **Add Bot**
4. Under the TOKEN section, click **Copy** (or **Reset Token** if you need a new one)
5. Save this token - you'll need it in the next step

### 2. Set Up Your Environment Variables

Copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

Edit `.env` and add:

1. **Your Discord bot token** (required):
   ```
   DISCORD_BOT_TOKEN=your_token_here
   ```

2. **Your LLM API key** (required):

   For OpenAI (default):
   ```
   OPENAI_API_KEY=your_openai_api_key_here
   ```

   Or use a different provider and update `agent-config.yml`:
   ```
   # ANTHROPIC_API_KEY=your_key_here
   # GOOGLE_GENERATIVE_AI_API_KEY=your_key_here
   ```

**Get API keys:**
- **OpenAI**: https://platform.openai.com/account/api-keys
- **Anthropic**: https://console.anthropic.com/account/keys
- **Google**: https://ai.google.dev

### 3. Invite the Bot to Your Server

1. In the Developer Portal, go to **OAuth2** → **URL Generator**
2. Select scopes: `bot`
3. Select permissions: `Send Messages`, `Read Messages`, `Read Message History`, `Attach Files`
4. Copy the generated URL and visit it to invite your bot to your server

### 4. Install Dependencies

Install the required dependencies:

```bash
pnpm install
```

### 5. Run the Bot

Start the bot:

```bash
pnpm start
```

You should see:
```
🚀 Initializing Discord bot...
Discord bot logged in as YourBotName#1234
✅ Discord bot is running!
```

## Usage

### In DMs
Simply send a message to the bot - it will respond using the configured LLM.

### In Server Channels
Use the `!ask` prefix:
```
!ask What is the capital of France?
```

The bot will reply with the agent's response, splitting long messages to respect Discord's 2000-character limit.

### Image Support
Send an image attachment with or without text, and the bot will process it using the agent's vision capabilities.

### Audio Support
Send audio files (MP3, WAV, OGG, etc.), and the bot will:
- Transcribe the audio (if the model supports speech recognition)
- Analyze the audio content
- Use audio as context for responses

Simply attach an audio file to your message and the bot will process it using the agent's multimodal capabilities.
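The bot decides between the audio and image paths by MIME type, falling back to the file extension when Discord's CDN reports a generic content type. The helper from `bot.ts`:

```typescript
// Extension-based MIME fallback, used when the server reports a generic
// application/octet-stream content type for an attachment.
function getMimeTypeFromPath(filePath: string): string {
    const ext = filePath.split('.').pop()?.toLowerCase() || '';
    const mimeTypes: Record<string, string> = {
        jpg: 'image/jpeg', jpeg: 'image/jpeg', png: 'image/png',
        gif: 'image/gif', webp: 'image/webp',
        ogg: 'audio/ogg', mp3: 'audio/mpeg', wav: 'audio/wav', m4a: 'audio/mp4',
    };
    return mimeTypes[ext] || 'application/octet-stream';
}

console.log(getMimeTypeFromPath('voice-note.mp3')); // audio/mpeg → audio handling
console.log(getMimeTypeFromPath('photo.PNG'));      // image/png → image handling
```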
### Reset Conversation
To start a fresh conversation session, DM the bot with:
```
/reset
```

## Configuration

### Switching LLM Providers

The bot comes configured with OpenAI by default. To use a different provider:

1. **Update `agent-config.yml`** - change the `llm` section:

   ```yaml
   # For Anthropic Claude:
   llm:
     provider: anthropic
     model: claude-sonnet-4-5-20250929
     apiKey: $ANTHROPIC_API_KEY

   # For Google Gemini:
   llm:
     provider: google
     model: gemini-2.0-flash
     apiKey: $GOOGLE_GENERATIVE_AI_API_KEY
   ```

2. **Set the API key in `.env`**:
   ```
   ANTHROPIC_API_KEY=your_key_here
   # or
   GOOGLE_GENERATIVE_AI_API_KEY=your_key_here
   ```

### Environment Variables

Create a `.env` file with:

- **`DISCORD_BOT_TOKEN`** (Required): Your bot's authentication token
- **`OPENAI_API_KEY`** (Required for OpenAI): Your OpenAI API key
- **`ANTHROPIC_API_KEY`** (Optional): For using Claude models
- **`GOOGLE_GENERATIVE_AI_API_KEY`** (Optional): For using Gemini models
- **`DISCORD_RATE_LIMIT_ENABLED`** (Optional): Enable/disable rate limiting (default: true)
- **`DISCORD_RATE_LIMIT_SECONDS`** (Optional): Cooldown between messages per user (default: 5)

## Features

### Rate Limiting
By default, the bot enforces a 5-second cooldown per user to prevent spam. Adjust or disable it via the environment variables above.
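The cooldown is an in-memory map from user id to the timestamp when that user may post again. A minimal sketch of the check (hypothetical helper; the real bot inlines this logic and uses `Date.now()`, here the clock is injected for testability):

```typescript
// Per-user cooldown: a user may post again once `now` passes their deadline.
const userCooldowns = new Map<string, number>();

function checkCooldown(
    userId: string,
    cooldownSeconds: number,
    now: number
): { allowed: boolean; secondsLeft: number } {
    const cooldownEnd = userCooldowns.get(userId) || 0;
    if (now < cooldownEnd) {
        return { allowed: false, secondsLeft: (cooldownEnd - now) / 1000 };
    }
    // Accept the message and start a fresh cooldown window.
    userCooldowns.set(userId, now + cooldownSeconds * 1000);
    return { allowed: true, secondsLeft: 0 };
}

console.log(checkCooldown('user1', 5, 0).allowed);    // true  (first message)
console.log(checkCooldown('user1', 5, 2000).allowed); // false (2s into a 5s window)
console.log(checkCooldown('user1', 5, 6000).allowed); // true  (window expired)
```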
### Tool Notifications
When the LLM calls a tool (e.g., making an API call), the bot sends a notification message so users can see what's happening:
```
🔧 Calling tool get_weather with args: {...}
```

### Session Management
Each Discord user gets their own persistent conversation session during the bot's lifetime. Messages from different users don't interfere with each other.
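Sessions are keyed by Discord user id; the bot derives a stable session id with a helper like the one in `bot.ts`:

```typescript
// Each Discord user maps to a dedicated Dexto session id,
// so concurrent users never share conversation state.
function getDiscordSessionId(userId: string): string {
    return `discord-${userId}`;
}

console.log(getDiscordSessionId('123456789')); // "discord-123456789"
```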
### Large Responses
Responses longer than Discord's 2000-character limit are automatically split into multiple messages.
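Discord rejects messages over 2,000 characters, so long replies have to be sliced before sending. A minimal sketch of such a splitter (hypothetical helper; the actual implementation in `bot.ts` may differ):

```typescript
const DISCORD_MESSAGE_LIMIT = 2000;

// Split `text` into chunks of at most `limit` characters, preferring to
// break at newlines so paragraphs stay readable.
function splitMessage(text: string, limit: number = DISCORD_MESSAGE_LIMIT): string[] {
    const chunks: string[] = [];
    let remaining = text;
    while (remaining.length > limit) {
        // Look for the last newline inside the window; fall back to a hard cut.
        let cut = remaining.lastIndexOf('\n', limit);
        if (cut <= 0) cut = limit;
        chunks.push(remaining.slice(0, cut));
        remaining = remaining.slice(cut).replace(/^\n/, '');
    }
    if (remaining.length > 0) chunks.push(remaining);
    return chunks;
}

console.log(splitMessage('a'.repeat(4500)).length); // 3 chunks: 2000 + 2000 + 500
```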
## Limitations

- **No persistence across restarts**: Sessions are lost when the bot restarts. For persistent sessions, implement a database layer.
- **Simple message handling**: Only responds to text and images. Doesn't support all Discord features like reactions, threads, etc.
- **Per-deployment limits**: The bot runs as a single instance. For horizontal scaling, implement clustering.

## Architecture

```
Discord Message
      ↓
startDiscordBot() wires up event handlers
      ↓
agent.generate() processes the message
      ↓
Response sent back to Discord
      ↓
agentEventBus emits events (tool calls, etc.)
      ↓
Tool notifications sent to channel
```

## Troubleshooting

### Bot doesn't respond to messages
- Check that the bot has permission to send messages in the channel
- Ensure `DISCORD_BOT_TOKEN` is correct in `.env`
- Verify the bot has `Message Content Intent` enabled in the Developer Portal

### "DISCORD_BOT_TOKEN is not set"
- Check that the `.env` file exists in the example directory
- Verify the token is correctly copied from the Developer Portal

### Rate limiting errors
- Check the `DISCORD_RATE_LIMIT_SECONDS` setting
- Set `DISCORD_RATE_LIMIT_ENABLED=false` to disable rate limiting

### Image processing fails
- Ensure attachments are under 5MB
- Check network connectivity for downloading attachments

## Next Steps

To customize this bot:

1. **Modify `agent-config.yml`**:
   - Change the LLM provider/model
   - Add MCP servers for additional capabilities
   - Customize the system prompt

2. **Extend `bot.ts`**:
   - Add new command handlers
   - Implement additional Discord features
   - Add logging/monitoring

3. **Deploy**:
   - Run on a server/VPS that stays online 24/7
   - Use a process manager like PM2 to auto-restart on crashes
   - Consider hosting on platforms like Railway, Heroku, or AWS

## Documentation

- [Discord.js Documentation](https://discord.js.org/)
- [Discord Developer Portal](https://discord.com/developers/applications)
- [Dexto Documentation](https://dexto.dev)
- [Dexto Agent API](https://docs.dexto.dev)

## License

MIT
78  dexto/examples/discord-bot/agent-config.yml  Normal file
@@ -0,0 +1,78 @@
# Discord Bot Agent Configuration
# This agent is optimized for Discord bot interactions.
# For more configuration options, see: https://docs.dexto.dev/guides/configuring-dexto

# LLM Configuration
# The bot uses this LLM provider and model for all interactions
llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY

# System Prompt
# Defines the bot's personality, capabilities, and behavior
# This prompt is customized for Discord-specific constraints (2000 char message limit)
systemPrompt: |
  You are a helpful and friendly Discord bot powered by Dexto. You assist users with a wide range of tasks
  including answering questions, providing information, and helping with coding, writing, and analysis.

  ## Your Capabilities
  - File System Access: Read and explore files in your working directory
  - Web Browsing: Visit websites and extract information (when configured)
  - Code Analysis: Help with programming, debugging, and code review
  - Information Retrieval: Answer questions and provide explanations
  - Creative Tasks: Writing, brainstorming, and content generation

  ## Response Guidelines
  - Keep responses concise and Discord-friendly (under 2000 characters where possible)
  - Use plain text formatting (avoid markdown syntax like ###, ---, etc.)
  - For code, use simple indentation without backtick formatting
  - For emphasis, use CAPS or simple punctuation like asterisks *like this*
  - Break long responses into multiple messages if needed
  - Be helpful, respectful, and inclusive
  - If you can't do something, explain why clearly

  ## Usage in Discord
  - Direct Messages: Respond to all messages sent to you
  - Server Channels: Respond to messages using the !ask prefix (e.g., !ask what is TypeScript?)
  - Attachments: You can process images attached to messages

  Always aim to be helpful, accurate, and concise in your responses using plain text format.

# MCP Servers Configuration
# These provide the tools and capabilities your bot can use
mcpServers:
  # Add exa server for web search capabilities
  exa:
    type: http
    url: https://mcp.exa.ai/mcp

# Tool Confirmation Configuration
# Discord bots auto-approve tool calls, so we disable confirmation
toolConfirmation:
  mode: auto-approve
  allowedToolsStorage: memory

# Storage Configuration
# Data storage for sessions, memories, and caching
storage:
  cache:
    type: in-memory
  database:
    type: sqlite
    # Database file will be created in ~/.dexto/agents/discord-bot-agent/
  blob:
    type: local
    maxBlobSize: 52428800 # 50MB per attachment
    maxTotalSize: 1073741824 # 1GB total
    cleanupAfterDays: 30

# Memory Configuration - enables the bot to remember context
memories:
  enabled: true
  priority: 40
  limit: 10
  includeTags: true

# Optional: Greeting message shown when bot starts
greeting: "Hi! I'm a Discord bot powered by Dexto. Type `!ask your question` in channels or just message me directly!"
319  dexto/examples/discord-bot/bot.ts  Normal file
@@ -0,0 +1,319 @@
import dotenv from 'dotenv';
import { Client, GatewayIntentBits, Partials } from 'discord.js';
import https from 'https';
import http from 'http'; // ADDED for http support
import { DextoAgent, logger } from '@dexto/core';

// Load environment variables
dotenv.config();
const token = process.env.DISCORD_BOT_TOKEN;

// User-based cooldown system for Discord interactions
const userCooldowns = new Map<string, number>();
const RATE_LIMIT_ENABLED = process.env.DISCORD_RATE_LIMIT_ENABLED?.toLowerCase() !== 'false'; // default-on
let COOLDOWN_SECONDS = Number(process.env.DISCORD_RATE_LIMIT_SECONDS ?? 5);

if (Number.isNaN(COOLDOWN_SECONDS) || COOLDOWN_SECONDS < 0) {
    console.error(
        'DISCORD_RATE_LIMIT_SECONDS must be a non-negative number. Defaulting to 5 seconds.'
    );
    COOLDOWN_SECONDS = 5; // Default to a safe value
}

// Helper to detect MIME type from file extension
function getMimeTypeFromPath(filePath: string): string {
    const ext = filePath.split('.').pop()?.toLowerCase() || '';
    const mimeTypes: Record<string, string> = {
        jpg: 'image/jpeg',
        jpeg: 'image/jpeg',
        png: 'image/png',
        gif: 'image/gif',
        webp: 'image/webp',
        ogg: 'audio/ogg',
        mp3: 'audio/mpeg',
        wav: 'audio/wav',
        m4a: 'audio/mp4',
    };
    return mimeTypes[ext] || 'application/octet-stream';
}

// Helper to download a file URL and convert it to base64
async function downloadFileAsBase64(
    fileUrl: string,
    fileName?: string
): Promise<{ base64: string; mimeType: string }> {
    return new Promise((resolve, reject) => {
        const protocol = fileUrl.startsWith('https:') ? https : http; // Determine protocol
        const MAX_BYTES = 5 * 1024 * 1024; // 5 MB hard cap
        let downloadedBytes = 0;

        const req = protocol.get(fileUrl, (res) => {
            if (res.statusCode && res.statusCode >= 400) {
                // Clean up response stream
                res.resume();
                return reject(
                    new Error(`Failed to download file: ${res.statusCode} ${res.statusMessage}`)
                );
            }
            const chunks: Buffer[] = [];
            res.on('data', (chunk) => {
                downloadedBytes += chunk.length;
                if (downloadedBytes > MAX_BYTES) {
                    // Clean up response stream before destroying request
                    res.resume();
                    req.destroy(new Error('Attachment exceeds 5 MB limit')); // Destroy the request
                    // No explicit reject here, as 'error' on req should handle it or the timeout will fire
                    return;
                }
                chunks.push(chunk);
            });
            res.on('end', () => {
                if (req.destroyed) return; // If request was destroyed due to size limit, do nothing
                const buffer = Buffer.concat(chunks);
                let contentType =
                    (res.headers['content-type'] as string) || 'application/octet-stream';

                // If server returns generic octet-stream, try to detect from file name
                if (contentType === 'application/octet-stream' && fileName) {
                    contentType = getMimeTypeFromPath(fileName);
                }

                resolve({ base64: buffer.toString('base64'), mimeType: contentType });
            });
            // Handle errors on the response stream itself (e.g., premature close)
            res.on('error', (err) => {
                if (!req.destroyed) {
                    // Avoid double-rejection if req.destroy() already called this
                    reject(err);
                }
            });
        });

        // Handle errors on the request object (e.g., socket hang up, DNS resolution error, or from req.destroy())
        req.on('error', (err) => {
            reject(err);
        });

        // Optional: Add a timeout for the request
        req.setTimeout(30000, () => {
            // 30 seconds timeout
            if (!req.destroyed) {
                req.destroy(new Error('File download timed out'));
            }
        });
    });
}

// startDiscordBot wires up a Discord client given a pre-initialized agent
export function startDiscordBot(agent: DextoAgent) {
    if (!token) {
        throw new Error('DISCORD_BOT_TOKEN is not set');
    }

    const agentEventBus = agent.agentEventBus;

    // Helper to get or create session for a Discord user
    // Each Discord user gets their own persistent session
    function getDiscordSessionId(userId: string): string {
        return `discord-${userId}`;
    }

    // Create Discord client
    const client = new Client({
        intents: [
            GatewayIntentBits.Guilds,
            GatewayIntentBits.GuildMessages,
            GatewayIntentBits.MessageContent,
            GatewayIntentBits.DirectMessages,
        ],
        partials: [Partials.Channel],
    });

    client.once('ready', () => {
        console.log(`Discord bot logged in as ${client.user?.tag || 'Unknown'}`);
    });

    client.on('messageCreate', async (message) => {
        // Ignore bots
        if (message.author.bot) return;

        if (RATE_LIMIT_ENABLED && COOLDOWN_SECONDS > 0) {
            // Only apply cooldown if enabled and seconds > 0
            const now = Date.now();
            const cooldownEnd = userCooldowns.get(message.author.id) || 0;

            if (now < cooldownEnd) {
                const timeLeft = (cooldownEnd - now) / 1000;
                try {
                    await message.reply(
                        `Please wait ${timeLeft.toFixed(1)} more seconds before using this command again.`
                    );
                } catch (replyError) {
                    console.error('Error sending cooldown message:', replyError);
                }
                return;
            }
        }

        let userText: string | undefined = message.content;
        let imageDataInput: { image: string; mimeType: string } | undefined;
        let fileDataInput: { data: string; mimeType: string; filename?: string } | undefined;

        // Helper to determine if mime type is audio
        const isAudioMimeType = (mimeType: string): boolean => {
            return mimeType.startsWith('audio/');
        };

        // Handle attachments (images and audio)
        if (message.attachments.size > 0) {
            const attachment = message.attachments.first();
            if (attachment && attachment.url) {
                try {
                    const { base64, mimeType } = await downloadFileAsBase64(
                        attachment.url,
                        attachment.name || 'file'
                    );

                    if (isAudioMimeType(mimeType)) {
                        // Handle audio files
                        fileDataInput = {
                            data: base64,
                            mimeType,
                            filename: attachment.name || 'audio.wav',
                        };
                        // Add context if only audio (no text in message)
                        if (!userText) {
                            userText =
                                '(User sent an audio message for transcription and analysis)';
                        }
                    } else if (mimeType.startsWith('image/')) {
                        // Handle image files
                        imageDataInput = { image: base64, mimeType };
                        userText = message.content || '';
                    }
                } catch (downloadError) {
                    console.error('Failed to download attachment:', downloadError);
                    try {
                        await message.reply(
                            `⚠️ Failed to download attachment: ${downloadError instanceof Error ? downloadError.message : 'Unknown error'}. Please try again or send the message without the attachment.`
                        );
                    } catch (replyError) {
                        console.error('Error sending attachment failure message:', replyError);
                    }
                    // Continue without the attachment - if there's text content, process that
                    if (!userText) {
                        return; // If there's no text and the attachment failed, nothing to process
                    }
                }
            }
        }

        // Only respond to !ask prefix or DMs
        if (!message.guild || (userText && userText.startsWith('!ask '))) {
            if (userText && userText.startsWith('!ask ')) {
                userText = userText.substring(5).trim();
            }
            if (!userText) return;

            // Subscribe for toolCall events
            const toolCallHandler = (payload: {
                toolName: string;
                args: unknown;
                callId?: string;
                sessionId: string;
            }) => {
                message.channel.send(`🔧 Calling tool **${payload.toolName}**`).catch((error) => {
                    console.error(
                        `Failed to send tool call notification for ${payload.toolName} to channel ${message.channel.id}:`,
                        error
                    );
                });
            };
            agentEventBus.on('llm:tool-call', toolCallHandler);

            try {
                const sessionId = getDiscordSessionId(message.author.id);
                await message.channel.sendTyping();

                // Build content array from message and attachments
                const content: import('@dexto/core').ContentPart[] = [];
                if (userText) {
                    content.push({ type: 'text', text: userText });
                }
                if (imageDataInput) {
                    content.push({
type: 'image',
|
||||
image: imageDataInput.image,
|
||||
mimeType: imageDataInput.mimeType,
|
||||
});
|
||||
}
|
||||
if (fileDataInput) {
|
||||
content.push({
|
||||
type: 'file',
|
||||
data: fileDataInput.data,
|
||||
mimeType: fileDataInput.mimeType,
|
||||
filename: fileDataInput.filename,
|
||||
});
|
||||
}
|
||||
|
||||
const response = await agent.generate(content, sessionId);
|
||||
|
||||
const responseText = response.content;
|
||||
|
||||
// Handle Discord's 2000 character limit
|
||||
const MAX_LENGTH = 1900; // Leave some buffer
|
||||
if (responseText && responseText.length <= MAX_LENGTH) {
|
||||
await message.reply(responseText);
|
||||
} else if (responseText) {
|
||||
// Split into chunks and send multiple messages
|
||||
let remaining = responseText;
|
||||
let isFirst = true;
|
||||
|
||||
while (remaining && remaining.length > 0) {
|
||||
const chunk = remaining.substring(0, MAX_LENGTH);
|
||||
remaining = remaining.substring(MAX_LENGTH);
|
||||
|
||||
if (isFirst) {
|
||||
await message.reply(chunk);
|
||||
isFirst = false;
|
||||
} else {
|
||||
// For subsequent chunks, use message.channel.send to avoid a chain of replies
|
||||
// Adding a small delay helps with ordering and rate limits
|
||||
await new Promise((resolve) => setTimeout(resolve, 250)); // 250ms delay
|
||||
await message.channel.send(chunk);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
await message.reply(
|
||||
'🤖 I received your message but could not generate a response.'
|
||||
);
|
||||
}
|
||||
|
||||
// Log token usage if available (optional analytics)
|
||||
if (response.usage) {
|
||||
logger.debug(
|
||||
`Session ${sessionId} - Tokens: input=${response.usage.inputTokens}, output=${response.usage.outputTokens}`
|
||||
);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error handling Discord message', error);
|
||||
try {
|
||||
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
||||
await message.reply(`❌ Error: ${errorMessage}`);
|
||||
} catch (replyError) {
|
||||
console.error('Error sending error reply:', replyError);
|
||||
}
|
||||
} finally {
|
||||
agentEventBus.off('llm:tool-call', toolCallHandler);
|
||||
// Set cooldown for the user after processing
|
||||
if (RATE_LIMIT_ENABLED && COOLDOWN_SECONDS > 0) {
|
||||
userCooldowns.set(message.author.id, Date.now() + COOLDOWN_SECONDS * 1000);
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
client.login(token);
|
||||
return client;
|
||||
}
|
||||
39
dexto/examples/discord-bot/main.ts
Normal file
@@ -0,0 +1,39 @@
#!/usr/bin/env node

import 'dotenv/config';
import { DextoAgent } from '@dexto/core';
import { loadAgentConfig, enrichAgentConfig } from '@dexto/agent-management';
import { startDiscordBot } from './bot.js';

async function main() {
    try {
        // Load agent configuration from local agent-config.yml
        console.log('🚀 Initializing Discord bot...');
        const configPath = './agent-config.yml';
        const config = await loadAgentConfig(configPath);
        const enrichedConfig = enrichAgentConfig(config, configPath);

        // Create and start the Dexto agent
        const agent = new DextoAgent(enrichedConfig, configPath);
        await agent.start();

        // Start the Discord bot
        console.log('📡 Starting Discord bot connection...');
        startDiscordBot(agent);

        console.log('✅ Discord bot is running! Send messages or use !ask <question> prefix.');
        console.log('   In DMs, just send your message without the !ask prefix.');

        // Graceful shutdown
        process.on('SIGINT', async () => {
            console.log('\n🛑 Shutting down...');
            await agent.stop();
            process.exit(0);
        });
    } catch (error) {
        console.error('❌ Failed to start Discord bot:', error);
        process.exit(1);
    }
}

main();
30
dexto/examples/discord-bot/package.json
Normal file
@@ -0,0 +1,30 @@
{
    "name": "dexto-discord-bot-example",
    "version": "1.0.0",
    "description": "Discord bot integration example using Dexto",
    "type": "module",
    "main": "dist/main.js",
    "scripts": {
        "start": "tsx main.ts",
        "build": "tsc",
        "dev": "tsx watch main.ts"
    },
    "keywords": [
        "dexto",
        "discord",
        "bot",
        "ai",
        "agent"
    ],
    "license": "MIT",
    "dependencies": {
        "@dexto/core": "^1.1.3",
        "@dexto/agent-management": "^1.1.3",
        "discord.js": "^14.19.3",
        "dotenv": "^16.3.1"
    },
    "devDependencies": {
        "tsx": "^4.7.1",
        "typescript": "^5.5.4"
    }
}
2019
dexto/examples/discord-bot/pnpm-lock.yaml
generated
Normal file
File diff suppressed because it is too large
60
dexto/examples/file-context-example.yml
Normal file
@@ -0,0 +1,60 @@
# Example: Agent Configuration with File-Based Context
# This demonstrates how to use the new file contributor feature
# to automatically include local files as context for your agent

# Configure the Large Language Model
llm:
  provider: openai
  model: gpt-5
  apiKey: $OPENAI_API_KEY

# Define system prompt with file-based context
systemPrompt:
  contributors:
    # Main agent instructions
    - id: base-prompt
      type: static
      priority: 0
      content: |
        You are a helpful coding assistant with knowledge of the current project.
        You have access to project documentation and context files.

    # Include project documentation files
    - id: project-docs
      type: file
      priority: 10
      files:
        - ./README.md
        - ./docs/architecture.md
        - ./CONTRIBUTING.md
      options:
        includeFilenames: true
        separator: "\n\n---\n\n"
        errorHandling: skip
        maxFileSize: 50000
        includeMetadata: false

    # Include coding guidelines
    - id: coding-standards
      type: file
      priority: 20
      files:
        - ./docs/coding-standards.md
        - ./docs/best-practices.txt
      options:
        includeFilenames: true
        includeMetadata: true
        errorHandling: "skip"

    # Add current date/time
    - id: current-time
      type: dynamic
      priority: 30
      source: date

# Optional: Connect to MCP servers for additional tools
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y', '@modelcontextprotocol/server-filesystem', '.']
10
dexto/examples/package.json
Normal file
@@ -0,0 +1,10 @@
{
    "name": "dexto-examples",
    "version": "0.0.0",
    "private": true,
    "type": "module",
    "description": "Example MCP servers for testing",
    "dependencies": {
        "@modelcontextprotocol/sdk": "^1.25.2"
    }
}
184
dexto/examples/resources-demo-server/README.md
Normal file
@@ -0,0 +1,184 @@
# MCP Resources Demo Server

A comprehensive MCP server that demonstrates all three major MCP capabilities: **Resources**, **Prompts**, and **Tools**.

## Purpose

This server provides a complete reference implementation of the Model Context Protocol, demonstrating how to build an MCP server with multiple capabilities. It's used for:
- Testing Dexto's MCP client integration
- Comprehensive integration testing
- Example implementation for building custom MCP servers

## What This Provides

### 1. MCP Resources Capability
Provides structured data resources that can be read by AI assistants.

**Operations:**
- **`resources/list`** - Lists available resources with URIs, names, descriptions, and MIME types
- **`resources/read`** - Reads content of specific resources by URI

**Available Resources:**

1. **Product Metrics Dashboard** (`mcp-demo://product-metrics`)
   - JSON data with KPIs, growth metrics, and feature usage
   - Content: Business analytics and performance indicators

2. **User Feedback Summary** (`mcp-demo://user-feedback`)
   - Markdown format with sentiment analysis and feature requests
   - Content: Customer insights and improvement suggestions

3. **System Status Report** (`mcp-demo://system-status`)
   - JSON data with infrastructure health and performance metrics
   - Content: Service status, uptime, and system monitoring data

### 2. MCP Prompts Capability
Provides reusable prompt templates with argument substitution.

**Operations:**
- **`prompts/list`** - Lists available prompts with descriptions and arguments
- **`prompts/get`** - Retrieves a prompt with argument values substituted

**Available Prompts:**

1. **analyze-metrics** - Analyze product metrics and provide insights
   - **Arguments:**
     - `metric_type` (required) - Type of metric: users, revenue, or features
     - `time_period` (optional) - Time period for analysis (e.g., "Q1 2025")
   - **Usage:** Generates analysis prompt for product metrics dashboard

2. **generate-report** - Generate a comprehensive product report
   - **Arguments:**
     - `report_type` (required) - Type of report: metrics, feedback, or status
   - **Usage:** Creates structured report prompt for specified data type

### 3. MCP Tools Capability
Provides executable tools that perform calculations and formatting.

**Operations:**
- **`tools/list`** - Lists available tools with descriptions and schemas
- **`tools/call`** - Executes a tool with provided arguments

**Available Tools:**

1. **calculate-growth-rate** - Calculate growth rate between two metrics
   - **Parameters:**
     - `current_value` (number, required) - Current metric value
     - `previous_value` (number, required) - Previous metric value
   - **Returns:** Growth rate percentage, absolute change

2. **format-metric** - Format a metric value with appropriate unit
   - **Parameters:**
     - `value` (number, required) - Metric value to format
     - `unit` (enum, required) - Unit type: `users`, `dollars`, or `percentage`
   - **Returns:** Formatted string with proper units and formatting
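The growth-rate math implied by the `calculate-growth-rate` description above can be sketched as follows. This is a minimal sketch for illustration only: the helper name, return-field names, and rounding are assumptions, not the actual server.js implementation.

```javascript
// Hypothetical sketch of the calculate-growth-rate tool's math:
// percentage change relative to the previous value, plus absolute change.
function calculateGrowthRate(currentValue, previousValue) {
    const absoluteChange = currentValue - previousValue;
    const growthRatePercent = (absoluteChange / previousValue) * 100;
    return {
        growth_rate_percent: Number(growthRatePercent.toFixed(2)),
        absolute_change: absoluteChange,
    };
}

// Example from the "Testing Tools" section: current 1500, previous 1200
console.log(calculateGrowthRate(1500, 1200));
// { growth_rate_percent: 25, absolute_change: 300 }
```

A real implementation would also need to guard against `previous_value` being zero before dividing.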
## Setup

1. Install dependencies:
   ```bash
   cd examples/resources-demo-server
   npm install
   ```

2. Run with Dexto:
   ```bash
   # From project root
   dexto --agent ./examples/resources-demo-server/agent.yml
   ```

## Testing All Capabilities

Try these interactions to test each capability:

### Testing Resources
1. **List Resources**: "What resources are available?"
2. **Read Specific Resource**: "Show me the product metrics data"
3. **Analyze Content**: "What does the user feedback summary say?"
4. **System Information**: "Check the current system status"

### Testing Prompts
1. **List Prompts**: "What prompts are available?"
2. **Use Prompt with Args**: "Use the analyze-metrics prompt for revenue in Q4 2024"
3. **Generate Report**: "Use generate-report to create a metrics summary"

### Testing Tools
1. **List Tools**: "What tools are available?"
2. **Calculate Growth**: "Use calculate-growth-rate with current value 1500 and previous value 1200"
3. **Format Metrics**: "Format the value 125000 as users"

## Expected Behavior

✅ **With All Capabilities Working:**
- Dexto connects to the MCP server successfully
- **Resources:** ResourceManager.listAllResources() returns 3 resources with URIs like `mcp:resources-demo:mcp-demo://product-metrics`
- **Prompts:** MCPManager.getAllPromptMetadata() returns 2 prompts (analyze-metrics, generate-report)
- **Tools:** MCPManager.getAllTools() returns 2 tools (calculate-growth-rate, format-metric)
- All capabilities are cached and accessible without network calls after initial connection

❌ **If MCP Integration Not Working:**
- Server connection fails during startup
- Capabilities are not discovered or cached
- Resource/prompt/tool operations return errors

## Technical Details

This server demonstrates the complete MCP protocol implementation:

1. **Server Side Implementation**:
   - **Resources**: Implements `resources/list` and `resources/read` handlers
   - **Prompts**: Implements `prompts/list` and `prompts/get` handlers with argument substitution
   - **Tools**: Implements `tools/list` and `tools/call` handlers with schema validation
   - Returns structured data with proper MIME types and schemas
   - Uses standard MCP SDK patterns from `@modelcontextprotocol/sdk`

2. **Client Side** (Dexto):
   - **MCPManager**: Discovers and caches all capabilities from the server
   - **ResourceManager**: Aggregates MCP resources with qualified URIs
   - **PromptManager**: Manages prompt templates and argument substitution
   - **ToolManager**: Executes MCP tools with proper error handling
   - All capabilities are cached for performance (no network calls after initial discovery)

3. **Integration Testing**:
   - Comprehensive integration tests in `packages/core/src/mcp/manager.integration.test.ts`
   - Tests verify resources, prompts, and tools all work together
   - Validates caching behavior and multi-server coordination

## Architecture

```text
┌─────────────────────────────────────────────────────┐
│                    Dexto Client                     │
│                                                     │
│ ┌───────────────┐ ┌───────────────┐ ┌────────────┐  │
│ │ResourceManager│ │ PromptManager │ │ MCPManager │  │
│ └───────┬───────┘ └───────┬───────┘ └─────┬──────┘  │
│         │                 │               │         │
│         └─────────────────┴───────────────┘         │
│                      MCPClient                      │
└─────────────────────┬───────────────────────────────┘
                      │ MCP Protocol (stdio)
                      │ - resources/*
                      │ - prompts/*
                      │ - tools/*
                      ▼
┌─────────────────────────────────────────────────────┐
│          Resources Demo Server (Node.js)            │
│                                                     │
│ ┌───────────────┐ ┌───────────────┐ ┌────────────┐  │
│ │   Resources   │ │    Prompts    │ │   Tools    │  │
│ │   (3 items)   │ │   (2 items)   │ │ (2 items)  │  │
│ └───────────────┘ └───────────────┘ └────────────┘  │
└─────────────────────────────────────────────────────┘
```

## Use Cases

This server validates that Dexto can:
- ✅ Connect to external MCP servers via stdio transport
- ✅ Discover and cache multiple capability types
- ✅ Handle resources with custom URI schemes
- ✅ Execute prompts with argument substitution
- ✅ Call tools with schema validation
- ✅ Coordinate multiple MCP servers simultaneously
- ✅ Provide zero-latency access after caching
30
dexto/examples/resources-demo-server/agent.yml
Normal file
@@ -0,0 +1,30 @@
# MCP Resources Demo Agent Configuration
# This agent demonstrates MCP Resources capability from external server

# MCP Servers configuration
mcpServers:
  resources-demo:
    type: stdio
    command: node
    args:
      - "${{dexto.agent_dir}}/server.js"
    timeout: 30000
    connectionMode: lenient

# LLM configuration
llm:
  provider: openai
  model: gpt-5-mini
  apiKey: $OPENAI_API_KEY

# System prompt configuration
systemPrompt:
  contributors:
    - id: primary
      type: static
      priority: 0
      content: |
        You are a helpful assistant.

# Optional greeting
greeting: "Hello! I'm connected to an MCP server that provides Resources. Ask me about product metrics, user feedback, or system status to see MCP Resources in action!"
1146
dexto/examples/resources-demo-server/package-lock.json
generated
Normal file
File diff suppressed because it is too large
15
dexto/examples/resources-demo-server/package.json
Normal file
@@ -0,0 +1,15 @@
{
    "name": "resources-demo-server",
    "version": "1.0.0",
    "description": "MCP server demonstrating Resources capability",
    "type": "module",
    "main": "server.js",
    "scripts": {
        "start": "node server.js"
    },
    "dependencies": {
        "@modelcontextprotocol/sdk": "^1.25.2"
    },
    "keywords": ["mcp", "resources", "demo"],
    "license": "MIT"
}
717
dexto/examples/resources-demo-server/server.js
Executable file
@@ -0,0 +1,717 @@
#!/usr/bin/env node

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
    ListResourcesRequestSchema,
    ReadResourceRequestSchema,
    ListPromptsRequestSchema,
    GetPromptRequestSchema,
    ListToolsRequestSchema,
    CallToolRequestSchema
} from '@modelcontextprotocol/sdk/types.js';

class ResourcesDemoServer {
    constructor() {
        this.server = new Server(
            {
                name: 'resources-demo-server',
                version: '1.0.0',
            },
            {
                capabilities: {
                    resources: {},
                    prompts: {},
                    tools: {},
                },
            }
        );

        this.setupHandlers();
    }
    setupHandlers() {
        // Handle resources/list requests
        this.server.setRequestHandler(ListResourcesRequestSchema, async () => {
            try {
                console.error(`📋 MCP Resources: Listing available resources`);

                const resources = [
                    {
                        uri: 'mcp-demo://product-metrics',
                        name: 'Product Metrics Dashboard',
                        description: 'Key performance indicators and product analytics',
                        mimeType: 'application/json',
                    },
                    {
                        uri: 'mcp-demo://user-feedback',
                        name: 'User Feedback Summary',
                        description: 'Customer feedback analysis and insights',
                        mimeType: 'text/markdown',
                    },
                    {
                        uri: 'mcp-demo://system-status',
                        name: 'System Status Report',
                        description: 'Current system health and performance metrics',
                        mimeType: 'application/json',
                    },
                ];

                return {
                    resources: resources,
                };
            } catch (error) {
                console.error(`Error listing MCP resources: ${error.message}`);
                return {
                    resources: [],
                };
            }
        });

        // Handle resources/read requests
        this.server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
            try {
                const { uri } = request.params;
                console.error(`📖 MCP Resources: Reading resource ${uri}`);

                const content = await this.getResourceContent(uri);

                return {
                    contents: [content],
                };
            } catch (error) {
                console.error(`Error reading MCP resource ${request.params.uri}: ${error.message}`);
                throw new Error(`Failed to read resource: ${error.message}`);
            }
        });

        // Handle prompts/list requests
        this.server.setRequestHandler(ListPromptsRequestSchema, async () => {
            try {
                console.error(`📝 MCP Prompts: Listing available prompts`);

                const prompts = [
                    {
                        name: 'analyze-metrics',
                        description: 'Analyze product metrics and provide insights',
                        arguments: [
                            {
                                name: 'metric_type',
                                description: 'Type of metric to analyze (users, revenue, features)',
                                required: true,
                            },
                            {
                                name: 'time_period',
                                description: 'Time period for analysis (e.g., "Q4 2024")',
                                required: false,
                            },
                        ],
                    },
                    {
                        name: 'generate-report',
                        description: 'Generate a comprehensive product report',
                        arguments: [
                            {
                                name: 'report_type',
                                description: 'Type of report (metrics, feedback, status)',
                                required: true,
                            },
                        ],
                    },
                    {
                        name: 'deep-dive-analysis',
                        description: 'Perform deep analysis with linked reference data (demonstrates resource_link)',
                        arguments: [
                            {
                                name: 'focus',
                                description: 'Analysis focus area (growth, satisfaction, operations)',
                                required: false,
                            },
                        ],
                    },
                ];

                return {
                    prompts: prompts,
                };
            } catch (error) {
                console.error(`Error listing MCP prompts: ${error.message}`);
                return {
                    prompts: [],
                };
            }
        });

        // Handle prompts/get requests
        this.server.setRequestHandler(GetPromptRequestSchema, async (request) => {
            try {
                const { name, arguments: args } = request.params;
                console.error(`📖 MCP Prompts: Reading prompt ${name}`);

                const promptContent = await this.getPromptContent(name, args);

                return {
                    messages: promptContent,
                };
            } catch (error) {
                console.error(`Error reading MCP prompt ${request.params.name}: ${error.message}`);
                throw new Error(`Failed to read prompt: ${error.message}`);
            }
        });

        // Handle tools/list requests
        this.server.setRequestHandler(ListToolsRequestSchema, async () => {
            try {
                console.error(`🔧 MCP Tools: Listing available tools`);

                const tools = [
                    {
                        name: 'calculate-growth-rate',
                        description: 'Calculate growth rate between two metrics',
                        inputSchema: {
                            type: 'object',
                            properties: {
                                current_value: {
                                    type: 'number',
                                    description: 'Current metric value',
                                },
                                previous_value: {
                                    type: 'number',
                                    description: 'Previous metric value',
                                },
                            },
                            required: ['current_value', 'previous_value'],
                        },
                    },
                    {
                        name: 'format-metric',
                        description: 'Format a metric value with appropriate unit',
                        inputSchema: {
                            type: 'object',
                            properties: {
                                value: {
                                    type: 'number',
                                    description: 'Metric value to format',
                                },
                                unit: {
                                    type: 'string',
                                    description: 'Unit type (users, dollars, percentage)',
                                    enum: ['users', 'dollars', 'percentage'],
                                },
                            },
                            required: ['value', 'unit'],
                        },
                    },
                ];

                return {
                    tools: tools,
                };
            } catch (error) {
                console.error(`Error listing MCP tools: ${error.message}`);
                return {
                    tools: [],
                };
            }
        });

        // Handle tools/call requests
        this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
            try {
                const { name, arguments: args } = request.params;
                console.error(`⚙️ MCP Tools: Calling tool ${name}`);

                const result = await this.callTool(name, args);

                return {
                    content: [
                        {
                            type: 'text',
                            text: JSON.stringify(result, null, 2),
                        },
                    ],
                };
            } catch (error) {
                console.error(`Error calling MCP tool ${request.params.name}: ${error.message}`);
                throw new Error(`Failed to call tool: ${error.message}`);
            }
        });
    }
    async getResourceContent(uri) {
        switch (uri) {
            case 'mcp-demo://product-metrics':
                return {
                    uri: uri,
                    mimeType: 'application/json',
                    text: JSON.stringify({
                        "dashboard": "Product Metrics",
                        "period": "Q4 2024",
                        "metrics": {
                            "monthly_active_users": 125000,
                            "daily_active_users": 45000,
                            "conversion_rate": 3.2,
                            "churn_rate": 2.1,
                            "customer_satisfaction": 4.6,
                            "net_promoter_score": 72
                        },
                        "growth": {
                            "user_growth_rate": 15.3,
                            "revenue_growth_rate": 22.1,
                            "feature_adoption_rate": 68.4
                        },
                        "top_features": [
                            {
                                "name": "Analytics Dashboard",
                                "usage_percentage": 89.2,
                                "satisfaction": 4.7
                            },
                            {
                                "name": "Mobile App",
                                "usage_percentage": 76.5,
                                "satisfaction": 4.4
                            },
                            {
                                "name": "API Integration",
                                "usage_percentage": 45.8,
                                "satisfaction": 4.5
                            }
                        ],
                        "geographic_data": {
                            "north_america": 45,
                            "europe": 32,
                            "asia_pacific": 18,
                            "other": 5
                        },
                        "last_updated": "2024-12-15T10:30:00Z"
                    }, null, 2),
                };

            case 'mcp-demo://user-feedback':
                return {
                    uri: uri,
                    mimeType: 'text/markdown',
                    text: `# User Feedback Summary - December 2024

## Overall Sentiment Analysis
- **Positive**: 78% (up from 72% last month)
- **Neutral**: 15%
- **Negative**: 7% (down from 11% last month)

## Top Positive Feedback Themes

### 1. User Interface Improvements (42% of positive feedback)
> "The new dashboard is so much cleaner and easier to navigate. Finding what I need takes half the time now."

> "Love the dark mode option! Finally my eyes don't hurt during late-night work sessions."

### 2. Performance Enhancements (31% of positive feedback)
> "The app loads so much faster now. What used to take 10 seconds now happens instantly."

> "Mobile app performance is night and day better. No more freezing or crashes."

### 3. New Feature Adoption (27% of positive feedback)
> "The AI-powered suggestions are spot on. It's like the app reads my mind."

> "Export functionality is exactly what we needed for our quarterly reports."

## Areas for Improvement

### 1. Search Functionality (38% of suggestions)
- Users want more advanced filtering options
- Request for saved search presets
- Need better search result relevance

### 2. Mobile Experience (31% of suggestions)
- Offline mode for key features
- Better handling of poor network conditions
- More gestures and shortcuts

### 3. Integration Capabilities (31% of suggestions)
- More third-party app connections
- Better API documentation
- Webhook support for real-time updates

## Feature Requests by Priority

| Feature | Votes | Priority | Est. Effort |
|---------|-------|----------|-------------|
| Advanced Search Filters | 234 | High | 3 weeks |
| Offline Mobile Mode | 187 | High | 5 weeks |
| Slack Integration | 156 | Medium | 2 weeks |
| Custom Dashboard Widgets | 143 | Medium | 4 weeks |
| Bulk Operations | 98 | Low | 6 weeks |

## Customer Support Insights

- Average response time: 4.2 hours (target: <4 hours) ✅
- First contact resolution: 67% (target: 70%) ⚠️
- Customer satisfaction with support: 4.4/5.0 ✅

## Recommended Actions

1. **Immediate (Next Sprint)**
   - Implement basic search filtering
   - Fix remaining mobile performance issues

2. **Short-term (Next Month)**
   - Develop offline mode MVP
   - Begin Slack integration development

3. **Medium-term (Q1 2025)**
   - Launch custom dashboard features
   - Expand integration marketplace

## Competitive Analysis Insights

Users comparing us to competitors highlighted:
- **Strengths**: Ease of use, customer support, pricing
- **Gaps**: Advanced analytics, enterprise features, mobile capabilities

*Report generated on December 15, 2024*`,
                };

            case 'mcp-demo://system-status':
                return {
                    uri: uri,
                    mimeType: 'application/json',
                    text: JSON.stringify({
                        "status": "operational",
                        "last_updated": "2024-12-15T14:45:00Z",
                        "uptime_percentage": 99.8,
                        "services": {
                            "api_gateway": {
                                "status": "operational",
                                "response_time_ms": 85,
                                "error_rate": 0.002,
                                "requests_per_minute": 1250
                            },
                            "database": {
                                "status": "operational",
                                "connection_pool_usage": 0.45,
                                "query_performance_ms": 12,
                                "active_connections": 67
                            },
                            "cache_layer": {
                                "status": "operational",
                                "hit_rate": 0.94,
                                "memory_usage": 0.72,
                                "keys_count": 245000
                            },
                            "file_storage": {
                                "status": "operational",
                                "storage_usage": 0.68,
                                "upload_speed_mbps": 125,
                                "download_speed_mbps": 180
                            },
                            "background_jobs": {
                                "status": "operational",
                                "queue_size": 23,
                                "processing_rate_per_minute": 450,
                                "failed_jobs_last_hour": 2
                            }
                        },
                        "infrastructure": {
                            "servers": {
                                "web_servers": 4,
                                "api_servers": 6,
                                "database_servers": 2,
                                "cache_servers": 3
                            },
                            "load_balancer": {
                                "status": "healthy",
                                "active_connections": 1850,
                                "ssl_termination": "enabled"
                            },
                            "cdn": {
                                "status": "operational",
                                "cache_hit_ratio": 0.89,
                                "global_edge_locations": 45
                            }
                        },
                        "security": {
                            "ssl_certificate": {
                                "status": "valid",
                                "expires": "2025-03-15T00:00:00Z"
                            },
                            "firewall": {
                                "status": "active",
                                "blocked_requests_last_hour": 127
                            },
                            "ddos_protection": {
                                "status": "active",
                                "threats_mitigated": 5
                            }
                        },
                        "recent_incidents": [],
                        "scheduled_maintenance": {
                            "next_window": "2024-12-22T02:00:00Z",
                            "duration_hours": 2,
                            "description": "Database optimization and index rebuilding"
                        }
                    }, null, 2),
                };

            default:
                throw new Error(`Unknown resource URI: ${uri}`);
        }
    }
  async getPromptContent(name, args = {}) {
    switch (name) {
      case 'analyze-metrics': {
        const metricType = args.metric_type || 'users';
        const timePeriod = args.time_period || 'Q4 2024';

        // Fetch the resource content to embed
        const resourceContent = await this.getResourceContent('mcp-demo://product-metrics');

        // Return multiple messages, each with a single content block
        // This is spec-compliant: PromptMessage.content must be a single ContentBlock
        return [
          {
            role: 'user',
            content: {
              type: 'text',
              text: `Please analyze the ${metricType} metrics for ${timePeriod}.

Consider:
1. Current trends and patterns
2. Growth or decline rates
3. Key insights and recommendations
4. Areas of concern or opportunity`,
            },
          },
          {
            role: 'user',
            content: {
              type: 'resource',
              resource: {
                uri: resourceContent.uri,
                mimeType: resourceContent.mimeType,
                text: resourceContent.text,
              },
            },
          },
        ];
      }

      case 'generate-report': {
        const reportType = args.report_type || 'metrics';
        let resourceUri;
        switch (reportType) {
          case 'metrics':
            resourceUri = 'mcp-demo://product-metrics';
            break;
          case 'feedback':
            resourceUri = 'mcp-demo://user-feedback';
            break;
          case 'status':
            resourceUri = 'mcp-demo://system-status';
            break;
          default:
            resourceUri = 'mcp-demo://product-metrics';
        }

        // Fetch the resource content to embed
        const resourceContent = await this.getResourceContent(resourceUri);

        // Return multiple messages, each with a single content block
        // This is spec-compliant: PromptMessage.content must be a single ContentBlock
        return [
          {
            role: 'user',
            content: {
              type: 'text',
              text: `Generate a comprehensive ${reportType} report.

Include:
- Executive summary
- Key findings
- Data visualization suggestions
- Actionable recommendations`,
            },
          },
          {
            role: 'user',
            content: {
              type: 'resource',
              resource: {
                uri: resourceContent.uri,
                mimeType: resourceContent.mimeType,
                text: resourceContent.text,
              },
            },
          },
        ];
      }

      case 'deep-dive-analysis': {
        const focus = args.focus || 'growth';

        // Define analysis based on focus area
        let analysisPrompt;
        let relevantResources = [];

        switch (focus) {
          case 'growth':
            analysisPrompt = `Conduct a comprehensive growth analysis:

1. Analyze user acquisition and retention trends from the metrics
2. Identify growth drivers and potential bottlenecks
3. Cross-reference user feedback to understand growth quality
4. Evaluate system capacity for scaling

Reference the linked data sources below and provide:
- Growth trajectory analysis with key inflection points
- User sentiment correlation with growth metrics
- Infrastructure readiness assessment
- Actionable growth recommendations for next quarter`;
            relevantResources = [
              'mcp-demo://product-metrics',
              'mcp-demo://user-feedback',
              'mcp-demo://system-status',
            ];
            break;

          case 'satisfaction':
            analysisPrompt = `Perform deep customer satisfaction analysis:

1. Examine satisfaction scores and NPS trends in metrics
2. Analyze qualitative feedback themes and sentiment
3. Correlate feature usage with satisfaction levels
4. Assess support team performance impact

Reference the linked data sources below and provide:
- Satisfaction trend analysis with root causes
- Feature satisfaction breakdown
- Critical improvement areas ranked by impact
- Customer retention risk assessment`;
            relevantResources = [
              'mcp-demo://product-metrics',
              'mcp-demo://user-feedback',
            ];
            break;

          case 'operations':
            analysisPrompt = `Analyze operational health and performance:

1. Review system performance metrics and uptime
2. Assess infrastructure capacity and efficiency
3. Identify operational risks and bottlenecks
4. Evaluate technical debt and maintenance needs

Reference the linked data sources below and provide:
- System health score with risk factors
- Performance optimization opportunities
- Capacity planning recommendations
- Incident prevention strategies`;
            relevantResources = [
              'mcp-demo://system-status',
              'mcp-demo://product-metrics',
            ];
            break;

          default:
            // Default to growth analysis
            analysisPrompt = `Conduct a comprehensive growth analysis with available data sources.`;
            relevantResources = [
              'mcp-demo://product-metrics',
              'mcp-demo://user-feedback',
              'mcp-demo://system-status',
            ];
        }

        // Build resource reference links using @<uri> syntax
        // This demonstrates the difference from embedded resources:
        // - Embedded resources (type: 'resource'): Content included directly in prompt
        // - Resource references (@<uri>): Pointers that the UI/client can fetch separately
        // - Use references when you have multiple large data sources
        const resourceRefs = relevantResources.map(uri => `@<${uri}>`).join('\n');

        const fullPrompt = `${analysisPrompt}

Data sources for analysis:
${resourceRefs}`;

        // Return a single message with text content including @<uri> references
        return [
          {
            role: 'user',
            content: {
              type: 'text',
              text: fullPrompt,
            },
          },
        ];
      }

      default:
        throw new Error(`Unknown prompt: ${name}`);
    }
  }
  async callTool(name, args = {}) {
    switch (name) {
      case 'calculate-growth-rate': {
        const { current_value, previous_value } = args;
        if (previous_value === 0) {
          return {
            growth_rate: null,
            error: 'Cannot calculate growth rate with zero previous value',
          };
        }
        const growthRate = ((current_value - previous_value) / previous_value) * 100;
        return {
          growth_rate: growthRate.toFixed(2) + '%',
          current_value,
          previous_value,
          absolute_change: current_value - previous_value,
        };
      }

      case 'format-metric': {
        const { value, unit } = args;
        let formatted;
        switch (unit) {
          case 'users':
            formatted = value.toLocaleString() + ' users';
            break;
          case 'dollars':
            formatted = '$' + value.toLocaleString('en-US', { minimumFractionDigits: 2, maximumFractionDigits: 2 });
            break;
          case 'percentage':
            formatted = value.toFixed(1) + '%';
            break;
          default:
            formatted = value.toString();
        }
        return {
          formatted_value: formatted,
          raw_value: value,
          unit: unit,
        };
      }

      default:
        throw new Error(`Unknown tool: ${name}`);
    }
  }
  async start() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);

    console.error('🚀 MCP Resources Demo Server started');
    console.error('📋 Capabilities: Resources, Prompts, Tools');
    console.error('🔄 Operations: resources/*, prompts/*, tools/*');
    console.error('💡 Comprehensive demo of MCP protocol features');
  }
}

// Start the server
const server = new ResourcesDemoServer();
server.start().catch(error => {
  console.error('Failed to start server:', error);
  process.exit(1);
});
16
dexto/examples/telegram-bot/.env.example
Normal file
@@ -0,0 +1,16 @@
# Telegram Bot Token
# Get this from BotFather: https://t.me/botfather
TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here

# LLM API Key
# Required: Set this to your OpenAI API key
# Get one at: https://platform.openai.com/account/api-keys
OPENAI_API_KEY=your_openai_api_key_here

# Alternative LLM providers (uncomment one and use it in agent-config.yml)
# ANTHROPIC_API_KEY=your_anthropic_api_key_here
# GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key_here

# Inline query settings (optional)
# Maximum concurrent inline queries to process (default: 5)
TELEGRAM_INLINE_QUERY_CONCURRENCY=5
329
dexto/examples/telegram-bot/README.md
Normal file
@@ -0,0 +1,329 @@
# Telegram Bot Example

This is a **reference implementation** showing how to integrate DextoAgent with Telegram using grammY. It demonstrates:
- Connecting to the Telegram Bot API with long polling
- Processing messages and commands
- Handling inline queries for direct responses in any chat
- Handling image attachments
- Managing per-user conversation sessions
- Integrating tool calls with Telegram messages

## ⚠️ Important: This is a Reference Implementation

This example is provided to show how to build Telegram integrations with Dexto. While it works, it's not a production-ready bot and may lack:
- Advanced error recovery and retry logic
- Comprehensive logging and monitoring
- Scalability features for large deployments
- Webhook support (it currently uses long polling only)
- Advanced rate limiting

Use it as a foundation to build your own customized Telegram bot!

## Quick Start

### 1. Get Your Telegram Bot Token

1. Open Telegram and search for **BotFather** (verify it has the blue checkmark)
2. Send `/start` to begin a conversation
3. Send `/newbot` to create a new bot
4. Follow the prompts:
   - Give your bot a name (e.g., "Dexto AI Bot")
   - Give it a username ending in "bot" (e.g., "dexto_ai_bot")
5. BotFather will provide your token - save it for the next step

### 2. Set Up Your Environment Variables

Copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

Edit `.env` and add:

1. **Your Telegram Bot Token** (required):
   ```
   TELEGRAM_BOT_TOKEN=your_token_here
   ```

2. **Your LLM API Key** (required):

   For OpenAI (default):
   ```
   OPENAI_API_KEY=your_openai_api_key_here
   ```

   Or use a different provider and update `agent-config.yml`:
   ```
   # ANTHROPIC_API_KEY=your_key_here
   # GOOGLE_GENERATIVE_AI_API_KEY=your_key_here
   ```

**Get API Keys:**
- **OpenAI**: https://platform.openai.com/account/api-keys
- **Anthropic**: https://console.anthropic.com/account/keys
- **Google**: https://ai.google.dev

### 3. Install Dependencies

Install the required dependencies:

```bash
pnpm install
```

### 4. Run the Bot

Start the bot:

```bash
pnpm start
```

You should see:
```
🚀 Initializing Telegram bot...
📡 Starting Telegram bot connection...
✅ Telegram bot is running! Start with /start command.
```

### 5. Test Your Bot

1. Open Telegram and search for your bot's username
2. Click "Start" or send the `/start` command
3. You should see a welcome message with buttons

## Usage

### Commands

- **`/start`** - Display a welcome message with command buttons and options
- **`/ask <question>`** - Ask a question (works in groups with the prefix)
- **`/explain <topic>`** - Get detailed explanations of topics
- **`/summarize <text>`** - Summarize provided content
- **`/code <problem>`** - Get help with programming tasks
- **`/analyze <data>`** - Analyze information or data
- **`/creative <idea>`** - Brainstorm creatively on a topic

### Features

#### Quick Command Buttons
When you send `/start`, the bot displays interactive buttons for each command. Click a button to start that interaction without typing a command!

**Available command buttons:**
- 💡 Explain
- 📋 Summarize
- 💻 Code
- ✨ Creative
- 🔍 Analyze

#### Text Messages
Send any message directly to the bot in DMs, and it will respond using the configured LLM with full conversation context.

#### Image Support
Send photos with optional captions, and the bot will analyze them using the agent's vision capabilities (for models that support vision).

#### Audio/Voice Messages
Send voice messages or audio files, and the bot will:
- Transcribe the audio (if the model supports speech recognition)
- Analyze the audio content
- Use the voice message as context for responses

Supported audio formats: OGG (Telegram voice), MP3, WAV, and other audio formats your LLM supports.

#### Inline Queries
In any chat (without messaging the bot), use inline mode:
```
@your_bot_name What is the capital of France?
```
The bot will respond with a result you can send directly to the chat.

#### Session Management
- **Reset Conversation** - Use the 🔄 Reset button from `/start` to clear conversation history
- **Help** - Use the ❓ Help button to see all available features

#### Per-User Sessions
Each Telegram user gets their own isolated conversation session. Multiple users in a group chat will each have separate conversations, preventing cross-user context pollution.
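The per-user isolation described above comes down to deriving a stable session id from the Telegram user id. A minimal sketch of the pattern, mirroring the `telegram-${userId}` naming this example's `bot.ts` uses:

```typescript
// Derive a stable, per-user session id so conversations never mix.
// Mirrors the `telegram-${userId}` scheme used in bot.ts.
function getTelegramSessionId(userId: number): string {
    return `telegram-${userId}`;
}

// Two users in the same group chat map to distinct sessions:
const sessionA = getTelegramSessionId(111);
const sessionB = getTelegramSessionId(222);
```

Because the id is derived (not random), the same user always lands back in the same session for the lifetime of the bot process.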

## Configuration

### Adding Custom Prompts

The bot automatically loads prompts from your `agent-config.yml` file. These prompts appear as buttons in `/start` and can be invoked as slash commands.

**To add a new prompt:**

```yaml
prompts:
  - type: inline
    id: mycommand              # Used as /mycommand
    title: "🎯 My Command"     # Button label
    description: "What this command does"
    prompt: "System instruction:\n\n{{context}}" # Template with {{context}} placeholder
    category: custom
    priority: 10
```
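When a context-requiring prompt is invoked, the user's text is substituted into the `{{context}}` placeholder. A rough sketch of that substitution (the real logic lives inside Dexto; `fillTemplate` here is a hypothetical helper for illustration):

```typescript
// Hypothetical sketch: fill a prompt template's {{context}} placeholder
// with the text the user supplied after the slash command.
function fillTemplate(template: string, context: string): string {
    return template.replace('{{context}}', context);
}

const template = 'Please provide a concise summary of the following:\n\n{{context}}';
const prompt = fillTemplate(template, 'Telegram bots poll for updates...');
```

A template without `{{context}}` is self-contained, which is why such prompts can execute immediately when their button is clicked.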

**Example prompts included:**

*Self-contained (execute immediately):*
- `/quick-start` - Learn what the bot can do
- `/demo` - See tools in action

*Context-requiring (ask for input):*
- `/summarize` - Summarize content
- `/explain` - Detailed explanations
- `/code` - Programming help
- `/translate` - Language translation

**Using prompts:**
1. **As slash commands**: `/summarize Your text here`
2. **As buttons**:
   - Self-contained prompts execute immediately ⚡
   - Context-requiring prompts ask for input 💬
3. **Smart detection**: The bot automatically determines whether context is needed
4. **Dynamic loading**: Prompts update when you restart the bot

### Switching LLM Providers

The bot comes configured with OpenAI by default. To use a different provider:

1. **Update `agent-config.yml`** - Change the `llm` section:

   ```yaml
   # For Anthropic Claude:
   llm:
     provider: anthropic
     model: claude-sonnet-4-5-20250929
     apiKey: $ANTHROPIC_API_KEY

   # For Google Gemini:
   llm:
     provider: google
     model: gemini-2.0-flash
     apiKey: $GOOGLE_GENERATIVE_AI_API_KEY
   ```

2. **Set the API key in `.env`**:
   ```
   ANTHROPIC_API_KEY=your_key_here
   # or
   GOOGLE_GENERATIVE_AI_API_KEY=your_key_here
   ```

### Environment Variables

Create a `.env` file with:

- **`TELEGRAM_BOT_TOKEN`** (Required): Your bot's authentication token from BotFather
- **`OPENAI_API_KEY`** (Required for OpenAI): Your OpenAI API key
- **`ANTHROPIC_API_KEY`** (Optional): For using Claude models
- **`GOOGLE_GENERATIVE_AI_API_KEY`** (Optional): For using Gemini models
- **`TELEGRAM_INLINE_QUERY_CONCURRENCY`** (Optional): Max concurrent inline queries (default: 5)

## Features

### Session Management
Each Telegram user gets their own persistent conversation session during the bot's lifetime. Messages from different users don't interfere with each other.

### Tool Notifications
When the LLM calls a tool (e.g., making an API call), the bot sends a notification message so users can see what's happening:
```
Calling get_weather with args: {...}
```

### Inline Query Debouncing
Repeated inline queries are cached for 2 seconds to reduce redundant processing.
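A minimal sketch of that debounce cache (the shipped `bot.ts` adds a size cap and periodic cleanup on top of the same idea):

```typescript
// Cache inline-query results for a short window so identical queries
// arriving back-to-back reuse the previous answer instead of hitting the LLM again.
const DEBOUNCE_MS = 2000;
const cache: Record<string, { timestamp: number; results: string[] }> = {};

function getCached(query: string, now: number): string[] | null {
    const hit = cache[query];
    return hit && now - hit.timestamp <= DEBOUNCE_MS ? hit.results : null;
}

function putCached(query: string, results: string[], now: number): void {
    cache[query] = { timestamp: now, results };
}

putCached('capital of France?', ['Paris'], 0);
const fresh = getCached('capital of France?', 1000); // within the 2 s window: cache hit
const stale = getCached('capital of France?', 5000); // window expired: null
```

Keying the cache on the raw query string is the simplest choice; a real bot might also key on the user id if results are personalized.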

### Concurrency Control
By default, the bot limits concurrent inline query processing to 5 to prevent overwhelming the system. Adjust this via `TELEGRAM_INLINE_QUERY_CONCURRENCY`.

## Architecture

```
Telegram Message
        ↓
startTelegramBot() wires up event handlers
        ↓
agent.run() processes the message
        ↓
Response sent back to Telegram
        ↓
agentEventBus emits events (tool calls, etc.)
        ↓
Tool notifications sent to chat
```

## Transport Methods

### Long Polling (Current)
The bot uses long polling by default: it continuously asks Telegram "any new messages?" This is:
- ✅ Simpler to implement
- ✅ Works behind firewalls
- ❌ More network overhead
- ❌ Slightly higher latency

### Webhook (Optional)
For production use, consider implementing webhook support for better performance. This would require:
- A public URL with HTTPS
- Updating the grammY configuration
- Setting up a reverse proxy if needed

## Limitations

- **No persistence across restarts**: Sessions are lost when the bot restarts. For persistent sessions, implement a database layer.
- **Long polling**: Not ideal for high-volume bots. Consider webhooks for production.
- **Per-deployment limits**: The bot runs as a single instance. For horizontal scaling, implement clustering with a distributed session store.
- **No button callbacks for advanced features**: This example shows basic callback handling. Extend it for more complex interactions.

## Troubleshooting

### Bot doesn't respond to messages
- Verify `TELEGRAM_BOT_TOKEN` is correct in `.env`
- Check that the bot is online by sending `/start` to BotFather
- Ensure the bot is running (`pnpm start`)

### "TELEGRAM_BOT_TOKEN is not set"
- Check that the `.env` file exists in the example directory
- Verify the token is correctly copied from BotFather

### Timeout on inline queries
- Check the `TELEGRAM_INLINE_QUERY_CONCURRENCY` setting
- The bot has a 15-second timeout for inline queries - if your LLM is slow, increase this in `bot.ts`

### Image processing fails
- Ensure images are valid and not corrupted
- Check network connectivity for downloading images

## Next Steps

To customize this bot:

1. **Modify `agent-config.yml`**:
   - Change the LLM provider/model
   - Add MCP servers for additional capabilities
   - Customize the system prompt

2. **Extend `bot.ts`**:
   - Add more commands
   - Implement webhook support
   - Add logging/monitoring
   - Add database persistence

3. **Deploy**:
   - Run on a server/VPS that stays online 24/7
   - Use a process manager like PM2 to auto-restart on crashes
   - Consider hosting on platforms like Railway, Heroku, or AWS
   - Migrate to webhook transport for better scalability

## Documentation

- [grammY Documentation](https://grammy.dev/)
- [Telegram Bot API](https://core.telegram.org/bots/api)
- [BotFather Commands](https://core.telegram.org/bots#botfather)
- [Dexto Documentation](https://dexto.dev)
- [Dexto Agent API](https://docs.dexto.dev)

## License

MIT
119
dexto/examples/telegram-bot/agent-config.yml
Normal file
@@ -0,0 +1,119 @@
# Telegram Bot Agent Configuration
# This agent is optimized for Telegram bot interactions.
# For more configuration options, see: https://docs.dexto.dev/guides/configuring-dexto

# LLM Configuration
# The bot uses this LLM provider and model for all interactions
llm:
  provider: google
  model: gemini-2.5-flash
  apiKey: $GOOGLE_GENERATIVE_AI_API_KEY

# System Prompt
# Defines the bot's personality, capabilities, and behavior
# This prompt is customized for Telegram's capabilities and constraints
systemPrompt: |
  You are a friendly and helpful Telegram bot powered by Dexto. You assist users with a wide range of tasks
  including answering questions, providing information, analysis, coding help, and creative writing.

  ## Your Capabilities
  - File System Access: Read and explore files in your working directory
  - Web Browsing: Visit websites and extract information (when configured)
  - Code Analysis: Help with programming, debugging, and code review
  - Information Retrieval: Answer questions and provide detailed explanations
  - Creative Tasks: Writing, brainstorming, idea generation
  - Inline Queries: Users can use @botname to get quick answers in any chat

  ## Response Guidelines
  - Keep responses conversational and friendly
  - Use plain text formatting (Telegram doesn't support complex markdown)
  - For code, use simple indentation without backticks or special formatting
  - For emphasis, use CAPS or simple punctuation like asterisks *like this*
  - Break complex topics into digestible parts with clear spacing
  - Be helpful, respectful, and accurate
  - If you can't help, explain why clearly and suggest alternatives

  ## Commands
  - /start - Welcome message with menu
  - /ask <question> - Ask a question in group chats
  - Send messages in DM for direct conversation

  Always provide helpful, accurate, and friendly responses in plain text format.

# Tool Confirmation Configuration
# Telegram bots auto-approve tool calls, so we disable confirmation
toolConfirmation:
  mode: auto-approve
  allowedToolsStorage: memory

# Storage Configuration
# Data storage for sessions, memories, and caching
storage:
  cache:
    type: in-memory
  database:
    type: sqlite
    # Database file will be created in ~/.dexto/agents/telegram-bot-agent/
  blob:
    type: local
    maxBlobSize: 52428800 # 50MB per upload
    maxTotalSize: 1073741824 # 1GB total
    cleanupAfterDays: 30

# Optional: Greeting shown in /start command
greeting: "Welcome! I'm a Telegram bot powered by Dexto. I can help with questions, code, writing, analysis, and more! 🤖"

# Prompts - Define reusable command templates
# These appear as buttons in /start and can be invoked as slash commands
prompts:
  # Self-contained prompts (execute immediately when clicked)
  - type: inline
    id: quick-start
    title: "🚀 Quick Start"
    description: "Learn what I can do and how to use me"
    prompt: "I'd like to get started quickly. Can you show me a few examples of what you can do in Telegram and help me understand how to work with you?"
    category: learning
    priority: 10
    showInStarters: true

  - type: inline
    id: demo
    title: "⚡ Demo Tools"
    description: "See available tools in action"
    prompt: "I'd like to see your tools in action. Can you show me what tools you have available and demonstrate one with a practical example?"
    category: tools
    priority: 9
    showInStarters: true

  # Context-requiring prompts (ask for input when clicked)
  - type: inline
    id: summarize
    title: "📋 Summarize"
    description: "Summarize text, articles, or concepts"
    prompt: "Please provide a concise summary of the following. Focus on key points and main ideas:\n\n{{context}}"
    category: productivity
    priority: 8

  - type: inline
    id: explain
    title: "💡 Explain"
    description: "Get detailed explanations of any topic"
    prompt: "Please explain the following concept in detail. Break it down into understandable parts:\n\n{{context}}"
    category: learning
    priority: 7

  - type: inline
    id: code
    title: "💻 Code Help"
    description: "Get help with programming tasks"
    prompt: "You are a coding expert. Help with the following programming task. Provide clear, well-commented code examples:\n\n{{context}}"
    category: development
    priority: 6

  - type: inline
    id: translate
    title: "🌐 Translate"
    description: "Translate text between languages"
    prompt: "Translate the following text. Detect the source language and translate to English:\n\n{{context}}"
    category: language
    priority: 5
583
dexto/examples/telegram-bot/bot.ts
Normal file
@@ -0,0 +1,583 @@
#!/usr/bin/env node
|
||||
import 'dotenv/config';
|
||||
import { Bot, InlineKeyboard } from 'grammy';
|
||||
|
||||
// Type for inline query result article (matching what we create)
|
||||
type InlineQueryResultArticle = {
|
||||
type: 'article';
|
||||
id: string;
|
||||
title: string;
|
||||
input_message_content: { message_text: string };
|
||||
description: string;
|
||||
};
|
||||
import * as https from 'https';
|
||||
import { DextoAgent, logger } from '@dexto/core';
|
||||
|
||||
const token = process.env.TELEGRAM_BOT_TOKEN;
|
||||
|
||||
// Concurrency cap and debounce cache for inline queries
|
||||
const MAX_CONCURRENT_INLINE_QUERIES = process.env.TELEGRAM_INLINE_QUERY_CONCURRENCY
|
||||
? Number(process.env.TELEGRAM_INLINE_QUERY_CONCURRENCY)
|
||||
: 5;
|
||||
let currentInlineQueries = 0;
|
||||
const INLINE_QUERY_DEBOUNCE_INTERVAL = 2000; // ms
|
||||
const INLINE_QUERY_CACHE_MAX_SIZE = 1000;
|
||||
const inlineQueryCache: Record<string, { timestamp: number; results: InlineQueryResultArticle[] }> =
|
||||
{};
|
||||
|
||||
// Cleanup old cache entries to prevent unbounded growth
|
||||
function cleanupInlineQueryCache(): void {
|
||||
const now = Date.now();
|
||||
const keys = Object.keys(inlineQueryCache);
|
||||
|
||||
// Remove expired entries
|
||||
for (const key of keys) {
|
||||
if (now - inlineQueryCache[key]!.timestamp > INLINE_QUERY_DEBOUNCE_INTERVAL) {
|
||||
delete inlineQueryCache[key];
|
||||
}
|
||||
}
|
||||
|
||||
// If still over limit, remove oldest entries
|
||||
const remainingKeys = Object.keys(inlineQueryCache);
|
||||
if (remainingKeys.length > INLINE_QUERY_CACHE_MAX_SIZE) {
|
||||
const sortedKeys = remainingKeys.sort(
|
||||
(a, b) => inlineQueryCache[a]!.timestamp - inlineQueryCache[b]!.timestamp
|
||||
);
|
||||
const toRemove = sortedKeys.slice(0, remainingKeys.length - INLINE_QUERY_CACHE_MAX_SIZE);
|
||||
for (const key of toRemove) {
|
||||
delete inlineQueryCache[key];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Cache for prompts loaded from DextoAgent
|
||||
let cachedPrompts: Record<string, import('@dexto/core').PromptInfo> = {};
|
||||
|
||||
// Helper to detect MIME type from file extension
|
||||
function getMimeTypeFromPath(filePath: string): string {
|
||||
const ext = filePath.split('.').pop()?.toLowerCase() || '';
|
||||
const mimeTypes: Record<string, string> = {
|
||||
jpg: 'image/jpeg',
|
||||
jpeg: 'image/jpeg',
|
||||
png: 'image/png',
|
||||
gif: 'image/gif',
|
||||
webp: 'image/webp',
|
        ogg: 'audio/ogg',
        mp3: 'audio/mpeg',
        wav: 'audio/wav',
        m4a: 'audio/mp4',
    };
    return mimeTypes[ext] || 'application/octet-stream';
}

// Helper to download a file URL and convert it to base64
async function downloadFileAsBase64(
    fileUrl: string,
    filePath?: string
): Promise<{ base64: string; mimeType: string }> {
    return new Promise((resolve, reject) => {
        const MAX_BYTES = 5 * 1024 * 1024; // 5 MB hard cap
        let downloadedBytes = 0;

        const req = https.get(fileUrl, (res) => {
            if (res.statusCode && res.statusCode >= 400) {
                res.resume();
                return reject(
                    new Error(`Failed to download file: ${res.statusCode} ${res.statusMessage}`)
                );
            }
            const chunks: Buffer[] = [];
            res.on('data', (chunk) => {
                downloadedBytes += chunk.length;
                if (downloadedBytes > MAX_BYTES) {
                    res.resume();
                    req.destroy(new Error('Attachment exceeds 5 MB limit'));
                    return;
                }
                chunks.push(chunk);
            });
            res.on('end', () => {
                if (req.destroyed) return;
                const buffer = Buffer.concat(chunks);
                let contentType =
                    (res.headers['content-type'] as string) || 'application/octet-stream';

                // If the server returns generic octet-stream, try to detect from the file path
                if (contentType === 'application/octet-stream' && filePath) {
                    contentType = getMimeTypeFromPath(filePath);
                }

                resolve({ base64: buffer.toString('base64'), mimeType: contentType });
            });
            res.on('error', (err) => {
                if (!req.destroyed) {
                    reject(err);
                }
            });
        });

        req.on('error', reject);

        req.setTimeout(30000, () => {
            if (!req.destroyed) {
                req.destroy(new Error('File download timed out'));
            }
        });
    });
}

// Helper to load prompts from DextoAgent
async function loadPrompts(agent: DextoAgent): Promise<void> {
    try {
        cachedPrompts = await agent.listPrompts();
        const count = Object.keys(cachedPrompts).length;
        logger.info(`📝 Loaded ${count} prompts from DextoAgent`, 'green');
    } catch (error) {
        logger.error(`Failed to load prompts: ${error instanceof Error ? error.message : error}`);
        cachedPrompts = {};
    }
}

// startTelegramBot wires up a Telegram bot given a pre-initialized DextoAgent
export async function startTelegramBot(agent: DextoAgent) {
    if (!token) {
        throw new Error('TELEGRAM_BOT_TOKEN is not set');
    }

    const agentEventBus = agent.agentEventBus;

    // Load prompts from DextoAgent at startup
    await loadPrompts(agent);

    // Create and start the Telegram bot
    const bot = new Bot(token);
    logger.info('Telegram bot started', 'green');

    // Helper to derive the session id for a Telegram user
    // Each Telegram user gets their own persistent session
    function getTelegramSessionId(userId: number): string {
        return `telegram-${userId}`;
    }

    // /start command with command buttons
    bot.command('start', async (ctx) => {
        const keyboard = new InlineKeyboard();

        // Get config prompts (most useful for general tasks)
        const configPrompts = Object.entries(cachedPrompts)
            .filter(([_, info]) => info.source === 'config')
            .slice(0, 6); // Limit to 6 prompts for a cleaner UI

        // Add prompt buttons in rows of 2
        for (let i = 0; i < configPrompts.length; i += 2) {
            const [name1, info1] = configPrompts[i]!;
            const button1 = info1.title || name1;
            keyboard.text(button1, `prompt_${name1}`);

            if (i + 1 < configPrompts.length) {
                const [name2, info2] = configPrompts[i + 1]!;
                const button2 = info2.title || name2;
                keyboard.text(button2, `prompt_${name2}`);
            }
            keyboard.row();
        }

        // Add utility buttons
        keyboard.text('🔄 Reset', 'reset').text('❓ Help', 'help');

        const helpText =
            '*Welcome to Dexto AI Bot!* 🤖\n\n' +
            'I can help you with various tasks. Here are your options:\n\n' +
            '**Direct Chat:**\n' +
            "• Send any text, image, or audio and I'll respond\n\n" +
            '**Slash Commands:**\n' +
            '• `/ask <question>` - Ask anything\n' +
            '• Use any loaded prompt as a command (e.g., `/summarize`, `/explain`)\n\n' +
            '**Quick buttons above** - Click to activate a prompt mode!';

        await ctx.reply(helpText, {
            parse_mode: 'Markdown',
            reply_markup: keyboard,
        });
    });

    // Dynamic command handlers for all prompts
    for (const promptName of Object.keys(cachedPrompts)) {
        // Register each prompt as a slash command
        bot.command(promptName, async (ctx) => {
            const userContext = ctx.match?.trim() || '';

            if (!ctx.from) {
                logger.error(`Telegram /${promptName} command received without from field`);
                return;
            }

            const sessionId = getTelegramSessionId(ctx.from.id);

            try {
                await ctx.replyWithChatAction('typing');

                // Use agent.resolvePrompt to get the prompt text with context
                const result = await agent.resolvePrompt(promptName, {
                    context: userContext,
                });

                // If the prompt has placeholders and no context was provided, ask for it
                if (!result.text.trim() && !userContext) {
                    await ctx.reply(
                        `Please provide context for this prompt.\n\nExample: \`/${promptName} your text here\``,
                        { parse_mode: 'Markdown' }
                    );
                    return;
                }

                // Generate a response using the resolved prompt
                const response = await agent.generate(result.text, sessionId);
                await ctx.reply(response.content || '🤖 No response generated');
            } catch (err) {
                logger.error(
                    `Error handling /${promptName} command: ${err instanceof Error ? err.message : err}`
                );
                const errorMessage = err instanceof Error ? err.message : 'Unknown error';
                await ctx.reply(`Error: ${errorMessage}`);
            }
        });
    }

    // Handle button callbacks (prompt buttons and actions)
    bot.on('callback_query:data', async (ctx) => {
        const action = ctx.callbackQuery.data;
        const sessionId = getTelegramSessionId(ctx.callbackQuery.from.id);

        try {
            // Handle prompt buttons (e.g., prompt_summarize, prompt_explain)
            if (action.startsWith('prompt_')) {
                const promptName = action.substring(7); // Remove the 'prompt_' prefix
                const promptInfo = cachedPrompts[promptName];

                if (!promptInfo) {
                    await ctx.answerCallbackQuery({ text: 'Prompt not found' });
                    return;
                }

                await ctx.answerCallbackQuery({
                    text: `Executing ${promptInfo.title || promptName}...`,
                });

                try {
                    await ctx.replyWithChatAction('typing');

                    // Try to resolve and execute the prompt directly
                    const result = await agent.resolvePrompt(promptName, {});

                    // If the prompt resolved to empty (requires context), ask for input
                    if (!result.text.trim()) {
                        const description =
                            promptInfo.description || `Use ${promptInfo.title || promptName}`;
                        await ctx.reply(
                            `Send your text, image, or audio for *${promptInfo.title || promptName}*:`,
                            {
                                parse_mode: 'Markdown',
                                reply_markup: {
                                    force_reply: true,
                                    selective: true,
                                    input_field_placeholder: description,
                                },
                            }
                        );
                        return;
                    }

                    // The prompt is self-contained; execute it directly
                    const response = await agent.generate(result.text, sessionId);
                    await ctx.reply(response.content || '🤖 No response generated');
                } catch (error) {
                    logger.error(
                        `Error executing prompt ${promptName}: ${error instanceof Error ? error.message : error}`
                    );
                    const errorMessage = error instanceof Error ? error.message : 'Unknown error';
                    await ctx.reply(`❌ Error: ${errorMessage}`);
                }
            } else if (action === 'reset') {
                await agent.resetConversation(sessionId);
                await ctx.answerCallbackQuery({ text: '✅ Conversation reset' });
                await ctx.reply('🔄 Conversation has been reset.');
            } else if (action === 'help') {
                // Build dynamic help text showing available prompts
                const promptNames = Object.keys(cachedPrompts).slice(0, 10);
                const promptList = promptNames.map((name) => `\`/${name}\``).join(', ');

                const helpText =
                    '**Available Features:**\n' +
                    '🎤 *Voice Messages* - Send audio for transcription\n' +
                    '🖼️ *Images* - Send photos for analysis\n' +
                    '📝 *Text* - Any question or request\n\n' +
                    '**Slash Commands** (use any prompt):\n' +
                    `${promptList}\n\n` +
                    '**Quick Tip:** Use the buttons from /start for faster interaction!';

                await ctx.answerCallbackQuery();
                await ctx.reply(helpText, { parse_mode: 'Markdown' });
            }
        } catch (error) {
            logger.error(
                `Error handling callback query: ${error instanceof Error ? error.message : error}`
            );
            await ctx.answerCallbackQuery({ text: '❌ Error occurred' });
            try {
                const errorMessage = error instanceof Error ? error.message : 'Unknown error';
                await ctx.reply(`Error: ${errorMessage}`);
            } catch (e) {
                logger.error(
                    `Failed to send error message for callback query: ${e instanceof Error ? e.message : e}`
                );
            }
        }
    });

    // Group chat slash command: /ask <your question>
    bot.command('ask', async (ctx) => {
        const question = ctx.match;
        if (!question) {
            await ctx.reply('Please provide a question, e.g. `/ask How do I ...?`', {
                parse_mode: 'Markdown',
            });
            return;
        }
        if (!ctx.from) {
            logger.error('Telegram /ask command received without from field');
            return;
        }
        const sessionId = getTelegramSessionId(ctx.from.id);
        try {
            await ctx.replyWithChatAction('typing');
            const response = await agent.generate(question, sessionId);
            await ctx.reply(response.content || '🤖 No response generated');
        } catch (err) {
            logger.error(
                `Error handling /ask command: ${err instanceof Error ? err.message : err}`
            );
            const errorMessage = err instanceof Error ? err.message : 'Unknown error';
            await ctx.reply(`Error: ${errorMessage}`);
        }
    });

    // Inline query handler (for @botname query in any chat)
    bot.on('inline_query', async (ctx) => {
        const query = ctx.inlineQuery.query;
        if (!query) {
            return;
        }

        const userId = ctx.inlineQuery.from.id;
        const queryText = query.trim();
        const cacheKey = `${userId}:${queryText}`;
        const now = Date.now();

        // Debounce: return cached results if the query repeats within the interval
        const cached = inlineQueryCache[cacheKey];
        if (cached && now - cached.timestamp < INLINE_QUERY_DEBOUNCE_INTERVAL) {
            await ctx.answerInlineQuery(cached.results);
            return;
        }

        // Concurrency cap
        if (currentInlineQueries >= MAX_CONCURRENT_INLINE_QUERIES) {
            // Too many concurrent inline queries; respond with an empty list
            await ctx.answerInlineQuery([]);
            return;
        }

        currentInlineQueries++;
        try {
            const sessionId = getTelegramSessionId(userId);
            const queryTimeout = 15000; // 15-second timeout
            // Use the trimmed query so generation matches the cache key
            const responsePromise = agent.generate(queryText, sessionId);

            const response = await Promise.race([
                responsePromise,
                new Promise<{ content: string }>((_, reject) =>
                    setTimeout(() => reject(new Error('Query timed out')), queryTimeout)
                ),
            ]);

            const resultText = response.content || 'No response';
            const results = [
                {
                    type: 'article' as const,
                    id: ctx.inlineQuery.id,
                    title: 'AI Answer',
                    input_message_content: { message_text: resultText },
                    description: resultText.substring(0, 100),
                },
            ];

            // Cache the results (clean up old entries first to prevent unbounded growth)
            cleanupInlineQueryCache();
            inlineQueryCache[cacheKey] = { timestamp: now, results };
            await ctx.answerInlineQuery(results);
        } catch (error) {
            logger.error(
                `Error handling inline query: ${error instanceof Error ? error.message : error}`
            );
            // Inform the user about the error through inline results
            try {
                await ctx.answerInlineQuery([
                    {
                        type: 'article' as const,
                        id: ctx.inlineQuery.id,
                        title: 'Error processing query',
                        input_message_content: {
                            message_text: `Sorry, I encountered an error: ${error instanceof Error ? error.message : 'Unknown error'}`,
                        },
                        description: 'Error occurred while processing your request',
                    },
                ]);
            } catch (e) {
                logger.error(
                    `Failed to send inline query error: ${e instanceof Error ? e.message : e}`
                );
            }
        } finally {
            currentInlineQueries--;
        }
    });

    // Message handler with image + audio support and tool notifications
    bot.on('message', async (ctx) => {
        let userText = ctx.message.text || ctx.message.caption || '';
        let imageDataInput: { image: string; mimeType: string } | undefined;
        let fileDataInput: { data: string; mimeType: string; filename?: string } | undefined;
        let isAudioMessage = false;

        try {
            // Detect and process images
            if (ctx.message.photo && ctx.message.photo.length > 0) {
                const photo = ctx.message.photo[ctx.message.photo.length - 1]!;
                const file = await ctx.api.getFile(photo.file_id);
                const fileUrl = `https://api.telegram.org/file/bot${token}/${file.file_path}`;
                const { base64, mimeType } = await downloadFileAsBase64(fileUrl, file.file_path);
                imageDataInput = { image: base64, mimeType };
                userText = ctx.message.caption || ''; // Use the caption if available
            }

            // Detect and process audio/voice messages
            if (ctx.message.voice) {
                isAudioMessage = true;
                const voice = ctx.message.voice;
                const file = await ctx.api.getFile(voice.file_id);
                const fileUrl = `https://api.telegram.org/file/bot${token}/${file.file_path}`;
                const { base64, mimeType } = await downloadFileAsBase64(fileUrl, file.file_path);

                // Telegram voice messages are always OGG format;
                // detect from the file path, but fall back to audio/ogg
                const audioMimeType = mimeType.startsWith('audio/') ? mimeType : 'audio/ogg';

                fileDataInput = {
                    data: base64,
                    mimeType: audioMimeType,
                    filename: 'audio.ogg',
                };

                // Add context if the message is audio-only (no caption)
                if (!userText) {
                    userText = '(User sent an audio message for transcription and analysis)';
                }
            }
        } catch (err) {
            logger.error(
                `Failed to process attached media in Telegram bot: ${err instanceof Error ? err.message : err}`
            );
            try {
                const errorMessage = err instanceof Error ? err.message : 'Unknown error';
                if (isAudioMessage) {
                    await ctx.reply(`🎤 Error processing audio: ${errorMessage}`);
                } else {
                    await ctx.reply(`🖼️ Error processing image: ${errorMessage}`);
                }
            } catch (sendError) {
                logger.error(
                    `Failed to send error message to user: ${sendError instanceof Error ? sendError.message : sendError}`
                );
            }
            return; // Stop processing if media handling fails
        }

        // Validate that we have something to process
        if (!userText && !imageDataInput && !fileDataInput) return;

        // Get the session for this user
        // ctx.from can be undefined for channel posts or anonymous admin messages
        if (!ctx.from) {
            logger.debug(
                'Telegram message without user context (channel post or anonymous admin); skipping'
            );
            return;
        }

        const sessionId = getTelegramSessionId(ctx.from.id);

        // Subscribe to tool-call events
        const toolCallHandler = (payload: {
            toolName: string;
            args: unknown;
            callId?: string;
            sessionId: string;
        }) => {
            // Filter by sessionId to avoid cross-session leakage
            if (payload.sessionId !== sessionId) return;
            ctx.reply(`🔧 Calling *${payload.toolName}*`, { parse_mode: 'Markdown' }).catch((e) =>
                logger.warn(`Failed to notify tool call: ${e}`)
            );
        };
        agentEventBus.on('llm:tool-call', toolCallHandler);

        try {
            await ctx.replyWithChatAction('typing');

            // Build the content array from the message and attachments
            const content: import('@dexto/core').ContentPart[] = [];
            if (userText) {
                content.push({ type: 'text', text: userText });
            }
            if (imageDataInput) {
                content.push({
                    type: 'image',
                    image: imageDataInput.image,
                    mimeType: imageDataInput.mimeType,
                });
            }
            if (fileDataInput) {
                content.push({
                    type: 'file',
                    data: fileDataInput.data,
                    mimeType: fileDataInput.mimeType,
                    filename: fileDataInput.filename,
                });
            }

            const response = await agent.generate(content, sessionId);

            await ctx.reply(response.content || '🤖 No response generated');

            // Log token usage if available (optional analytics)
            if (response.usage) {
                logger.debug(
                    `Session ${sessionId} - Tokens: input=${response.usage.inputTokens}, output=${response.usage.outputTokens}`
                );
            }
        } catch (error) {
            logger.error(
                `Error handling Telegram message: ${error instanceof Error ? error.message : error}`
            );
            const errorMessage = error instanceof Error ? error.message : 'Unknown error';
            await ctx.reply(`❌ Error: ${errorMessage}`);
        } finally {
            agentEventBus.off('llm:tool-call', toolCallHandler);
        }
    });

    // Start the bot
    bot.start();
    return bot;
}
38
dexto/examples/telegram-bot/main.ts
Normal file
@@ -0,0 +1,38 @@
#!/usr/bin/env node

import 'dotenv/config';
import { DextoAgent } from '@dexto/core';
import { loadAgentConfig, enrichAgentConfig } from '@dexto/agent-management';
import { startTelegramBot } from './bot.js';

async function main() {
    try {
        // Load the agent configuration from the local agent-config.yml
        console.log('🚀 Initializing Telegram bot...');
        const configPath = './agent-config.yml';
        const config = await loadAgentConfig(configPath);
        const enrichedConfig = enrichAgentConfig(config, configPath);

        // Create and start the Dexto agent
        const agent = new DextoAgent(enrichedConfig, configPath);
        await agent.start();

        // Start the Telegram bot
        console.log('📡 Starting Telegram bot connection...');
        await startTelegramBot(agent);

        console.log('✅ Telegram bot is running! Start with /start command.');

        // Graceful shutdown
        process.on('SIGINT', async () => {
            console.log('\n🛑 Shutting down...');
            await agent.stop();
            process.exit(0);
        });
    } catch (error) {
        console.error('❌ Failed to start Telegram bot:', error);
        process.exit(1);
    }
}

main();
30
dexto/examples/telegram-bot/package.json
Normal file
@@ -0,0 +1,30 @@
{
    "name": "dexto-telegram-bot-example",
    "version": "1.0.0",
    "description": "Telegram bot integration example using Dexto",
    "type": "module",
    "main": "dist/main.js",
    "scripts": {
        "start": "tsx main.ts",
        "build": "tsc",
        "dev": "tsx watch main.ts"
    },
    "keywords": [
        "dexto",
        "telegram",
        "bot",
        "ai",
        "agent"
    ],
    "license": "MIT",
    "dependencies": {
        "@dexto/core": "^1.1.3",
        "@dexto/agent-management": "^1.1.3",
        "grammy": "^1.38.2",
        "dotenv": "^16.3.1"
    },
    "devDependencies": {
        "tsx": "^4.7.1",
        "typescript": "^5.5.4"
    }
}
1886
dexto/examples/telegram-bot/pnpm-lock.yaml
generated
Normal file
File diff suppressed because it is too large