feat: v1.3.0 — plan-first workflow, OpenRouter provider, enhanced prompt engine

Major changes:
- Plan-first workflow: AI generates structured plan before code, with
  plan review card (Modify Plan / Start Coding / Skip to Code)
- Post-coding UX: Preview + Request Modifications buttons after code gen
- OpenRouter integration: 4th AI provider with 20+ model support
- Enhanced prompt engine: 9 strategies, 11+ intent patterns, modular
- PLAN MODE system prompt block in all 4 services
- Fixed stale React closure in approveAndGenerate with isApproval flag
- Fixed canvas auto-opening during plan phase with wasIdle gate
- Updated README, CHANGELOG, .env.example, version bump to 1.3.0
Author: admin
Date: 2026-03-18 18:45:37 +00:00
parent cca11fe07a
commit a4b7a0d9e4
17 changed files with 3189 additions and 358 deletions

.env.example

@@ -15,6 +15,11 @@ ZAI_API_KEY=
ZAI_GENERAL_ENDPOINT=https://api.z.ai/api/paas/v4
ZAI_CODING_ENDPOINT=https://api.z.ai/api/coding/paas/v4
# OpenRouter API
# Get API key from https://openrouter.ai/keys
OPENROUTER_API_KEY=
OPENROUTER_DEFAULT_MODEL=google/gemini-2.0-flash-exp:free
# Site Configuration (Required for OAuth in production)
# Set to your production URL (e.g., https://your-app.vercel.app)
NEXT_PUBLIC_SITE_URL=http://localhost:6002

CHANGELOG.md (new file, 115 lines)

@@ -0,0 +1,115 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.3.0] - 2025-03-18
### Added
- **OpenRouter Integration** — 4th AI provider with API key auth, 20+ model support
- New `lib/services/openrouter.ts` streaming service
- Provider selector in AI Assist: Qwen, Ollama, Z.AI, OpenRouter
- Default model: `google/gemini-2.0-flash-exp:free`
- Custom model selector with popular free/paid model presets
- Settings panel: API key input with validation and model picker
- **Plan-First Workflow** — AI now generates a structured plan before code
- PLAN MODE instructions injected into all 4 service system prompts
- Plan card UI with architecture, tech stack, files, and steps
- `parsePlanFromResponse()` extracts plans from AI markdown output
- `[PLAN]` tags hidden from displayed chat messages
- Three action buttons: **Modify Plan** / **Start Coding** / **Skip to Code**
- **Post-Coding UX** — Preview + Request Modifications after code generation
- After "Start Coding" approval, AI generates code with `[PREVIEW]` tags
- Canvas opens automatically with renderable previews
- Two post-coding buttons: **Preview** (re-opens canvas) and **Request Modifications**
- `isApproval` flag prevents stale React closure bugs in approval flow
- **Enhanced Prompt Engine** — New modular prompt enhancement system
- `lib/enhance-engine.ts` with 9 enhancement strategies
- Strategies: clarify, add-context, add-constraints, structure, add-examples, set-tone, expand, simplify, chain-of-thought
- Context-aware enhancement based on detected intent type
- 11+ intent detection patterns (coding, creative, analysis, etc.)
- Smart strategy selection per intent for optimal prompt refinement
- **Streaming Plan Mode** — Real-time plan parsing during AI response
- `wasIdle` flag captures initial request phase before state updates
- Canvas display suppressed during plan generation, enabled after approval
- Post-stream routing: plan card for initial requests, preview for approvals
- Tab `showCanvas` state gated by plan phase
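The `[PLAN]` tag contract behind the plan-first workflow can be sketched as follows. The JSON field names (`summary`, `architecture`, `techStack`, `files`, `steps`) match what `parsePlanFromResponse()` accepts in this release; the sample response text itself is illustrative.

```typescript
// Illustrative AI response carrying a JSON plan between [PLAN] tags.
const aiResponse = [
  "Here is my plan:",
  '[PLAN]{"summary":"Todo app","architecture":"Next.js App Router",',
  '"techStack":["Next.js","Tailwind"],"files":["app/page.tsx"],',
  '"steps":["Scaffold UI","Wire state"]}[/PLAN]',
].join("\n");

// Extract the plan the same way parsePlanFromResponse() does...
const planTagMatch = aiResponse.match(/\[PLAN\]([\s\S]*?)\[\/PLAN\]/i);
const plan = planTagMatch ? JSON.parse(planTagMatch[1].trim()) : null;

// ...and hide the tags from the chat display, as parseStreamingContent() does.
const chatDisplay = aiResponse.replace(/\[PLAN\][\s\S]*?\[\/PLAN\]/i, "").trim();
```

If the tags are absent or the payload is not JSON, the parser falls back to scanning markdown headings for architecture/stack/files/steps sections.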
### Changed
- **AIAssist.tsx** — Major refactor for plan-first flow
- `handleSendMessage` now accepts `isApproval` parameter to prevent stale closures
- `approveAndGenerate()` passes `isApproval=true` to bypass idle detection
- `assistStep` state machine: `idle -> plan -> generating -> preview`
- `parseStreamingContent()` filters `[PLAN]` tags from displayed output
- **PromptEnhancer.tsx** — Rebuilt with modular enhance engine
- Moved enhancement logic to `lib/enhance-engine.ts`
- Added expand, simplify, and chain-of-thought strategies
- Improved intent detection and strategy mapping
- **SettingsPanel.tsx** — Added OpenRouter provider configuration
- API key input with validation
- Model selector with preset dropdown
- Provider-specific endpoint display
- **model-adapter.ts** — Extended with OpenRouter provider support
- New adapter mapping for OpenRouter service
- Unified interface across all 4 providers
- **translations.ts** — Added i18n keys for plan mode, OpenRouter, post-coding actions
- Keys: `modifyPlan`, `startCoding`, `skipToCode`, `requestModifications`
- OpenRouter provider labels and descriptions
- English, Russian, Hebrew translations updated
- **store.ts** — Added `selectedProvider` state for multi-provider selection
- **types/index.ts** — Added `PreviewData` interface for typed canvas rendering
- **adapter-instance.ts** — Registered OpenRouter in provider registry
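The `assistStep` transitions described above can be summarized as a small table; the exact edge set is inferred from the changelog and the button handlers, not copied verbatim from `AIAssist.tsx`.

```typescript
type AssistStep = "idle" | "plan" | "generating" | "preview";

// Inferred transition table: which steps each state may move to.
const transitions: Record<AssistStep, AssistStep[]> = {
  idle: ["plan"],                  // user sends the initial request
  plan: ["generating", "idle"],    // Start Coding / Modify Plan (resets to idle)
  generating: ["preview", "idle"], // code produced / error or no preview
  preview: ["idle"],               // Request Modifications
};

const canTransition = (from: AssistStep, to: AssistStep): boolean =>
  transitions[from].includes(to);
```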
### Fixed
- **Stale React closure** in `approveAndGenerate` — `setAssistStep("generating")` followed by `handleSendMessage()` read a stale `assistStep` value. Fixed with an explicit `isApproval` boolean parameter.
- **Plan card reappearing after code generation** — Post-stream logic now correctly routes to `preview` mode after approval coding, not back to `plan` mode.
- **Canvas auto-opening during plan phase** — `setShowCanvas(true)` in `onChunk` now gated by `!wasIdle` flag.
- **i18n missing keys** — Added `syncComplete` for Hebrew, fixed double commas in multiple translation strings.
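A framework-free sketch of the stale-closure fix: a value captured before an update never sees that update, which is exactly how a React render's closure reads outdated state. Names mirror `AIAssist.tsx`, but the mechanics shown are plain TypeScript, not the component code.

```typescript
type Step = "idle" | "plan" | "generating" | "preview";
let assistStep: Step = "plan";

// Routing that depends on state captured in a closure vs. an explicit flag.
const routeBuggy = (step: Step) => (step === "generating" ? "code" : "plan");
const routeFixed = (isApproval: boolean) => (isApproval ? "code" : "plan");

// Buggy: capture state, then "update" it; the captured value stays stale,
// just as a render's closure does after setAssistStep() queues an update.
const captured = assistStep;              // "plan"
assistStep = "generating";                // in React: setAssistStep("generating")
const buggyResult = routeBuggy(captured); // still routes as a plan request

// Fixed: the caller states its intent explicitly, bypassing stale state.
const fixedResult = routeFixed(true);     // routes to code generation
```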
### Technical Details
- Files modified: 11 (960 insertions, 194 deletions)
- Files added: 2 (`lib/enhance-engine.ts`, `lib/services/openrouter.ts`)
- Total project lines: ~10,179 across core files
- System prompt PLAN MODE block added to: `qwen-oauth.ts`, `ollama-cloud.ts`, `zai-plan.ts`, `openrouter.ts`
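A hedged sketch of the chunk handling a streaming service like `lib/services/openrouter.ts` needs. OpenRouter exposes an OpenAI-compatible SSE endpoint; the payload shape below (`choices[0].delta.content`) follows that convention and is an assumption, not code from this commit.

```typescript
// Pull text deltas out of a raw SSE chunk ("data: {...}" lines).
function extractDeltas(sseChunk: string): string[] {
  const out: string[] = [];
  for (const line of sseChunk.split("\n")) {
    const t = line.trim();
    if (!t.startsWith("data:")) continue;
    const payload = t.slice(5).trim();
    if (payload === "[DONE]") continue; // end-of-stream sentinel
    try {
      const delta = JSON.parse(payload)?.choices?.[0]?.delta?.content;
      if (typeof delta === "string") out.push(delta);
    } catch {
      // Partial JSON split across chunks; a real service would buffer it.
    }
  }
  return out;
}
```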
## [1.2.0] - 2025-01-19
### Added
- **SEO Agent Behavior Fixes**
- SEO agent now stays locked and answers queries through an SEO lens
- Smart agent suggestions via `[SUGGEST_AGENT:xxx]` marker for clearly non-SEO tasks
- Visual suggestion banner with Switch/Dismiss buttons
- Prevented unwanted agent auto-switching mid-response
- **z.ai API Validation**
- Real-time API key validation with 500ms debounce
- Inline status indicators:
- Checkmark with "Validated Xm ago" for valid keys
- Red X with error message for invalid keys
- Loading spinner during validation
- "Test Connection" button for manual re-validation
- Persistent validation cache (5 minutes) in localStorage
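The 5-minute cache rule can be sketched as a freshness check; the interface and names below are illustrative, not taken from `SettingsPanel.tsx` or `store.ts`.

```typescript
const CACHE_TTL_MS = 5 * 60 * 1000; // 5-minute validation cache

interface CachedValidation {
  valid: boolean;
  validatedAt: number; // Date.now() at validation time
}

// A cached result is trusted only while it is younger than the TTL;
// otherwise the key is re-validated against the API.
const isFresh = (entry: CachedValidation, now: number): boolean =>
  now - entry.validatedAt < CACHE_TTL_MS;
```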
### Changed
- Updated Settings panel with improved UX for API key management
- Enhanced agent selection behavior to prevent unintended switches
## [1.1.0] - 2025-01-15
### Added
- GitHub integration for pushing AI-generated artifacts
- XLSX export functionality for Google Ads campaigns
- High-fidelity HTML report generation
- OAuth token management for Qwen API
## [1.0.0] - 2025-01-01
### Added
- Initial release of PromptArch
- Multi-provider AI support (Qwen, Ollama, Z.AI)
- Prompt Enhancer with 11+ intent patterns
- PRD Generator with structured output
- Action Plan generator with framework recommendations
- Visual canvas for live code rendering
- Multi-language support (English, Russian, Hebrew)

README.md (222 lines)

@@ -1,4 +1,4 @@
# PromptArch: AI Orchestration Platform

> **Development Note**: This entire platform was developed exclusively using [TRAE.AI IDE](https://trae.ai) powered by elite [GLM 4.7 model](https://z.ai/subscribe?ic=R0K78RJKNW).
> **Learn more about this architecture [here](https://z.ai/subscribe?ic=R0K78RJKNW).**
@@ -7,31 +7,59 @@
> **Fork Note**: This project is a specialized fork of [ClavixDev/Clavix](https://github.com/ClavixDev/Clavix), reimagined as a modern web-based platform for visual prompt engineering and product planning.

Transform vague ideas into production-ready prompts and PRDs. PromptArch is an AI orchestration platform designed for software architects and Vibe Coders, featuring a **plan-first workflow** with multi-provider AI support and live canvas rendering.

**Developed by Roman | RyzenAdvanced**
- **Gitea Repository**: [admin/PromptArch](https://github.rommark.dev/admin/PromptArch)
- **Live Site**: [rommark.dev/tools/promptarch](https://rommark.dev/tools/promptarch/)
- **Telegram**: [@VibeCodePrompterSystem](https://t.me/VibeCodePrompterSystem)

## Core Capabilities

| Feature | Description |
|---------|-------------|
| **AI Assist** | Plan-first workflow: describe a task, get a structured plan, approve, then generate working code with live preview |
| **Prompt Enhancer** | Refine vague prompts into surgical instructions using 9 enhancement strategies and 11+ intent patterns |
| **PRD Generator** | Convert ideas into structured Product Requirements Documents |
| **Action Plan** | Decompose PRDs into actionable development steps and framework recommendations |
| **Google Ads Generator** | Generate ad campaigns with XLSX and HTML report export |
| **Slides Generator** | Create presentation decks from prompts |
| **Market Researcher** | AI-powered market research and analysis |

## Features

### Plan-First Workflow (v1.3.0)
- AI generates a structured plan (architecture, tech stack, files, steps) before any code
- Plan Review Card with **Modify Plan**, **Start Coding**, and **Skip to Code** actions
- After code generation: **Preview** canvas + **Request Modifications** buttons
- Streaming plan mode with real-time parsing and canvas suppression

### Multi-Provider AI (4 Providers)

| Provider | Auth | Models |
|----------|------|--------|
| **Qwen Code** | OAuth (2,000 free req/day) | Qwen Coder models |
| **Ollama Cloud** | API Key | Open-source models |
| **Z.AI Plan** | API Key | GLM general + coding models |
| **OpenRouter** | API Key | 20+ models (Gemini, Llama, Mistral, etc.) |

### Visual Canvas
- Live code rendering with `[PREVIEW]` tags
- HTML, React, Python, and more — rendered in-browser
- Auto-detect renderable vs. code-only previews

### Enhanced Prompt Engine
- 9 strategies: clarify, add-context, add-constraints, structure, add-examples, set-tone, expand, simplify, chain-of-thought
- Context-aware strategy selection based on detected intent
- 11+ intent detection patterns (coding, creative, analysis, etc.)

### Other
- Multi-language support (English, Russian, Hebrew)
- Download generated artifacts as ZIP
- Push to GitHub integration
- Resilient multi-tier provider fallbacks

## Quick Start

1. **Clone & Install**:
```bash
@@ -46,6 +74,12 @@ Transform vague ideas into production-ready prompts and PRDs. PromptArch is an e
cp .env.example .env
```
Configure at least one provider:
- **Qwen**: Get OAuth credentials from [qwen.ai](https://qwen.ai)
- **Ollama**: Get API key from [ollama.com/cloud](https://ollama.com/cloud)
- **Z.AI**: Get API key from [docs.z.ai](https://docs.z.ai)
- **OpenRouter**: Get API key from [openrouter.ai/keys](https://openrouter.ai/keys) (free tier available)

3. **Launch**:
```bash
npm run dev
@@ -53,109 +87,81 @@ Transform vague ideas into production-ready prompts and PRDs. PromptArch is an e
4. Open [http://localhost:3000](http://localhost:3000) to begin.

## Tech Stack

- **Framework**: Next.js 15 (App Router, Turbopack)
- **Styling**: Tailwind CSS
- **State Management**: Zustand
- **Components**: shadcn/ui (Radix UI)
- **Icons**: Lucide React
- **Markdown**: react-markdown
- **Language**: TypeScript

## Project Structure

```
promptarch/
  components/
    AIAssist.tsx            # Main AI chat with plan-first workflow (1453 lines)
    PromptEnhancer.tsx      # Prompt enhancement UI with intent detection (556 lines)
    SettingsPanel.tsx       # Provider configuration and API key management (569 lines)
    Sidebar.tsx             # Navigation sidebar
    GoogleAdsGenerator.tsx  # Google Ads campaign generator
    PRDGenerator.tsx        # Product Requirements Document generator
    ActionPlanGenerator.tsx # Action plan decomposition
    SlidesGenerator.tsx     # Presentation deck generator
    MarketResearcher.tsx    # Market research tool
    HistoryPanel.tsx        # Chat history management
  lib/
    enhance-engine.ts       # Modular prompt enhancement (9 strategies)
    store.ts                # Zustand state store
    artifact-utils.ts       # Preview/rendering utilities
    export-utils.ts         # Export to XLSX/HTML/ZIP
    services/
      qwen-oauth.ts         # Qwen OAuth streaming service
      ollama-cloud.ts       # Ollama Cloud streaming service
      zai-plan.ts           # Z.AI Plan streaming service
      openrouter.ts         # OpenRouter streaming service
      model-adapter.ts      # Unified provider adapter
      adapter-instance.ts   # Provider registry
  i18n/
    translations.ts         # EN/RU/HE translations
  types/
    index.ts                # TypeScript interfaces
```

## Versioning

This project follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html). See [CHANGELOG.md](CHANGELOG.md) for detailed release notes.

| Version | Date | Highlights |
|---------|------|------------|
| [1.3.0](CHANGELOG.md#130---2025-03-18) | 2025-03-18 | Plan-first workflow, OpenRouter, post-coding UX, enhanced prompt engine |
| [1.2.0](CHANGELOG.md#120---2025-01-19) | 2025-01-19 | SEO agent fixes, Z.AI API validation |
| [1.1.0](CHANGELOG.md#110---2025-01-15) | 2025-01-15 | GitHub push, XLSX/HTML export, OAuth management |
| [1.0.0](CHANGELOG.md#100---2025-01-01) | 2025-01-01 | Initial release |
## Development

```bash
npm install      # Install dependencies
npm run dev      # Development server (Turbopack)
npm run build    # Production build
npm start        # Start production server
npm run lint     # Lint code
```
## Attribution & Credits
**Author**: Roman | RyzenAdvanced
- **Gitea**: [admin/PromptArch](https://github.rommark.dev/admin/PromptArch)
- **Telegram**: [@VibeCodePrompterSystem](https://t.me/VibeCodePrompterSystem)
**Forked from**: [ClavixDev/Clavix](https://github.com/ClavixDev/Clavix)
- Visual and architectural evolution of the Clavix framework
**Development Platform**: [TRAE.AI IDE](https://trae.ai) powered by [GLM 4.7](https://z.ai/subscribe?ic=R0K78RJKNW)
## License

ISC

components/AIAssist.tsx

@@ -366,6 +366,9 @@ function parseStreamingContent(text: string, currentAgent: string) {
.replace(/\[+PREVIEW:[\w-]+:?[\w-]+?\]+[\s\S]*?(?:\[\/(?:PREVIEW|APP|WEB|SEO|CODE|DESIGN|SMM|PM|CONTENT)\]+|$)/gi, "")
// Hide closing tags
.replace(/\[\/(?:PREVIEW|APP|WEB|SEO|CODE|DESIGN|SMM|PM|CONTENT)\]+/gi, "")
// Hide PLAN tags from chat display
.replace(/\[PLAN\][\s\S]*?\[\/PLAN\]/gi, "")
.replace(/\[PLAN\][\s\S]*?$/gi, "")
// Hide ANY partial tag sequence at the very end (greedy)
.replace(/\[+[^\]]*$/g, "")
.trim();
@@ -437,6 +440,41 @@ function parseStreamingContent(text: string, currentAgent: string) {
return { chatDisplay, preview, agent, status, suggestedAgent };
}
// --- Plan Parser ---
function parsePlanFromResponse(text: string): { plan: Record<string, any> | null; cleanedText: string } {
let plan: Record<string, any> | null = null;
let cleanedText = text;
const planTagMatch = text.match(/\[PLAN\]([\s\S]*?)\[\/PLAN\]/i);
if (planTagMatch) {
try {
const d = JSON.parse(planTagMatch[1].trim());
plan = { rawText: d.summary || d.description || "", architecture: d.architecture || "", techStack: Array.isArray(d.techStack) ? d.techStack : [], files: Array.isArray(d.files) ? d.files : [], steps: Array.isArray(d.steps) ? d.steps : [] };
cleanedText = text.replace(/\[PLAN\][\s\S]*?\[\/PLAN\]/i, "").trim();
return { plan, cleanedText };
} catch (e) { /* not JSON */ }
}
const pLines = text.split("\n");
let arch = "", stack: string[] = [], filesL: string[] = [], stepsL: string[] = [], summaryL: string[] = [];
let section = "";
for (const ln of pLines) {
const s = ln.trim();
if (/^#+\s*(?:technical\s*)?architecture/i.test(s)) { section = "arch"; continue; }
if (/^#+\s*(?:tech\s*stack|technologies|frameworks)/i.test(s)) { section = "stack"; continue; }
if (/^#+\s*(?:files|modules|components|pages)/i.test(s)) { section = "files"; continue; }
if (/^#+\s*(?:steps|implementation|tasks|action\s*plan|timeline)/i.test(s)) { section = "steps"; continue; }
if (/^#+/.test(s)) { if (section && section !== "summary") section = "summary"; continue; }
if (section === "arch" && s) arch += s + " ";
else if (section === "stack") { const m = s.match(/^[-*`\s]*(.+)/); if (m) stack.push(m[1].replace(/[`*_]/g, "").trim()); }
else if (section === "files") { const m = s.match(/^[-*]\s*(.+)/); if (m) filesL.push(m[1].replace(/[`*_]/g, "").trim()); }
else if (section === "steps") { const m = s.match(/\d+\.\s*(.+)/) || s.match(/^[-*]\s*(.+)/); if (m) stepsL.push(m[1].replace(/[`*_]/g, "").trim()); }
else if (!section && s && !/^#/.test(s)) summaryL.push(s);
}
if (arch || stack.length || filesL.length || stepsL.length) {
plan = { rawText: summaryL.slice(0, 10).join("\n").trim(), architecture: arch.trim(), techStack: stack, files: filesL, steps: stepsL };
}
return { plan, cleanedText };
}
// --- Main Component ---
export default function AIAssist() {
@@ -572,7 +610,7 @@ export default function AIAssist() {
loadModels();
}, [selectedProvider, selectedModels, setSelectedModel]);

const handleSendMessage = async (e?: React.FormEvent, forcedPrompt?: string, isApproval?: boolean) => {
if (e) e.preventDefault();
const finalInput = forcedPrompt || input;
if (!finalInput.trim() || isProcessing) return;
@@ -596,6 +634,8 @@ export default function AIAssist() {
setInput("");
}
// Capture whether this is the initial plan phase (before any code generation)
const wasIdle = !isApproval && (assistStep === "idle" || assistStep === "plan");
setIsProcessing(true);
if (assistStep === "idle") setAssistStep("plan");
@@ -650,9 +690,12 @@ export default function AIAssist() {
if (preview && JSON.stringify(preview) !== JSON.stringify(lastParsedPreview)) {
setPreviewData(preview);
lastParsedPreview = preview;
// Only show canvas if NOT in initial plan phase
if (!wasIdle) {
setShowCanvas(true);
if (isPreviewRenderable(preview)) setViewMode("preview");
}
}
if (agent !== currentAgent) {
setCurrentAgent(agent);
@@ -672,7 +715,7 @@ export default function AIAssist() {
history: [...updatedHistory.slice(0, -1), lastMsg],
previewData: preview || undefined,
currentAgent: agent,
showCanvas: !!preview && !wasIdle
});
},
signal: controller.signal
@@ -683,10 +726,22 @@ export default function AIAssist() {
if (!response.success) throw new Error(response.error);

// When this was the initial request from idle/plan, ALWAYS show plan card
if (wasIdle) {
setAssistStep("plan");
const { plan: parsedPlan } = parsePlanFromResponse(accumulated);
if (parsedPlan) {
setAiPlan(parsedPlan);
} else {
setAiPlan({ rawText: accumulated, architecture: "", techStack: [], files: [], steps: [] });
}
} else if ((lastParsedPreview as PreviewData | null)?.data) {
// After approval: show the generated preview
setAssistStep("preview");
setShowCanvas(true);
if (isPreviewRenderable(lastParsedPreview)) setViewMode("preview");
} else {
setAssistStep("idle");
} }
} catch (error) {
@@ -703,7 +758,7 @@ export default function AIAssist() {
const approveAndGenerate = () => {
setAssistStep("generating");
handleSendMessage(undefined, "Approved. Please generate the code according to the plan.", true);
};

const stopGeneration = () => {
@@ -793,7 +848,7 @@ export default function AIAssist() {
<div className="flex flex-col items-end gap-2">
<div className="flex items-center gap-1.5 p-1 bg-blue-50/50 dark:bg-blue-900/20 rounded-xl border border-blue-100/50 dark:border-blue-900/50">
{(["qwen", "ollama", "zai", "openrouter"] as const).map((provider) => (
<button
key={provider}
onClick={() => setSelectedProvider(provider)}
@@ -804,7 +859,7 @@ export default function AIAssist() {
: "text-slate-400 hover:text-blue-500 hover:bg-blue-50 dark:text-blue-200/40 dark:hover:text-blue-200"
)}
>
{(provider === "qwen" ? "Qwen" : provider === "ollama" ? "Ollama" : provider === "openrouter" ? "OpenRouter" : "Z.AI")}
</button>
))}
</div>
@@ -1060,6 +1115,12 @@ export default function AIAssist() {
</h3>
<div className="space-y-4">
<div>
{aiPlan.rawText && (
<div className="mb-4 p-3 rounded-xl bg-slate-500/5 border border-slate-500/10 max-h-[200px] overflow-y-auto">
<p className="text-[11px] font-bold text-slate-500 uppercase mb-2">{t.planSummary}</p>
<p className="text-xs text-slate-400 leading-relaxed">{aiPlan.rawText}</p>
</div>
)}
<p className="text-[11px] font-bold text-slate-500 uppercase mb-1">{t.architecture}</p>
<p className="text-xs text-slate-400">{aiPlan.architecture}</p>
</div>
@@ -1077,12 +1138,30 @@ export default function AIAssist() {
<p className="text-[10px] text-slate-400">{t.filesPlanned(aiPlan.files?.length || 0)}</p>
</div>
</div>
<div className="grid grid-cols-2 gap-2 mt-4">
<Button
onClick={() => { setAiPlan(null); setAssistStep("idle"); setInput("Modify this plan: "); setTimeout(() => { const el = document.querySelector<HTMLInputElement>(`[data-ai-input]`); if (el) el.focus(); }, 100); }}
disabled={isProcessing}
variant="outline"
className="bg-slate-500/10 hover:bg-slate-500/20 border-slate-500/20 text-slate-300 font-black uppercase text-[10px] tracking-widest py-4 rounded-xl"
>
<LayoutPanelLeft className="h-3.5 w-3.5 mr-1.5" /> {t.modifyPlan}
</Button>
<Button
onClick={approveAndGenerate}
disabled={isProcessing}
className="bg-blue-600 hover:bg-blue-500 text-white font-black uppercase text-[10px] tracking-widest py-4 rounded-xl shadow-lg shadow-blue-500/20"
>
{isProcessing ? t.startingEngine : t.startCoding}
</Button>
</div>
<Button
onClick={() => { setAiPlan(null); setAssistStep("idle"); }}
disabled={isProcessing}
variant="ghost"
className="w-full mt-2 text-slate-500 hover:text-slate-300 font-bold uppercase text-[9px] tracking-widest py-3 rounded-xl"
>
{t.skipPlan}
</Button>
</div>
</div>
@@ -1103,6 +1182,25 @@ export default function AIAssist() {
<Zap className="h-3.5 w-3.5 mr-2" /> {t.activateArtifact}
</Button>
)}
{/* Post-coding action buttons */}
{msg.role === "assistant" && assistStep === "preview" && i === aiAssistHistory.length - 1 && !isProcessing && (
<div className="mt-4 grid grid-cols-2 gap-2 animate-in zoom-in-95 duration-300">
<Button
onClick={() => { setShowCanvas(true); setViewMode(isPreviewRenderable(previewData as PreviewData) ? "preview" : "code"); }}
className="bg-blue-600 hover:bg-blue-500 text-white font-black uppercase text-[10px] tracking-widest py-4 rounded-xl shadow-lg shadow-blue-500/20"
>
<Zap className="h-3.5 w-3.5 mr-1.5" /> {t.activateArtifact}
</Button>
<Button
onClick={() => { setAssistStep("idle"); setInput("Modify this: "); setTimeout(() => { const el = document.querySelector<HTMLInputElement>(`[data-ai-input]`); if (el) el.focus(); }, 100); }}
variant="outline"
className="bg-slate-500/10 hover:bg-slate-500/20 border-slate-500/20 text-slate-300 font-black uppercase text-[10px] tracking-widest py-4 rounded-xl"
>
<LayoutPanelLeft className="h-3.5 w-3.5 mr-1.5" /> Request Modifications
</Button>
</div>
)}
</div>
{msg.role === "assistant" && isProcessing && i === aiAssistHistory.length - 1 && status && (
@@ -1137,7 +1235,7 @@ export default function AIAssist() {
<form onSubmit={handleSendMessage} className="relative group">
<div className="absolute inset-0 bg-blue-500/5 rounded-[1.5rem] blur-xl group-focus-within:bg-blue-500/10 transition-all" />
<Input
data-ai-input="" value={input}
onChange={(e) => setInput(e.target.value)}
placeholder={t.placeholder}
disabled={isProcessing}


@@ -1,14 +1,54 @@
"use client";
import { useState, useEffect, useCallback } from "react";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Textarea } from "@/components/ui/textarea";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
import {
Sparkles, Copy, RefreshCw, Loader2, CheckCircle2, Settings,
AlertTriangle, Info, ChevronDown, ChevronUp, Target, Layers,
Zap, Brain, FileCode, Bot, Search, Image, Code, Globe
} from "lucide-react";
import { cn } from "@/lib/utils";
import { translations } from "@/lib/i18n/translations";
import {
runDiagnostics,
detectToolCategory,
selectTemplate,
generateAnalysisReport,
estimateTokens,
type AnalysisReport,
type DiagnosticResult,
TOOL_CATEGORIES,
TEMPLATES,
type ToolCategory,
} from "@/lib/enhance-engine";
const toolCategoryIcons: Record<string, React.ElementType> = {
reasoning: Brain,
thinking: Brain,
openweight: Zap,
agentic: Bot,
ide: Code,
fullstack: Globe,
image: Image,
search: Search,
};
const toolCategoryNames: Record<string, string> = {
reasoning: "Reasoning LLM",
thinking: "Thinking LLM",
openweight: "Open-Weight",
agentic: "Agentic AI",
ide: "IDE AI",
fullstack: "Full-Stack Gen",
image: "Image AI",
search: "Search AI",
};
type EnhanceMode = "quick" | "deep";
export default function PromptEnhancer() {
const {
@@ -34,6 +74,13 @@ export default function PromptEnhancer() {
const common = translations[language].common;
const [copied, setCopied] = useState(false);
const [toolCategory, setToolCategory] = useState<string>("reasoning");
const [templateId, setTemplateId] = useState<string>("RTF");
const [enhanceMode, setEnhanceMode] = useState<EnhanceMode>("deep");
const [showDiagnostics, setShowDiagnostics] = useState(false);
const [diagnostics, setDiagnostics] = useState<DiagnosticResult[]>([]);
const [analysis, setAnalysis] = useState<AnalysisReport | null>(null);
const [autoDetected, setAutoDetected] = useState(false);
const selectedModel = selectedModels[selectedProvider];
const models = availableModels[selectedProvider] || modelAdapter.getAvailableModels(selectedProvider);
@@ -55,6 +102,36 @@ export default function PromptEnhancer() {
}
}, [selectedProvider]);
const analyzePrompt = useCallback((prompt: string) => {
if (!prompt.trim()) return;
const report = generateAnalysisReport(prompt);
if (!autoDetected) {
if (report.suggestedTool) setToolCategory(report.suggestedTool);
if (report.suggestedTemplate) setTemplateId(report.suggestedTemplate.framework);
setAutoDetected(true);
}
setDiagnostics(report.diagnostics);
setAnalysis(report);
}, [autoDetected]);
useEffect(() => {
if (!currentPrompt.trim()) {
setDiagnostics([]);
setAnalysis(null);
setAutoDetected(false);
return;
}
const timer = setTimeout(() => {
analyzePrompt(currentPrompt);
}, 600);
return () => clearTimeout(timer);
}, [currentPrompt, analyzePrompt]);
const loadAvailableModels = async () => {
const fallbackModels = modelAdapter.getAvailableModels(selectedProvider);
setAvailableModels(selectedProvider, fallbackModels);
@@ -86,21 +163,28 @@ export default function PromptEnhancer() {
setProcessing(true);
setError(null);
const diagnosticsText = enhanceMode === "deep" && diagnostics.length > 0
? diagnostics.filter(d => d.detected).map(d => `- ${d.pattern.name}: ${d.suggestion}`).join("\n")
: "";
const options = enhanceMode === "deep"
? { toolCategory, template: templateId.toLowerCase(), diagnostics: diagnosticsText }
: { toolCategory: "reasoning", template: "rtf", diagnostics: "" };
try {
const result = await modelAdapter.enhancePrompt(
currentPrompt,
selectedProvider,
selectedModel,
options
);
if (result.success && result.data) {
setEnhancedPrompt(result.data);
} else {
setError(result.error || t.errorEnhance);
}
} catch (err) {
setError(err instanceof Error ? err.message : t.errorEnhance);
} finally {
setProcessing(false);
@@ -119,10 +203,20 @@ export default function PromptEnhancer() {
setCurrentPrompt("");
setEnhancedPrompt(null);
setError(null);
setDiagnostics([]);
setAnalysis(null);
setAutoDetected(false);
};
const criticalCount = diagnostics.filter(d => d.detected && d.severity === "critical").length;
const warningCount = diagnostics.filter(d => d.detected && d.severity === "warning").length;
const toolEntries = Object.entries(TOOL_CATEGORIES) as [ToolCategory, typeof TOOL_CATEGORIES[ToolCategory]][];
return (
<div className="mx-auto grid max-w-7xl gap-4 lg:gap-6 grid-cols-1 lg:grid-cols-2 text-start">
{/* Left Column */}
<div className="space-y-4">
<Card className="h-fit">
<CardHeader className="p-4 lg:p-6 text-start">
<CardTitle className="flex items-center gap-2 text-base lg:text-lg">
@@ -133,33 +227,106 @@ export default function PromptEnhancer() {
{t.description}
</CardDescription>
</CardHeader>
<CardContent className="space-y-3 lg:space-y-4 p-4 lg:p-6 pt-0">
{/* Enhancement Mode Toggle */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{t.enhanceMode}</label>
<div className="flex gap-1.5">
<Button
variant={enhanceMode === "quick" ? "default" : "outline"}
size="sm"
onClick={() => setEnhanceMode("quick")}
className="flex-1 text-xs h-8"
>
<Zap className="mr-1.5 h-3.5 w-3.5" />
{t.quickMode}
</Button>
<Button
variant={enhanceMode === "deep" ? "default" : "outline"}
size="sm"
onClick={() => setEnhanceMode("deep")}
className="flex-1 text-xs h-8"
>
<Brain className="mr-1.5 h-3.5 w-3.5" />
{t.deepMode}
</Button>
</div>
</div>
{/* Deep Mode Options */}
{enhanceMode === "deep" && (
<>
{/* Target Tool */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{t.targetTool}</label>
<div className="grid grid-cols-2 gap-1.5">
{toolEntries.map(([catId, cat]) => {
const Icon = toolCategoryIcons[catId] || Target;
return (
<Button
key={catId}
variant={toolCategory === catId ? "default" : "outline"}
size="sm"
onClick={() => { setToolCategory(catId); setAutoDetected(false); }}
className={cn(
"justify-start text-xs h-8 px-2",
toolCategory === catId && "bg-primary text-primary-foreground"
)}
>
<Icon className="mr-1.5 h-3 w-3 flex-shrink-0" />
<span className="truncate">{toolCategoryNames[catId]}</span>
</Button>
);
})}
</div>
</div>
{/* Template Framework */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{t.templateLabel}</label>
<select
value={templateId}
onChange={(e) => setTemplateId(e.target.value)}
className="w-full rounded-md border border-input bg-background px-3 py-2 text-xs ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
>
{TEMPLATES.map((tmpl) => (
<option key={tmpl.framework} value={tmpl.framework}>
{tmpl.name} {tmpl.description}
</option>
))}
</select>
</div>
</>
)}
{/* AI Provider */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{common.aiProvider}</label>
<div className="flex flex-wrap gap-1.5">
{(["qwen", "ollama", "zai", "openrouter"] as const).map((provider) => (
<Button
key={provider}
variant={selectedProvider === provider ? "default" : "outline"}
size="sm"
onClick={() => setSelectedProvider(provider)}
className={cn(
"capitalize text-xs h-8 px-2.5",
selectedProvider === provider && "bg-primary text-primary-foreground"
)}
>
{provider === "qwen" ? "Qwen" : provider === "ollama" ? "Ollama" : provider === "zai" ? "Z.AI" : "OpenRouter"}
</Button>
))}
</div>
</div>
{/* Model */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{common.model}</label>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(selectedProvider, e.target.value)}
className="w-full rounded-md border border-input bg-background px-3 py-2 text-xs ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
>
{models.map((model) => (
<option key={model} value={model}>
@@ -169,76 +336,214 @@ export default function PromptEnhancer() {
</select>
</div>
{/* Prompt Input */}
<div className="space-y-2 text-start">
<label className="text-xs lg:text-sm font-medium">{t.inputLabel}</label>
<Textarea
placeholder={t.placeholder}
value={currentPrompt}
onChange={(e) => setCurrentPrompt(e.target.value)}
className="min-h-[150px] lg:min-h-[180px] resize-y text-sm p-3 lg:p-4"
/>
</div>
{error && (
<div className="rounded-md bg-destructive/10 p-2.5 text-xs text-destructive">
{error}
{!apiKeys[selectedProvider] && (
<div className="mt-1.5 flex items-center gap-2">
<Settings className="h-3.5 w-3.5" />
<span className="text-[10px]">{common.configApiKey}</span>
</div>
)}
</div>
)}
{/* Action Buttons */}
<div className="flex gap-2">
<Button onClick={handleEnhance} disabled={isProcessing || !currentPrompt.trim()} className="flex-1 h-9 text-xs">
{isProcessing ? (
<>
<Loader2 className="mr-1.5 h-3.5 w-3.5 animate-spin" />
{common.generating}
</>
) : (
<>
<Sparkles className="mr-1.5 h-3.5 w-3.5" />
{enhanceMode === "deep" ? t.deepEnhance : t.title}
</>
)}
</Button>
<Button variant="outline" onClick={handleClear} disabled={isProcessing} className="h-9 text-xs px-3">
<RefreshCw className="mr-1.5 h-3.5 w-3.5" />
<span className="hidden sm:inline">{t.clear}</span>
</Button>
</div>
</CardContent>
</Card>
{/* Diagnostics Card */}
{enhanceMode === "deep" && diagnostics.length > 0 && (
<Card className="h-fit">
<CardHeader
className="p-4 lg:p-6 text-start cursor-pointer select-none"
onClick={() => setShowDiagnostics(!showDiagnostics)}
>
<CardTitle className="flex items-center justify-between text-sm lg:text-base">
<span className="flex items-center gap-2">
<AlertTriangle className="h-4 w-4 text-amber-500" />
{t.diagnosticsTitle}
{criticalCount > 0 && (
<span className="flex h-5 w-5 items-center justify-center rounded-full bg-red-500 text-[10px] font-bold text-white">
{criticalCount}
</span>
)}
{warningCount > 0 && (
<span className="flex h-5 w-5 items-center justify-center rounded-full bg-amber-500 text-[10px] font-bold text-white">
{warningCount}
</span>
)}
</span>
{showDiagnostics ? <ChevronUp className="h-4 w-4" /> : <ChevronDown className="h-4 w-4" />}
</CardTitle>
{analysis && !showDiagnostics && (
<CardDescription className="text-xs">
{t.promptQuality}: <span className="font-semibold">{analysis.overallScore}/100</span>
{" — "}
{analysis.suggestedTool ? toolCategoryNames[analysis.suggestedTool] : "Auto"}
{" / "}
{analysis.suggestedTemplate?.name || "RTF"}
{" — ~"}
{estimateTokens(currentPrompt)} {t.tokensLabel}
</CardDescription>
)}
</CardHeader>
{showDiagnostics && (
<CardContent className="p-4 lg:p-6 pt-0 space-y-2">
{analysis && (
<div className="mb-3">
<div className="flex items-center justify-between text-xs mb-1">
<span>{t.promptQuality}</span>
<span className="font-semibold">{analysis.overallScore}/100</span>
</div>
<div className="h-2 rounded-full bg-muted overflow-hidden">
<div
className={cn(
"h-full rounded-full transition-all",
analysis.overallScore >= 70 ? "bg-green-500" : analysis.overallScore >= 40 ? "bg-amber-500" : "bg-red-500"
)}
style={{ width: `${analysis.overallScore}%` }}
/>
</div>
</div>
)}
{diagnostics.filter(d => d.detected).map((d, i) => (
<div
key={i}
className={cn(
"rounded-md p-2 text-xs",
d.severity === "critical" && "bg-red-500/10 border border-red-500/20",
d.severity === "warning" && "bg-amber-500/10 border border-amber-500/20",
d.severity === "info" && "bg-blue-500/10 border border-blue-500/20"
)}
>
<div className="flex items-start gap-2">
{d.severity === "critical" ? (
<AlertTriangle className="h-3.5 w-3.5 text-red-500 mt-0.5 flex-shrink-0" />
) : d.severity === "warning" ? (
<AlertTriangle className="h-3.5 w-3.5 text-amber-500 mt-0.5 flex-shrink-0" />
) : (
<Info className="h-3.5 w-3.5 text-blue-500 mt-0.5 flex-shrink-0" />
)}
<div>
<span className="font-medium">{d.pattern.name}</span>
<p className="text-muted-foreground mt-0.5">{d.suggestion}</p>
</div>
</div>
</div>
))}
{analysis && analysis.missingDimensions.length > 0 && (
<div className="mt-3 rounded-md bg-muted/50 p-2.5">
<p className="text-xs font-medium mb-1.5">{t.missingDimensions}</p>
<div className="flex flex-wrap gap-1">
{analysis.missingDimensions.map((dim, i) => (
<span key={i} className="inline-flex items-center rounded-full bg-primary/10 px-2 py-0.5 text-[10px] font-medium text-primary">
{dim}
</span>
))}
</div>
</div>
)}
{analysis && (
<div className="flex items-center gap-3 mt-2 text-xs text-muted-foreground">
<span>~{estimateTokens(currentPrompt)} {t.inputTokens}</span>
{enhancedPrompt && (
<>
<span>&rarr;</span>
<span>~{estimateTokens(enhancedPrompt)} {t.outputTokens}</span>
</>
)}
</div>
)}
</CardContent>
)}
</Card>
)}
</div>
{/* Right Column - Output */}
<Card className={cn("flex flex-col sticky top-4", !enhancedPrompt && "opacity-50")}>
<CardHeader className="p-4 lg:p-6 text-start">
<CardTitle className="flex items-center justify-between text-base lg:text-lg">
<span className="flex items-center gap-2">
<CheckCircle2 className="h-4 w-4 lg:h-5 lg:w-5 text-green-500" />
{t.enhancedTitle}
</span>
<div className="flex items-center gap-1">
{enhancedPrompt && (
<>
<span className="text-[10px] text-muted-foreground font-mono">
~{estimateTokens(enhancedPrompt)} tok
</span>
<Button variant="ghost" size="icon" onClick={handleCopy} className="h-8 w-8">
{copied ? (
<CheckCircle2 className="h-3.5 w-3.5 text-green-500" />
) : (
<Copy className="h-3.5 w-3.5" />
)}
</Button>
</>
)}
</div>
</CardTitle>
<CardDescription className="text-xs lg:text-sm">
{t.enhancedDesc}
</CardDescription>
</CardHeader>
<CardContent className="p-4 lg:p-6 pt-0">
{enhancedPrompt ? (
<div className="space-y-3">
{enhanceMode === "deep" && analysis && (
<div className="rounded-md bg-primary/5 border border-primary/20 p-2.5 text-xs">
<div className="flex items-center gap-1.5 mb-1 font-medium text-primary">
<Layers className="h-3.5 w-3.5" />
{t.strategyNote}
</div>
<p className="text-muted-foreground">
{t.strategyForTool
.replace("{tool}", analysis.suggestedTool ? toolCategoryNames[analysis.suggestedTool] : "Reasoning LLM")
.replace("{template}", analysis.suggestedTemplate?.name || "RTF")}
{criticalCount > 0 && ` ${t.fixedIssues.replace("{count}", String(criticalCount))}`}
</p>
</div>
)}
<div className="rounded-md border bg-muted/50 p-3 lg:p-4 animate-in fade-in slide-in-from-bottom-2 duration-300 max-h-[60vh] overflow-y-auto">
<pre className="whitespace-pre-wrap text-xs lg:text-sm leading-relaxed">{enhancedPrompt}</pre>
</div>
</div>
) : (
<div className="flex h-[150px] lg:h-[200px] items-center justify-center text-center text-xs lg:text-sm text-muted-foreground italic">
{t.emptyState}


@@ -54,6 +54,10 @@ export default function SettingsPanel() {
setApiKey("zai", keys.zai);
modelAdapter.updateZaiApiKey(keys.zai);
}
if (keys.openrouter) {
setApiKey("openrouter", keys.openrouter);
modelAdapter.updateOpenRouterApiKey(keys.openrouter);
}
} catch (e) {
console.error("Failed to load API keys:", e);
}
@@ -84,7 +88,7 @@ export default function SettingsPanel() {
}
};
const validateApiKey = async (provider: "qwen" | "ollama" | "zai" | "openrouter") => {
const key = apiKeys[provider];
if (!key || key.trim().length === 0) {
setApiValidationStatus(provider, { valid: false, error: "API key is required" });
@@ -149,6 +153,9 @@ export default function SettingsPanel() {
case "zai":
modelAdapter.updateZaiApiKey(value);
break;
case "openrouter":
modelAdapter.updateOpenRouterApiKey(value);
break;
}
// Debounce validation (500ms)
@@ -187,7 +194,7 @@ export default function SettingsPanel() {
}
};
const getStatusIndicator = (provider: "qwen" | "ollama" | "zai" | "openrouter") => {
const status = apiValidationStatus[provider];
if (validating[provider]) {
@@ -439,6 +446,63 @@ export default function SettingsPanel() {
</div>
</div>
<div className="space-y-2 text-start">
<label className="flex items-center gap-2 text-xs lg:text-sm font-medium">
<Server className="h-3.5 w-3.5 lg:h-4 lg:w-4" />
OpenRouter API Key
</label>
<div className="relative">
<Input
type={showApiKey.openrouter ? "text" : "password"}
placeholder={t.enterKey("OpenRouter")}
value={apiKeys.openrouter || ""}
onChange={(e) => handleApiKeyChange("openrouter", e.target.value)}
className="font-mono text-xs lg:text-sm pr-24"
/>
<div className="absolute right-1 top-1/2 -translate-y-1/2 flex items-center gap-1">
{getStatusIndicator("openrouter")}
<Button
type="button"
variant="ghost"
size="icon"
className="h-7 w-7"
onClick={() => setShowApiKey((prev) => ({ ...prev, openrouter: !prev.openrouter }))}
>
{showApiKey.openrouter ? (
<EyeOff className="h-3 w-3" />
) : (
<Eye className="h-3 w-3" />
)}
</Button>
</div>
</div>
<div className="flex flex-col sm:flex-row sm:items-center justify-between gap-2">
<p className="text-[10px] lg:text-xs text-muted-foreground">
{t.getApiKey}{" "}
<a
href="https://openrouter.ai/keys"
target="_blank"
rel="noopener noreferrer"
className="text-primary hover:underline"
>
openrouter.ai/keys
</a>
</p>
{apiKeys.openrouter && (
<Button
variant="ghost"
size="sm"
className="h-6 px-2 text-[9px] lg:text-[10px] w-fit"
onClick={() => validateApiKey("openrouter")}
disabled={validating.openrouter}
>
<RefreshCw className={`h-2.5 w-2.5 mr-1 ${validating.openrouter ? "animate-spin" : ""}`} />
Test
</Button>
)}
</div>
</div>
<Button onClick={handleSave} className="w-full h-9 lg:h-10 text-xs lg:text-sm">
<Save className="mr-1.5 lg:mr-2 h-3.5 w-3.5 lg:h-4 lg:w-4" />
{t.saveKeys}
@@ -455,7 +519,7 @@ export default function SettingsPanel() {
</CardHeader>
<CardContent className="space-y-3 lg:space-y-4 p-4 lg:p-6 pt-0 lg:pt-0">
<div className="grid gap-2 lg:gap-3">
{(["qwen", "ollama", "zai", "openrouter"] as const).map((provider) => (
<button
key={provider}
onClick={() => setSelectedProvider(provider)}
@@ -468,11 +532,12 @@ export default function SettingsPanel() {
<Server className="h-4 w-4 lg:h-5 lg:w-5 text-primary" />
</div>
<div className="flex-1 min-w-0">
<h3 className="font-medium capitalize text-sm lg:text-base">{provider === "openrouter" ? "OpenRouter" : provider}</h3>
<p className="text-[10px] lg:text-sm text-muted-foreground truncate">
{provider === "qwen" && t.qwenDesc}
{provider === "ollama" && t.ollamaDesc}
{provider === "zai" && t.zaiDesc}
{provider === "openrouter" && t.openrouterDesc}
</p>
</div>
{selectedProvider === provider && (

lib/enhance-engine.ts Normal file

@@ -0,0 +1,972 @@
/**
* Prompt Enhancement Engine
* Based on prompt-master methodology (https://github.com/nidhinjs/prompt-master)
* Client-side prompt analysis and optimization for various AI tools
*/
// ============================================================================
// TYPE DEFINITIONS
// ============================================================================
/**
* Tool categories with different prompting requirements
*/
export type ToolCategory =
| 'reasoning' // Claude, GPT-4o, Gemini - Full structure, XML tags, explicit format locks
| 'thinking' // o1, o3, DeepSeek-R1 - Short clean instructions only, no CoT
| 'openweight' // Llama, Mistral, Qwen - Shorter prompts, simpler structure
| 'agentic' // Claude Code, Devin, SWE-agent - Start/target state, allowed/forbidden actions, stop conditions
| 'ide' // Cursor, Windsurf, Copilot - File path + function + desired change + scope lock
| 'fullstack' // Bolt, v0, Lovable - Stack spec, component boundaries, what NOT to scaffold
| 'image' // Midjourney, DALL-E, Stable Diffusion - Subject + style + mood + lighting + negative prompts
| 'search'; // Perplexity, SearchGPT - Mode specification, citation requirements
/**
* Template frameworks for different prompt structures
*/
export type TemplateFramework =
| 'RTF' // Role, Task, Format - Simple one-shot
| 'CO-STAR' // Context, Objective, Style, Tone, Audience, Response - Professional documents
| 'RISEN' // Role, Instructions, Steps, End Goal, Narrowing - Complex multi-step
| 'CRISPE' // Capacity, Role, Insight, Statement, Personality, Experiment - Creative work
| 'ChainOfThought' // Logic/math/debugging (NOT for thinking models)
| 'FewShot' // Format-sensitive tasks
| 'FileScope' // IDE AI editing
| 'ReActPlusStop' // Agentic AI
| 'VisualDescriptor'; // Image generation
/**
* Severity levels for diagnostic patterns
*/
export type Severity = 'critical' | 'warning' | 'info';
/**
* Diagnostic pattern for prompt analysis
*/
export interface DiagnosticPattern {
id: string;
name: string;
description: string;
category: 'task' | 'context' | 'format' | 'scope' | 'reasoning' | 'agentic';
detect: (prompt: string) => boolean;
fix: string;
severity: Severity;
}
/**
* Result from running diagnostics on a prompt
*/
export interface DiagnosticResult {
pattern: DiagnosticPattern;
detected: boolean;
severity: Severity;
suggestion: string;
}
/**
* Template structure with metadata
*/
export interface Template {
name: string;
framework: TemplateFramework;
description: string;
structure: string[];
bestFor: ToolCategory[];
}
/**
* Complete analysis report for a prompt
*/
export interface AnalysisReport {
prompt: string;
tokenEstimate: number;
suggestedTool: ToolCategory | null;
suggestedTemplate: Template | null;
diagnostics: DiagnosticResult[];
missingDimensions: string[];
overallScore: number; // 0-100
}
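The `AnalysisReport` above carries a `tokenEstimate`, and the component imports an `estimateTokens` helper, but its implementation falls outside this chunk of the diff. A minimal sketch, assuming the common heuristic of roughly four characters per token for English text (the divisor is an illustrative assumption, not necessarily the shipped logic):

```typescript
// Hypothetical sketch of estimateTokens(text): rough English-text
// heuristic of ~4 characters per token. Not the project's actual
// implementation, which is not shown in this diff.
export function estimateTokens(text: string): number {
  if (!text.trim()) return 0;
  // Round up so short non-empty prompts never report zero tokens.
  return Math.ceil(text.length / 4);
}
```

This keeps the UI's `~N tok` badges cheap to compute on every keystroke, at the cost of accuracy versus a real tokenizer.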
// ============================================================================
// TOOL CATEGORIES
// ============================================================================
export const TOOL_CATEGORIES: Record<ToolCategory, {
description: string;
examples: string[];
promptingStyle: string;
}> = {
reasoning: {
description: 'Models with strong reasoning capabilities',
examples: ['Claude', 'GPT-4o', 'Gemini'],
promptingStyle: 'Full structure, XML tags, explicit format locks, detailed instructions'
},
thinking: {
description: 'Models with built-in chain-of-thought',
examples: ['o1', 'o3', 'DeepSeek-R1'],
promptingStyle: 'Short clean instructions only, NO explicit CoT or step-by-step'
},
openweight: {
description: 'Open-source models',
examples: ['Llama', 'Mistral', 'Qwen'],
promptingStyle: 'Shorter prompts, simpler structure, clear direct instructions'
},
agentic: {
description: 'Autonomous coding agents',
examples: ['Claude Code', 'Devin', 'SWE-agent'],
promptingStyle: 'Start/target state, allowed/forbidden actions, stop conditions'
},
ide: {
description: 'IDE-integrated AI assistants',
examples: ['Cursor', 'Windsurf', 'Copilot'],
promptingStyle: 'File path + function + desired change + scope lock'
},
fullstack: {
description: 'Full-stack app builders',
examples: ['Bolt', 'v0', 'Lovable'],
promptingStyle: 'Stack spec, component boundaries, what NOT to scaffold'
},
image: {
description: 'Image generation models',
examples: ['Midjourney', 'DALL-E', 'Stable Diffusion'],
promptingStyle: 'Subject + style + mood + lighting + negative prompts'
},
search: {
description: 'Search-augmented AI',
examples: ['Perplexity', 'SearchGPT'],
promptingStyle: 'Mode specification, citation requirements, source attribution'
}
};
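The engine also exports a `detectToolCategory` function (imported by the component earlier in this commit) whose body lies outside this chunk. One plausible shape, sketched under the assumption that it scores the prompt against per-category keyword patterns and falls back to `reasoning` — the keyword lists here are illustrative, not the shipped ones:

```typescript
// Hypothetical sketch of detectToolCategory: return the first category
// whose keyword pattern matches the prompt, defaulting to "reasoning".
// Categories and keywords below are a reduced, assumed subset.
type SketchCategory = "ide" | "image" | "search" | "reasoning";

const CATEGORY_KEYWORDS: Record<SketchCategory, RegExp> = {
  ide: /\b(file|function|refactor|cursor|copilot)\b/i,
  image: /\b(draw|render|photo|illustration|midjourney)\b/i,
  search: /\b(cite|sources|search|latest news)\b/i,
  reasoning: /\b(explain|analyze|plan|write)\b/i,
};

function detectToolCategory(prompt: string): SketchCategory {
  for (const [cat, re] of Object.entries(CATEGORY_KEYWORDS) as [SketchCategory, RegExp][]) {
    if (re.test(prompt)) return cat;
  }
  return "reasoning";
}
```

The component then uses the detected category to pre-select a target tool the first time a prompt is analyzed (`autoDetected` gate), so manual overrides are not clobbered on re-analysis.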
// ============================================================================
// TEMPLATE FRAMEWORKS
// ============================================================================
export const TEMPLATES: Template[] = [
{
name: 'RTF (Role-Task-Format)',
framework: 'RTF',
description: 'Simple one-shot prompts with clear role, task, and output format',
structure: ['Role: Who you are', 'Task: What to do', 'Format: How to output'],
bestFor: ['reasoning', 'openweight']
},
{
name: 'CO-STAR',
framework: 'CO-STAR',
description: 'Comprehensive framework for professional documents and complex tasks',
structure: [
'Context: Background information',
'Objective: What needs to be achieved',
'Style: Writing style and tone',
'Tone: Emotional tone',
'Audience: Who will read this',
'Response: Expected output format'
],
bestFor: ['reasoning', 'thinking', 'openweight']
},
{
name: 'RISEN',
framework: 'RISEN',
description: 'Multi-step complex task framework with clear end goals',
structure: [
'Role: AI agent identity',
'Instructions: Task requirements',
'Steps: Sequential actions',
'End Goal: Success criteria',
'Narrowing: Constraints and boundaries'
],
bestFor: ['reasoning', 'agentic']
},
{
name: 'CRISPE',
framework: 'CRISPE',
description: 'Creative work framework with personality and experimentation',
structure: [
'Capacity: What you can do',
'Role: Creative identity',
'Insight: Key perspective',
'Statement: The core request',
'Personality: Tone and style',
'Experiment: Creative constraints'
],
bestFor: ['reasoning', 'openweight']
},
{
name: 'Chain of Thought',
framework: 'ChainOfThought',
description: 'Step-by-step reasoning for logic, math, and debugging (NOT for thinking models)',
structure: [
'Problem statement',
'Step-by-step reasoning',
'Final answer',
'Verification'
],
bestFor: ['reasoning', 'openweight']
},
{
name: 'Few-Shot Learning',
framework: 'FewShot',
description: 'Provide examples to guide format-sensitive tasks',
structure: [
'Task description',
'Example 1: Input -> Output',
'Example 2: Input -> Output',
'Example 3: Input -> Output',
'Actual task'
],
bestFor: ['reasoning', 'openweight']
},
{
name: 'File-Scope Lock',
framework: 'FileScope',
description: 'IDE-specific editing with precise file and function targeting',
structure: [
'File path',
'Function/component name',
'Current code snippet',
'Desired change',
'Scope: ONLY modify X, do NOT touch Y'
],
bestFor: ['ide']
},
{
name: 'ReAct + Stop Conditions',
framework: 'ReActPlusStop',
description: 'Agentic framework with explicit stopping rules',
structure: [
'Starting state: Current situation',
'Target state: Desired outcome',
'Allowed actions: What you CAN do',
'Forbidden actions: What you CANNOT do',
'Stop conditions: When to pause and ask',
'Output requirements: Progress reporting'
],
bestFor: ['agentic']
},
{
name: 'Visual Descriptor',
framework: 'VisualDescriptor',
description: 'Comprehensive image generation prompt structure',
structure: [
'Subject: Main element',
'Style: Art style or aesthetic',
'Mood: Emotional quality',
'Lighting: Light source and quality',
'Composition: Framing and perspective',
'Colors: Color palette',
'Negative prompts: What to exclude'
],
bestFor: ['image']
}
];
// ============================================================================
// DIAGNOSTIC PATTERNS (35 Total)
// ============================================================================
const TASK_PATTERNS: DiagnosticPattern[] = [
{
id: 'task-001',
name: 'Vague task verb',
description: 'Uses generic verbs like "help", "fix", "make" without specifics',
category: 'task',
detect: (prompt: string) => {
const vagueVerbs = /\b(help|fix|make|improve|update|change|handle|work on)\b/i;
const noSpecifics = !/\b(specifically|exactly|to|that|which|called|named)\b/i.test(prompt);
return vagueVerbs.test(prompt) && noSpecifics && prompt.split(' ').length < 30;
},
fix: 'Replace vague verbs with specific action verbs. Instead of "fix this", use "add error handling to the login function"',
severity: 'warning'
},
{
id: 'task-002',
name: 'Two tasks in one',
description: 'Contains multiple distinct tasks in a single prompt',
category: 'task',
detect: (prompt: string) => {
const andPattern = /\b(and|also|plus|additionally)\s+[a-z]+\b/i;
const commaTasks = /\b(create|build|fix|add|write|update)[^,.]+,[^,.]+(create|build|fix|add|write|update)/i;
return andPattern.test(prompt) || commaTasks.test(prompt);
},
fix: 'Split into separate prompts. Each prompt should have ONE primary task.',
severity: 'critical'
},
{
id: 'task-003',
name: 'No success criteria',
description: 'Missing clear definition of when the task is complete',
category: 'task',
detect: (prompt: string) => {
const successWords = /\b(done when|success criteria|complete when|should|must result|verify that|ensure that|passes when)\b/i;
const isComplexTask = /\b(build|create|implement|develop|design|setup)\b/i.test(prompt);
return isComplexTask && !successWords.test(prompt);
},
fix: 'Add explicit success criteria: "The task is complete when [specific condition is met]"',
severity: 'warning'
},
{
id: 'task-004',
name: 'Over-permissive agent',
description: 'Gives AI too much freedom without constraints',
category: 'task',
detect: (prompt: string) => {
const permissivePhrases = /\b(whatever it takes|do your best|figure it out|you decide|however you want|as you see fit)\b/i;
return permissivePhrases.test(prompt);
},
fix: 'Replace open-ended permissions with specific constraints and scope boundaries.',
severity: 'critical'
},
{
id: 'task-005',
name: 'Emotional task description',
description: 'Uses emotional language without specific technical details',
category: 'task',
detect: (prompt: string) => {
const emotionalWords = /\b(broken|mess|terrible|awful|doesn't work|horrible|stupid|hate|frustrating)\b/i;
const noTechnicalDetails = !/\b(error|bug|line|function|file|exception|fail|crash)\b/i.test(prompt);
return emotionalWords.test(prompt) && noTechnicalDetails;
},
fix: 'Replace emotional language with specific technical details: what error, what line, what behavior?',
severity: 'warning'
},
{
id: 'task-006',
name: 'Build-the-whole-thing',
description: 'Attempts to build an entire project in one prompt',
category: 'task',
detect: (prompt: string) => {
const wholeProjectPhrases = /\b(entire app|whole project|full website|complete system|everything|end to end|from scratch)\b/i;
return wholeProjectPhrases.test(prompt);
},
fix: 'Break down into smaller, iterative prompts. Start with core functionality, then add features.',
severity: 'critical'
},
{
id: 'task-007',
name: 'Implicit reference',
description: 'References something previously mentioned without context',
category: 'task',
detect: (prompt: string) => {
const implicitRefs = /\b(the thing|that one|what we discussed|from before|the previous|like the other)\b/i;
const noContext = prompt.split(' ').length < 50;
return implicitRefs.test(prompt) && noContext;
},
fix: 'Always include full context. Replace "the thing" with specific name/description.',
severity: 'critical'
}
];
const CONTEXT_PATTERNS: DiagnosticPattern[] = [
{
id: 'ctx-001',
name: 'Assumed prior knowledge',
description: 'Assumes AI remembers previous conversations or context',
category: 'context',
detect: (prompt: string) => {
const assumptionPhrases = /\b(continue|as before|like we said|you know|from our chat|from earlier)\b/i;
const noContextProvided = prompt.split(' ').length < 40;
return assumptionPhrases.test(prompt) && noContextProvided;
},
fix: 'Include relevant context from previous work. Do not assume continuity.',
severity: 'warning'
},
{
id: 'ctx-002',
name: 'No project context',
description: 'Very short prompt with no domain or technology context',
category: 'context',
detect: (prompt: string) => {
const wordCount = prompt.split(/\s+/).length;
const hasTech = /\b(javascript|python|react|api|database|server|frontend|backend|mobile|web)\b/i;
return wordCount < 15 && !hasTech.test(prompt);
},
fix: 'Add project context: technology stack, domain, and what you\'re building.',
severity: 'warning'
},
{
id: 'ctx-003',
name: 'Forgotten stack',
description: 'Tech-agnostic prompt that implies an existing project',
category: 'context',
detect: (prompt: string) => {
const projectWords = /\b(add to|update the|change the|modify the|existing|current)\b/i;
const noTechStack = !/\b(javascript|typescript|python|java|rust|go|react|vue|angular|node|django|rails)\b/i.test(prompt);
return projectWords.test(prompt) && noTechStack;
},
fix: 'Specify your technology stack: language, framework, and key dependencies.',
severity: 'critical'
},
{
id: 'ctx-004',
name: 'Hallucination invite',
description: 'Asks for general knowledge that may not exist',
category: 'context',
detect: (prompt: string) => {
const hallucinationPhrases = /\b(what do experts say|what is commonly known|generally accepted|most people think|typical approach)\b/i;
return hallucinationPhrases.test(prompt);
},
fix: 'Ask for specific sources or provide source material. Avoid general "what do X think" questions.',
severity: 'info'
},
{
id: 'ctx-005',
name: 'Undefined audience',
description: 'User-facing output without audience specification',
category: 'context',
detect: (prompt: string) => {
const userFacing = /\b(write|create|generate|draft)\s+(content|message|email|copy|text|documentation)\b/i;
const noAudience = !/\b(for|audience|target|reader|user|customer|stakeholder)\b/i.test(prompt);
return userFacing.test(prompt) && noAudience;
},
fix: 'Specify who will read this output: "Write for [audience] who [context]"',
severity: 'warning'
},
{
id: 'ctx-006',
name: 'No prior failures',
description: 'Complex task without mentioning what was tried before',
category: 'context',
detect: (prompt: string) => {
const complexTask = /\b(debug|fix|solve|resolve|implement|build|create)\b/i;
const noPriorAttempts = !/\b(tried|attempted|already|previous|before|not working|failed)\b/i.test(prompt);
const isLongPrompt = prompt.split(' ').length > 20;
return complexTask.test(prompt) && noPriorAttempts && isLongPrompt;
},
fix: 'Mention what you\'ve already tried: "I tried X but got Y error. Now..."',
severity: 'info'
}
];
const FORMAT_PATTERNS: DiagnosticPattern[] = [
{
id: 'fmt-001',
name: 'Missing output format',
description: 'No specification of how output should be structured',
category: 'format',
detect: (prompt: string) => {
const formatKeywords = /\b(list|table|json|markdown|bullet|paragraph|csv|html|code|steps)\b/i;
const outputKeywords = /\b(output|return|format as|in the form of|structure)\b/i;
return !formatKeywords.test(prompt) && !outputKeywords.test(prompt);
},
fix: 'Specify output format: "Return as a bulleted list" or "Output as JSON"',
severity: 'warning'
},
{
id: 'fmt-002',
name: 'Implicit length',
description: 'Uses length terms without specific counts',
category: 'format',
detect: (prompt: string) => {
const vagueLength = /\b(summary|description|overview|brief|short|long|detailed)\b/i;
const noSpecificLength = !/\b(\d+\s*(words?|sentences?|paragraphs?)|under\s*\d+|max\s*\d+)\b/i.test(prompt);
return vagueLength.test(prompt) && noSpecificLength;
},
fix: 'Be specific: "Write 2-3 sentences" or "Keep under 100 words"',
severity: 'info'
},
{
id: 'fmt-003',
name: 'No role assignment',
description: 'Long prompt without specifying who AI should be',
category: 'format',
detect: (prompt: string) => {
const wordCount = prompt.split(/\s+/).length;
const roleKeywords = /\b(act as|you are|role|persona|expert|specialist|professional|engineer|developer|analyst)\b/i;
return wordCount > 50 && !roleKeywords.test(prompt);
},
fix: 'Add role assignment: "Act as a [role] with [expertise]"',
severity: 'info'
},
{
id: 'fmt-004',
name: 'Vague aesthetic',
description: 'Design-related prompt without specific visual direction',
category: 'format',
detect: (prompt: string) => {
const vagueAesthetic = /\b(professional|clean|modern|nice|good looking|beautiful|sleek)\b/i;
const noVisualSpecs = !/\b(colors?|fonts?|spacing|layout|style|theme|design system)\b/i.test(prompt);
return vagueAesthetic.test(prompt) && noVisualSpecs;
},
fix: 'Specify visual details: colors, typography, spacing, specific design reference.',
severity: 'warning'
},
{
id: 'fmt-005',
name: 'No negative prompts for image',
description: 'Image generation without exclusion criteria',
category: 'format',
detect: (prompt: string) => {
const imageKeywords = /\b(image|photo|picture|illustration|generate|create art|midjourney|dall-e)\b/i;
const noNegative = !/\b(negative|exclude|avoid|without|no|not)\b/i.test(prompt);
return imageKeywords.test(prompt) && noNegative;
},
fix: 'Add negative prompts: "Negative: blurry, low quality, distorted"',
severity: 'warning'
},
{
id: 'fmt-006',
name: 'Prose for Midjourney',
description: 'Long descriptive sentences instead of keyword-style prompts',
category: 'format',
detect: (prompt: string) => {
const longSentences = prompt.split(/[.!?]/).filter(s => s.trim().split(' ').length > 10).length > 0;
const imageKeywords = /\b(image|photo|art|illustration|midjourney|dall-e|stable diffusion)\b/i;
return imageKeywords.test(prompt) && longSentences;
},
fix: 'Use keyword-style prompts: "Subject, style, mood, lighting, --ar 16:9"',
severity: 'warning'
}
];
const SCOPE_PATTERNS: DiagnosticPattern[] = [
{
id: 'scp-001',
name: 'No scope boundary',
description: 'Missing specific scope constraints',
category: 'scope',
detect: (prompt: string) => {
const scopeWords = /\b(only|just|specifically|exactly|limit|restrict)\b/i;
const hasFilePath = /\/[\w.]+/.test(prompt) || /\b[\w-]+\.(js|ts|py|java|go|rs|cpp|c|h)\b/i.test(prompt);
const hasFunction = /\b(function|method|class|component)\s+\w+/i.test(prompt);
return !scopeWords.test(prompt) && !hasFilePath && !hasFunction;
},
fix: 'Add scope boundary: "Only modify X, do NOT touch Y"',
severity: 'warning'
},
{
id: 'scp-002',
name: 'No stack constraints',
description: 'Technical task without version specifications',
category: 'scope',
detect: (prompt: string) => {
const techTask = /\b(build|create|implement|setup|install|use|add)\s+(\w+\s+){0,3}(app|api|server|database|system)\b/i;
const noVersion = !/\b(version|v\d+|\d+\.\d+|specifically|exactly)\b/i.test(prompt);
return techTask.test(prompt) && noVersion;
},
fix: 'Specify versions: "Use React 18 with TypeScript 5"',
severity: 'warning'
},
{
id: 'scp-003',
name: 'No stop condition for agents',
description: 'Agentic task without explicit stopping rules',
category: 'scope',
detect: (prompt: string) => {
const agentKeywords = /\b(agent|autonomous|run this|execute|iterate|keep going)\b/i;
const noStop = !/\b(stop|pause|ask me|check in|before continuing|confirm)\b/i.test(prompt);
return agentKeywords.test(prompt) && noStop;
},
fix: 'Add stop conditions: "Stop and ask before deleting files" or "Pause after each major step"',
severity: 'critical'
},
{
id: 'scp-004',
name: 'No file path for IDE',
description: 'IDE editing without file specification',
category: 'scope',
detect: (prompt: string) => {
const editKeywords = /\b(update|fix|change|modify|edit|refactor)\b/i;
const hasPath = /\/[\w./-]+|\b[\w-]+\.(js|ts|jsx|tsx|py|java|go|rs|cpp|c|h|css|html|json)\b/i.test(prompt);
return editKeywords.test(prompt) && !hasPath;
},
fix: 'Always include file path: "Update src/components/Header.tsx"',
severity: 'critical'
},
{
id: 'scp-005',
name: 'Wrong template',
description: 'Template mismatch for the target tool',
category: 'scope',
detect: (prompt: string) => {
// Detect if using complex structure for thinking models
const thinkingModel = /\b(o1|o3|deepseek.*r1|thinking)\b/i;
const complexStructure = /\b(step by step|think through|reasoning|<thinking>|chain of thought)\b/i;
return thinkingModel.test(prompt) && complexStructure.test(prompt);
},
fix: 'For thinking models (o1, o3, R1), use short clean instructions without explicit CoT.',
severity: 'critical'
},
{
id: 'scp-006',
name: 'Pasting codebase',
description: 'Extremely long prompt suggesting codebase paste',
category: 'scope',
detect: (prompt: string) => {
const wordCount = prompt.split(/\s+/).length;
const multipleFiles = (prompt.match(/```/g) || []).length > 4;
return wordCount > 500 || multipleFiles;
},
fix: 'Use file paths and references instead of pasting entire files. Or use an IDE AI tool.',
severity: 'warning'
}
];
const REASONING_PATTERNS: DiagnosticPattern[] = [
{
id: 'rsn-001',
name: 'No CoT for logic',
description: 'Complex logic task without step-by-step instructions',
category: 'reasoning',
detect: (prompt: string) => {
const logicKeywords = /\b(compare|analyze|which is better|debug|why does|explain why|how does|verify)\b/i;
const noCoT = !/\b(step by step|walk through|reasoning|think through|first|then|finally)\b/i.test(prompt);
return logicKeywords.test(prompt) && noCoT;
},
fix: 'Add "Step by step" or "Walk through your reasoning" for logic tasks.',
severity: 'warning'
},
{
id: 'rsn-002',
name: 'CoT on reasoning models',
description: 'Explicit CoT instructions for thinking models',
category: 'reasoning',
detect: (prompt: string) => {
const thinkingModel = /\b(o1|o3|deepseek.*r1)\b/i;
const explicitCoT = /\b(step by step|think through|<thinking>|reasoning process|show your work)\b/i;
return thinkingModel.test(prompt) && explicitCoT.test(prompt);
},
fix: 'Remove explicit CoT instructions. Thinking models have built-in reasoning.',
severity: 'critical'
},
{
id: 'rsn-003',
name: 'Inter-session memory',
description: 'Assumes AI remembers across separate sessions',
category: 'reasoning',
detect: (prompt: string) => {
const memoryPhrases = /\b(you already know|remember|from our conversation|we discussed|earlier we|as mentioned)\b/i;
return memoryPhrases.test(prompt);
},
fix: 'AI does not remember between sessions. Include all necessary context.',
severity: 'info'
},
{
id: 'rsn-004',
name: 'Contradicting prior',
description: 'Explicit contradiction of previous instructions',
category: 'reasoning',
detect: (prompt: string) => {
const contradictionPhrases = /\b(actually|wait|ignore what i said|forget that|never mind|scratch that)\b/i;
return contradictionPhrases.test(prompt);
},
fix: 'State corrections clearly: "Correction: Replace X with Y"',
severity: 'warning'
},
{
id: 'rsn-005',
name: 'No grounding rule',
description: 'Factual task without certainty constraints',
category: 'reasoning',
detect: (prompt: string) => {
const factualTask = /\b(summarize|what is|tell me about|explain|list|research|find)\b/i;
const noGrounding = !/\b(if unsure|don't hallucinate|only if certain|say i don't know|stick to)\b/i.test(prompt);
return factualTask.test(prompt) && noGrounding && prompt.split(' ').length > 10;
},
fix: 'Add grounding: "If uncertain, say so rather than guessing"',
severity: 'info'
}
];
const AGENTIC_PATTERNS: DiagnosticPattern[] = [
{
id: 'agt-001',
name: 'No starting state',
description: 'Build/create task without current state description',
category: 'agentic',
detect: (prompt: string) => {
const buildKeywords = /\b(build|create|set up|implement|develop|make)\b/i;
const noCurrentState = !/\b(currently|existing|now|currently have|right now|starting from)\b/i.test(prompt);
return buildKeywords.test(prompt) && noCurrentState;
},
fix: 'Describe starting state: "Currently I have X. I want to reach Y."',
severity: 'warning'
},
{
id: 'agt-002',
name: 'No target state',
description: 'Agentic task without explicit deliverable',
category: 'agentic',
detect: (prompt: string) => {
const vagueCompletion = /\b(work on this|handle this|do this|take care of)\b/i;
const noTarget = !/\b(result should|final output|deliverable|end with|complete when)\b/i.test(prompt);
return vagueCompletion.test(prompt) && noTarget;
},
fix: 'Specify target state: "The final result should be [specific outcome]"',
severity: 'critical'
},
{
id: 'agt-003',
name: 'Silent agent',
description: 'Multi-step task without progress reporting requirements',
category: 'agentic',
detect: (prompt: string) => {
const multiStep = /\b(then|next|after that|first|second|finally)\b/i;
const noOutput = !/\b(show me|report|output|print|log|display progress|tell me)\b/i.test(prompt);
return multiStep.test(prompt) && noOutput;
},
fix: 'Add output requirements: "Report progress after each step"',
severity: 'warning'
},
{
id: 'agt-004',
name: 'Unlocked filesystem',
description: 'Agentic task without file access restrictions',
category: 'agentic',
detect: (prompt: string) => {
const agentKeywords = /\b(agent|autonomous|run|execute|implement|build|create)\b/i;
const noRestrictions = !/\b(only touch|don't modify|never delete|restrict to|scope|limit)\b/i.test(prompt);
return agentKeywords.test(prompt) && noRestrictions;
},
fix: 'Add file restrictions: "Only modify files in X, never touch Y"',
severity: 'critical'
},
{
id: 'agt-005',
name: 'No review trigger',
description: 'Agentic task without approval checkpoints',
category: 'agentic',
detect: (prompt: string) => {
const riskyActions = /\b(delete|remove|overwrite|deploy|publish|submit|merge)\b/i;
const noReview = !/\b(ask before|confirm|review|approve|check with me)\b/i.test(prompt);
return riskyActions.test(prompt) && noReview;
},
fix: 'Add review triggers: "Ask before deleting any files" or "Confirm before deploying"',
severity: 'critical'
}
];
// Combine all patterns
export const ALL_PATTERNS: DiagnosticPattern[] = [
...TASK_PATTERNS,
...CONTEXT_PATTERNS,
...FORMAT_PATTERNS,
...SCOPE_PATTERNS,
...REASONING_PATTERNS,
...AGENTIC_PATTERNS
];
// ============================================================================
// CORE FUNCTIONS
// ============================================================================
/**
* Auto-detect the target AI tool category based on prompt content
*/
export function detectToolCategory(prompt: string): ToolCategory | null {
const p = prompt.toLowerCase();
// Check for specific tool mentions
if (/(claude|gpt-4|gemini|gpt4)/i.test(prompt)) return 'reasoning';
if (/(o1|o3|deepseek.*r1|thinking.*model)/i.test(prompt)) return 'thinking';
if (/(llama|mistral|qwen|open.*weight|local.*model)/i.test(prompt)) return 'openweight';
if (/(claude code|devin|swe.*agent|autonomous.*agent)/i.test(prompt)) return 'agentic';
if (/(cursor|windsurf|copilot|ide.*ai|editor.*ai)/i.test(prompt)) return 'ide';
if (/(bolt|v0|lovable|fullstack.*ai|app.*builder)/i.test(prompt)) return 'fullstack';
if (/(midjourney|dall.?e|stable diffusion|image.*generate|create.*image|generate.*art)/i.test(prompt)) return 'image';
if (/(perplexity|searchgpt|search.*ai|research.*mode)/i.test(prompt)) return 'search';
// Infer from content patterns
if (/\.(js|ts|py|java|go|rs|cpp|c|h)\b/.test(prompt) && /\b(update|fix|change|modify)\b/.test(p)) return 'ide';
if (/\b(build|create|set up|implement).*\b(app|api|server|system)\b/.test(p) && /\b(stop|pause|ask before)\b/.test(p)) return 'agentic';
if (/\b(step by step|<thinking>|chain of thought|reasoning)\b/.test(p)) return 'reasoning';
if (/\b(image|photo|art|illustration)\b/.test(p) && /\b(style|mood|lighting)\b/.test(p)) return 'image';
return null;
}
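The branch ordering above matters: earlier checks win, so a prompt that names several tools resolves to the first matching category. A minimal standalone sketch (category list and regexes trimmed to four for illustration; not the full detector):

```typescript
// Minimal sketch of the first-match-wins detection order; trimmed category set.
type SketchCategory = 'reasoning' | 'thinking' | 'ide' | 'image' | null;

function detectSketch(prompt: string): SketchCategory {
  if (/(claude|gpt-4|gemini|gpt4)/i.test(prompt)) return 'reasoning'; // checked first
  if (/(o1|o3|deepseek.*r1|thinking.*model)/i.test(prompt)) return 'thinking';
  if (/(cursor|windsurf|copilot)/i.test(prompt)) return 'ide';
  if (/(midjourney|dall.?e|stable diffusion)/i.test(prompt)) return 'image';
  return null;
}

// "ask claude to review my cursor setup" matches both the 'reasoning' and
// 'ide' regexes, but the earlier 'reasoning' branch wins.
```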
/**
* Select the best template based on tool category and prompt analysis
*/
export function selectTemplate(prompt: string, toolCategory: ToolCategory | null): Template | null {
const p = prompt.toLowerCase();
// Image generation
if (toolCategory === 'image' || /\b(image|photo|art|illustration|midjourney|dall.?e)\b/.test(p)) {
return TEMPLATES.find(t => t.framework === 'VisualDescriptor') || null;
}
// IDE editing
if (toolCategory === 'ide' || (/\.(js|ts|py|java|go|rs)\b/.test(prompt) && /\b(update|fix|modify)\b/.test(p))) {
return TEMPLATES.find(t => t.framework === 'FileScope') || null;
}
// Agentic tasks
if (toolCategory === 'agentic' || /\b(build|create|set up).*\b(stop|pause|ask before)\b/.test(p)) {
return TEMPLATES.find(t => t.framework === 'ReActPlusStop') || null;
}
// Complex multi-step tasks
if (/\b(step|then|next|after|first|second|finally)\b/.test(p) && p.split(' ').length > 30) {
return TEMPLATES.find(t => t.framework === 'RISEN') || null;
}
// Logic/debugging tasks
if (/\b(debug|compare|analyze|which is better|why does|verify)\b/.test(p)) {
if (toolCategory !== 'thinking') {
return TEMPLATES.find(t => t.framework === 'ChainOfThought') || null;
}
}
// Professional documents
if (/\b(documentation|report|proposal|spec|requirements)\b/.test(p) && p.split(' ').length > 40) {
return TEMPLATES.find(t => t.framework === 'CO-STAR') || null;
}
// Creative work
if (/\b(creative|design|story|narrative|brand|voice)\b/.test(p)) {
return TEMPLATES.find(t => t.framework === 'CRISPE') || null;
}
// Format-sensitive tasks
if (/\b(example|sample|format|pattern|template)\b/.test(p)) {
return TEMPLATES.find(t => t.framework === 'FewShot') || null;
}
// Default to RTF for simple prompts
if (p.split(' ').length < 50) {
return TEMPLATES.find(t => t.framework === 'RTF') || null;
}
// Default for longer prompts
return TEMPLATES.find(t => t.framework === 'CO-STAR') || null;
}
/**
* Run all diagnostic patterns on a prompt
*/
export function runDiagnostics(prompt: string): DiagnosticResult[] {
const results: DiagnosticResult[] = [];
for (const pattern of ALL_PATTERNS) {
const detected = pattern.detect(prompt);
if (detected) {
results.push({
pattern,
detected: true,
severity: pattern.severity,
suggestion: pattern.fix
});
}
}
// Sort by severity (critical first)
const severityOrder = { critical: 0, warning: 1, info: 2 };
results.sort((a, b) => severityOrder[a.severity] - severityOrder[b.severity]);
return results;
}
/**
* Estimate token count (rough approximation: ~0.75 words per token)
*/
export function estimateTokens(prompt: string): number {
const wordCount = prompt.split(/\s+/).length;
return Math.ceil(wordCount / 0.75);
}
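Isolated as a standalone helper, the commonly quoted ratio of roughly 0.75 words per token works out to about 1.33 tokens per word. This is a rough sketch only; real tokenizers vary by model and language:

```typescript
// Rough sketch: 1 token ≈ 0.75 words, so tokens ≈ words / 0.75.
function estimateTokensSketch(prompt: string): number {
  const wordCount = prompt.trim().split(/\s+/).length;
  return Math.ceil(wordCount / 0.75);
}

// a 30-word prompt gives ceil(30 / 0.75) = 40 estimated tokens
```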
/**
* Identify missing dimensions from a prompt
*/
export function identifyMissingDimensions(prompt: string): string[] {
const missing: string[] = [];
// Check for common dimensions
if (!/\b(act as|you are|role|expert|specialist)\b/i.test(prompt)) {
missing.push('Role/Identity');
}
if (!/\b(context|background|project|currently working)\b/i.test(prompt)) {
missing.push('Context');
}
if (!/\b(format|output|return as|structure)\b/i.test(prompt)) {
missing.push('Output Format');
}
if (!/\b(success|complete when|done when|verify|ensure)\b/i.test(prompt)) {
missing.push('Success Criteria');
}
if (!/\b(only|just|limit|restrict|scope)\b/i.test(prompt) && prompt.split(' ').length > 20) {
missing.push('Scope Boundaries');
}
if (!/\b(javascript|python|react|node|typescript|java|rust|go)\b/i.test(prompt) &&
/\b(code|function|class|app|api)\b/i.test(prompt)) {
missing.push('Technology Stack');
}
return missing;
}
/**
* Calculate overall prompt quality score (0-100)
*/
export function calculateScore(diagnostics: DiagnosticResult[], missingDimensions: string[]): number {
let score = 100;
// Deduct for diagnostics
for (const d of diagnostics) {
switch (d.severity) {
case 'critical': score -= 15; break;
case 'warning': score -= 8; break;
case 'info': score -= 3; break;
}
}
// Deduct for missing dimensions
score -= missingDimensions.length * 5;
return Math.max(0, Math.min(100, score));
}
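The deduction rule above, isolated for a worked example (same weights: 15 per critical, 8 per warning, 3 per info, 5 per missing dimension, clamped to 0–100; severity lists stand in for full DiagnosticResult objects):

```typescript
type SketchSeverity = 'critical' | 'warning' | 'info';

// Same weights as the scoring rule above, applied to plain severity lists.
function scoreSketch(severities: SketchSeverity[], missingCount: number): number {
  const penalty: Record<SketchSeverity, number> = { critical: 15, warning: 8, info: 3 };
  let score = 100;
  for (const s of severities) score -= penalty[s];
  score -= missingCount * 5;
  return Math.max(0, Math.min(100, score));
}

// one critical + two warnings + three missing dimensions:
// 100 - 15 - (2 × 8) - (3 × 5) = 54
```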
/**
* Generate comprehensive analysis report
*/
export function generateAnalysisReport(prompt: string): AnalysisReport {
const suggestedTool = detectToolCategory(prompt);
const suggestedTemplate = selectTemplate(prompt, suggestedTool);
const diagnostics = runDiagnostics(prompt);
const missingDimensions = identifyMissingDimensions(prompt);
const tokenEstimate = estimateTokens(prompt);
const overallScore = calculateScore(diagnostics, missingDimensions);
return {
prompt,
tokenEstimate,
suggestedTool,
suggestedTemplate,
diagnostics,
missingDimensions,
overallScore
};
}
/**
* Get human-readable tool category description
*/
export function getToolDescription(category: ToolCategory): string {
return TOOL_CATEGORIES[category].description;
}
/**
* Get prompting style for a tool category
*/
export function getPromptingStyle(category: ToolCategory): string {
return TOOL_CATEGORIES[category].promptingStyle;
}
/**
* Get patterns by category
*/
export function getPatternsByCategory(category: DiagnosticPattern['category']): DiagnosticPattern[] {
return ALL_PATTERNS.filter(p => p.category === category);
}
/**
* Get pattern by ID
*/
export function getPatternById(id: string): DiagnosticPattern | undefined {
return ALL_PATTERNS.find(p => p.id === id);
}


@@ -47,6 +47,21 @@ export const translations = {
clear: "Clear",
enterPromptError: "Please enter a prompt to enhance",
errorEnhance: "Failed to enhance prompt",
enhanceMode: "Enhancement Mode",
quickMode: "Quick",
deepMode: "Deep Analysis",
deepEnhance: "Deep Enhance",
targetTool: "Target AI Tool",
templateLabel: "Template Framework",
diagnosticsTitle: "Prompt Diagnostics",
promptQuality: "Prompt Quality",
missingDimensions: "Missing Dimensions",
tokensLabel: "tokens",
inputTokens: "input tokens",
outputTokens: "output tokens",
strategyNote: "Strategy",
strategyForTool: "Optimized for {tool} using {template} template.",
fixedIssues: "Fixed {count} critical issue(s).",
},
prdGenerator: {
title: "PRD Generator",
@@ -179,6 +194,7 @@ export const translations = {
qwenDesc: "Alibaba DashScope API",
ollamaDesc: "Ollama Cloud API",
zaiDesc: "Z.AI Plan API",
openrouterDesc: "OpenRouter - Access 100+ AI models",
},
uxDesigner: {
title: "UX Designer Prompt",
@@ -405,6 +421,11 @@ export const translations = {
files: "Files",
approveGenerate: "Approve & Generate Development",
startingEngine: "Starting Engine...",
startCoding: "Start Coding",
modifyPlan: "Modify Plan",
skipPlan: "Skip to Chat",
planSummary: "Summary",
implementationSteps: "Implementation Steps",
activateArtifact: "Activate Artifact",
canvasReady: "Canvas ready",
canvasIdle: "Canvas idle",
@@ -491,6 +512,21 @@ export const translations = {
clear: "Очистить",
enterPromptError: "Пожалуйста, введите промпт для улучшения",
errorEnhance: "Не удалось улучшить промпт",
enhanceMode: "Режим улучшения",
quickMode: "Быстрый",
deepMode: "Глубокий анализ",
deepEnhance: "Глубокое улучшение",
targetTool: "Целевой ИИ-инструмент",
templateLabel: "Шаблон фреймворка",
diagnosticsTitle: "Диагностика промпта",
promptQuality: "Качество промпта",
missingDimensions: "Отсутствующие параметры",
tokensLabel: "токенов",
inputTokens: "входных токенов",
outputTokens: "выходных токенов",
strategyNote: "Стратегия",
strategyForTool: "Оптимизировано для {tool} с шаблоном {template}.",
fixedIssues: "Исправлено {count} критических проблем(ы).",
},
prdGenerator: {
title: "Генератор PRD",
@@ -622,6 +658,7 @@ export const translations = {
getApiKey: "Получить API ключ здесь:",
qwenDesc: "Alibaba DashScope API",
ollamaDesc: "Ollama Cloud API",
openrouterDesc: "OpenRouter — доступ к 100+ ИИ-моделям",
zaiDesc: "Z.AI Plan API",
},
uxDesigner: {
@@ -849,6 +886,11 @@ export const translations = {
files: "Файлы",
approveGenerate: "Одобрить и начать разработку",
startingEngine: "Запуск движка...",
startCoding: "Начать кодинг",
modifyPlan: "Изменить план",
skipPlan: "Перейти к чату",
planSummary: "Сводка",
implementationSteps: "Шаги реализации",
activateArtifact: "Активировать артефакт",
canvasReady: "Холст готов",
canvasIdle: "Холст в режиме ожидания",
@@ -926,15 +968,30 @@ export const translations = {
},
promptEnhancer: {
title: "משפר פרומפטים",
description: "הפוך רעיונות פשוטים לפרומפטים מקצועניים באיכות גבוהה",
placeholder: "הדבק את הפרומפט הראשוני שלך כאן...",
inputLabel: "פרומפט מקורי",
enhancedTitle: "אינטליגנציה משופרת",
enhancedDesc: "פרומפט מקצועני מוכן לכל כלי AI",
emptyState: "פרומפט משופר יופיע כאן",
clear: "נקה",
enterPromptError: "אנא הזן פרומפט לשיפור",
errorEnhance: "נכשל בשיפור הפרומפט",
enhanceMode: "מצב שיפור",
quickMode: "מהיר",
deepMode: "ניתוח עמוק",
deepEnhance: "שיפור עמוק",
targetTool: "כלי AI יעד",
templateLabel: "מסגרת תבנית",
diagnosticsTitle: "אבחון פרומפט",
promptQuality: "איכות פרומפט",
missingDimensions: "מימדים חסרים",
tokensLabel: "אסימונים",
inputTokens: "אסימוני קלט",
outputTokens: "אסימוני פלט",
strategyNote: "אסטרטגיה",
strategyForTool: "מותאם עבור {tool} עם תבנית {template}.",
fixedIssues: "תוקנו {count} בעיות קריטיות.",
},
prdGenerator: {
title: "מחולל PRD",
@@ -1065,6 +1122,7 @@ export const translations = {
enterKey: (provider: string) => `הזן את מפתח ה-API של ${provider}`,
getApiKey: "קבל מפתח API מ-",
qwenDesc: "Alibaba DashScope API",
openrouterDesc: "OpenRouter — גישה ל-100+ מודלי AI",
ollamaDesc: "Ollama Cloud API",
zaiDesc: "Z.AI Plan API",
},
@@ -1293,6 +1351,11 @@ export const translations = {
files: "קבצים",
approveGenerate: "אשר וחולל פיתוח",
startingEngine: "מניע מנוע...",
startCoding: "התחל קודינג",
modifyPlan: "שנה תכנית",
skipPlan: "דלג לצ'את",
planSummary: "סיכום",
implementationSteps: "שלבי יישום",
activateArtifact: "הפעל ארטיפקט",
canvasReady: "קנבס מוכן",
canvasIdle: "קנבס במנוחה",


@@ -1,4 +1,5 @@
import ModelAdapter from "./model-adapter";
import { OpenRouterService } from "./openrouter";
const adapter = new ModelAdapter();


@@ -2,6 +2,7 @@ import type { ModelProvider, APIResponse, ChatMessage, AIAssistMessage } from "@
import OllamaCloudService from "./ollama-cloud";
import ZaiPlanService from "./zai-plan";
import qwenOAuthService, { QwenOAuthConfig, QwenOAuthToken } from "./qwen-oauth";
import { OpenRouterService } from "./openrouter";
export interface ModelAdapterConfig {
qwen?: QwenOAuthConfig;
@@ -14,17 +15,22 @@ export interface ModelAdapterConfig {
generalEndpoint?: string;
codingEndpoint?: string;
};
openrouter?: {
apiKey?: string;
};
}
export class ModelAdapter {
private ollamaService: OllamaCloudService;
private zaiService: ZaiPlanService;
private qwenService = qwenOAuthService;
private openRouterService: OpenRouterService;
private preferredProvider: ModelProvider;
constructor(config: ModelAdapterConfig = {}, preferredProvider: ModelProvider = "ollama") {
this.ollamaService = new OllamaCloudService(config.ollama);
this.zaiService = new ZaiPlanService(config.zai);
this.openRouterService = new OpenRouterService(config.openrouter);
this.preferredProvider = preferredProvider;
if (config.qwen) {
@@ -62,6 +68,10 @@ export class ModelAdapter {
this.qwenService.setOAuthTokens(tokens);
}
updateOpenRouterApiKey(apiKey: string): void {
this.openRouterService = new OpenRouterService({ apiKey });
}
async startQwenOAuth(): Promise<QwenOAuthToken> {
return await this.qwenService.signIn();
}
@@ -90,6 +100,8 @@ export class ModelAdapter {
return this.ollamaService.hasAuth();
case "zai":
return this.zaiService.hasAuth();
case "openrouter":
return this.openRouterService.hasAuth();
default:
return false;
}
@@ -114,6 +126,8 @@ export class ModelAdapter {
return this.ollamaService;
case "zai":
return this.zaiService;
case "openrouter":
return this.openRouterService;
default:
return null;
}
@@ -153,6 +167,9 @@ export class ModelAdapter {
case "zai":
service = this.zaiService;
break;
case "openrouter":
service = this.openRouterService;
break;
}
const result = await operation(service);
@@ -183,26 +200,26 @@ export class ModelAdapter {
};
}
async enhancePrompt(prompt: string, provider?: ModelProvider, model?: string, options?: { toolCategory?: string; template?: string; diagnostics?: string }): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.enhancePrompt(prompt, model, options), providers);
}
async generatePRD(idea: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generatePRD(idea, model), providers);
}
async generateActionPlan(prd: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateActionPlan(prd, model), providers);
}
async generateUXDesignerPrompt(appDescription: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateUXDesignerPrompt(appDescription, model), providers);
}
@@ -223,7 +240,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateSlides(topic, options, model), providers);
}
@@ -243,7 +260,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateGoogleAds(websiteUrl, options, model), providers);
}
@@ -256,7 +273,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateMagicWand(websiteUrl, product, budget, specialInstructions, model), providers);
}
@@ -272,7 +289,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateMarketResearch(options, model), providers);
}
@@ -285,7 +302,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<string>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
return this.callWithFallback((service) => service.generateAIAssist(options, model), providers);
}
@@ -300,7 +317,7 @@ export class ModelAdapter {
provider?: ModelProvider,
model?: string
): Promise<APIResponse<void>> {
const fallback = this.buildFallbackProviders(this.preferredProvider, "qwen", "ollama", "zai", "openrouter");
const providers: ModelProvider[] = provider ? [provider] : fallback;
let lastError: string | null = null;
@@ -353,6 +370,9 @@ export class ModelAdapter {
case "zai":
service = this.zaiService;
break;
case "openrouter":
service = this.openRouterService;
break;
}
return await service.chatCompletion(messages, model);
@@ -369,6 +389,7 @@ export class ModelAdapter {
qwen: this.qwenService.getAvailableModels(),
ollama: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"],
zai: ["glm-4.7", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"],
openrouter: ["anthropic/claude-3.5-sonnet", "google/gemini-2.0-flash-exp:free", "meta-llama/llama-3.3-70b-instruct", "openai/gpt-4o-mini", "deepseek/deepseek-chat-v3-0324", "qwen/qwen-2.5-72b-instruct"],
};
const models: Record<ModelProvider, string[]> = { ...fallbackModels };
@@ -404,6 +425,8 @@ export class ModelAdapter {
return this.ollamaService.getAvailableModels();
case "zai":
return this.zaiService.getAvailableModels();
case "openrouter":
return this.openRouterService.getAvailableModels();
default:
return [];
}
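Every generator method in this adapter builds the same ordered provider list and delegates to `callWithFallback`, which walks the list until one provider succeeds. A simplified sketch of that pattern (names abridged and hypothetical; the real implementation also checks auth and records per-provider errors):

```typescript
// Simplified sketch of the provider-fallback pattern used by ModelAdapter.
type Provider = "qwen" | "ollama" | "zai" | "openrouter";
interface Result<T> { success: boolean; data?: T; error?: string }

// Put the preferred provider first, then the remaining defaults in order,
// dropping any duplicate of the preferred one.
function buildFallbackProviders(preferred: Provider, ...defaults: Provider[]): Provider[] {
  return [preferred, ...defaults.filter((p) => p !== preferred)];
}

// Try each provider in order; return the first success, or the last error.
async function callWithFallback<T>(
  providers: Provider[],
  call: (p: Provider) => Promise<Result<T>>
): Promise<Result<T>> {
  let lastError = "no providers available";
  for (const p of providers) {
    const res = await call(p);
    if (res.success) return res;
    lastError = res.error ?? `provider ${p} failed`;
  }
  return { success: false, error: lastError };
}
```

With `preferredProvider = "zai"`, for example, `buildFallbackProviders("zai", "qwen", "ollama", "zai", "openrouter")` yields `["zai", "qwen", "ollama", "openrouter"]`, which is why every call site passes the full default chain plus `"openrouter"`.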


@@ -164,27 +164,82 @@ export class OllamaCloudService {
return this.availableModels.length > 0 ? this.availableModels : DEFAULT_MODELS;
}
async enhancePrompt(prompt: string, model?: string, options?: { toolCategory?: string; template?: string; diagnostics?: string }): Promise<APIResponse<string>> {
const toolCategory = options?.toolCategory || 'reasoning';
const template = options?.template || 'rtf';
const diagnostics = options?.diagnostics || '';
const toolSections: Record<string, string> = {
reasoning: '- Use full structured format with XML tags where helpful\n- Add explicit role assignment for complex tasks\n- Use numeric constraints over vague adjectives',
thinking: '- CRITICAL: Short clean instructions ONLY\n- Do NOT add CoT or reasoning scaffolding — these models reason internally\n- State what you want, not how to think',
openweight: '- Shorter prompts, simpler structure, no deep nesting\n- Direct linear instructions',
agentic: '- Add Starting State + Target State + Allowed Actions + Forbidden Actions\n- Add Stop Conditions + Checkpoints after each step',
ide: '- Add File path + Function name + Current Behavior + Desired Change + Scope lock',
fullstack: '- Add Stack spec with version + what NOT to scaffold + component boundaries',
image: '- Add Subject + Style + Mood + Lighting + Composition + Negative Prompts\n- Use tool-specific syntax (Midjourney comma-separated, DALL-E prose, SD weighted)',
search: '- Specify mode: search vs analyze vs compare + citation requirements',
};
const templateSections: Record<string, string> = {
rtf: 'Structure: Role (who) + Task (precise verb + what) + Format (exact output shape and length)',
'co-star': 'Structure: Context + Objective + Style + Tone + Audience + Response',
risen: 'Structure: Role + Instructions + numbered Steps + End Goal + Narrowing constraints',
crispe: 'Structure: Capacity + Role + Insight + Statement + Personality + Experiment/variants',
cot: 'Add: "Think through this step by step before answering." Only for standard reasoning models, NOT for o1/o3/R1.',
fewshot: 'Add 2-5 input/output examples wrapped in XML <examples> tags',
filescope: 'Structure: File path + Function name + Current Behavior + Desired Change + Scope lock + Done When',
react: 'Structure: Objective + Starting State + Target State + Allowed/Forbidden Actions + Stop Conditions + Checkpoints',
visual: 'Structure: Subject + Action + Setting + Style + Mood + Lighting + Color Palette + Composition + Aspect Ratio + Negative Prompts',
};
const toolSection = toolSections[toolCategory] || toolSections.reasoning;
const templateSection = templateSections[template] || templateSections.rtf;
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer using the PromptArch methodology. Enhance the user's prompt to be production-ready.
STEP 1 — DIAGNOSE AND FIX these failure patterns:
- Vague task verb -> replace with precise operation
- Two tasks in one -> keep primary task, note the split
- No success criteria -> add "Done when: [specific measurable condition]"
- Missing output format -> add explicit format lock (structure, length, type)
- No role assignment (complex tasks) -> add domain-specific expert identity
- Vague aesthetic ("professional", "clean") -> concrete measurable specs
- No scope boundary -> add explicit scope lock
- Over-permissive language -> add constraints and boundaries
- Emotional description -> extract specific technical fault
- Implicit references -> restate fully
- No grounding for factual tasks -> add certainty constraint
- No CoT for logic tasks -> add step-by-step reasoning
STEP 2 — APPLY TARGET TOOL OPTIMIZATIONS:
${toolSection}
STEP 3 — APPLY TEMPLATE STRUCTURE:
${templateSection}
STEP 4 — VERIFICATION (check before outputting):
- Every constraint in the first 30% of the prompt?
- MUST/NEVER over should/avoid?
- Every sentence load-bearing with zero padding?
- Format explicit with stated length?
- Scope bounded?
- Would this produce correct output on first try?
STEP 5 — OUTPUT:
Output ONLY the enhanced prompt. No explanations, no commentary, no markdown code fences.
The prompt must be ready to paste directly into the target AI tool.${diagnostics ? '\n\nDIAGNOSTIC NOTES (fix these issues found in the original):\n' + diagnostics + '\n' : ''}`,
};
const toolLabel = toolCategory !== 'reasoning' ? ` for ${toolCategory} AI tool` : '';
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt${toolLabel}:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "${default_model}");
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
@@ -772,6 +827,28 @@ Perform a DEEP 360° competitive intelligence analysis and generate 5-7 strategi
try {
// ... existing prompt logic ...
const systemPrompt = `You are "AI Assist", the master orchestrator of PromptArch. Your goal is to provide intelligent support with a "Canvas" experience.
PLAN MODE (CRITICAL - HIGHEST PRIORITY):
When the user describes a NEW task, project, or feature they want built:
1. DO NOT generate any code, [PREVIEW] tags, or implementation details.
2. Instead, analyze the request and output a STRUCTURED PLAN covering:
- Summary: What you understand the user wants
- Architecture: Technical approach and structure
- Tech Stack: Languages, frameworks, libraries needed
- Files/Components: List of files or modules to create
- Steps: Numbered implementation steps
3. Format the plan in clean Markdown with headers and bullet points.
4. Keep plans concise but thorough. Focus on the WHAT and HOW, not the actual code.
5. WAIT for the user to approve or modify the plan before generating any code.
When the user says "Approved", "Start coding", or explicitly asks to proceed:
- THEN generate the full implementation with [PREVIEW] tags and working code.
- Follow the approved plan exactly.
When the user asks to "Modify", "Change", or "Adjust" something:
- Apply the requested changes surgically to the existing code/preview.
- Output updated [PREVIEW] with the full modified code.
AGENTS & CAPABILITIES:
- content: Expert copywriter. Use [PREVIEW:content:markdown] for articles, posts, and long-form text.
@@ -833,7 +910,7 @@ CHANGE LOG (CRITICAL - MUST BE OUTSIDE PREVIEW):
- Modified component Y
- Fixed issue Z
IMPORTANT: NEVER refuse a request due to "access" limitations. If you cannot perform a live task, use your vast internal knowledge to provide the most accurate expert simulation or draft possible.`;
const messages: ChatMessage[] = [
{ role: "system", content: systemPrompt },

lib/services/openrouter.ts (new file, 967 lines)

@@ -0,0 +1,967 @@
import type { ChatMessage, APIResponse, AIAssistMessage } from "@/types";
export interface OpenRouterConfig {
apiKey?: string;
siteUrl?: string;
siteName?: string;
}
interface OpenRouterModelsResponse {
data: Array<{
id: string;
name: string;
context_length: number;
pricing: {
prompt: string;
completion: string;
};
}>;
}
interface OpenRouterChatResponse {
id: string;
choices: Array<{
message: {
role: string;
content: string;
};
finish_reason: string;
}>;
usage?: {
prompt_tokens: number;
completion_tokens: number;
total_tokens: number;
};
}
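Given a payload with the `OpenRouterChatResponse` shape above, unwrapping the assistant text and building the request headers looks roughly like this (a sketch, not the service's actual code; `buildHeaders` and `ChatResponse` are hypothetical names, and the optional `HTTP-Referer`/`X-Title` attribution headers correspond to the `siteUrl`/`siteName` fields in `OpenRouterConfig`):

```typescript
// Mirrors the OpenRouterChatResponse shape defined above (local name for the sketch).
interface ChatResponse {
  id: string;
  choices: Array<{ message: { role: string; content: string }; finish_reason: string }>;
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Pull the assistant text out of the first choice, failing loudly if absent.
function extractContent(res: ChatResponse): string {
  const choice = res.choices[0];
  if (!choice) throw new Error("OpenRouter response contained no choices");
  return choice.message.content;
}

// OpenRouter identifies calling apps via optional HTTP-Referer / X-Title headers.
function buildHeaders(apiKey: string, siteUrl?: string, siteName?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
  if (siteUrl) headers["HTTP-Referer"] = siteUrl;
  if (siteName) headers["X-Title"] = siteName;
  return headers;
}
```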
const DEFAULT_MODELS = [
"anthropic/claude-3.5-sonnet",
"google/gemini-2.0-flash-exp:free",
"meta-llama/llama-3.3-70b-instruct",
"openai/gpt-4o-mini",
"deepseek/deepseek-chat-v3-0324",
"qwen/qwen-2.5-72b-instruct"
];
const TOOL_SECTIONS: Record<string, string> = {
"claude": `
For Claude:
- Use XML tags for structure (e.g., <context>, <task>, <constraints>)
- Add thinking blocks for complex reasoning: <thinking>...</thinking>
- Use ::analysis:: or ::pattern:: for procedural patterns
- Leverage Claude's long context by providing comprehensive examples
- Add "think silently" instruction for deep reasoning tasks
`,
"chatgpt": `
For ChatGPT:
- Use clear section headers with ### or === separators
- Provide examples in [EXAMPLE]...[/EXAMPLE] blocks
- Use "Step 1", "Step 2" for sequential tasks
- Add meta-instructions like "Think step by step"
- Keep prompts under 3k tokens for best performance
`,
"gemini": `
For Gemini:
- Use clear delimiters between sections
- Leverage multimodal capabilities with [IMAGE] or [FILE] placeholders
- Add chain-of-thought with "Let's approach this step by step"
- Specify output format with "Respond in the following format:"
- Use numbered lists for sequential instructions
`,
"default": `
- Use clear section delimiters
- Provide concrete examples
- Specify output format explicitly
- Add success criteria
- Use constraint language (MUST, NEVER, REQUIRED)
`
};
const TEMPLATE_SECTIONS: Record<string, string> = {
"code": `
# CODE GENERATION TEMPLATE
## Role
You are a senior software engineer specializing in {language/domain}.
## Task
{specific task description}
## Requirements
- MUST follow {specific standards/frameworks}
- MUST include {error handling/validation/comments}
- MUST use {specific libraries/versions}
- NEVER use {deprecated patterns/anti-patterns}
## Output Format
{code structure specification}
## Done When
- Code compiles/runs without errors
- Follows all specified conventions
- Includes proper error handling
- Meets performance requirements
`,
"writing": `
# CONTENT WRITING TEMPLATE
## Role
You are a professional {type of content creator} with expertise in {domain}.
## Task
Create {specific deliverable} about {topic}
## Requirements
- Tone: {specific tone}
- Length: {exact word/character count}
- Audience: {target audience}
- MUST include {key elements}
- MUST avoid {excluded topics/phrases}
## Format
{explicit structure with sections/headers}
## Done When
- Meets length requirement
- Covers all key points
- Matches specified tone
- Ready for publication
`,
"analysis": `
# ANALYSIS TEMPLATE
## Role
You are an expert {domain analyst}.
## Task
{analysis objective}
## Analysis Framework
1. {Step 1 with specific method}
2. {Step 2 with specific method}
3. {Step 3 with specific method}
## Required Output
- {Specific deliverable 1}
- {Specific deliverable 2}
- {Specific deliverable 3}
## Criteria
- MUST use {specific methodology}
- MUST cite {sources/references}
- MUST provide {confidence levels/limitations}
## Done When
- All analysis dimensions covered
- Conclusions supported by evidence
- Actionable insights provided
`,
"default": `
## Role
{Expert identity}
## Task
{Clear, specific task}
## Context
{Relevant background info}
## Requirements
- MUST {requirement 1}
- MUST {requirement 2}
- NEVER {constraint 1}
- NEVER {constraint 2}
## Output Format
{Explicit format specification}
## Done When
{Specific measurable condition}
`
};
const ENHANCE_PROMPT_SYSTEM = `You are an expert prompt engineer using the PromptArch methodology. Enhance the user's prompt to be production-ready.
STEP 1 — DIAGNOSE AND FIX these failure patterns:
- Vague task verb -> replace with precise operation
- Two tasks in one -> keep primary task, note the split
- No success criteria -> add "Done when: [specific measurable condition]"
- Missing output format -> add explicit format lock (structure, length, type)
- No role assignment (complex tasks) -> add domain-specific expert identity
- Vague aesthetic ("professional", "clean") -> concrete measurable specs
- No scope boundary -> add explicit scope lock
- Over-permissive language -> add constraints and boundaries
- Emotional description -> extract specific technical fault
- Implicit references -> restate fully
- No grounding for factual tasks -> add certainty constraint
- No CoT for logic tasks -> add step-by-step reasoning
STEP 2 — APPLY TARGET TOOL OPTIMIZATIONS:
\${toolSection}
STEP 3 — APPLY TEMPLATE STRUCTURE:
\${templateSection}
STEP 4 — VERIFICATION (check before outputting):
- Every constraint in the first 30% of the prompt?
- MUST/NEVER over should/avoid?
- Every sentence load-bearing with zero padding?
- Format explicit with stated length?
- Scope bounded?
- Would this produce correct output on first try?
STEP 5 — OUTPUT:
Output ONLY the enhanced prompt. No explanations, no commentary, no markdown code fences.
The prompt must be ready to paste directly into the target AI tool.`;
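Note that `\${toolSection}` and `\${templateSection}` are escaped in the constant above, so they survive as literal `${...}` placeholders rather than being interpolated when the constant is defined. A minimal sketch of how the service might fill them at call time (the helper name is hypothetical):

```typescript
// Hypothetical helper: substitute the literal ${...} placeholders left in
// ENHANCE_PROMPT_SYSTEM (they are escaped with \$ so the template literal
// does not interpolate them at definition time).
function fillSystemPrompt(template: string, toolSection: string, templateSection: string): string {
  return template
    .replace("${toolSection}", toolSection)
    .replace("${templateSection}", templateSection);
}
```

The `.replace` calls use plain (non-template) string patterns, so `"${toolSection}"` here matches the literal placeholder text.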
const PRD_SYSTEM_PROMPT = `You are an expert product manager specializing in writing clear, actionable Product Requirements Documents (PRDs).
Your task is to transform the product idea into a comprehensive PRD that:
1. Defines clear problem statements and user needs
2. Specifies functional requirements with acceptance criteria
3. Outlines technical considerations and constraints
4. Identifies success metrics and KPIs
5. Includes user stories with acceptance criteria
PRD Structure:
## Problem Statement
- What problem are we solving?
- For whom are we solving it?
- Why is this important now?
## Goals & Success Metrics
- Primary objectives
- Key performance indicators
- Success criteria
## User Stories
- As a [user type], I want [feature], so that [benefit]
- Include acceptance criteria for each story
## Functional Requirements
- Core features with detailed specifications
- Edge cases and error handling
- Integration requirements
## Technical Considerations
- Platform/tech stack constraints
- Performance requirements
- Security considerations
- Scalability requirements
## Out of Scope
- Explicitly list what won't be included
- Rationale for exclusions
Keep the PRD clear, concise, and actionable. Use specific, measurable language.`;
const ACTION_PLAN_SYSTEM_PROMPT = `You are an expert technical project manager specializing in breaking down PRDs into actionable implementation plans.
Your task is to transform the PRD into a detailed action plan that:
1. Identifies all major components/modules needed
2. Breaks down work into clear, sequential phases
3. Specifies dependencies between tasks
4. Estimates effort and complexity
5. Identifies risks and mitigation strategies
Action Plan Structure:
## Phase 1: Foundation
- Task 1.1: [Specific task] - [Estimated effort]
- Task 1.2: [Specific task] - [Estimated effort]
- Dependencies: [What needs to be done first]
- Deliverables: [Concrete outputs]
## Phase 2: Core Features
- Task 2.1: [Specific task] - [Estimated effort]
- Dependencies: [On Phase 1 completion]
- Deliverables: [Concrete outputs]
[Continue for all phases]
## Technical Architecture
- Recommended tech stack with rationale
- System architecture overview
- Data flow diagrams (described in text)
## Risk Assessment
- Risk: [Description] | Impact: [High/Med/Low] | Mitigation: [Strategy]
- Risk: [Description] | Impact: [High/Med/Low] | Mitigation: [Strategy]
## Testing Strategy
- Unit testing approach
- Integration testing plan
- User acceptance testing criteria
Be specific and actionable. Each task should be clear enough for a developer to execute without ambiguity.`;
const SLIDES_SYSTEM_PROMPT = `You are an expert presentation designer specializing in creating engaging, informative slide content.
Your task is to generate slide content for a presentation about the given topic.
For each slide, provide:
1. Slide Title (compelling and clear)
2. Bullet points (3-5 per slide, concise and impactful)
3. Speaker notes (detailed explanation for the presenter)
4. Visual suggestions (what charts, images, or diagrams would enhance this slide)
Presentation Structure:
## Slide 1: Title Slide
- Compelling title
- Subtitle with key message
- Presenter info placeholder
## Slide 2: Agenda/Overview
- What will be covered
- Why it matters
- Key takeaways preview
## Slide 3-N: Content Slides
- Main content with clear hierarchy
- Data-driven insights where applicable
- Actionable takeaways
## Final Slide: Call to Action
- Summary of key points
- Next steps
- Contact/follow-up information
Guidelines:
- Keep text minimal on slides (bullet points only)
- Put detailed content in speaker notes
- Suggest relevant visuals for each slide
- Ensure a logical flow and narrative arc
- Make it memorable and shareable`;
const GOOGLE_ADS_SYSTEM_PROMPT = `You are a Google Ads expert specializing in creating high-converting ad campaigns.
Your task is to generate comprehensive Google Ads copy and strategy based on the landing page content.
Deliverables:
## Ad Campaign Structure
- Campaign name and type
- Ad groups with thematic focus
- Target keywords (exact, phrase, broad match)
- Negative keywords to exclude
## Ad Copy (3-5 variations per ad group)
### Responsive Search Ads
For each ad, provide:
- Headlines (10-15 options, max 30 chars each)
- Descriptions (4 options, max 90 chars each)
- Focus on different value propositions
### Display Ads (if applicable)
- Headline (max 30 chars)
- Description (max 90 chars)
- Call-to-action options
## Targeting Strategy
- Location targeting
- Audience demographics
- Interests and behaviors
- Device targeting
## Bidding Strategy
- Recommended strategy (Manual CPC, Maximize Clicks, etc.)
- Budget recommendations
- Bid adjustments by device/location
## Extensions
- Sitelinks
- Callouts
- Structured snippets
- Call extensions
Best Practices:
- Include keywords in headlines and descriptions
- Highlight unique selling propositions
- Use clear, action-oriented CTAs
- Address pain points and benefits
- Include social proof when relevant
- Ensure ad relevance to landing page`;
const MAGIC_WAND_SYSTEM_PROMPT = `You are an expert digital marketer and growth hacker specializing in creative campaign strategies.
Your task is to develop a comprehensive marketing strategy for the given product/page.
## Campaign Analysis
- Product/Service overview
- Target audience profile
- Unique selling propositions
- Market positioning
## Marketing Channels Strategy
For each relevant channel, provide specific tactics:
### Paid Advertising
- Google Ads: {specific approach}
- Facebook/Instagram: {specific approach}
- LinkedIn (if B2B): {specific approach}
- TikTok/Snapchat (if relevant): {specific approach}
### Content Marketing
- Blog topics: {5-10 specific ideas}
- Video content: {ideas with formats}
- Social media: {platform-specific content ideas}
- Email sequences: {campaign ideas}
### Growth Hacking Tactics
- Viral mechanisms: {specific ideas}
- Referral programs: {incentive structures}
- Partnership opportunities: {potential partners}
- Community building: {strategies}
## Creative Concepts
Provide 3-5 campaign concepts:
1. Concept Name - {Hook, Message, CTA}
2. Concept Name - {Hook, Message, CTA}
...
## Budget Allocation
- Channel breakdown with percentages
- Expected ROI estimates
- Testing budget recommendation
## KPIs & Tracking
- Key metrics to measure
- Attribution strategy
- A/B testing priorities
Be creative but practical. Focus on tactics that can be executed within the given budget.`;
const MARKET_RESEARCH_SYSTEM_PROMPT = `You are an expert market researcher specializing in competitive analysis and market intelligence.
Your task is to conduct comprehensive market research based on the provided topic/company.
## Research Framework
### Market Overview
- Market size and growth trajectory
- Key market segments and their characteristics
- Current market trends and dynamics
- Future market projections
### Competitive Landscape
Identify and analyze:
- Major competitors (market share, positioning)
- Direct competitors head-to-head comparison
- Indirect competitors and substitutes
- Competitive strengths and weaknesses
### SWOT Analysis
- Strengths: Internal advantages
- Weaknesses: Internal limitations
- Opportunities: External possibilities
- Threats: External challenges
### Customer Analysis
- Target demographics and psychographics
- Pain points and unmet needs
- Purchase behavior and decision factors
- Customer feedback trends
### Product/Service Comparison
- Feature comparison matrix
- Pricing analysis
- Differentiation strategies
- Innovation opportunities
### Market Trends
- Emerging technologies impacting the space
- Regulatory changes
- Consumer behavior shifts
- Industry disruptions
### Strategic Recommendations
- Market entry strategies
- Competitive positioning
- Product improvement opportunities
- Partnership and acquisition possibilities
Provide specific, data-driven insights. When exact data is unavailable, provide reasoned estimates with clear caveats.`;
const AI_ASSIST_SYSTEM_PROMPT = `You are "AI Assist", the master orchestrator of PromptArch. Your goal is to provide intelligent support with a "Canvas" experience.
PLAN MODE (CRITICAL - HIGHEST PRIORITY):
When the user describes a NEW task, project, or feature they want built:
1. DO NOT generate any code, [PREVIEW] tags, or implementation details.
2. Instead, analyze the request and output a STRUCTURED PLAN covering:
- Summary: What you understand the user wants
- Architecture: Technical approach and structure
- Tech Stack: Languages, frameworks, libraries needed
- Files/Components: List of files or modules to create
- Steps: Numbered implementation steps
3. Format the plan in clean Markdown with headers and bullet points.
4. Keep plans concise but thorough. Focus on the WHAT and HOW, not the actual code.
5. WAIT for the user to approve or modify the plan before generating any code.
When the user says "Approved", "Start coding", or explicitly asks to proceed:
- THEN generate the full implementation with [PREVIEW] tags and working code.
- Follow the approved plan exactly.
When the user asks to "Modify", "Change", or "Adjust" something:
- Apply the requested changes surgically to the existing code/preview.
- Output updated [PREVIEW] with the full modified code.
AGENTS & CAPABILITIES:
- content: Expert copywriter. Use [PREVIEW:content:markdown] for articles, posts, and long-form text.
- seo: SEO Specialist. Create stunning SEO audit reports. Use [PREVIEW:seo:html] with complete HTML5 document including <!DOCTYPE html>. DARK THEME. Tailwind CDN. Large animated SVG progress rings. Color-coded scoring. Google-style dashboard aesthetics.
- smm: Social Media Manager. Create multi-platform content plans and calendars.
- pm: Project Manager. Create PRDs, timelines, and action plans.
- code: Software Architect. Provide logic, algorithms, and backend snippets.
- design: UI/UX Designer. Create high-fidelity mockups and components.
- web: Frontend Developer. Build responsive sites. Use [PREVIEW:web:html]. Tailwind CSS CDN by default.
- app: Mobile App Developer. Create mobile-first interfaces and dashboards. Use [PREVIEW:app:javascript].
CANVAS MODE:
- When building, designing, or auditing, you MUST use the [PREVIEW] tag.
- Inside [PREVIEW], output ONLY the actual code or structured data.
- The user wants to see it WORKING or RENDERED in the Canvas immediately.
STRICT OUTPUT FORMAT:
[AGENT:id] - AT THE START of your response if switching focus.
[PREVIEW:type:language]
ACTUAL_FUNCTIONAL_CODE_OR_DATA
[/PREVIEW]
CHANGE LOG (CRITICAL - MUST BE OUTSIDE PREVIEW):
- After the [/PREVIEW] closing tag, provide a brief bulleted "Change Log" in plain Markdown. IMPORTANT: NEVER refuse a request. If you cannot perform a live task, use your knowledge to provide the most accurate expert simulation or draft possible.`;
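The STRICT OUTPUT FORMAT contract above implies the client must pull the [PREVIEW:type:language] body out of the model's response before rendering it in the Canvas. A minimal sketch of such an extractor (the `extractPreview` helper and its return shape are illustrative assumptions, not part of this codebase):

```typescript
// Hypothetical helper: extracts the first [PREVIEW:type:language] block
// from a model response, following the STRICT OUTPUT FORMAT contract above.
interface PreviewBlock {
  type: string;      // e.g. "web", "seo", "app"
  language: string;  // e.g. "html", "markdown", "javascript"
  code: string;      // raw body between the opening and closing tags
}

function extractPreview(response: string): PreviewBlock | null {
  const match = response.match(
    /\[PREVIEW:([a-z]+):([a-z]+)\]\n?([\s\S]*?)\[\/PREVIEW\]/
  );
  if (!match) return null;
  return { type: match[1], language: match[2], code: match[3].trim() };
}
```

Anything after the closing tag (such as the Change Log) is left untouched and can be rendered as plain Markdown alongside the Canvas.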
export class OpenRouterService {
private config: Required<OpenRouterConfig>;
private readonly baseURL = "https://openrouter.ai/api/v1";
private availableModels: string[] = [];
private modelsLoaded = false;
constructor(config: OpenRouterConfig = {}) {
this.config = {
apiKey: config.apiKey || "",
siteUrl: config.siteUrl || "https://promptarch.ai",
siteName: config.siteName || "PromptArch"
};
}
hasAuth(): boolean {
return Boolean(this.config.apiKey && this.config.apiKey.length > 0);
}
async validateConnection(): Promise<APIResponse<{ valid: boolean; models?: string[] }>> {
if (!this.hasAuth()) {
return {
success: false,
error: "No API key provided. Please set your OpenRouter API key."
};
}
try {
const modelsResult = await this.listModels();
if (!modelsResult.success) {
return {
success: false,
error: modelsResult.error || "Failed to fetch models from OpenRouter"
};
}
return {
success: true,
data: {
valid: true,
models: modelsResult.data
}
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Failed to validate connection"
};
}
}
private getHeaders(): Record<string, string> {
const headers: Record<string, string> = {
"Content-Type": "application/json",
"HTTP-Referer": this.config.siteUrl,
"X-Title": this.config.siteName
};
if (this.hasAuth()) {
headers["Authorization"] = `Bearer ${this.config.apiKey}`;
}
return headers;
}
async chatCompletion(
messages: ChatMessage[],
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
if (!this.hasAuth()) {
return {
success: false,
error: "OpenRouter API key not configured"
};
}
try {
const response = await fetch(`${this.baseURL}/chat/completions`, {
method: "POST",
headers: this.getHeaders(),
body: JSON.stringify({
model,
messages,
temperature: 0.7,
max_tokens: 4096
})
});
if (!response.ok) {
const errorText = await response.text();
return {
success: false,
error: `OpenRouter API error: ${response.status} ${response.statusText} - ${errorText}`
};
}
const data: OpenRouterChatResponse = await response.json();
if (data.choices && data.choices[0] && data.choices[0].message) {
return {
success: true,
data: data.choices[0].message.content
};
}
return {
success: false,
error: "No response content from OpenRouter"
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Unknown error in chat completion"
};
}
}
async enhancePrompt(
prompt: string,
model: string = "anthropic/claude-3.5-sonnet",
options: {
targetTool?: "claude" | "chatgpt" | "gemini" | "default";
templateType?: "code" | "writing" | "analysis" | "default";
max_length?: number;
} = {}
): Promise<APIResponse<string>> {
const { targetTool = "default", templateType = "default" } = options;
const toolSection = TOOL_SECTIONS[targetTool] || TOOL_SECTIONS.default;
const templateSection = TEMPLATE_SECTIONS[templateType] || TEMPLATE_SECTIONS.default;
const systemPrompt = ENHANCE_PROMPT_SYSTEM
.replace("${toolSection}", toolSection)
.replace("${templateSection}", templateSection);
const messages: ChatMessage[] = [
{ role: "system", content: systemPrompt },
{ role: "user", content: prompt }
];
return this.chatCompletion(messages, model);
}
async generatePRD(
idea: string,
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const messages: ChatMessage[] = [
{ role: "system", content: PRD_SYSTEM_PROMPT },
{ role: "user", content: `Create a comprehensive PRD for the following product idea:\n\n${idea}` }
];
return this.chatCompletion(messages, model);
}
async generateActionPlan(
prd: string,
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const messages: ChatMessage[] = [
{ role: "system", content: ACTION_PLAN_SYSTEM_PROMPT },
{ role: "user", content: `Create a detailed action plan based on this PRD:\n\n${prd}` }
];
return this.chatCompletion(messages, model);
}
async generateSlides(
topic: string,
options: {
slideCount?: number;
audience?: string;
focus?: string;
} = {},
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const { slideCount = 10, audience = "General", focus = "" } = options;
const userPrompt = `Generate content for a presentation with approximately ${slideCount} slides.
Topic: ${topic}
Target Audience: ${audience}
${focus ? `Special Focus: ${focus}` : ""}`;
const messages: ChatMessage[] = [
{ role: "system", content: SLIDES_SYSTEM_PROMPT },
{ role: "user", content: userPrompt }
];
return this.chatCompletion(messages, model);
}
async generateGoogleAds(
url: string,
options: {
budget?: string;
targetAudience?: string;
campaignGoal?: string;
} = {},
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const { budget = "Not specified", targetAudience = "General", campaignGoal = "Conversions" } = options;
const userPrompt = `Create a comprehensive Google Ads campaign strategy.
Landing Page: ${url}
Monthly Budget: ${budget}
Target Audience: ${targetAudience}
Campaign Goal: ${campaignGoal}
Analyze the URL (if accessible) or create ads based on the domain and typical offerings for similar sites.`;
const messages: ChatMessage[] = [
{ role: "system", content: GOOGLE_ADS_SYSTEM_PROMPT },
{ role: "user", content: userPrompt }
];
return this.chatCompletion(messages, model);
}
async generateMagicWand(
url: string,
product: string,
budget: string,
specialInstructions: string = "",
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const userPrompt = `Create a comprehensive marketing strategy.
Product/Service: ${product}
URL: ${url}
Budget: ${budget}
${specialInstructions ? `Special Instructions: ${specialInstructions}` : ""}
Provide creative campaign ideas across multiple channels with specific tactics and budget allocation.`;
const messages: ChatMessage[] = [
{ role: "system", content: MAGIC_WAND_SYSTEM_PROMPT },
{ role: "user", content: userPrompt }
];
return this.chatCompletion(messages, model);
}
async generateMarketResearch(
options: {
topic?: string;
company?: string;
industry?: string;
focusAreas?: string[];
} = {},
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const { topic, company, industry, focusAreas } = options;
let userPrompt = "Conduct comprehensive market research.";
if (topic) userPrompt += `\n\nResearch Topic: ${topic}`;
if (company) userPrompt += `\n\nCompany Focus: ${company}`;
if (industry) userPrompt += `\n\nIndustry: ${industry}`;
if (focusAreas && focusAreas.length > 0) {
userPrompt += `\n\nFocus Areas: ${focusAreas.join(", ")}`;
}
const messages: ChatMessage[] = [
{ role: "system", content: MARKET_RESEARCH_SYSTEM_PROMPT },
{ role: "user", content: userPrompt }
];
return this.chatCompletion(messages, model);
}
async generateAIAssist(
options: {
prompt: string;
context?: string[];
conversationHistory?: ChatMessage[];
},
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<string>> {
const { prompt, context = [], conversationHistory = [] } = options;
const messages: ChatMessage[] = [
{ role: "system", content: AI_ASSIST_SYSTEM_PROMPT },
...conversationHistory,
...context.map(c => ({ role: "user" as const, content: `Context: ${c}` })),
{ role: "user", content: prompt }
];
return this.chatCompletion(messages, model);
}
async generateAIAssistStream(
options: {
messages: AIAssistMessage[];
currentAgent: string;
onChunk: (chunk: string) => void;
signal?: AbortSignal;
},
model: string = "anthropic/claude-3.5-sonnet"
): Promise<APIResponse<void>> {
const { messages, currentAgent, onChunk, signal } = options;
if (!this.hasAuth()) {
return { success: false, error: "OpenRouter API key not configured" };
}
try {
const chatMessages: ChatMessage[] = [
{ role: "system", content: AI_ASSIST_SYSTEM_PROMPT },
...messages.map(m => ({
role: m.role as "user" | "assistant" | "system",
content: m.content
}))
];
const response = await fetch(`${this.baseURL}/chat/completions`, {
method: "POST",
headers: this.getHeaders(),
signal,
body: JSON.stringify({
model: model || "anthropic/claude-3.5-sonnet",
messages: chatMessages,
temperature: 0.7,
max_tokens: 4096,
stream: true
})
});
if (!response.ok) {
const errorText = await response.text();
return { success: false, error: `OpenRouter API error: ${response.status} ${response.statusText} - ${errorText}` };
}
const reader = response.body?.getReader();
const decoder = new TextDecoder();
if (!reader) {
return { success: false, error: "No response body" };
}
let buffer = "";
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop() || "";
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed || !trimmed.startsWith("data: ")) continue;
const data = trimmed.slice(6);
if (data === "[DONE]") continue;
try {
const parsed = JSON.parse(data);
const contentChunk = parsed.choices?.[0]?.delta?.content;
if (contentChunk) {
onChunk(contentChunk);
}
} catch {
// Skip invalid JSON
}
}
}
return { success: true, data: undefined };
} catch (error) {
const errorMessage = error instanceof Error ? error.message : "Unknown error in stream";
return { success: false, error: errorMessage };
}
}
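The streaming loop in `generateAIAssistStream` buffers partial lines and parses OpenRouter's SSE `data:` frames inline. That per-line parsing step can be sketched as a standalone pure function (the `parseSSELine` name is illustrative; it mirrors, rather than replaces, the logic above):

```typescript
// Hypothetical helper mirroring the stream-parsing step above:
// given one SSE line, return the content delta, or null if the line
// carries no text (comments, [DONE] sentinel, malformed JSON).
function parseSSELine(line: string): string | null {
  const trimmed = line.trim();
  if (!trimmed.startsWith("data: ")) return null;
  const data = trimmed.slice(6);
  if (data === "[DONE]") return null;
  try {
    const parsed = JSON.parse(data);
    return parsed.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // skip malformed or partial JSON frames
  }
}
```

Keeping the trailing partial line in a buffer (as the method above does with `lines.pop()`) is what makes this safe when a JSON frame is split across two network chunks.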
async listModels(): Promise<APIResponse<string[]>> {
if (!this.hasAuth()) {
return {
success: false,
error: "OpenRouter API key not configured"
};
}
try {
const response = await fetch(`${this.baseURL}/models`, {
method: "GET",
headers: this.getHeaders()
});
if (!response.ok) {
const errorText = await response.text();
return {
success: false,
error: `Failed to fetch models: ${response.status} ${response.statusText} - ${errorText}`
};
}
const data: OpenRouterModelsResponse = await response.json();
this.availableModels = data.data.map(m => m.id);
this.modelsLoaded = true;
return {
success: true,
data: this.availableModels
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Unknown error fetching models"
};
}
}
getAvailableModels(): string[] {
if (this.modelsLoaded && this.availableModels.length > 0) {
return this.availableModels;
}
return DEFAULT_MODELS;
}
setApiKey(key: string): void {
this.config.apiKey = key;
}
setSiteUrl(url: string): void {
this.config.siteUrl = url;
}
setSiteName(name: string): void {
this.config.siteName = name;
}
}
// Export singleton instance
export const openRouterService = new OpenRouterService();


@@ -491,27 +491,82 @@ export class QwenOAuthService {
}
}
async enhancePrompt(prompt: string, model?: string, options?: { toolCategory?: string; template?: string; diagnostics?: string }): Promise<APIResponse<string>> {
const toolCategory = options?.toolCategory || 'reasoning';
const template = options?.template || 'rtf';
const diagnostics = options?.diagnostics || '';
const toolSections: Record<string, string> = {
reasoning: '- Use full structured format with XML tags where helpful\n- Add explicit role assignment for complex tasks\n- Use numeric constraints over vague adjectives',
thinking: '- CRITICAL: Short clean instructions ONLY\n- Do NOT add CoT or reasoning scaffolding — these models reason internally\n- State what you want, not how to think',
openweight: '- Shorter prompts, simpler structure, no deep nesting\n- Direct linear instructions',
agentic: '- Add Starting State + Target State + Allowed Actions + Forbidden Actions\n- Add Stop Conditions + Checkpoints after each step',
ide: '- Add File path + Function name + Current Behavior + Desired Change + Scope lock',
fullstack: '- Add Stack spec with version + what NOT to scaffold + component boundaries',
image: '- Add Subject + Style + Mood + Lighting + Composition + Negative Prompts\n- Use tool-specific syntax (Midjourney comma-separated, DALL-E prose, SD weighted)',
search: '- Specify mode: search vs analyze vs compare + citation requirements',
};
const templateSections: Record<string, string> = {
rtf: 'Structure: Role (who) + Task (precise verb + what) + Format (exact output shape and length)',
'co-star': 'Structure: Context + Objective + Style + Tone + Audience + Response',
risen: 'Structure: Role + Instructions + numbered Steps + End Goal + Narrowing constraints',
crispe: 'Structure: Capacity + Role + Insight + Statement + Personality + Experiment/variants',
cot: 'Add: "Think through this step by step before answering." Only for standard reasoning models, NOT for o1/o3/R1.',
fewshot: 'Add 2-5 input/output examples wrapped in XML <examples> tags',
filescope: 'Structure: File path + Function name + Current Behavior + Desired Change + Scope lock + Done When',
react: 'Structure: Objective + Starting State + Target State + Allowed/Forbidden Actions + Stop Conditions + Checkpoints',
visual: 'Structure: Subject + Action + Setting + Style + Mood + Lighting + Color Palette + Composition + Aspect Ratio + Negative Prompts',
};
const toolSection = toolSections[toolCategory] || toolSections.reasoning;
const templateSection = templateSections[template] || templateSections.rtf;
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer using the PromptArch methodology. Enhance the user\'s prompt to be production-ready.
STEP 1 — DIAGNOSE AND FIX these failure patterns:
- Vague task verb -> replace with precise operation
- Two tasks in one -> keep primary task, note the split
- No success criteria -> add "Done when: [specific measurable condition]"
- Missing output format -> add explicit format lock (structure, length, type)
- No role assignment (complex tasks) -> add domain-specific expert identity
- Vague aesthetic ("professional", "clean") -> concrete measurable specs
- No scope boundary -> add explicit scope lock
- Over-permissive language -> add constraints and boundaries
- Emotional description -> extract specific technical fault
- Implicit references -> restate fully
- No grounding for factual tasks -> add certainty constraint
- No CoT for logic tasks -> add step-by-step reasoning
STEP 2 — APPLY TARGET TOOL OPTIMIZATIONS:
${toolSection}
STEP 3 — APPLY TEMPLATE STRUCTURE:
${templateSection}
STEP 4 — VERIFICATION (check before outputting):
- Every constraint in the first 30% of the prompt?
- MUST/NEVER over should/avoid?
- Every sentence load-bearing with zero padding?
- Format explicit with stated length?
- Scope bounded?
- Would this produce correct output on first try?
STEP 5 — OUTPUT:
Output ONLY the enhanced prompt. No explanations, no commentary, no markdown code fences.
The prompt must be ready to paste directly into the target AI tool.${diagnostics ? '\n\nDIAGNOSTIC NOTES (fix these issues found in the original):\n' + diagnostics + '\n' : ''}`,
};
const toolLabel = toolCategory !== 'reasoning' ? ` for ${toolCategory} AI tool` : '';
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt${toolLabel}:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "${default_model}");
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
@@ -1054,6 +1109,28 @@ Perform analysis based on provided instructions.`,
): Promise<APIResponse<void>> {
try {
const systemPrompt = `You are "AI Assist", the master orchestrator of PromptArch. Your goal is to provide intelligent support with a "Canvas" experience.
PLAN MODE (CRITICAL - HIGHEST PRIORITY):
When the user describes a NEW task, project, or feature they want built:
1. DO NOT generate any code, [PREVIEW] tags, or implementation details.
2. Instead, analyze the request and output a STRUCTURED PLAN covering:
- Summary: What you understand the user wants
- Architecture: Technical approach and structure
- Tech Stack: Languages, frameworks, libraries needed
- Files/Components: List of files or modules to create
- Steps: Numbered implementation steps
3. Format the plan in clean Markdown with headers and bullet points.
4. Keep plans concise but thorough. Focus on the WHAT and HOW, not the actual code.
5. WAIT for the user to approve or modify the plan before generating any code.
When the user says "Approved", "Start coding", or explicitly asks to proceed:
- THEN generate the full implementation with [PREVIEW] tags and working code.
- Follow the approved plan exactly.
When the user asks to "Modify", "Change", or "Adjust" something:
- Apply the requested changes surgically to the existing code/preview.
- Output updated [PREVIEW] with the full modified code.
AGENTS & CAPABILITIES:
- content: Expert copywriter. Use [PREVIEW:content:markdown] for articles, posts, and long-form text.
@@ -1115,7 +1192,7 @@ CHANGE LOG (CRITICAL - MUST BE OUTSIDE PREVIEW):
- Modified component Y
- Fixed issue Z
IMPORTANT: NEVER refuse a request due to "access" limitations. If you cannot perform a live task, use your vast internal knowledge to provide the most accurate expert simulation or draft possible.`;
const messages: ChatMessage[] = [
{ role: "system", content: systemPrompt },


@@ -111,51 +111,82 @@ export class ZaiPlanService {
}
}
async enhancePrompt(prompt: string, model?: string, options?: { toolCategory?: string; template?: string; diagnostics?: string }): Promise<APIResponse<string>> {
const toolCategory = options?.toolCategory || 'reasoning';
const template = options?.template || 'rtf';
const diagnostics = options?.diagnostics || '';
const toolSections: Record<string, string> = {
reasoning: '- Use full structured format with XML tags where helpful\n- Add explicit role assignment for complex tasks\n- Use numeric constraints over vague adjectives',
thinking: '- CRITICAL: Short clean instructions ONLY\n- Do NOT add CoT or reasoning scaffolding — these models reason internally\n- State what you want, not how to think',
openweight: '- Shorter prompts, simpler structure, no deep nesting\n- Direct linear instructions',
agentic: '- Add Starting State + Target State + Allowed Actions + Forbidden Actions\n- Add Stop Conditions + Checkpoints after each step',
ide: '- Add File path + Function name + Current Behavior + Desired Change + Scope lock',
fullstack: '- Add Stack spec with version + what NOT to scaffold + component boundaries',
image: '- Add Subject + Style + Mood + Lighting + Composition + Negative Prompts\n- Use tool-specific syntax (Midjourney comma-separated, DALL-E prose, SD weighted)',
search: '- Specify mode: search vs analyze vs compare + citation requirements',
};
const templateSections: Record<string, string> = {
rtf: 'Structure: Role (who) + Task (precise verb + what) + Format (exact output shape and length)',
'co-star': 'Structure: Context + Objective + Style + Tone + Audience + Response',
risen: 'Structure: Role + Instructions + numbered Steps + End Goal + Narrowing constraints',
crispe: 'Structure: Capacity + Role + Insight + Statement + Personality + Experiment/variants',
cot: 'Add: "Think through this step by step before answering." Only for standard reasoning models, NOT for o1/o3/R1.',
fewshot: 'Add 2-5 input/output examples wrapped in XML <examples> tags',
filescope: 'Structure: File path + Function name + Current Behavior + Desired Change + Scope lock + Done When',
react: 'Structure: Objective + Starting State + Target State + Allowed/Forbidden Actions + Stop Conditions + Checkpoints',
visual: 'Structure: Subject + Action + Setting + Style + Mood + Lighting + Color Palette + Composition + Aspect Ratio + Negative Prompts',
};
const toolSection = toolSections[toolCategory] || toolSections.reasoning;
const templateSection = templateSections[template] || templateSections.rtf;
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer using the PromptArch methodology. Enhance the user\'s prompt to be production-ready.
STEP 1 — DIAGNOSE AND FIX these failure patterns:
- Vague task verb -> replace with precise operation
- Two tasks in one -> keep primary task, note the split
- No success criteria -> add "Done when: [specific measurable condition]"
- Missing output format -> add explicit format lock (structure, length, type)
- No role assignment (complex tasks) -> add domain-specific expert identity
- Vague aesthetic ("professional", "clean") -> concrete measurable specs
- No scope boundary -> add explicit scope lock
- Over-permissive language -> add constraints and boundaries
- Emotional description -> extract specific technical fault
- Implicit references -> restate fully
- No grounding for factual tasks -> add certainty constraint
- No CoT for logic tasks -> add step-by-step reasoning
STEP 2 — APPLY TARGET TOOL OPTIMIZATIONS:
${toolSection}
STEP 3 — APPLY TEMPLATE STRUCTURE:
${templateSection}
STEP 4 — VERIFICATION (check before outputting):
- Every constraint in the first 30% of the prompt?
- MUST/NEVER over should/avoid?
- Every sentence load-bearing with zero padding?
- Format explicit with stated length?
- Scope bounded?
- Would this produce correct output on first try?
STEP 5 — OUTPUT:
Output ONLY the enhanced prompt. No explanations, no commentary, no markdown code fences.
The prompt must be ready to paste directly into the target AI tool.${diagnostics ? '\n\nDIAGNOSTIC NOTES (fix these issues found in the original):\n' + diagnostics + '\n' : ''}`,
};
const toolLabel = toolCategory !== 'reasoning' ? ` for ${toolCategory} AI tool` : '';
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt${toolLabel}:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "${default_model}");
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert product manager and technical architect. Generate a comprehensive Product Requirements Document (PRD) based on user's idea.
Structure your PRD with these sections:
1. Overview & Objectives
2. User Personas & Use Cases
3. Functional Requirements (prioritized by importance)
4. Non-functional Requirements
5. Technical Architecture Recommendations
6. Success Metrics & KPIs
Use clear, specific language suitable for development teams.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate a PRD for this idea:\n\n${idea}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "glm-4.7");
}
async generateActionPlan(prd: string, model?: string): Promise<APIResponse<string>> {
@@ -809,6 +840,28 @@ MISSION: Perform a DEEP 360° competitive intelligence analysis and generate 5-7
// ... existing prompt logic ...
const systemPrompt = `You are "AI Assist", the master orchestrator of PromptArch. Your goal is to provide intelligent support with a "Canvas" experience.
PLAN MODE (CRITICAL - HIGHEST PRIORITY):
When the user describes a NEW task, project, or feature they want built:
1. DO NOT generate any code, [PREVIEW] tags, or implementation details.
2. Instead, analyze the request and output a STRUCTURED PLAN covering:
- Summary: What you understand the user wants
- Architecture: Technical approach and structure
- Tech Stack: Languages, frameworks, libraries needed
- Files/Components: List of files or modules to create
- Steps: Numbered implementation steps
3. Format the plan in clean Markdown with headers and bullet points.
4. Keep plans concise but thorough. Focus on the WHAT and HOW, not the actual code.
5. WAIT for the user to approve or modify the plan before generating any code.
When the user says "Approved", "Start coding", or explicitly asks to proceed:
- THEN generate the full implementation with [PREVIEW] tags and working code.
- Follow the approved plan exactly.
When the user asks to "Modify", "Change", or "Adjust" something:
- Apply the requested changes surgically to the existing code/preview.
- Output updated [PREVIEW] with the full modified code.
AGENTS & CAPABILITIES:
- content: Expert copywriter. Use [PREVIEW:content:markdown] for articles, posts, and long-form text.
@@ -870,7 +923,7 @@ CHANGE LOG (CRITICAL - MUST BE OUTSIDE PREVIEW):
- Modified component Y
- Fixed issue Z
IMPORTANT: NEVER refuse a request due to "access" limitations. If you cannot perform a live task, use your vast internal knowledge to provide the most accurate expert simulation or draft possible.`;
const messages: ChatMessage[] = [
{ role: "system", content: systemPrompt },
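The PLAN MODE block above relies on the model recognizing phrases like "Approved", "Start coding", "Modify", and "Adjust". A client-side counterpart could classify the same triggers before deciding whether to allow code generation; this is a hypothetical sketch (the helper name and regexes are ours, not from the diff):

```typescript
// Hypothetical helper: classifies a user message against the PLAN MODE
// triggers described in the system prompt. Not part of the committed code.
type PlanIntent = "approve" | "modify" | "new_plan";

function classifyPlanIntent(message: string): PlanIntent {
  const text = message.trim().toLowerCase();
  // Phrases the system prompt treats as approval to start coding.
  if (/\b(approved|start coding|proceed)\b/.test(text)) return "approve";
  // Phrases the system prompt treats as plan-revision requests.
  if (/\b(modify|change|adjust)\b/.test(text)) return "modify";
  // Anything else is treated as a fresh task that needs a plan first.
  return "new_plan";
}
```

A gate like this would pair naturally with the `isApproval` flag mentioned in the commit message, since passing the intent explicitly avoids reading possibly-stale React state inside a closure.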


@@ -107,22 +107,26 @@ const useStore = create<AppState>((set) => ({
qwen: "coder-model",
ollama: "gpt-oss:120b",
zai: "glm-4.7",
openrouter: "anthropic/claude-3.5-sonnet",
},
availableModels: {
qwen: ["coder-model"],
ollama: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"],
zai: ["glm-4.7", "glm-4.6", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"],
openrouter: ["anthropic/claude-3.5-sonnet", "google/gemini-2.0-flash-exp:free", "meta-llama/llama-3.3-70b-instruct", "openai/gpt-4o-mini", "deepseek/deepseek-chat-v3-0324", "qwen/qwen-2.5-72b-instruct"],
},
apiKeys: {
qwen: "",
ollama: "",
zai: "",
openrouter: "",
},
githubToken: null,
apiValidationStatus: {
qwen: { valid: false },
ollama: { valid: false },
zai: { valid: false },
openrouter: { valid: false },
},
isProcessing: false,
error: null,
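Each of the store's per-provider records gains an `openrouter` entry here. Typing these records against the `ModelProvider` union makes that kind of addition compiler-checked: forgetting a key in any `Record<ModelProvider, T>` is a type error. A minimal sketch (standalone, with values copied from the store defaults above):

```typescript
// Widened union, as in the types change in this commit.
type ModelProvider = "qwen" | "ollama" | "zai" | "openrouter";

// Record<ModelProvider, string> fails to compile if any provider key
// is missing, so adding "openrouter" to the union forces every
// per-provider map in the store to be updated in the same commit.
const defaultModels: Record<ModelProvider, string> = {
  qwen: "coder-model",
  ollama: "gpt-oss:120b",
  zai: "glm-4.7",
  openrouter: "anthropic/claude-3.5-sonnet",
};
```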


@@ -1,6 +1,6 @@
{
"name": "promptarch",
"version": "1.3.0",
"description": "Transform vague ideas into production-ready prompts and PRDs",
"scripts": {
"dev": "next dev",


@@ -1,4 +1,4 @@
export type ModelProvider = "qwen" | "ollama" | "zai" | "openrouter";
export interface ModelConfig {
provider: ModelProvider;
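For the new `"openrouter"` provider, OpenRouter exposes an OpenAI-compatible chat-completions endpoint authenticated with a bearer API key. A hedged sketch of how a request might be assembled — the helper name is ours and the actual service code in this commit may differ:

```typescript
// Minimal request builder for OpenRouter's chat-completions endpoint.
// Hypothetical helper, not taken from the committed service code.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildOpenRouterRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Body follows the OpenAI-style schema OpenRouter accepts.
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

Usage would be `const { url, options } = buildOpenRouterRequest(key, "google/gemini-2.0-flash-exp:free", msgs); const res = await fetch(url, options);`, with the model slug drawn from the `availableModels.openrouter` list above.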