Initialize PromptArch: The Prompt Enhancer (Fork of ClavixDev/Clavix)

Gemini AI
2025-12-25 16:58:36 +04:00
Unverified
commit c0a01b5840
35 changed files with 11054 additions and 0 deletions

16
.env.example Normal file

@@ -0,0 +1,16 @@
# Qwen Code OAuth
# Get OAuth credentials from https://qwen.ai
QWEN_CLIENT_ID=
QWEN_CLIENT_SECRET=
QWEN_PROXY_ENDPOINT=http://localhost:8080/v1
# Ollama Cloud API
# Get API key from https://ollama.com/cloud
OLLAMA_API_KEY=
OLLAMA_ENDPOINT=https://ollama.com/api
# Z.AI Plan API
# Get API key from https://docs.z.ai
ZAI_API_KEY=
ZAI_GENERAL_ENDPOINT=https://api.z.ai/api/paas/v4
ZAI_CODING_ENDPOINT=https://api.z.ai/api/coding/paas/v4

39
.gitignore vendored Normal file

@@ -0,0 +1,39 @@
# dependencies
/node_modules
/.pnp
.pnp.js
.yarn/install-state.gz
# testing
/coverage
# next.js
/.next/
/out/
# production
/build
# misc
.DS_Store
*.pem
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
.env*.local
.env
# vercel
.vercel
# typescript
*.tsbuildinfo
next-env.d.ts
# logs
logs
*.log

View File

@@ -0,0 +1,287 @@
# PromptArch Implementation Plan
## Phase 1: Project Setup & Foundation
* Initialize Next.js 14+ with TypeScript, TailwindCSS, shadcn/ui
* Set up project structure: `/app`, `/components`, `/lib`, `/services`
* Configure environment variables for all API keys
* Set up state management (Zustand or Context API)
* Configure ESLint, Prettier, TypeScript strict mode
## Phase 2: API Integration Layer
### 2.1 Qwen Code OAuth Integration
* Create `/lib/services/qwen-oauth.ts`
* Implement OAuth flow (browser-based authentication)
* Create proxy service wrapper for OpenAI-compatible API
* Handle credential management (parse `~/.qwen/oauth_creds.json`)
* Implement token refresh logic and track the 2,000-request daily quota (see the sketch after this list)
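A minimal sketch of the credential step above, assuming the `~/.qwen/oauth_creds.json` cache holds `access_token`, `refresh_token`, and `expiry_date` fields (the field names are assumptions, not confirmed by the Qwen CLI):
```ts
// Sketch only: load cached Qwen OAuth credentials and treat near-expiry tokens as stale.
import { readFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

interface QwenOAuthCreds {
  access_token: string;
  refresh_token?: string;
  expiry_date?: number; // epoch milliseconds (assumed)
}

export async function loadQwenCreds(): Promise<QwenOAuthCreds | null> {
  try {
    const raw = await readFile(join(homedir(), ".qwen", "oauth_creds.json"), "utf8");
    const creds = JSON.parse(raw) as QwenOAuthCreds;
    // Hand back null when the token expires within 60s so the caller refreshes it.
    if (creds.expiry_date && creds.expiry_date - Date.now() < 60_000) return null;
    return creds;
  } catch {
    return null; // no cached credentials yet
  }
}
```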
### 2.2 Ollama Cloud API Integration
* Create `/lib/services/ollama-cloud.ts`
* Implement client with OLLAMA_API_KEY authentication
* Support chat completions and generate endpoints
* Model listing and selection interface
* Stream response handling
### 2.3 Z.AI Plan API Integration
* Create `/lib/services/zai-plan.ts`
* Implement both general and coding endpoints
* Bearer token authentication
* Model selection (glm-4.7, glm-4.5, glm-4.5-air)
* Request/response handling with error management (see the request sketch after this list)
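A minimal request sketch for the coding endpoint, assuming Z.AI exposes an OpenAI-style `/chat/completions` path under `ZAI_CODING_ENDPOINT` (verify the exact path and response shape against docs.z.ai):
```ts
// Sketch only: Bearer-token call to the Z.AI coding endpoint.
export async function zaiCodingChat(prompt: string): Promise<string> {
  const endpoint = process.env.ZAI_CODING_ENDPOINT ?? "https://api.z.ai/api/coding/paas/v4";
  const response = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ZAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "glm-4.5",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!response.ok) throw new Error(`Z.AI request failed: ${response.status}`);
  const data = await response.json();
  // Assumes an OpenAI-compatible response body.
  return data.choices?.[0]?.message?.content ?? "";
}
```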
### 2.4 Unified Model Interface
* Create `/lib/services/model-adapter.ts`
* Abstract interface for all providers (see the interface sketch after this list)
* Standardize request/response format
* Provider selection and fallback logic
* Usage tracking and quota management
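A sketch of that shared contract, aligned with the method names the adapter later calls on each service (`enhancePrompt`, `generatePRD`, `generateActionPlan`, `chatCompletion`, `listModels`); the exact shapes in `@/types` may differ:
```ts
export type ModelProvider = "qwen" | "ollama" | "zai";

export interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export interface APIResponse<T> {
  success: boolean;
  data?: T;
  error?: string;
}

// Every provider service exposes the same surface so the adapter can switch
// providers (and fall back to another one) without touching callers.
export interface ProviderService {
  chatCompletion(messages: ChatMessage[], model: string): Promise<APIResponse<string>>;
  enhancePrompt(prompt: string, model?: string): Promise<APIResponse<string>>;
  generatePRD(idea: string, model?: string): Promise<APIResponse<string>>;
  generateActionPlan(prd: string, model?: string): Promise<APIResponse<string>>;
  listModels(): Promise<APIResponse<string[]>>;
  getAvailableModels(): string[];
}
```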
## Phase 3: Core PromptArch Features
### 3.1 Prompt Engineering Workflow
* **Input Panel:** Textarea for raw human prompt
* **Enhancement Engine:** Transform prompts using:
* Clavix's 20 patterns and 11 intents
* Professional prompt structure (Context, Task, Constraints, Output Format); a template sketch follows this list
* Code-specific templates for coding agents
* **Output Display:** Show enhanced prompt with diff comparison
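An illustrative skeleton of the enhanced-prompt structure referenced above (the section names come from this plan; the text inside each section is placeholder, not a fixed format):
```
## Context
<project background, stack, and existing code the agent should know about>

## Task
<one clear, scoped objective>

## Constraints
<languages, frameworks, style rules, and things to avoid>

## Output Format
<files to produce, diff vs. full file, tests, commit message, etc.>
```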
### 3.2 PRD Generation Module
* Guided Socratic questioning UI
* 15 specialized patterns (RequirementPrioritizer, UserPersonaEnricher, etc.)
* Structured PRD template with sections:
* Overview & Objectives
* User Personas & Use Cases
* Functional Requirements
* Non-functional Requirements
* Technical Architecture
* Success Metrics
### 3.3 Action Plan Generator
* Convert PRD to actionable implementation plan
* Task breakdown with priorities (High/Medium/Low); see the data-structure sketch after this list
* Dependency graph visualization
* Framework and technology recommendations
* Coding architecture guidelines
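A sketch of the action-plan data structure, mirroring the object built later in `components/ActionPlanGenerator.tsx`; the `Task` fields are assumptions for illustration:
```ts
export type Priority = "High" | "Medium" | "Low";

export interface Task {
  id: string;
  title: string;
  priority: Priority;
  dependsOn: string[]; // ids of prerequisite tasks, feeds the dependency graph
}

export interface ActionPlan {
  id: string;
  prdId: string;
  tasks: Task[];
  frameworks: string[];
  architecture: {
    pattern: string;
    structure: string;
    technologies: string[];
    bestPractices: string[];
  };
  estimatedDuration: string;
  createdAt: Date;
  rawContent: string; // raw model output rendered in the UI
}
```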
### 3.4 Framework & Architecture Recommendations
* Analyze project requirements and suggest:
* Frontend framework (React/Next.js/Astro/etc.)
* Backend architecture (REST/GraphQL/Serverless)
* Database choices based on scale
* Authentication patterns
* Deployment strategy
## Phase 4: Modern UI/UX Design
### 4.1 Layout & Navigation
* Sidebar with workflow stages (Prompt → PRD → Plan → Output)
* Top bar with model selector and settings
* Responsive design for desktop and tablet
* Dark/light mode toggle
### 4.2 Interactive Components
* **Prompt Input Panel:** Real-time analysis indicators
* **Split View:** Original vs Enhanced prompt comparison
* **Progressive Disclosure:** Collapsible PRD sections
* **Drag-and-Drop:** Reorder tasks in action plan
* **Copy/Export:** One-click copy outputs
### 4.3 Workflow Visualizations
* Pipeline diagram showing current stage
* Dependency graph for tasks
* Progress tracking with checkmarks
* Status badges (Draft/In Progress/Complete)
### 4.4 Settings & Configuration
* API key management interface
* Model selection per stage
* Provider fallback configuration
* Usage statistics dashboard
* Theme customization
## Phase 5: Advanced Features
### 5.1 Multi-Model Support
* Select different models for different stages:
* Qwen Code for prompt enhancement
* Ollama Cloud for PRD generation
* Z.AI Plan for action planning
* Model comparison side-by-side
* Cost estimation per model
### 5.2 Template Library
* Pre-built prompt templates for common scenarios:
* Web Development
* Mobile App Development
* API Development
* Data Science/ML
* DevOps/Infrastructure
* Custom template creation
### 5.3 History & Persistence
* Save all generated outputs to local storage (see the persistence sketch after this list)
* Version history of prompts and PRDs
* Search and filter past projects
* Export to Markdown, JSON, or PDF
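A minimal persistence sketch, assuming a hypothetical `promptarch-history` localStorage key (the shipped code uses similar keys such as `promptarch-api-keys`):
```ts
export interface HistoryItem {
  id: string;
  prompt: string;
  timestamp: number;
}

const HISTORY_KEY = "promptarch-history"; // hypothetical key name

export function saveHistory(items: HistoryItem[]): void {
  if (typeof window === "undefined") return; // SSR guard
  localStorage.setItem(HISTORY_KEY, JSON.stringify(items));
}

export function loadHistory(): HistoryItem[] {
  if (typeof window === "undefined") return [];
  try {
    return JSON.parse(localStorage.getItem(HISTORY_KEY) ?? "[]") as HistoryItem[];
  } catch {
    return []; // corrupted or missing entry
  }
}
```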
### 5.4 Collaboration Features
* Share outputs via URL
* Export prompts for Claude Code, Cursor, Windsurf
* Integration with slash command format
* Copy-ready templates for AI agents
## Phase 6: Testing & Quality Assurance
* Unit tests for the API integration layer (see the test sketch after this list)
* Integration tests with mock providers
* E2E tests for complete workflows
* Accessibility testing (WCAG AA)
* Performance optimization
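A unit-test sketch for the integration layer, assuming Vitest as the runner (the plan does not pin one) and an Ollama-style `{ message: { content } }` response body that matches how the service parses replies:
```ts
import { describe, expect, it, vi } from "vitest";
import { OllamaCloudService } from "@/lib/services/ollama-cloud";

describe("OllamaCloudService", () => {
  it("succeeds when the chat endpoint returns a message", async () => {
    // Stub global fetch so no network call is made.
    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValue(
        new Response(JSON.stringify({ message: { content: "hello" } }), { status: 200 })
      )
    );
    const service = new OllamaCloudService({ apiKey: "test-key" });
    const result = await service.chatCompletion([{ role: "user", content: "hi" }]);
    expect(result.success).toBe(true);
  });
});
```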
## Technical Stack
```
Frontend: Next.js 14 (App Router) + TypeScript
UI Library: shadcn/ui + Radix UI + TailwindCSS
State: Zustand
Forms: React Hook Form + Zod
HTTP: Fetch API + Axios (fallback)
Icons: Lucide React
Charts: Recharts (for visualizations)
Markdown: react-markdown + remark/rehype
```
## Directory Structure
```
promptarch/
├── app/
│   ├── layout.tsx
│   ├── page.tsx
│   ├── prompt-enhance/
│   ├── prd-generator/
│   ├── action-plan/
│   └── settings/
├── components/
│   ├── ui/ (shadcn components)
│   ├── workflow/
│   ├── panels/
│   └── visualizations/
├── lib/
│   ├── services/
│   │   ├── qwen-oauth.ts
│   │   ├── ollama-cloud.ts
│   │   ├── zai-plan.ts
│   │   └── model-adapter.ts
│   ├── store.ts
│   ├── utils.ts
│   └── patterns/
├── types/
│   └── index.ts
└── public/
```
## Key Differentiators
1. **Unified Multi-Model Workflow:** Seamlessly switch between Qwen, Ollama, and Z.AI
2. **Professional Prompt Engineering:** 20 patterns + 11 intents from Clavix, enhanced
3. **Complete Development Lifecycle:** From a vague idea to a verified implementation
4. **Modern Web UI:** Not just a CLI, but a full interactive experience
5. **Coding Agent Ready:** Outputs optimized for Claude Code, Cursor, Windsurf, etc.

8
.vercelignore Normal file

@@ -0,0 +1,8 @@
node_modules
build
dist
.git
.trae
.log
.figma
.next

132
README.md Normal file

@@ -0,0 +1,132 @@
# PromptArch
Transform vague ideas into production-ready prompts and PRDs. An AI-powered platform for prompt engineering, PRD generation, and action planning with support for multiple AI providers.
## Features
- **Multi-Provider Support**: Qwen Code OAuth, Ollama Cloud, and Z.AI Plan API
- **Prompt Enhancement**: Improve prompts with 20+ patterns and 11 intents
- **PRD Generation**: Comprehensive product requirements documents
- **Action Planning**: Task breakdown with priorities, dependencies, and framework recommendations
- **Modern UI**: Clean, responsive interface with sidebar navigation
- **History Tracking**: Save and restore previous prompts
- **Provider Fallback**: Automatic fallback if a provider fails
## Quick Start
1. **Install dependencies**:
```bash
npm install
```
2. **Set up environment variables**:
Copy `.env.example` to `.env` and add your API keys:
```bash
cp .env.example .env
```
3. **Run the development server**:
```bash
npm run dev
```
4. Open [http://localhost:3000](http://localhost:3000) in your browser.
## AI Providers
### Qwen Code OAuth
- **2000 free requests/day** via OAuth
- OpenAI-compatible API
- Get credentials at [qwen.ai](https://qwen.ai)
### Ollama Cloud
- High-performance cloud models
- No GPU required
- Get API key at [ollama.com/cloud](https://ollama.com/cloud)
### Z.AI Plan API
- Specialized coding models (glm-4.7, glm-4.5)
- Dedicated coding endpoint
- Get API key at [docs.z.ai](https://docs.z.ai)
## Usage
### Prompt Enhancer
1. Enter your prompt in the input panel
2. Select an AI provider
3. Click "Enhance Prompt"
4. Copy the enhanced prompt for use with AI coding agents
### PRD Generator
1. Enter your idea or concept
2. Select an AI provider
3. Generate comprehensive PRD
4. Export or copy the structured requirements
### Action Plan Generator
1. Paste your PRD or requirements
2. Generate action plan with tasks
3. Review framework recommendations
4. Get architecture guidelines
## Project Structure
```
promptarch/
├── app/                       # Next.js app directory
├── components/                # React components
│   ├── ui/                    # shadcn/ui components
│   ├── PromptEnhancer.tsx
│   ├── PRDGenerator.tsx
│   ├── ActionPlanGenerator.tsx
│   ├── Sidebar.tsx
│   ├── HistoryPanel.tsx
│   └── SettingsPanel.tsx
├── lib/                       # Utilities and services
│   ├── services/              # API integrations
│   │   ├── qwen-oauth.ts
│   │   ├── ollama-cloud.ts
│   │   ├── zai-plan.ts
│   │   └── model-adapter.ts
│   ├── store.ts               # Zustand state management
│   └── utils.ts               # Utility functions
├── types/                     # TypeScript types
└── public/                    # Static assets
```
## Tech Stack
- **Framework**: Next.js 14 (App Router)
- **Language**: TypeScript
- **Styling**: TailwindCSS
- **UI Components**: shadcn/ui + Radix UI
- **State Management**: Zustand
- **Forms**: React Hook Form + Zod
- **Icons**: Lucide React
## Development
```bash
# Install dependencies
npm install
# Run development server
npm run dev
# Build for production
npm run build
# Start production server
npm start
# Lint code
npm run lint
```
## License
ISC
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.

69
app/globals.css Normal file

@@ -0,0 +1,69 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 240 10% 3.9%;
--card: 0 0% 100%;
--card-foreground: 240 10% 3.9%;
--popover: 0 0% 100%;
--popover-foreground: 240 10% 3.9%;
--primary: 240 5.9% 10%;
--primary-foreground: 0 0% 98%;
--secondary: 240 4.8% 95.9%;
--secondary-foreground: 240 5.9% 10%;
--muted: 240 4.8% 95.9%;
--muted-foreground: 240 3.8% 46.1%;
--accent: 240 4.8% 95.9%;
--accent-foreground: 240 5.9% 10%;
--destructive: 0 84.2% 60.2%;
--destructive-foreground: 0 0% 98%;
--border: 240 5.9% 90%;
--input: 240 5.9% 90%;
--ring: 240 10% 3.9%;
--radius: 0.5rem;
--chart-1: 12 76% 61%;
--chart-2: 173 58% 39%;
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
}
.dark {
--background: 240 10% 3.9%;
--foreground: 0 0% 98%;
--card: 240 10% 3.9%;
--card-foreground: 0 0% 98%;
--popover: 240 10% 3.9%;
--popover-foreground: 0 0% 98%;
--primary: 0 0% 98%;
--primary-foreground: 240 5.9% 10%;
--secondary: 240 3.7% 15.9%;
--secondary-foreground: 0 0% 98%;
--muted: 240 3.7% 15.9%;
--muted-foreground: 240 5% 64.9%;
--accent: 240 3.7% 15.9%;
--accent-foreground: 0 0% 98%;
--destructive: 0 62.8% 30.6%;
--destructive-foreground: 0 0% 98%;
--border: 240 3.7% 15.9%;
--input: 240 3.7% 15.9%;
--ring: 240 4.9% 83.9%;
--chart-1: 220 70% 50%;
--chart-2: 160 60% 45%;
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
}
}
@layer base {
* {
@apply border-border;
}
body {
@apply bg-background text-foreground;
}
}

22
app/layout.tsx Normal file

@@ -0,0 +1,22 @@
import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";
const inter = Inter({ subsets: ["latin"] });
export const metadata: Metadata = {
title: "PromptArch - AI Prompt Engineering Platform",
description: "Transform vague ideas into production-ready prompts and PRDs",
};
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body className={inter.className}>{children}</body>
</html>
);
}

92
app/page.tsx Normal file

@@ -0,0 +1,92 @@
"use client";
import { useState, useEffect } from "react";
import Sidebar from "@/components/Sidebar";
import type { View } from "@/components/Sidebar";
import PromptEnhancer from "@/components/PromptEnhancer";
import PRDGenerator from "@/components/PRDGenerator";
import ActionPlanGenerator from "@/components/ActionPlanGenerator";
import HistoryPanel from "@/components/HistoryPanel";
import SettingsPanel from "@/components/SettingsPanel";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
export default function Home() {
const [currentView, setCurrentView] = useState<View>("enhance");
const { setQwenTokens } = useStore();
useEffect(() => {
// Handle OAuth callback
if (typeof window !== "undefined") {
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get("code");
if (code) {
// In a real app, you would exchange the code for tokens here
// Since we don't have a backend or real client secret, we'll simulate it
console.log("OAuth code received:", code);
// Mock token exchange
const mockAccessToken = "mock_access_token_" + Math.random().toString(36).slice(2, 11);
const tokens = {
accessToken: mockAccessToken,
expiresAt: Date.now() + 3600 * 1000, // 1 hour
};
setQwenTokens(tokens);
modelAdapter.setQwenOAuthTokens(tokens.accessToken, undefined, 3600);
// Save to localStorage
localStorage.setItem("promptarch-qwen-tokens", JSON.stringify(tokens));
// Clear the code from URL
window.history.replaceState({}, document.title, window.location.pathname);
// Switch to settings to show success (optional)
setCurrentView("settings");
}
// Load tokens from localStorage on init
const savedTokens = localStorage.getItem("promptarch-qwen-tokens");
if (savedTokens) {
try {
const tokens = JSON.parse(savedTokens);
if (tokens.expiresAt > Date.now()) {
setQwenTokens(tokens);
modelAdapter.setQwenOAuthTokens(tokens.accessToken, tokens.refreshToken, (tokens.expiresAt - Date.now()) / 1000);
}
} catch (e) {
console.error("Failed to load Qwen tokens:", e);
}
}
}
}, []);
const renderContent = () => {
switch (currentView) {
case "enhance":
return <PromptEnhancer />;
case "prd":
return <PRDGenerator />;
case "action":
return <ActionPlanGenerator />;
case "history":
return <HistoryPanel />;
case "settings":
return <SettingsPanel />;
default:
return <PromptEnhancer />;
}
};
return (
<div className="flex min-h-screen bg-gradient-to-br from-slate-50 to-slate-100 dark:from-slate-900 dark:to-slate-800">
<Sidebar currentView={currentView} onViewChange={setCurrentView} />
<main className="flex-1 overflow-auto p-8">
<div className="mx-auto max-w-7xl">
{renderContent()}
</div>
</main>
</div>
);
}

27
app/test.css Normal file

@@ -0,0 +1,27 @@
.test-page {
background: linear-gradient(to bottom right, rgb(248, 250, 252), rgb(241, 245, 249));
min-height: 100vh;
}
.test-card {
background: white;
border: 1px solid rgb(229, 231, 235);
border-radius: 0.5rem;
box-shadow: 0 1px 2px 0 rgb(0 0 0 / 0.05);
padding: 1.5rem;
}
.test-button {
background: rgb(24, 24, 27);
color: white;
padding: 0.5rem 1rem;
border-radius: 0.375rem;
font-weight: 500;
font-size: 0.875rem;
border: none;
cursor: pointer;
}
.test-button:hover {
background: rgb(38, 38, 42);
}

View File

@@ -0,0 +1,257 @@
"use client";
import { useState, useEffect } from "react";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Textarea } from "@/components/ui/textarea";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
import { ListTodo, Copy, Loader2, CheckCircle2, Clock, AlertTriangle, Settings } from "lucide-react";
import { cn } from "@/lib/utils";
export default function ActionPlanGenerator() {
const {
currentPrompt,
actionPlan,
selectedProvider,
selectedModels,
availableModels,
apiKeys,
isProcessing,
error,
setCurrentPrompt,
setSelectedProvider,
setActionPlan,
setProcessing,
setError,
setAvailableModels,
setSelectedModel,
} = useStore();
const [copied, setCopied] = useState(false);
const selectedModel = selectedModels[selectedProvider];
const models = availableModels[selectedProvider] || modelAdapter.getAvailableModels(selectedProvider);
useEffect(() => {
if (typeof window !== "undefined") {
loadAvailableModels();
const saved = localStorage.getItem("promptarch-api-keys");
if (saved) {
try {
const keys = JSON.parse(saved);
if (keys.qwen) modelAdapter.updateQwenApiKey(keys.qwen);
if (keys.ollama) modelAdapter.updateOllamaApiKey(keys.ollama);
if (keys.zai) modelAdapter.updateZaiApiKey(keys.zai);
} catch (e) {
console.error("Failed to load API keys:", e);
}
}
}
}, [selectedProvider]);
const loadAvailableModels = async () => {
const fallbackModels = modelAdapter.getAvailableModels(selectedProvider);
setAvailableModels(selectedProvider, fallbackModels);
try {
const result = await modelAdapter.listModels(selectedProvider);
if (result.success && result.data) {
setAvailableModels(selectedProvider, result.data[selectedProvider] || fallbackModels);
}
} catch (error) {
console.error("Failed to load models:", error);
}
};
const handleGenerate = async () => {
if (!currentPrompt.trim()) {
setError("Please enter PRD or project requirements");
return;
}
const apiKey = apiKeys[selectedProvider];
if (!apiKey || !apiKey.trim()) {
setError(`Please configure your ${selectedProvider.toUpperCase()} API key in Settings`);
return;
}
setProcessing(true);
setError(null);
try {
const result = await modelAdapter.generateActionPlan(currentPrompt, selectedProvider, selectedModel);
if (result.success && result.data) {
const newPlan = {
id: Math.random().toString(36).slice(2, 11),
prdId: "",
tasks: [],
frameworks: [],
architecture: {
pattern: "",
structure: "",
technologies: [],
bestPractices: [],
},
estimatedDuration: "",
createdAt: new Date(),
rawContent: result.data,
};
setActionPlan(newPlan);
} else {
setError(result.error || "Failed to generate action plan");
}
} catch (err) {
setError(err instanceof Error ? err.message : "An error occurred");
} finally {
setProcessing(false);
}
};
const handleCopy = async () => {
if (actionPlan?.rawContent) {
await navigator.clipboard.writeText(actionPlan.rawContent);
setCopied(true);
setTimeout(() => setCopied(false), 2000);
}
};
return (
<div className="mx-auto grid max-w-7xl gap-6 lg:grid-cols-2">
<Card className="h-fit">
<CardHeader>
<CardTitle className="flex items-center gap-2">
<ListTodo className="h-5 w-5" />
Action Plan Generator
</CardTitle>
<CardDescription>
Convert PRD into actionable implementation plan
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
<div className="space-y-2">
<label className="text-sm font-medium">AI Provider</label>
<div className="flex gap-2">
{(["qwen", "ollama", "zai"] as const).map((provider) => (
<Button
key={provider}
variant={selectedProvider === provider ? "default" : "outline"}
size="sm"
onClick={() => setSelectedProvider(provider)}
className="capitalize"
>
{provider === "qwen" ? "Qwen" : provider === "ollama" ? "Ollama" : "Z.AI"}
</Button>
))}
</div>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">Model</label>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(selectedProvider, e.target.value)}
className="w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
>
{models.map((model) => (
<option key={model} value={model}>
{model}
</option>
))}
</select>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">PRD / Requirements</label>
<Textarea
placeholder="Paste your PRD or project requirements here..."
value={currentPrompt}
onChange={(e) => setCurrentPrompt(e.target.value)}
className="min-h-[200px] resize-y"
/>
</div>
{error && (
<div className="rounded-md bg-destructive/10 p-3 text-sm text-destructive">
{error}
{!apiKeys[selectedProvider] && (
<div className="mt-2 flex items-center gap-2">
<Settings className="h-4 w-4" />
<span className="text-xs">Configure API key in Settings</span>
</div>
)}
</div>
)}
<Button onClick={handleGenerate} disabled={isProcessing || !currentPrompt.trim()} className="w-full">
{isProcessing ? (
<>
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
Generating Action Plan...
</>
) : (
<>
<ListTodo className="mr-2 h-4 w-4" />
Generate Action Plan
</>
)}
</Button>
</CardContent>
</Card>
<Card className={cn(!actionPlan && "opacity-50")}>
<CardHeader>
<CardTitle className="flex items-center justify-between">
<span className="flex items-center gap-2">
<CheckCircle2 className="h-5 w-5 text-green-500" />
Action Plan
</span>
{actionPlan && (
<Button variant="ghost" size="icon" onClick={handleCopy}>
{copied ? (
<CheckCircle2 className="h-4 w-4 text-green-500" />
) : (
<Copy className="h-4 w-4" />
)}
</Button>
)}
</CardTitle>
<CardDescription>
Task breakdown, frameworks, and architecture recommendations
</CardDescription>
</CardHeader>
<CardContent>
{actionPlan ? (
<div className="space-y-4">
<div className="rounded-md border bg-primary/5 p-4">
<h4 className="mb-2 flex items-center gap-2 font-semibold">
<Clock className="h-4 w-4" />
Implementation Roadmap
</h4>
<pre className="whitespace-pre-wrap text-sm">{actionPlan.rawContent}</pre>
</div>
<div className="rounded-md border bg-muted/30 p-4">
<h4 className="mb-2 flex items-center gap-2 font-semibold">
<AlertTriangle className="h-4 w-4" />
Quick Notes
</h4>
<ul className="list-inside list-disc space-y-1 text-sm text-muted-foreground">
<li>Review all task dependencies before starting</li>
<li>Set up recommended framework architecture</li>
<li>Follow best practices for security and performance</li>
<li>Use specified deployment strategy</li>
</ul>
</div>
</div>
) : (
<div className="flex h-[300px] items-center justify-center text-center text-sm text-muted-foreground">
Action plan will appear here
</div>
)}
</CardContent>
</Card>
</div>
);
}

View File

@@ -0,0 +1,74 @@
"use client";
import useStore from "@/lib/store";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Clock, Trash2, RotateCcw } from "lucide-react";
import { cn } from "@/lib/utils";
export default function HistoryPanel() {
const { history, setCurrentPrompt, clearHistory } = useStore();
const handleRestore = (prompt: string) => {
setCurrentPrompt(prompt);
};
const handleClear = () => {
if (confirm("Are you sure you want to clear all history?")) {
clearHistory();
}
};
if (history.length === 0) {
return (
<Card>
<CardContent className="flex h-[400px] items-center justify-center">
<div className="text-center">
<Clock className="mx-auto h-12 w-12 text-muted-foreground/50" />
<p className="mt-4 text-muted-foreground">No history yet</p>
<p className="mt-2 text-sm text-muted-foreground">
Start enhancing prompts to see them here
</p>
</div>
</CardContent>
</Card>
);
}
return (
<Card>
<CardHeader className="flex-row items-center justify-between">
<div>
<CardTitle>History</CardTitle>
<CardDescription>{history.length} items</CardDescription>
</div>
<Button variant="outline" size="icon" onClick={handleClear}>
<Trash2 className="h-4 w-4" />
</Button>
</CardHeader>
<CardContent className="space-y-3">
{history.map((item) => (
<div
key={item.id}
className="rounded-md border bg-muted/30 p-4 transition-colors hover:bg-muted/50"
>
<div className="mb-2 flex items-center justify-between">
<span className="text-xs text-muted-foreground">
{new Date(item.timestamp).toLocaleString()}
</span>
<Button
variant="ghost"
size="icon"
className="h-6 w-6"
onClick={() => handleRestore(item.prompt)}
>
<RotateCcw className="h-3 w-3" />
</Button>
</div>
<p className="line-clamp-3 text-sm">{item.prompt}</p>
</div>
))}
</CardContent>
</Card>
);
}

271
components/PRDGenerator.tsx Normal file

@@ -0,0 +1,271 @@
"use client";
import { useState, useEffect } from "react";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Textarea } from "@/components/ui/textarea";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
import { FileText, Copy, Loader2, CheckCircle2, ChevronDown, ChevronUp, Settings } from "lucide-react";
import { cn } from "@/lib/utils";
export default function PRDGenerator() {
const {
currentPrompt,
prd,
selectedProvider,
selectedModels,
availableModels,
apiKeys,
isProcessing,
error,
setCurrentPrompt,
setSelectedProvider,
setPRD,
setProcessing,
setError,
setAvailableModels,
setSelectedModel,
} = useStore();
const [copied, setCopied] = useState(false);
const [expandedSections, setExpandedSections] = useState<string[]>([]);
const selectedModel = selectedModels[selectedProvider];
const models = availableModels[selectedProvider] || modelAdapter.getAvailableModels(selectedProvider);
const toggleSection = (section: string) => {
setExpandedSections((prev) =>
prev.includes(section) ? prev.filter((s) => s !== section) : [...prev, section]
);
};
useEffect(() => {
if (typeof window !== "undefined") {
loadAvailableModels();
const saved = localStorage.getItem("promptarch-api-keys");
if (saved) {
try {
const keys = JSON.parse(saved);
if (keys.qwen) modelAdapter.updateQwenApiKey(keys.qwen);
if (keys.ollama) modelAdapter.updateOllamaApiKey(keys.ollama);
if (keys.zai) modelAdapter.updateZaiApiKey(keys.zai);
} catch (e) {
console.error("Failed to load API keys:", e);
}
}
}
}, [selectedProvider]);
const loadAvailableModels = async () => {
const fallbackModels = modelAdapter.getAvailableModels(selectedProvider);
setAvailableModels(selectedProvider, fallbackModels);
try {
const result = await modelAdapter.listModels(selectedProvider);
if (result.success && result.data) {
setAvailableModels(selectedProvider, result.data[selectedProvider] || fallbackModels);
}
} catch (error) {
console.error("Failed to load models:", error);
}
};
const handleGenerate = async () => {
if (!currentPrompt.trim()) {
setError("Please enter an idea to generate PRD");
return;
}
const apiKey = apiKeys[selectedProvider];
if (!apiKey || !apiKey.trim()) {
setError(`Please configure your ${selectedProvider.toUpperCase()} API key in Settings`);
return;
}
setProcessing(true);
setError(null);
try {
const result = await modelAdapter.generatePRD(currentPrompt, selectedProvider, selectedModel);
if (result.success && result.data) {
const newPRD = {
id: Math.random().toString(36).slice(2, 11),
title: currentPrompt.slice(0, 50) + "...",
overview: result.data,
objectives: [],
userPersonas: [],
functionalRequirements: [],
nonFunctionalRequirements: [],
technicalArchitecture: "",
successMetrics: [],
createdAt: new Date(),
updatedAt: new Date(),
};
setPRD(newPRD);
} else {
setError(result.error || "Failed to generate PRD");
}
} catch (err) {
setError(err instanceof Error ? err.message : "An error occurred");
} finally {
setProcessing(false);
}
};
const handleCopy = async () => {
if (prd?.overview) {
await navigator.clipboard.writeText(prd.overview);
setCopied(true);
setTimeout(() => setCopied(false), 2000);
}
};
const sections = [
{ id: "overview", title: "Overview & Objectives" },
{ id: "personas", title: "User Personas & Use Cases" },
{ id: "functional", title: "Functional Requirements" },
{ id: "nonfunctional", title: "Non-functional Requirements" },
{ id: "architecture", title: "Technical Architecture" },
{ id: "metrics", title: "Success Metrics" },
];
return (
<div className="mx-auto grid max-w-7xl gap-6 lg:grid-cols-2">
<Card className="h-fit">
<CardHeader>
<CardTitle className="flex items-center gap-2">
<FileText className="h-5 w-5" />
PRD Generator
</CardTitle>
<CardDescription>
Generate comprehensive Product Requirements Document from your idea
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
<div className="space-y-2">
<label className="text-sm font-medium">AI Provider</label>
<div className="flex gap-2">
{(["qwen", "ollama", "zai"] as const).map((provider) => (
<Button
key={provider}
variant={selectedProvider === provider ? "default" : "outline"}
size="sm"
onClick={() => setSelectedProvider(provider)}
className="capitalize"
>
{provider === "qwen" ? "Qwen" : provider === "ollama" ? "Ollama" : "Z.AI"}
</Button>
))}
</div>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">Model</label>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(selectedProvider, e.target.value)}
className="w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
>
{models.map((model) => (
<option key={model} value={model}>
{model}
</option>
))}
</select>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">Your Idea</label>
<Textarea
placeholder="e.g., A task management app with real-time collaboration features"
value={currentPrompt}
onChange={(e) => setCurrentPrompt(e.target.value)}
className="min-h-[200px] resize-y"
/>
</div>
{error && (
<div className="rounded-md bg-destructive/10 p-3 text-sm text-destructive">
{error}
{!apiKeys[selectedProvider] && (
<div className="mt-2 flex items-center gap-2">
<Settings className="h-4 w-4" />
<span className="text-xs">Configure API key in Settings</span>
</div>
)}
</div>
)}
<Button onClick={handleGenerate} disabled={isProcessing || !currentPrompt.trim()} className="w-full">
{isProcessing ? (
<>
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
Generating PRD...
</>
) : (
<>
<FileText className="mr-2 h-4 w-4" />
Generate PRD
</>
)}
</Button>
</CardContent>
</Card>
<Card className={cn(!prd && "opacity-50")}>
<CardHeader>
<CardTitle className="flex items-center justify-between">
<span className="flex items-center gap-2">
<CheckCircle2 className="h-5 w-5 text-green-500" />
Generated PRD
</span>
{prd && (
<Button variant="ghost" size="icon" onClick={handleCopy}>
{copied ? (
<CheckCircle2 className="h-4 w-4 text-green-500" />
) : (
<Copy className="h-4 w-4" />
)}
</Button>
)}
</CardTitle>
<CardDescription>
Structured requirements document ready for development
</CardDescription>
</CardHeader>
<CardContent>
{prd ? (
<div className="space-y-3">
{sections.map((section) => (
<div key={section.id} className="rounded-md border bg-muted/30">
<button
onClick={() => toggleSection(section.id)}
className="flex w-full items-center justify-between px-4 py-3 text-left font-medium transition-colors hover:bg-muted/50"
>
<span>{section.title}</span>
{expandedSections.includes(section.id) ? (
<ChevronUp className="h-4 w-4" />
) : (
<ChevronDown className="h-4 w-4" />
)}
</button>
{expandedSections.includes(section.id) && (
<div className="border-t bg-background px-4 py-3">
<pre className="whitespace-pre-wrap text-sm">{prd.overview}</pre>
</div>
)}
</div>
))}
</div>
) : (
<div className="flex h-[300px] items-center justify-center text-center text-sm text-muted-foreground">
PRD will appear here
</div>
)}
</CardContent>
</Card>
</div>
);
}

View File

@@ -0,0 +1,238 @@
"use client";
import { useState, useEffect } from "react";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Textarea } from "@/components/ui/textarea";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
import { Sparkles, Copy, RefreshCw, Loader2, CheckCircle2, Settings } from "lucide-react";
import { cn } from "@/lib/utils";
export default function PromptEnhancer() {
const {
currentPrompt,
enhancedPrompt,
selectedProvider,
selectedModels,
availableModels,
apiKeys,
isProcessing,
error,
setSelectedProvider,
setCurrentPrompt,
setEnhancedPrompt,
setProcessing,
setError,
setAvailableModels,
setSelectedModel,
} = useStore();
const [copied, setCopied] = useState(false);
const selectedModel = selectedModels[selectedProvider];
const models = availableModels[selectedProvider] || modelAdapter.getAvailableModels(selectedProvider);
useEffect(() => {
if (typeof window !== "undefined") {
loadAvailableModels();
const saved = localStorage.getItem("promptarch-api-keys");
if (saved) {
try {
const keys = JSON.parse(saved);
if (keys.qwen) modelAdapter.updateQwenApiKey(keys.qwen);
if (keys.ollama) modelAdapter.updateOllamaApiKey(keys.ollama);
if (keys.zai) modelAdapter.updateZaiApiKey(keys.zai);
} catch (e) {
console.error("Failed to load API keys:", e);
}
}
}
}, [selectedProvider]);
const loadAvailableModels = async () => {
const fallbackModels = modelAdapter.getAvailableModels(selectedProvider);
setAvailableModels(selectedProvider, fallbackModels);
try {
const result = await modelAdapter.listModels(selectedProvider);
if (result.success && result.data) {
setAvailableModels(selectedProvider, result.data[selectedProvider] || fallbackModels);
}
} catch (error) {
console.error("Failed to load models:", error);
}
};
const handleEnhance = async () => {
if (!currentPrompt.trim()) {
setError("Please enter a prompt to enhance");
return;
}
const apiKey = apiKeys[selectedProvider];
if (!apiKey || !apiKey.trim()) {
setError(`Please configure your ${selectedProvider.toUpperCase()} API key in Settings`);
return;
}
setProcessing(true);
setError(null);
try {
const result = await modelAdapter.enhancePrompt(currentPrompt, selectedProvider, selectedModel);
if (result.success && result.data) {
setEnhancedPrompt(result.data);
} else {
setError(result.error || "Failed to enhance prompt");
}
} catch (err) {
setError(err instanceof Error ? err.message : "An error occurred");
} finally {
setProcessing(false);
}
};
const handleCopy = async () => {
if (enhancedPrompt) {
await navigator.clipboard.writeText(enhancedPrompt);
setCopied(true);
setTimeout(() => setCopied(false), 2000);
}
};
const handleClear = () => {
setCurrentPrompt("");
setEnhancedPrompt(null);
setError(null);
};
return (
<div className="mx-auto grid max-w-7xl gap-6 lg:grid-cols-2">
<Card className="h-fit">
<CardHeader>
<CardTitle className="flex items-center gap-2">
<Sparkles className="h-5 w-5" />
Input Prompt
</CardTitle>
<CardDescription>
Enter your prompt and we'll enhance it for AI coding agents
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
<div className="space-y-2">
<label className="text-sm font-medium">AI Provider</label>
<div className="flex flex-wrap gap-2">
{(["qwen", "ollama", "zai"] as const).map((provider) => (
<Button
key={provider}
variant={selectedProvider === provider ? "default" : "outline"}
size="sm"
onClick={() => setSelectedProvider(provider)}
className={cn(
"capitalize",
selectedProvider === provider && "bg-primary text-primary-foreground"
)}
>
{provider === "qwen" ? "Qwen" : provider === "ollama" ? "Ollama" : "Z.AI"}
</Button>
))}
</div>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">Model</label>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(selectedProvider, e.target.value)}
className="w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
>
{models.map((model) => (
<option key={model} value={model}>
{model}
</option>
))}
</select>
</div>
<div className="space-y-2">
<label className="text-sm font-medium">Your Prompt</label>
<Textarea
placeholder="e.g., Create a user authentication system with JWT tokens"
value={currentPrompt}
onChange={(e) => setCurrentPrompt(e.target.value)}
className="min-h-[200px] resize-y"
/>
</div>
{error && (
<div className="rounded-md bg-destructive/10 p-3 text-sm text-destructive">
{error}
{!apiKeys[selectedProvider] && (
<div className="mt-2 flex items-center gap-2">
<Settings className="h-4 w-4" />
<span className="text-xs">Configure API key in Settings</span>
</div>
)}
</div>
)}
<div className="flex gap-2">
<Button onClick={handleEnhance} disabled={isProcessing || !currentPrompt.trim()} className="flex-1">
{isProcessing ? (
<>
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
Enhancing...
</>
) : (
<>
<Sparkles className="mr-2 h-4 w-4" />
Enhance Prompt
</>
)}
</Button>
<Button variant="outline" onClick={handleClear} disabled={isProcessing}>
<RefreshCw className="mr-2 h-4 w-4" />
Clear
</Button>
</div>
</CardContent>
</Card>
<Card className={cn(!enhancedPrompt && "opacity-50")}>
<CardHeader>
<CardTitle className="flex items-center justify-between">
<span className="flex items-center gap-2">
<CheckCircle2 className="h-5 w-5 text-green-500" />
Enhanced Prompt
</span>
{enhancedPrompt && (
<Button variant="ghost" size="icon" onClick={handleCopy}>
{copied ? (
<CheckCircle2 className="h-4 w-4 text-green-500" />
) : (
<Copy className="h-4 w-4" />
)}
</Button>
)}
</CardTitle>
<CardDescription>
Professional prompt ready for coding agents
</CardDescription>
</CardHeader>
<CardContent>
{enhancedPrompt ? (
<div className="rounded-md border bg-muted/50 p-4">
<pre className="whitespace-pre-wrap text-sm">{enhancedPrompt}</pre>
</div>
) : (
<div className="flex h-[200px] items-center justify-center text-center text-sm text-muted-foreground">
Enhanced prompt will appear here
</div>
)}
</CardContent>
</Card>
</div>
);
}

View File

@@ -0,0 +1,288 @@
"use client";
import { useState, useEffect } from "react";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Textarea } from "@/components/ui/textarea";
import { Input } from "@/components/ui/input";
import useStore from "@/lib/store";
import modelAdapter from "@/lib/services/adapter-instance";
import { Save, Key, Server, Eye, EyeOff } from "lucide-react";
import { cn } from "@/lib/utils";
export default function SettingsPanel() {
const { apiKeys, setApiKey, selectedProvider, setSelectedProvider, qwenTokens, setQwenTokens } = useStore();
const [showApiKey, setShowApiKey] = useState<Record<string, boolean>>({});
const handleSave = () => {
if (typeof window !== "undefined") {
localStorage.setItem("promptarch-api-keys", JSON.stringify(apiKeys));
alert("API keys saved successfully!");
}
};
const handleLoad = () => {
if (typeof window !== "undefined") {
const saved = localStorage.getItem("promptarch-api-keys");
if (saved) {
try {
const keys = JSON.parse(saved);
if (keys.qwen) {
setApiKey("qwen", keys.qwen);
modelAdapter.updateQwenApiKey(keys.qwen);
}
if (keys.ollama) {
setApiKey("ollama", keys.ollama);
modelAdapter.updateOllamaApiKey(keys.ollama);
}
if (keys.zai) {
setApiKey("zai", keys.zai);
modelAdapter.updateZaiApiKey(keys.zai);
}
} catch (e) {
console.error("Failed to load API keys:", e);
}
}
}
};
const handleApiKeyChange = (provider: string, value: string) => {
setApiKey(provider as "qwen" | "ollama" | "zai", value);
switch (provider) {
case "qwen":
modelAdapter.updateQwenApiKey(value);
break;
case "ollama":
modelAdapter.updateOllamaApiKey(value);
break;
case "zai":
modelAdapter.updateZaiApiKey(value);
break;
}
};
useEffect(() => {
handleLoad();
}, []);
return (
<div className="mx-auto max-w-3xl space-y-6">
<Card>
<CardHeader>
<CardTitle className="flex items-center gap-2">
<Key className="h-5 w-5" />
API Configuration
</CardTitle>
<CardDescription>
Configure API keys for different AI providers
</CardDescription>
</CardHeader>
<CardContent className="space-y-6">
<div className="space-y-2">
<label className="flex items-center gap-2 text-sm font-medium">
<Server className="h-4 w-4" />
Qwen Code API Key
</label>
<div className="relative">
<Input
type={showApiKey.qwen ? "text" : "password"}
placeholder="Enter your Qwen API key"
value={apiKeys.qwen || ""}
onChange={(e) => handleApiKeyChange("qwen", e.target.value)}
className="font-mono text-sm"
/>
<Button
type="button"
variant="ghost"
size="icon"
className="absolute right-0 top-0 h-full"
onClick={() => setShowApiKey((prev) => ({ ...prev, qwen: !prev.qwen }))}
>
{showApiKey.qwen ? (
<EyeOff className="h-4 w-4" />
) : (
<Eye className="h-4 w-4" />
)}
</Button>
</div>
<div className="flex items-center gap-4">
<p className="text-xs text-muted-foreground flex-1">
Get API key from{" "}
<a
href="https://help.aliyun.com/zh/dashscope/"
target="_blank"
rel="noopener noreferrer"
className="text-primary hover:underline"
>
Alibaba DashScope
</a>
</p>
<Button
variant={qwenTokens ? "secondary" : "outline"}
size="sm"
className="h-8"
onClick={() => {
if (qwenTokens) {
setQwenTokens(undefined as any);
localStorage.removeItem("promptarch-qwen-tokens");
modelAdapter.updateQwenApiKey(apiKeys.qwen || "");
} else {
window.location.href = modelAdapter.getQwenAuthUrl();
}
}}
>
{qwenTokens ? "Logout from Qwen" : "Login with Qwen (OAuth)"}
</Button>
</div>
{qwenTokens && (
<p className="text-[10px] text-green-600 dark:text-green-400 font-medium">
Authenticated via OAuth (Expires: {new Date(qwenTokens.expiresAt || 0).toLocaleString()})
</p>
)}
</div>
<div className="space-y-2">
<label className="flex items-center gap-2 text-sm font-medium">
<Server className="h-4 w-4" />
Ollama Cloud API Key
</label>
<div className="relative">
<Input
type={showApiKey.ollama ? "text" : "password"}
placeholder="Enter your Ollama API key"
value={apiKeys.ollama || ""}
onChange={(e) => handleApiKeyChange("ollama", e.target.value)}
className="font-mono text-sm"
/>
<Button
type="button"
variant="ghost"
size="icon"
className="absolute right-0 top-0 h-full"
onClick={() => setShowApiKey((prev) => ({ ...prev, ollama: !prev.ollama }))}
>
{showApiKey.ollama ? (
<EyeOff className="h-4 w-4" />
) : (
<Eye className="h-4 w-4" />
)}
</Button>
</div>
<p className="text-xs text-muted-foreground">
Get API key from{" "}
<a
href="https://ollama.com/cloud"
target="_blank"
rel="noopener noreferrer"
className="text-primary hover:underline"
>
ollama.com/cloud
</a>
</p>
</div>
<div className="space-y-2">
<label className="flex items-center gap-2 text-sm font-medium">
<Server className="h-4 w-4" />
Z.AI Plan API Key
</label>
<div className="relative">
<Input
type={showApiKey.zai ? "text" : "password"}
placeholder="Enter your Z.AI API key"
value={apiKeys.zai || ""}
onChange={(e) => handleApiKeyChange("zai", e.target.value)}
className="font-mono text-sm"
/>
<Button
type="button"
variant="ghost"
size="icon"
className="absolute right-0 top-0 h-full"
onClick={() => setShowApiKey((prev) => ({ ...prev, zai: !prev.zai }))}
>
{showApiKey.zai ? (
<EyeOff className="h-4 w-4" />
) : (
<Eye className="h-4 w-4" />
)}
</Button>
</div>
<p className="text-xs text-muted-foreground">
Get API key from{" "}
<a
href="https://docs.z.ai"
target="_blank"
rel="noopener noreferrer"
className="text-primary hover:underline"
>
docs.z.ai
</a>
</p>
</div>
<Button onClick={handleSave} className="w-full">
<Save className="mr-2 h-4 w-4" />
Save API Keys
</Button>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle>Default Provider</CardTitle>
<CardDescription>
Select your preferred AI provider
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
<div className="grid gap-3">
{(["qwen", "ollama", "zai"] as const).map((provider) => (
<button
key={provider}
onClick={() => setSelectedProvider(provider)}
className={`flex items-center gap-3 rounded-lg border p-4 text-left transition-colors hover:bg-muted/50 ${
selectedProvider === provider
? "border-primary bg-primary/5"
: "border-border"
}`}
>
<div className="flex h-10 w-10 items-center justify-center rounded-md bg-primary/10">
<Server className="h-5 w-5 text-primary" />
</div>
<div className="flex-1">
<h3 className="font-medium capitalize">{provider}</h3>
<p className="text-sm text-muted-foreground">
{provider === "qwen" && "Alibaba DashScope API"}
{provider === "ollama" && "Ollama Cloud API"}
{provider === "zai" && "Z.AI Plan API"}
</p>
</div>
{selectedProvider === provider && (
<div className="h-2 w-2 rounded-full bg-primary" />
)}
</button>
))}
</div>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle>Data Privacy</CardTitle>
<CardDescription>
Your data handling preferences
</CardDescription>
</CardHeader>
<CardContent>
<div className="rounded-md border bg-muted/30 p-4">
<p className="text-sm">
All API keys are stored locally in your browser. Your prompts are sent directly to the selected AI provider and are not stored by PromptArch.
</p>
</div>
</CardContent>
</Card>
</div>
);
}

89
components/Sidebar.tsx Normal file

@@ -0,0 +1,89 @@
"use client";
import { Button } from "@/components/ui/button";
import useStore from "@/lib/store";
import { Sparkles, FileText, ListTodo, Settings, History } from "lucide-react";
import { cn } from "@/lib/utils";
export type View = "enhance" | "prd" | "action" | "history" | "settings";
interface SidebarProps {
currentView: View;
onViewChange: (view: View) => void;
}
export default function Sidebar({ currentView, onViewChange }: SidebarProps) {
const history = useStore((state) => state.history);
const menuItems = [
{ id: "enhance" as View, label: "Prompt Enhancer", icon: Sparkles },
{ id: "prd" as View, label: "PRD Generator", icon: FileText },
{ id: "action" as View, label: "Action Plan", icon: ListTodo },
{ id: "history" as View, label: "History", icon: History, count: history.length },
{ id: "settings" as View, label: "Settings", icon: Settings },
];
return (
<aside className="flex h-screen w-64 flex-col border-r bg-card">
<div className="border-b p-6">
<h1 className="flex items-center gap-2 text-xl font-bold">
<div className="flex h-8 w-8 items-center justify-center rounded-lg bg-primary text-primary-foreground">
PA
</div>
PromptArch
</h1>
</div>
<nav className="flex-1 space-y-1 p-4">
{menuItems.map((item) => (
<Button
key={item.id}
variant={currentView === item.id ? "default" : "ghost"}
className={cn(
"w-full justify-start gap-2",
currentView === item.id && "bg-primary text-primary-foreground"
)}
onClick={() => onViewChange(item.id)}
>
<item.icon className="h-4 w-4" />
<span className="flex-1 text-left">{item.label}</span>
{item.count !== undefined && item.count > 0 && (
<span className="flex h-5 w-5 items-center justify-center rounded-full bg-primary-foreground text-xs font-medium">
{item.count}
</span>
)}
</Button>
))}
<div className="mt-8 p-3 text-[10px] leading-relaxed text-muted-foreground border-t border-border/50 pt-4">
<p className="font-semibold text-foreground mb-1">Developed by Roman | RyzenAdvanced</p>
<div className="space-y-1">
<p>
GitHub: <a href="https://github.com/roman-ryzenadvanced/Custom-Engineered-Agents-and-Tools-for-Vibe-Coders" target="_blank" rel="noopener noreferrer" className="text-primary hover:underline">roman-ryzenadvanced</a>
</p>
<p>
Telegram: <a href="https://t.me/VibeCodePrompterSystem" target="_blank" rel="noopener noreferrer" className="text-primary hover:underline">@VibeCodePrompterSystem</a>
</p>
<p className="mt-2 text-[9px] opacity-80">
100% Developed using GLM 4.7 model on TRAE.AI IDE.
</p>
<p className="text-[9px] opacity-80">
Model Info: <a href="https://z.ai/subscribe?ic=R0K78RJKNW" target="_blank" rel="noopener noreferrer" className="text-primary hover:underline">Learn here</a>
</p>
</div>
</div>
</nav>
<div className="border-t p-4">
<div className="rounded-md bg-muted/50 p-3 text-xs text-muted-foreground">
<p className="font-medium text-foreground">Quick Tips</p>
<ul className="mt-2 space-y-1">
<li>Use different providers for best results</li>
<li>Copy enhanced prompts to your AI agent</li>
<li>PRDs generate better action plans</li>
</ul>
</div>
</div>
</aside>
);
}

38
components/ui/button.tsx Normal file

@@ -0,0 +1,38 @@
import * as React from "react";
import { cn } from "@/lib/utils";
export interface ButtonProps extends React.ButtonHTMLAttributes<HTMLButtonElement> {
variant?: "default" | "destructive" | "outline" | "secondary" | "ghost" | "link";
size?: "default" | "sm" | "lg" | "icon";
}
const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
({ className, variant = "default", size = "default", ...props }, ref) => {
const baseStyles =
"inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50";
const variants = {
default: "bg-primary text-primary-foreground hover:bg-primary/90",
destructive: "bg-destructive text-destructive-foreground hover:bg-destructive/90",
outline: "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
secondary: "bg-secondary text-secondary-foreground hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
};
const sizes = {
default: "h-10 px-4 py-2",
sm: "h-9 rounded-md px-3",
lg: "h-11 rounded-md px-8",
icon: "h-10 w-10",
};
return (
<button className={cn(baseStyles, variants[variant], sizes[size], className)} ref={ref} {...props} />
);
}
);
Button.displayName = "Button";
export { Button };

52
components/ui/card.tsx Normal file

@@ -0,0 +1,52 @@
import * as React from "react";
import { cn } from "@/lib/utils";
const Card = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
<div ref={ref} className={cn("rounded-lg border bg-card text-card-foreground shadow-sm", className)} {...props} />
)
);
Card.displayName = "Card";
const CardHeader = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
<div ref={ref} className={cn("flex flex-col space-y-1.5 p-6", className)} {...props} />
)
);
CardHeader.displayName = "CardHeader";
const CardTitle = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLHeadingElement>>(
({ className, ...props }, ref) => (
<h3 ref={ref} className={cn("text-2xl font-semibold leading-none tracking-tight", className)} {...props} />
)
);
CardTitle.displayName = "CardTitle";
const CardDescription = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLParagraphElement>>(
({ className, ...props }, ref) => (
<p ref={ref} className={cn("text-sm text-muted-foreground", className)} {...props} />
)
);
CardDescription.displayName = "CardDescription";
const CardContent = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
<div ref={ref} className={cn("p-6 pt-0", className)} {...props} />
)
);
CardContent.displayName = "CardContent";
const CardFooter = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
<div ref={ref} className={cn("flex items-center p-6 pt-0", className)} {...props} />
)
);
CardFooter.displayName = "CardFooter";
export { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent };

25
components/ui/input.tsx Normal file

@@ -0,0 +1,25 @@
import * as React from "react"
import { cn } from "@/lib/utils"
export interface InputProps
extends React.InputHTMLAttributes<HTMLInputElement> {}
const Input = React.forwardRef<HTMLInputElement, InputProps>(
({ className, type, ...props }, ref) => {
return (
<input
type={type}
className={cn(
"flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
className
)}
ref={ref}
{...props}
/>
)
}
)
Input.displayName = "Input"
export { Input }

25
components/ui/select.tsx Normal file

@@ -0,0 +1,25 @@
import * as React from "react";
import { cn } from "@/lib/utils";
export interface SelectProps extends React.SelectHTMLAttributes<HTMLSelectElement> {}
const Select = React.forwardRef<HTMLSelectElement, SelectProps>(
({ className, children, ...props }, ref) => {
return (
<select
className={cn(
"flex h-10 w-full items-center justify-between rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
className
)}
ref={ref}
{...props}
>
{children}
</select>
);
}
);
Select.displayName = "Select";
export { Select };

View File

@@ -0,0 +1,23 @@
import * as React from "react";
import { cn } from "@/lib/utils";
export interface TextareaProps extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}
const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(
({ className, ...props }, ref) => {
return (
<textarea
className={cn(
"flex min-h-[80px] w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
className
)}
ref={ref}
{...props}
/>
);
}
);
Textarea.displayName = "Textarea";
export { Textarea };

View File

@@ -0,0 +1,5 @@
import ModelAdapter from "./model-adapter";
const adapter = new ModelAdapter();
export default adapter;

View File

@@ -0,0 +1,194 @@
import type { ModelProvider, APIResponse, ChatMessage } from "@/types";
import QwenOAuthService from "./qwen-oauth";
import OllamaCloudService from "./ollama-cloud";
import ZaiPlanService from "./zai-plan";
export interface ModelAdapterConfig {
qwen?: {
apiKey?: string;
endpoint?: string;
};
ollama?: {
apiKey?: string;
endpoint?: string;
};
zai?: {
apiKey?: string;
generalEndpoint?: string;
codingEndpoint?: string;
};
}
export class ModelAdapter {
private qwenService: QwenOAuthService;
private ollamaService: OllamaCloudService;
private zaiService: ZaiPlanService;
private preferredProvider: ModelProvider;
constructor(config: ModelAdapterConfig = {}, preferredProvider: ModelProvider = "qwen") {
this.qwenService = new QwenOAuthService(config.qwen);
this.ollamaService = new OllamaCloudService(config.ollama);
this.zaiService = new ZaiPlanService(config.zai);
this.preferredProvider = preferredProvider;
}
setPreferredProvider(provider: ModelProvider): void {
this.preferredProvider = provider;
}
updateQwenApiKey(apiKey: string): void {
this.qwenService = new QwenOAuthService({ apiKey });
}
setQwenOAuthTokens(accessToken: string, refreshToken?: string, expiresIn?: number): void {
this.qwenService.setOAuthTokens(accessToken, refreshToken, expiresIn);
}
getQwenAuthUrl(): string {
return this.qwenService.getAuthorizationUrl();
}
updateOllamaApiKey(apiKey: string): void {
this.ollamaService = new OllamaCloudService({ apiKey });
}
updateZaiApiKey(apiKey: string): void {
this.zaiService = new ZaiPlanService({ apiKey });
}
private async callWithFallback<T>(
operation: (service: any) => Promise<APIResponse<T>>,
providers: ModelProvider[]
): Promise<APIResponse<T>> {
for (const provider of providers) {
try {
let service: any;
switch (provider) {
case "qwen":
service = this.qwenService;
break;
case "ollama":
service = this.ollamaService;
break;
case "zai":
service = this.zaiService;
break;
}
const result = await operation(service);
if (result.success) {
return result;
}
} catch (error) {
console.error(`Error with ${provider}:`, error);
}
}
return {
success: false,
error: "All providers failed",
};
}
async enhancePrompt(prompt: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const providers: ModelProvider[] = provider ? [provider] : [this.preferredProvider, "ollama", "zai"];
return this.callWithFallback((service) => service.enhancePrompt(prompt, model), providers);
}
async generatePRD(idea: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const providers: ModelProvider[] = provider ? [provider] : ["ollama", "zai", this.preferredProvider];
return this.callWithFallback((service) => service.generatePRD(idea, model), providers);
}
async generateActionPlan(prd: string, provider?: ModelProvider, model?: string): Promise<APIResponse<string>> {
const providers: ModelProvider[] = provider ? [provider] : ["zai", "ollama", this.preferredProvider];
return this.callWithFallback((service) => service.generateActionPlan(prd, model), providers);
}
async chatCompletion(
messages: ChatMessage[],
model: string,
provider: ModelProvider = this.preferredProvider
): Promise<APIResponse<string>> {
try {
let service: any;
switch (provider) {
case "qwen":
service = this.qwenService;
break;
case "ollama":
service = this.ollamaService;
break;
case "zai":
service = this.zaiService;
break;
}
return await service.chatCompletion(messages, model);
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Chat completion failed",
};
}
}
async listModels(provider?: ModelProvider): Promise<APIResponse<Record<ModelProvider, string[]>>> {
const fallbackModels: Record<ModelProvider, string[]> = {
qwen: ["qwen-coder-plus", "qwen-coder-turbo", "qwen-coder-lite"],
ollama: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"],
zai: ["glm-4.7", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"],
};
const models: Record<ModelProvider, string[]> = { ...fallbackModels };
if (provider === "ollama" || !provider) {
try {
const ollamaModels = await this.ollamaService.listModels();
if (ollamaModels.success && ollamaModels.data && ollamaModels.data.length > 0) {
models.ollama = ollamaModels.data;
}
} catch (error) {
console.error("[ModelAdapter] Failed to load Ollama models, using fallback:", error);
}
}
if (provider === "zai" || !provider) {
try {
const zaiModels = await this.zaiService.listModels();
if (zaiModels.success && zaiModels.data && zaiModels.data.length > 0) {
models.zai = zaiModels.data;
}
} catch (error) {
console.error("[ModelAdapter] Failed to load Z.AI models, using fallback:", error);
}
}
if (provider === "qwen" || !provider) {
try {
const qwenModels = await this.qwenService.listModels();
if (qwenModels.success && qwenModels.data && qwenModels.data.length > 0) {
models.qwen = qwenModels.data;
}
} catch (error) {
console.error("[ModelAdapter] Failed to load Qwen models, using fallback:", error);
}
}
return { success: true, data: models };
}
getAvailableModels(provider: ModelProvider): string[] {
switch (provider) {
case "qwen":
return this.qwenService.getAvailableModels();
case "ollama":
return this.ollamaService.getAvailableModels();
case "zai":
return this.zaiService.getAvailableModels();
default:
return [];
}
}
}
export default ModelAdapter;
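A brief usage sketch of the fallback path may help here. It is illustrative rather than part of the commit: the import path is inferred from the sibling service files, and it assumes OLLAMA_API_KEY and ZAI_API_KEY are set while Qwen is left unconfigured.

// Usage sketch: with no Qwen credentials, callWithFallback skips "qwen"
// (its call returns success: false) and falls through to Ollama, then Z.AI.
import ModelAdapter from "@/lib/services/model-adapter";

async function demo(): Promise<void> {
  const adapter = new ModelAdapter({
    ollama: { apiKey: process.env.OLLAMA_API_KEY },
    zai: { apiKey: process.env.ZAI_API_KEY },
  }); // preferredProvider defaults to "qwen"
  const result = await adapter.enhancePrompt("Build a REST API for todo lists");
  if (result.success) {
    console.log(result.data);
  } else {
    console.error(result.error); // "All providers failed" if every provider errored
  }
}

demo();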

203
lib/services/ollama-cloud.ts Normal file
View File

@@ -0,0 +1,203 @@
import type { ChatMessage, APIResponse } from "@/types";
export interface OllamaCloudConfig {
apiKey?: string;
endpoint?: string;
}
export interface OllamaModel {
name: string;
size?: number;
digest?: string;
}
export class OllamaCloudService {
private config: OllamaCloudConfig;
private availableModels: string[] = [];
constructor(config: OllamaCloudConfig = {}) {
this.config = {
endpoint: config.endpoint || "https://ollama.com/api",
apiKey: config.apiKey || process.env.OLLAMA_API_KEY,
};
}
private getHeaders(): Record<string, string> {
const headers: Record<string, string> = {
"Content-Type": "application/json",
};
if (this.config.apiKey) {
headers["Authorization"] = `Bearer ${this.config.apiKey}`;
}
return headers;
}
async chatCompletion(
messages: ChatMessage[],
model: string = "gpt-oss:120b",
stream: boolean = false
): Promise<APIResponse<string>> {
try {
if (!this.config.apiKey) {
throw new Error("API key is required. Please configure your Ollama API key in settings.");
}
console.log("[Ollama] API call:", { endpoint: this.config.endpoint, model, messages });
const response = await fetch(`${this.config.endpoint}/chat`, {
method: "POST",
headers: this.getHeaders(),
body: JSON.stringify({
model,
messages,
stream,
}),
});
console.log("[Ollama] Response status:", response.status, response.statusText);
if (!response.ok) {
const errorText = await response.text();
console.error("[Ollama] Error response:", errorText);
throw new Error(`Chat completion failed (${response.status}): ${response.statusText} - ${errorText}`);
}
const data = await response.json();
console.log("[Ollama] Response data:", data);
if (data.message && data.message.content) {
return { success: true, data: data.message.content };
} else if (data.choices && data.choices[0]) {
return { success: true, data: data.choices[0].message.content };
} else {
return { success: false, error: "Unexpected response format" };
}
} catch (error) {
console.error("[Ollama] Chat completion error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Chat completion failed",
};
}
}
async listModels(): Promise<APIResponse<string[]>> {
try {
if (this.config.apiKey) {
console.log("[Ollama] Listing models from:", `${this.config.endpoint}/tags`);
const response = await fetch(`${this.config.endpoint}/tags`, {
headers: this.getHeaders(),
});
console.log("[Ollama] List models response status:", response.status, response.statusText);
if (!response.ok) {
throw new Error(`Failed to list models: ${response.statusText}`);
}
const data = await response.json();
console.log("[Ollama] Models data:", data);
const models = data.models?.map((m: OllamaModel) => m.name) || [];
this.availableModels = models;
return { success: true, data: models };
} else {
console.log("[Ollama] No API key, using fallback models");
return { success: true, data: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"] };
}
} catch (error) {
console.error("[Ollama] listModels error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Failed to list models",
};
}
}
getAvailableModels(): string[] {
return this.availableModels.length > 0
? this.availableModels
: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"];
}
async enhancePrompt(prompt: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer. Your task is to enhance user prompts to make them more precise, actionable, and effective for AI coding agents.
Apply these principles:
1. Add specific context about the project and requirements
2. Clarify constraints and preferences
3. Define expected output format clearly
4. Include edge cases and error handling requirements
5. Specify testing and validation criteria
Return ONLY the enhanced prompt, no explanations.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt for an AI coding agent:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "gpt-oss:120b");
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert product manager and technical architect. Generate a comprehensive Product Requirements Document (PRD) based on the user's idea.
Structure your PRD with these sections:
1. Overview & Objectives
2. User Personas & Use Cases
3. Functional Requirements (prioritized)
4. Non-functional Requirements
5. Technical Architecture Recommendations
6. Success Metrics & KPIs
Use clear, specific language suitable for development teams.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate a PRD for this idea:\n\n${idea}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "gpt-oss:120b");
}
async generateActionPlan(prd: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert technical lead and project manager. Generate a detailed action plan based on the PRD.
Structure the action plan with:
1. Task breakdown with priorities (High/Medium/Low)
2. Dependencies between tasks
3. Estimated effort for each task
4. Recommended frameworks and technologies
5. Architecture guidelines and best practices
Include specific recommendations for:
- Frontend frameworks
- Backend architecture
- Database choices
- Authentication/authorization
- Deployment strategy`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate an action plan based on this PRD:\n\n${prd}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "gpt-oss:120b");
}
}
export default OllamaCloudService;
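chatCompletion above parses a single JSON body, so callers should leave stream at false as written. If streaming support is added later, a sketch along these lines could consume the same /chat endpoint, assuming Ollama's streamed replies are newline-delimited JSON objects carrying message.content chunks and a final done flag:

// Streaming sketch (assumption about the wire format, not part of this service).
async function streamOllamaChat(
  endpoint: string,
  apiKey: string,
  model: string,
  messages: { role: string; content: string }[],
  onToken: (token: string) => void
): Promise<void> {
  const response = await fetch(`${endpoint}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  if (!response.ok || !response.body) throw new Error(`Stream failed: ${response.statusText}`);
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onToken(chunk.message.content);
      if (chunk.done) return;
    }
  }
}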

217
lib/services/qwen-oauth.ts Normal file
View File

@@ -0,0 +1,217 @@
import type { ChatMessage, APIResponse } from "@/types";
export interface QwenOAuthConfig {
apiKey?: string;
accessToken?: string;
refreshToken?: string;
expiresAt?: number;
endpoint?: string;
clientId?: string;
redirectUri?: string;
}
export class QwenOAuthService {
private config: QwenOAuthConfig;
constructor(config: QwenOAuthConfig = {}) {
this.config = {
endpoint: config.endpoint || "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
apiKey: config.apiKey || process.env.QWEN_API_KEY,
accessToken: config.accessToken,
refreshToken: config.refreshToken,
expiresAt: config.expiresAt,
clientId: config.clientId || process.env.NEXT_PUBLIC_QWEN_CLIENT_ID,
redirectUri: config.redirectUri || (typeof window !== "undefined" ? window.location.origin : ""),
};
}
private getHeaders(): Record<string, string> {
const authHeader = this.config.accessToken
? `Bearer ${this.config.accessToken}`
: `Bearer ${this.config.apiKey}`;
return {
"Content-Type": "application/json",
"Authorization": authHeader,
};
}
isAuthenticated(): boolean {
return !!(this.config.apiKey || (this.config.accessToken && (!this.config.expiresAt || this.config.expiresAt > Date.now())));
}
getAccessToken(): string | null {
return this.config.accessToken || this.config.apiKey || null;
}
async authenticate(apiKey: string): Promise<APIResponse<string>> {
try {
this.config.apiKey = apiKey;
this.config.accessToken = undefined; // Clear OAuth token if API key is provided
return { success: true, data: "Authenticated successfully" };
} catch (error) {
console.error("Qwen authentication error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Authentication failed",
};
}
}
setOAuthTokens(accessToken: string, refreshToken?: string, expiresIn?: number): void {
this.config.accessToken = accessToken;
if (refreshToken) this.config.refreshToken = refreshToken;
if (expiresIn) this.config.expiresAt = Date.now() + expiresIn * 1000;
}
getAuthorizationUrl(): string {
const baseUrl = "https://dashscope.console.aliyun.com/oauth/authorize"; // Placeholder URL
const params = new URLSearchParams({
client_id: this.config.clientId || "",
redirect_uri: this.config.redirectUri || "",
response_type: "code",
scope: "dashscope:chat",
});
return `${baseUrl}?${params.toString()}`;
}
async logout(): Promise<void> {
this.config.apiKey = undefined;
this.config.accessToken = undefined;
this.config.refreshToken = undefined;
this.config.expiresAt = undefined;
}
async chatCompletion(
messages: ChatMessage[],
model: string = "qwen-coder-plus",
stream: boolean = false
): Promise<APIResponse<string>> {
try {
if (!this.isAuthenticated()) {
throw new Error("Authentication is required. Configure a Qwen API key or complete the OAuth flow in settings.");
}
console.log("[Qwen] API call:", { endpoint: this.config.endpoint, model, messages });
const response = await fetch(`${this.config.endpoint}/chat/completions`, {
method: "POST",
headers: this.getHeaders(),
body: JSON.stringify({
model,
messages,
stream,
}),
});
console.log("[Qwen] Response status:", response.status, response.statusText);
if (!response.ok) {
const errorText = await response.text();
console.error("[Qwen] Error response:", errorText);
throw new Error(`Chat completion failed (${response.status}): ${response.statusText} - ${errorText}`);
}
const data = await response.json();
console.log("[Qwen] Response data:", data);
if (data.choices && data.choices[0] && data.choices[0].message) {
return { success: true, data: data.choices[0].message.content };
} else {
return { success: false, error: "Unexpected response format" };
}
} catch (error) {
console.error("[Qwen] Chat completion error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Chat completion failed",
};
}
}
async enhancePrompt(prompt: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer. Your task is to enhance user prompts to make them more precise, actionable, and effective for AI coding agents.
Apply these principles:
1. Add specific context about the project and requirements
2. Clarify constraints and preferences
3. Define expected output format clearly
4. Include edge cases and error handling requirements
5. Specify testing and validation criteria
Return ONLY the enhanced prompt, no explanations or extra text.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt for an AI coding agent:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "qwen-coder-plus");
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert product manager and technical architect. Generate a comprehensive Product Requirements Document (PRD) based on the user's idea.
Structure your PRD with these sections:
1. Overview & Objectives
2. User Personas & Use Cases
3. Functional Requirements (prioritized)
4. Non-functional Requirements
5. Technical Architecture Recommendations
6. Success Metrics & KPIs
Use clear, specific language suitable for development teams.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate a PRD for this idea:\n\n${idea}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "qwen-coder-plus");
}
async generateActionPlan(prd: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert technical lead and project manager. Generate a detailed action plan based on the PRD.
Structure the action plan with:
1. Task breakdown with priorities (High/Medium/Low)
2. Dependencies between tasks
3. Estimated effort for each task
4. Recommended frameworks and technologies
5. Architecture guidelines and best practices
Include specific recommendations for:
- Frontend frameworks
- Backend architecture
- Database choices
- Authentication/authorization
- Deployment strategy`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate an action plan based on this PRD:\n\n${prd}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "qwen-coder-plus");
}
async listModels(): Promise<APIResponse<string[]>> {
const models = ["qwen-coder-plus", "qwen-coder-turbo", "qwen-coder-lite", "qwen-plus", "qwen-turbo", "qwen-max"];
return { success: true, data: models };
}
getAvailableModels(): string[] {
return ["qwen-coder-plus", "qwen-coder-turbo", "qwen-coder-lite", "qwen-plus", "qwen-turbo", "qwen-max"];
}
}
export default QwenOAuthService;
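A short sketch of how a caller might wire up either credential path using only the methods above; the key and token values are placeholders supplied by whatever handles the OAuth callback:

// Auth wiring sketch (illustrative; token strings are placeholders).
import QwenOAuthService from "@/lib/services/qwen-oauth";

async function ensureQwenAuth(apiKey?: string): Promise<QwenOAuthService> {
  const qwen = new QwenOAuthService();
  if (apiKey) {
    await qwen.authenticate(apiKey); // simple API-key path
  } else {
    // OAuth path: tokens come from the redirect/callback handled elsewhere.
    qwen.setOAuthTokens("access-token-from-callback", "refresh-token", 3600);
  }
  if (!qwen.isAuthenticated()) {
    throw new Error(`Not authenticated; redirect the user to ${qwen.getAuthorizationUrl()}`);
  }
  return qwen;
}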

187
lib/services/zai-plan.ts Normal file
View File

@@ -0,0 +1,187 @@
import type { ChatMessage, APIResponse } from "@/types";
export interface ZaiPlanConfig {
apiKey?: string;
generalEndpoint?: string;
codingEndpoint?: string;
}
export class ZaiPlanService {
private config: ZaiPlanConfig;
constructor(config: ZaiPlanConfig = {}) {
this.config = {
generalEndpoint: config.generalEndpoint || "https://api.z.ai/api/paas/v4",
codingEndpoint: config.codingEndpoint || "https://api.z.ai/api/coding/paas/v4",
apiKey: config.apiKey || process.env.ZAI_API_KEY,
};
}
private getHeaders(): Record<string, string> {
return {
"Content-Type": "application/json",
"Authorization": `Bearer ${this.config.apiKey}`,
"Accept-Language": "en-US,en",
};
}
async chatCompletion(
messages: ChatMessage[],
model: string = "glm-4.7",
useCodingEndpoint: boolean = false
): Promise<APIResponse<string>> {
try {
if (!this.config.apiKey) {
throw new Error("API key is required. Please configure your Z.AI API key in settings.");
}
const endpoint = useCodingEndpoint ? this.config.codingEndpoint : this.config.generalEndpoint;
console.log("[Z.AI] API call:", { endpoint, model, messages });
const response = await fetch(`${endpoint}/chat/completions`, {
method: "POST",
headers: this.getHeaders(),
body: JSON.stringify({
model,
messages,
stream: false,
}),
});
console.log("[Z.AI] Response status:", response.status, response.statusText);
if (!response.ok) {
const errorText = await response.text();
console.error("[Z.AI] Error response:", errorText);
throw new Error(`Chat completion failed (${response.status}): ${response.statusText} - ${errorText}`);
}
const data = await response.json();
console.log("[Z.AI] Response data:", data);
if (data.choices && data.choices[0] && data.choices[0].message) {
return { success: true, data: data.choices[0].message.content };
} else if (data.output && data.output.choices && data.output.choices[0]) {
return { success: true, data: data.output.choices[0].message.content };
} else {
return { success: false, error: "Unexpected response format" };
}
} catch (error) {
console.error("[Z.AI] Chat completion error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Chat completion failed",
};
}
}
async enhancePrompt(prompt: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert prompt engineer. Your task is to enhance user prompts to make them more precise, actionable, and effective for AI coding agents.
Apply these principles:
1. Add specific context about the project and requirements
2. Clarify constraints and preferences
3. Define expected output format clearly
4. Include edge cases and error handling requirements
5. Specify testing and validation criteria
Return ONLY the enhanced prompt, no explanations or extra text.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Enhance this prompt for an AI coding agent:\n\n${prompt}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "glm-4.7", true);
}
async generatePRD(idea: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert product manager and technical architect. Generate a comprehensive Product Requirements Document (PRD) based on the user's idea.
Structure your PRD with these sections:
1. Overview & Objectives
2. User Personas & Use Cases
3. Functional Requirements (prioritized by importance)
4. Non-functional Requirements
5. Technical Architecture Recommendations
6. Success Metrics & KPIs
Use clear, specific language suitable for development teams.`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate a PRD for this idea:\n\n${idea}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "glm-4.7");
}
async generateActionPlan(prd: string, model?: string): Promise<APIResponse<string>> {
const systemMessage: ChatMessage = {
role: "system",
content: `You are an expert technical lead and project manager. Generate a detailed action plan based on the PRD.
Structure the action plan with:
1. Task breakdown with priorities (High/Medium/Low)
2. Dependencies between tasks
3. Estimated effort for each task
4. Recommended frameworks and technologies
5. Architecture guidelines and best practices
Include specific recommendations for:
- Frontend frameworks
- Backend architecture
- Database choices
- Authentication/authorization
- Deployment strategy`,
};
const userMessage: ChatMessage = {
role: "user",
content: `Generate an action plan based on this PRD:\n\n${prd}`,
};
return this.chatCompletion([systemMessage, userMessage], model || "glm-4.7", true);
}
async listModels(): Promise<APIResponse<string[]>> {
try {
if (this.config.apiKey) {
const response = await fetch(`${this.config.generalEndpoint}/models`, {
headers: this.getHeaders(),
});
if (!response.ok) {
throw new Error(`Failed to list models: ${response.statusText}`);
}
const data = await response.json();
const models = data.data?.map((m: any) => m.id) || [];
return { success: true, data: models };
} else {
console.log("[Z.AI] No API key, using fallback models");
return { success: true, data: ["glm-4.7", "glm-4.6", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"] };
}
} catch (error) {
console.error("[Z.AI] listModels error:", error);
return {
success: false,
error: error instanceof Error ? error.message : "Failed to list models",
};
}
}
getAvailableModels(): string[] {
return ["glm-4.7", "glm-4.6", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"];
}
}
export default ZaiPlanService;
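A sketch of routing between the general and coding endpoints; the keyword heuristic and model choice are illustrative, not part of the service:

// Endpoint-routing sketch: send code-flavored questions to the coding endpoint.
import ZaiPlanService from "@/lib/services/zai-plan";
import type { ChatMessage } from "@/types";

const zai = new ZaiPlanService({ apiKey: process.env.ZAI_API_KEY });

async function ask(question: string): Promise<string> {
  const looksLikeCode = /\b(code|function|bug|refactor|typescript)\b/i.test(question);
  const messages: ChatMessage[] = [{ role: "user", content: question }];
  const result = await zai.chatCompletion(messages, "glm-4.7", looksLikeCode);
  if (!result.success || !result.data) throw new Error(result.error ?? "Z.AI call failed");
  return result.data;
}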

109
lib/store.ts Normal file
View File

@@ -0,0 +1,109 @@
import { create } from "zustand";
import type { ModelProvider, PRD, ActionPlan } from "@/types";
interface AppState {
currentPrompt: string;
enhancedPrompt: string | null;
prd: PRD | null;
actionPlan: ActionPlan | null;
selectedProvider: ModelProvider;
selectedModels: Record<ModelProvider, string>;
availableModels: Record<ModelProvider, string[]>;
apiKeys: Record<ModelProvider, string>;
qwenTokens?: {
accessToken: string;
refreshToken?: string;
expiresAt?: number;
};
isProcessing: boolean;
error: string | null;
history: {
id: string;
prompt: string;
timestamp: Date;
}[];
setCurrentPrompt: (prompt: string) => void;
setEnhancedPrompt: (enhanced: string | null) => void;
setPRD: (prd: PRD) => void;
setActionPlan: (plan: ActionPlan) => void;
setSelectedProvider: (provider: ModelProvider) => void;
setSelectedModel: (provider: ModelProvider, model: string) => void;
setAvailableModels: (provider: ModelProvider, models: string[]) => void;
setApiKey: (provider: ModelProvider, key: string) => void;
setQwenTokens: (tokens: { accessToken: string; refreshToken?: string; expiresAt?: number }) => void;
setProcessing: (processing: boolean) => void;
setError: (error: string | null) => void;
addToHistory: (prompt: string) => void;
clearHistory: () => void;
reset: () => void;
}
const useStore = create<AppState>((set) => ({
currentPrompt: "",
enhancedPrompt: null,
prd: null,
actionPlan: null,
selectedProvider: "qwen",
selectedModels: {
qwen: "qwen-coder-plus",
ollama: "gpt-oss:120b",
zai: "glm-4.7",
},
availableModels: {
qwen: ["qwen-coder-plus", "qwen-coder-turbo", "qwen-coder-lite"],
ollama: ["gpt-oss:120b", "llama3.1", "gemma3", "deepseek-r1", "qwen3"],
zai: ["glm-4.7", "glm-4.6", "glm-4.5", "glm-4.5-air", "glm-4-flash", "glm-4-flashx"],
},
apiKeys: {
qwen: "",
ollama: "",
zai: "",
},
isProcessing: false,
error: null,
history: [],
setCurrentPrompt: (prompt) => set({ currentPrompt: prompt }),
setEnhancedPrompt: (enhanced) => set({ enhancedPrompt: enhanced }),
setPRD: (prd) => set({ prd }),
setActionPlan: (plan) => set({ actionPlan: plan }),
setSelectedProvider: (provider) => set({ selectedProvider: provider }),
setSelectedModel: (provider, model) =>
set((state) => ({
selectedModels: { ...state.selectedModels, [provider]: model },
})),
setAvailableModels: (provider, models) =>
set((state) => ({
availableModels: { ...state.availableModels, [provider]: models },
})),
setApiKey: (provider, key) =>
set((state) => ({
apiKeys: { ...state.apiKeys, [provider]: key },
})),
setQwenTokens: (tokens) => set({ qwenTokens: tokens }),
setProcessing: (processing) => set({ isProcessing: processing }),
setError: (error) => set({ error }),
addToHistory: (prompt) =>
set((state) => ({
history: [
...state.history,
{
id: Math.random().toString(36).slice(2, 11),
prompt,
timestamp: new Date(),
},
],
})),
clearHistory: () => set({ history: [] }),
reset: () =>
set({
currentPrompt: "",
enhancedPrompt: null,
prd: null,
actionPlan: null,
error: null,
}),
}));
export default useStore;
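A sketch of consuming the store from a client component with narrow selectors, so unrelated state changes do not re-render it; the component itself is hypothetical:

"use client";
// Store usage sketch (hypothetical component): narrow selectors avoid extra re-renders.
import useStore from "@/lib/store";

export function PromptInput() {
  const currentPrompt = useStore((s) => s.currentPrompt);
  const setCurrentPrompt = useStore((s) => s.setCurrentPrompt);
  return (
    <textarea
      value={currentPrompt}
      onChange={(e) => setCurrentPrompt(e.target.value)}
      placeholder="Describe what you want to build..."
    />
  );
}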

6
lib/utils.ts Normal file
View File

@@ -0,0 +1,6 @@
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
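For reference, a quick example of how cn resolves: clsx drops falsy values and tailwind-merge lets the later of two conflicting utilities win.

// Example: the later padding class wins and the false branch is dropped.
const buttonClasses = cn("p-2 text-sm", false && "hidden", "p-4"); // "text-sm p-4"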

6
next.config.js Normal file
View File

@@ -0,0 +1,6 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
};
module.exports = nextConfig;

7818
package-lock.json generated Normal file

File diff suppressed because it is too large

43
package.json Normal file
View File

@@ -0,0 +1,43 @@
{
"name": "promptarch",
"version": "1.0.0",
"description": "Transform vague ideas into production-ready prompts and PRDs",
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"@types/node": "^22.10.1",
"@types/react": "^19.0.1",
"@types/react-dom": "^19.0.2",
"autoprefixer": "^10.4.20",
"clsx": "^2.1.1",
"eslint": "^9.16.0",
"eslint-config-next": "^15.0.3",
"lucide-react": "^0.562.0",
"next": "^15.0.3",
"postcss": "^8.4.49",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"react-hook-form": "^7.69.0",
"react-markdown": "^10.1.0",
"recharts": "^3.6.0",
"rehype-highlight": "^7.0.2",
"remark-gfm": "^4.0.1",
"tailwind-merge": "^3.4.0",
"tailwindcss": "^3.4.17",
"typescript": "^5.7.2",
"zod": "^4.2.1",
"zustand": "^5.0.9"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/react": "^19.0.1",
"@types/react-dom": "^19.0.2"
},
"keywords": [],
"author": "",
"license": "ISC"
}

6
postcss.config.js Normal file
View File

@@ -0,0 +1,6 @@
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};

62
tailwind.config.ts Normal file
View File

@@ -0,0 +1,62 @@
import type { Config } from "tailwindcss";
const config: Config = {
content: [
"./pages/**/*.{js,ts,jsx,tsx,mdx}",
"./components/**/*.{js,ts,jsx,tsx,mdx}",
"./app/**/*.{js,ts,jsx,tsx,mdx}",
],
theme: {
extend: {
colors: {
background: "hsl(var(--background))",
foreground: "hsl(var(--foreground))",
card: {
DEFAULT: "hsl(var(--card))",
foreground: "hsl(var(--card-foreground))",
},
popover: {
DEFAULT: "hsl(var(--popover))",
foreground: "hsl(var(--popover-foreground))",
},
primary: {
DEFAULT: "hsl(var(--primary))",
foreground: "hsl(var(--primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--secondary))",
foreground: "hsl(var(--secondary-foreground))",
},
muted: {
DEFAULT: "hsl(var(--muted))",
foreground: "hsl(var(--muted-foreground))",
},
accent: {
DEFAULT: "hsl(var(--accent))",
foreground: "hsl(var(--accent-foreground))",
},
destructive: {
DEFAULT: "hsl(var(--destructive))",
foreground: "hsl(var(--destructive-foreground))",
},
border: "hsl(var(--border))",
input: "hsl(var(--input))",
ring: "hsl(var(--ring))",
chart: {
"1": "hsl(var(--chart-1))",
"2": "hsl(var(--chart-2))",
"3": "hsl(var(--chart-3))",
"4": "hsl(var(--chart-4))",
"5": "hsl(var(--chart-5))",
},
},
borderRadius: {
lg: "var(--radius)",
md: "calc(var(--radius) - 2px)",
sm: "calc(var(--radius) - 4px)",
},
},
},
plugins: [],
};
export default config;

27
tsconfig.json Normal file
View File

@@ -0,0 +1,27 @@
{
"compilerOptions": {
"target": "ES2017",
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"plugins": [
{
"name": "next"
}
],
"paths": {
"@/*": ["./*"]
}
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
"exclude": ["node_modules"]
}

93
types/index.ts Normal file
View File

@@ -0,0 +1,93 @@
export type ModelProvider = "qwen" | "ollama" | "zai";
export interface ModelConfig {
provider: ModelProvider;
model: string;
apiKey?: string;
endpoint?: string;
}
export interface PromptEnhancement {
original: string;
enhanced: string;
quality: number;
intent: string;
patterns: string[];
}
export interface PRD {
id: string;
title: string;
overview: string;
objectives: string[];
userPersonas: UserPersona[];
functionalRequirements: Requirement[];
nonFunctionalRequirements: Requirement[];
technicalArchitecture: string;
successMetrics: string[];
createdAt: Date;
updatedAt: Date;
}
export interface UserPersona {
name: string;
description: string;
goals: string[];
painPoints: string[];
}
export interface Requirement {
id: string;
title: string;
description: string;
priority: "high" | "medium" | "low";
status: "pending" | "in-progress" | "completed";
dependencies?: string[];
}
export interface ActionPlan {
id: string;
prdId: string;
tasks: Task[];
frameworks: FrameworkRecommendation[];
architecture: ArchitectureGuideline;
estimatedDuration: string;
createdAt: Date;
rawContent?: string;
}
export interface Task {
id: string;
title: string;
description: string;
priority: "high" | "medium" | "low";
estimatedHours: number;
dependencies: string[];
status: "pending" | "in-progress" | "completed";
assignee?: string;
}
export interface FrameworkRecommendation {
category: string;
recommendation: string;
rationale: string;
alternatives: string[];
}
export interface ArchitectureGuideline {
pattern: string;
structure: string;
technologies: string[];
bestPractices: string[];
}
export interface APIResponse<T> {
success: boolean;
data?: T;
error?: string;
}
export interface ChatMessage {
role: "system" | "user" | "assistant";
content: string;
}
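A hypothetical type-guard helper, not part of the commit, shows how APIResponse can be narrowed so data is non-optional after a success check:

// Illustrative helper: after isSuccess(res), res.data is typed as T (not T | undefined).
export function isSuccess<T>(res: APIResponse<T>): res is APIResponse<T> & { data: T } {
  return res.success && res.data !== undefined;
}

// Usage: const res = await adapter.enhancePrompt(prompt);
//        if (isSuccess(res)) console.log(res.data.length);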

6
vercel.json Normal file
View File

@@ -0,0 +1,6 @@
{
"buildCommand": "npm run build",
"outputDirectory": ".next",
"framework": "nextjs",
"devCommand": "npm run dev"
}