Commit Graph

9 Commits

  • fix(automation): Switch to Build Mode for Backend Generation
    - Modified Views.tsx to keep the user in GlobalMode.Build instead of switching to Chat when generating a backend.
    - Added intelligent system prompt routing in LayoutComponents.tsx for [BACKEND_REQUEST] to ensure the AI knows how to generate a server.js file.
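The routing above can be sketched as a simple tag check: messages carrying the `[BACKEND_REQUEST]` marker get a server-generation system prompt instead of the default chat prompt. The prompt constants and function name here are illustrative placeholders, not the actual identifiers in LayoutComponents.tsx.

```typescript
// Placeholder prompts; the real ones live in the app's prompt definitions.
const DEFAULT_SYSTEM_PROMPT = "You are a helpful coding assistant.";
const BACKEND_SYSTEM_PROMPT =
  "Generate a complete server.js exposing the API endpoints the frontend expects.";

// Route the system prompt based on the request tag embedded in the message.
function routeSystemPrompt(userMessage: string): string {
  return userMessage.includes("[BACKEND_REQUEST]")
    ? BACKEND_SYSTEM_PROMPT
    : DEFAULT_SYSTEM_PROMPT;
}
```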
  • feat: Add backend generation capability
    - Added 'Build Backend' button to Preview toolbar in Views.tsx
    - Defined BACKEND_GENERATOR_PROMPT in automationService.ts
    - Implemented specific prompt logic to infer API endpoints from frontend code
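One way to infer endpoints from frontend code, sketched here under the assumption that the frontend uses `fetch`/`axios` calls against `/api/...` paths; `inferEndpoints` is an illustrative name, not the actual implementation in automationService.ts.

```typescript
// Scan frontend source for fetch("/api/...") and axios.get('/api/...')-style
// calls and collect the distinct endpoint paths to feed into the prompt.
function inferEndpoints(frontendSource: string): string[] {
  const pattern = /(?:fetch|axios\.\w+)\(\s*['"`](\/api\/[^'"`?]+)/g;
  const endpoints = new Set<string>(); // Set dedupes repeated calls
  for (const match of frontendSource.matchAll(pattern)) {
    endpoints.add(match[1]);
  }
  return [...endpoints];
}
```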
  • feat: Add Retry button to chat plans and Cancel AI button to build screen
    - Added a 'Retry' button to plan messages in chat, allowing one-click rejection and regeneration.
    - Added a 'Cancel AI' button to the Building screen to abort stuck or unwanted build processes.
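A 'Cancel AI' button typically wires through an AbortController; a minimal sketch, assuming the build loop accepts an AbortSignal (names are illustrative):

```typescript
// Start a cancellable build; the returned cancel() is what the button calls.
function startBuild(run: (signal: AbortSignal) => Promise<void>) {
  const controller = new AbortController();
  const done = run(controller.signal).catch((err) => {
    if (controller.signal.aborted) return; // user cancelled: swallow the abort
    throw err; // real failure: propagate
  });
  return { done, cancel: () => controller.abort() };
}
```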
  • fix: Robust patch generation and safety timeouts
    - Condensed Canvas Isolation Architecture to streamline AI focus
    - Added robust JSON extraction (brace-finding) to handle conversational AI output
    - Implemented 90s safety timeout for patch generation to prevent infinite 'Generating' state
    - Throttled status updates to improve UI performance during streaming
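The brace-finding extraction and safety timeout above can be sketched as follows; `extractJson`, `withTimeout`, and `PATCH_TIMEOUT_MS` are illustrative names, not the actual identifiers.

```typescript
// Pull the first balanced JSON object out of conversational model output
// (e.g. "Sure! Here is the patch: {...} Hope that helps.").
function extractJson(raw: string): unknown | null {
  const start = raw.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  let inString = false;
  for (let i = start; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (ch === "\\") i++;            // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === "{") {
      depth++;
    } else if (ch === "}") {
      depth--;
      if (depth === 0) {
        try {
          return JSON.parse(raw.slice(start, i + 1));
        } catch {
          return null;                 // balanced but not valid JSON
        }
      }
    }
  }
  return null;                         // braces never balanced
}

// 90s safety timeout: race generation against a timer so the UI can never
// sit in 'Generating' forever.
const PATCH_TIMEOUT_MS = 90_000;
function withTimeout<T>(p: Promise<T>, ms = PATCH_TIMEOUT_MS): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("Patch generation timed out")), ms),
    ),
  ]);
}
```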
  • feat: Implement Canvas Isolation Architecture for surgical code precision
    - Replaced old refinement protocol with technical Canvas Isolation Architecture
    - Integrated layered canvas logic (Base, Change, Merge, Validation)
    - Enforced isolation chamber and quarantine zone constraints in prompts
    - Updated QA self-check gate to strictly monitor for scope creep
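A scope-creep gate like the QA self-check above can be reduced to a set-membership test: any patched file outside the isolation chamber's allowed list is an offender. `checkScope` is an illustrative name, not the actual gate.

```typescript
// Return the files a patch touches that fall outside the allowed set;
// a non-empty result means the patch violated isolation and should be rejected.
function checkScope(allowedFiles: string[], patchedFiles: string[]): string[] {
  const allowed = new Set(allowedFiles);
  return patchedFiles.filter((f) => !allowed.has(f));
}
```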
  • fix: Enable Ollama models for code generation
    - Added getActiveModel() function to automationService for dynamic model selection
    - Replaced all hardcoded 'qwen-coder-plus' strings with getActiveModel() calls
    - Added localStorage sync when model is changed in orchestrator
    - This enables Ollama and other models to work in all automation tasks
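A minimal sketch of the dynamic selection, assuming the orchestrator mirrors the chosen model into localStorage; the storage key and default model name here are assumptions, not the actual values.

```typescript
// Narrow storage interface so the function works with localStorage or a mock.
interface KVStore {
  getItem(key: string): string | null;
}

const DEFAULT_MODEL = "qwen-coder-plus"; // assumed fallback when nothing is set

// Replaces hardcoded model strings: reads whatever the orchestrator last synced.
function getActiveModel(storage: KVStore): string {
  return storage.getItem("activeModel") ?? DEFAULT_MODEL;
}

// Usage in an automation task (illustrative):
//   const model = getActiveModel(localStorage);
//   await client.chat({ model, messages });
```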
  • fix: Ollama click handler and stream completion
    - Fixed sidebar Ollama Cloud section to always open modal on click
    - Improved stream handling with proper buffer flush on end
    - Added 120s timeout for Ollama requests
    - Added better logging for debugging
    - Fixed activeRequest cleanup on errors
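The buffer-flush fix above matters for newline-delimited streams: without flushing on end, a final line with no trailing newline is silently dropped. A sketch, with illustrative names:

```typescript
// Accumulate stream chunks, emit complete lines, and flush the remainder
// when the stream ends.
function makeChunkHandler(onLine: (line: string) => void) {
  let buffer = "";
  return {
    push(chunk: string) {
      buffer += chunk;
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? "";       // keep the trailing partial line
      for (const line of lines) if (line.trim()) onLine(line);
    },
    // Called on stream end: this flush is what rescues the last line.
    end() {
      if (buffer.trim()) onLine(buffer);
      buffer = "";
    },
  };
}
```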
  • fix: Ollama API key storage - keytar ESM import and file fallback
    - Fixed keytar ESM import to use .default property
    - Added comprehensive error handling and logging for vault operations
    - Fixed fallback to file-based storage when keytar is unavailable
    - Added debug logs to help troubleshoot key storage issues
  • feat: Add Ollama Cloud integration with 20+ free AI models
    - Added AI Model Manager to sidebar for quick model switching
    - Integrated Ollama Cloud API with official models from ollama.com
    - Added AISettingsModal with searchable model catalog
    - Models include: GPT-OSS 120B, DeepSeek V3.2, Gemini 3 Pro, Qwen3 Coder, etc.
    - Added 'Get Key' button linking to ollama.com/settings/keys
    - Updated README with Ollama Cloud documentation and free API key instructions
    - Fixed ChatPanel export issue
    - Added Brain icon for reasoning models
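A call to Ollama Cloud's chat endpoint might look like the sketch below. The bearer-token header matches the "Get Key" flow above, but treat the exact base URL and request shape as assumptions to verify against the ollama.com API docs; the request is built separately from `fetch` so it can be inspected without network access.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build (but don't send) a chat request against the assumed cloud endpoint.
function buildChatRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: "https://ollama.com/api/chat", // assumed Ollama Cloud chat endpoint
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // key from ollama.com/settings/keys
      },
      body: JSON.stringify({ model, messages, stream: true }),
    },
  };
}

// Usage (illustrative):
//   const { url, init } = buildChatRequest(key, "qwen3-coder", msgs);
//   const res = await fetch(url, init);
```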