- Refreshed the list of available free models to match the application's source code.
- Added new models like Qwen3 Next, Gemma 3, and Mistral Large 3.
- Added section on 'UX Package Generator' to README.md.
- Explained how the feature creates portable frontend payloads for external backend development.
- Replaced 'Build Backend' feature with a frontend exporter.
- Clicking the button now creates and downloads a 'ux_package.json' containing all frontend assets (HTML, CSS, JS) and metadata.
- This package is intended to be loaded into external coding AIs for backend development.
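As a rough illustration of what such an export could look like, here is a minimal sketch of a `ux_package.json` builder. The interface fields and function name are assumptions for illustration, not the app's actual schema:

```typescript
// Hypothetical shape of the ux_package.json payload (illustrative only).
interface UxPackage {
  meta: { appName: string; generatedAt: string; version: string };
  assets: { path: string; content: string }[]; // HTML, CSS, JS files
}

// Assemble the frontend files and metadata into a serialized package
// suitable for download and for pasting into an external coding AI.
function buildUxPackage(appName: string, files: Record<string, string>): string {
  const pkg: UxPackage = {
    meta: { appName, generatedAt: new Date().toISOString(), version: "1.0" },
    assets: Object.entries(files).map(([path, content]) => ({ path, content })),
  };
  return JSON.stringify(pkg, null, 2);
}
```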
- Updated LayoutComponents.tsx to correctly set 'requestKind' to 'plan' for backend requests.
- Excluded backend requests from 'isModificationMode' logic to prevent incorrect prompt routing.
- Modified Views.tsx to keep the user in GlobalMode.Build instead of switching to Chat when generating a backend.
- Added system prompt routing in LayoutComponents.tsx so that [BACKEND_REQUEST] messages receive a dedicated prompt instructing the AI to generate a server.js file.
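The routing step above can be sketched as a simple marker check. The prompt texts and function name here are assumptions; only the `[BACKEND_REQUEST]` marker comes from the changelog:

```typescript
// Illustrative prompts; the real ones live in the application's source.
const DEFAULT_SYSTEM_PROMPT = "You are a frontend assistant.";
const BACKEND_SYSTEM_PROMPT =
  "Generate a complete server.js backend for the given frontend.";

// Route to the backend prompt whenever the message carries the marker.
function selectSystemPrompt(userMessage: string): string {
  return userMessage.includes("[BACKEND_REQUEST]")
    ? BACKEND_SYSTEM_PROMPT
    : DEFAULT_SYSTEM_PROMPT;
}
```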
- Added a 'Build Backend' button to the Preview toolbar in Views.tsx.
- Defined BACKEND_GENERATOR_PROMPT in automationService.ts.
- Implemented prompt logic that infers API endpoints from the frontend code.
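One plausible way to infer endpoints, sketched here as an assumption rather than the project's actual implementation, is to scan the frontend source for `fetch()` calls and collect the URL paths so they can be listed in the backend prompt:

```typescript
// Collect unique URL paths from fetch('...') calls in frontend source.
function inferEndpoints(frontendSource: string): string[] {
  const endpoints = new Set<string>();
  // Match fetch('/path'), fetch("/path"), or fetch(`/path`).
  const fetchCall = /fetch\(\s*['"`](\/[^'"`]*)['"`]/g;
  let match: RegExpExecArray | null;
  while ((match = fetchCall.exec(frontendSource)) !== null) {
    endpoints.add(match[1]);
  }
  return [...endpoints];
}
```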
- Added a 'Retry' button to plan messages in chat, allowing one-click rejection and regeneration.
- Added a 'Cancel AI' button to the Building screen to abort stuck or unwanted build processes.
- Condensed the Canvas Isolation Architecture description to keep the AI focused.
- Added robust JSON extraction (brace matching) to handle JSON embedded in conversational AI output.
- Implemented a 90-second safety timeout for patch generation to prevent an infinite 'Generating' state.
- Throttled status updates to improve UI performance during streaming.
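The brace-finding extraction can be sketched as follows; this is an assumed implementation of the stated technique, not the project's actual code. It scans from the first `{` to its matching `}`, ignoring braces inside string literals, so the JSON survives even when the model wraps it in conversational text:

```typescript
// Return the first balanced {...} block in raw model output, or null.
function extractJson(raw: string): string | null {
  const start = raw.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  let inString = false;
  for (let i = start; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (ch === "\\") i++;            // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{") depth++;
    else if (ch === "}" && --depth === 0) {
      return raw.slice(start, i + 1);  // matching close brace found
    }
  }
  return null; // unbalanced braces: treat as unrecoverable
}
```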
- Added a getActiveModel() function to automationService for dynamic model selection.
- Replaced all hardcoded 'qwen-coder-plus' strings with getActiveModel() calls.
- Synced the active model to localStorage when it is changed in the orchestrator.
- This enables Ollama and other models to work in all automation tasks.
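A minimal sketch of this pattern, assuming a storage key name (the real key and API surface in automationService may differ); the in-memory fallback is only so the sketch also runs outside a browser:

```typescript
const MODEL_STORAGE_KEY = "activeModel";        // assumed key name
const DEFAULT_MODEL = "qwen-coder-plus";        // previous hardcoded default

const memoryStore = new Map<string, string>();  // non-browser fallback

// Read the user's current model choice, falling back to the old default.
function getActiveModel(): string {
  if (typeof localStorage !== "undefined") {
    return localStorage.getItem(MODEL_STORAGE_KEY) ?? DEFAULT_MODEL;
  }
  return memoryStore.get(MODEL_STORAGE_KEY) ?? DEFAULT_MODEL;
}

// Called by the orchestrator when the model changes, so every automation
// task picks up the new selection.
function setActiveModel(model: string): void {
  if (typeof localStorage !== "undefined") {
    localStorage.setItem(MODEL_STORAGE_KEY, model);
  } else {
    memoryStore.set(MODEL_STORAGE_KEY, model);
  }
}
```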
- Fixed the sidebar Ollama Cloud section to always open the modal on click.
- Improved stream handling by flushing the remaining buffer when the stream ends.
- Added a 120-second timeout for Ollama requests.
- Added more detailed logging to aid debugging.
- Fixed activeRequest cleanup on errors.
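The buffer-flush fix matters because Ollama streams newline-delimited JSON whose chunks can split mid-line; the final partial line must be flushed when the stream ends. A sketch under that assumption (the parser name and callback shape are illustrative):

```typescript
// Buffer NDJSON chunks, emitting each complete line as a parsed object,
// and flush any trailing partial line on end().
function createNdjsonParser(onMessage: (obj: unknown) => void) {
  let buffer = "";
  return {
    push(chunk: string) {
      buffer += chunk;
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? "";            // keep the trailing partial line
      for (const line of lines) {
        if (line.trim()) onMessage(JSON.parse(line));
      }
    },
    end() {
      if (buffer.trim()) onMessage(JSON.parse(buffer)); // the flush fix
      buffer = "";
    },
  };
}
```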
- Fixed the keytar ESM import to use the module's .default export.
- Added comprehensive error handling and logging for vault operations.
- Fixed the fallback to file-based storage when keytar is unavailable.
- Added debug logs to help troubleshoot key-storage issues.
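The ESM interop fix comes from how dynamic `import()` of a CommonJS module behaves: the module's API lands on the namespace's `.default` property. A sketch of the load-with-fallback pattern (the loader name is illustrative; the file-based vault itself is not shown):

```typescript
// Try to load keytar; return null so the caller can fall back to
// file-based storage when the native module is unavailable.
async function loadKeytar(): Promise<any> {
  // Module name held in a variable so this sketch compiles even when
  // the keytar package is not installed.
  const name = "keytar";
  try {
    const mod: any = await import(name);
    return mod.default ?? mod; // CJS under ESM exposes the API on .default
  } catch (err) {
    console.warn("keytar unavailable, falling back to file storage:", err);
    return null;
  }
}
```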
- Added an AI Model Manager to the sidebar for quick model switching.
- Integrated the Ollama Cloud API with official models from ollama.com.
- Added an AISettingsModal with a searchable model catalog.
- Models include GPT-OSS 120B, DeepSeek V3.2, Gemini 3 Pro, Qwen3 Coder, and more.
- Added a 'Get Key' button linking to ollama.com/settings/keys.
- Updated the README with Ollama Cloud documentation and free API key instructions.
- Fixed a ChatPanel export issue.
- Added a Brain icon for reasoning models.