- Updated LayoutComponents.tsx to correctly set 'requestKind' to 'plan' for backend requests.
- Excluded backend requests from 'isModificationMode' logic to prevent incorrect prompt routing.
- Modified Views.tsx to keep the user in GlobalMode.Build instead of switching to Chat when generating a backend.
- Added intelligent system prompt routing in LayoutComponents.tsx for [BACKEND_REQUEST] to ensure the AI knows how to generate a server.js file.
- Added a 'Retry' button to plan messages in chat, allowing one-click rejection and regeneration.
- Added a 'Cancel AI' button to the Building screen to abort stuck or unwanted build processes.
- Fixed the sidebar Ollama Cloud section to always open the modal on click.
- Improved stream handling by flushing any remaining buffer when the stream ends, so the final partial chunk is not dropped.
- Added a 120s timeout for Ollama requests.
- Added more detailed logging for debugging.
- Fixed 'activeRequest' cleanup so it also runs when a request errors.
- Added an AI Model Manager to the sidebar for quick model switching.
- Integrated the Ollama Cloud API with official models from ollama.com.
- Added an 'AISettingsModal' with a searchable model catalog.
- Models include GPT-OSS 120B, DeepSeek V3.2, Gemini 3 Pro, Qwen3 Coder, and others.
- Added a 'Get Key' button linking to ollama.com/settings/keys.
- Updated the README with Ollama Cloud documentation and free API key instructions.
- Fixed a 'ChatPanel' export issue.
- Added a Brain icon for reasoning models.
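
The backend-request routing described in the first few bullets can be sketched as below. All identifiers here ('RequestKind', 'buildRequest', the prompt constants) are illustrative, not the actual names in LayoutComponents.tsx; the point is that a '[BACKEND_REQUEST]' message is forced to the 'plan' kind, excluded from modification mode, and given a server-generation system prompt:

```typescript
// Hypothetical sketch of the prompt-routing logic; names are illustrative.
type RequestKind = "chat" | "plan" | "modify";

const BACKEND_SYSTEM_PROMPT =
  "You are generating a Node.js backend. Output a complete server.js file.";
const PLAN_SYSTEM_PROMPT = "You are producing a build plan for a web app.";
const CHAT_SYSTEM_PROMPT = "You are a helpful coding assistant.";

interface OutgoingRequest {
  kind: RequestKind;
  isModification: boolean;
  systemPrompt: string;
}

function buildRequest(
  userMessage: string,
  hasExistingProject: boolean,
): OutgoingRequest {
  // Backend requests are tagged with [BACKEND_REQUEST]: always 'plan',
  // never treated as a modification of the current project.
  const isBackend = userMessage.includes("[BACKEND_REQUEST]");
  const kind: RequestKind = isBackend
    ? "plan"
    : hasExistingProject
      ? "modify"
      : "chat";
  const isModification = !isBackend && hasExistingProject;
  const systemPrompt = isBackend
    ? BACKEND_SYSTEM_PROMPT
    : kind === "plan"
      ? PLAN_SYSTEM_PROMPT
      : CHAT_SYSTEM_PROMPT;
  return { kind, isModification, systemPrompt };
}
```
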
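
The stream-handling fixes (buffer flush on end, cleanup that runs even on errors) can be sketched as follows, assuming a newline-delimited JSON stream in the shape Ollama emits. 'readOllamaStream' is a hypothetical name, not the actual function:

```typescript
// Sketch of streaming with a final buffer flush; names are illustrative.
// The 120s timeout would be applied to the fetch that produced `body`,
// e.g. fetch(url, { signal: AbortSignal.timeout(120_000) }).
async function readOllamaStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      let newline: number;
      while ((newline = buffer.indexOf("\n")) >= 0) {
        const line = buffer.slice(0, newline).trim();
        buffer = buffer.slice(newline + 1);
        if (line) onChunk(JSON.parse(line).response ?? "");
      }
    }
    // Flush whatever is left when the stream ends, so the final
    // partial line is not silently dropped.
    const rest = buffer.trim();
    if (rest) onChunk(JSON.parse(rest).response ?? "");
  } finally {
    // Cleanup (releasing the reader, clearing the active request)
    // runs even when parsing or reading throws.
    reader.releaseLock();
  }
}
```

A 'Cancel AI' button would abort the same request by calling `abort()` on an `AbortController` whose signal was passed to the originating fetch.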
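
A minimal sketch of the searchable model catalog behind the 'AISettingsModal'. The catalog entries, the 'reasoning' flag (which would drive the Brain icon), and 'searchModels' are all illustrative assumptions, not the app's real data or identifiers:

```typescript
// Illustrative catalog entries; real IDs/flags may differ.
interface CloudModel {
  id: string;
  label: string;
  reasoning: boolean; // true => show the Brain icon in the sidebar
}

const CLOUD_MODELS: CloudModel[] = [
  { id: "gpt-oss:120b", label: "GPT-OSS 120B", reasoning: true },
  { id: "qwen3-coder", label: "Qwen3 Coder", reasoning: false },
];

// Case-insensitive search over id and label for the modal's search box.
function searchModels(models: CloudModel[], query: string): CloudModel[] {
  const q = query.toLowerCase();
  return models.filter(
    (m) => m.id.toLowerCase().includes(q) || m.label.toLowerCase().includes(q),
  );
}

// A request against Ollama Cloud would look roughly like this (endpoint
// and header shape assumed from Ollama's documented /api/chat interface,
// authenticated with a key from ollama.com/settings/keys):
//   fetch("https://ollama.com/api/chat", {
//     method: "POST",
//     headers: { Authorization: `Bearer ${apiKey}` },
//     body: JSON.stringify({ model: "qwen3-coder", messages, stream: true }),
//   });
```
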