- Added a 'Build Backend' button to the Preview toolbar in Views.tsx (sketch below)
- Defined BACKEND_GENERATOR_PROMPT in automationService.ts (sketch below)
- Implemented prompt logic that infers the required API endpoints from the frontend code
- Condensed the Canvas Isolation Architecture section of the prompt to keep the AI focused on the task
- Added robust JSON extraction (brace-finding) to handle conversational AI output (sketch below)
- Implemented a 90-second safety timeout for patch generation to prevent an infinite 'Generating' state (sketch below)
- Throttled status updates to improve UI performance during streaming (sketch below)
- Added a getActiveModel() function to automationService for dynamic model selection (sketch below)
- Replaced all hardcoded 'qwen-coder-plus' strings with getActiveModel() calls
- Added localStorage sync when the model is changed in the orchestrator
- Together, these changes let Ollama and other models work in all automation tasks
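
Sketches for the items above follow; any name not mentioned in the changelog is a placeholder. The 'Build Backend' button is a small toolbar addition. A rough sketch of its shape, where PreviewToolbar and onBuildBackend are illustrative names, not the actual identifiers in Views.tsx:

```tsx
import React from 'react';

// Placeholder names throughout; the real toolbar component and handler
// in Views.tsx are almost certainly structured differently.
function PreviewToolbar({ onBuildBackend }: { onBuildBackend: () => void }) {
  return (
    <div className="preview-toolbar">
      {/* ...existing preview controls... */}
      <button onClick={onBuildBackend}>Build Backend</button>
    </div>
  );
}
```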
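
BACKEND_GENERATOR_PROMPT could be structured along these lines. The wording and the `{{FRONTEND_CODE}}` placeholder convention are assumptions, not the actual prompt text:

```ts
// Illustrative wording only; the real prompt in automationService.ts differs.
export const BACKEND_GENERATOR_PROMPT = `
You are a backend generator. Read the frontend code below and infer every
API endpoint it calls: method, path, request body, and response shape.
Generate a backend that implements exactly those endpoints.
Respond with JSON only, no commentary.

Frontend code:
{{FRONTEND_CODE}}
`;
```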
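
The brace-finding JSON extraction can be a single scan that tracks nesting depth while skipping string literals, so prose around the JSON is ignored. A minimal sketch; the real function in automationService.ts may handle more edge cases:

```ts
// Find the first balanced {...} block in a conversational AI response
// and parse it; returns null if no valid JSON object is found.
function extractJson(raw: string): unknown {
  const start = raw.indexOf('{');
  if (start === -1) return null;

  let depth = 0;
  let inString = false;
  let escaped = false;

  for (let i = start; i < raw.length; i++) {
    const ch = raw[i];
    if (escaped) { escaped = false; continue; }   // skip escaped character
    if (ch === '\\') { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;                        // braces inside strings don't count
    if (ch === '{') depth++;
    if (ch === '}') {
      depth--;
      if (depth === 0) {
        try {
          return JSON.parse(raw.slice(start, i + 1));
        } catch {
          return null; // balanced braces, but not valid JSON
        }
      }
    }
  }
  return null; // unbalanced: the response was likely truncated
}
```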
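
The 90-second safety timeout presumably races the generation call against a timer. A sketch, where generatePatch() stands in for the real streaming call:

```ts
// generatePatch is a stand-in for the actual patch-generation call.
declare function generatePatch(prompt: string): Promise<string>;

const PATCH_TIMEOUT_MS = 90_000;

async function generatePatchWithTimeout(prompt: string): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error('Patch generation timed out after 90s')),
      PATCH_TIMEOUT_MS,
    );
  });
  try {
    // Whichever settles first wins; a rejection here lets the UI
    // leave the 'Generating' state instead of hanging forever.
    return await Promise.race([generatePatch(prompt), timeout]);
  } finally {
    clearTimeout(timer); // clear on both paths so the timer never fires late
  }
}
```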
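
Throttling the status updates can be as simple as a time gate around the setter. A sketch, with setStatus as a placeholder for the real UI state setter:

```ts
// setStatus stands in for whatever state setter the UI actually uses.
declare function setStatus(text: string): void;

// Emit at most one call per interval; intermediate updates are dropped.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  intervalMs: number,
): (...args: T) => void {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn(...args);
    }
  };
}

// At most four status updates per second during streaming.
const throttledSetStatus = throttle(setStatus, 250);
```

One caveat with this drop-style throttle: the final update of a stream can be swallowed, so in practice a trailing flush (or one unthrottled call when streaming completes) is worth adding.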
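
getActiveModel() and the localStorage sync fit together roughly like this. The 'activeModel' key name and the setActiveModel helper are assumptions; only the 'qwen-coder-plus' default comes from the changelog:

```ts
// Key name and setter are assumed; only the default model id is from above.
const MODEL_STORAGE_KEY = 'activeModel';
const DEFAULT_MODEL = 'qwen-coder-plus';

// Read the current model on every call so a change applies immediately
// to all automation tasks, including models served locally via Ollama.
export function getActiveModel(): string {
  return localStorage.getItem(MODEL_STORAGE_KEY) ?? DEFAULT_MODEL;
}

// Called by the orchestrator when the user selects a different model.
export function setActiveModel(model: string): void {
  localStorage.setItem(MODEL_STORAGE_KEY, model);
}
```

Call sites then swap the hardcoded string for the lookup, e.g. `model: getActiveModel()` instead of `model: 'qwen-coder-plus'`.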