# Run and deploy your AI Studio app
This contains everything you need to run your app locally.
View your app in AI Studio: https://ai.studio/apps/drive/12OdXUKxlvepe5h8CMj5H0ih_7lE9H239
## Run Locally
**Prerequisites:** Node.js
1. Install dependencies:
   `npm install`
2. Set the `GEMINI_API_KEY` in `.env.local` to your Gemini API key
3. Run the app:
   `npm run dev`
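The second step above expects a `.env.local` file in the project root. A minimal sketch of its contents, where the key value is a placeholder you replace with your own Gemini API key:

```
GEMINI_API_KEY=your-api-key-here
```

Keep this file out of version control, since it holds a secret credential.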