Backend:
- Created context-engine/client.ts - HTTP client for Context-Engine API
- Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
- Created context-engine/index.ts - Module exports
- Created server/routes/context-engine.ts - API endpoints for status/health/query
Integration:
- workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
- index.ts: Initialize ContextEngineService on server start (lazy mode)
- ollama-cloud.ts: Inject RAG context into chat requests when available
Frontend:
- model-selector.tsx: Added Context-Engine status indicator
- Green dot = Ready (RAG enabled)
- Blue pulsing dot = Indexing
- Red dot = Error
- Hidden when Context-Engine not running
All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
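The graceful-fallback pattern above can be sketched as follows. This is a minimal sketch, not the actual implementation: the ContextEngineClient interface, its method names, and the result shape are assumptions made for illustration.

```typescript
// Sketch of the non-blocking RAG query with graceful fallback.
// ContextEngineClient and RagResult are assumed shapes, not the real types.
interface RagResult {
  snippets: string[];
}

interface ContextEngineClient {
  isReady(): Promise<boolean>;
  query(text: string): Promise<RagResult>;
}

// Returns RAG context when the sidecar is up, and an empty result on
// any error, so chat requests never block on Context-Engine availability.
async function tryQueryContext(
  client: ContextEngineClient,
  prompt: string,
): Promise<RagResult> {
  try {
    if (!(await client.isReady())) return { snippets: [] };
    return await client.query(prompt);
  } catch {
    // Context-Engine unavailable or mid-index: degrade gracefully.
    return { snippets: [] };
  }
}
```

The caller then merges any returned snippets into the chat request and proceeds unchanged when the list is empty.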
Added an event-loop yield (setTimeout 0) after processing each batch of SSE
lines. setTimeout schedules a task on the main thread's event loop, so UI
updates and user interaction can run between streaming updates, preventing
the UI from becoming completely unresponsive during rapid streaming.
- Added a 60-second per-chunk timeout in parseStreamingResponse
- Added a 120-second timeout to makeRequest via AbortController
- These prevent the server from hanging indefinitely on a slow or unresponsive API
This should fix the UI freeze when sending messages to Ollama Cloud models.
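The two fixes can be sketched roughly as below, assuming a line-based SSE parser; the function names, batch size, and timeout constant are illustrative, not the project's actual code.

```typescript
// Assumed timeout; the commit uses 120 seconds for the whole request.
const REQUEST_TIMEOUT_MS = 120_000;

// setTimeout(0) queues a task on the event loop, so pending UI updates
// and input events get a chance to run before the next batch.
const yieldToEventLoop = (): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, 0));

// Process SSE lines in batches, yielding between batches so rapid
// streaming cannot starve the UI.
async function processSseLines(
  lines: string[],
  onLine: (line: string) => void,
  batchSize = 25,
): Promise<void> {
  for (let i = 0; i < lines.length; i += batchSize) {
    for (const line of lines.slice(i, i + batchSize)) onLine(line);
    await yieldToEventLoop();
  }
}

// Abort the request entirely if the API hangs past the deadline.
async function makeRequestWithTimeout(url: string): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), REQUEST_TIMEOUT_MS);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```

Clearing the timer in `finally` matters: without it, a fast response would leave a stray abort firing later.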
Features added:
- Custom Agent Creator dialog with AI generation support (up to 30k chars)
- Plus button next to agent selector to create new agents
- Zread MCP Server from Z.AI in marketplace (remote HTTP config)
- Extended MCP config types to support remote/http/sse servers
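The extended config types can be sketched as a discriminated union; the field names below are assumptions about the config shape, not the project's actual types.

```typescript
// Local server launched as a child process over stdio.
interface StdioMcpServer {
  type: "stdio";
  command: string;
  args?: string[];
}

// Remote server reached over HTTP or SSE (e.g. the Zread MCP Server).
interface RemoteMcpServer {
  type: "http" | "sse";
  url: string;
  headers?: Record<string, string>;
}

type McpServerConfig = StdioMcpServer | RemoteMcpServer;

// Narrowing on the `type` tag lets the client pick a transport safely.
function isRemote(cfg: McpServerConfig): cfg is RemoteMcpServer {
  return cfg.type === "http" || cfg.type === "sse";
}
```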
Bug fixes:
- Filter out SDK Z.AI/GLM providers so our custom routing, which preserves the full message history, is used instead
- This fixes the issue where changing models mid-chat lost conversation context
Added comprehensive AI model integrations:
Z.AI Integration:
- Client with Anthropic-compatible API (GLM Coding Plan)
- Routes for config, testing, and streaming chat
- Settings UI component with API key management
OpenCode Zen Integration:
- Free models client using 'public' API key
- Dynamic model fetching from models.dev
- Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
- No API key required for free tier!
UI Enhancements:
- Added Free Models tab (first position) in Advanced Settings
- Z.AI tab with GLM Coding Plan info
- OpenCode Zen settings with model cards and status
All integrations work standalone, with no dependency on opencode.exe.
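The dynamic model fetching can be sketched like this. The endpoint path and the response shape are assumptions made for illustration; only the literal 'public' API key comes from the notes above.

```typescript
interface FreeModel {
  id: string;
  name: string;
}

// Turn an id -> metadata map into a flat model list, falling back to
// the id when no display name is present.
function parseModelIndex(data: Record<string, { name?: string }>): FreeModel[] {
  return Object.entries(data).map(([id, m]) => ({ id, name: m.name ?? id }));
}

// Fetch the model index from models.dev; the URL is an assumed example.
async function fetchFreeModels(
  endpoint = "https://models.dev/api.json",
): Promise<FreeModel[]> {
  const res = await fetch(endpoint, {
    // Free tier authenticates with the literal key "public".
    headers: { Authorization: "Bearer public" },
  });
  if (!res.ok) throw new Error(`models.dev returned ${res.status}`);
  return parseModelIndex(
    (await res.json()) as Record<string, { name?: string }>,
  );
}
```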
Combined autonomous and auto-approval modes into APEX PRO:
- Single toggle enables both functionalities
- Orange color scheme with gentle blinking animation
- Pulsing shadow effect when enabled
- Small glowing indicator dot
- Tooltip explains combined functionality
- SHIELD button remains separate for auto-approval only
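The single-toggle behavior can be sketched as a state transition, assuming hypothetical `autonomous` and `autoApprove` flags (the real component's state names may differ):

```typescript
interface AgentModeState {
  autonomous: boolean;  // APEX: autonomous mode
  autoApprove: boolean; // SHIELD: auto-approval mode
}

// APEX PRO flips both flags together: off unless both are already on,
// so a mixed state (e.g. SHIELD enabled alone) is synced on first.
function toggleApexPro(state: AgentModeState): AgentModeState {
  const enabled = !(state.autonomous && state.autoApprove);
  return { autonomous: enabled, autoApprove: enabled };
}
```

The separate SHIELD button would still flip `autoApprove` on its own.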
Added all missing MULTIX enhancements matching the original screenshot:
1. STREAMING indicator:
- Animated purple badge with sparkles icon
- Shows live token count during streaming
- Pulsing animation effect
2. Status badges:
- PENDING/RUNNING/DONE badges for tasks
- Color-coded based on status
3. APEX/SHIELD renamed:
- 'Auto' -> 'APEX' with tooltip
- 'Shield' -> 'SHIELD' with tooltip
4. THINKING indicator:
- Bouncing dots animation (3 dots)
- Shows THINKING or SENDING status
5. STOP button:
- Red stop button appears during agent work
- Calls cancel endpoint to interrupt
6. Detailed token stats bar:
- INPUT/OUTPUT tokens
- REASONING tokens (amber)
- CACHE READ (emerald)
- CACHE WRITE (cyan)
- COST (violet)
- MODEL (indigo)
7. Message navigation sidebar:
- YOU/ASST labels for each message
- Click to scroll to message
- Appears on right side when viewing task
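The token stats bar (item 6) can be sketched as a simple formatter; the field names and separator are assumptions, not the actual component:

```typescript
interface TokenStats {
  input: number;
  output: number;
  reasoning: number;
  cacheRead: number;
  cacheWrite: number;
  costUsd: number;
}

// Render one stats-bar string; the UI component would color each
// segment (amber, emerald, cyan, violet) as listed above.
function formatStatsBar(s: TokenStats): string {
  return [
    `INPUT ${s.input}`,
    `OUTPUT ${s.output}`,
    `REASONING ${s.reasoning}`,
    `CACHE READ ${s.cacheRead}`,
    `CACHE WRITE ${s.cacheWrite}`,
    `COST $${s.costUsd.toFixed(4)}`,
  ].join(" · ");
}
```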
- Copy complete source code packages from original CodeNomad project
- Add root package.json with npm workspace configuration
- Include electron-app, server, ui, tauri-app, and opencode-config packages
- Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
- Fix Launch-Unix.sh to work with correct npm scripts
- Launchers now correctly call npm run dev:electron which launches Electron app