Fixed 'Cannot access filteredMessageIds before initialization' error by
reordering the declarations: lastAssistantIndex reads filteredMessageIds,
so it must be declared after it; see the sketch below.
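A minimal sketch of the corrected ordering, assuming SolidJS; messageIds, isHiddenMessage, and getMessage are stand-ins for the component's real helpers:

    import { createMemo, type Accessor } from "solid-js";

    // Assumed helpers from the surrounding component:
    declare const messageIds: Accessor<string[]>;
    declare const isHiddenMessage: (id: string) => boolean;
    declare const getMessage: (id: string) => { role: string } | undefined;

    // filteredMessageIds must be declared first: the memo below reads it,
    // and accessing a `const` before its declaration throws the reported
    // "Cannot access ... before initialization" (temporal dead zone) error.
    const filteredMessageIds = createMemo(() =>
      messageIds().filter((id) => !isHiddenMessage(id)),
    );

    const lastAssistantIndex = createMemo(() => {
      const ids = filteredMessageIds();
      for (let i = ids.length - 1; i >= 0; i--) {
        if (getMessage(ids[i])?.role === "assistant") return i;
      }
      return -1;
    });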
Critical performance fixes for MULTIX chat mode:
1. isAgentThinking - Simplified to only check last message
- Previously iterated ALL messages with .some() on every store update
- Each getMessage() call created a reactive subscription
- Now only checks the last message (O(1) instead of O(n)); sketched below
2. lastAssistantIndex - Memoized with createMemo
- Changed from function to createMemo for proper caching
- Added early exit optimization for common case
3. Auto-scroll effect - Removed isAgentThinking dependency
- The thinking-based scroll was firing on every reactive update
- Now only triggers on message count changes
- Streaming scroll is handled by the interval-based effect
These combined fixes prevent the cascading reactive loop that
was freezing the UI during message send.
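A sketch of fix 1's O(1) check; the message shape (role/status fields) is an assumption:

    import { createMemo, type Accessor } from "solid-js";

    declare const filteredMessageIds: Accessor<string[]>; // memo from fix 2
    declare const getMessage: (
      id: string,
    ) => { role: string; status?: string } | undefined;   // assumed accessor

    // Read only the last message instead of iterating all of them, so a
    // store update no longer subscribes this memo to every message.
    const isAgentThinking = createMemo(() => {
      const ids = filteredMessageIds();
      if (ids.length === 0) return false; // early exit for an empty chat
      const last = getMessage(ids[ids.length - 1]);
      // "thinking" as a status value is an assumption about the store.
      return last?.role === "assistant" && last.status === "thinking";
    });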
Performance optimizations to prevent UI freeze during streaming:
1. message-block-list.tsx:
- Removed createEffect that logged on every messageIds change
- Removed the now-unused logger import (its per-update logging was flooding IPC)
2. multi-task-chat.tsx:
- Changed filteredMessageIds from function to createMemo for proper memoization
- Throttled auto-scroll effect to only trigger when message COUNT changes (sketched below)
- Previously it fired on every reactive store update during streaming
These changes prevent excessive re-renders and IPC calls during message streaming.
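A sketch of the count-based throttle in multi-task-chat.tsx; the scroll container ref is an assumed name:

    import { createEffect, createMemo, on, type Accessor } from "solid-js";

    declare const filteredMessageIds: Accessor<string[]>;  // the new createMemo
    declare let scrollContainer: HTMLElement | undefined;  // assumed element ref

    // Depend on the COUNT, not the array: streaming mutates message contents
    // without changing the length, so this effect stays quiet mid-stream.
    const messageCount = createMemo(() => filteredMessageIds().length);

    createEffect(on(messageCount, () => {
      const el = scrollContainer;
      if (el) el.scrollTo({ top: el.scrollHeight, behavior: "smooth" });
    }));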
Backend:
- Created context-engine/client.ts - HTTP client for Context-Engine API
- Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
- Created context-engine/index.ts - Module exports
- Created server/routes/context-engine.ts - API endpoints for status/health/query
Integration:
- workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
- index.ts: Initialize ContextEngineService on server start (lazy mode)
- ollama-cloud.ts: Inject RAG context into chat requests when available
Frontend:
- model-selector.tsx: Added Context-Engine status indicator
- Green dot = Ready (RAG enabled)
- Blue pulsing dot = Indexing
- Red dot = Error
- Hidden when Context-Engine not running
All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
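A sketch of that non-blocking pattern; the contextEngine client methods and log helper are assumed names, not the project's actual API:

    declare const contextEngine: {
      indexWorkspace(path: string): Promise<void>; // assumed client method
      query(q: string): Promise<string | null>;    // assumed client method
    };
    declare const log: { warn(...args: unknown[]): void };

    // workspaces/manager.ts: fire-and-forget indexing once a workspace is ready.
    function onWorkspaceReady(workspacePath: string): void {
      contextEngine
        .indexWorkspace(workspacePath)
        .catch((err) => log.warn("Context-Engine indexing skipped:", err));
    }

    // ollama-cloud.ts: prepend RAG context when available, else fall back.
    async function withRagContext(prompt: string): Promise<string> {
      try {
        const ctx = await contextEngine.query(prompt);
        return ctx ? `${ctx}\n\n${prompt}` : prompt;
      } catch {
        return prompt; // Context-Engine unavailable: degrade gracefully
      }
    }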
Features added:
- Custom Agent Creator dialog with AI generation support (up to 30k chars)
- Plus button next to agent selector to create new agents
- Zread MCP Server from Z.AI in marketplace (remote HTTP config)
- Extended MCP config types to support remote/http/sse servers
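A sketch of the extended config union; the field names are illustrative, not the project's actual schema:

    // Local stdio servers plus the new remote transports.
    type McpServerConfig =
      | {
          type: "stdio";
          command: string;
          args?: string[];
          env?: Record<string, string>;
        }
      | {
          type: "http" | "sse"; // remote servers such as Zread
          url: string;
          headers?: Record<string, string>;
        };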
Bug fixes:
- Filter out the SDK's Z.AI/GLM providers so requests go through our custom routing, which sends the full message history
- This fixes the issue where changing models mid-chat lost conversation context
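A sketch of the provider filter; the provider shape and id matching are assumptions:

    declare const sdkProviders: { id: string; name: string }[]; // assumed shape

    // Drop the SDK's built-in Z.AI/GLM providers so those models always go
    // through our custom route, which resends the full message history.
    const visibleProviders = sdkProviders.filter(
      (p) => !/z\.?ai|glm/i.test(p.id),
    );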
Added comprehensive AI model integrations:
Z.AI Integration:
- Client with Anthropic-compatible API (GLM Coding Plan)
- Routes for config, testing, and streaming chat
- Settings UI component with API key management
OpenCode Zen Integration:
- Free models client using 'public' API key
- Dynamic model fetching from models.dev (sketched below)
- Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
- No API key required for free tier!
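A sketch of the dynamic model fetch; the models.dev endpoint, response shape, and "opencode" provider key are all assumptions:

    interface ZenModel {
      id: string;
      name: string;
    }

    async function fetchFreeModels(): Promise<ZenModel[]> {
      const res = await fetch("https://models.dev/api.json");
      if (!res.ok) throw new Error(`models.dev returned HTTP ${res.status}`);
      const providers = (await res.json()) as Record<
        string,
        { models?: Record<string, ZenModel> }
      >;
      return Object.values(providers["opencode"]?.models ?? {});
    }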
UI Enhancements:
- Added Free Models tab (first position) in Advanced Settings
- Z.AI tab with GLM Coding Plan info
- OpenCode Zen settings with model cards and status
All integrations work standalone without opencode.exe dependency.
Combined autonomous and auto-approval modes into APEX PRO:
- Single toggle enables both functionalities (sketched after this list)
- Orange color scheme with gentle blinking animation
- Pulsing shadow effect when enabled
- Small glowing indicator dot
- Tooltip explains combined functionality
- SHIELD button remains separate for auto-approval only
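A sketch of the single-toggle wiring, with illustrative setter names:

    import { createEffect, createSignal } from "solid-js";

    declare const setAutonomousMode: (on: boolean) => void; // assumed setter
    declare const setAutoApprove: (on: boolean) => void;    // assumed setter

    const [apexPro, setApexPro] = createSignal(false);

    // One switch drives both modes. Auto-approval is only forced on, never
    // off, so the standalone SHIELD toggle keeps working independently.
    createEffect(() => {
      const on = apexPro();
      setAutonomousMode(on);
      if (on) setAutoApprove(true);
    });

    // e.g. in JSX: <button onClick={() => setApexPro(!apexPro())}>APEX PRO</button>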
Added all missing MULTIX enhancements matching the original screenshot:
1. STREAMING indicator:
- Animated purple badge with sparkles icon
- Shows live token count during streaming
- Pulsing animation effect
2. Status badges:
- PENDING/RUNNING/DONE badges for tasks
- Color-coded based on status
3. APEX/SHIELD renamed:
- 'Auto' -> 'APEX' with tooltip
- 'Shield' -> 'SHIELD' with tooltip
4. THINKING indicator:
- Bouncing dots animation (3 dots)
- Shows THINKING or SENDING status
5. STOP button:
- Red stop button appears during agent work
- Calls cancel endpoint to interrupt
6. Detailed token stats bar:
- INPUT/OUTPUT tokens
- REASONING tokens (amber)
- CACHE READ (emerald)
- CACHE WRITE (cyan)
- COST (violet)
- MODEL (indigo)
7. Message navigation sidebar:
- YOU/ASST labels for each message
- Click to scroll to message
- Appears on right side when viewing task
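A sketch of item 7's click-to-scroll behavior; the data-message-id attribute is an assumption about the message markup:

    // Scroll the clicked sidebar entry's message into view.
    function scrollToMessage(id: string): void {
      document
        .querySelector(`[data-message-id="${id}"]`)
        ?.scrollIntoView({ behavior: "smooth", block: "start" });
    }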
- Copy complete source code packages from original CodeNomad project
- Add root package.json with npm workspace configuration
- Include electron-app, server, ui, tauri-app, and opencode-config packages
- Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
- Fix Launch-Unix.sh to work with correct npm scripts
- Launchers now correctly call npm run dev:electron, which launches the Electron app