- Added Antigravity AI provider with Google OAuth authentication
- New integration client (antigravity.ts) with automatic endpoint fallback (sketched after this list)
- API routes for /api/antigravity/* (models, auth-status, test, chat)
- AntigravitySettings.tsx for Advanced Settings panel
- Updated session-api.ts and session-actions.ts for provider routing
- Updated opencode.jsonc with Antigravity plugin and 11 models:
- Gemini 3 Pro Low/High, Gemini 3 Flash
- Claude Sonnet 4.5 (+ thinking variants)
- Claude Opus 4.5 (+ thinking variants)
- GPT-OSS 120B Medium
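A minimal sketch of the endpoint fallback in antigravity.ts; the URLs and helper name here are assumptions, not the real values:

```typescript
// Hypothetical endpoints; the real list lives in antigravity.ts.
const ENDPOINTS = [
  "https://antigravity.example.com/v1",
  "https://antigravity-fallback.example.com/v1",
];

async function requestWithFallback(path: string, init?: RequestInit): Promise<Response> {
  let lastError: unknown;
  for (const base of ENDPOINTS) {
    try {
      const res = await fetch(`${base}${path}`, init);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status} from ${base}`);
    } catch (err) {
      lastError = err; // network failure: fall through to the next endpoint
    }
  }
  throw lastError;
}
```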
- Fixed native mode startup error (was trying to launch __nomadarch_native__ as binary)
- Native mode workspaces now skip binary launch and are immediately ready
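A sketch of the fix, assuming a hypothetical sentinel constant and workspace shape:

```typescript
const NATIVE_SENTINEL = "__nomadarch_native__";

async function launchWorkspace(ws: { binaryPath: string; markReady(): void }) {
  // Native mode has no sidecar binary: skip the spawn and report ready.
  if (ws.binaryPath === NATIVE_SENTINEL) {
    ws.markReady();
    return;
  }
  await spawnBinary(ws.binaryPath);
}

async function spawnBinary(path: string): Promise<void> {
  // ...spawn the workspace process at `path` here...
}
```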
Fixed 'Cannot access filteredMessageIds before initialization' error by
reordering the declarations: lastAssistantIndex depends on
filteredMessageIds, so filteredMessageIds must be declared first.
Critical performance fixes for MULTIX chat mode:
1. isAgentThinking - Simplified to only check last message
- Previously iterated ALL messages with .some() on every store update
- Each getMessage() call created a reactive subscription
- Now only checks the last message (O(1) instead of O(n))
2. lastAssistantIndex - Memoized with createMemo
- Changed from function to createMemo for proper caching
- Added early exit optimization for common case
3. Auto-scroll effect - Removed isAgentThinking dependency
- The thinking-based scroll was firing on every reactive update
- Now only triggers on message count changes
- Streaming scroll is handled by the interval-based effect
These combined fixes prevent the cascading reactive loop that
was freezing the UI during message send.
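A sketch of fixes 1 and 2 in SolidJS terms; the store accessors are assumptions standing in for the real ones:

```typescript
import { createMemo } from "solid-js";

// Hypothetical accessors standing in for the real message store.
declare const filteredMessageIds: () => string[];
declare function getMessage(
  id: string
): { role: "user" | "assistant"; streaming?: boolean } | undefined;

// O(1): read only the last message, creating a single reactive
// subscription instead of one per message.
const isAgentThinking = () => {
  const ids = filteredMessageIds();
  const last = ids.length > 0 ? getMessage(ids[ids.length - 1]) : undefined;
  return last?.role === "assistant" && last.streaming === true;
};

// Memoized: recomputed only when its dependencies change; scanning from
// the end exits immediately in the common case where the last message
// is the assistant's.
const lastAssistantIndex = createMemo(() => {
  const ids = filteredMessageIds();
  for (let i = ids.length - 1; i >= 0; i--) {
    if (getMessage(ids[i])?.role === "assistant") return i;
  }
  return -1;
});
```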
Performance optimizations to prevent UI freeze during streaming:
1. message-block-list.tsx:
- Removed createEffect that logged on every messageIds change
- Removed unused logger import (was causing IPC overload)
2. multi-task-chat.tsx:
- Changed filteredMessageIds from function to createMemo for proper memoization
- Throttled auto-scroll effect to only trigger when message COUNT changes
- Previously it fired on every reactive store update during streaming
These changes prevent excessive re-renders and IPC calls during message streaming.
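A sketch of both multi-task-chat.tsx changes; the filter predicate and scroll helper are assumptions:

```typescript
import { createEffect, createMemo, on } from "solid-js";

declare const allMessageIds: () => string[];
declare function scrollToBottom(): void;

// createMemo caches the result; a plain function would re-filter and
// re-subscribe on every read.
const filteredMessageIds = createMemo(() =>
  allMessageIds().filter((id) => !id.startsWith("hidden:")) // assumed predicate
);

// Keyed to the message COUNT, so it no longer fires on every store
// update during streaming.
createEffect(
  on(
    () => filteredMessageIds().length,
    () => scrollToBottom(),
    { defer: true }
  )
);
```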
Backend:
- Created context-engine/client.ts - HTTP client for Context-Engine API
- Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
- Created context-engine/index.ts - Module exports
- Created server/routes/context-engine.ts - API endpoints for status/health/query
Integration:
- workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
- index.ts: Initialize ContextEngineService on server start (lazy mode)
- ollama-cloud.ts: Inject RAG context into chat requests when available
Frontend:
- model-selector.tsx: Added Context-Engine status indicator
- Green dot = Ready (RAG enabled)
- Blue pulsing dot = Indexing
- Red dot = Error
- Hidden when Context-Engine not running
All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
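A sketch of the graceful-fallback query path; the port, route, and response shape are assumptions:

```typescript
// Hypothetical base URL for the Context-Engine sidecar.
const CONTEXT_ENGINE_URL = "http://127.0.0.1:8555";

interface ContextHit {
  file: string;
  snippet: string;
  score: number;
}

// Returns [] on any failure so chat requests proceed without RAG context.
async function queryContext(query: string): Promise<ContextHit[]> {
  try {
    const res = await fetch(`${CONTEXT_ENGINE_URL}/query`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    });
    if (!res.ok) return [];
    return (await res.json()) as ContextHit[];
  } catch {
    return []; // Context-Engine down: degrade gracefully, never block chat
  }
}
```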
Added an event-loop yield (setTimeout 0) after processing each batch of SSE
lines. This lets the main thread handle UI updates and user interaction
between streaming updates, preventing the UI from becoming completely
unresponsive during rapid streaming.
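A sketch of the yield inside the SSE read loop; the batch iterable and handler are assumptions:

```typescript
// After handling each batch of SSE lines, yield to the event loop so the
// renderer can paint and process input before the next batch.
async function processBatches(
  batches: AsyncIterable<string[]>,
  handleLine: (line: string) => void
) {
  for await (const lines of batches) {
    for (const line of lines) handleLine(line);
    await new Promise<void>((resolve) => setTimeout(resolve, 0));
  }
}
```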
- Added 60 second timeout per chunk in parseStreamingResponse
- Added 120 second timeout to makeRequest with AbortController
- This prevents the server from hanging indefinitely on slow/unresponsive API
This should fix the UI freeze when sending messages to Ollama Cloud models.
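A sketch of the makeRequest timeout, assuming a fetch-based helper; the 60-second per-chunk timeout in parseStreamingResponse follows the same pattern:

```typescript
const REQUEST_TIMEOUT_MS = 120_000;

// Abort the request if no response arrives within the timeout, so the
// server never hangs indefinitely on a slow or unresponsive upstream API.
async function makeRequest(url: string, init: RequestInit = {}): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), REQUEST_TIMEOUT_MS);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear so the timer cannot fire later
  }
}
```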
Features added:
- Custom Agent Creator dialog with AI generation support (up to 30k chars)
- Plus button next to agent selector to create new agents
- Zread MCP Server from Z.AI in marketplace (remote HTTP config)
- Extended MCP config types to support remote/http/sse servers
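A sketch of the extended config union; the exact field names are assumptions:

```typescript
type LocalMcpServer = {
  type: "local";
  command: string;
  args?: string[];
  env?: Record<string, string>;
};

// New: remote servers reached over HTTP or SSE (e.g. the Zread server).
type RemoteMcpServer = {
  type: "http" | "sse";
  url: string;
  headers?: Record<string, string>;
};

type McpServerConfig = LocalMcpServer | RemoteMcpServer;
```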
Bug fixes:
- Filter out SDK Z.AI/GLM providers so our custom routing, which preserves the full message history, is always used
- This fixes the issue where changing models mid-chat lost conversation context
Added comprehensive AI model integrations:
Z.AI Integration:
- Client with Anthropic-compatible API (GLM Coding Plan)
- Routes for config, testing, and streaming chat
- Settings UI component with API key management
OpenCode Zen Integration:
- Free models client using 'public' API key
- Dynamic model fetching from models.dev
- Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
- No API key required for free tier!
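A sketch of the dynamic model fetch; the catalog URL, provider key, and response shape are assumptions:

```typescript
const MODELS_DEV_URL = "https://models.dev/api.json"; // assumed endpoint

interface ZenModel {
  id: string;
  name: string;
}

// The free tier authenticates with the literal API key "public".
async function fetchFreeModels(): Promise<ZenModel[]> {
  const res = await fetch(MODELS_DEV_URL);
  if (!res.ok) throw new Error(`models.dev returned HTTP ${res.status}`);
  // Assumed catalog shape: provider id -> { models: { id -> { name } } }.
  const catalog = (await res.json()) as Record<
    string,
    { models?: Record<string, { name: string }> }
  >;
  const zen = catalog["opencode"]?.models ?? {};
  return Object.entries(zen).map(([id, m]) => ({ id, name: m.name }));
}
```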
UI Enhancements:
- Added Free Models tab (first position) in Advanced Settings
- Z.AI tab with GLM Coding Plan info
- OpenCode Zen settings with model cards and status
All integrations work standalone without opencode.exe dependency.
Combined autonomous and auto-approval modes into APEX PRO:
- Single toggle enables both functionalities
- Orange color scheme with gentle blinking animation
- Pulsing shadow effect when enabled
- Small glowing indicator dot
- Tooltip explains combined functionality
- SHIELD button remains separate for auto-approval only
Added all missing MULTIX enhancements matching the original screenshot:
1. STREAMING indicator:
- Animated purple badge with sparkles icon
- Shows live token count during streaming
- Pulsing animation effect
2. Status badges:
- PENDING/RUNNING/DONE badges for tasks
- Color-coded based on status
3. APEX/SHIELD renamed:
- 'Auto' -> 'APEX' with tooltip
- 'Shield' -> 'SHIELD' with tooltip
4. THINKING indicator:
- Bouncing dots animation (3 dots)
- Shows THINKING or SENDING status
5. STOP button:
- Red stop button appears during agent work
- Calls cancel endpoint to interrupt
6. Detailed token stats bar:
- INPUT/OUTPUT tokens
- REASONING tokens (amber)
- CACHE READ (emerald)
- CACHE WRITE (cyan)
- COST (violet)
- MODEL (indigo)
7. Message navigation sidebar:
- YOU/ASST labels for each message
- Click to scroll to message
- Appears on right side when viewing task
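A sketch of the click-to-scroll behavior for item 7, assuming message blocks carry a data-message-id attribute:

```typescript
function scrollToMessage(messageId: string) {
  const el = document.querySelector(`[data-message-id="${messageId}"]`);
  el?.scrollIntoView({ behavior: "smooth", block: "start" });
}
```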
- Copy complete source code packages from original CodeNomad project
- Add root package.json with npm workspace configuration
- Include electron-app, server, ui, tauri-app, and opencode-config packages
- Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
- Fix Launch-Unix.sh to work with correct npm scripts
- Launchers now correctly call npm run dev:electron which launches Electron app
- Restore Install-Windows.bat with npm primary + ZIP fallback for OpenCode
- Restore Install-Linux.sh with npm primary + ZIP fallback for OpenCode
- Restore Install-Mac.sh with npm primary + ZIP fallback for OpenCode
- Add Launch-Windows.bat launcher with dependency checking and port detection
- Add Launch-Unix.sh launcher for Linux/macOS
- Add Launch-Dev-Windows.bat for development mode
- All scripts use actual GitHub releases URLs for OpenCode
- Enhanced with comprehensive error handling and user guidance
- Add comprehensive SEO meta tags (Open Graph, Twitter Card, Schema.org JSON-LD)
- Add GitHub badges (stars, forks, license, release) with CTA
- Add dedicated 'Supported AI Models & Providers' section with:
- GLM 4.7 spotlight with benchmarks (SWE-bench +73.8%, #1 WebDev)
- Z.AI API integration with 10% discount link (R0K78RJKNW)
- Complete model listings for Z.AI, Anthropic, OpenAI, Google, Qwen, Ollama
- Update installers with npm primary method and ZIP fallback for OpenCode CLI
- Add backup files for all installers
- Update repository clone URL to new GitHub location
- Update all URLs and references to roman-ryzenadvanced/NomadArch-v1.0