Changes:
1. Exported getStoredAntigravityToken and isAntigravityTokenValid from session-api.ts
2. Imported token helpers into session-actions.ts
3. Added token validation and user notifications to streamAntigravityChat
4. Fixed TypeScript implicit any error in fetchAntigravityProvider
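A minimal sketch of the new validation step, assuming the helper names exported in items 1-2 and a hypothetical notify callback for the user notifications:

```ts
import { getStoredAntigravityToken, isAntigravityTokenValid } from "./session-api";

// Hypothetical shape; the real streamAntigravityChat signature may differ.
async function streamAntigravityChat(prompt: string, notify: (msg: string) => void) {
  const token = getStoredAntigravityToken();
  if (!token) {
    notify("Antigravity is not authenticated. Please sign in first.");
    return;
  }
  if (!isAntigravityTokenValid(token)) {
    notify("Your Antigravity session has expired. Please re-authenticate.");
    return;
  }
  // ...proceed with the streaming request using `token`...
}
```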
Gemini AI · 2025-12-28 04:01:58 +04:00
Changes:
1. Added cachedProviderId signal in MultiX v2 component
2. Updated syncFromStore to also sync provider ID from task session
3. Immediately update cached model and provider IDs on model change
4. Pass correct providerId to LiteModelSelector component
This fixes the issue where selecting Qwen models caused the chat input to stop
responding because the provider ID was not being tracked correctly.
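In SolidJS terms, the caching might look roughly like this sketch (names other than cachedProviderId and syncFromStore are assumptions):

```ts
import { createSignal } from "solid-js";

const [cachedModelId, setCachedModelId] = createSignal<string | null>(null);
const [cachedProviderId, setCachedProviderId] = createSignal<string | null>(null);

// Pull both IDs from the task session so neither goes stale.
function syncFromStore(session: { modelId?: string; providerId?: string }) {
  if (session.modelId) setCachedModelId(session.modelId);
  if (session.providerId) setCachedProviderId(session.providerId);
}

function onModelChange(modelId: string, providerId: string) {
  // Update both immediately so the chat input never sees a stale provider.
  setCachedModelId(modelId);
  setCachedProviderId(providerId);
}
```

The JSX would then pass cachedProviderId() as the providerId prop to LiteModelSelector.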
Gemini AI · 2025-12-28 03:40:56 +04:00
Changes:
1. Enhanced removeDuplicateProviders() to filter out SDK providers that duplicate
entries in extras (qwen-oauth, zai, ollama-cloud, antigravity)
2. Added logic to remove any Qwen-related SDK providers when qwen-oauth is authenticated
3. Fixed missing setActiveParentSession import in instance-shell2.tsx
These changes ensure:
- No duplicate models appear in the model selector
- Qwen OAuth models are not duplicated by any SDK Qwen providers
- TypeScript compilation passes successfully
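A sketch of what the enhanced filter might do, under assumed provider shapes (the real extras list and ID-matching rules live in the project):

```ts
const EXTRA_IDS = new Set(["qwen-oauth", "zai", "ollama-cloud", "antigravity"]);

function removeDuplicateProviders(
  sdkProviders: { id: string }[],
  qwenOauthAuthenticated: boolean,
): { id: string }[] {
  return sdkProviders.filter((p) => {
    // Drop SDK providers that already exist in extras.
    if (EXTRA_IDS.has(p.id)) return false;
    // Drop any Qwen-related SDK provider once qwen-oauth is authenticated.
    if (qwenOauthAuthenticated && /qwen/i.test(p.id)) return false;
    return true;
  });
}
```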
Gemini AI · 2025-12-28 03:27:31 +04:00
Windows:
- Multiple Node.js installation methods (winget, chocolatey, direct MSI)
- Clear restart instructions when Node.js is newly installed
- Fallback to user profile directory if current dir not writable
- Comprehensive health checks and error reporting
Linux:
- Support for apt, dnf, yum, pacman, zypper, apk package managers
- NodeSource repository for newer Node.js versions
- nvm fallback installation method
- Graceful error handling with detailed troubleshooting
macOS:
- Xcode Command Line Tools detection and installation
- Homebrew auto-installation
- Apple Silicon (arm64) support with correct PATH setup
- Multiple Node.js fallbacks (brew, nvm, official pkg)
All platforms:
- Binary-Free Mode as default (no OpenCode binary required)
- Beautiful terminal output with progress indicators
- Detailed logging to install.log
- Post-install health checks
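Assuming the installer runs under Node, the Linux package-manager probe could look roughly like this sketch (the detection order and helper name are assumptions, not the script's actual code):

```ts
import { execSync } from "node:child_process";

// Probe order is a guess; the real installer may prefer a different sequence.
const MANAGERS = ["apt", "dnf", "yum", "pacman", "zypper", "apk"] as const;

function detectPackageManager(): string | null {
  for (const pm of MANAGERS) {
    try {
      // `command -v` exits non-zero when the binary is missing.
      execSync(`command -v ${pm}`, { stdio: "ignore", shell: "/bin/sh" });
      return pm;
    } catch {
      // Not present; try the next one.
    }
  }
  return null; // fall back to nvm, as described above
}
```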
Gemini AI · 2025-12-28 00:43:08 +04:00
1. Implemented auto-selection of tasks in MultiXV2 to prevent empty initial state.
2. Added force-loading logic for task session messages with debouncing.
3. Updated session-actions to return full assistant text and immediately persist native messages.
4. Fixed caching logic in instance-shell2 to retain active task sessions in memory.
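A sketch of the debounced force-load from item 2; loadTaskSessionMessages and the 250 ms window are assumptions:

```ts
// Stand-in for the real loader in the project.
declare function loadTaskSessionMessages(taskSessionId: string): Promise<void>;

let forceLoadTimer: ReturnType<typeof setTimeout> | undefined;

// Collapse bursts of triggers into a single load per debounce window.
function scheduleForceLoad(taskSessionId: string, delayMs = 250) {
  clearTimeout(forceLoadTimer);
  forceLoadTimer = setTimeout(() => {
    void loadTaskSessionMessages(taskSessionId);
  }, delayMs);
}
```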
Gemini AI · 2025-12-27 20:36:43 +04:00
- Added Antigravity AI provider with Google OAuth authentication
- New integration client (antigravity.ts) with automatic endpoint fallback
- API routes for /api/antigravity/* (models, auth-status, test, chat)
- AntigravitySettings.tsx for Advanced Settings panel
- Updated session-api.ts and session-actions.ts for provider routing
- Updated opencode.jsonc with Antigravity plugin and 11 models:
- Gemini 3 Pro Low/High, Gemini 3 Flash
- Claude Sonnet 4.5 (+ thinking variants)
- Claude Opus 4.5 (+ thinking variants)
- GPT-OSS 120B Medium
- Fixed native mode startup error (was trying to launch __nomadarch_native__ as a binary)
- Native mode workspaces now skip binary launch and are immediately ready
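The automatic endpoint fallback in antigravity.ts presumably cycles through a list of base URLs; a sketch with placeholder endpoints:

```ts
// Placeholder URLs; the real endpoint list is in antigravity.ts.
const ENDPOINTS = [
  "https://primary.example.com/v1",
  "https://fallback.example.com/v1",
];

async function requestWithFallback(path: string, init: RequestInit): Promise<Response> {
  let lastError: unknown;
  for (const base of ENDPOINTS) {
    try {
      const res = await fetch(`${base}${path}`, init);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status} from ${base}`);
    } catch (err) {
      lastError = err; // network failure: try the next endpoint
    }
  }
  throw lastError;
}
```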
Gemini AI · 2025-12-27 04:01:38 +04:00
Fixed 'Cannot access filteredMessageIds before initialization' error by
reordering the declarations. Since lastAssistantIndex depends on
filteredMessageIds, it must be defined after it.
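The fix in miniature (the helpers are stand-ins for the component's real accessors):

```ts
import { createMemo } from "solid-js";

declare function messageIds(): string[];
declare function isVisible(id: string): boolean;
declare function getMessage(id: string): { role: string } | undefined;

// `const` bindings sit in the temporal dead zone until their declaration runs,
// so the memo that reads filteredMessageIds must be declared after it.
const filteredMessageIds = createMemo(() => messageIds().filter(isVisible));

const lastAssistantIndex = createMemo(() => {
  const ids = filteredMessageIds();
  for (let i = ids.length - 1; i >= 0; i--) {
    if (getMessage(ids[i])?.role === "assistant") return i; // early exit from the end
  }
  return -1;
});
```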
Critical performance fixes for MULTIX chat mode:
1. isAgentThinking - Simplified to only check the last message (see the sketch after this list)
- Previously iterated ALL messages with .some() on every store update
- Each getMessage() call created a reactive subscription
- Now only checks the last message (O(1) instead of O(n))
2. lastAssistantIndex - Memoized with createMemo
- Changed from function to createMemo for proper caching
- Added early exit optimization for common case
3. Auto-scroll effect - Removed isAgentThinking dependency
- The thinking-based scroll was firing on every reactive update
- Now only triggers on message count changes
- Streaming scroll is handled by the interval-based effect
These combined fixes prevent the cascading reactive loop that
was freezing the UI during message send.
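A sketch of the O(1) check from item 1; the `completed` field is an assumption:

```ts
import { createMemo } from "solid-js";

declare function filteredMessageIds(): string[];
declare function getMessage(id: string): { role: string; completed?: boolean } | undefined;

// Only the last message can still be streaming, so checking it alone
// replaces the old .some() scan over every message.
const isAgentThinking = createMemo(() => {
  const ids = filteredMessageIds();
  if (ids.length === 0) return false;
  const last = getMessage(ids[ids.length - 1]);
  return !!last && last.role === "assistant" && !last.completed;
});
```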
Performance optimizations to prevent UI freeze during streaming:
1. message-block-list.tsx:
- Removed createEffect that logged on every messageIds change
- Removed unused logger import (was causing IPC overload)
2. multi-task-chat.tsx:
- Changed filteredMessageIds from function to createMemo for proper memoization
- Throttled auto-scroll effect to only trigger when message COUNT changes
- Previously it fired on every reactive store update during streaming
These changes prevent excessive re-renders and IPC calls during message streaming.
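Keying the scroll effect on message count rather than array identity can be expressed with solid-js on(); a sketch with stand-in helpers:

```ts
import { createEffect, on } from "solid-js";

declare function filteredMessageIds(): string[];
declare function scrollToBottom(): void;

// Tracking only the length means the effect fires once per new message,
// not on every store mutation during streaming.
createEffect(
  on(
    () => filteredMessageIds().length,
    () => scrollToBottom(),
  ),
);
```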
Backend:
- Created context-engine/client.ts - HTTP client for Context-Engine API
- Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
- Created context-engine/index.ts - Module exports
- Created server/routes/context-engine.ts - API endpoints for status/health/query
Integration:
- workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
- index.ts: Initialize ContextEngineService on server start (lazy mode)
- ollama-cloud.ts: Inject RAG context into chat requests when available
Frontend:
- model-selector.tsx: Added Context-Engine status indicator
- Green dot = Ready (RAG enabled)
- Blue pulsing dot = Indexing
- Red dot = Error
- Hidden when Context-Engine not running
All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
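The indicator in model-selector.tsx plausibly reduces to a status-to-class mapping like this sketch (the status union and class names are assumptions):

```ts
type ContextEngineStatus = "ready" | "indexing" | "error" | "stopped";

function statusDotClass(status: ContextEngineStatus): string | null {
  switch (status) {
    case "ready":    return "dot-green";        // RAG enabled
    case "indexing": return "dot-blue pulsing"; // still building the index
    case "error":    return "dot-red";
    case "stopped":  return null;               // hide the indicator entirely
  }
}
```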
Added an event-loop yield (setTimeout 0) after processing each batch of SSE
lines. This lets the main thread process UI updates and user interaction
between streaming updates, preventing the UI from becoming completely
unresponsive during rapid streaming.
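The yield itself is a one-liner inside the SSE loop; a sketch with an assumed batch shape:

```ts
async function processSseBatches(batches: string[][], handleLine: (l: string) => void) {
  for (const batch of batches) {
    for (const line of batch) handleLine(line);
    // setTimeout(0) schedules a macrotask, so rendering and input
    // handling get a turn before the next batch is processed.
    await new Promise<void>((resolve) => setTimeout(resolve, 0));
  }
}
```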
- Added a 60-second timeout per chunk in parseStreamingResponse
- Added a 120-second timeout to makeRequest with AbortController
- This prevents the server from hanging indefinitely on a slow or unresponsive API
This should fix the UI freeze when sending messages to Ollama Cloud models.
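A sketch of the 120-second request timeout via AbortController (the per-chunk 60-second timeout in parseStreamingResponse would follow the same pattern):

```ts
async function makeRequest(url: string, init: RequestInit = {}): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 120_000);
  try {
    // Aborting rejects the fetch with an AbortError instead of hanging forever.
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear so nothing keeps the process alive
  }
}
```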
Features added:
- Custom Agent Creator dialog with AI generation support (up to 30k chars)
- Plus button next to agent selector to create new agents
- Zread MCP Server from Z.AI in marketplace (remote HTTP config)
- Extended MCP config types to support remote/http/sse servers
Bug fixes:
- Filter out SDK Z.AI/GLM providers so requests go through our custom routing with full message history
- This fixes the issue where changing models mid-chat lost conversation context
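A sketch of the extended MCP config types; field and type names are assumptions based on the commit text, not the project's actual schema:

```ts
type McpLocalServer = {
  type?: "local";
  command: string;
  args?: string[];
  env?: Record<string, string>;
};

// Remote servers (e.g. the Zread endpoint from Z.AI) are addressed by URL
// and transport instead of a local command.
type McpRemoteServer = {
  type: "remote" | "http" | "sse";
  url: string;
  headers?: Record<string, string>;
};

type McpServerConfig = McpLocalServer | McpRemoteServer;
```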