Changes:
1. Exported getStoredAntigravityToken and isAntigravityTokenValid from session-api.ts
2. Imported token helpers into session-actions.ts
3. Added token validation and user notifications to streamAntigravityChat
4. Fixed TypeScript implicit any error in fetchAntigravityProvider
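The exported helpers in step 1 can be sketched as follows; the token shape, the module-level store, and the 60-second expiry margin are assumptions for illustration, not the actual session-api.ts implementation:

```typescript
// Hypothetical token shape; the real fields in session-api.ts are assumptions.
interface AntigravityToken {
  accessToken: string;
  expiresAt: number; // expiry as epoch milliseconds
}

// Module-level store stands in for however session-api.ts persists the token.
let storedToken: AntigravityToken | null = null;

export function getStoredAntigravityToken(): AntigravityToken | null {
  return storedToken;
}

export function isAntigravityTokenValid(
  token: AntigravityToken | null,
): token is AntigravityToken {
  // Treat tokens expiring within the next 60 s as invalid so they don't
  // lapse mid-stream (the margin is an assumption).
  return token !== null && token.expiresAt - Date.now() > 60_000;
}
```

streamAntigravityChat would then check `isAntigravityTokenValid(getStoredAntigravityToken())` up front and surface a re-authentication notice instead of starting a stream that is bound to fail.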
Gemini AI · 2025-12-28 04:01:58 +04:00
Some checks failed
Release Binaries / release (push) Has been cancelled
Changes:
1. Enhanced removeDuplicateProviders() to filter out SDK providers that duplicate
entries in extras (qwen-oauth, zai, ollama-cloud, antigravity)
2. Added logic to remove any Qwen-related SDK providers when qwen-oauth is authenticated
3. Fixed missing setActiveParentSession import in instance-shell2.tsx
These changes ensure:
- No duplicate models appear in the model selector
- Qwen OAuth models don't duplicate with any SDK Qwen providers
- TypeScript compilation passes successfully
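The dedup rule described in points 1 and 2 can be sketched as a pure filter; the function signature and `Provider` shape here are assumptions, not the real ones:

```typescript
interface Provider {
  id: string;
  name: string;
}

// Provider ids served from extras, per the commit message above.
const EXTRA_PROVIDER_IDS = new Set(["qwen-oauth", "zai", "ollama-cloud", "antigravity"]);

function removeDuplicateProviders(
  sdkProviders: Provider[],
  qwenOAuthAuthenticated: boolean,
): Provider[] {
  return sdkProviders.filter((p) => {
    // Extras take precedence: drop the SDK copy of any provider in extras.
    if (EXTRA_PROVIDER_IDS.has(p.id)) return false;
    // When Qwen OAuth is authenticated, drop any Qwen-flavoured SDK provider
    // so its models don't appear twice in the model selector.
    if (qwenOAuthAuthenticated && p.id.toLowerCase().includes("qwen")) return false;
    return true;
  });
}
```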
Gemini AI · 2025-12-28 03:27:31 +04:00
1. Implemented auto-selection of tasks in MultiXV2 to prevent empty initial state.
2. Added force-loading logic for task session messages with debouncing.
3. Updated session-actions to return full assistant text and immediately persist native messages.
4. Fixed caching logic in instance-shell2 to retain active task sessions in memory.
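The debouncing in step 2 presumably rests on a helper along these lines; this is a generic sketch, and the actual wait window and wiring in the codebase are assumptions:

```typescript
// Collapse a burst of calls into one trailing invocation after waitMs of quiet.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // reset on every new call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Wrapping the force-load of task session messages this way means rapid task switches trigger a single fetch for the final selection rather than one per click.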
Gemini AI · 2025-12-27 20:36:43 +04:00
- Added Antigravity AI provider with Google OAuth authentication
- New integration client (antigravity.ts) with automatic endpoint fallback
- API routes for /api/antigravity/* (models, auth-status, test, chat)
- AntigravitySettings.tsx for Advanced Settings panel
- Updated session-api.ts and session-actions.ts for provider routing
- Updated opencode.jsonc with Antigravity plugin and 11 models:
  - Gemini 3 Pro Low/High, Gemini 3 Flash
  - Claude Sonnet 4.5 (+ thinking variants)
  - Claude Opus 4.5 (+ thinking variants)
  - GPT-OSS 120B Medium
- Fixed native mode startup error (it was trying to launch __nomadarch_native__ as a binary)
- Native mode workspaces now skip binary launch and are immediately ready
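The automatic endpoint fallback in antigravity.ts likely amounts to trying base URLs in order and returning the first good response. A minimal sketch with an injectable fetch for testing; the signature, parameter order, and URLs are assumptions:

```typescript
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

// Try each base URL in order; return the first successful response,
// otherwise rethrow the last failure.
async function fetchWithFallback(
  endpoints: string[],
  path: string,
  init?: RequestInit,
  doFetch: FetchLike = fetch, // injectable for tests; defaults to global fetch
): Promise<Response> {
  let lastError: unknown = new Error("no endpoints configured");
  for (const base of endpoints) {
    try {
      const res = await doFetch(base + path, init);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status} from ${base}`);
    } catch (err) {
      lastError = err; // network failure: fall through to the next endpoint
    }
  }
  throw lastError;
}
```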
Gemini AI · 2025-12-27 04:01:38 +04:00
Backend:
- Created context-engine/client.ts - HTTP client for Context-Engine API
- Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
- Created context-engine/index.ts - Module exports
- Created server/routes/context-engine.ts - API endpoints for status/health/query
Integration:
- workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
- index.ts: Initialize ContextEngineService on server start (lazy mode)
- ollama-cloud.ts: Inject RAG context into chat requests when available
Frontend:
- model-selector.tsx: Added Context-Engine status indicator
  - Green dot = Ready (RAG enabled)
  - Blue pulsing dot = Indexing
  - Red dot = Error
  - Hidden when Context-Engine not running
All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
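The indicator mapping above is simple enough to pin down in code; the function name and the status union are illustrative, not the actual model-selector.tsx internals:

```typescript
type EngineStatus = "ready" | "indexing" | "error" | "stopped";

// Dot colour for the Context-Engine status indicator; null hides the dot.
function statusDotColor(status: EngineStatus): string | null {
  if (status === "ready") return "green";   // RAG enabled
  if (status === "indexing") return "blue"; // rendered as a pulsing dot
  if (status === "error") return "red";
  return null; // "stopped": Context-Engine not running, indicator hidden
}
```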
- Added a 60-second per-chunk timeout in parseStreamingResponse
- Added a 120-second timeout to makeRequest via AbortController
- This prevents the server from hanging indefinitely on a slow or unresponsive API
This should fix the UI freeze when sending messages to Ollama Cloud models.
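The AbortController pattern behind the 120-second timeout can be shown with a generic wrapper; how makeRequest actually wires this up is an assumption:

```typescript
// Run an abortable operation with a deadline: abort the signal after
// timeoutMs, and always clear the timer once the operation settles.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 120_000, // mirrors the 120 s makeRequest timeout above
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // don't leak the timer on success or failure
  }
}

// Usage sketch: withTimeout((signal) => fetch(url, { signal }), 120_000)
```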
Added comprehensive AI model integrations:
Z.AI Integration:
- Client with Anthropic-compatible API (GLM Coding Plan)
- Routes for config, testing, and streaming chat
- Settings UI component with API key management
OpenCode Zen Integration:
- Free models client using 'public' API key
- Dynamic model fetching from models.dev
- Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
- No API key required for free tier!
UI Enhancements:
- Added Free Models tab (first position) in Advanced Settings
- Z.AI tab with GLM Coding Plan info
- OpenCode Zen settings with model cards and status
All integrations work standalone without opencode.exe dependency.
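The "no API key required" rule for the OpenCode Zen free tier suggests key selection along these lines; the model shape and function name are hypothetical, only the literal 'public' key comes from the notes above:

```typescript
// Illustrative model shape; not the actual models.dev schema.
interface ZenModel {
  id: string;
  free: boolean;
}

// Free-tier models use the literal "public" API key; paid models need the
// user's own key.
function zenApiKey(model: ZenModel, userKey?: string): string {
  if (model.free) return "public";
  if (!userKey) throw new Error(`API key required for ${model.id}`);
  return userKey;
}
```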
- Copy complete source code packages from original CodeNomad project
- Add root package.json with npm workspace configuration
- Include electron-app, server, ui, tauri-app, and opencode-config packages
- Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
- Fix Launch-Unix.sh to work with correct npm scripts
- Launchers now correctly call npm run dev:electron which launches Electron app