Commit Graph

46 Commits

  • fix: complete session persistence overhaul (Codex 5.2)
    1. Implemented auto-selection of tasks in MultiXV2 to prevent empty initial state.
    2. Added force-loading logic for task session messages with debouncing.
    3. Updated session-actions to return full assistant text and immediately persist native messages.
    4. Fixed caching logic in instance-shell2 to retain active task sessions in memory.
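The debounced force-load in step 2 can be sketched as follows; the `debounce` helper, the handler body, and the 150 ms window are assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical sketch: collapse bursts of session-change events into one load.
function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // Restart the wait on each call.
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

let loadCount = 0;
const forceLoadMessages = debounce((_sessionId: string) => {
  loadCount += 1; // In the real code this would fetch the task's session messages.
}, 150);
```

Only the trailing call in a burst fires, so rapid task switches trigger a single load.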
  • feat: Add Google Device Authorization Flow for Antigravity native mode
    - Implemented proper OAuth device flow using gcloud CLI client ID
    - Added /api/antigravity/device-auth/start endpoint
    - Added /api/antigravity/device-auth/poll endpoint for completion polling
    - Added /api/antigravity/device-auth/refresh for token renewal
    - Updated AntigravitySettings UI with user code display
    - Auto-opens Google sign-in page and polls for completion
    - Seamless authentication experience matching SDK mode
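The poll step can be sketched as a bounded retry loop. The `PollResult` shape and `pollForToken` helper below are illustrative assumptions; in the UI the `check` callback would hit `/api/antigravity/device-auth/poll`:

```typescript
// Hypothetical device-flow polling loop: keep checking until the user
// completes sign-in, waiting between attempts, with an overall attempt cap.
type PollResult = { status: "pending" } | { status: "complete"; accessToken: string };

async function pollForToken(
  check: () => Promise<PollResult>,
  intervalMs = 5000,
  maxAttempts = 60,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check();
    if (result.status === "complete") return result.accessToken;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("device authorization timed out");
}
```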
  • fix: Replace broken web OAuth with manual token entry for Antigravity
    - Removed Google OAuth popup flow (invalid_client error)
    - Added manual access token input option
    - Added instructions to use SDK mode with antigravity plugin (recommended)
    - Added copy button for CLI command
  • feat: Add Antigravity provider integration + fix native mode startup
    - Added Antigravity AI provider with Google OAuth authentication
    - New integration client (antigravity.ts) with automatic endpoint fallback
    - API routes for /api/antigravity/* (models, auth-status, test, chat)
    - AntigravitySettings.tsx for Advanced Settings panel
    - Updated session-api.ts and session-actions.ts for provider routing
    - Updated opencode.jsonc with Antigravity plugin and 11 models:
      - Gemini 3 Pro Low/High, Gemini 3 Flash
      - Claude Sonnet 4.5 (+ thinking variants)
      - Claude Opus 4.5 (+ thinking variants)
      - GPT-OSS 120B Medium
    
    - Fixed native mode startup error (it previously tried to launch __nomadarch_native__ as a binary)
    - Native mode workspaces now skip binary launch and are immediately ready
  • v0.5.0: Binary-Free Mode - No OpenCode binary required
     Major Features:
    - Native session management without OpenCode binary
    - Provider routing: OpenCode Zen (free), Qwen OAuth, Z.AI
    - Streaming chat with tool execution loop
    - Mode detection API (/api/meta/mode)
    - MCP integration fix (resolved infinite loading)
    - NomadArch Native option in UI with comparison info
    
    🆓 Free Models (No API Key):
    - GPT-5 Nano (400K context)
    - Grok Code Fast 1 (256K context)
    - GLM-4.7 (205K context)
    - Doubao Seed Code (256K context)
    - Big Pickle (200K context)
    
    📦 New Files:
    - session-store.ts: Native session persistence
    - native-sessions.ts: REST API for sessions
    - lite-mode.ts: UI mode detection client
    - native-sessions.ts (UI): SolidJS store
    
    🔧 Updated:
    - All installers: Optional binary download
    - All launchers: Mode detection display
    - Binary selector: Added NomadArch Native option
    - README: Binary-Free Mode documentation
  • Fix initialization order: Define filteredMessageIds before lastAssistantIndex
    Fixed 'Cannot access filteredMessageIds before initialization' error by
    reordering the declarations. Since lastAssistantIndex depends on
    filteredMessageIds, it must be defined after it.
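The constraint comes from JavaScript's temporal dead zone combined with SolidJS memos evaluating their computation at creation time. A minimal plain-TS reproduction, with `createMemo` simulated by an eager factory (names mirror the commit; the data is illustrative):

```typescript
// An eager memo factory: the computation runs immediately, so any const it
// reads must already be initialized at this point in the module.
function eagerMemo<T>(compute: () => T): () => T {
  const value = compute();
  return () => value;
}

const filteredMessageIds = eagerMemo(() => ["user-1", "asst-1", "asst-2"]);
// Declared after its dependency; swapping these two declarations would throw
// "Cannot access 'filteredMessageIds' before initialization".
const lastAssistantIndex = eagerMemo(() => filteredMessageIds().length - 1);
```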
  • Fix UI freeze: Optimize reactive memos and remove trigger loops
    Critical performance fixes for MULTIX chat mode:
    
    1. isAgentThinking - Simplified to only check last message
       - Previously iterated ALL messages with .some() on every store update
       - Each getMessage() call created a reactive subscription
       - Now only checks the last message (O(1) instead of O(n))
    
    2. lastAssistantIndex - Memoized with createMemo
       - Changed from function to createMemo for proper caching
       - Added early exit optimization for common case
    
    3. Auto-scroll effect - Removed isAgentThinking dependency
       - The thinking-based scroll was firing on every reactive update
       - Now only triggers on message count changes
       - Streaming scroll is handled by the interval-based effect
    
    These combined fixes prevent the cascading reactive loop that
    was freezing the UI during message send.
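The `isAgentThinking` change can be sketched as follows; the message shape is an assumption for illustration:

```typescript
// Before: messages.some(m => !m.completed) touched every message (and, in
// SolidJS, subscribed to each one). After: only the last message is inspected.
interface ChatMsg {
  role: "user" | "assistant";
  completed: boolean;
}

function isAgentThinking(messages: ChatMsg[]): boolean {
  const last = messages[messages.length - 1];
  return last !== undefined && last.role === "assistant" && !last.completed;
}
```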
  • Fix UI freeze: Remove high-frequency logging and throttle scroll effects
    Performance optimizations to prevent UI freeze during streaming:
    
    1. message-block-list.tsx:
       - Removed createEffect that logged on every messageIds change
       - Removed unused logger import (was causing IPC overload)
    
    2. multi-task-chat.tsx:
       - Changed filteredMessageIds from function to createMemo for proper memoization
       - Throttled auto-scroll effect to only trigger when message COUNT changes
       - Previously it fired on every reactive store update during streaming
    
    These changes prevent excessive re-renders and IPC calls during message streaming.
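Outside of SolidJS, the effect of the `createMemo` change can be illustrated with a simple cache: a plain function re-filters on every read (and every reactive update during streaming), while a memo reuses its result until invalidated. All names and data below are illustrative:

```typescript
// Minimal memo: compute once, serve the cached value until invalidated.
function memoize<T>(compute: () => T): { get: () => T; invalidate: () => void } {
  let cached: T | undefined;
  let dirty = true;
  return {
    get: () => {
      if (dirty) {
        cached = compute();
        dirty = false;
      }
      return cached as T;
    },
    invalidate: () => { dirty = true; },
  };
}

let computations = 0;
const allIds = ["user-1", "asst-1", "system-1", "asst-2"];
const filteredMessageIds = memoize(() => {
  computations += 1;
  return allIds.filter((id) => !id.startsWith("system"));
});
```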
  • Integrate Context-Engine RAG service for enhanced LLM responses
    Backend:
    - Created context-engine/client.ts - HTTP client for Context-Engine API
    - Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
    - Created context-engine/index.ts - Module exports
    - Created server/routes/context-engine.ts - API endpoints for status/health/query
    
    Integration:
    - workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
    - index.ts: Initialize ContextEngineService on server start (lazy mode)
    - ollama-cloud.ts: Inject RAG context into chat requests when available
    
    Frontend:
    - model-selector.tsx: Added Context-Engine status indicator
      - Green dot = Ready (RAG enabled)
      - Blue pulsing dot = Indexing
      - Red dot = Error
      - Hidden when Context-Engine not running
    
    All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
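The injection step can be sketched as follows; `withRagContext`, the message shape, and the query function are illustrative assumptions, but the graceful-fallback behavior mirrors the commit:

```typescript
// Hypothetical RAG injection: prepend retrieved workspace context as a system
// message, and fall back to the untouched request on any engine failure.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function withRagContext(
  messages: ChatMessage[],
  queryContextEngine: (query: string) => Promise<string | null>,
): Promise<ChatMessage[]> {
  const lastUser = [...messages].reverse().find((m) => m.role === "user");
  if (!lastUser) return messages;
  try {
    const context = await queryContextEngine(lastUser.content);
    if (!context) return messages; // Engine idle or no hits: send as-is.
    return [{ role: "system", content: `Relevant workspace context:\n${context}` }, ...messages];
  } catch {
    return messages; // Non-blocking: any engine error degrades to a plain request.
  }
}
```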
  • Fix UI freeze by adding yield to SSE streaming loop
    Added an event-loop yield (setTimeout 0) after processing each batch of SSE
    lines. setTimeout schedules a macrotask, so the main thread can process UI
    updates and user interaction between streaming batches instead of becoming
    completely unresponsive during rapid streaming.
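A minimal sketch of the pattern, assuming a hypothetical `processLine` callback and batch size:

```typescript
// Process SSE lines in batches, yielding to the event loop between batches so
// rendering and input handling are not starved during fast streams.
async function handleSseChunks(
  lines: string[],
  processLine: (line: string) => void,
  batchSize = 20,
): Promise<void> {
  for (let i = 0; i < lines.length; i += batchSize) {
    for (const line of lines.slice(i, i + batchSize)) processLine(line);
    // setTimeout(0) queues a macrotask, letting pending UI work run first.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```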
  • Add server-side timeout handling to Ollama Cloud streaming
    - Added 60 second timeout per chunk in parseStreamingResponse
    - Added 120 second timeout to makeRequest with AbortController
    - This prevents the server from hanging indefinitely on a slow or unresponsive API
    
    This should fix the UI freeze when sending messages to Ollama Cloud models.
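The AbortController pattern can be sketched generically; the helper name and durations are illustrative (Node 18+ and browsers provide `AbortController` and `fetch`):

```typescript
// Run an abortable operation with a hard timeout: the controller aborts when
// the timer fires, surfacing a rejection instead of hanging forever.
async function withAbortTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // e.g. withAbortTimeout((signal) => fetch(url, { signal }), 120_000)
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // Prevents the timer from leaking after completion.
  }
}
```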
  • Add custom agent creator, Zread MCP, fix model change context continuity
    Features added:
    - Custom Agent Creator dialog with AI generation support (up to 30k chars)
    - Plus button next to agent selector to create new agents
    - Zread MCP Server from Z.AI in marketplace (remote HTTP config)
    - Extended MCP config types to support remote/http/sse servers
    
    Bug fixes:
    - Filter out the SDK's built-in Z.AI/GLM providers so requests go through our custom routing, which sends the full message history
    - This fixes the issue where changing models mid-chat lost conversation context
  • feat: integrate Z.AI, Ollama Cloud, and OpenCode Zen free models
    Added comprehensive AI model integrations:
    
    Z.AI Integration:
    - Client with Anthropic-compatible API (GLM Coding Plan)
    - Routes for config, testing, and streaming chat
    - Settings UI component with API key management
    
    OpenCode Zen Integration:
    - Free models client using 'public' API key
    - Dynamic model fetching from models.dev
    - Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
    - No API key required for free tier!
    
    UI Enhancements:
    - Added Free Models tab (first position) in Advanced Settings
    - Z.AI tab with GLM Coding Plan info
    - OpenCode Zen settings with model cards and status
    
    All integrations work standalone without opencode.exe dependency.
  • feat: add APEX PRO mode combining SOLO + APEX
    Combined autonomous and auto-approval modes into APEX PRO:
    - Single toggle enables both functionalities
    - Orange color scheme with gentle blinking animation
    - Pulsing shadow effect when enabled
    - Small glowing indicator dot
    - Tooltip explains combined functionality
    - SHIELD button remains separate for auto-approval only
  • feat: add API Key Manager button, fix overflow, update branding
    Changes:
    1. Fixed MULTIX overflow issue - added max-h-full and overflow-hidden to prevent content from pushing interface out of frame
    
    2. Added API Key Manager button in header:
       - Key icon with emerald hover effect
       - Opens modal with provider list (NomadArch Free, Ollama Cloud, OpenAI, Anthropic, OpenRouter)
       - Shows provider status and configuration
    
    3. Updated branding:
       - Window title: 'NomadArch 1.0'
       - Loading screen: 'NomadArch 1.0 - A fork of OpenCode'
       - Updated page titles
    
    4. Added Settings and Key icons to imports
  • feat: add hover preview tooltips to message sidebar
    Enhanced YOU/ASST message navigation:
    - Click scrolls to message (already implemented)
    - Hover shows preview tooltip with:
      - Role label (You/Assistant)
      - Message number
      - First 100 chars of message content
    - Smooth slide-in animation on hover
    - Slightly larger buttons with scale effect on hover
  • feat: add enhanced MULTIX UI features
    Added all missing MULTIX enhancements matching the original screenshot:
    
    1. STREAMING indicator:
       - Animated purple badge with sparkles icon
       - Shows live token count during streaming
       - Pulsing animation effect
    
    2. Status badges:
       - PENDING/RUNNING/DONE badges for tasks
       - Color-coded based on status
    
    3. APEX/SHIELD renamed:
       - 'Auto' -> 'APEX' with tooltip
       - 'Shield' -> 'SHIELD' with tooltip
    
    4. THINKING indicator:
       - Bouncing dots animation (3 dots)
       - Shows THINKING or SENDING status
    
    5. STOP button:
       - Red stop button appears during agent work
       - Calls cancel endpoint to interrupt
    
    6. Detailed token stats bar:
       - INPUT/OUTPUT tokens
       - REASONING tokens (amber)
       - CACHE READ (emerald)
       - CACHE WRITE (cyan)
       - COST (violet)
       - MODEL (indigo)
    
    7. Message navigation sidebar:
       - YOU/ASST labels for each message
       - Click to scroll to message
       - Appears on right side when viewing task
  • feat: restore GLM 4.7 fixes - auto-scroll and retry logic
    Changes from GLM 4.7 Progress Log:
    
    1. Multi-task chat auto-scroll (multi-task-chat.tsx):
       - Added createEffect that monitors message count changes
       - Auto-scrolls using requestAnimationFrame + setTimeout(50ms)
       - Scrolls when new messages arrive or during streaming
    
    2. Electron black screen fix (main.ts):
       - Added exponential backoff retry (1s, 2s, 4s, 8s, 16s max)
       - Added 30-second timeout for load operations
       - Added user-friendly error screen with retry button
       - Handles errno -3 network errors gracefully
       - Max 5 retry attempts before showing error
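The retry schedule can be sketched as follows; `loadWithRetry` and `backoffDelayMs` are illustrative names, not the actual main.ts code:

```typescript
// Exponential backoff matching the commit's schedule: 1s, 2s, 4s, 8s, 16s cap.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 16_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Retry the load up to 5 times, sleeping between attempts; after the last
// failure the error propagates so the caller can show the error screen.
async function loadWithRetry<T>(load: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await load();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastError;
}
```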
  • restore: bring back all custom UI enhancements from checkpoint
    Restored from commit 52be710 (checkpoint before qwen oauth + todo roller):
    
    Enhanced UI Features:
    - SMART FIX button with AI code analysis
    - APEX (Autonomous Programming EXecution) mode
    - SHIELD (Auto-approval) mode
    - MULTIX MODE multi-task pipeline interface
    - Live streaming token counter
    - Thinking indicator with bouncing dots animation
    
    Components restored:
    - packages/ui/src/components/chat/multi-task-chat.tsx
    - packages/ui/src/components/instance/instance-shell2.tsx
    - packages/ui/src/components/settings/OllamaCloudSettings.tsx
    - packages/ui/src/components/settings/QwenCodeSettings.tsx
    - packages/ui/src/stores/solo-store.ts
    - packages/ui/src/stores/task-actions.ts
    - packages/ui/src/stores/session-events.ts (autonomous mode)
    - packages/server/src/integrations/ollama-cloud.ts
    - packages/server/src/server/routes/ollama.ts
    - packages/server/src/server/routes/qwen.ts
    
    This ensures all custom features are preserved in source control.
  • restore: recover deleted documentation, CI/CD, and infrastructure files
    Restored from origin/main (b4663fb):
    - .github/ workflows and issue templates
    - .gitignore (proper exclusions)
    - .opencode/agent/web_developer.md
    - AGENTS.md, BUILD.md, PROGRESS.md
    - dev-docs/ (9 architecture/implementation docs)
    - docs/screenshots/ (4 UI screenshots)
    - images/ (CodeNomad icons)
    - package-lock.json (dependency lockfile)
    - tasks/ (25+ project task files)
    
    Also restored original source files that were modified:
    - packages/ui/src/App.tsx
    - packages/ui/src/lib/logger.ts
    - packages/ui/src/stores/instances.ts
    - packages/server/src/server/routes/workspaces.ts
    - packages/server/src/workspaces/manager.ts
    - packages/server/src/workspaces/runtime.ts
    - packages/server/package.json
    
    Kept new additions:
    - Install-*.bat/.sh (enhanced installers)
    - Launch-*.bat/.sh (new launchers)
    - README.md (SEO optimized with GLM 4.7)
  • fix: restore complete source code and fix launchers
    - Copy complete source code packages from original CodeNomad project
    - Add root package.json with npm workspace configuration
    - Include electron-app, server, ui, tauri-app, and opencode-config packages
    - Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
    - Fix Launch-Unix.sh to work with correct npm scripts
    - Launchers now correctly call npm run dev:electron which launches Electron app
  • feat: restore and update installer/launcher scripts
    - Restore Install-Windows.bat with npm primary + ZIP fallback for OpenCode
    - Restore Install-Linux.sh with npm primary + ZIP fallback for OpenCode
    - Restore Install-Mac.sh with npm primary + ZIP fallback for OpenCode
    - Add Launch-Windows.bat launcher with dependency checking and port detection
    - Add Launch-Unix.sh launcher for Linux/macOS
    - Add Launch-Dev-Windows.bat for development mode
    - All scripts use actual GitHub releases URLs for OpenCode
    - Enhanced with comprehensive error handling and user guidance
  • feat: SEO optimize README and add GLM 4.7 integration
    - Add comprehensive SEO meta tags (Open Graph, Twitter Card, Schema.org JSON-LD)
    - Add GitHub badges (stars, forks, license, release) with CTA
    - Add dedicated 'Supported AI Models & Providers' section with:
      - GLM 4.7 spotlight with benchmarks (SWE-bench +73.8%, #1 WebDev)
      - Z.AI API integration with 10% discount link (R0K78RJKNW)
      - Complete model listings for Z.AI, Anthropic, OpenAI, Google, Qwen, Ollama
    - Update installers with npm primary method and ZIP fallback for OpenCode CLI
    - Add backup files for all installers
    - Update repository clone URL to new GitHub location
    - Update all URLs and references to roman-ryzenadvanced/NomadArch-v1.0