68 Commits

  • Integrate Context-Engine RAG service for enhanced LLM responses
    Backend:
    - Created context-engine/client.ts - HTTP client for Context-Engine API
    - Created context-engine/service.ts - Lifecycle management of Context-Engine sidecar
    - Created context-engine/index.ts - Module exports
    - Created server/routes/context-engine.ts - API endpoints for status/health/query
    
    Integration:
    - workspaces/manager.ts: Trigger indexing when workspace becomes ready (non-blocking)
    - index.ts: Initialize ContextEngineService on server start (lazy mode)
    - ollama-cloud.ts: Inject RAG context into chat requests when available
    
    Frontend:
    - model-selector.tsx: Added Context-Engine status indicator
      - Green dot = Ready (RAG enabled)
      - Blue pulsing dot = Indexing
      - Red dot = Error
      - Hidden when Context-Engine not running
    
    All operations are non-blocking with graceful fallback when Context-Engine is unavailable.
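The non-blocking injection with graceful fallback described above can be sketched as follows; the sidecar URL, payload shape, and helper names are illustrative assumptions, not this repository's actual API.

```typescript
// Sketch of non-blocking RAG context injection with graceful fallback.
// Endpoint, payload, and names are assumptions for illustration.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Query the Context-Engine sidecar, but never let a failure or a slow
// response block the chat request itself.
async function fetchRagContext(
  query: string,
  timeoutMs = 2000,
): Promise<string | null> {
  try {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    const res = await fetch("http://localhost:8900/query", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
      signal: controller.signal,
    });
    clearTimeout(timer);
    if (!res.ok) return null; // graceful fallback: no context
    const data = (await res.json()) as { context?: string };
    return data.context ?? null;
  } catch {
    return null; // sidecar down or timed out: proceed without RAG
  }
}

// Prepend retrieved context as a system message when available;
// return the request unchanged when the engine is unavailable.
export function injectRagContext(
  messages: ChatMessage[],
  context: string | null,
): ChatMessage[] {
  if (!context) return messages;
  return [
    { role: "system", content: `Relevant workspace context:\n${context}` },
    ...messages,
  ];
}
```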
  • Fix UI freeze by adding yield to SSE streaming loop
    Added a yield via setTimeout(0) after processing each batch of SSE
    lines. Because setTimeout(0) schedules a macrotask, the main thread's
    event loop can process UI updates and user interaction between
    streaming updates, preventing the UI from becoming completely
    unresponsive during rapid streaming.
  • Add server-side timeout handling to Ollama Cloud streaming
    - Added 60 second timeout per chunk in parseStreamingResponse
    - Added 120 second timeout to makeRequest with AbortController
    - This prevents the server from hanging indefinitely on a slow or unresponsive API
    
    This should fix the UI freeze when sending messages to Ollama Cloud models.
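The two timeout layers described above can be sketched as below; the function names and the 60s/120s values mirror the commit message, but this is not the repository's actual implementation.

```typescript
// Per-chunk timeout: reject if an async operation does not settle
// within `ms` (used around each chunk read in streaming parsing).
export function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

// Whole-request timeout via AbortController, as in makeRequest:
// aborting the signal cancels the underlying fetch.
export async function makeRequest(url: string, timeoutMs = 120_000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```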
  • Add custom agent creator, Zread MCP, fix model change context continuity
    Features added:
    - Custom Agent Creator dialog with AI generation support (up to 30k chars)
    - Plus button next to agent selector to create new agents
    - Zread MCP Server from Z.AI in marketplace (remote HTTP config)
    - Extended MCP config types to support remote/http/sse servers
    
    Bug fixes:
    - Filter out the SDK's built-in Z.AI/GLM providers so requests go through our custom routing, which sends the full message history
    - This fixes the issue where changing models mid-chat lost conversation context
  • feat: integrate Z.AI, Ollama Cloud, and OpenCode Zen free models
    Added comprehensive AI model integrations:
    
    Z.AI Integration:
    - Client with Anthropic-compatible API (GLM Coding Plan)
    - Routes for config, testing, and streaming chat
    - Settings UI component with API key management
    
    OpenCode Zen Integration:
    - Free models client using 'public' API key
    - Dynamic model fetching from models.dev
    - Supports GPT-5 Nano, Big Pickle, Grok Code Fast 1, MiniMax M2.1
    - No API key required for free tier!
    
    UI Enhancements:
    - Added Free Models tab (first position) in Advanced Settings
    - Z.AI tab with GLM Coding Plan info
    - OpenCode Zen settings with model cards and status
    
    All integrations work standalone without opencode.exe dependency.
  • feat: add APEX PRO mode combining SOLO + APEX
    Combined autonomous and auto-approval modes into APEX PRO:
    - Single toggle enables both functionalities
    - Orange color scheme with gentle blinking animation
    - Pulsing shadow effect when enabled
    - Small glowing indicator dot
    - Tooltip explains combined functionality
    - SHIELD button remains separate for auto-approval only
  • feat: add API Key Manager button, fix overflow, update branding
    Changes:
    1. Fixed MULTIX overflow issue - added max-h-full and overflow-hidden to prevent content from pushing the interface out of the frame
    
    2. Added API Key Manager button in header:
       - Key icon with emerald hover effect
       - Opens modal with provider list (NomadArch Free, Ollama Cloud, OpenAI, Anthropic, OpenRouter)
       - Shows provider status and configuration
    
    3. Updated branding:
       - Window title: 'NomadArch 1.0'
       - Loading screen: 'NomadArch 1.0 - A fork of OpenCode'
       - Updated page titles
    
    4. Added Settings and Key icons to imports
  • feat: add hover preview tooltips to message sidebar
    Enhanced YOU/ASST message navigation:
    - Click scrolls to message (already implemented)
    - Hover shows preview tooltip with:
      - Role label (You/Assistant)
      - Message number
      - First 100 chars of message content
    - Smooth slide-in animation on hover
    - Slightly larger buttons with scale effect on hover
  • feat: add enhanced MULTIX UI features
    Added all missing MULTIX enhancements matching the original screenshot:
    
    1. STREAMING indicator:
       - Animated purple badge with sparkles icon
       - Shows live token count during streaming
       - Pulsing animation effect
    
    2. Status badges:
       - PENDING/RUNNING/DONE badges for tasks
       - Color-coded based on status
    
    3. APEX/SHIELD renamed:
       - 'Auto' -> 'APEX' with tooltip
       - 'Shield' -> 'SHIELD' with tooltip
    
    4. THINKING indicator:
       - Bouncing dots animation (3 dots)
       - Shows THINKING or SENDING status
    
    5. STOP button:
       - Red stop button appears during agent work
       - Calls cancel endpoint to interrupt
    
    6. Detailed token stats bar:
       - INPUT/OUTPUT tokens
       - REASONING tokens (amber)
       - CACHE READ (emerald)
       - CACHE WRITE (cyan)
       - COST (violet)
       - MODEL (indigo)
    
    7. Message navigation sidebar:
       - YOU/ASST labels for each message
       - Click to scroll to message
       - Appears on right side when viewing task
  • feat: restore GLM 4.7 fixes - auto-scroll and retry logic
    Changes from GLM 4.7 Progress Log:
    
    1. Multi-task chat auto-scroll (multi-task-chat.tsx):
       - Added createEffect that monitors message count changes
       - Auto-scrolls using requestAnimationFrame + setTimeout(50ms)
       - Scrolls when new messages arrive or during streaming
    
    2. Electron black screen fix (main.ts):
       - Added exponential backoff retry (1s, 2s, 4s, 8s, 16s max)
       - Added 30-second timeout for load operations
       - Added user-friendly error screen with retry button
       - Handles errno -3 network errors gracefully
       - Max 5 retry attempts before showing error
  • restore: bring back all custom UI enhancements from checkpoint
    Restored from commit 52be710 (checkpoint before qwen oauth + todo roller):
    
    Enhanced UI Features:
    - SMART FIX button with AI code analysis
    - APEX (Autonomous Programming EXecution) mode
    - SHIELD (Auto-approval) mode
    - MULTIX MODE multi-task pipeline interface
    - Live streaming token counter
    - Thinking indicator with bouncing dots animation
    
    Components restored:
    - packages/ui/src/components/chat/multi-task-chat.tsx
    - packages/ui/src/components/instance/instance-shell2.tsx
    - packages/ui/src/components/settings/OllamaCloudSettings.tsx
    - packages/ui/src/components/settings/QwenCodeSettings.tsx
    - packages/ui/src/stores/solo-store.ts
    - packages/ui/src/stores/task-actions.ts
    - packages/ui/src/stores/session-events.ts (autonomous mode)
    - packages/server/src/integrations/ollama-cloud.ts
    - packages/server/src/server/routes/ollama.ts
    - packages/server/src/server/routes/qwen.ts
    
    This ensures all custom features are preserved in source control.
  • restore: recover deleted documentation, CI/CD, and infrastructure files
    Restored from origin/main (b4663fb):
    - .github/ workflows and issue templates
    - .gitignore (proper exclusions)
    - .opencode/agent/web_developer.md
    - AGENTS.md, BUILD.md, PROGRESS.md
    - dev-docs/ (9 architecture/implementation docs)
    - docs/screenshots/ (4 UI screenshots)
    - images/ (CodeNomad icons)
    - package-lock.json (dependency lockfile)
    - tasks/ (25+ project task files)
    
    Also restored original source files that were modified:
    - packages/ui/src/App.tsx
    - packages/ui/src/lib/logger.ts
    - packages/ui/src/stores/instances.ts
    - packages/server/src/server/routes/workspaces.ts
    - packages/server/src/workspaces/manager.ts
    - packages/server/src/workspaces/runtime.ts
    - packages/server/package.json
    
    Kept new additions:
    - Install-*.bat/.sh (enhanced installers)
    - Launch-*.bat/.sh (new launchers)
    - README.md (SEO-optimized with GLM 4.7)
  • fix: restore complete source code and fix launchers
    - Copy complete source code packages from original CodeNomad project
    - Add root package.json with npm workspace configuration
    - Include electron-app, server, ui, tauri-app, and opencode-config packages
    - Fix Launch-Windows.bat and Launch-Dev-Windows.bat to work with correct npm scripts
    - Fix Launch-Unix.sh to work with correct npm scripts
    - Launchers now correctly call npm run dev:electron which launches Electron app