Changes in this update:
1. Added a `cachedProviderId` signal in the MultiX v2 component.
2. Updated `syncFromStore` to also sync the provider ID from the task session.
3. Cached model and provider IDs are now updated immediately on model change.
4. The correct `providerId` is now passed to the `LiteModelSelector` component.

This fixes the issue where selecting Qwen models caused the chat input to stop responding, because the provider ID was not being tracked correctly.
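A minimal sketch of the pattern described above, assuming a SolidJS component. Only `cachedProviderId`, `syncFromStore`, and `LiteModelSelector` come from the change notes; the store shape, prop names, and handler signature are illustrative.

```tsx
// Sketch of the provider-ID caching fix (SolidJS). Not the actual component code.
import { createSignal } from "solid-js";
import type { Component } from "solid-js";

type TaskSession = { modelId: string; providerId: string };

// Placeholder for the real LiteModelSelector referenced in the change notes.
declare const LiteModelSelector: Component<{
  modelId: string;
  providerId: string;
  onChange: (modelId: string, providerId: string) => void;
}>;

function MultiXChat(props: { session: TaskSession }) {
  const [cachedModelId, setCachedModelId] = createSignal(props.session.modelId);
  const [cachedProviderId, setCachedProviderId] = createSignal(props.session.providerId);

  // Called by the store subscription (omitted here): keeps the cached IDs
  // in sync with the task session, including the provider ID.
  const syncFromStore = (session: TaskSession) => {
    setCachedModelId(session.modelId);
    setCachedProviderId(session.providerId); // previously only the model ID was synced
  };

  // Update both IDs immediately on model change, so the chat input never
  // points at a stale provider (the Qwen hang described above).
  const handleModelChange = (modelId: string, providerId: string) => {
    setCachedModelId(modelId);
    setCachedProviderId(providerId);
  };

  return (
    <LiteModelSelector
      modelId={cachedModelId()}
      providerId={cachedProviderId()} // pass the tracked provider ID
      onChange={handleModelChange}
    />
  );
}
```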
🏛️ NomadArch
Advanced AI Coding Workspace
NomadArch is an enhanced fork of CodeNomad — now with GLM 4.7, multi-model support, and MULTIX Mode
Features • AI Models • Installation • Usage • What's New • Credits
🎯 Overview
NomadArch is an enhanced fork of CodeNomad, featuring significant UI/UX improvements, additional AI integrations, and a more robust architecture. This is a full-featured AI coding assistant with support for multiple AI providers including GLM 4.7, Anthropic, OpenAI, Google, Qwen, and local models via Ollama.
✨ Key Improvements Over CodeNomad
- 🔧 Fixed Qwen OAuth authentication flow
- 🚀 Enhanced MULTIX Mode with live token streaming
- 🎨 Improved UI/UX with detailed tooltips
- ✅ Auto-build verification on launch
- 📦 Comprehensive installer scripts for all platforms
- 🔌 Port conflict detection and resolution hints (see the sketch after this list)
- 🆓 NEW: Binary-Free Mode - No external binaries required!
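The port-conflict detection mentioned above could work roughly like the following Node sketch. This is not the project's actual implementation; only the ports (3000/3001) come from the Troubleshooting section below.

```ts
// Hedged sketch: probe the dev-server ports and warn if one is already taken.
import net from "node:net";

function portIsFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once("error", () => resolve(false)); // e.g. EADDRINUSE -> port busy
    probe.once("listening", () => probe.close(() => resolve(true)));
    probe.listen(port, "127.0.0.1");
  });
}

(async () => {
  for (const port of [3000, 3001]) {
    if (!(await portIsFree(port))) {
      console.warn(`Port ${port} is already in use; free it before launching (see Troubleshooting).`);
    }
  }
})();
```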
🆓 Binary-Free Mode (v0.5.0)
NomadArch now works without requiring the OpenCode binary! This means:
| Benefit | Description |
|---|---|
| ⚡ Faster Setup | No binary downloads, just npm install |
| 🌍 Universal | Works on all platforms without platform-specific binaries |
| 🆓 Free Models | Access free AI models without any binary |
| 🔄 Seamless | Automatically falls back to native mode when the binary is unavailable (see the sketch below) |
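A rough sketch of how the binary-vs-native decision might be made. The binary name and the fallback strategy are assumptions for illustration, not NomadArch's actual code.

```ts
// Hedged sketch: pick "binary" mode only if an OpenCode binary is on PATH.
import { execSync } from "node:child_process";

function opencodeBinaryAvailable(): boolean {
  // "opencode" as the binary name is an assumption.
  const probe = process.platform === "win32" ? "where opencode" : "which opencode";
  try {
    execSync(probe, { stdio: "ignore" });
    return true;
  } catch {
    return false; // binary not found -> use native mode
  }
}

const mode = opencodeBinaryAvailable() ? "binary" : "native";
console.log(`Starting NomadArch in ${mode} mode`);
```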
Free Models Available (No API Key Required):
- 🧠 GPT-5 Nano - 400K context, reasoning + tools
- ⚡ Grok Code Fast 1 - 256K context, optimized for code
- 🌟 GLM-4.7 - 205K context, top-tier performance
- 🚀 Doubao Seed Code - 256K context, specialized for coding
- 🥒 Big Pickle - 200K context, efficient and fast
🤖 Supported AI Models
NomadArch supports a wide range of AI models from multiple providers, giving you flexibility to choose the best model for your coding tasks.
🔥 Featured Model: GLM 4.7 (Z.AI)
GLM 4.7 is the latest state-of-the-art open model from Z.AI, now fully integrated into NomadArch. Released in December 2025, GLM 4.7 ranks #1 for Web Development and #6 overall on the LM Arena leaderboard.
| Feature | Description |
|---|---|
| 📊 128K Context Window | Process entire codebases in a single session |
| 🧠 Interleaved Thinking | Advanced reasoning with multi-step analysis |
| 💭 Preserved Thinking | Maintains reasoning chain across long conversations |
| 🔄 Turn-level Thinking | Optimized per-response reasoning for efficiency |
Benchmark Performance
| Benchmark | Score | Notes |
|---|---|---|
| SWE-bench | +73.8% | Over GLM-4.6 |
| SWE-bench Multilingual | +66.7% | Over GLM-4.6 |
| Terminal Bench 2.0 | +41% | Over GLM-4.6 |
| LM Arena WebDev | #1 | Open Model Ranking |
| LM Arena Overall | #6 | Open Model Ranking |
🎯 Get a 10% discount on Z.AI with code:
R0K78RJKNW
📋 All Supported Models
🌟 Z.AI Models
| Model | Context | Specialty |
|---|---|---|
| GLM 4.7 | 128K | Web Development, Coding |
| GLM 4.6 | 128K | General Coding |
| GLM-4 | 128K | Versatile |
🟣 Anthropic Models
| Model | Context | Specialty |
|---|---|---|
| Claude 3.7 Sonnet | 200K | Complex Reasoning |
| Claude 3.5 Sonnet | 200K | Balanced Performance |
| Claude 3 Opus | 200K | Maximum Quality |
🟢 OpenAI Models
| Model | Context | Specialty |
|---|---|---|
| GPT-5 Preview | 200K | Latest Capabilities |
| GPT-4.1 | 128K | Production Ready |
| GPT-4 Turbo | 128K | Fast & Efficient |
🔵 Google Models
| Model | Context | Specialty |
|---|---|---|
| Gemini 2.0 Pro | 1M+ | Massive Context |
| Gemini 2.0 Flash | 1M+ | Ultra Fast |
🟠 Qwen & Local Models
| Model | Context/Size | Specialty |
|---|---|---|
| Qwen 2.5 Coder | 32K | Code Specialized |
| Qwen 2.5 | 32K | General Purpose |
| DeepSeek Coder (Ollama) | Varies | Code |
| Llama 3.1 (Ollama) | Varies | General |
📦 Installation
Quick Start (Recommended)
Windows
```bat
Install-Windows.bat
Launch-Windows.bat
```
Linux
```bash
chmod +x Install-Linux.sh && ./Install-Linux.sh
./Launch-Unix.sh
```
macOS
```bash
chmod +x Install-Mac.sh && ./Install-Mac.sh
./Launch-Unix.sh
```
Manual Installation
```bash
git clone https://github.com/roman-ryzenadvanced/NomadArch-v1.0.git
cd NomadArch
npm install
npm run dev:electron
```
🚀 Features
Core Features
| Feature | Description |
|---|---|
| 🤖 Multi-Provider AI | GLM 4.7, Anthropic, OpenAI, Google, Qwen, Ollama |
| 🖥️ Electron Desktop App | Native feel with modern web technologies |
| 📁 Workspace Management | Organize your projects efficiently |
| 💬 Real-time Streaming | Live responses from AI models |
| 🔧 Smart Fix | AI-powered code error detection and fixes |
| 🔌 Ollama Integration | Run local AI models for privacy (example below) |
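The Ollama integration ultimately talks to Ollama's local HTTP API (default port 11434). A minimal standalone request looks like this; it illustrates the public Ollama API, not NomadArch's internal client code.

```ts
// Requires a running Ollama server and Node 18+ (global fetch, top-level await in ESM).
const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1",                      // any model pulled via `ollama pull`
    prompt: "Explain this stack trace: ...",
    stream: false,                          // set true to stream tokens instead
  }),
});

const data = await response.json();
console.log(data.response);                 // the model's completion text
```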
UI/UX Highlights
| Mode | Description |
|---|---|
| ⚡ MULTIX Mode | Multi-task parallel AI conversations with live token counting |
| 🛡️ SHIELD Mode | Auto-approval for hands-free operation |
| 🚀 APEX Mode | Autonomous AI that chains tasks together |
🆕 What's New
🎨 Branding & Identity
- ✅ New Branding: "NomadArch" with proper attribution to OpenCode
- ✅ Updated Loading Screen: New branding with fork attribution
- ✅ Updated Empty States: All screens show NomadArch branding
🔐 Qwen OAuth Integration
- ✅ Fixed OAuth Flow: Resolved "Body cannot be empty" error
- ✅ Proper API Bodies: POST requests now include proper JSON bodies
- ✅ Fixed Device Poll Schema: Corrected Fastify schema validation (see the sketch below)
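A hedged sketch of what a Fastify device-poll route with an explicit JSON body schema can look like. The route path, port, and response shape are assumptions; only the "empty body" problem and the schema-validation fix come from the notes above. The field names follow the standard OAuth 2.0 device flow.

```ts
import Fastify from "fastify";

const app = Fastify();

app.post("/auth/qwen/device/poll", {
  // Declare the JSON body explicitly so Fastify validates it instead of
  // rejecting the request or forwarding an empty POST upstream.
  schema: {
    body: {
      type: "object",
      required: ["device_code"],
      properties: {
        device_code: { type: "string" },
        grant_type: { type: "string" },
      },
    },
  },
  handler: async (request) => {
    const { device_code } = request.body as { device_code: string };
    // Forward the poll with a real JSON body, so the provider never sees
    // an empty POST ("Body cannot be empty").
    return { status: "pending", device_code };
  },
});

await app.listen({ port: 3001 }); // port is an assumption
```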
🚀 MULTIX Mode Enhancements
- ✅ Live Streaming Token Counter: Visible in header during AI processing
- ✅ Thinking Roller Indicator: Animated indicator with bouncing dots
- ✅ Token Stats Display: Shows input/output tokens processed
- ✅ Auto-Scroll: Intelligent scrolling during streaming (see the sketch below)
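A minimal sketch of the live token counter and auto-scroll behaviour, assuming a SolidJS component; the signal names, streaming callback, and scroll threshold are illustrative, not the actual MULTIX implementation.

```tsx
import { createSignal } from "solid-js";

function StreamHeader() {
  const [inputTokens, setInputTokens] = createSignal(0);   // set once when the prompt is sent
  const [outputTokens, setOutputTokens] = createSignal(0);
  let scrollEl: HTMLDivElement | undefined;

  // Called once per streamed chunk by the transport layer (assumed callback).
  const onChunk = (chunkTokens: number) => {
    setOutputTokens((n) => n + chunkTokens);
    // Only auto-scroll if the user is already near the bottom of the message list.
    if (scrollEl && scrollEl.scrollHeight - scrollEl.scrollTop - scrollEl.clientHeight < 80) {
      scrollEl.scrollTop = scrollEl.scrollHeight;
    }
  };

  return (
    <div>
      <header>
        ↑ {inputTokens()} in / ↓ {outputTokens()} out
      </header>
      <div ref={scrollEl} class="messages" />
    </div>
  );
}
```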
🐛 Bug Fixes
- ✅ Fixed Qwen OAuth "empty body" errors
- ✅ Fixed MultiX panel being pushed off screen
- ✅ Fixed top menu/toolbar disappearing
- ✅ Fixed layout breaking when scrolling
- ✅ Fixed sessions not showing on workspace entry
🎮 Button Guide
| Button | Description |
|---|---|
| AUTHED | Shows authentication status (Green = connected) |
| AI MODEL | Click to switch between AI models |
| SMART FIX | AI analyzes code for errors and applies fixes |
| BUILD | Compiles and builds your project |
| APEX | Autonomous mode - AI chains actions automatically |
| SHIELD | Auto-approval mode - AI makes changes without prompts |
| MULTIX MODE | Opens multi-task pipeline for parallel conversations |
📁 Project Structure
```
NomadArch/
├── Install-*.bat/.sh      # Platform installers
├── Launch-*.bat/.sh       # Platform launchers
├── packages/
│   ├── electron-app/      # Electron main process
│   ├── server/            # Backend (Fastify)
│   ├── ui/                # Frontend (SolidJS + Vite)
│   └── opencode-config/   # OpenCode configuration
└── README.md
```
🔧 Requirements
| Requirement | Version |
|---|---|
| Node.js | v18+ |
| npm | v9+ |
| OS | Windows 10+, macOS 11+, Linux |
🆘 Troubleshooting
Common Issues & Solutions
Dependencies not installed?
```
# Run the installer for your platform
Install-Windows.bat    # Windows
./Install-Linux.sh     # Linux
./Install-Mac.sh       # macOS
```
Port conflict?
```
# Kill the process on port 3000/3001
taskkill /F /PID <PID>   # Windows
kill -9 <PID>            # Unix
```
OAuth fails?
- Check internet connection
- Complete OAuth in browser
- Clear browser cookies and retry
🙏 Credits
Built with amazing open source projects:
| Category | Projects |
|---|---|
| Framework | SolidJS, Vite, TypeScript, Electron |
| UI | TailwindCSS, Kobalte, SUID Material |
| Backend | Fastify, Ollama |
| AI | OpenCode CLI, Various AI SDKs |
📄 License
This project is a fork of CodeNomad.
Made with ❤️ by NeuralNomadsAI
NomadArch is an enhanced fork of CodeNomad