Edit README.md

Roman (RyzenAdvanced)
2025-12-07 09:20:42 +00:00
Unverified
parent 4c0b726f05
commit 0a5ea42029

# Claude Code PowerShell Python App

A sophisticated PowerShell wrapper application that provides coding assistance in the style of Claude Code, using Qwen3-Coder models with support for both LM Studio server and direct model loading.
## Files Created

- `lm_studio_client.py` - Enhanced Python client script with Qwen3-Coder features
- `lm_studio.ps1` - PowerShell wrapper script
- `README.md` - This documentation
## Prerequisites

1. Python 3.7+ installed and in PATH
2. For LM Studio: LM Studio running with server enabled on http://127.0.0.1:1234
3. For Qwen direct: transformers and torch libraries
4. Python requests library (auto-installed by script)
5. Optional: Flask for web interface
## Usage

### PowerShell Commands:
**List available models (LM Studio only):**
```powershell
.\lm_studio.ps1 -ListModels
```
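Under the hood, `-ListModels` can be implemented against LM Studio's OpenAI-compatible `/v1/models` endpoint. A minimal sketch in the style of the Python client; the model id in the sample response is illustrative, not a real listing:

```python
import json
from typing import List
from urllib.request import urlopen

LM_STUDIO_URL = "http://127.0.0.1:1234"  # default server address from the prerequisites

def parse_model_ids(body: str) -> List[str]:
    """Extract model ids from an OpenAI-style /v1/models response."""
    return [m["id"] for m in json.loads(body).get("data", [])]

def list_models() -> List[str]:
    # Requires a running LM Studio server; raises URLError otherwise.
    with urlopen(LM_STUDIO_URL + "/v1/models") as resp:
        return parse_model_ids(resp.read().decode("utf-8"))

# Offline example of the response shape the endpoint returns:
sample = '{"object": "list", "data": [{"id": "qwen3-coder-30b", "object": "model"}]}'
```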
**Single prompt:**
```powershell
.\lm_studio.ps1 -Prompt "Write a Python function to sort a list"
```
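A single `-Prompt` call maps onto one OpenAI-style `/v1/chat/completions` request, the API the LM Studio server exposes. A sketch of the payload the Python client might assemble; the model name and temperature are illustrative, not read from the real script:

```python
def build_chat_request(prompt: str, model: str = "qwen3-coder") -> dict:
    """Assemble an OpenAI-style chat-completions payload for the LM Studio server."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,   # illustrative default
        "stream": False,
    }

payload = build_chat_request("Write a Python function to sort a list")
```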
**Interactive chat mode:**
```powershell
.\lm_studio.ps1 -Interactive
```
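Interactive mode boils down to maintaining a growing message list and resending it each turn. A hedged sketch of that bookkeeping; the trimming policy is an assumption, not taken from the actual client:

```python
from typing import Dict, List

def make_history(system_prompt: str) -> List[Dict[str, str]]:
    """Start a conversation with only the system message."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history: List[Dict[str, str]], user_text: str, assistant_text: str,
             max_turns: int = 20) -> List[Dict[str, str]]:
    """Append one exchange, dropping the oldest turns past max_turns (system stays)."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    # keep the system message plus the last max_turns user/assistant pairs
    excess = len(history) - (1 + 2 * max_turns)
    if excess > 0:
        del history[1:1 + excess]
    return history

h = make_history("You are a helpful coding assistant.")
add_turn(h, "hello", "Hi! How can I help?", max_turns=1)
add_turn(h, "sort a list", "Use sorted(arr).", max_turns=1)
```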
**With specific language focus:**
```powershell
.\lm_studio.ps1 -Interactive -Language python
```
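`-Language` plausibly just steers the system prompt toward one language. A small sketch; the `LANGUAGE_HINTS` table and its wording are hypothetical, not taken from the client:

```python
# Hypothetical per-language hints appended to the system message.
LANGUAGE_HINTS = {
    "python": "Answer with idiomatic Python 3 and include type hints.",
    "powershell": "Answer with PowerShell 7 syntax and approved verbs.",
}

def system_prompt(language=None):
    """Build the system message, optionally biased toward one language."""
    base = "You are a helpful coding assistant."
    hint = LANGUAGE_HINTS.get((language or "").lower())
    return base + " " + hint if hint else base
```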
**Using Qwen direct model:**
```powershell
.\lm_studio.ps1 -Client qwen -Prompt "Hello"
```
**Fill-in-the-middle code completion:** max-width: 80%;
```powershell display: flex;
.\lm_studio.ps1 -Client qwen -FimPrefix "def sort_list(arr):" -FimSuffix "return sorted_arr" flex-direction: column;
``` align-items: center;
}
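Fill-in-the-middle works by wrapping the prefix and suffix in the model's FIM special tokens so it generates only the missing middle. The token strings below are the ones used by the Qwen-Coder family; verify them against the tokenizer config of the exact model you load:

```python
# FIM special tokens as used by the Qwen-Coder models (assumption: check your tokenizer).
FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix so the model generates the missing middle."""
    return FIM_PREFIX + prefix + FIM_SUFFIX + suffix + FIM_MIDDLE

prompt = build_fim_prompt("def sort_list(arr):", "return sorted_arr")
```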
**Start web interface:**
```powershell
.\lm_studio.ps1 -Web -Port 8080
```
**Start terminal user interface:**
```powershell
.\lm_studio.ps1 -Tui
```
### Direct Python Usage:

```bash
python lm_studio_client.py --help
python lm_studio_client.py --client qwen --prompt "Create a REST API"
python lm_studio_client.py --interactive
python lm_studio_client.py --client qwen --fim-prefix "def hello():" --fim-suffix "print('world')"
python lm_studio_client.py --web --port 5000
python lm_studio_client.py --tui
```
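The flags above map naturally onto `argparse`. A sketch of a parser that accepts the documented invocations; the defaults and choices are guesses, not read from the real `lm_studio_client.py`:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flag names mirror the examples above; defaults are illustrative guesses.
    p = argparse.ArgumentParser(description="Claude Code style coding assistant")
    p.add_argument("--client", choices=["lmstudio", "qwen"], default="lmstudio")
    p.add_argument("--prompt")
    p.add_argument("--interactive", action="store_true")
    p.add_argument("--fim-prefix")
    p.add_argument("--fim-suffix")
    p.add_argument("--web", action="store_true")
    p.add_argument("--tui", action="store_true")
    p.add_argument("--port", type=int, default=5000)
    return p

args = build_parser().parse_args(["--client", "qwen", "--prompt", "Create a REST API"])
```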
## Features

- **Dual Client Support**: LM Studio server or direct Qwen3-Coder model loading
- **Interactive Chat**: Real-time conversation with the AI coding assistant
- **Terminal User Interface**: Curses-based TUI for interactive chat
- **Fill-in-the-Middle**: Advanced code completion for partial code snippets
- **Language-Specific Assistance**: Focus on specific programming languages
- **Web Interface**: Modern web UI with tabs for different features
- **Model Selection**: Choose from available models
- **Auto-Dependency Installation**: Automatically installs required Python packages
- **Error Handling**: Robust error handling and validation
## Qwen3-Coder Features

- **Agentic Coding**: Advanced coding capabilities with tool use
- **Long Context**: Support for up to 256K tokens
- **358 Programming Languages**: Comprehensive language support
- **Fill-in-the-Middle**: Specialized code completion
- **Function Calling**: Tool integration capabilities
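Even with a 256K-token window, a long interactive session can eventually overflow, so a client typically trims old history. A rough sketch using a characters-per-token heuristic; a real client should count with the model's tokenizer instead:

```python
from typing import Dict, List

CONTEXT_TOKENS = 262_144          # the 256K-token window listed above
CHARS_PER_TOKEN = 4               # rough heuristic, not the real tokenizer

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def trim_to_context(messages: List[Dict[str, str]],
                    budget: int = CONTEXT_TOKENS) -> List[Dict[str, str]]:
    """Drop the oldest non-system messages until the estimate fits the budget."""
    kept = list(messages)
    while len(kept) > 1 and sum(estimate_tokens(m["content"]) for m in kept) > budget:
        # index 0 is assumed to be the system message; drop the oldest turn after it
        del kept[1]
    return kept

msgs = [{"role": "system", "content": "assist"},
        {"role": "user", "content": "x" * 400},
        {"role": "user", "content": "y" * 40}]
trimmed = trim_to_context(msgs, budget=50)
```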
## Setup

1. For LM Studio: Ensure LM Studio is running with server enabled
2. For Qwen direct: Install transformers: `pip install transformers torch`
3. For web interface: Install Flask: `pip install flask`
4. Run the PowerShell script from the same directory
5. The script will auto-install required Python dependencies
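Step 5's auto-install can be done by probing for the module and shelling out to pip only when it is missing. A sketch of that pattern; it is exercised here against the stdlib `json` module, so no install actually runs:

```python
import importlib.util
import subprocess
import sys
from typing import Optional

def ensure_package(module_name: str, pip_name: Optional[str] = None) -> bool:
    """Install pip_name via pip if module_name cannot be imported; True if available."""
    if importlib.util.find_spec(module_name) is not None:
        return True
    # Use the current interpreter so the package lands in the right environment.
    subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name or module_name])
    return importlib.util.find_spec(module_name) is not None

# 'json' ships with Python, so this returns True without calling pip:
have_json = ensure_package("json")
```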
## Web Interface

The web interface provides three main tabs:
- **Fill-in-the-Middle**: Code completion for partial snippets
- **Code Assistant**: Generate code from descriptions
Access at `http://localhost:5000` (or custom port)