# Claude Code PowerShell Python App

A sophisticated PowerShell wrapper application that provides coding assistance in the style of Claude Code, using Qwen3-Coder models with support for both LM Studio server and direct model loading.

## Files Created

- `lm_studio_client.py` - Enhanced Python client script with Qwen3-Coder features
- `lm_studio.ps1` - PowerShell wrapper script
- `README.md` - This documentation

## Prerequisites

1. Python 3.7+ installed and in PATH
2. For LM Studio: LM Studio running with its server enabled on http://127.0.0.1:1234
3. For Qwen direct: the `transformers` and `torch` libraries
4. The Python `requests` library (auto-installed by the script)
5. Optional: Flask for the web interface

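LM Studio's local server speaks an OpenAI-compatible API, so a request from the client side can be sketched with `requests`. This is a minimal illustration, not the script's actual code; the model name `qwen3-coder` is a placeholder for whatever model is loaded:

```python
def build_chat_payload(prompt, model="qwen3-coder",
                       system="You are a helpful coding assistant."):
    """Build an OpenAI-style chat-completions payload for LM Studio's server."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

# Sending it requires a running LM Studio server:
#   import requests
#   resp = requests.post("http://127.0.0.1:1234/v1/chat/completions",
#                        json=build_chat_payload("Write a quicksort"))
#   print(resp.json()["choices"][0]["message"]["content"])
```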
## Usage

### PowerShell Commands:

**List available models (LM Studio only):**
```powershell
.\lm_studio.ps1 -ListModels
```

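Under the hood, listing models queries the server's OpenAI-compatible `GET /v1/models` endpoint. Extracting the ids from that response body can be sketched as follows (the sample response shape assumes the standard OpenAI format that LM Studio mirrors):

```python
def model_ids(models_response):
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [entry["id"] for entry in models_response.get("data", [])]

# Example response shape (model names are illustrative):
sample = {"data": [{"id": "qwen3-coder-30b"}, {"id": "qwen2.5-coder-7b"}]}
print(model_ids(sample))  # ['qwen3-coder-30b', 'qwen2.5-coder-7b']
```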
**Single prompt:**
```powershell
.\lm_studio.ps1 -Prompt "Write a Python function to sort a list"
```

**Interactive chat mode:**
```powershell
.\lm_studio.ps1 -Interactive
```

**With specific language focus:**
```powershell
.\lm_studio.ps1 -Interactive -Language python
```

**Using Qwen direct model:**
```powershell
.\lm_studio.ps1 -Client qwen -Prompt "Hello"
```

**Fill-in-the-middle code completion:**
```powershell
.\lm_studio.ps1 -Client qwen -FimPrefix "def sort_list(arr):" -FimSuffix "return sorted_arr"
```

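Fill-in-the-middle works by wrapping the prefix and suffix in the model's special FIM tokens and letting the model generate the missing middle. Assembling such a prompt can be sketched as below; the token names are the ones Qwen's coder models have used (e.g. Qwen2.5-Coder), so check the tokenizer config of your exact model:

```python
def build_fim_prompt(prefix, suffix):
    """Assemble a Qwen-style FIM prompt; the model completes after <|fim_middle|>."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = build_fim_prompt("def sort_list(arr):\n    ", "\n    return sorted_arr")
```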
**Start web interface:**
```powershell
.\lm_studio.ps1 -Web -Port 8080
```

**Start terminal user interface:**
```powershell
.\lm_studio.ps1 -Tui
```

### Direct Python Usage:

```bash
python lm_studio_client.py --help
python lm_studio_client.py --client qwen --prompt "Create a REST API"
python lm_studio_client.py --interactive
python lm_studio_client.py --client qwen --fim-prefix "def hello():" --fim-suffix "print('world')"
python lm_studio_client.py --web --port 5000
python lm_studio_client.py --tui
```

## Features

- **Dual Client Support**: LM Studio server or direct Qwen3-Coder model loading
- **Interactive Chat**: Real-time conversation with the AI coding assistant
- **Terminal User Interface**: Curses-based TUI for interactive chat
- **Fill-in-the-Middle**: Advanced code completion for partial code snippets
- **Language-Specific Assistance**: Focus on specific programming languages
- **Web Interface**: Modern web UI with tabs for different features
- **Model Selection**: Choose from available models
- **Auto-Dependency Installation**: Automatically installs required Python packages
- **Error Handling**: Robust error handling and validation

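The auto-dependency installation pattern can be sketched as: try to import the package, and fall back to invoking `pip` through the running interpreter if the import fails. This is a minimal illustration, not necessarily the script's exact code:

```python
import importlib
import subprocess
import sys

def ensure_package(name):
    """Import `name`, installing it with pip first if the import fails."""
    try:
        return importlib.import_module(name)
    except ImportError:
        # Use the current interpreter so the package lands in the right environment
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
        return importlib.import_module(name)

# Example: requests = ensure_package("requests")
```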
## Qwen3-Coder Features

- **Agentic Coding**: Advanced coding capabilities with tool use
- **Long Context**: Support for up to 256K tokens
- **358 Programming Languages**: Comprehensive language support
- **Fill-in-the-Middle**: Specialized code completion
- **Function Calling**: Tool integration capabilities

## Setup

1. For LM Studio: Ensure LM Studio is running with its server enabled
2. For Qwen direct: Install transformers: `pip install transformers torch`
3. For the web interface: Install Flask: `pip install flask`
4. Run the PowerShell script from the same directory
5. The script will auto-install required Python dependencies

## Web Interface

The web interface provides three main tabs:

- **Chat**: Interactive conversation with the AI
- **Fill-in-the-Middle**: Code completion for partial snippets
- **Code Assistant**: Generate code from descriptions

Access it at `http://localhost:5000` (or a custom port).