Update student guide with full app.py documentation

Add clone/venv setup instructions, feature descriptions for both tabs,
sidebar parameter table, and clarify that files stay local.

Made-with: Cursor

parent deee5038d1
commit d59285fe69

@@ -109,12 +109,21 @@ curl http://silicon.fhgr.ch:7080/v1/chat/completions \

## Streamlit Chat & File Editor App

A web UI is included for chatting with the model and editing files. It runs
on your own machine and connects to the GPU server.
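
A minimal sketch of what that connection looks like from Python, assuming the
OpenAI-compatible endpoint from the curl example above; the model name and API
key are placeholders, not values confirmed by this guide:

```python
from openai import OpenAI

# Assumed endpoint, taken from the curl example earlier in this guide.
client = OpenAI(base_url="http://silicon.fhgr.ch:7080/v1", api_key="not-needed")

# Stream a chat completion token by token, as the Chat tab does.
stream = client.chat.completions.create(
    model="default",  # placeholder; use whatever the server exposes
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```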

### Setup

```bash
# Clone the repository
git clone https://gitea.fhgr.ch/herzogfloria/LLM_Inferenz_Server_1.git
cd LLM_Inferenz_Server_1

# Create a virtual environment and install dependencies
python3 -m venv .venv
source .venv/bin/activate  # macOS / Linux
# .venv\Scripts\activate   # Windows
pip install -r requirements.txt
```
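
If `requirements.txt` is ever missing or out of date, the two core
dependencies can presumably be installed directly; an earlier revision of
this guide used exactly this command:

```bash
pip install streamlit openai
```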

### Run

@@ -123,18 +132,40 @@

```bash
streamlit run app.py
```

Opens at `http://localhost:8501` in your browser.
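
If port 8501 is already in use, Streamlit's standard `--server.port` flag
starts the app on another one (a generic Streamlit option, not specific to
this app):

```bash
streamlit run app.py --server.port 8502
```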

### Features

**Chat Tab**

- Conversational interface with streaming responses
- "Save code" button extracts code from the LLM response and saves it to a
  workspace file (strips markdown formatting automatically, as sketched below)
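
A minimal sketch of how such stripping can work; this is a hypothetical
helper, not necessarily the exact code in `app.py`:

```python
import re

# Matches the body of the first fenced code block in a markdown response.
FENCE = re.compile(r"`{3}[\w+-]*\n(.*?)`{3}", re.DOTALL)

def extract_code(response: str) -> str:
    """Return the first fenced code block, or the raw text if none is found."""
    match = FENCE.search(response)
    return match.group(1).strip() if match else response.strip()

# Example: build a fenced reply without literal triple backticks in this file.
reply = "Here you go:\n" + "`" * 3 + "python\nprint('hi')\n" + "`" * 3
print(extract_code(reply))  # -> print('hi')
```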

**File Editor Tab**

- Create and edit `.py`, `.tex`, `.html`, or any text file
- Syntax-highlighted preview of file content
- "Generate with LLM" button: describe a change in natural language and the
  model rewrites the file (e.g. "add error handling", "fix the LaTeX formatting",
  "translate comments to German"); a sketch of such a request follows below
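
A sketch of what the "Generate with LLM" request can look like; the system
prompt and model name are illustrative assumptions, only the base URL comes
from this guide:

```python
from openai import OpenAI

client = OpenAI(base_url="http://silicon.fhgr.ch:7080/v1", api_key="not-needed")

def rewrite_file(content: str, instruction: str) -> str:
    """Ask the model to rewrite a file according to a natural-language instruction."""
    response = client.chat.completions.create(
        model="default",  # placeholder; use whatever the server exposes
        messages=[
            {"role": "system", "content": (
                "Rewrite the user's file as instructed. "
                "Return only the complete, updated file content."
            )},
            {"role": "user", "content": f"Instruction: {instruction}\n\nFile:\n{content}"},
        ],
    )
    return response.choices[0].message.content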

**Sidebar Controls**

- **Connection**: API Base URL and API Key
- **LLM Parameters**: Adjustable for each request (see the table and the
  sketch that follows it)

| Parameter | Default | What it does |
|-----------|---------|--------------|
| Thinking Mode | Off | Toggle chain-of-thought reasoning (better for complex tasks, slower) |
| Temperature | 0.7 | Lower = predictable, higher = creative |
| Max Tokens | 4096 | Maximum response length |
| Top P | 0.95 | Nucleus sampling threshold |
| Presence Penalty | 0.0 | Encourage diverse topics |

- **File Manager**: Create new files and switch between them
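
How those defaults map onto an OpenAI-compatible request, as a sketch; the
field names are the standard API parameters, while the mechanism behind
"Thinking Mode" is model/server-specific and therefore omitted:

```python
from openai import OpenAI

client = OpenAI(base_url="http://silicon.fhgr.ch:7080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="default",        # placeholder
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,        # lower = more predictable, higher = more creative
    max_tokens=4096,        # cap on response length
    top_p=0.95,             # nucleus sampling threshold
    presence_penalty=0.0,   # positive values encourage new topics
)
print(response.choices[0].message.content)
```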

All generated files are stored in a `workspace/` folder next to `app.py`.
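
A sketch of how such a workspace folder can be handled; `save_to_workspace`
is a hypothetical helper, not necessarily the function used in `app.py`:

```python
from pathlib import Path

# Workspace folder next to this script, created on first use.
WORKSPACE = Path(__file__).parent / "workspace"

def save_to_workspace(filename: str, content: str) -> Path:
    """Write content to a file inside the workspace folder and return its path."""
    WORKSPACE.mkdir(exist_ok=True)
    path = WORKSPACE / filename
    path.write_text(content, encoding="utf-8")
    return path
```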

> **Tip**: The app runs entirely on your local machine. Only the LLM requests
> go to the server — your files stay local.

---