LLM Chat Interface
ID: loading...
Clear chat
📊 Statistics
📊 Statistics
×
Total prompts:
0
Average response time:
0.0s
Generation speed:
0.0 t/s
Tokens per minute:
0
Last request:
-
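The statistics above (total prompts, average response time, generation speed in t/s, tokens per minute) can all be derived from a per-request log of token counts and elapsed times. A minimal sketch, assuming each request is recorded as a `(tokens_generated, elapsed_seconds)` pair; the function name and fields are illustrative, not this app's actual schema:

```python
def compute_stats(requests):
    """requests: list of (tokens_generated, elapsed_seconds) tuples."""
    if not requests:
        # Matches the panel's initial zero/placeholder state.
        return {"prompts": 0, "avg_time_s": 0.0,
                "tokens_per_s": 0.0, "tokens_per_min": 0}
    total_tokens = sum(t for t, _ in requests)
    total_time = sum(s for _, s in requests)
    rate = total_tokens / total_time if total_time else 0.0
    return {
        "prompts": len(requests),
        "avg_time_s": total_time / len(requests),
        "tokens_per_s": rate,                # shown as "t/s"
        "tokens_per_min": round(rate * 60),  # same rate, per-minute units
    }

print(compute_stats([(120, 4.0), (80, 6.0)]))
# → {'prompts': 2, 'avg_time_s': 5.0, 'tokens_per_s': 20.0, 'tokens_per_min': 1200}
```

Note that "Tokens per minute" is just the t/s rate scaled by 60, so the two figures should always agree.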
Welcome! You can ask the local LLM model questions.
Send
Max Tokens:
Temperature:
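Of the two settings, Max Tokens caps the length of the generated response, while Temperature rescales the model's next-token distribution: lower values sharpen it (more deterministic), higher values flatten it (more random). A hedged sketch of the usual temperature-scaled softmax; the function name and interface are illustrative, not this app's actual API:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature -> sharper."""
    t = max(temperature, 1e-6)           # guard against division by zero
    scaled = [l / t for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: top logit dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: choices more even
```

The sampler then draws the next token from these probabilities, so the same prompt can yield different answers at higher temperatures.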
Auto-scroll
The model is generating a response...