Terminal to Model is a VS Code extension that captures terminal output, lets you edit it, and sends it to a Large Language Model (LLM) for processing. It provides a seamless workflow for turning raw terminal data into actionable AI‑generated responses directly within VS Code.
## Features

- **Capture Terminal Output** – Automatically grabs the output of any terminal session.
- **Edit Before Sending** – Open the captured text in an editor to refine or annotate it.
- **LLM Integration** – Sends the edited text to a configurable LLM endpoint and displays the response in a webview (see the example request below).
- **Configurable Settings** – Choose the endpoint, model, temperature, max tokens, and more via VS Code settings.
- **Activity Bar View** – A dedicated "LLM" Activity Bar icon gives quick access to the Terminal → LLM view.
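For reference, the default endpoint follows the OpenAI-compatible `/v1/chat/completions` format, so a request built from the default settings would look roughly like the sketch below. This is an illustration based on the defaults, not a field-for-field guarantee of the payload the extension constructs:

```json
{
  "model": "gpt-oss-120b",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "<edited terminal output goes here>" }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```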
## Installation

1. Open the Extensions view (`Ctrl+Shift+X`).
2. Search for **Terminal to Model** (publisher: `cline`).
3. Click **Install**.
4. Reload VS Code if prompted.
Alternatively, you can install the packaged VSIX:

```bash
vsce package   # creates a .vsix file
code --install-extension terminal-to-model-0.0.1.vsix
```
## Usage

1. Open a terminal (``Ctrl+` ``) and run any command.
2. When you want to process the output, open the **LLM** view from the Activity Bar.
3. The extension captures the latest terminal output and opens it in an editor so you can refine it.
4. Press **Send to LLM** (the button in the webview) to submit the text.
5. The LLM's response appears in the same view.
## Extension Settings

Configure the extension via **Settings → Extensions → Terminal to Model** or directly in `settings.json`:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `terminalToModel.endpoint` | string | `http://luoxianhang.xyz:8009/v1/chat/completions` | HTTP endpoint of the LLM service. |
| `terminalToModel.model` | string | `gpt-oss-120b` | Model name to use. |
| `terminalToModel.apiKey` | string | `vllm` | Optional API key if the service requires authentication. |
| `terminalToModel.systemPrompt` | string | `You are a helpful assistant.` | System prompt sent to the LLM. |
| `terminalToModel.temperature` | number | `0.7` | Sampling temperature (0–1). |
| `terminalToModel.maxTokens` | integer | `1024` | Maximum number of tokens in the LLM response. |
## Known Issues

- The extension currently captures only the most recent terminal output; for long-running sessions you may need to select the relevant text manually.
- Some LLM services require CORS headers; ensure your endpoint permits requests from VS Code (see the header sketch below).
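If requests fail with CORS errors, a quick sanity check is to inspect the service's response headers. A permissive configuration would include something like the following; the header names are standard CORS, but the appropriate values depend on how your service is deployed:

```http
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
```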
## Release Notes

### 0.0.1 (Initial Release)

- Capture terminal output.
- Edit and send to configurable LLM endpoint.
- Activity Bar view with webview for responses.
- Basic configuration options.
## Contributing
Contributions are welcome! Feel free to open issues or submit pull requests on the repository.