
Terminal to Model

Terminal to Model is a VS Code extension that captures terminal output, lets you edit it, and sends it to a Large Language Model (LLM) for processing. It provides a seamless workflow for turning raw terminal data into actionable AI‑generated responses directly within VS Code.

Features

  • Capture Terminal Output – Automatically grabs the output of any terminal session (a sketch of one way to do this follows this list).
  • Edit Before Sending – Open the captured text in an editor to refine or annotate.
  • LLM Integration – Sends the edited text to a configurable LLM endpoint and displays the response in a webview.
  • Configurable Settings – Choose the endpoint, model, temperature, max tokens, and more via VS Code settings.
  • Activity Bar View – A dedicated “LLM” activity bar icon gives quick access to the Terminal → LLM view.
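
The capture mechanism is not documented here, but the sketch below shows one plausible approach using VS Code's shell-integration API (stable since VS Code 1.93). This is an illustrative assumption, not necessarily how the extension is implemented; the command id terminalToModel.getLastOutput is hypothetical:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  let lastOutput = '';

  // Fires when a command starts in a terminal that has shell integration enabled.
  context.subscriptions.push(
    vscode.window.onDidStartTerminalShellExecution(async (event) => {
      let buffer = '';
      // execution.read() streams the command's output as string chunks.
      for await (const chunk of event.execution.read()) {
        buffer += chunk;
      }
      lastOutput = buffer; // keep only the most recent command's output
    })
  );

  // Hypothetical command other parts of the extension could call to fetch the capture.
  context.subscriptions.push(
    vscode.commands.registerCommand('terminalToModel.getLastOutput', () => lastOutput)
  );
}

Note that this approach only sees output from terminals with shell integration active; without it, no execution events are emitted.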

Installation

  1. Open the Extensions view (Ctrl+Shift+X).
  2. Search for Terminal to Model (publisher: YangXian).
  3. Click Install.
  4. Reload VS Code if prompted.

Alternatively, install the packaged VSIX (packaging requires the vsce CLI, published as the @vscode/vsce npm package):

npm install -g @vscode/vsce    # one-time: install the packaging tool
vsce package                   # creates a .vsix file
code --install-extension terminal-to-model-0.0.1.vsix

Usage

  1. Open a terminal (Ctrl+`) and run any command.
  2. When you want to process the output, open the LLM view from the Activity Bar.
  3. The extension will capture the latest terminal output, open it in an editor, and let you edit.
  4. Press Send to LLM (button in the webview) to submit the text.
  5. The LLM’s response appears in the same view (a sketch of the request this step presumably makes follows these steps).
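
The exact request format is not documented, but the default endpoint’s /v1/chat/completions path suggests an OpenAI-compatible chat API. The sketch below shows what the Send to LLM step presumably does; the sendToModel name and ModelSettings shape are illustrative, not the extension’s actual code:

// Illustrative only: assumes an OpenAI-compatible chat/completions endpoint.
interface ModelSettings {
  endpoint: string;
  model: string;
  apiKey: string;
  systemPrompt: string;
  temperature: number;
  maxTokens: number;
}

async function sendToModel(editedText: string, s: ModelSettings): Promise<string> {
  const response = await fetch(s.endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${s.apiKey}`,
    },
    body: JSON.stringify({
      model: s.model,
      messages: [
        { role: 'system', content: s.systemPrompt },
        { role: 'user', content: editedText }, // the edited terminal output
      ],
      temperature: s.temperature,
      max_tokens: s.maxTokens,
    }),
  });
  if (!response.ok) {
    throw new Error(`LLM request failed: ${response.status} ${response.statusText}`);
  }
  const data = (await response.json()) as any;
  return data.choices[0].message.content; // standard chat/completions response shape
}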

Extension Settings

Configure the extension via Settings → Extensions → Terminal to Model or directly in settings.json:

| Setting | Type | Default | Description |
|---|---|---|---|
| terminalToModel.endpoint | string | http://luoxianhang.xyz:8009/v1/chat/completions | HTTP endpoint of the LLM service. |
| terminalToModel.model | string | gpt-oss-120b | Model name to use. |
| terminalToModel.apiKey | string | vllm | Optional API key if the service requires authentication. |
| terminalToModel.systemPrompt | string | You are a helpful assistant. | System prompt sent to the LLM. |
| terminalToModel.temperature | number | 0.7 | Sampling temperature (0–1). |
| terminalToModel.maxTokens | integer | 1024 | Maximum number of tokens in the LLM response. |
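
For reference, this is how an extension typically reads such values with the VS Code configuration API; the defaults mirror the table above, and the loadSettings helper is illustrative rather than the extension’s actual code:

import * as vscode from 'vscode';

// Illustrative helper: reads the extension's settings with the documented defaults.
function loadSettings() {
  const cfg = vscode.workspace.getConfiguration('terminalToModel');
  return {
    endpoint: cfg.get<string>('endpoint', 'http://luoxianhang.xyz:8009/v1/chat/completions'),
    model: cfg.get<string>('model', 'gpt-oss-120b'),
    apiKey: cfg.get<string>('apiKey', 'vllm'),
    systemPrompt: cfg.get<string>('systemPrompt', 'You are a helpful assistant.'),
    temperature: cfg.get<number>('temperature', 0.7),
    maxTokens: cfg.get<number>('maxTokens', 1024),
  };
}

// Settings edits take effect without a reload if the extension listens for changes.
vscode.workspace.onDidChangeConfiguration((e) => {
  if (e.affectsConfiguration('terminalToModel')) {
    const settings = loadSettings(); // re-read and apply the new values
    console.log('terminalToModel settings updated:', settings);
  }
});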

Known Issues

  • The extension currently captures only the most recent terminal output; long‑running sessions may need manual selection.
  • Some LLM services require CORS headers; ensure your endpoint permits requests from VS Code.

Release Notes

0.0.1 (Initial Release)

  • Capture terminal output.
  • Edit and send to configurable LLM endpoint.
  • Activity bar view with webview for responses.
  • Basic configuration options.

Contributing

Contributions are welcome! Feel free to open issues or submit pull requests on the repository.

License

MIT

Enjoy!
