local llm pilot completion

mutemoon

Installation

Launch VS Code Quick Open (Ctrl+P), paste the following command, and press Enter:

ext install mutemoon.local-llm-pilot-completion

local-llm-pilot-completion

Inline code completion with CodeQwen1.5.

Setup

LLM server

There are two ways to run CodeQwen1.5 on your local machine:

  1. Ollama

ollama pull codeqwen:v1.5-code

ollama serve

  2. LMDeploy

lmdeploy serve api_server Qwen/CodeQwen1.5-7B

Test the Ollama/LMDeploy API URL before configuring the extension.
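
A minimal sanity check, assuming the default ports (Ollama listens on 11434; LMDeploy's api_server defaults to 23333); adjust the URLs if you changed them:

# Ollama: request a short completion from the local server
curl http://localhost:11434/api/generate -d '{"model": "codeqwen:v1.5-code", "prompt": "def fib(n):", "stream": false}'

# LMDeploy: list the models behind its OpenAI-compatible endpoint
curl http://localhost:23333/v1/models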

VS Code settings

Set the Ollama/OpenAI API URL in the extension's settings.
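
As a sketch only, with hypothetical setting keys (the exact names are contributed by the extension; check its entries in the VS Code settings UI), this merges an API type and URL into a plain-JSON user settings.json at the default Linux path:

# Hypothetical keys below -- replace with the extension's actual setting names.
# Assumes settings.json contains no comments, so jq can parse it.
SETTINGS="$HOME/.config/Code/User/settings.json"
jq '. + {
  "local-llm-pilot-completion.apiType": "ollama",
  "local-llm-pilot-completion.apiUrl": "http://localhost:11434"
}' "$SETTINGS" > "$SETTINGS.tmp" && mv "$SETTINGS.tmp" "$SETTINGS"

For LMDeploy, the OpenAI API type with a URL of http://localhost:23333/v1 would be the analogous choice.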

Requirements

  • A running Ollama or LMDeploy server serving CodeQwen1.5

Release Notes

0.0.1

Initial release of local llm pilot completion

0.0.2

Support for the OpenAI API type

0.0.4

Support for single-line completion and trimming
