# local-llm-pilot-completion

Inline completion with CodeQwen1.5.

## Setup

### llm server

There are two ways to run CodeQwen1.5 on a local PC.

With ollama:

```sh
ollama pull codeqwen:v1.5-code
ollama serve
```

With lmdeploy:

```sh
lmdeploy serve api_server Qwen/CodeQwen1.5-7B
```

Test the ollama/lmdeploy API URL before configuring the extension (see the curl sketch below).

### vscode settings

Set the ollama/openai API URL in the extension settings; an example follows the curl sketch below.
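A minimal way to test the API URL from the llm server step, assuming the servers' default ports (11434 for ollama, 23333 for lmdeploy); the ports and the lmdeploy model id are assumptions, so adjust them to your launch options and the ids returned by `/v1/models`:

```sh
# ollama: native generate endpoint, default port 11434 (assumed)
curl http://localhost:11434/api/generate \
  -d '{"model": "codeqwen:v1.5-code", "prompt": "def fib(n):", "stream": false}'

# lmdeploy: OpenAI-compatible server, default port 23333 (assumed)
# list the served model ids first, then use one in a completion request
curl http://localhost:23333/v1/models
curl http://localhost:23333/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/CodeQwen1.5-7B", "prompt": "def fib(n):", "max_tokens": 32}'
```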
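In VS Code the URL goes into `settings.json`. The key names below are hypothetical placeholders that only show the shape of the configuration; check the extension's contributed settings in the VS Code settings UI for the real ones:

```jsonc
// settings.json: hypothetical key names for illustration only;
// look up the extension's actual contributed settings.
{
  "localLlmPilotCompletion.apiType": "ollama",               // or "openai" for lmdeploy
  "localLlmPilotCompletion.apiUrl": "http://localhost:11434"
}
```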
## Requirements

A local ollama or lmdeploy server running CodeQwen1.5 (see Setup).

## Release Notes

### 0.0.1

Initial release of local llm pilot completion.

### 0.0.2

Support the openai API type.

### 0.0.4

Support single-line completion and trimming.