MINI AI PILOT: A Minimalist AI Programming Assistant
Q: What system requirements does my computer need to meet to deploy a local LLM?
A: In the author's tests, the 1.3B model runs comfortably with just 4 GB of VRAM. If your computer has lower specifications, change the value after --n-gpu-layers in CMD_FLAGS.txt from 36 to 0, which runs inference entirely on the CPU; note that this will significantly reduce the speed.
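For reference, the relevant line of CMD_FLAGS.txt might look like this for CPU-only inference (the other flags in your file may differ; this sketch only illustrates the --n-gpu-layers change described above):

```
# GPU offload (default in the deployment package): --n-gpu-layers 36
# CPU-only inference for machines with little or no VRAM:
--n-gpu-layers 0
```

Lowering the value partway (for example to 16) is also possible if you have some VRAM but not enough to offload all layers.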
Q: Why does code completion seem slower than Q&A?
A: Code completion is not streamed; the result is returned only after generation finishes. Generation speed also depends on your hardware, so if your computer has lower specifications, allow a moment after pressing Alt+Q.
Q: Does it support auto-triggering code completion after input or line break?
A: This plugin is intentionally designed to support only manual triggering of completion (Alt+Q or right-click), since automatic triggering can be very distracting at times.
Q: What operating systems are supported?
A: The author's one-click LLM deployment package supports Windows only. On other systems, you can install and deploy text-generation-webui yourself by following its official documentation.
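A manual setup on Linux or macOS might look roughly like the following sketch (the repository URL is the upstream text-generation-webui project; consult its README for the current, platform-specific install steps, which may use a start script instead of pip):

```shell
# Clone the upstream project and install its dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Start the server; --n-gpu-layers 0 forces CPU-only inference
python server.py --api --n-gpu-layers 0
```

Once the server is running, point the plugin at its API endpoint as you would with the Windows package.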
Suggestions and Feedback
For any suggestions or feedback, contact me at email@example.com or open an issue.
This project is licensed under the MIT License.