🌟 Overview
LocalPilot is a Visual Studio extension that integrates local Large Language Models (LLMs) via Ollama. It delivers a seamless, responsive coding experience with no cloud subscription and no source code ever leaving your machine.
🚀 Key Features
💬 Advanced Chat Panel
A dedicated side panel for complex reasoning, code generation, and deep-dive technical discussions.
⚡ Contextual Quick Actions
Instant access to Refactor, Explain, or Document code directly from your right-click context menu.
🛠️ Flexible Configuration
Easily manage your Ollama connection and assign different models for chat and autocomplete tasks.
✨ Ghost-Text & Performance
- 🚀 Real-time Suggestions: Low-latency inline code completions as you type.
- 🏠 100% Local: Your code never leaves your workstation.
- ⚡ Optimized: Designed for minimal impact on IDE performance.
🛡️ Why LocalPilot?
- 🔒 Absolute Privacy: Your source code stays on your machine. No telemetry, no cloud hooks, no data leakage. Perfect for enterprise and sensitive projects.
- ⚡ Low Latency: No waiting on cloud API round-trips. Local inference provides near-instantaneous completions.
- 💰 One-time Setup, Zero Cost: No recurring subscriptions. Use the power of your own hardware to fuel your development.
- 🎨 Native Experience: Designed to feel like a built-in Visual Studio feature, supporting both Light and Dark themes natively.
🛠️ Getting Started
1️⃣ Prerequisites
You must have Ollama installed and running on your machine.
- Download: ollama.com
- Launch a Model: We recommend code-centric models such as llama3, codellama, or phi3.

```shell
ollama run llama3
```
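Before installing the extension, it can be useful to confirm the server is actually reachable. A minimal sketch, assuming Ollama's default port (11434) and its standard `/api/tags` model-listing endpoint:

```python
# Quick reachability check (sketch) for the Ollama server before
# configuring LocalPilot. Assumes Ollama's default listen address.
from urllib import request, error

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default base URL

def ollama_reachable(base_url: str = OLLAMA_BASE) -> bool:
    """Return True if the Ollama HTTP API answers at base_url."""
    try:
        # /api/tags lists locally installed models; any 200 means the
        # server is up and the extension should be able to connect.
        with request.urlopen(base_url + "/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        return False

print("Ollama running:", ollama_reachable())
```

If this prints `False`, start Ollama (or run any `ollama run <model>` command) before continuing.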
2️⃣ Installation
- Visit the Visual Studio Marketplace.
- Click Download, or search for "LocalPilot" within the Visual Studio Extension Manager:
- Extensions > Manage Extensions > Online
- Restart Visual Studio to complete the installation.
3️⃣ Configuration
Navigate to Tools > Options > LocalPilot > Settings.
- Ollama Base URL: Usually http://localhost:11434. Click "Test Connection" to verify.
- Model Assignments: Assign preferred models for Chat and Inline Completions.
[!TIP]
For optimal performance, use a lightweight model like phi3 or starcoder2:3b for Inline Completions, and a larger model like llama3:8b or deepseek-coder for the Chat Assistant.
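For context, these settings map onto plain HTTP calls to Ollama. The sketch below is an illustration, not LocalPilot's actual code: it shows how an inline-completion request against Ollama's `/api/generate` endpoint might be framed using the model assigned above (nothing is sent).

```python
import json

# Sketch (assumption): how a client such as LocalPilot might frame an
# inline-completion request to Ollama's /api/generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"  # base URL + route

payload = {
    "model": "phi3",                 # lightweight model assigned to completions
    "prompt": "def add(a, b):",      # code context at the cursor
    "stream": False,                 # one response instead of a token stream
    "options": {"num_predict": 64},  # cap tokens for low-latency ghost text
}

# The client would POST this JSON body; shown here without sending.
body = json.dumps(payload)
print(body)
```

Capping `num_predict` keeps completions short and fast, which matters far more for ghost-text than for the chat panel.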
📖 Usage Guide
💡 Inline Completion
Simply start typing in any supported file. LocalPilot will provide translucent "ghost-text" suggestions.
- Tab: Accept the suggestion.
- Esc: Dismiss the suggestion.
⚡ Contextual Actions
Right-click on any code selection to access the LocalPilot menu:
- Explain Code: Break down complex logic into plain language.
- Generate Docs: Auto-generate XML/docstring comments.
- Refactor: Suggest improvements for readability and performance.
🤝 Contributing
We welcome community contributions! Whether it's bugs, features, or documentation, your help is appreciated.
🛠️ How to Help
- Check Issues: Search the existing issues first to avoid filing duplicates.
- Clear Reports: For bugs, include your VS version, Ollama model, and reproduction steps.
- Pull Requests: Create a branch from main, ensure the project builds, and submit your PR with a clear description.
💻 Hardware Requirements
Since LocalPilot runs Large Language Models (LLMs) entirely on your local machine via Ollama, your hardware performance directly impacts the speed and responsiveness of AI suggestions.
🏁 Minimum Requirements
- CPU: A recent multi-core processor (Intel Core i5 / AMD Ryzen 5 or equivalent).
- RAM: 8GB (16GB+ strongly recommended for a smooth experience).
- GPU: 4GB VRAM (Dedicated NVIDIA or Apple Silicon GPU preferred for faster inference).
- Storage: 5GB+ for model storage (SSD/NVMe highly recommended).
🚀 Recommended for "Pro" Experience
- RAM: 32GB+ for handling larger models (13B+) alongside Visual Studio.
- GPU: NVIDIA RTX 3060/4060 or higher with 12GB+ VRAM.
- NVIDIA CUDA: Ensure the latest NVIDIA drivers are installed for GPU acceleration.
[!IMPORTANT]
LocalPilot is designed for efficiency, but because it performs all AI processing locally, it requires capable hardware. If suggestions feel slow, consider using a smaller, quantized model (e.g., phi3:mini or starcoder2:3b) in the settings.
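To judge whether a given model is likely to fit in your VRAM, a rough back-of-envelope estimate can help. This sketch assumes ~4-bit quantization plus ~20% headroom for the KV cache and runtime; actual usage varies with quantization level and context length:

```python
# Back-of-envelope VRAM estimate (assumptions: ~4-bit quantized weights,
# ~20% overhead for KV cache and runtime). Illustrative only.
def approx_model_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Rough memory footprint in GB for a quantized model."""
    raw_gb = params_billion * bits_per_weight / 8  # weights only
    return raw_gb * 1.2                            # headroom for cache/runtime

for name, size_b in [("phi3:mini", 3.8), ("llama3:8b", 8.0), ("13B model", 13.0)]:
    print(f"{name}: ~{approx_model_gb(size_b):.1f} GB")
```

By this estimate an 8B model at 4-bit needs roughly 5 GB, which is why 12GB+ VRAM is recommended before reaching for 13B-class models.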