Local LLM Chat

Karthik Ramachandran

Chat with local LLM models using Ollama - Free Version

Local LLM Chat Extension

A simple VS Code extension that installs local LLM models and lets you chat with them through Ollama directly from your terminal, with almost no effort on your part. It provides a seamless interface to various language models from within VS Code by downloading the models you choose and running them locally.


✨ Features

  • 🤖 Chat with any Ollama-supported language model - Support for CodeLlama, Llama2, TinyLlama, and many more
  • 🔄 Switch between different models during chat - No need to restart the session
  • 🚀 Easy model installation and management - Automatic model downloading and installation
  • ⚙️ Automatic environment setup and configuration - Virtual environment creation and dependency management
  • 🔒 100% Local and Private - All conversations and data remain on your machine, nothing is sent to remote servers
  • 🛡️ Complete Data Privacy - Your code, conversations, and sensitive information never leave your local environment
  • 🛠️ Smart error handling - Detailed error messages and automatic recovery attempts

📋 Requirements

  1. Ollama: The extension requires Ollama to be installed on your system.

    • Windows: winget install ollama
    • Mac: brew install ollama
    • Linux: curl -fsSL https://ollama.com/install.sh | sh
  2. Python: Python 3.8 or higher is required.

  3. Internet Connection: Required for initial model download only.
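
Before going further, you can confirm the first requirement is met. The following is a minimal, illustrative sketch (not the extension's own code) that checks whether the `ollama` binary is on your PATH and responds to `--version`:

```python
# Sketch only: verify that Ollama is installed before continuing.
import shutil
import subprocess

def ollama_installed():
    """Return True if the `ollama` binary is found and reports a version."""
    if shutil.which("ollama") is None:
        return False
    result = subprocess.run(["ollama", "--version"], capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    if ollama_installed():
        print("Ollama found.")
    else:
        print("Ollama not found - see the install commands above.")
```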

🚀 Quick Start

  1. Install the extension from the VS Code marketplace
  2. Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P)
  3. Type "Talk to Local LLM" and select the command
  4. Follow the setup wizard - The extension will automatically:
    • Check and set up the required environment
    • Create a virtual environment for dependencies
    • Install necessary Python packages
    • Start the Ollama server if needed
  5. Select or download a model from the available options
  6. Start chatting!
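
Under the hood, a chat exchange boils down to a request against the local Ollama HTTP API (default port 11434). Here is a rough, standard-library-only sketch of a single prompt/response round-trip similar to what the extension automates; the model name is just an example:

```python
# Illustration only: one prompt/response round-trip against the local Ollama API.
import json
import urllib.request

def ask(model, prompt):
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("tinyllama", "Write a haiku about local LLMs."))
```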

💬 Usage

Available Commands

  • exit - End the chat session
  • help - Show available commands
  • switch - Switch to a different model
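
As a rough idea of how these in-chat commands behave, here is a hypothetical dispatch loop; the extension's actual chat loop is not shown on this page:

```python
# Hypothetical sketch of how exit, help, and switch could be handled in a chat session.
def chat_loop(model):
    while True:
        text = input(f"[{model}] > ").strip()
        if text == "exit":
            print("Ending chat session.")
            break
        elif text == "help":
            print("Commands: exit, help, switch")
        elif text == "switch":
            model = input("New model name: ").strip() or model
            print(f"Now chatting with {model}.")
        else:
            # Ordinary input would be forwarded to the model here,
            # e.g. via the Ollama API call sketched in Quick Start.
            print("(model reply would appear here)")

if __name__ == "__main__":
    chat_loop("codellama")
```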

Supported Models

The extension works with any model available through Ollama, including:

  • CodeLlama - Optimized for code generation and understanding
  • Llama2 - General purpose conversational AI
  • TinyLlama - Lightweight model for resource-constrained environments
  • Mistral - High-performance language model
  • And many more!
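
If you want to see which models are already pulled on your machine, Ollama exposes them through its local API. A small sketch (illustration only, default port assumed):

```python
# Sketch: list locally installed models via Ollama's /api/tags endpoint.
import json
import urllib.request

def installed_models():
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        data = json.loads(resp.read())
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    for name in installed_models():
        print(name)
```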

🛠️ Configuration

The extension automatically handles:

  • Virtual Environment Creation - Isolated Python environment for dependencies
  • Dependency Installation - Automatic installation of required packages
  • Ollama Server Management - Starting and monitoring the Ollama service
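
To give a sense of what this setup involves, here is a simplified sketch of the kind of steps the extension performs; the environment location and the package list are placeholders, not the extension's actual configuration:

```python
# Sketch of automated setup: create an isolated virtual environment and
# install dependencies into it. Paths and packages here are placeholders.
import subprocess
import sys
from pathlib import Path

VENV_DIR = Path(".llm-chat-venv")   # hypothetical location

def ensure_venv():
    if not VENV_DIR.exists():
        subprocess.run([sys.executable, "-m", "venv", str(VENV_DIR)], check=True)
    pip = VENV_DIR / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
    subprocess.run([str(pip), "install", "--upgrade", "pip"], check=True)
    subprocess.run([str(pip), "install", "requests"], check=True)  # placeholder package

if __name__ == "__main__":
    ensure_venv()
```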

🔧 Troubleshooting

Common Issues

Ollama server not starting:

  • Ensure Ollama is properly installed
  • Check if port 11434 is available
  • Try running ollama serve manually in a terminal
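
A quick way to check the port, assuming the default 11434, is a plain socket probe; if nothing answers, try `ollama serve` in a terminal as suggested above:

```python
# Quick check: is anything listening on Ollama's default port 11434?
import socket

def port_open(host="localhost", port=11434, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_open():
        print("Ollama appears to be listening on port 11434.")
    else:
        print("Nothing on port 11434 - try running `ollama serve`.")
```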

Model download fails:

  • Check your internet connection
  • Ensure sufficient disk space
  • Try downloading the model manually: ollama pull <model-name>
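
If downloads keep failing, a small script can combine the disk-space check with a manual pull. This is just a sketch; the 10 GB threshold is an arbitrary example, not an official requirement:

```python
# Sketch: check free disk space, then pull a model manually with the Ollama CLI.
import shutil
import subprocess

def pull_model(name, min_free_gb=10):
    free_gb = shutil.disk_usage(".").free / 1e9
    if free_gb < min_free_gb:
        raise RuntimeError(f"Only {free_gb:.1f} GB free; free up space first.")
    subprocess.run(["ollama", "pull", name], check=True)

if __name__ == "__main__":
    pull_model("mistral")
```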

Python environment issues:

  • Ensure Python 3.8+ is installed
  • Check that pip is available and up to date
  • Try running the extension with administrator privileges
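
You can verify the first two points with a short check of the interpreter version and of pip, along these lines:

```python
# Quick sanity checks for the Python requirements mentioned above:
# Python 3.8+ and a working pip.
import subprocess
import sys

def check_python():
    if sys.version_info < (3, 8):
        raise SystemExit(f"Python 3.8+ required, found {sys.version.split()[0]}")
    result = subprocess.run([sys.executable, "-m", "pip", "--version"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise SystemExit("pip is not available for this interpreter.")
    print("Python and pip look OK:", result.stdout.strip())

if __name__ == "__main__":
    check_python()
```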

🛡️ Error Handling

The extension includes robust error handling:

  • Detailed error messages
  • Automatic recovery attempts
  • Clear setup instructions

🐛 Known Issues

  • The Ollama server needs to be running for the extension to work
  • Initial model downloads may take several minutes depending on model size
  • Some models require significant system resources (RAM/CPU)

📝 Release Notes

1.0.0

  • ✨ Initial release
  • 🎉 Support for all Ollama models
  • 🔄 Interactive model selection
  • 🐍 Virtual environment management
  • 📦 Automatic dependency installation

🤝 Contributing

We welcome contributions! Feel free to:

  • 🐛 Report bugs by opening issues
  • 💡 Suggest new features
  • 🔧 Submit pull requests

Visit our GitHub repository to get started.

📄 License

This extension is licensed under the GNU General Public License v3.0. See LICENSE for more details.

🙏 Acknowledgments

  • Ollama for the amazing local LLM runtime
  • The open-source community for model development
  • All contributors and users providing feedback

Enjoy chatting with your local LLMs! 🎉

For support, please visit our GitHub Issues page.
