Local Mellum - Local AI Code Completion for VS Code
Local Mellum is a Visual Studio Code extension that provides AI-powered code completion using a local language model through Ollama. It helps you write code without sending your data to external servers.
Features
- Local AI-powered code completion
- Context-aware suggestions based on your codebase
- Privacy-focused - all processing happens on your machine
- No internet connection required for operation
Requirements
Linux
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama service
ollama serve
# Pull the required model
ollama pull JetBrains/Mellum-4b-sft-all
# Verify Ollama is running
curl http://localhost:11434/api/tags
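Once the model has been pulled, its name should appear in that response. A quick way to print just the model names (assuming jq is installed and the response keeps Ollama's usual models/name layout):
# Print the names of locally installed models (requires jq)
curl -s http://localhost:11434/api/tags | jq '.models[].name'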
macOS
# Install Ollama using Homebrew
brew install ollama
# Start Ollama service
ollama serve &
# Alternatively, download and install from the official website:
# https://ollama.ai/download/mac
# Then start from Applications folder
# Pull the required model
ollama pull JetBrains/Mellum-4b-sft-all
# Verify Ollama is running
curl http://localhost:11434/api/tags
Windows
- Download and install Ollama from the official website: https://ollama.ai/download/windows
- The Ollama service should start automatically after installation
- Pull the required model (from the Command Prompt):
ollama pull JetBrains/Mellum-4b-sft-all
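The same health check as on Linux and macOS works here; curl is bundled with current Windows 10/11 builds, or you can open the URL in a browser:
REM Verify Ollama is running
curl http://localhost:11434/api/tags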
Configuration
Model Selection
By default, Local Mellum uses the JetBrains/Mellum-4b-sft-all
model. You can configure a different model in your VS Code settings:
- Open VS Code Settings (File > Preferences > Settings or Ctrl+,)
- Search for "Local Mellum"
- Set the "Model Name" to the name of your preferred Ollama model
{
"localMellum.modelName": "JetBrains/Mellum-4b-sft-all"
}
- Execute the local-mellum.restart command to apply the changes (see the keybinding sketch below)
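If you switch models often, you can bind the restart command to a key in your keybindings.json. The key chord below is only an example; any free chord works:
[
  { "key": "ctrl+alt+m", "command": "local-mellum.restart" }
]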
Make sure the model you specify is available in your Ollama installation. You can pull a new model using:
ollama pull your-model-name
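To see which models are already installed locally, run ollama list; the name you set in "Model Name" must match one of these entries:
# List models available in the local Ollama store
ollama list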
License
All third-party licenses are available in LICENSE.md
Feedback and Support
TODO