Local Mellum Completion

JetBrains | jetbrains.com | 52 installs | Free
Local Mellum for Visual Studio Code
Installation
Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter.

Local Mellum - Local AI Code Completion for VS Code

Local Mellum is a Visual Studio Code extension that provides AI-powered code completion using a local language model through Ollama. It helps you write code without sending your data to external servers.
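
Under the hood, completions come from Ollama's local HTTP API on port 11434. The exact request the extension sends is not documented here; the snippet below is only a minimal illustration of the underlying endpoint, with placeholder prompt/suffix values and the assumption that fill-in-the-middle is done via the suffix field:

# Illustrative call to the local Ollama generate endpoint; the extension's
# actual payload may differ
curl http://localhost:11434/api/generate -d '{
  "model": "JetBrains/Mellum-4b-sft-all",
  "prompt": "def fibonacci(n):",
  "suffix": "    return result",
  "stream": false
}'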

Features

  • Local AI-powered code completion
  • Context-aware suggestions based on your codebase
  • Privacy-focused - all processing happens on your machine
  • No internet connection required for operation

Requirements

  • A running Ollama instance with the required model downloaded (see the Setup Guide below)
  • Your machine should meet the requirements for local Mellum execution (a quick resource check is shown after this list): https://www.jetbrains.com/help/ide-services/jetbrains-mellum.html#mellum-requirements
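
A quick way to check your machine's resources against those requirements on Linux (these commands are a convenience added here, not part of the official requirements; the last one assumes an NVIDIA GPU):

# Available system memory
free -h

# CPU core count
nproc

# GPU model and VRAM (NVIDIA only)
nvidia-smi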

Setup Guide by Platform

Linux

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama service
ollama serve

# Pull the required model
ollama pull JetBrains/Mellum-4b-sft-all

# Verify Ollama is running
curl http://localhost:11434/api/tags
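
On most systemd-based distributions the install script also registers Ollama as a system service, in which case ollama serve may report that port 11434 is already in use. Checking the service instead is an optional extra step (assumes systemd):

# See whether the Ollama service is already running
systemctl status ollama

# Follow its logs while testing completions
journalctl -u ollama -f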

macOS

# Install Ollama using Homebrew
brew install ollama

# Start Ollama service
ollama serve &

# Alternatively, download and install from the official website:
# https://ollama.ai/download/mac
# Then start from Applications folder

# Pull the required model
ollama pull JetBrains/Mellum-4b-sft-all

# Verify Ollama is running
curl http://localhost:11434/api/tags
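
If you installed Ollama with Homebrew, you can also let Homebrew manage it as a background service instead of keeping ollama serve & attached to a terminal (an optional alternative to the step above):

# Start Ollama as a managed background service
brew services start ollama

# Confirm it is listed as started
brew services list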

Windows

  1. Install from the official website: https://ollama.ai/download/windows
  2. Ollama service should start automatically
  3. Pull the required model (through CMD)
ollama pull JetBrains/Mellum-4b-sft-all
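
To mirror the verification step from the other platforms, confirm the model is installed and the server is reachable from CMD or PowerShell (curl.exe ships with Windows 10 and later):

ollama list
curl http://localhost:11434/api/tags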

Configuration

Model Selection

By default, Local Mellum uses the JetBrains/Mellum-4b-sft-all model. You can configure a different model in your VS Code settings:

  1. Open VS Code Settings (File > Preferences > Settings or Ctrl+,)
  2. Search for "Local Mellum"
  3. Set the "Model Name" to the name of your preferred Ollama model:
{
  "localMellum.modelName": "JetBrains/Mellum-4b-sft-all"
}
  4. Execute the local-mellum.restart command to apply the changes.

Make sure the model you specify is available in your Ollama installation. You can pull a new model using:

ollama pull your-model-name
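
The value you set for localMellum.modelName must exactly match a model name reported by Ollama; you can list the models installed locally with:

ollama list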

License

All third-party licenses are available in LICENSE.md.

Feedback and Support

TODO
