# CommitLLM

Generate conventional commit messages from staged Git diffs using any OpenAI-compatible API.

Privacy by design: you choose the exact API endpoint (any OpenAI-compatible provider: OpenAI, Ollama, etc.), supply your API key, and you're good to go.
## Features
- Privacy-focused — No telemetry. Your diffs go only to your chosen endpoint. Use local models for complete privacy.
- One-click commit message generation — Click the sparkle icon in the Source Control panel or press `Ctrl+Shift+Alt+G` (`Cmd+Shift+Alt+G` on Mac)
- Streaming responses — Watch the commit message appear character-by-character in the SCM input box
- Any OpenAI-compatible provider — Works with OpenAI, Ollama, LM Studio, Groq, Together AI, and more
- Conventional Commits by default — Follows the `type(scope): description` format
- Customizable templates — Choose between conventional, freeform, or write your own prompt
- Secure API key storage — API keys are stored in VSCode's encrypted secret storage
## Requirements
- VSCode 1.118.0 or later
- A Git repository open in your workspace
- Staged changes ready to commit
- An API key for your chosen AI provider
## Setup
- Install the extension
- Open Settings (`Ctrl+,`) and search for "CommitLLM"
- Set your provider's Base URL (default: `https://api.openai.com/v1`)
- Set your Model name (default: `gpt-4o`)
- Run the command "CommitLLM: Set API Key" and enter your API key
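If you prefer editing `settings.json` directly, steps 2–4 correspond to these entries (the values shown are the defaults from the settings table below; the API key is not stored here):

```json
{
  "commitllm.baseUrl": "https://api.openai.com/v1",
  "commitllm.model": "gpt-4o"
}
```

The API key is set via the "CommitLLM: Set API Key" command because it lives in VSCode's secret storage, not in `settings.json`.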
## Using Ollama (Local Models)
- Install Ollama and pull a model (e.g., `ollama pull llama3`)
- Start Ollama with the OpenAI-compatible endpoint: `OLLAMA_ORIGINS=* ollama serve`
- In CommitLLM settings:
  - Base URL: `http://localhost:11434/v1`
  - Model: `llama3` (or whichever model you pulled)
  - API Key: any non-empty string (Ollama doesn't validate it)
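The Ollama settings above correspond to this `settings.json` fragment (key names taken from the settings table below):

```json
{
  "commitllm.baseUrl": "http://localhost:11434/v1",
  "commitllm.model": "llama3"
}
```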
## Usage
- Stage your changes in Git
- Click the CommitLLM status bar item, or press `Ctrl+Shift+Alt+G` (`Cmd+Shift+Alt+G` on Mac)
- The AI-generated commit message streams into the SCM input box
- Edit if needed, then commit as usual
## Extension Settings

| Setting | Default | Description |
|---|---|---|
| `commitllm.baseUrl` | `https://api.openai.com/v1` | API base URL |
| `commitllm.model` | `gpt-4o` | Model name |
| `commitllm.maxTokens` | `512` | Max tokens in the response |
| `commitllm.temperature` | `0.7` | Sampling temperature (0–2) |
| `commitllm.maxDiffLength` | `12000` | Max diff characters to send |
| `commitllm.promptTemplate` | `conventional` | Message style: `conventional`, `freeform`, or `custom` |
| `commitllm.customPromptTemplate` | `""` | Custom system prompt (used when the template is `custom`) |
## Known Issues
- Large diffs are truncated to avoid exceeding context windows
- Requires staged changes; unstaged changes are ignored
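The truncation behavior can be sketched as follows. This is an illustrative Python sketch, not the extension's actual code; the function name `truncate_diff` and the `[diff truncated]` marker are assumptions, while the 12,000-character default mirrors `commitllm.maxDiffLength`:

```python
def truncate_diff(diff: str, max_len: int = 12000) -> str:
    """Cap a staged diff at max_len characters so the prompt stays
    within the model's context window.

    Mirrors the commitllm.maxDiffLength setting (default 12000);
    the exact cut-off strategy is an assumption for illustration.
    """
    if len(diff) <= max_len:
        return diff  # small diffs pass through unchanged
    # keep the first max_len characters and flag the cut
    return diff[:max_len] + "\n... [diff truncated]"
```

Because only the head of the diff is kept, changes near the end of a very large diff may not influence the generated message.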
## Privacy
CommitLLM is designed with privacy as a core principle:
- You control the destination — Diffs are sent only to the API endpoint you configure. We don't intercept, redirect, or store your data.
- No telemetry — The extension itself doesn't collect usage data, metrics, or analytics.
- Local-first option — Use Ollama or LM Studio for complete privacy. Your code never leaves your machine.
Your diff data flows exactly where you specify — nothing more, nothing less.
## Release Notes

### 0.0.1
Initial release with:
- OpenAI-compatible API support
- Streaming commit message generation
- Conventional Commits formatting
- Multiple provider support (OpenAI, Ollama, etc.)
## License
MIT