# Local AI Git Commit Generator

Generate meaningful, conventional git commit messages using any local Ollama AI model — completely offline, no API keys needed.

## Features
- AI-Powered Commit Messages — Generates Conventional Commits format messages by analyzing your diffs
- Any Ollama Model — Works with any model installed on your Ollama instance (qwen2.5-coder, llama3, codellama, mistral, deepseek-coder, etc.)
- Per-File Messages — Each changed file gets its own tailored commit message
- Editable Messages — Click on any generated message to modify it before committing
- Batch Commit — Select multiple files with checkboxes and commit them sequentially
- Stage / Unstage — Stage or unstage files directly from the panel
- Background Generation — Messages are generated in the background as you code
- Project Context Aware — Scans your project structure to generate more accurate messages
- Fully Offline — All AI processing happens locally via Ollama. No data leaves your machine.
## Prerequisites

### 1. Ollama installed and running locally

```bash
# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Then start the server
ollama serve
```

### 2. Pull a model (any model works)

```bash
# Recommended for code tasks
ollama pull qwen2.5-coder:14b

# Or use any other model
ollama pull llama3
ollama pull codellama
ollama pull mistral
ollama pull deepseek-coder:6.7b
```
## Installation

### From VS Code Marketplace

1. Open VS Code
2. Go to Extensions (`Cmd+Shift+X` / `Ctrl+Shift+X`)
3. Search for "Local AI Git Commit Generator"
4. Click **Install**

### From VSIX File

1. Download the `.vsix` file from the Releases page
2. In VS Code: `Cmd+Shift+P` → **Extensions: Install from VSIX...**
3. Select the downloaded file
## Usage

### Open the Panel

- Command Palette: `Cmd+Shift+P` → **Local AI Git Commit: Generate Commit Messages**
- Keyboard Shortcut: `Cmd+Shift+G Cmd+Shift+M` (Mac) / `Ctrl+Shift+G Ctrl+Shift+M` (Windows/Linux)
### Workflow

1. Make changes to your code as usual
2. Open the panel — it detects all changed files automatically
3. Review messages — the AI generates a commit message for each file
4. Edit if needed — click on any message to modify it
5. Stage & commit — use the per-file buttons, or:
6. Batch commit — check multiple files → click "Commit Selected"
### Per-File Actions

| Button | Description |
|--------|-------------|
| Regenerate | Ask the AI to generate a new message |
| Stage | Stage the file (`git add`) |
| Unstage | Unstage the file (`git reset HEAD`) |
| Commit | Stage + commit the file with the message |
### Batch Commit

1. Select files using checkboxes (or "Select All")
2. Click "Commit Selected (N)"
3. Files are committed one at a time, sequentially — each commit is verified before the next starts
4. A live progress panel shows success/failure per file
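The sequential loop above can be sketched as follows. `commitOne` is a hypothetical callback standing in for the extension's actual git calls (stage, commit, verify); the names are illustrative, not the real API:

```typescript
// Sketch of a sequential batch commit: each file is committed and awaited
// (i.e. verified) before the next one starts, and per-file results are
// reported as they complete.
type BatchResult = { file: string; ok: boolean; error?: string };

async function commitSequentially(
  files: string[],
  commitOne: (file: string) => Promise<void>,
  onProgress?: (result: BatchResult) => void,
): Promise<BatchResult[]> {
  const results: BatchResult[] = [];
  for (const file of files) {
    try {
      await commitOne(file); // next commit does not start until this resolves
      results.push({ file, ok: true });
    } catch (e) {
      results.push({ file, ok: false, error: String(e) });
    }
    onProgress?.(results[results.length - 1]);
  }
  return results;
}
```

A failed commit is recorded and reported, but does not stop the remaining files from being processed.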
## Configuration

Open VS Code Settings (`Cmd+,`) and search for "Local AI Commit":

| Setting | Default | Description |
|---------|---------|-------------|
| `localAiCommit.model` | `qwen2.5-coder:14b` | Ollama model name. Run `ollama list` to see installed models. |
| `localAiCommit.ollamaUrl` | `http://localhost:11434` | Ollama server URL. Change if running on a different port or remote machine. |
| `localAiCommit.temperature` | `0.2` | AI creativity (0 = deterministic, 2 = very creative). Lower is better for commit messages. |
| `localAiCommit.maxConcurrent` | `3` | Max parallel AI requests. Increase for faster generation on powerful machines. |
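To show how these settings map onto an actual request, here is a minimal sketch of the payload a client would send to Ollama's `/api/generate` endpoint. The prompt text is illustrative only, not the extension's real prompt:

```typescript
// Build a request for Ollama's /api/generate endpoint from the settings
// above. Field names (model, prompt, stream, options.temperature) follow
// Ollama's generate API.
interface OllamaSettings {
  model: string;       // localAiCommit.model
  ollamaUrl: string;   // localAiCommit.ollamaUrl
  temperature: number; // localAiCommit.temperature
}

function buildGenerateRequest(settings: OllamaSettings, diff: string) {
  return {
    url: `${settings.ollamaUrl}/api/generate`,
    body: {
      model: settings.model,
      // Illustrative prompt; the real extension also includes project context.
      prompt: `Write a Conventional Commits message for this diff:\n${diff}`,
      stream: true, // Ollama streams newline-delimited JSON chunks
      options: { temperature: settings.temperature },
    },
  };
}

const req = buildGenerateRequest(
  { model: "qwen2.5-coder:14b", ollamaUrl: "http://localhost:11434", temperature: 0.2 },
  "diff --git a/src/auth.ts b/src/auth.ts ...",
);
```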
### Example: Using a Different Model

```json
// settings.json
{
  "localAiCommit.model": "llama3",
  "localAiCommit.temperature": 0.1
}
```

### Example: Remote Ollama Server

```json
{
  "localAiCommit.ollamaUrl": "http://192.168.1.100:11434"
}
```
## Commit Message Format

All generated messages follow Conventional Commits:

```
type(scope): description
```
### Allowed Types

| Type | Description |
|------|-------------|
| `feat` | A new feature |
| `fix` | A bug fix |
| `docs` | Documentation changes |
| `style` | Formatting, no logic change |
| `refactor` | Code restructure (no feature/fix) |
| `perf` | Performance improvements |
| `test` | Adding or fixing tests |
| `build` | Build system / dependency updates |
| `ci` | CI configuration changes |
| `chore` | Maintenance tasks |
| `revert` | Revert a previous commit |
### Rules

- Scope is optional and always lowercase
- Description is lowercase, no trailing period
- Max header length: 100 characters
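A validator for these rules could look like the following sketch (not the extension's actual implementation): known type, optional lowercase scope in parentheses, lowercase description with no trailing period, header at most 100 characters.

```typescript
// Validate a Conventional Commits header against the rules listed above.
const TYPES = [
  "feat", "fix", "docs", "style", "refactor", "perf",
  "test", "build", "ci", "chore", "revert",
];

function isValidHeader(header: string): boolean {
  if (header.length > 100) return false; // max header length
  // type, optional (scope), then ": description"
  const m = header.match(/^([a-z]+)(\(([a-z0-9-]+)\))?: (.+)$/);
  if (!m) return false;
  const type = m[1];
  const description = m[4];
  if (!TYPES.includes(type)) return false;
  if (description !== description.toLowerCase()) return false; // lowercase
  if (description.endsWith(".")) return false; // no trailing period
  return true;
}
```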
### Examples

```
feat(auth): add biometric login support
fix(api): handle null response from server
refactor(home): simplify feed list rendering
chore: update dependencies to latest versions
test(utils): add unit tests for date formatter
```
## Troubleshooting

### "Ollama is not running"

```bash
# Start the Ollama server
ollama serve
```

### "Model not found"

```bash
# List installed models
ollama list

# Pull the model configured in settings
ollama pull qwen2.5-coder:14b
```
### Messages are generic or wrong

- Make sure diffs are not too large (the extension truncates at 12KB per file)
- Try a larger/better model: `qwen2.5-coder:14b` or `deepseek-coder:33b`
- Lower the temperature to `0.1` for more predictable output
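The 12KB cap could be implemented as a simple byte-budget truncation like the sketch below (assumed behavior; the cut point and marker text are illustrative, not taken from the extension's source):

```typescript
// Truncate a diff to at most 12KB of UTF-8 bytes, appending a marker so
// the model knows the diff is incomplete.
const MAX_DIFF_BYTES = 12 * 1024;

function truncateDiff(diff: string): string {
  if (Buffer.byteLength(diff, "utf8") <= MAX_DIFF_BYTES) return diff;
  const truncated = Buffer.from(diff, "utf8")
    .subarray(0, MAX_DIFF_BYTES)
    .toString("utf8");
  return `${truncated}\n... [diff truncated at 12KB]`;
}
```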
### Extension not activating

- Ensure you have a workspace folder open (not just a single file)
- The folder must be a git repository (`git init` if not)
## Building from Source

```bash
git clone <repo-url>
cd local-ai-git-commit-generator
npm install
npm run build

# Package as .vsix
npx @vscode/vsce package
```
## License

MIT
## Development

### Setup

```bash
npm install
npm run build
```
### Running the Extension

1. Open this folder in VS Code
2. Press `F5` (or Run → Start Debugging)
3. A new VS Code window (Extension Development Host) opens
4. Open any git repository in that window
5. Open the Command Palette (`Cmd+Shift+P`) and run: **AI Commit: Generate Commit Message**
### Usage

- The panel shows all staged + unstaged changed files
- Each file gets an AI-generated commit message (streaming)
- Edit any message manually by clicking the text
- Regenerate per file with the ↻ button
- Toggle between Per-file and Combined commit modes
- Click Commit when ready
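As a sketch of the streaming step above: Ollama's `/api/generate` streams newline-delimited JSON chunks whose `response` fields concatenate into the full message. A minimal assembler, assuming the whole stream has already been buffered into one string:

```typescript
// Assemble a full commit message from buffered NDJSON stream output.
// Each line is a JSON chunk; the "response" field (per Ollama's generate
// API) carries a text fragment, and the final chunk has "done": true.
function assembleStream(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as { response?: string })
    .map((chunk) => chunk.response ?? "")
    .join("");
}
```

In the real extension the chunks arrive incrementally, which is what lets the panel render messages as they are generated rather than after completion.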
### Keyboard Shortcut

`Cmd+Shift+G Cmd+Shift+M` (macOS) / `Ctrl+Shift+G Ctrl+Shift+M` (Windows/Linux)
### Architecture

```
src/
├── extension.ts          # Entry point, command registration
├── types.ts              # Shared TypeScript types
├── gitService.ts         # Git operations (simple-git)
├── ollamaService.ts      # Ollama API with streaming + concurrency
├── panelProvider.ts      # Webview panel lifecycle & message handling
└── webview/
    └── webviewContent.ts # Full HTML/CSS/JS for the webview UI
```