# AI Commit Message Generator for VS Code 🤖✨
**Local AI Commit Message Generation** - Transform code diffs into conventional commits using open-source models. A privacy-focused, offline-capable solution for developers.

*Demo showing the commit message generation process (left) and settings (right)*
## Features 🌟
- 🔒 Privacy First - No data leaves your machine
- ⚡ Multi-Backend Support - Compatible with popular AI runners
- 📜 Commit Standard Compliance - Conventional Commits 1.0.0
- 🖥️ Hardware Aware - Tuned for both CPU-only and GPU-equipped setups
- 🌐 Model Agnostic - Use any compatible LLM
## Quick Start 🚀

1. Install the extension:

   ```bash
   code --install-extension Its-Satyajit.ai-commit-message
   ```

2. Set up an AI backend:

   ```bash
   # For CPU-focused systems
   ollama pull phi-3

   # For GPU-equipped machines
   ollama pull deepseek-r1:8b
   ```

3. Generate your first AI commit from the VS Code Source Control view.
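If generation fails, first confirm the backend is reachable. A quick sanity check, assuming Ollama on its default port:

```bash
# List locally available models; any JSON response means the server is up
curl http://localhost:11434/api/tags
```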
## Hardware Requirements 🖥️

### Tested Environment

- **OS:** openSUSE Tumbleweed
- **CPU:** Intel i7-8750H (6c/12t @ 4.1GHz)
- **GPU:** NVIDIA GTX 1050 Ti Mobile 4GB
- **RAM:** 16GB DDR4
- **Storage:** NVMe SSD
### Minimum Recommendations

- CPU: 4-core (2015+)
- RAM: 8GB
- Storage: SSD
- Node.js: ^22
- VS Code: ^1.92.0
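A quick way to check your environment against these minimums:

```bash
node --version   # expect v22.x
code --version   # expect 1.92.0 or newer
```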
## Model Compatibility 🧠

| Model Family | Example Models | Speed* | Quality* | Use When... |
| --- | --- | --- | --- | --- |
| Lightweight | `phi-3`, `mistral` | 22 t/s | ██▌ | Quick iterations |
| Balanced | `llama3`, `qwen` | 14 t/s | ███▎ | Daily development |
| Quality-Focused | `deepseek-r1` | 7 t/s | ████▋ | Complex changes |
\* Metrics from personal testing on a mobile GTX 1050 Ti (Q4_K_M quantization)
**Speed vs Quality Tradeoff**

```text
        ▲
        │
Quality │.....█████ (deepseek-r1)
        │...███ (llama3)
        │.██▌ (phi-3)
        └───────────────────▶ Time
```
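To get comparable throughput numbers on your own hardware, Ollama can report timing stats after a run (the prompt here is arbitrary):

```bash
# --verbose prints stats after generation, including "eval rate" in tokens/s
ollama run deepseek-r1:8b --verbose "Write a one-line commit message"
```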
## Configuration ⚙️

### Backend Setup

#### Option 1: Ollama (Simplest)

```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
```

For more info, visit [Ollama](https://ollama.com).

#### Option 2: LM Studio (Advanced)

```bash
lmstudio serve --model ./models/deepseek-r1.gguf --gpulayers 20
```

For more info, visit [LM Studio](https://lmstudio.ai).
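LM Studio's local server speaks an OpenAI-compatible API (port 1234 by default); a quick way to confirm it is serving:

```bash
# List the loaded models via the OpenAI-compatible endpoint
curl http://localhost:1234/v1/models
```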
### Extension Settings

```json
{
  "commitMessageGenerator.provider": "ollama",
  "commitMessageGenerator.apiUrl": "http://localhost:11434",
  "commitMessageGenerator.model": "deepseek-r1:8b",
  "commitMessageGenerator.temperature": 0.7,
  "commitMessageGenerator.maxTokens": 5000,
  "commitMessageGenerator.apiKey": "your_api_key (if required by your OpenAI-compatible/Ollama endpoint)",
  "commitMessageGenerator.types": [
    "feat: A new feature",
    "fix: A bug fix",
    "chore: Maintenance tasks",
    "docs: Documentation updates"
  ],
  "commitMessageGenerator.scopes": ["ui", "api", "config"]
}
```
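For a rough sense of how these settings map onto the backend, the `apiUrl`, `model`, `temperature`, and `maxTokens` values above correspond to an Ollama request like the following (a sketch; the prompt is hypothetical, and `num_predict` is Ollama's max-token option):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Write a conventional commit message for this diff: ...",
  "stream": false,
  "options": { "temperature": 0.7, "num_predict": 5000 }
}'
```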
## Optimization Guide

### GPU Acceleration

```bash
# NVIDIA Settings
export OLLAMA_GPUS=1
export GGML_CUDA_OFFLOAD=20
```

```text
# Memory Allocation (4GB VRAM example)
┌───────────────────────┐
│ GPU Layers: 18/20     │
│ Batch Size: 128       │
│ Threads: 6            │
└───────────────────────┘
```
- Start with `phi-3` for quick feedback
- Switch to `deepseek-r1` for final commits
- Use `--no-mmap` if experiencing slowdowns
- Reduce GPU layers when memory constrained (see the sketch below)
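With Ollama, one way to persist a lower layer count is a model variant built from a Modelfile (a sketch; `num_gpu` sets the number of offloaded layers, and 12 is an arbitrary value for a 4GB card):

```bash
# Create a low-VRAM variant with fewer GPU layers
cat > Modelfile <<'EOF'
FROM deepseek-r1:8b
PARAMETER num_gpu 12
EOF
ollama create deepseek-r1-lowvram -f Modelfile
```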
## Troubleshooting 🔧

| Issue | First Steps | Advanced Fixes |
| --- | --- | --- |
| Slow generation | 1. Check CPU usage 2. Verify quantization | Use `--no-mmap` flag |
| Model loading fails | 1. Confirm SHA256 checksum 2. Check disk space | Try different quantization |
| GPU not detected | 1. Verify drivers 2. Check CUDA version | Set `CUDA_VISIBLE_DEVICES=0` |
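For the "GPU not detected" case on Linux, a quick first pass (assuming Ollama runs as the systemd service its install script sets up):

```bash
nvidia-smi                                    # confirm the driver sees the GPU
journalctl -u ollama --no-pager | tail -n 50  # check server logs for CUDA detection
```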
## FAQ ❓

### Why local AI instead of cloud services?
- Privacy: Code never leaves your machine
- Offline Use: Works without internet
- Cost: No API fees
- Customization: Use models tailored to your needs
### How to choose between models?

**Quick Sessions** → `phi-3` / `mistral`:

- Prototyping
- Personal projects
- Low-resource machines

**Important Commits** → `deepseek-r1`:

- Production code
- Team projects
- Complex refactors
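One way to act on this split is a per-project model override in `.vscode/settings.json`, reusing the setting key shown earlier (a sketch; note the heredoc overwrites any existing workspace settings file):

```bash
# Hypothetical per-project override: prefer the lightweight model here
mkdir -p .vscode
cat > .vscode/settings.json <<'EOF'
{
  "commitMessageGenerator.model": "phi-3"
}
EOF
```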
## Legal & Ethics

### Neutrality Statement

Neither this project nor its author is affiliated with, endorsed by, or sponsored by:
- Ollama
- LM Studio
- Any model creators
Mentioned tools/models are personal preferences based on technical merits.
## Contributing 🤝

1. Fork the repository
2. Install dependencies:

   ```bash
   npm install
   ```

3. Build the extension:

   ```bash
   npm run package
   ```

4. Submit a PR with your changes
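To smoke-test a local build, you can install the packaged extension directly (the `.vsix` filename is illustrative; use whatever the build step actually produces):

```bash
code --install-extension ./ai-commit-message-*.vsix
```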
Full contribution guidelines
## License 📄
MIT License - View License
Built by Developers, for Developers - From quick fixes to production-grade
commits 💻⚡
Report Issue