🚀 Professional AI-powered commit message generation for Visual Studio Code. Leverage 13 different AI providers including OpenAI GPT-4o, Claude-3-5-sonnet, Gemini-2.5-flash, DeepSeek-reasoner, Grok-3, Perplexity-sonar, and 50+ models to create consistent, conventional commit messages that improve code history quality and team collaboration.
Installation
Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter.
Access 13 different AI providers with unified configuration and intelligent fallback handling, from zero-setup GitHub Copilot to privacy-focused local Ollama deployments, with support for GPT-4o, Claude-opus-4, Gemini-2.5-pro, DeepSeek-chat, Grok-3, Perplexity-sonar, Mistral-large, and 50+ additional models.
Advanced Git Integration
Smart diff analysis with automatic staging detection, binary file handling, and comprehensive repository state management. Handles complex scenarios including merge conflicts and mixed changes.
Conventional Commits Standard
Automatic formatting with proper type categorization (feat|fix|docs|style|refactor|test|chore), scope detection, and breaking change identification.
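The header format described above can be sketched with a small helper (the function name and shape here are illustrative, not the extension's internal API):

```typescript
// Build a Conventional Commits header: <type>(<scope>)!: <subject>
// Illustrative sketch only; not GitMind's actual implementation.
type CommitType = "feat" | "fix" | "docs" | "style" | "refactor" | "test" | "chore";

function formatCommitHeader(
  type: CommitType,
  subject: string,
  scope?: string,
  breaking = false
): string {
  const scopePart = scope ? `(${scope})` : "";
  const bangPart = breaking ? "!" : ""; // "!" marks a breaking change
  return `${type}${scopePart}${bangPart}: ${subject}`;
}

console.log(formatCommitHeader("feat", "add OAuth2 login flow", "auth"));
// feat(auth): add OAuth2 login flow
console.log(formatCommitHeader("refactor", "drop legacy config format", "core", true));
// refactor(core)!: drop legacy config format
```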
Intelligent Diff Processing
Enhanced change detection for staged and unstaged files with context-aware analysis and user guidance for complex scenarios.
Dynamic Model Selection
Real-time model browsing for compatible providers with search and filtering capabilities. 2M+ token context windows supported for large repositories with models like Gemini-2.5-pro and Claude-3-5-sonnet.
Smart Prompt Management
Save Last Custom Prompt feature allows you to automatically save and reuse custom prompts across commit generations. Saved prompts appear as defaults in future sessions with clipboard copy options for easy editing.
Professional Workflow Integration
Native VS Code SCM panel integration with loading indicators, comprehensive status feedback, and standardized prompt engineering across all providers.
AI Provider Ecosystem
Model Comparison & Selection Guide
| Provider | Featured Models | Context | Free Tier | Setup | Strengths | Limitations |
|---|---|---|---|---|---|---|
| GitHub Copilot | gpt-4o, claude-3.5-sonnet, o3 | 128k | No | 5 sec | Zero config, VS Code native | Requires subscription |
| Google Gemini | 2.5-flash, 2.5-pro, 2.0-flash | 2M | 15 RPM | 2 min | Massive context, thinking model | Rate limited (free) |
| Grok (X.ai) | grok-3, grok-3-fast, grok-3-mini | 128k | Limited | 2 min | Real-time data access, fast | X Premium required |
| DeepSeek | reasoner, chat | 128k | 50 RPM | 2 min | Advanced reasoning, cost-effective | Newer provider |
| Perplexity | sonar-pro, sonar-reasoning, sonar | 127k | Limited | 2 min | Real-time web search, reasoning | Paid tier recommended |
| Mistral AI | large-latest, medium, small | 128k | 1 RPM | 2 min | EU-compliant, multilingual | Low free tier limits |
| Ollama | deepseek-r1, llama3.3, phi4, qwen3 | 128k | Unlimited | 5 min | Complete privacy, no API costs | Hardware dependent |
| OpenAI | gpt-4o, gpt-4.1, o3, o4-mini | 128k | No | 2 min | Industry standard, multimodal | Paid only, higher cost |
| Anthropic | claude-opus-4, sonnet-4, haiku | 200k | No | 2 min | Superior reasoning, long context | Paid only |
| Together AI | Llama-3.3-70B, Mixtral-8x7B | 128k | 60 RPM | 2 min | Optimized inference, generous free tier | Variable model quality |
| Hugging Face | Mistral-7B, Zephyr-7B, OpenHermes | 32k | Varies | 2 min | Open source, customizable | Inconsistent performance |
| Cohere | command-r, command-a-03-2025 | 128k | 20 RPM | 2 min | RAG-optimized, retrieval focus | Limited model variety |
| OpenRouter | Multiple providers & models | Varies | Limited | 2 min | Access to premium models | Complex pricing |
Quick Selection Guide
For Immediate Use: GitHub Copilot with gpt-4o or claude-3.5-sonnet (existing subscription) or Google Gemini 2.5-flash (best free tier)
For Privacy: Ollama with local phi4, llama3.3:70b, or codellama deployment
For Performance: OpenAI GPT-4.1, Anthropic Claude-opus-4, or DeepSeek-reasoner
For Large Projects: Google Gemini-2.5-pro (2M context) or Anthropic Claude-sonnet-4 (200k context)
For Model Variety: OpenRouter, with access to multiple providers including google/gemma-3-27b-it:free and premium models from various providers
Technical Architecture
Advanced Diff Analysis Engine
Smart Staging Detection: Automatic detection of staged vs unstaged changes with user guidance
Binary File Handling: Proper detection and exclusion of binary files from analysis
Repository State Management: Comprehensive handling of complex git repository states
Edge Case Processing: Robust handling of empty diffs, merge conflicts, and repository anomalies
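The binary-file exclusion described above is often approximated with an extension-based filter; the extension list and helpers below are illustrative assumptions, not GitMind's actual implementation:

```typescript
// Illustrative: exclude likely-binary files from diff analysis by extension.
// The extension list is a hypothetical sample, not exhaustive.
const BINARY_EXTENSIONS = new Set([
  ".png", ".jpg", ".gif", ".ico", ".pdf", ".zip", ".woff", ".woff2", ".exe",
]);

function isLikelyBinary(path: string): boolean {
  const dot = path.lastIndexOf(".");
  if (dot === -1) return false; // no extension: assume text
  return BINARY_EXTENSIONS.has(path.slice(dot).toLowerCase());
}

function textFilesOnly(changedFiles: string[]): string[] {
  return changedFiles.filter((f) => !isLikelyBinary(f));
}

console.log(textFilesOnly(["src/app.ts", "logo.png", "README.md"]));
```

In practice, git itself also flags binary content in diffs, so an extension check like this would only be a first-pass filter.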
Commit Message Intelligence
Context Analysis: Deep analysis of file diffs, change patterns, and repository history powered by advanced models like GPT-4o, Claude-opus-4, and Gemini-2.5-pro
Scope Detection: Automatic identification of affected modules and components
Breaking Change Recognition: Intelligent detection of API changes and breaking modifications
Verbosity Control: Toggle between detailed descriptions and concise summaries
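One simple way to approximate the scope detection described above is to take the directory shared by all changed files; this sketch (a hypothetical heuristic, not the extension's real logic) does exactly that:

```typescript
// Illustrative: derive a commit scope from the path segment shared
// by all changed files. Returns undefined when changes are mixed.
function detectScope(changedFiles: string[]): string | undefined {
  // Drop a conventional leading "src/" so the scope names the module.
  const heads = changedFiles.map((f) => f.replace(/^src\//, "").split("/")[0]);
  const first = heads[0];
  if (first && !first.includes(".") && heads.every((h) => h === first)) {
    return first;
  }
  return undefined; // mixed or top-level changes: omit the scope
}

console.log(detectScope(["src/auth/login.ts", "src/auth/session.ts"])); // auth
console.log(detectScope(["src/auth/login.ts", "docs/guide.md"]));       // undefined
```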
Enterprise-Ready Configuration
Standardized Prompts: Consistent prompt templates across all 13 AI providers and 50+ models
Custom Context Enhancement: Optional domain-specific prompt customization with automatic saving and reuse
Smart Prompt Management: Save Last Custom Prompt feature for workflow efficiency and consistency
Token Optimization: Smart content truncation and cost management with pre-generation estimation
Rate Limit Management: Advanced monitoring with minute-based tracking and anomaly detection
Debug Mode: Comprehensive API interaction logging and response analysis
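Token budgeting of this kind is commonly approximated with a characters-per-token heuristic. The sketch below assumes the rough ~4-characters-per-token rule of thumb; both functions are hypothetical illustrations, not GitMind's actual optimizer:

```typescript
// Illustrative pre-generation token estimate using the common
// ~4-characters-per-token rule of thumb (a rough approximation only).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Truncate a diff so its estimate stays within a token budget,
// keeping the beginning, where file headers usually appear.
function truncateToBudget(diff: string, maxTokens: number): string {
  if (estimateTokens(diff) <= maxTokens) return diff;
  return diff.slice(0, maxTokens * 4) + "\n[... diff truncated ...]";
}

const diff = "x".repeat(10_000);
console.log(estimateTokens(diff)); // 2500
console.log(truncateToBudget(diff, 100).endsWith("[... diff truncated ...]")); // true
```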
Quick Start Guide
1. Choose Your Provider Strategy
For Immediate Use: GitHub Copilot with gpt-4o or claude-3.5-sonnet (if you have a VS Code Copilot subscription)
For Free Usage: Google Gemini-2.5-flash or DeepSeek-reasoner for the best free tier experience
For Privacy: Ollama with local deepseek-r1, llama3.3, phi4, qwen3, gemma3, or codellama deployment
For Performance: OpenAI gpt-4.1, Anthropic claude-opus-4, or Google gemini-2.5-pro
2. Configuration
Open VS Code Source Control panel
Click the settings icon in GitMind section
Select your preferred AI provider
Add API key (skip for GitHub Copilot and Ollama)
Choose optimal model for your use case
Optional: Enable "Prompt Customization" and "Save Last Custom Prompt" for enhanced workflow
3. Generate Commits
Stage your changes in Git
Click the "AI" button in Source Control panel
Optional: Add custom context (saved automatically if enabled)
Review and edit the generated commit message
Commit your changes
Model Selection Guidelines
Context Window Considerations
Large repositories/refactoring: 128k+ tokens (Gemini-2.5-pro, Claude-opus-4, GPT-4o)
Standard commits: 32k tokens sufficient (Mistral-medium, most models)
Intelligent change categorization with automatic detection of feature additions, bug fixes, documentation updates, and refactoring patterns using advanced models like Claude-opus-4 and GPT-4.1.
Smart Prompt Management System
Comprehensive prompt lifecycle management with automatic saving, intelligent reuse, and seamless workflow integration:
Automatic Saving: Custom prompts are automatically saved when "Save Last Custom Prompt" is enabled
Smart Defaults: Saved prompts appear as default values in future commit generations
Clipboard Integration: Built-in copy-to-clipboard functionality for external editing
Management Commands: Dedicated VS Code commands for viewing and clearing saved prompts
Persistent Storage: Prompts persist across VS Code sessions and workspaces using global configuration
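The save/reuse cycle above can be sketched against a minimal key-value store. Here a plain Map stands in for VS Code's global configuration, and the setting key is a hypothetical placeholder, not the extension's real identifier:

```typescript
// Illustrative stand-in for persistent global state (a key-value store).
const globalStore = new Map<string, string>();
const PROMPT_KEY = "gitmind.lastCustomPrompt"; // hypothetical key name

function saveLastPrompt(prompt: string, saveEnabled: boolean): void {
  // Only persist non-empty prompts, and only when the feature is enabled.
  if (saveEnabled && prompt.trim().length > 0) {
    globalStore.set(PROMPT_KEY, prompt);
  }
}

function defaultPrompt(): string {
  // The saved prompt becomes the default for the next generation.
  return globalStore.get(PROMPT_KEY) ?? "";
}

saveLastPrompt("Focus on the public API changes", true);
console.log(defaultPrompt()); // Focus on the public API changes
```

In the real extension this storage survives VS Code restarts, which a Map of course does not; the sketch only shows the control flow.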
Unified Prompt Architecture
Standardized prompt engineering across all 13 providers and 50+ models ensures consistent, high-quality commit messages regardless of the chosen AI model.
Advanced Prompt Management
Save Last Custom Prompt: Automatically save and reuse custom prompts across sessions
Smart Defaults: Saved prompts appear as default values with clipboard copy options
Prompt Commands: Dedicated commands for viewing (View Last Custom Prompt) and clearing (Clear Last Custom Prompt) saved prompts
Persistent Storage: Prompts persist across VS Code sessions and workspaces
Professional Integration
Native VS Code Source Control panel integration
Batch processing support for multiple file changes
Manual override with edit preservation
Comprehensive error handling with actionable guidance
Diagnostic & Monitoring Tools
Real-time token usage estimation
API response analysis and debugging
Rate limit monitoring and optimization
Model performance analytics
Configuration Options
Access settings via Command Palette: AI Commit Assistant: Open Settings or the settings icon in the Source Control panel.
Provider Management
AI provider selection with real-time validation
Secure API key configuration
Model selection with performance metrics
Prompt Customization
Enable custom context prompts for commit generation
Save Last Custom Prompt toggle for automatic prompt reuse
Persistent prompt storage across sessions
Message Formatting
Conventional commit standard compliance
Verbosity level control
Custom scope and type configuration
Advanced Settings
Debug mode for development
Custom prompt templates
Token usage optimization
Rate limit configuration
Requirements
Visual Studio Code ^1.100.0
Git repository (initialized)
API key from chosen provider OR Ollama installation for local deployment
Professional Use Cases
Individual Developers
Maintain consistent commit history with intelligent message generation that adapts to your coding patterns and project context using models like GPT-4o, Claude-3.5-sonnet, and Gemini-2.5-flash.
Team Collaboration
Standardize commit message formats across team members with configurable conventional commit standards and custom prompt templates powered by enterprise-grade models.
Enterprise Deployment
Scale across organizations with support for multiple AI providers, usage analytics, and centralized configuration management using premium models like Claude-opus-4 and GPT-4.1.
Open Source Projects
Leverage free tier providers (Gemini-2.5-flash, DeepSeek-reasoner) or local Ollama deployment (deepseek-r1, phi4, llama3.3, qwen3, gemma3) to maintain professional commit standards without API costs.
GitMind transforms your development workflow with intelligent, context-aware commit message generation. Supporting 13 AI providers with 50+ models, including GPT-4o, Claude-opus-4, Gemini-2.5-pro, DeepSeek-R1, Llama-3.3, and Phi-4, plus advanced diff analysis, it delivers professional-grade commit messages that improve code history quality and team collaboration efficiency.
Development and Testing
GitMind includes a comprehensive automated test suite to ensure reliability and quality before publication.
Test Coverage
The extension features 7 comprehensive test suites covering all main functionality:
Settings UI Testing: UI persistence, validation, and provider switching
AI Providers Testing: All 13 providers with API validation and model selection
Extension Commands Testing: All commands, error handling, and status management
Git Integration Testing: Diff parsing, commit messages, and repository validation
Webview Components Testing: Settings panel, onboarding, and messaging
Error Handling Testing: API errors, network issues, rate limits, and recovery
Configuration Management Testing: Settings persistence, schema validation, and migration
Quality Assurance
✅ 100% TypeScript compilation with no errors or warnings
✅ Complete VS Code API mocking for isolated testing
✅ Comprehensive error simulation for robust error handling
✅ End-to-end integration testing for all workflows
✅ Modular test architecture for maintainability
Running Tests
```bash
# Compile and run all tests
npm test

# Check test status and coverage
npm run test:status

# Validate test compilation
npm run test:validate
```
Telemetry
GitMind collects anonymous usage data to help us understand how the extension is used and to identify areas for improvement.
What data is collected:
Usage Analytics: Command usage frequency, success/failure rates, and performance metrics
Provider Statistics: Which AI providers are used (but not API keys or responses)
Technical Information: VS Code version, OS platform, extension version
Error Reports: Anonymous error logs and exception details
User Flow: Navigation patterns within the extension
What is NOT collected:
No code content: Your actual code, commit messages, or diff content
No personal information: Names, emails, or other personal identifiers
No API keys: Your API credentials are never transmitted
No repository information: Project names, file paths, or repository details
How to disable telemetry:
You can disable telemetry at any time by:
Opening VS Code Settings (Ctrl/Cmd + ,)
Searching for "telemetry"
Setting "Telemetry: Telemetry Level" to "off"
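Equivalently, the same setting can be added directly to your settings.json (this is VS Code's built-in telemetry setting, which GitMind honors):

```json
{
  "telemetry.telemetryLevel": "off"
}
```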
GitMind respects your privacy settings and will not collect any data if telemetry is disabled.
Data usage:
The collected data helps us:
Improve extension reliability and performance
Understand which features are most valuable
Prioritize development efforts
Fix bugs and compatibility issues
All data is processed in accordance with Microsoft's privacy policies and is used solely for improving the GitMind extension.