Self Hosted AI Code Review
Empower your development team with intelligent, automated code reviews powered by self-hosted AI models. Keep your code secure within your infrastructure while getting the benefits of AI-assisted code analysis.
🚀 Why Self Hosted AI Code Review?
Traditional code reviews are time-consuming and can create bottlenecks in your development workflow. This extension brings AI-powered code analysis directly into your Azure DevOps pipelines, helping catch issues early while maintaining complete control over your data.
Key Benefits
✅ Privacy First - All processing happens on your infrastructure. Your code never leaves your environment.
✅ Cost Effective - No API costs or usage limits. Run unlimited reviews using your own hardware.
✅ Fully Customizable - Choose from various open-source models and fine-tune for your specific needs.
✅ Fast Feedback - Get instant AI-powered reviews on every Pull Request or Gated Check-in.
✅ Git & TFVC Support - Works seamlessly with both Azure Repos Git and TFVC repositories.
🎯 Perfect For
- Enterprise Teams requiring data sovereignty and security compliance
- Organizations with strict data privacy regulations
- Development Teams wanting to reduce code review bottlenecks
- Projects with high-volume code changes needing automated first-pass reviews
- Teams working in air-gapped or restricted network environments
💡 How It Works
Git Pull Requests
- Developer creates or updates a Pull Request
- Azure Pipeline automatically triggers
- AI reviews each changed file and identifies potential issues
- Intelligent comments appear directly in the PR thread
- Team reviews AI feedback alongside human code review
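The Git flow above hinges on a branch policy build validation that runs a pipeline on every PR update. A minimal `azure-pipelines.yml` for that pipeline might look like the following sketch (the agent pool name is an illustrative assumption; the task inputs match the Quick Setup section below):

```yaml
# PR builds are triggered by the branch policy's build validation, not CI.
trigger: none

pool:
  name: SelfHostedAgents   # assumption: an agent pool where Ollama is installed

steps:
  - task: SelfHostedAICodeReview@1
    inputs:
      apiUrl: "http://localhost:11434"
      model: "deepseek-coder:1.3b-instruct"
```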
TFVC Gated Check-ins
- Developer shelves changes and triggers gated check-in
- Build pipeline fetches shelveset contents
- AI analyzes each modified file
- Review results appear in Build Summary
- Critical issues can block check-in to maintain code quality
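One documented way a pipeline task can surface results in the Build Summary tab is the `##vso[task.uploadsummary]` logging command, which the agent picks up from the task's stdout. A hedged sketch of that mechanism (the file name and summary text are illustrative, not the extension's actual output):

```python
import os
import tempfile

def attach_summary(markdown: str) -> str:
    """Write review results to a markdown file and emit the Azure Pipelines
    logging command that attaches it to the Build Summary."""
    path = os.path.join(tempfile.gettempdir(), "ai-review-summary.md")
    with open(path, "w", encoding="utf-8") as f:
        f.write(markdown)
    # Documented Azure Pipelines logging command; the agent parses it from stdout.
    command = f"##vso[task.uploadsummary]{path}"
    print(command)
    return command

cmd = attach_summary("### AI Review\n- No critical issues found")
```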
🤖 Powered by Leading AI Models
Choose from a variety of state-of-the-art code analysis models:
- DeepSeek Coder (1.3B - 33B) - Specialized for code understanding and generation
- CodeLlama (7B - 34B) - Meta's purpose-built code model
- StarCoder - Open-source model trained on permissively licensed code
- Custom Models - Bring your own fine-tuned models
All models run locally via Ollama, ensuring your code stays private and secure.
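Talking to a local Ollama instance means POSTing JSON to its `/api/generate` endpoint (`http://localhost:11434/api/generate`). A sketch of how a review request body could be assembled; the prompt wording is an assumption, not the extension's actual prompt:

```python
def build_review_request(model: str, diff: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        # Illustrative prompt; a real reviewer would use a more detailed template.
        "prompt": f"Review the following code change and list potential issues:\n{diff}",
        "stream": False,  # ask for one JSON response instead of a token stream
    }

req = build_review_request("deepseek-coder:1.3b-instruct", "+ if user == None:")
```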
⚡ Features at a Glance
Intelligent Analysis
- Identifies potential bugs and logic errors
- Suggests code improvements and best practices
- Detects security vulnerabilities
- Highlights code complexity issues
- Reviews coding standards compliance
Flexible Configuration
- Multiple model support with automatic fallback
- Customizable file exclusion patterns
- Support for wildcards and glob patterns
- Configurable API endpoints
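Exclusion patterns of this kind are typically plain glob matches against the file path. A minimal sketch using Python's `fnmatch`, with illustrative patterns rather than the extension's actual defaults:

```python
from fnmatch import fnmatch

# Illustrative exclusion globs; the extension's input name and defaults may differ.
EXCLUDE_PATTERNS = ["*.min.js", "package-lock.json", "dist/*", "docs/**"]

def is_excluded(path: str, patterns=EXCLUDE_PATTERNS) -> bool:
    """Return True if the file path matches any exclusion pattern."""
    return any(fnmatch(path, p) for p in patterns)

print(is_excluded("dist/bundle.js"))  # True: matches dist/*
print(is_excluded("src/app.ts"))      # False: no pattern matches
```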
Seamless Integration
- Works with existing Azure DevOps workflows
- No changes required to repository structure
- Automatic cleanup of previous review comments
- Automatically detects and skips binary files
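Binary detection is commonly done with the same heuristic Git uses: look for a NUL byte near the start of the file. A minimal sketch (the sample size is an arbitrary choice):

```python
def looks_binary(data: bytes, sample_size: int = 8000) -> bool:
    """Heuristic used by many tools (including Git): treat a file as
    binary if a NUL byte appears in its first few kilobytes."""
    return b"\x00" in data[:sample_size]

print(looks_binary(b"def main():\n    pass\n"))    # False: plain text
print(looks_binary(b"\x89PNG\r\n\x1a\n\x00\x00"))  # True: PNG header contains NUL
```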
Developer Friendly
- Clear, actionable feedback
- Context-aware suggestions
- Non-intrusive workflow integration
- Easy to enable/disable per pipeline
📊 Real-World Impact
Before AI Code Review:
- ⏱️ Average review time: 4-6 hours
- 🐛 Issues found: Varies by reviewer availability
- 📉 Review bottlenecks during peak times
After AI Code Review:
- ⚡ Initial review: automated feedback in under 2 minutes
- 🎯 Consistent quality checks on every change
- 📈 Human reviewers focus on architecture and logic
- 🚀 Faster merge times
🔒 Security & Compliance
- On-Premise Deployment - Run entirely within your network
- No Data Leakage - Code never transmitted to external services
- Access Control - Leverages Azure DevOps security model
- Audit Trail - All reviews logged in pipeline history
- Compliance Ready - Supports GDPR, SOC 2, and similar compliance requirements
🛠️ Quick Setup (5 Minutes)
1. Install Ollama on your build agent

```shell
curl -fsSL https://ollama.ai/install.sh | sh
```

2. Pull an AI model

```shell
ollama pull deepseek-coder:1.3b-instruct
```

3. Add the task to your pipeline

```yaml
- task: SelfHostedAICodeReview@1
  inputs:
    apiUrl: "http://localhost:11434"
    model: "deepseek-coder:1.3b-instruct"
```

4. Enable OAuth token access in the pipeline settings

That's it! Your next PR will receive an automated AI review.
📈 Flexible Model Selection
Choose the right model for your needs:
| Model Size | Review Speed | Detail Level | Best For |
|------------|--------------|--------------|----------|
| 1.3B | ⚡⚡⚡ Fastest | Good | Quick checks, frequent commits |
| 6.7B | ⚡⚡ Fast | Better | Balanced performance |
| 13B+ | ⚡ Detailed | Best | Critical code, releases |
All models run locally - pick based on your hardware capabilities and review depth requirements.
🌐 Works With Your Workflow
Supported Scenarios
- ✅ Azure Repos Git Pull Requests
- ✅ TFVC Gated Check-ins and Shelvesets
- ✅ Multi-stage pipelines
- ✅ Template-based pipelines
- ✅ Self-hosted and Microsoft-hosted agents
Integration Points
- Pull Request comments and threads
- Build summary attachments (TFVC)
- Pipeline logs and artifacts
- Status checks and gates
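PR comments land through the Azure DevOps REST API's Pull Request Threads endpoint (`POST .../pullRequests/{id}/threads?api-version=7.0`). A sketch of the documented request body a reviewer task could send; the comment text, file path, and line number are illustrative:

```python
def build_pr_thread(comment: str, file_path: str, line: int) -> dict:
    """Build the request body for the Pull Request Threads - Create API.
    threadContext pins the comment to a line on the PR's target side."""
    return {
        "comments": [
            {"parentCommentId": 0, "content": comment, "commentType": 1}  # 1 = text
        ],
        "status": "active",
        "threadContext": {
            "filePath": file_path,
            "rightFileStart": {"line": line, "offset": 1},
            "rightFileEnd": {"line": line, "offset": 1},
        },
    }

thread = build_pr_thread("Possible null dereference here.", "/src/app.ts", 42)
```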
⚡ Performance
- Smart Caching - Ollama caches model layers for faster subsequent reviews
- Parallel Processing - Multiple files reviewed concurrently
- Binary Detection - Automatically skips non-reviewable files
- Incremental Reviews - Only analyzes changed portions of files
- GPU Acceleration - Optional GPU support for 3-10x speed boost
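Concurrent per-file reviews can be structured with a simple worker pool. In this sketch, `review_file` is a hypothetical stand-in for the actual model call:

```python
from concurrent.futures import ThreadPoolExecutor

def review_file(path: str) -> str:
    """Placeholder for a per-file review; a real implementation would POST
    the file's diff to the configured Ollama endpoint and collect findings."""
    return f"reviewed {path}"

changed_files = ["src/app.ts", "src/api.ts", "README.md"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order even though workers run concurrently
    results = list(pool.map(review_file, changed_files))
print(results)
```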
🎓 Getting the Most Value
Best Practices
- Start Small - Begin with a smaller model to test workflow
- Tune Exclusions - Skip generated files, lock files, and documentation
- Combine with Human Review - Use AI as first pass, humans for final approval
- Monitor Performance - Track review times and adjust model size accordingly
- Iterate - Refine exclusion patterns based on team feedback
Common Use Cases
- Pre-commit Checks - Catch issues before code review requests
- Security Scanning - Identify potential vulnerabilities automatically
- Standards Enforcement - Ensure coding guidelines are followed
- Knowledge Sharing - Help junior developers learn best practices
- Technical Debt - Flag areas needing refactoring
🔄 Continuous Improvement
The extension supports any Ollama-compatible model, meaning you can:
- Upgrade to newer, better models as they're released
- Fine-tune models on your codebase for better accuracy
- Switch models based on project requirements
- Test multiple models to find the best fit
🤝 Support & Community
- Documentation - Comprehensive guides and examples
- GitHub Issues - Report bugs and request features
- Regular Updates - New features and model support
- Community Driven - Open to contributions and feedback
🚀 Get Started Today
Transform your code review process with AI-powered analysis while maintaining complete control over your data and infrastructure.
Install now from Azure DevOps Marketplace → [Install Extension]
System Requirements
- Azure DevOps Server 2019+ or Azure DevOps Services
- Build Agent with Node.js 20+
- Ollama installed (or OpenAI-compatible API)
- OAuth Token access enabled in pipeline
Pricing
Free - This extension is free to use. You only need to provide the infrastructure to run Ollama.
Questions? Check our comprehensive README or reach out for support.
Built by developers, for developers 🛠️