Advanced Azure DevOps PR Reviewer
An intelligent, AI-powered Pull Request reviewer for Azure DevOps that uses Azure OpenAI to provide precise, contextual code reviews.
🚀 Features
🤖 Advanced AI-Powered Review
- Advanced Reasoning Pipelines: Uses structured multi-step prompts to analyze code systematically
- Azure OpenAI Integration: Leverages state-of-the-art language models for accurate code analysis
- Context-Aware Review: Understands PR context and only writes reviews when necessary
- Capped LLM Usage: At most 100 LLM calls per review by default, with a configurable limit for efficient resource usage
🔍 Comprehensive Code Analysis
- Code Quality Review: Identifies bugs, performance issues, and maintainability concerns
- Security Scanning: Detects vulnerabilities like SQL injection, XSS, hardcoded secrets
- Style & Standards: Ensures adherence to coding standards and best practices
- Test Coverage: Analyzes test adequacy and suggests improvements
- Precise Diff Tracking: Comments are anchored to the exact modified lines with automatic range selection, even when Azure DevOps omits diff hunks
🛠️ Azure DevOps Integration
- Inline Comments: Posts specific feedback directly on code lines
- File-Level Comments: Provides comprehensive file overviews
- PR Summary: Generates detailed review summaries with actionable recommendations
- Smart Filtering: Skips binary files and focuses on reviewable code
📊 Intelligent Decision Making
- Context Analysis: Determines if detailed review is needed based on PR scope
- Confidence Scoring: Only suggests changes above configurable confidence thresholds
- Actionable Feedback: Provides specific code suggestions and improvements
- Review Recommendations: Suggests approve, approve with suggestions, or request changes
🏗️ Architecture
The extension orchestrates a structured review pipeline:
```
PR Context → Context Analysis → File Review → Security Scan → Code Suggestions → Final Assessment
     ↓              ↓               ↓               ↓                ↓                   ↓
 Determine     Review Each      Security        Generate        Post Results       Task Result
Review Need        File         Analysis       Suggestions        to Azure          & Summary
```
Core Components
- Review Orchestrator: Coordinates the entire review process
- LLM Orchestrator: Manages the reasoning flow and LLM interactions
- Azure DevOps Service: Handles all Azure DevOps API interactions
- Review State Management: Tracks review progress and maintains context
📋 Prerequisites
Azure OpenAI Setup
- Azure OpenAI Resource: Create an Azure OpenAI resource in your Azure subscription
- Model Deployment: Deploy a model such as gpt-5-codex (or an older GPT-4-class model)
- API Access: Ensure your Azure DevOps pipeline has access to the Azure OpenAI endpoint
- Preview Models: For GPT‑4.1/GPT‑5 deployments, use the latest preview API version (e.g., `2024-08-01-preview`) and enable the Responses API input
Azure DevOps Configuration
Build Service Permissions: The build service needs permissions to:
- Read repository content
- Create and manage PR comments
- Access PR details and changes
- Contribute to pull requests (Project Settings → Repos → Repositories → Security → select <ProjectName> Build Service → grant Contribute to pull requests)
Pipeline Variables: Configure the following variables:
- `azure_openai_endpoint`: Your Azure OpenAI endpoint URL
- `azure_openai_api_key`: Your Azure OpenAI API key
- `azure_openai_deployment_name`: Your model deployment name
Expose the OAuth Token to the Job
The extension posts inline comments via the pipeline’s OAuth token. Make sure scripts can access it:
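For example, in a YAML pipeline the checkout step can persist the token for later steps (classic pipelines use the "Allow scripts to access the OAuth token" agent job option instead):

```yaml
steps:
  - checkout: self
    persistCredentials: true  # keep the job's OAuth token available after checkout
```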
Endpoint Format Reminder
Azure endpoints follow `https://{resource}.openai.azure.com/openai/deployments/{deployment}/responses?api-version={version}`. Older GPT‑3.5/4 deployments that still require `/chat/completions` should keep using the legacy endpoint.
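For instance, with an Azure OpenAI resource named contoso-openai and a deployment named gpt-5-codex (illustrative names), the task would call:

```
https://contoso-openai.openai.azure.com/openai/deployments/gpt-5-codex/responses?api-version=2025-04-01-preview
```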
🚀 Installation
1. Install the Extension
- Download the extension from the Azure DevOps marketplace
- Install it in your Azure DevOps organization
2. Add to Pipeline
Add the task to your Azure DevOps pipeline YAML:
```yaml
- task: GENAIADVANCEDPRREVIEWER@2
  continueOnError: true
  inputs:
    azure_openai_endpoint: '$(azure_openai_endpoint)'
    azure_openai_api_key: '$(azure_openai_api_key)'
    azure_openai_deployment_name: '$(azure_openai_deployment_name)'
    azure_openai_api_version: '$(azure_openai_api_version)'
    azure_openai_use_responses_api: true
    max_llm_calls: '100'
    review_threshold: '0.7'
    enable_code_suggestions: true
    enable_security_scanning: true
    support_self_signed_certificate: false
```
Set up pipeline variables in Azure DevOps:
```yaml
variables:
  azure_openai_endpoint: https://yourendpoint.openai.azure.com
  azure_openai_deployment_name: gpt-5-codex
  azure_openai_api_version: 2025-04-01-preview
  # store the key as a secret variable
  azure_openai_api_key: $(azure_openai_api_key)
```
⚙️ Configuration Options
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `azure_openai_endpoint` | string | ✅ | - | Azure OpenAI endpoint URL |
| `azure_openai_api_key` | string | ✅ | - | Azure OpenAI API key |
| `azure_openai_deployment_name` | string | ✅ | - | Model deployment name |
| `max_llm_calls` | string | ❌ | 100 | Maximum LLM calls allowed |
| `review_threshold` | string | ❌ | 0.7 | Confidence threshold for suggestions |
| `enable_code_suggestions` | boolean | ❌ | true | Enable AI code suggestions |
| `enable_security_scanning` | boolean | ❌ | true | Enable security vulnerability scanning |
| `support_self_signed_certificate` | boolean | ❌ | false | Support self-signed certificates |
| `azure_openai_api_version` | string | ❌ | 2025-04-01-preview | Azure OpenAI API version (preview required for GPT‑5 deployments) |
| `azure_openai_use_responses_api` | boolean | ❌ | false | Call the modern Responses API (required for GPT‑4.1 and GPT‑5 deployments) |
| `mcp_servers` | multi-line string | ❌ | - | JSON array describing MCP servers that enrich each review with additional context |
🔌 MCP Server Integration
Model Context Protocol (MCP) servers let you plug repository-specific knowledge bases or business rules into the reviewer. Provide them as a JSON array via the mcp_servers input (typically using a multi-line string in YAML).
YAML configuration example
```yaml
- task: GENAIADVANCEDPRREVIEWER@2
  inputs:
    azure_openai_endpoint: 'https://your-resource.openai.azure.com/'
    azure_openai_api_key: '$(AZURE_OPENAI_API_KEY)'
    azure_openai_deployment_name: 'gpt-5-codex'
    mcp_servers: |
      [
        {
          "name": "repository-knowledge",
          "endpoint": "https://example.com/mcp/context",
          "headers": {
            "Authorization": "Bearer $(MCP_TOKEN)"
          },
          "timeoutMs": 8000,
          "payloadTemplate": "{\"query\":\"best practices for {{file_path}}\",\"fileDiff\":\"{{file_diff}}\",\"pr\":\"{{pr_context}}\"}"
        }
      ]
```
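Note that `$(MCP_TOKEN)` above is expanded by Azure Pipelines at run time, so store it as a secret pipeline variable rather than hard-coding the token in YAML.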
Supported fields
- `name` (required): Friendly identifier used in logs.
- `endpoint` (required): HTTP URL of the MCP server endpoint.
- `method`: HTTP method (POST by default).
- `headers`: Additional request headers (e.g., bearer tokens).
- `timeoutMs`: Request timeout in milliseconds (defaults to 10s).
- `payloadTemplate`: Optional JSON template string. The agent replaces placeholders like `{{file_path}}`, `{{file_diff}}`, `{{file_content}}`, `{{pr_context}}`, and `{{metadata}}` before sending the request. When omitted, a default payload containing the diff, file content, and PR metadata is used.
Response expectations
- Plain strings are treated as context items.
- JSON arrays should contain strings or objects with a `text` property.
- JSON objects can return `context`, `contexts`, `content`, or `summary` fields (strings or string arrays).
- Non-parsable responses are captured as raw text, ensuring the reviewer still receives the additional context.
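For illustration, an object response using the `context` field might look like the following (values are hypothetical); a bare JSON string or an array of strings is accepted just as well:

```json
{
  "context": [
    "Services in this repository must use constructor injection.",
    "All public API handlers require input validation."
  ]
}
```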
🔧 How It Works
1. Context Analysis
The agent first analyzes the PR context to determine if a detailed review is necessary:
- PR title and description
- Changed files and scope
- Branch information
- Author and reviewer details
2. File-by-File Review
For each changed file, the agent:
- Retrieves file content and diff
- Performs comprehensive code analysis
- Identifies issues and improvements
- Generates specific suggestions
3. Security Analysis
When enabled, performs security scanning for:
- SQL injection vulnerabilities
- XSS and injection attacks
- Hardcoded secrets
- Insecure authentication patterns
- Input validation issues
4. Code Suggestions
Generates actionable improvements:
- Before/after code examples
- Performance optimizations
- Readability improvements
- Best practice recommendations
5. Final Assessment
Provides comprehensive review summary:
- Overall quality assessment
- Issue categorization and counts
- Approval recommendations
- Actionable next steps
📊 Review Output
- 🐛 Bug: Logic errors and functional issues
- 🔒 Security: Security vulnerabilities and concerns
- 💡 Improvement: Code quality and maintainability suggestions
- 🎨 Style: Coding standards and formatting issues
- 🧪 Test: Test coverage and testing recommendations
Review Summary
The extension posts a comprehensive summary comment including:
- Overall assessment (approve/approve with suggestions/request changes)
- Statistics on issues found by category
- Summary of key findings
- Specific recommendations for the PR author
🎯 Best Practices
For Developers
- Clear PR Descriptions: Provide context about what the PR accomplishes
- Focused Changes: Keep PRs focused on single concerns
- Test Coverage: Include tests for new functionality
- Code Standards: Follow your team's coding standards
For Pipeline Administrators
- Resource Management: Set an appropriate `max_llm_calls` value based on your needs
- Threshold Tuning: Adjust `review_threshold` to match team preferences (e.g., raising it from the default 0.7 to 0.85 surfaces only high-confidence suggestions)
- Security Scanning: Enable security scanning for production code
- Monitoring: Monitor LLM usage and costs
- OAuth Token Access: Confirm `persistCredentials: true` (or the classic "Allow scripts to access the OAuth token" toggle) so the reviewer can post PR comments
For Teams
- Review Culture: Use the extension as a learning tool, not just a gate
- Feedback Integration: Incorporate AI suggestions into team coding standards
- Continuous Improvement: Regularly review and adjust configuration
- Knowledge Sharing: Use AI insights to improve team coding practices
🔍 Troubleshooting
Common Issues
Authentication Errors
- Verify Azure OpenAI API key is correct
- Ensure the key has access to the specified deployment
- Check if the key has expired
Permission Errors
- Verify build service has repository read access
- Ensure build service can create PR comments
- Check organization-level permissions
High LLM Usage
- Reduce `max_llm_calls` if hitting limits
- Adjust `review_threshold` to filter out low-confidence suggestions
- Ensure the PR branch contains actual line modifications (not whitespace-only changes)
- Check pipeline logs for `🔧 Built fallback unified diff` messages; these confirm the reviewer successfully reconstructed diff hunks
- Verify the Azure DevOps build service has permission to call the PR diff APIs (`pullRequests/{id}/changes`, `diffs/commits`)
Azure OpenAI 400 Bad Request
- GPT-4.1/GPT-5 deployments require newer API versions (e.g., `2024-08-01-preview`); set the `azure_openai_api_version` input accordingly
- Enable the `azure_openai_use_responses_api` flag for models that only support the Responses API
- Review the task logs for the exact error body; it is surfaced when the request fails
Slow or Timed-Out Reviews
- Consider disabling code suggestions for large PRs
- Monitor Azure OpenAI service performance
- Check network connectivity to Azure OpenAI
- Consider using smaller models for faster responses
Logging
The extension provides detailed logging:
- Configuration validation
- File processing progress
- LLM call tracking
- Error details and stack traces
Verbose logging
You can enable verbose debug logs (shows LLM prompts and response previews) by setting the environment variable ADVPR_VERBOSE=1. The task manifest sets this by default for the packaged task, but you can override it in your pipeline or agent environment if you prefer quieter logs.
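For example, to get quieter logs you can override the variable on the task itself via the standard `env` mapping (a sketch; this assumes the task only enables verbose output when the value is `1`):

```yaml
- task: GENAIADVANCEDPRREVIEWER@2
  env:
    ADVPR_VERBOSE: '0'  # suppress LLM prompt/response previews for this run
  inputs:
    # ... same inputs as in the installation example ...
```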
LLM Call Optimization
- Context Analysis: 1-2 calls per PR
- File Review: 3-5 calls per file (depending on complexity)
- Security Scan: 1-2 calls per file
- Code Suggestions: 1-2 calls per file with issues
- Final Assessment: 1 call per PR
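As a rough worked example, a 10-file PR at the upper end of these ranges would use about 2 + (10 × 5) + (10 × 2) + (10 × 2) + 1 = 93 calls, just under the default `max_llm_calls` budget of 100.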
Cost Considerations
- Monitor Azure OpenAI usage and costs
- Adjust `max_llm_calls` based on budget constraints
- Use appropriate model tiers for your needs
- Consider batch processing for large repositories
🔮 Future Enhancements
Planned Features
- Custom Review Templates: Team-specific review criteria
- Integration with SonarQube: Combined static and AI analysis
- Multi-Language Support: Enhanced support for various programming languages
- Review History: Track review quality and improvement over time
- Team Learning: Share insights across team members
Extensibility
- Plugin Architecture: Support for custom review modules
- API Integration: Webhook support for external tools
- Custom Models: Support for fine-tuned models
- Review Workflows: Configurable review processes
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Setup
- Clone the repository
- Install dependencies: `npm install`
- Build the project: `npm run build`
- Run tests: `npm test`
Code Standards
- Follow TypeScript best practices
- Include comprehensive error handling
- Add unit tests for new features
- Update documentation for changes
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Azure OpenAI Team: For providing the underlying AI capabilities
- Open Source Community: For inspiration and feedback on advanced review workflows
- Azure DevOps Team: For the robust platform and APIs
- Open Source Contributors: For the various libraries and tools used
📞 Support
- Issues: Report bugs and feature requests on GitHub
- Documentation: Check this README and inline code comments
- Community: Join our discussions and share experiences
- Enterprise: Contact us for enterprise support and customization
Made with ❤️ for the Azure DevOps community