Open LLM Council (OLC)
Professional multi-model AI deliberation for VS Code
Open LLM Council brings the power of ensemble AI to your development workflow. Instead of relying on a single model's perspective, consult multiple AI models simultaneously and receive a synthesized, comprehensive answer.

Features
Multi-Model Consultation
Consult multiple AI models (GPT-4, Claude, Gemini, and more) in a single query. Each model provides its unique perspective on your question.
Intelligent Synthesis
A synthesis model combines all responses into a unified, comprehensive answer that captures the best insights from each perspective.
Flexible Configuration
- Council Size: Select minimal (2), standard (3), or extended (4) models per query
- Synthesis Control: Configure which model synthesizes the final answer
Multiple Interfaces
- Chat Participant: Use `@council` directly in GitHub Copilot Chat
- Dedicated Panel: Full-featured webview UI for comprehensive deliberations
- Activity Bar: Quick access from the VS Code sidebar
How It Works
Open LLM Council implements a 3-stage deliberation process:
| Stage | Description |
|-------|-------------|
| 1. Gather | Your question is sent to multiple AI models simultaneously |
| 2. Review | (Optional) Models review and critique each other's responses |
| 3. Synthesize | A chairman model combines all perspectives into a final answer |
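The three stages can be sketched in TypeScript. This is a minimal illustration with hypothetical names, not the extension's actual internals (which obtain their models through GitHub Copilot):

```typescript
// Hypothetical sketch of the gather → review → synthesize pipeline.
// A ModelFn stands in for a call to one language model.
type ModelFn = (prompt: string) => Promise<string>;

async function deliberate(
  question: string,
  council: Record<string, ModelFn>,
  chairman: ModelFn,
  debateMode = false
): Promise<string> {
  const members = Object.entries(council);

  // Stage 1: Gather — query all council members in parallel.
  let answers = await Promise.all(
    members.map(async ([name, model]) => `${name}: ${await model(question)}`)
  );

  // Stage 2: Review (optional) — each member critiques the pooled answers.
  if (debateMode) {
    const pooled = answers.join("\n");
    answers = await Promise.all(
      members.map(async ([name, model]) =>
        `${name} (revised): ${await model(`Critique and improve:\n${pooled}`)}`
      )
    );
  }

  // Stage 3: Synthesize — the chairman merges all perspectives.
  return chairman(`Synthesize a final answer from:\n${answers.join("\n")}`);
}
```

Because stage 1 fans out with `Promise.all`, adding council members widens the query rather than lengthening it; only the optional review stage adds a second round trip.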
Usage
Chat Participant
In GitHub Copilot Chat, use the `@council` participant:

```
@council How should I structure a React application with authentication?
```
Commands
| Command | Description |
|---------|-------------|
| `@council <question>` | Standard council deliberation |
| `@council /quick <question>` | Quick mode with fewer models |
| `@council /debate <question>` | Full deliberation with peer review |
| `@council /models` | List available AI models |
Webview Panel
Open the dedicated UI panel:
- Press `Ctrl+Shift+P` (or `Cmd+Shift+P` on Mac)
- Run: `Open LLM Council: Open Council Panel`
Or click the Open LLM Council icon in the Activity Bar.
Applying Settings
After changing settings, click the 🔄 Refresh button in the sidebar header to apply them immediately.
Configuration
Access settings via File > Preferences > Settings and search for "Open LLM Council".
| Setting | Options | Description |
|---------|---------|-------------|
| `councilSize` | minimal, standard, extended | Number of models to consult (2/3/4) |
| `councilMember1-4` | Any available model | Choose specific council members |
| `chairmanModel` | Any available model | Model for final synthesis |
| `enableDebateMode` | true/false | Enable peer review stage |
| `streamResponses` | true/false | Stream responses in real time |
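A sample `settings.json` fragment, assuming the settings above live under an `openLlmCouncil.` prefix and that model identifiers match the names below (check the Settings UI for the exact keys and values):

```json
{
  "openLlmCouncil.councilSize": "standard",
  "openLlmCouncil.councilMember1": "gpt-4.1",
  "openLlmCouncil.councilMember2": "gpt-4o",
  "openLlmCouncil.chairmanModel": "gpt-4o",
  "openLlmCouncil.enableDebateMode": true,
  "openLlmCouncil.streamResponses": true
}
```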
Available Models
| Model | Description |
|-------|-------------|
| GPT-4.1 | OpenAI's advanced reasoning model |
| GPT-4o | OpenAI's fast, capable model |
| GPT-5 mini | OpenAI's latest-generation compact model |
| Grok Code Fast 1 | xAI's fast code-assistance model |
Requirements
- VS Code: Version 1.85.0 or higher
- GitHub Copilot: Active subscription (Free, Pro, Business, or Enterprise)
- GitHub Copilot Chat: Extension must be installed
Troubleshooting
"GitHub Copilot Not Ready" message
If you see this message, Copilot is not properly connected:
- Open GitHub Copilot Chat - Press `Ctrl+Shift+I` (or `Cmd+Shift+I` on Mac)
- Sign in - Make sure you're signed into your GitHub account
- Check subscription - Verify your Copilot subscription is active
- Reload - Click the "Retry" button in the extension or reload VS Code
Models not available
If certain models aren't appearing:
- Update VS Code - Ensure you have VS Code 1.85.0 or higher
- Update Copilot - Update GitHub Copilot Chat extension to the latest version
- Use the `/models` command - Type `@council /models` in Copilot Chat to see available models
Extension not responding
- Reload Window - Press `Ctrl+Shift+P` → "Developer: Reload Window"
- Check Output - View → Output → Select "Open LLM Council" to see logs
- Disable/Enable - Try disabling and re-enabling the extension
Installation
From VS Code Marketplace
- Open VS Code
- Go to Extensions (`Ctrl+Shift+X`)
- Search for "Open LLM Council"
- Click Install
From VSIX
```bash
code --install-extension open-llm-council-2.0.0.vsix
```
Development
```bash
# Clone the repository
git clone https://github.com/ChetanJain281/llm-council.git
cd llm-council

# Install dependencies
npm install

# Compile
npm run compile

# Watch mode for development
npm run watch

# Run extension (Press F5 in VS Code)
```
Building for Production
```bash
# Type check, lint, and bundle
npm run package

# Create VSIX package
npm run package:vsix
```
Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
Inspired by Andrej Karpathy's llm-council concept of multi-model deliberation.
Made with care for the developer community
CI/CD (optional): A GitHub Actions workflow is included at `.github/workflows/release.yml`; it builds on tags and publishes when `VSCE_PAT` is set in repository secrets.
Why Use a Council?
A single AI's responses can be biased toward certain approaches. By consulting several genuinely different models:
- True diversity - Different models have different training and biases
- Better coverage - Claude might catch what GPT misses
- Reduced blind spots - Multiple vendors means multiple perspectives
- Collective wisdom - Synthesis combines the best of all