# Foundry Local for GitHub Copilot Chat

Official Microsoft Extension
Integrate local AI models with GitHub Copilot Chat using Microsoft's Foundry Local platform. Run powerful language models locally on your machine while maintaining full privacy and control over your data.
## 🚀 Features
- Local AI Models: Run state-of-the-art language models locally without sending data to external services
- GitHub Copilot Integration: Seamlessly integrates with VS Code's native chat interface
- Privacy First: All processing happens locally on your machine
- Multiple Model Support: Access to various models including Phi-3, Qwen, DeepSeek, and more
- High Performance: Optimized for local inference with efficient resource usage
- Enterprise Ready: Perfect for organizations requiring data privacy and compliance
## 🔧 Supported Models
This extension automatically discovers and provides access to your locally cached Foundry Local models (a discovery sketch follows the list below):
- Phi-3 Mini 4K: Fast, efficient model perfect for code assistance (4K context)
- Qwen2.5 7B: Versatile model with strong reasoning capabilities (32K context)
- DeepSeek R1: Advanced reasoning model for complex tasks (128K context)
- And many more: Any model available in the Foundry Local catalog
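
For context, here is a minimal sketch of what that discovery step can look like programmatically, assuming the `foundry-local-sdk` npm package; the method and field names may differ by SDK version, and this is an illustration rather than the extension's actual implementation.

```typescript
// Sketch: enumerate locally cached Foundry Local models.
// Assumes the foundry-local-sdk npm package; names may vary by SDK version.
import { FoundryLocalManager } from 'foundry-local-sdk';

async function listCachedFoundryModels(): Promise<void> {
  const manager = new FoundryLocalManager();

  // Models already downloaded to the local cache (not the full catalog).
  const cached = await manager.listCachedModels();

  for (const model of cached) {
    console.log(`${model.alias} (${model.id})`);
  }
}

listCachedFoundryModels().catch(console.error);
```

Each cached model found this way is surfaced as a selectable chat model in VS Code.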
## 📋 Prerequisites
Before using this extension, you need to have Microsoft Foundry Local installed and running:
- Install Foundry Local: Download from the Microsoft Foundry Local repository
- Start the Service: Run `foundry service start`
- Download Models: Use `foundry model download <model-name>` to download your preferred models
### System Requirements
- Foundry Local: Version 0.5.116 or higher
- VS Code: Version 1.103.0 or higher
- Memory: Varies by model (typically 8-16GB RAM recommended)
- Storage: Space for model files (varies by model size)
## 🚀 Quick Start

### Installation
- Install from VS Code Marketplace: Search for "Foundry Local Chat" in the Extensions view
- Reload VS Code when prompted
### Development Prerequisites

- VS Code version 1.103.0 or higher
- Node.js and npm installed
### Installation and Development

- Clone this repository
- Navigate to the extension directory: `cd foundry-local-chat-wip`
- Install dependencies: `npm install`
- Compile the extension: `npm run compile`
- Press `F5` to launch a new Extension Development Host window

The extension will be active and ready to provide chat models.
### Building and Watching

- Build once: `npm run compile`
- Watch mode: `npm run watch` (automatically recompiles on file changes)
- Lint code: `npm run lint`
## 💬 Usage
Once both Foundry Local and this extension are installed:
- Open GitHub Copilot Chat in VS Code (Ctrl+Shift+I or Cmd+Shift+I)
- Select a Model: Click the model picker button and choose "Manage models"
- Enable Foundry Local Models: Check the models you want to use from the Foundry Local provider
- Start Chatting: Select your preferred local model and start asking questions
The extension will automatically detect your locally cached models and make them available in the chat interface.
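
Beyond the chat UI, models contributed by a language model provider are also reachable from other extensions through VS Code's Language Model API. The sketch below is illustrative only; in particular, the vendor id `foundry-local` is an assumption for this example, not a documented identifier.

```typescript
// Sketch: consuming a locally provided chat model from another extension.
// The vendor id 'foundry-local' is a placeholder assumption.
import * as vscode from 'vscode';

async function askLocalModel(token: vscode.CancellationToken): Promise<string> {
  const [model] = await vscode.lm.selectChatModels({ vendor: 'foundry-local' });
  if (!model) {
    throw new Error('No Foundry Local models are enabled in the model picker.');
  }

  const messages = [
    vscode.LanguageModelChatMessage.User('Explain what this repository does in one sentence.'),
  ];
  const response = await model.sendRequest(messages, {}, token);

  // Responses stream; concatenate the text fragments as they arrive.
  let answer = '';
  for await (const fragment of response.text) {
    answer += fragment;
  }
  return answer;
}
```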
## 🔧 Configuration

### Model Management
- Download new models: Use `foundry model download <model-name>` in your terminal
- List available models: Use `foundry model list` to see downloaded models
- Refresh models: Restart VS Code or reload the window to detect newly downloaded models
### Troubleshooting
If models don't appear:
- Ensure the Foundry Local service is running: `foundry service start`
- Verify models are downloaded: `foundry model list`
- Check VS Code's Output panel for any error messages
- Restart VS Code if needed
## 🏗️ Architecture
This extension implements VS Code's Language Model API to provide local AI capabilities (a sketch follows the list below):
- Provider Registration: Registers as a `chatProvider` with VS Code
- Model Discovery: Automatically detects cached Foundry Local models
- Streaming Responses: Provides real-time chat responses
- Token Management: Handles context window limits for different models
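
As a rough sketch of how those pieces fit together, the snippet below registers a provider through VS Code's `vscode.lm` namespace. It is a hedged illustration, not this extension's actual source: the provider interface has evolved across recent VS Code releases, so treat the method names, metadata fields, and the `foundry-local` vendor id as assumptions.

```typescript
// Sketch: a language model provider wired into VS Code.
// Interface names and metadata fields are assumptions; the API has
// shifted across recent VS Code releases.
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const provider = {
    // Model discovery: report the locally cached Foundry Local models
    // (the real extension would query the Foundry Local service here).
    async provideLanguageModelChatInformation() {
      return [{
        id: 'phi-3-mini-4k',      // placeholder model id
        name: 'Phi-3 Mini 4K',
        family: 'phi-3',
        version: '1.0.0',
        maxInputTokens: 4096,     // token management: context window limit
        maxOutputTokens: 1024,
        capabilities: {},
      }];
    },

    // Streaming responses: forward the conversation to the local
    // inference endpoint and report text parts as they arrive.
    async provideLanguageModelChatResponse(
      model: { name: string },
      _messages: unknown,
      _options: unknown,
      progress: vscode.Progress<vscode.LanguageModelTextPart>,
    ) {
      progress.report(new vscode.LanguageModelTextPart(`(${model.name} would answer here)`));
    },

    // Token counting lets VS Code trim requests to the model's context window.
    async provideTokenCount(_model: unknown, text: string) {
      return Math.ceil(text.length / 4); // crude heuristic for illustration
    },
  };

  // Hypothetical registration keyed by a vendor id.
  context.subscriptions.push(
    vscode.lm.registerLanguageModelChatProvider('foundry-local', provider as any),
  );
}
```

In this flow, VS Code calls the discovery method when the model picker needs the list of available models, and each chat turn then streams back through the response method.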
## 🤝 Contributing
This is an official Microsoft project. For contributions:
- Check existing issues and discussions
- Follow Microsoft's contribution guidelines
- Submit pull requests with detailed descriptions
- Ensure all tests pass and code follows project standards
## 📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 📞 Support
For support and questions:
Made with ❤️ by Microsoft