
Foundry Local for GitHub Copilot Chat

Official Microsoft Extension

Integrate local AI models with GitHub Copilot Chat using Microsoft's Foundry Local platform. Run powerful language models locally on your machine while maintaining full privacy and control over your data.

🚀 Features

  • Local AI Models: Run state-of-the-art language models locally without sending data to external services
  • GitHub Copilot Integration: Seamlessly integrates with VS Code's native chat interface
  • Privacy First: All processing happens locally on your machine
  • Multiple Model Support: Access to various models including Phi-3, Qwen, DeepSeek, and more
  • High Performance: Optimized for local inference with efficient resource usage
  • Enterprise Ready: Perfect for organizations requiring data privacy and compliance

🔧 Supported Models

This extension automatically discovers and provides access to your locally cached Foundry Local models:

  • Phi-3 Mini 4K: Fast, efficient model perfect for code assistance (4K context)
  • Qwen2.5 7B: Versatile model with strong reasoning capabilities (32K context)
  • DeepSeek R1: Advanced reasoning model for complex tasks (128K context)
  • And many more: Any model available in the Foundry Local catalog

📋 Prerequisites

Before using this extension, you need to have Microsoft Foundry Local installed and running:

  1. Install Foundry Local: Download from Microsoft Foundry Local Repository
  2. Start the Service: Run foundry service start
  3. Download Models: Use foundry model download <model-name> to fetch your preferred models (an example session follows)
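
For example, a typical first-run session looks like this (the model alias phi-3.5-mini is illustrative; substitute any alias from the Foundry Local catalog):

    foundry service start                 # start the local inference service
    foundry model download phi-3.5-mini   # cache an example model locally
    foundry model list                    # confirm the model appears as cached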

System Requirements

  • Foundry Local: Version 0.5.116 or higher
  • VS Code: Version 1.103.0 or higher
  • Memory: Varies by model (typically 8-16GB RAM recommended)
  • Storage: Space for model files (varies by model size)

🚀 Quick Start

Installation

  1. Install from VS Code Marketplace: Search for "Foundry Local Chat" in the Extensions view
  2. Reload VS Code when prompted

Development Prerequisites

  • VS Code version 1.103.0 or higher
  • Node.js and npm installed

Building from Source

  1. Clone this repository

  2. Navigate to the extension directory:

    cd foundry-local-chat-wip
    
  3. Install dependencies:

    npm install
    
  4. Compile the extension:

    npm run compile
    
  5. Press F5 to launch a new Extension Development Host window

  6. The extension is now active in the development host window and ready to provide chat models

Building and Watching

  • Build once: npm run compile
  • Watch mode: npm run watch (automatically recompiles on file changes)
  • Lint code: npm run lint
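
These scripts follow the standard VS Code extension template. A sketch of the corresponding package.json entries (the exact script bodies in this repository may differ):

    {
      "scripts": {
        "compile": "tsc -p ./",
        "watch": "tsc -watch -p ./",
        "lint": "eslint src"
      }
    }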

💬 Usage

Once both Foundry Local and this extension are installed:

  1. Open GitHub Copilot Chat in VS Code (Ctrl+Shift+I or Cmd+Shift+I)
  2. Select a Model: Click the model picker button and choose "Manage models"
  3. Enable Foundry Local Models: Check the models you want to use from the Foundry Local provider
  4. Start Chatting: Select your preferred local model and start asking questions

The extension will automatically detect your locally cached models and make them available in the chat interface.

🔧 Configuration

Model Management

  • Download new models: Use foundry model download <model-name> in your terminal
  • List available models: Use foundry model list to see downloaded models
  • Refresh models: Restart VS Code or reload the window to detect newly downloaded models

Troubleshooting

If models don't appear:

  1. Ensure Foundry Local service is running: foundry service start
  2. Verify models are downloaded: foundry model list
  3. Check VS Code's output panel for any error messages
  4. Restart VS Code if needed

🏗️ Architecture

This extension implements VS Code's Language Model API to provide local AI capabilities (see the sketch after this list):

  • Provider Registration: Registers as a chatProvider with VS Code
  • Model Discovery: Automatically detects cached Foundry Local models
  • Streaming Responses: Provides real-time chat responses
  • Token Management: Handles context window limits for different models
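
As an illustration of the flow (a sketch, not the extension's actual source), the snippet below uses the foundry-local-sdk npm package together with the OpenAI-compatible endpoint that Foundry Local exposes. The model alias and prompt are placeholders, and registration with VS Code's Language Model API is elided because that API surface has evolved across VS Code releases:

    // Sketch: bridging Foundry Local into an OpenAI-style streaming chat loop.
    import { OpenAI } from "openai";
    import { FoundryLocalManager } from "foundry-local-sdk";

    async function streamLocalChat(prompt: string): Promise<void> {
      // init() starts the Foundry Local service if needed and ensures the
      // model for this alias is cached and loaded for inference.
      const manager = new FoundryLocalManager();
      const modelInfo = await manager.init("phi-3.5-mini"); // example alias

      // Foundry Local serves an OpenAI-compatible REST API, so the standard
      // OpenAI client can target the local endpoint; the API key is a local
      // placeholder, not a cloud credential.
      const client = new OpenAI({
        baseURL: manager.endpoint,
        apiKey: manager.apiKey,
      });

      // Stream tokens as they are generated, mirroring the extension's
      // real-time responses in the chat view.
      const stream = await client.chat.completions.create({
        model: modelInfo.id,
        messages: [{ role: "user", content: prompt }],
        stream: true,
      });
      for await (const chunk of stream) {
        process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
      }
    }

    streamLocalChat("Summarize what Foundry Local does.").catch(console.error);

In the extension itself, streamed chunks like these would be forwarded to the chat response stream rather than stdout, and the available models would come from the locally cached model list rather than a hard-coded alias.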

🤝 Contributing

This is an official Microsoft project. For contributions:

  1. Check existing issues and discussions
  2. Follow Microsoft's contribution guidelines
  3. Submit pull requests with detailed descriptions
  4. Ensure all tests pass and code follows project standards

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🔗 Related Projects

  • Microsoft Foundry Local - The underlying local AI platform
  • VS Code Extension API - Documentation for VS Code extensions
  • GitHub Copilot - AI-powered coding assistant

📞 Support

For support and questions:

  • Issues: Report bugs and feature requests on GitHub Issues
  • Documentation: Visit the Foundry Local documentation
  • Community: Join discussions in the Foundry Local repository

Made with ❤️ by Microsoft
