Copilot Chat OpenAI Dev Proxy

Amadeus | amadeus.com

An extension that lets you start a small, development-grade server on your machine (localhost) to proxy OpenAI API-compliant requests to the models behind GitHub Copilot.
Installation
Launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.

Chat Participant OpenAI Proxy

Chat Participant OpenAI Proxy is a Visual Studio Code extension that provides a development-grade proxy server to route OpenAI API-compatible requests through GitHub Copilot's language models.

Features

  • Creates a local REST API server (localhost:8080) that mimics OpenAI's Chat Completions API
  • Supports standard chat completion requests with user and assistant messages
  • Provides OpenAPI documentation via Swagger UI at /api-docs endpoint
  • Integrates with VSCode's built-in language model capabilities
  • Compatible with standard OpenAI API request/response formats
  • Supports model selection, temperature, and tools (MCP) controls
  • Basic request validation and error handling

Requirements

  1. A valid GitHub Copilot license (any plan, including the free tier).
  2. Visual Studio Code

Usage

  1. Install the extension in VS Code
  2. Use the command @llmproxy /start to start the proxy server
  3. Send requests to http://localhost:8080/v1/chat/completions using standard OpenAI API format
  4. View API documentation at http://localhost:8080/api-docs

Examples

General request format:

POST /v1/chat/completions
Content-Type: application/json

{
  "model": "string",
  "messages": [
    {
      "role": "user|system|assistant",
      "content": "string"
    }
  ],
  "temperature": number,
  "stream": boolean
}
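
For reference, a request like the one above can be sent without the OpenAI SDK. The sketch below uses the third-party requests library; the model name, temperature, and message content are only illustrative, and the models actually available depend on your Copilot subscription and the proxy's configuration.

import requests

# Post a chat completion request directly to the local proxy.
# The model name and temperature below are illustrative values.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize what this proxy does in one sentence."}
        ],
        "temperature": 0.2,
        "stream": False
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])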

Request with OpenAI Python SDK:

from openai import OpenAI

# Dummy API key - it is not used by this proxy
client = OpenAI(
    api_key="dummy",
    base_url="http://localhost:8080/v1",
)

all_models = ['gpt-3.5-turbo']

for model in all_models:
    print(f"Model: {model}")
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": "Answer the following question and assign it to a variable named smellies. Question: What are the most smelly cheeses?! IMPORTANT: don't return any comments. Only the JSON structure please."
            }]
        )
        print(response.choices[0].message.content)
    except Exception as e:
        print(e)

Example output:

{
  "smellies": ["Limburger", "Epoisses", "Roquefort", "Camembert", "Munster"]
}
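
The request schema above also accepts a stream flag. Assuming the proxy forwards streamed responses in the standard OpenAI chunked format (an assumption, not confirmed here), a streamed request with the SDK would look roughly like this:

from openai import OpenAI

# Dummy API key - it is not used by this proxy
client = OpenAI(api_key="dummy", base_url="http://localhost:8080/v1")

# Request a streamed response; each chunk carries an incremental content delta.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Name three smelly cheeses."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()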

How to build and package the extension from source

Packaging the extension is a straightforward process. All you need to do is run the following commands:

npm install
npm run package:production

A new VS Code extension package will be created with the name chat-participant-openai-proxy-0.0.1.vsix. If you would like to change the name, modify the name property in package.json.

Installation and run

  • If you are still in the development phase, you can simply debug your extension from within your editor.

  • If you packaged your extension, you can install it from the .vsix file, for example from the command line as shown below
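
Assuming the code command-line launcher is on your PATH, the package produced in the previous section can be installed with:

code --install-extension chat-participant-openai-proxy-0.0.1.vsix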

Architecture Documentation

Comprehensive architectural documentation is available in the docs/ directory:

  • Central hub with navigation guide
  • System architecture with Mermaid diagrams
  • Detailed implementation flows
  • Developer guide for contributors

These documents cover system design, data flows, security architecture, API details, and development patterns with extensive Mermaid diagrams for visual understanding.

Contributions

You can contribute to this repository too! As a matter of fact, we encourage you to contribute.

How can I contribute? Well, I'm glad you asked :).

  1. Submit code that improves this project. Easy steps:
    • Fork the repo
    • Create a new branch for your fix or new feature
    • Implement the fix or new feature
    • Open a Pull Request
  2. Raise an issue
  3. Raise awareness and spread the word about this project :)