A lightweight VS Code extension that bridges GitHub Copilot's Language Model API to an OpenAI-compatible REST API. Zero runtime dependencies.

Quick Start • Features • Usage • Configuration • Contributing

## Features
### UI Controls

#### Status Bar

A single status bar item lets you monitor and control the proxy at a glance.
#### Rich Tooltip

Hover over the status bar item for a rich tooltip with quick actions — no need to open the Command Palette.
The tooltip provides the current server status and quick actions, including API key configuration.
### Metrics Dashboard

Open via the Command Palette (Copilot LLM Proxy: View Metrics) or click the metrics status bar item.
## Installation

### VS Code Marketplace

Install the extension from the VS Code Marketplace.

## Usage

Configure your OpenAI-compatible client with:
| Setting | Value |
|---|---|
| Base URL | http://localhost:4141/v1 |
| API Key | any non-empty string (e.g. `unused`) — or the key you configured in settings |
## Authentication
The proxy supports optional API key authentication. When an API key is configured, all requests must include it in the Authorization header:
```
Authorization: Bearer <your-api-key>
```
To configure a key, run Copilot LLM Proxy: Configure API Key from the Command Palette or click API Key in the status bar tooltip. When no key is set, the proxy accepts any non-empty string in the Authorization header (most OpenAI SDKs require this field to be present).
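The acceptance rule above can be sketched as a small predicate. This is a hypothetical illustration of the described behavior, not the extension's actual code; `is_authorized` and its parameters are assumed names.

```python
from typing import Optional

def is_authorized(auth_header: Optional[str], configured_key: str) -> bool:
    """Accept or reject a request based on its Authorization header.

    If a key is configured, require exactly that key; otherwise
    accept any non-empty bearer token.
    """
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    token = auth_header[len("Bearer "):]
    if configured_key:
        return token == configured_key
    return bool(token)

# A configured key must match exactly:
assert is_authorized("Bearer secret", "secret")
assert not is_authorized("Bearer wrong", "secret")
# With no key configured, any non-empty token is accepted:
assert is_authorized("Bearer unused", "")
assert not is_authorized(None, "")
```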
## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/v1/models` | List all available models |
| `GET` | `/v1/models/:id` | Get a specific model |
| `POST` | `/v1/chat/completions` | Chat completion (streaming & non-streaming) |
### curl Examples
```bash
# List available models
curl http://localhost:4141/v1/models

# Chat completion
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# Streaming
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
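Assuming the proxy emits the standard OpenAI streaming format (server-sent-event lines of the form `data: {...}`, terminated by `data: [DONE]`), the streamed deltas can be reassembled with a small helper. This is a minimal sketch, not part of the extension:

```python
import json

def parse_sse_chunks(raw: str) -> list[dict]:
    """Collect JSON chunks from OpenAI-style SSE output."""
    chunks = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunks.append(json.loads(payload))
    return chunks

raw = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "lo!"}}]}\n\n'
    'data: [DONE]\n\n'
)
text = "".join(
    c["choices"][0]["delta"].get("content", "") for c in parse_sse_chunks(raw)
)
assert text == "Hello!"
```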
```bash
# Tool calls
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          },
          "required": ["location"]
        }
      }
    }]
  }'
```
## Model Resolution
When you specify a model ID in the request, the proxy resolves it in this order:

1. Exact match by model `id`
2. Fallback match by model `family`
3. If no match is found, the error response lists the available models
Run `GET /v1/models` to see all available model IDs.
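The resolution order above can be sketched as follows. This is a hypothetical illustration: `resolve_model` and the catalog shape are assumed names, not the extension's actual code.

```python
def resolve_model(requested: str, catalog: list[dict]) -> dict:
    """Resolve a requested model ID against the available models."""
    # 1. Exact match by model id
    for model in catalog:
        if model["id"] == requested:
            return model
    # 2. Fallback match by model family
    for model in catalog:
        if model.get("family") == requested:
            return model
    # 3. Not found: surface the available ids in the error
    available = ", ".join(m["id"] for m in catalog)
    raise ValueError(f"Model '{requested}' not found. Available models: {available}")

catalog = [
    {"id": "gpt-4o", "family": "gpt-4o"},
    {"id": "o3-mini", "family": "o3"},
]
assert resolve_model("gpt-4o", catalog)["id"] == "gpt-4o"  # exact id match
assert resolve_model("o3", catalog)["id"] == "o3-mini"     # family fallback
```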
## Configuration
| Setting | Default | Description |
|---|---|---|
| `copilotApiProxy.port` | `4141` | Port number for the proxy server |
| `copilotApiProxy.apiKey` | `""` | API key for authentication. If set, clients must send `Authorization: Bearer <key>` |
| `copilotApiProxy.autoStart` | `false` | Automatically start the proxy when VS Code launches |
| `copilotApiProxy.logLevel` | `INFO` | Log level: `DEBUG`, `INFO`, `WARN`, `ERROR` |
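These settings can also be set directly in `settings.json`; a sample with illustrative values (the keys come from the table above, the values shown are examples only):

```json
{
  "copilotApiProxy.port": 4141,
  "copilotApiProxy.apiKey": "my-secret-key",
  "copilotApiProxy.autoStart": true,
  "copilotApiProxy.logLevel": "DEBUG"
}
```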
## Commands
All commands are available from the Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`):
| Command | Description |
|---|---|
| Copilot LLM Proxy: Start Server | Start the proxy server |
| Copilot LLM Proxy: Stop Server | Stop the proxy server |
| Copilot LLM Proxy: View Metrics | Open the metrics dashboard |
| Copilot LLM Proxy: Configure Port | Change the server port |
| Copilot LLM Proxy: Configure API Key | Set or clear the authentication key |
| Copilot LLM Proxy: Toggle Auto-start | Enable or disable auto-start on launch |
## Prerequisites
- VS Code 1.93 or later
- GitHub Copilot extension (signed in)
- The first API request triggers a consent dialog to allow Copilot model access — click Allow
## Notes
- Token usage counts are estimated (~4 chars/token) since the VS Code LM API doesn't expose exact counts
- The `api_key` field is required by OpenAI SDKs but ignored by the proxy when no key is configured — use any non-empty value
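The ~4 chars/token estimate above can be sketched as a one-liner. This shows one plausible rounding (ceiling division); the extension's exact arithmetic may differ.

```python
def estimate_tokens(text: str) -> int:
    """Rough token count at ~4 characters per token (ceiling division)."""
    return (len(text) + 3) // 4

assert estimate_tokens("Hello, world!") == 4  # 13 chars -> 4 tokens
assert estimate_tokens("abcd" * 10) == 10     # 40 chars -> 10 tokens
```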
## Contributing
Contributions are welcome! Here's how to get started:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/my-feature`)
3. Make your changes and test them in the Extension Development Host (`F5`)
4. Commit your changes (`git commit -m 'Add my feature'`)
5. Push to your branch (`git push origin feature/my-feature`)
6. Open a Pull Request
### Development Setup
```bash
git clone <repo-url> && cd copilot-llm-proxy
npm install
npm run compile
```
Press `F5` in VS Code to launch the Extension Development Host with the extension loaded.
### Guidelines
- Keep changes focused and minimal
- Follow the existing code style
- Test streaming and non-streaming responses before submitting
- Update documentation if your change affects the public API or configuration
## Disclaimer
This extension is not affiliated with, endorsed by, or sponsored by GitHub, Microsoft, or OpenAI. "Copilot" and "GitHub Copilot" are trademarks of GitHub/Microsoft. "OpenAI" is a trademark of OpenAI. All trademarks belong to their respective owners.
## License
MIT — Copyright (c) 2026 Ilayanambi Ponramu





