# Omni Chat Provider

Use OpenAI, Anthropic, Gemini, Ollama, and OpenAI-compatible backends inside VS Code Copilot Chat through one extension.

The extension contributes a single vendor, OmniChat, to Copilot's language model system. Each model group in `Chat: Manage Language Models` points at one configured `providerId`, so removing a group only removes the Copilot-side mount; your own `omnichat.providers`, `omnichat.models`, and stored API keys stay intact.
## Features

- Native adapters for `openai`, `openai-responses`, `anthropic`, `gemini`, and `ollama`
- Provider-level connection settings with model-level overrides
- Per-provider API keys stored in VS Code Secret Storage
- Native Copilot model management via `Chat: Manage Language Models`
- Stable internal model routing with clean picker labels
- Retry, delay, custom headers, and system-prompt interception
- Optional model variants via `configId`
- Commit message generation command
## How It Works

There are three layers:

- `omnichat.providers`: defines backend connections such as `baseUrl`, `apiMode`, and shared headers.
- `omnichat.models`: defines the actual models shown in Copilot, each mapped to a provider via `provider` or `owned_by`.
- `Chat: Manage Language Models`: creates Copilot model groups that attach one `providerId` to the OmniChat vendor.

Deleting a Copilot group does not delete OmniChat settings or OmniChat secrets.
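As a rough TypeScript sketch of how the layers relate (the field names come from the settings documented in this README; the types and the `modelsForProvider` helper are illustrative, not the extension's actual source):

```typescript
// Illustrative shape of an omnichat.providers entry.
interface ProviderConfig {
  id: string;
  baseUrl: string;
  apiMode: "openai" | "openai-responses" | "anthropic" | "gemini" | "ollama";
  headers?: Record<string, string>;
}

// Illustrative shape of an omnichat.models entry; API-specific
// fields (max_tokens, num_ctx, ...) pass through untyped here.
interface ModelConfig {
  id: string;
  provider?: string;
  owned_by?: string;
  configId?: string;
  [field: string]: unknown;
}

// A mounted Copilot group exposes only the models mapped to its
// providerId, via either `provider` or `owned_by`.
function modelsForProvider(models: ModelConfig[], providerId: string): ModelConfig[] {
  return models.filter((m) => (m.provider ?? m.owned_by) === providerId);
}
```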
## Requirements

- VS Code 1.104+
- GitHub Copilot Chat installed
## Setup

### 1. Install the extension

Install the VSIX or install from Open VSX after publishing.

### 2. Define providers

Add provider backends in `settings.json`:
```json
"omnichat.providers": [
  {
    "id": "openai",
    "baseUrl": "https://api.openai.com/v1",
    "apiMode": "openai"
  },
  {
    "id": "anthropic",
    "baseUrl": "https://api.anthropic.com",
    "apiMode": "anthropic",
    "headers": {
      "anthropic-version": "2023-06-01"
    }
  },
  {
    "id": "ollama",
    "baseUrl": "http://localhost:11434",
    "apiMode": "ollama"
  }
]
```
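Any OpenAI-compatible server can be configured the same way; for example (the `id` and `baseUrl` below are placeholders for your own deployment, not values shipped with the extension):

```json
{
  "id": "local-vllm",
  "baseUrl": "http://localhost:8000/v1",
  "apiMode": "openai"
}
```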
### 3. Define models

Each model must point at a provider:
```json
"omnichat.models": [
  {
    "id": "gpt-5.4",
    "provider": "openai",
    "context_length": 128000,
    "max_completion_tokens": 8192,
    "vision": true
  },
  {
    "id": "gpt-5.4",
    "provider": "openai",
    "configId": "reasoning",
    "reasoning_effort": "high",
    "max_completion_tokens": 16384
  },
  {
    "id": "claude-sonnet-4",
    "provider": "anthropic",
    "max_tokens": 8192,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 4096
    }
  },
  {
    "id": "llama3.1:70b",
    "provider": "ollama",
    "num_ctx": 32768,
    "temperature": 0.4
  }
]
```
### 4. Add or edit a provider

Run `OmniChat: Edit Provider`. This flow lets you:

- Pick an existing provider and edit its `apiMode`, `baseUrl`, and API key
- Add a new provider with the same form

Provider API keys are stored in VS Code Secret Storage under `omnichat.apiKey.<providerId>`.
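A minimal sketch of that key scheme (the `InMemorySecrets` class below is a stand-in for the subset of VS Code's `SecretStorage` API used here, so the example runs outside the editor; `saveProviderKey` and `readProviderKey` are hypothetical helper names, not the extension's internals):

```typescript
// Stand-in for the subset of vscode.SecretStorage used in this sketch.
interface SecretStore {
  store(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// In-memory substitute so the sketch runs outside VS Code.
class InMemorySecrets implements SecretStore {
  private data = new Map<string, string>();
  async store(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }
}

// Keys follow the omnichat.apiKey.<providerId> naming scheme.
const secretId = (providerId: string) => `omnichat.apiKey.${providerId}`;

async function saveProviderKey(secrets: SecretStore, providerId: string, apiKey: string) {
  await secrets.store(secretId(providerId), apiKey);
}

async function readProviderKey(secrets: SecretStore, providerId: string) {
  return secrets.get(secretId(providerId));
}
```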
### 5. Mount the provider into Copilot

Run `Chat: Manage Language Models`, then:

- Add a new language model group
- Choose OmniChat
- Enter the `providerId` you want to mount

That group will now expose only the models belonging to that provider.
## Configuration Reference

### `omnichat.providers`

Provider-level backend settings:

- `id`
- `baseUrl`
- `apiMode`
- `headers`

### `omnichat.models`

Model-level settings:

- `id`
- `provider` or `owned_by`
- `configId`
- `displayName`
- `family`
- `context_length`
- `vision`
- `temperature`
- `top_p`
- `headers`
- `delay`
- `extra`
- `useForCommitGeneration`
- `include_reasoning_in_request`
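For illustration, a hypothetical entry combining several of these generic fields (the values are placeholders, not recommendations):

```json
{
  "id": "gpt-5.4",
  "provider": "openai",
  "displayName": "GPT-5.4 (commits)",
  "family": "gpt",
  "temperature": 0.2,
  "delay": 500,
  "useForCommitGeneration": true
}
```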
### API-specific model fields

#### OpenAI

- `max_tokens`
- `max_completion_tokens`
- `reasoning_effort`
- `frequency_penalty`
- `presence_penalty`

#### OpenAI Responses

- `max_output_tokens`
- `reasoning`

#### Anthropic

- `max_tokens`
- `thinking`
- `top_k`

#### Gemini

- `maxOutputTokens`
- `topK`
- `topP`
- `thinkingConfig`

#### Ollama

- `num_predict`
- `num_ctx`
- `num_gpu`
- `top_k`
- `min_p`
- `repeat_penalty`
### Retry

```json
"omnichat.retry": {
  "enabled": true,
  "maxAttempts": 3,
  "intervalMs": 1000,
  "statusCodes": [429, 500, 502, 503, 504],
  "retryEmptyResponse": true,
  "timeoutMs": 120000
}
```
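In terms of behavior, the policy amounts to something like the sketch below (assumed semantics: retry on the listed status codes or on empty bodies, waiting `intervalMs` between attempts; `withRetry` is an illustrative name, not the extension's internals, and `timeoutMs` is not modeled here):

```typescript
interface RetryConfig {
  enabled: boolean;
  maxAttempts: number;
  intervalMs: number;
  statusCodes: number[];
  retryEmptyResponse: boolean;
}

interface HttpResult {
  status: number;
  body: string;
}

// Retries the request on listed status codes or empty bodies, waiting
// intervalMs between attempts; returns the last result either way.
async function withRetry(
  cfg: RetryConfig,
  request: () => Promise<HttpResult>
): Promise<HttpResult> {
  const attempts = cfg.enabled ? Math.max(1, cfg.maxAttempts) : 1;
  let last: HttpResult = { status: 0, body: "" };
  for (let attempt = 1; attempt <= attempts; attempt++) {
    last = await request();
    const retryable =
      cfg.statusCodes.includes(last.status) ||
      (cfg.retryEmptyResponse && last.body === "");
    if (!retryable) return last;
    if (attempt < attempts) {
      await new Promise((resolve) => setTimeout(resolve, cfg.intervalMs));
    }
  }
  return last;
}
```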
### Delay

Global:

```json
"omnichat.delay": 1000
```

Per model:

```json
{
  "id": "gemini-flash",
  "provider": "gemini",
  "delay": 1500
}
```
### System prompt handling

```json
"omnichat.systemPrompt.mode": "passthrough",
"omnichat.systemPrompt.content": ""
```

Modes:

- `passthrough`
- `replace`
- `append`
- `disable`
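Assumed semantics for the four modes, as a sketch rather than the extension's code: `passthrough` forwards the incoming prompt unchanged, `replace` substitutes the configured content, `append` adds the configured content after the incoming prompt, and `disable` drops the system prompt entirely.

```typescript
type SystemPromptMode = "passthrough" | "replace" | "append" | "disable";

// Transforms the system prompt according to the configured mode;
// returns undefined when the prompt should be omitted from the request.
function applySystemPrompt(
  mode: SystemPromptMode,
  incoming: string,
  configured: string
): string | undefined {
  switch (mode) {
    case "passthrough":
      return incoming;
    case "replace":
      return configured;
    case "append":
      return configured ? `${incoming}\n${configured}` : incoming;
    case "disable":
      return undefined;
  }
}
```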
## Build

Install dependencies:

```shell
pnpm install --frozen-lockfile
```

Compile:

```shell
pnpm run compile
```

Package the VSIX:

```shell
pnpm run package
```
## GitHub Actions

The repository includes `.github/workflows/openvsx.yml`, which runs two jobs:

- Build and package `extension.vsix`
- Publish that VSIX to Open VSX using `OPENVSX_TOKEN`

### Required secret

Set the `OPENVSX_TOKEN` repository secret before publishing.

### Triggering publish

Publishing runs on:

- Manual workflow dispatch
- Git tag pushes matching `v*`
## Open VSX Publishing

Local publish:

```shell
pnpm run package
pnpm run publish:openvsx
```

This uses:

```shell
pnpx @vscode/vsce package
pnpx ovsx publish --packagePath extension.vsix
```

You still need a valid Open VSX token in your environment as `OVSX_PAT`.