# Generic Provider for Copilot

Use frontier open LLMs like Qwen3 Coder, Kimi K2, DeepSeek V3.1, GLM 4.5, and more in VS Code with GitHub Copilot Chat, powered by any OpenAI-compatible provider.

## 🔥 Thanks

Heavily inspired (and then extended) by https://github.com/JohnnyZ93/oai-compatible-copilot

## ✨ Features
## Requirements
## ⚡ Quick Start

### Option A: Using the Configuration GUI (Recommended)

**1. Use the GUI**
**2. Set API Keys**

If an API key is not found for a provider, you will be prompted in the QuickPick box.
Repeat for each provider. Keys are stored securely in VS Code's secret storage as `generic-copilot.apiKey.<provider-id>`.

**3. Use in Copilot Chat**
## 📖 Configuration Guide

Configuration is managed in VS Code's `settings.json` and is split into two main parts.

### Provider Configuration (`generic-copilot.providers`)

Providers define the API endpoints the extension can talk to; each model references one of them and inherits its `baseUrl` and `headers`.
Schema:

| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | A unique, lowercase identifier for the provider (e.g., `"openrouter"`, `"zai"`). |
| `vercelType` | string | Yes | The provider type. Must be one of `openai`, `openai-compatible`, `openrouter`, or `google`. |
| `displayName` | string | No | A user-friendly name for the provider that appears in the UI. |
| `baseUrl` | string | Yes | The base URL of the provider's API endpoint (e.g., `"https://api.example.com/v1"`). |
| `headers` | object | No | Custom HTTP headers sent with every request to this provider. |
### Model Configuration (`generic-copilot.models`)
Models define the specific LLMs you want to use. Each model must be associated with a provider.
Schema:
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | The internal unique identifier. |
| `provider` | string | Yes | The `id` of a configured provider. The model will inherit `baseUrl` and `headers` from this provider. |
| `slug` | string | Yes | The actual model value that will be sent to the inference provider. |
| `displayName` | string | No | A user-friendly name for the model. If not set, a name is generated from `id` and `slug`. |
| `model_properties` | object | No | Internal metadata used by the extension to control behavior. These are not sent to the provider's API. |
| `model_parameters` | object | No | Parameters that are sent in the body of the request to the provider's API. |
#### `model_properties` Schema

| Field | Type | Description |
|---|---|---|
| `context_length` | number | The maximum context window size for the model. Defaults to `128000`. |
| `owned_by` | string | The provider name. This is typically inherited from the provider's `id` and doesn't need to be set manually. |
| `family` | string | The model family (e.g., `"gpt"`, `"claude"`, `"gemini"`). Affects how Copilot interacts with the model. Defaults to `"generic"`. |
#### `model_parameters` Schema

| Field | Type | Description |
|---|---|---|
| `temperature` | number | Controls randomness. Lower values are more deterministic. Range: [0, 2]. Defaults to `1`. |
| `extra` | object | A container for any other parameters you want to send to the API. These are passed through directly. |
## ⚙️ Configuration Example

Here is a complete example for your `settings.json` file, demonstrating how to configure multiple providers and models.
```json
{
  "generic-copilot.providers": [
    {
      "id": "openrouter",
      "vercelType": "openrouter",
      "displayName": "OpenRouter"
    },
    {
      "id": "zai",
      "vercelType": "openai-compatible",
      "displayName": "Zai",
      "baseUrl": "https://open.zaidata.com/v1",
      "headers": {
        "X-Source": "vscode-extension"
      }
    }
  ],
  "generic-copilot.models": [
    {
      "_comment": "A simple model configuration inheriting from OpenRouter.",
      "slug": "claude-sonnet-default",
      "id": "anthropic/claude-3.5-sonnet",
      "provider": "openrouter",
      "model_properties": {
        "context_length": 200000,
        "family": "claude"
      },
      "model_parameters": {
        "temperature": 0.7
      }
    },
    {
      "slug": "glm-4.6-fast",
      "id": "glm-4.6",
      "provider": "zai",
      "displayName": "GLM-4.6 (Fast)",
      "model_properties": {
        "context_length": 256000
      },
      "model_parameters": {
        "temperature": 0.1
      }
    },
    {
      "_comment": "A model with custom parameters passed via the 'extra' field.",
      "slug": "gemini-flash-custom",
      "id": "google/gemini-flash-1.5",
      "provider": "openrouter",
      "model_parameters": {
        "temperature": 0.5,
        "extra": {
          "top_p": 0.9,
          "stop": ["\n"]
        }
      }
    }
  ]
}
```
## 🔑 API Key Management

### Per-Provider Keys

Each provider has its own API key stored securely:

- Storage Key: `generic-copilot.apiKey.<provider-id>`
- Example: For the provider with `"id": "iflow"`, the storage key is `generic-copilot.apiKey.iflow`
## 🎛️ Advanced Configuration

### Custom Headers
Headers can be set at the provider level and will be inherited by all models associated with that provider. See the Provider Configuration section for details.
### API Request Format

When making requests to the model provider (a sketch of the resulting request body follows this list):

- **Model ID Mapping**: The model's `id` is sent as the `model` parameter in the API request.
- **Parameters Only**: Only `model_parameters` (temperature, max_tokens, etc.) are included in the request body.
- **Excluded Metadata**: `model_properties` such as `context_length` and `family` (and the provider-level `baseUrl`) are NOT sent to the API - they are used internally by the extension.
- **Unknown Keys**: Custom parameters can be added via `model_parameters.extra` and will be passed through to the API.
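As a rough illustration rather than the extension's literal output, the `glm-4.6-fast` model from the example above might translate into a request body along these lines on an OpenAI-compatible chat completions endpoint (the `messages` content is a placeholder):

```json
{
  "model": "glm-4.6",
  "messages": [
    { "role": "user", "content": "Explain this function." }
  ],
  "temperature": 0.1
}
```

Note that `context_length` and the other `model_properties` stay local; only `model_parameters`, plus anything under `extra`, reach the provider.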
## 💡 Tips & Best Practices

### Use `family` and model names carefully

Copilot changes behavior based on these names:

**Model Name variations**

| Model name | Matches | Behavior |
|---|---|---|
| `gpt-5-codex` | `gpt-5-codex` | Uses Codex-style prompt branch |
| `gpt-5*` | `gpt-5` | Can use `apply_patch` exclusively; agent prompts differ for gpt-5 |
| `o4-mini` | `o4-mini` | Allowed `apply_patch` and prefers JSON notebook representation |
| `claude-3.5-sonnet` | `claude-3.5-sonnet` | Prefers instructions in the user message and after history |

**Family Name variations**

| Family | `family` value | Behavior |
|---|---|---|
| GPT | `gpt` (excl. `gpt-4o`) | Supports `apply_patch`, prefers JSON notebook representation |
| Claude / Anthropic | `claude` / `Anthropic` | Supports `multi_replace`/`replace_string`, can use `replace_string` exclusively, MCP `image_url` disallowed |
| Gemini | `gemini` | Supports `replace_string`, healing/strong-replace hints required, cannot accept `image_url` in requests. Supports Gemini 3 thought signatures (requires the `google` provider type). |
| Grok | `grok-code` | Supports `replace_string` and can use it exclusively |
### Naming Convention

Use lowercase provider `id` values that match the service name for consistency:

- ✅ `"id": "openai"`
- ✅ `"id": "anthropic"`
- ❌ `"id": "OpenAI"`
### Descriptive IDs for Variants

Use descriptive `slug` and `displayName` values when configuring multiple variants of the same model (see the sketch after this list), for example:

- `"thinking"` / `"no-thinking"`
- `"fast"` / `"accurate"`
### Headers for Custom Auth

If a provider uses non-standard authentication, set it in the `headers` object of the provider's configuration.
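For example, a sketch with a made-up header name and a placeholder token (consult your provider's documentation for the header it actually expects):

```json
{
  "id": "customprovider",
  "vercelType": "openai-compatible",
  "baseUrl": "https://api.example.com/v1",
  "headers": {
    "X-Api-Token": "YOUR_TOKEN_HERE"
  }
}
```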
### Gemini 3 & Thought Signatures

Gemini 3 models (e.g. Gemini 3 Pro) introduce a requirement for preserving "thought signatures" during multi-turn conversations involving function calls.

This extension implements automated handling of these signatures only when using the `google` provider type.

To properly support Gemini 3 (a configuration sketch follows this list):

- Use `vercelType: "google"` for your provider configuration.
- Ensure your model `family` is set to `"gemini"`.
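A minimal sketch of such a pairing; the provider `id` and the model identifier (`gemini-3-pro-preview`) are placeholders, so substitute the exact model name your endpoint exposes:

```json
{
  "generic-copilot.providers": [
    {
      "id": "google",
      "vercelType": "google",
      "displayName": "Google"
    }
  ],
  "generic-copilot.models": [
    {
      "slug": "gemini-3-pro",
      "id": "gemini-3-pro-preview",
      "provider": "google",
      "model_properties": {
        "family": "gemini"
      }
    }
  ]
}
```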
## 🐛 Troubleshooting

### Models Not Appearing

- Check that the provider `id` matches exactly in both the provider and model config
- Verify `baseUrl` is correct and accessible
- Look for errors in the VS Code Developer Console (`Help > Toggle Developer Tools`)
### Authentication Errors

- Verify the API key is set: run the "Set Multi-Provider Apikey" command
- Check whether the provider requires custom headers in its provider configuration
- Ensure `baseUrl` includes the correct path (usually `/v1`)
### Provider Not Found

- Confirm the `provider` field in your model configuration matches a provider's `id` exactly (case-sensitive)
- Check the Developer Console for warnings about missing providers
- Verify the JSON syntax is valid (no trailing commas, quotes closed)
- Remember: only `baseUrl` and `headers` are inherited from providers
## 📄 License

MIT License, Copyright (c) 2025