Unify Chat Provider
Integrate multiple LLM API providers into VS Code's GitHub Copilot Chat using the Language Model API.
Features
- 🐑 Free Tier Access: Aggregates the latest free mainstream models, configurable in just a few steps!
- 🔌 Perfect Compatibility: Supports all major LLM API formats (OpenAI Chat Completions, OpenAI Responses, Anthropic Messages, Ollama Chat, Gemini).
- 🎯 Deep Adaptation: Adapts to special API features and best practices of 45+ mainstream providers.
- 🚀 Best Performance: Built-in recommended parameters for 200+ mainstream models, allowing you to maximize model potential without tuning.
- 📦 Out of the Box: One-click configuration, or one-click migration from mainstream applications and extensions, with automatic syncing of official model lists, no tedious operations required.
- 💾 Import and Export: Complete import/export support; import existing configs via Base64, JSON, URL, or URI.
- 💎 Great UX: Visual interface configuration, fully open model parameters, supports unlimited provider and model configurations, and supports coexistence of multiple configuration variants for the same provider and model.
- ✨ One More Thing: One-click use of your Claude Code, Gemini CLI, Antigravity, GitHub Copilot, Qwen Code, OpenAI Codex (ChatGPT Plus/Pro), and iFlow CLI account quotas.
Installation
- Search for Unify Chat Provider in the VS Code Extension Marketplace and install it.
- Download the latest .vsix file from GitHub Releases, then install it in VS Code via Install from VSIX... or by dragging it into the Extensions view.
Quick Start
Check out the Cookbook; you can start using it in minutes:
- Free Claude 4.5 & Gemini 3 Series Models
- Partially Free Claude, GPT, Gemini, Grok Series Models
- Free Kimi K2.5, GLM 4.7, MiniMax M2.1 Series Models
- Free Kimi K2.5, GLM 4.7, MiniMax M2.1, Qwen3, DeepSeek Series Models
- More Recipes
You can also check the Provider Support Table.
If the above content still doesn't help you, please continue reading the rest of this document, or create an Issue for help.
🍱 Cookbook
Add Gemini CLI / Antigravity Account
⚠️ Warning: This may violate Google's Terms of Service; be aware of the risk of account suspension!
- You need to prepare a Google account.
- Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
- Select Google Antigravity in the list, leave Project ID blank, and press Enter.
- Allow the extension to open the browser for authorized login, and log in to your account in the browser.
- After logging in, return to VS Code and click the Save button at the bottom of the configuration interface to finish.
- Optional: Repeat the above steps to add the Google Gemini CLI provider.
Antigravity and Gemini CLI quotas are independent even for the same account, so it is recommended to add both to get more free quota.
Gemini CLI Permission Error Solution:
When using Gemini CLI models, you may see:
- Permission 'cloudaicompanion.companions.generateChat' denied on resource '//cloudaicompanion.googleapis.com/projects/...'
- 'No project ID found for Gemini CLI.'
These errors mean you need your own Project ID:
- Go to the Google Cloud Console.
- Create or select a project.
- Enable the Gemini for Google Cloud API (cloudaicompanion.googleapis.com), for example via `gcloud services enable cloudaicompanion.googleapis.com`.
- When authorizing, explicitly fill in the Project ID instead of leaving it blank.
Add GitHub Copilot Account
VS Code's Copilot Chat itself supports logging into a GitHub Copilot account, so this is generally used to quickly switch between multiple accounts.
- You need to prepare a GitHub account.
- Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
- Select GitHub Copilot in the list, and choose GitHub.com or GitHub Enterprise depending on whether your account is an enterprise subscription.
- Allow the extension to open the browser for authorized login, and log in to your account in the browser.
- After logging in, return to VS Code and click the Save button at the bottom of the configuration interface to finish.
Add Nvidia Account
- You need to prepare an Nvidia account.
- Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
- Select Nvidia in the list, fill in the API Key generated in the user panel, and press Enter.
- Click the Save button at the bottom of the configuration interface to finish.
If you need the Kimi K2.5 model, add it from the built-in model list; the official API may not return this model's information yet.
Add iFlow API Key or CLI Account
- You need to prepare an iFlow account.
- Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
- Select iFlow in the list, and choose one of two verification methods:
  - API Key: Fill in the API Key generated in the iFlow console.
  - iFlow CLI: Allow the extension to open the browser for authorized login, and log in to your account in the browser.
- After verification completes, return to VS Code and click the Save button at the bottom of the configuration interface to finish.
Impersonate Claude Code Client
⚠️ Warning: This may violate the provider's Terms of Service; be aware of the risk of account suspension!
When do you need to use this?
- Some Coding Plan subscriptions or API relay sites require that their API Key be used strictly from the Claude Code client.
- You need to use Claude Code's account quota in Github Copilot.
Steps:
- You need to prepare a Claude Code account or API Key (official or not).
- Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
- Select Claude Code in the list, and choose one of two verification methods:
  - API Key: Fill in the API Key you use in Claude Code.
  - Claude Code: Allow the extension to open the browser for authorized login, and log in to your account in the browser.
- If your Base URL is not the official https://api.anthropic.com:
  - In the pop-up configuration interface, click Provider Settings... -> API Base URL and fill in the URL you want to use.
  - Return to the previous interface.
- Click the Save button at the bottom of the configuration interface to finish.
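For reference, here is a minimal sketch of what the resulting provider config could look like when exported, assuming an API Key setup with a custom relay base URL (field names follow the Provider Parameters section; the type value and URL are illustrative assumptions, not the extension's guaranteed output):

```jsonc
{
  "type": "anthropic",                      // Anthropic Messages API format (assumed)
  "name": "Claude Code",
  "baseUrl": "https://relay.example.com",   // hypothetical non-official relay endpoint
  "auth": { "method": "api-key", "apiKey": "sk-..." },
  "models": []                              // models can be added or auto-fetched afterwards
}
```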
Basic Operations
The UI is integrated into the VS Code Command Palette for a more native experience. Here’s the basic workflow:
- Open the Command Palette:
  - From the menu: View -> Command Palette...
  - Or with the shortcut: Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (Mac)
- Search commands:
  - Type Unify Chat Provider: or ucp: to find all commands.
- Run a command:
  - Select a command with the mouse or arrow keys, then press Enter.
One-Click Migration
See the Application Migration Support Table to learn which apps and extensions are supported.
If your app/extension is not in the list, you can configure it via One-Click Configuration or Manual Configuration.
Steps:
Open the VS Code Command Palette and search for Unify Chat Provider: Import Config From Other Applications.
- The UI lists all supported apps/extensions and the detected config file paths.
- Use the button group on the far right of each item for additional actions:
1. `Custom Path`: Import from a custom config file path.
2. `Import From Config Content`: Paste the config content directly.
Choose the app/extension you want to import, then you’ll be taken to the config import screen.
- This screen lets you review and edit the config that will be imported.
- For details, see the Provider Settings section.
Click Save to complete the import and start using the imported models in Copilot Chat.
One-Click Configuration
See the Provider Support Table for providers supported by one-click configuration.
If your provider is not in the list, you can add it via Manual Configuration.
Steps:
Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider From Well-Known Provider List.
Select the provider you want to add.
Follow the prompts to configure authentication (usually an API key, or it may require logging in via the browser), then you’ll be taken to the config import screen.
- This screen lets you review and edit the config that will be imported.
- For details, see the Provider Settings section.
Click Save to complete the import and start using the models in Copilot Chat.
Manual Configuration
This section uses DeepSeek as an example, adding the provider and two models.
DeepSeek supports One-Click Configuration. This section shows the manual setup for demonstration purposes.
Preparation: get the API information from the provider docs, at least the following:
- API Format: The request format (e.g., OpenAI Chat Completions, Anthropic Messages).
- API Base URL: The base URL of the API.
- Authentication: Usually an API key, obtained from the user center or console after registration.
Open the VS Code Command Palette and search for Unify Chat Provider: Add Provider.
- This screen is similar to the [Provider Settings](#provider-settings) screen, and includes in-place documentation for each field.
Fill in the provider name: Name.
- The name must be unique and is shown in the model list. Here we use DeepSeek.
- You can create multiple configs for the same provider with different names, e.g., DeepSeek-Person, DeepSeek-Team.
Choose the API format: API Format.
- DeepSeek uses the OpenAI Chat Completion format, so select that.
- To see all supported formats, refer to the API Format Support Table.
Set the base URL: API Base URL.
- DeepSeek's base URL is https://api.deepseek.com.
Configure authentication: Authentication.
- DeepSeek uses API Key authentication, so select API Key.
- Enter the API key generated from the DeepSeek console.
Click Models to go to the model management screen.
Enable Auto-Fetch Official Models.
Click Save to finish. You can now use the models in Copilot Chat.
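For reference, the configuration built in this walkthrough would export to JSON roughly like the sketch below (field names follow the Provider Parameters and Model Parameters sections; the model entries and key are illustrative, and the actual export may contain more fields):

```jsonc
{
  "type": "openai-chat-completion",
  "name": "DeepSeek",
  "baseUrl": "https://api.deepseek.com",
  "auth": { "method": "api-key", "apiKey": "sk-..." },   // key from the DeepSeek console
  "autoFetchOfficialModels": true,
  "models": [
    { "id": "deepseek-chat", "name": "DeepSeek Chat" },        // illustrative entries
    { "id": "deepseek-reasoner", "name": "DeepSeek Reasoner" }
  ]
}
```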
Manage Providers
- You can create unlimited provider configurations, and multiple configs can coexist for the same provider.
- Provider names must be unique.
Provider List
Open the VS Code Command Palette and search for Unify Chat Provider: Manage Providers.
The UI shows all existing providers. Click a provider item to enter the Model List screen.
The button group on the right of each provider item provides additional actions:
- Export: Export this provider config. See Import and Export.
- Duplicate: Clone this provider config to create a new one.
- Delete: Delete this provider config.
Provider Settings
Models: This button only appears while adding or importing a config; click it to enter the Model List screen.
This screen shows all configuration fields for the provider. For field details, see Provider Parameters.
Manage Models
- Each provider can have unlimited model configurations.
- The same model ID can exist under different providers.
- Within a single provider config, you cannot have multiple identical model IDs directly, but you can create multiple configs by adding a #xxx suffix.
  - For example, you can add both glm4.7 and glm4.7#thinking to quickly switch thinking on/off, as sketched below.
  - The #xxx suffix is automatically removed when sending requests.
- Model names can be duplicated, but using distinct names is recommended to avoid confusion.
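As a sketch, the two GLM configs mentioned above could look like this as ModelConfig entries (field names follow the Model Parameters section; the display names and thinking values are illustrative):

```jsonc
[
  // Base config with thinking off
  { "id": "glm4.7", "name": "GLM 4.7", "thinking": { "type": "disabled" } },
  // Same model; the #thinking suffix keeps the ID unique and is stripped from requests
  { "id": "glm4.7#thinking", "name": "GLM 4.7 Thinking", "thinking": { "type": "enabled" } }
]
```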
Model List
Add Model Manually
This screen is similar to the Model Settings screen; you can read the in-place documentation to understand each field.
One-Click Add Models
This screen lists all models that can be added with one click. You can import multiple selected models at once.
See the Model Support Table for the full list of supported models.
Auto-Fetch Official Models
This feature periodically fetches the latest official model list from the provider’s API and automatically configures recommended parameters, greatly simplifying model setup.
Tip
A provider’s API may not return recommended parameters. In that case, recommended parameters are looked up from an internal database by model ID. See the Model Support Table for models that have built-in recommendations.
- Auto-fetched models show an internet icon before the model name.
- If an auto-fetched model ID conflicts with a manually configured one, only the manually configured model is shown.
- Auto-fetched models are refreshed periodically; you can also click (click to fetch) to refresh manually.
- Run the VS Code command Unify Chat Provider: Refresh All Provider's Official Models to trigger a refresh for all providers.
Model Settings
- Export: Export this model config. See Import and Export.
- Duplicate: Clone this model config to create a new one.
- Delete: Delete this model config.
This screen shows all configuration fields for the model. For field details, see Model Parameters.
Adjust Parameters
Global Settings
| Name | ID | Description |
| --- | --- | --- |
| Global Network Settings | networkSettings | Network timeout/retry settings; they only affect chat requests. |
| Store API Key in Settings | storeApiKeyInSettings | See Cloud Sync Compatibility for details. |
| Enable Detailed Logging | verbose | Enables more detailed logging for troubleshooting errors. |
Provider Parameters
The following fields correspond to ProviderConfig (field names used in import/export JSON).
| Name | ID | Description |
| --- | --- | --- |
| API Format | type | Provider type (determines the API format and compatibility logic). |
| Provider Name | name | Unique name for this provider config (used for list display and references). |
| API Base URL | baseUrl | API base URL, e.g. https://api.anthropic.com. |
| Authentication | auth | Authentication config object (none / api-key / oauth2). |
| Models | models | Array of model configurations (ModelConfig[]). |
| Extra Headers | extraHeaders | HTTP headers appended to every request (Record<string, string>). |
| Extra Body Fields | extraBody | Extra fields appended to the request body (Record<string, unknown>), for provider-specific parameters. |
| Timeout | timeout | Timeout settings for HTTP requests and SSE streaming (milliseconds). |
| Connection Timeout | timeout.connection | Maximum time to wait when establishing a TCP connection; default 60000 (60 seconds). |
| Response Interval Timeout | timeout.response | Maximum time to wait between SSE chunks; default 300000 (5 minutes). |
| Retry | retry | Retry settings for transient errors (chat requests only). |
| Max Retries | retry.maxRetries | Maximum number of retry attempts; default 10. |
| Initial Delay | retry.initialDelayMs | Initial delay before the first retry (milliseconds); default 1000. |
| Max Delay | retry.maxDelayMs | Maximum delay cap for retries (milliseconds); default 60000. |
| Backoff Multiplier | retry.backoffMultiplier | Exponential backoff multiplier; default 2. |
| Jitter Factor | retry.jitterFactor | Jitter factor (0-1) to randomize delay; default 0.1. |
| Auto-Fetch Official Models | autoFetchOfficialModels | Whether to periodically fetch and auto-update the official model list from the provider API. |
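As an illustration, a provider's timeout and retry blocks with every default written out explicitly would look like the sketch below (values taken from the defaults in the table above; omitting any field falls back to these same values):

```jsonc
{
  "timeout": {
    "connection": 60000,      // up to 60 s to establish the connection
    "response": 300000        // up to 5 min between SSE chunks
  },
  "retry": {
    "maxRetries": 10,
    "initialDelayMs": 1000,   // 1 s before the first retry
    "maxDelayMs": 60000,      // delays never exceed 60 s
    "backoffMultiplier": 2,   // roughly 1 s, 2 s, 4 s, 8 s, ...
    "jitterFactor": 0.1       // each delay randomized by about ±10%
  }
}
```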
Model Parameters
The following fields correspond to ModelConfig (field names used in import/export JSON).
| Name | ID | Description |
| --- | --- | --- |
| Model ID | id | Model identifier (you can use a #xxx suffix to create multiple configs for the same model; the suffix is removed when sending requests). |
| Display Name | name | Name shown in the UI (usually falls back to id if empty). |
| Model Family | family | A grouping identifier for grouping/matching models (e.g., gpt-4, claude-3). |
| Max Input Tokens | maxInputTokens | Maximum input/context tokens (some providers interpret this as the total "input + output" context). |
| Max Output Tokens | maxOutputTokens | Maximum generated tokens (required by some providers, e.g., Anthropic's max_tokens). |
| Capabilities | capabilities | Capability declaration (for UI and routing logic; may also affect request construction). |
| Tool Calling | capabilities.toolCalling | Whether tool/function calling is supported; if a number, it is the maximum tool count. |
| Image Input | capabilities.imageInput | Whether image input is supported. |
| Streaming | stream | Whether streaming responses are enabled (if unset, the default behavior is used). |
| Temperature | temperature | Sampling temperature (randomness). |
| Top-K | topK | Top-k sampling. |
| Top-P | topP | Top-p (nucleus) sampling. |
| Frequency Penalty | frequencyPenalty | Frequency penalty. |
| Presence Penalty | presencePenalty | Presence penalty. |
| Parallel Tool Calling | parallelToolCalling | Whether to allow parallel tool calls (true enables, false disables, undefined uses the default). |
| Verbosity | verbosity | Constrains verbosity: low / medium / high (not supported by all providers). |
| Thinking | thinking | Thinking/reasoning config (support varies by provider). |
| Thinking Mode | thinking.type | enabled / disabled / auto |
| Thinking Budget Tokens | thinking.budgetTokens | Token budget for thinking. |
| Thinking Effort | thinking.effort | none / minimal / low / medium / high / xhigh |
| Extra Headers | extraHeaders | HTTP headers appended to this model's requests (Record<string, string>). |
| Extra Body Fields | extraBody | Extra fields appended to this model's request body (Record<string, unknown>). |
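Putting the common fields together, a single ModelConfig might look like the sketch below (all values are illustrative, not recommended parameters; the extraBody content is a hypothetical provider-specific flag):

```jsonc
{
  "id": "deepseek-chat",
  "name": "DeepSeek Chat",
  "maxInputTokens": 128000,        // illustrative context limit
  "maxOutputTokens": 8192,
  "capabilities": {
    "toolCalling": true,           // a number here would mean "max tool count"
    "imageInput": false
  },
  "stream": true,
  "temperature": 0.7,
  "extraBody": { "custom_flag": true }   // hypothetical provider-specific field
}
```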
Import and Export
Supported import/export payloads:
- Single provider configuration
- Single model configuration
- Multiple provider configurations (array)
- Multiple model configurations (array)
Supported import/export formats:
- Base64-url encoded JSON config string (export uses this format only)
- Plain JSON config string
- A URL pointing to a Base64-url encoded or plain JSON config string
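For example, a multiple-provider payload is simply a JSON array of provider configs; a minimal sketch with illustrative values (any of the formats above can carry it):

```jsonc
[
  { "type": "openai-chat-completion", "name": "DeepSeek", "baseUrl": "https://api.deepseek.com", "auth": { "method": "api-key", "apiKey": "sk-..." } },
  { "type": "anthropic", "name": "Anthropic", "baseUrl": "https://api.anthropic.com", "auth": { "method": "api-key", "apiKey": "sk-ant-..." } }
]
```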
URI Support
Supports importing provider configs via VS Code URI.
Example:
vscode://SmallMain.vscode-unify-chat-provider/import-config?config=<input>
<input> supports the same formats as in Import and Export.
Override Config Fields
You can add query parameters to override certain fields in the imported config.
Example:
vscode://SmallMain.vscode-unify-chat-provider/import-config?config=<input>&auth={"method":"api-key","apiKey":"my-api-key"}
The auth field is overridden before the import is applied. (You may need to URL-encode the JSON value in the query string.)
Provider Advocacy
If you are a developer for an LLM provider, you can add a link like the following on your website so users can add your model to this extension with one click:
<a href="vscode://SmallMain.vscode-unify-chat-provider/import-config?config=eyJ0eXBlIjoi...">Add to Unify Chat Provider</a>
Cloud Sync Compatibility
Extension configs are stored in settings.json, so they work with VS Code Settings Sync.
However, sensitive information is stored in VS Code Secret Storage by default, which currently does not sync, so after syncing to another device you may be prompted to re-enter keys or re-authorize.
If you want sensitive data such as API keys to sync, enable storeApiKeyInSettings. This increases the risk of leaking user data, so evaluate the risk before enabling it.
OAuth credentials are always kept in Secret Storage to avoid multi-device token refresh conflicts.
API Format Support Table
| API | ID | Typical Endpoint | Notes |
| --- | --- | --- | --- |
| OpenAI Chat Completion API | openai-chat-completion | /v1/chat/completions | If the base URL doesn't end with a version suffix, /v1 is appended automatically. |
| OpenAI Responses API | openai-responses | /v1/responses | If the base URL doesn't end with a version suffix, /v1 is appended automatically. |
| Google AI Studio (Gemini API) | google-ai-studio | /v1beta/models:generateContent | Automatically detects a version-number suffix. |
| Google Vertex AI | google-vertex-ai | /v1beta/models:generateContent | Uses a different base URL depending on the authentication method. |
| Anthropic Messages API | anthropic | /v1/messages | Automatically removes a duplicated /v1 suffix. |
| Ollama Chat API | ollama | /api/chat | Automatically removes a duplicated /api suffix. |
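For example, with the openai-chat-completion format and an API Base URL of https://api.deepseek.com, chat requests go to https://api.deepseek.com/v1/chat/completions; a base URL that already ends in /v1 is presumably left unchanged.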
Provider Support Table
The providers listed below support One-Click Configuration. Implementations follow the best practices from official docs to help you get the best performance.
Tip
Even if a provider is not listed, you can still use it via Manual Configuration.
Experimental Supported Providers:
⚠️ Warning: Adding the following providers may violate their Terms of Service!
- Your account may be suspended or permanently banned.
- All risks are borne by you; proceed only if you accept them.
Long-Term Free Quotas:
Qwen Code
- Completely free.
- Supported models:
- qwen3-coder-plus
- qwen3-coder-flash
- qwen3-vl-plus
GitHub Copilot
- Some models have free quotas; others require a Copilot subscription. With a subscription, usage is free within monthly refreshing quotas.
- Supported models: Claude, GPT, Grok, Gemini and other mainstream models.
Google Antigravity
- Each model has a certain free quota, refreshing over time.
- Supported models: Claude 4.5 Series, Gemini 3 Series.
Google Gemini CLI
- Each model has a certain free quota, refreshing over time.
- Supported models: Gemini 3 Series, Gemini 2.5 Series.
iFlow
- Completely free.
- Supported models: GLM, Kimi, Qwen, DeepSeek and other mainstream models.
Cerebras
- Some models have free quotas, refreshing over time.
- Supported models:
- GLM 4.7
- GPT-OSS-120B
- Qwen 3 235B Instruct
- ...
Nvidia
- Completely free, but with rate limits.
- Supports almost all open-weight models.
Volcano Engine
- Each model has a certain free quota, refreshing over time.
- Supported models: Doubao, Kimi, DeepSeek and other mainstream models.
ModelScope
- Each model has a certain free quota, refreshing over time.
- Supported models: GLM, Kimi, Qwen, DeepSeek and other mainstream models.
ZhiPu AI / Z.AI
- Some models are completely free.
- Supported models: GLM Flash series models.
SiliconFlow
- Some models are completely free.
- Supported models: Mostly open-weight models under 32B.
StreamLake
- Completely free, but with rate limits.
- Supported models:
- KAT-Coder-Pro V1
- KAT-Coder-Air
LongCat
- Has a certain free quota, refreshing over time.
- Supported models:
- LongCat-Flash-Chat
- LongCat-Flash-Thinking
- LongCat-Flash-Thinking-2601
OpenRouter
- Some models have certain free quotas, refreshing over time.
- Supported models: Change frequently; look for models with 'free' in the name.
OpenCode Zen
- Some models are completely free.
- Supported models: Change frequently; look for models with 'free' in the name.
Ollama Cloud
- Each model has a certain free quota, refreshing over time.
- Supports almost all open-weight models.
Model Support Table
The models listed below support One-Click Add Models, and have built-in recommended parameters to help you get the best performance.
Tip
Even if a model is not listed, you can still use it via Add Model Manually and tune the parameters yourself.
| Vendor | Series | Supported Models |
| --- | --- | --- |
| OpenAI | GPT-5 Series | GPT-5, GPT-5.1, GPT-5.2, GPT-5.2 pro, GPT-5 mini, GPT-5 nano, GPT-5 pro, GPT-5-Codex, GPT-5.1-Codex, GPT-5.2-Codex, GPT-5.1-Codex-Max, GPT-5.1-Codex-mini, GPT-5.2 Chat, GPT-5.1 Chat, GPT-5 Chat |
| | GPT-4 Series | GPT-4o, GPT-4o mini, GPT-4o Search Preview, GPT-4o mini Search Preview, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4.5 Preview, GPT-4 Turbo, GPT-4 Turbo Preview, GPT-4 |
| | GPT-3 Series | GPT-3.5 Turbo, GPT-3.5 Turbo Instruct |
| | o Series | o1, o1 pro, o1 mini, o1 preview, o3, o3 mini, o3 pro, o4 mini |
| | oss Series | gpt-oss-120b, gpt-oss-20b |
| | Deep Research Series | o3 Deep Research, o4 mini Deep Research |
| | Other Models | babbage-002, davinci-002, Codex mini, Computer Use Preview |
| Google | Gemini 3 Series | gemini-3-pro-preview, gemini-3-flash-preview |
| | Gemini 2.5 Series | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite |
| | Gemini 2.0 Series | gemini-2.0-flash, gemini-2.0-flash-lite |
| Anthropic | Claude 4 Series | Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.5, Claude Sonnet 4, Claude Opus 4.1, Claude Opus 4 |
| | Claude 3 Series | Claude Sonnet 3.7, Claude Sonnet 3.5, Claude Haiku 3.5, Claude Haiku 3, Claude Opus 3 |
| xAI | Grok 4 Series | Grok 4.1 Fast (Reasoning), Grok 4.1 Fast (Non-Reasoning), Grok 4, Grok 4 Fast (Reasoning), Grok 4 Fast (Non-Reasoning) |
| | Grok Code Series | Grok Code Fast 1 |
| | Grok 3 Series | Grok 3, Grok 3 Mini |
| | Grok 2 Series | Grok 2 Vision |
| DeepSeek | DeepSeek V3 Series | DeepSeek Chat, DeepSeek Reasoner, DeepSeek V3.2, DeepSeek V3.2 Exp, DeepSeek V3.2 Speciale, DeepSeek V3.1, DeepSeek V3.1 Terminus, DeepSeek V3, DeepSeek V3 (0324) |
| | DeepSeek R1 Series | DeepSeek R1, DeepSeek R1 (0528) |
| | DeepSeek V2.5 Series | DeepSeek V2.5 |
| | DeepSeek V2 Series | DeepSeek V2 |
| | DeepSeek VL Series | DeepSeek VL, DeepSeek VL2 |
| | DeepSeek Coder Series | DeepSeek Coder, DeepSeek Coder V2 |
| | DeepSeek Math Series | DeepSeek Math V2 |
| ByteDance | Doubao 1.8 Series | Doubao Seed 1.8 |
| | Doubao 1.6 Series | Doubao Seed 1.6, Doubao Seed 1.6 Lite, Doubao Seed 1.6 Flash, Doubao Seed 1.6 Vision |
| | Doubao 1.5 Series | Doubao 1.5 Pro 32k, Doubao 1.5 Pro 32k Character, Doubao 1.5 Lite 32k |
| | Doubao Code Series | Doubao Seed Code Preview |
| | Other Models | Doubao Lite 32k Character |
| MiniMax | MiniMax M2 Series | MiniMax-M2.1, MiniMax-M2.1-Lightning, MiniMax-M2 |
| LongCat | LongCat Flash Series | LongCat Flash Chat, LongCat Flash Thinking, LongCat Flash Thinking 2601 |
| StreamLake | KAT-Coder Series | KAT-Coder-Pro V1, KAT-Coder-Exp-72B-1010, KAT-Coder-Air V1 |
| Moonshot AI | Kimi K2.5 Series | Kimi K2.5 |
| | Kimi K2 Series | Kimi K2 Thinking, Kimi K2 Thinking Turbo, Kimi K2 0905 Preview, Kimi K2 0711 Preview, Kimi K2 Turbo Preview, Kimi For Coding |
| Qwen | Qwen 3 Series | Qwen3-Max, Qwen3-Max Preview, Qwen3-Coder-Plus, Qwen3-Coder-Flash, Qwen3-VL-Plus, Qwen3-VL-Flash, Qwen3-VL-32B-Instruct, Qwen3 0.6B, Qwen3 1.7B, Qwen3 4B, Qwen3 8B, Qwen3 14B, Qwen3 32B, Qwen3 30B A3B, Qwen3 235B A22B, Qwen3 30B A3B Thinking 2507, Qwen3 30B A3B Instruct 2507, Qwen3 235B A22B Thinking 2507, Qwen3 235B A22B Instruct 2507, Qwen3 Coder 480B A35B Instruct, Qwen3 Coder 30B A3B Instruct, Qwen3-Omni-Flash, Qwen3-Omni-Flash-Realtime, Qwen3-Omni 30B A3B Captioner, Qwen-Omni-Turbo, Qwen-Omni-Turbo-Realtime, Qwen3-VL 235B A22B Thinking, Qwen3-VL 235B A22B Instruct, Qwen3-VL 32B Thinking, Qwen3-VL 30B A3B Thinking, Qwen3-VL 30B A3B Instruct, Qwen3-VL 8B Thinking, Qwen3-VL 8B Instruct, Qwen3 Next 80B A3B Thinking, Qwen3 Next 80B A3B Instruct, Qwen-Plus, Qwen-Flash, Qwen-Turbo, Qwen-Max, Qwen-Long, Qwen-Doc-Turbo, Qwen Deep Research |
| | Qwen 2.5 Series | Qwen2.5 0.5B Instruct, Qwen2.5 1.5B Instruct, Qwen2.5 3B Instruct, Qwen2.5 7B Instruct, Qwen2.5 14B Instruct, Qwen2.5 32B Instruct, Qwen2.5 72B Instruct, Qwen2.5 7B Instruct (1M), Qwen2.5 14B Instruct (1M), Qwen2.5 Coder 0.5B Instruct, Qwen2.5 Coder 1.5B Instruct, Qwen2.5 Coder 3B Instruct, Qwen2.5 Coder 7B Instruct, Qwen2.5 Coder 14B Instruct, Qwen2.5 Coder 32B Instruct, Qwen2.5 Math 1.5B Instruct, Qwen2.5 Math 7B Instruct, Qwen2.5 Math 72B Instruct, Qwen2.5-VL 3B Instruct, Qwen2.5-VL 7B Instruct, Qwen2.5-VL 32B Instruct, Qwen2.5-Omni-7B, Qwen2 7B Instruct, Qwen2 72B Instruct, Qwen2 57B A14B Instruct, Qwen2-VL 72B Instruct |
| | Qwen 1.5 Series | Qwen1.5 7B Chat, Qwen1.5 14B Chat, Qwen1.5 32B Chat, Qwen1.5 72B Chat, Qwen1.5 110B Chat |
| | QwQ/QvQ Series | QwQ-Plus, QwQ 32B, QwQ 32B Preview, QVQ-Max, QVQ-Plus, QVQ 72B Preview |
| | Qwen Coder Series | Qwen-Coder-Plus, Qwen-Coder-Turbo |
| | Other Models | Qwen-Math-Plus, Qwen-Math-Turbo, Qwen-VL-OCR, Qwen-VL-Max, Qwen-VL-Plus, Qwen-Plus Character (JA) |
| Xiaomi MiMo | MiMo V2 Series | MiMo V2 Flash |
| ZhiPu AI | GLM 4 Series | GLM-4.7, GLM-4.7-Flash, GLM-4.7-FlashX, GLM-4.6, GLM-4.5, GLM-4.5-X, GLM-4.5-Air, GLM-4.5-AirX, GLM-4-Plus, GLM-4-Air-250414, GLM-4-Long, GLM-4-AirX, GLM-4-FlashX-250414, GLM-4.5-Flash, GLM-4-Flash-250414, GLM-4.6V, GLM-4.5V, GLM-4.1V-Thinking-FlashX, GLM-4.6V-Flash, GLM-4.1V-Thinking-Flash |
| | CodeGeeX Series | CodeGeeX-4 |
| Tencent HY | HY 2.0 Series | HY 2.0 Think, HY 2.0 Instruct |
| | HY 1.5 Series | HY Vision 1.5 Instruct |
| OpenCode Zen | Zen | Big Pickle |
Application Migration Support Table
The applications listed below support One-Click Migration.
| Application | Notes |
| --- | --- |
| Claude Code | Migration is supported only when using a custom Base URL and API Key. |
| Codex | Migration is supported only when using a custom Base URL and API Key. |
| Gemini CLI | Migration is supported only with the following auth methods: GEMINI_API_KEY, GOOGLE_API_KEY, GOOGLE_APPLICATION_CREDENTIALS. |
Contributing
- Feel free to open an issue to report bugs, request features, or ask for support of new providers/models.
- Pull requests are welcome. See the roadmap.
Development
- Build: npm run compile
- Watch: npm run watch
- Interactive release: npm run release
- GitHub Actions release: Actions → Release (VS Code Extension) → Run workflow
License
MIT © SmallMain