ChatGLM Router for GitHub Copilot Chat
A VS Code extension forked from Hugging Face's huggingface-vscode-chat project, modified to integrate ChatGLM (with Coding and General endpoints) into GitHub Copilot Chat.
Compatibility
Due to limited testing resources, compatibility testing may not be exhaustive. Please report any issues on GitHub or in the comments section, and I will address them promptly.
AI Assistance Statement
The development of this plugin utilized ChatGLM Coding to complete most of the API adaptation work.
Demo

Quick Start
- Install the ChatGLM Router extension (search for "ChatGLM Router" in VS Code extensions)
- Open VS Code Copilot Chat interface (Ctrl/Cmd + Shift + A)
- Click the model picker and click "Manage Models..."
- Find "ChatGLM Router" and click "Manage ChatGLM Router"
- Select the provider you want to manage and enter its API Key. For ChatGLM, you can get one from https://open.bigmodel.cn/
- Choose the models you want to add to the model picker
📋 Roadmap
Recently Implemented:
- [x] Multiple Custom Providers - Support for other OpenAI-compatible APIs (DeepSeek, OpenAI, Azure OpenAI, local LLMs, etc.)
- [x] Real-time Token Usage - Status bar shows weekly/monthly token usage
- [x] Usage Statistics - Detailed request and token statistics with weekly/monthly histograms
Planned features for future releases:
- [ ] Token Alias Support - Use custom API endpoints while maintaining unified token billing
- [ ] Streaming Token Usage - Real-time token count display during chat
- [ ] Usage Cost Estimation - Calculate API costs based on token consumption
- [ ] Export Usage Reports - Export statistics to CSV/JSON for further analysis
- [ ] Usage Alerts - Notify when approaching API quota limits
- [ ] Multi-language Model Names - Support for models with non-English identifiers
- [ ] Model Caching - Cache model lists for faster loading
Have a suggestion? Feel free to open an issue on GitHub!
💖 Support This Project
Enjoy using this extension? Your support helps me continue developing and maintaining it!
🚀 Get GLM Coding Premium at a Discount - Use my referral link to get a special deal on GLM Coding subscription:
- 20+ Programming Tools: Seamlessly supports Claude Code, Cline, and more
- Enhanced Coding Power: Supercharge your development workflow
- Limited Time Offer: Exclusive discount for new users
Get GLM Coding Premium →

By subscribing through this link, you get a premium experience while supporting the development of this extension at no extra cost. Thank you for your support! 🙏
Available Models
ChatGLM Coding (Default)
- Optimized for code generation and programming tasks
- Endpoint: `https://open.bigmodel.cn/api/coding/paas/v4`
ChatGLM General (Optional)
- For general chat and non-coding tasks
- Endpoint: `https://open.bigmodel.cn/api/paas/v4/`
- Enable in settings if needed (disabled by default)
- Same models available, optimized for conversational AI
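The Troubleshooting section notes that the General endpoint is switched on via `chatglmRouter.enabledProviders`. As a sketch, enabling it alongside the default Coding endpoint in settings.json might look like this (the exact default contents of this array are an assumption and may differ in your installation):

```json
{
  "chatglmRouter.enabledProviders": ["chatglm-coding", "chatglm-general"]
}
```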
Custom Providers (Now Supported)
This extension now supports adding custom API providers for any OpenAI-compatible API.
Supported Provider Presets
We have built-in presets for popular providers to make configuration easy:
- OpenAI - GPT-4, GPT-3.5, and more
- DeepSeek - DeepSeek-V3, DeepSeek-Coder
- Anthropic - Claude series models
- Azure OpenAI - Azure-hosted OpenAI services
- Zhipu AI (GLM) - Zhipu GLM series models
- Moonshot AI (Kimi) - Moonshot Kimi series models
- Baichuan - Baichuan series models
- MiniMax - MiniMax series models
- Ollama (Local) - Local Ollama service
- LM Studio (Local) - Local LM Studio service
- Together AI - Together AI API
- Groq - Groq ultra-fast inference API
- Cerebras - Cerebras ultra-fast inference API
- OpenRouter - OpenRouter API (access to multiple models)
How to Add Custom Providers
Method 1: Add from Presets (Recommended)
- Press `Ctrl/Cmd + Shift + P` to open the command palette
- Run "ChatGLM Router: Add Custom API Provider"
- Select "Select from Presets"
- Choose a preset from the list (e.g., DeepSeek, OpenAI, etc.)
- Confirm the provider ID (can be customized)
- Run "ChatGLM Router: Manage Custom Providers" to configure API Key
Method 2: Manual Configuration
- Press `Ctrl/Cmd + Shift + P` to open the command palette
- Run "ChatGLM Router: Add Custom API Provider"
- Select "Manual Configuration"
- Fill in the following information:
  - Provider ID: unique identifier (e.g., `my-openai`)
  - Display Name: name shown in the UI (e.g., `My OpenAI`)
  - API Base URL: API endpoint (e.g., `https://api.openai.com/v1`)
  - Model Family: model family identifier (e.g., `openai`)
  - Supports Tool Calling: whether Function Calling is supported
- Run "ChatGLM Router: Manage Custom Providers" to configure API Key
Manage Custom Providers
Run "ChatGLM Router: Manage Custom Providers" to:
- Add new custom providers
- Configure API Key: Configure or update API keys for existing providers
- View details: View provider configuration information
- Delete provider: Remove unused custom providers
You can also configure directly in settings.json:
```json
{
  "chatglmRouter.customProviders": [
    {
      "id": "deepseek",
      "name": "DeepSeek",
      "baseUrl": "https://api.deepseek.com/v1",
      "family": "deepseek",
      "supportsTools": true
    },
    {
      "id": "my-openai",
      "name": "My OpenAI",
      "baseUrl": "https://api.openai.com/v1",
      "family": "openai",
      "supportsTools": true
    }
  ]
}
```
After configuration, run "ChatGLM Router: Manage Custom Providers" to configure API keys for each provider.
Configuration
API Key
ChatGLM API Key
Configure your ChatGLM API key via the command palette:
- Press `Ctrl/Cmd + Shift + P`
- Run "ChatGLM Router: Manage"
- Select "ChatGLM (Coding & General)"
- Enter your API key from https://open.bigmodel.cn/
Custom Provider API Key
Configure API keys for custom providers:
- Press `Ctrl/Cmd + Shift + P`
- Run "ChatGLM Router: Manage Custom Providers"
- Select the provider you want to configure
- Select "Configure API Key"
- Enter the API key for that provider
- The extension will automatically verify the API key and display the number of available models
Clear API Key
Clear ChatGLM API Key
- Run "ChatGLM Router: Clear ChatGLM API Key" to delete the stored ChatGLM API key
Clear Custom Provider API Key
- Run "ChatGLM Router: Manage Custom Providers"
- Select the provider you want to clear
- Select "Configure API Key"
- Clear the input field and confirm
Model Selection
Models are now displayed in the "Provider: Model Name" format:
ChatGLM Built-in Providers
- `ChatGLM Coding: glm-4.7` - ChatGLM Coding endpoint (default, recommended for VS Code)
- `ChatGLM Coding: glm-4-air` - ChatGLM Coding lightweight model
- `ChatGLM General: glm-4.7` - ChatGLM General endpoint
- `ChatGLM General: glm-4-plus` - ChatGLM General enhanced model
Custom Provider Examples
- `DeepSeek: deepseek-chat` - DeepSeek-V3 non-reasoning mode
- `DeepSeek: deepseek-reasoner` - DeepSeek-V3 reasoning mode
- `OpenAI: gpt-4` - OpenAI GPT-4
- `My OpenAI: gpt-4` - Custom-configured OpenAI
Notes:
- If a provider doesn't have an API key configured, the model picker will show "Provider Name: Provider Name (API key not configured)"
- Selecting an unconfigured model will prompt you to enter an API key
- The same model (e.g., `glm-4.7`) can appear under different providers and will be displayed separately for each
Settings
Configure in VS Code Settings under chatglmRouter:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `defaultProvider` | string | `chatglm-coding` | Default provider to use |
| `customProviders` | object[] | `[]` | Custom provider list (see below) |
| `statistics.enabled` | boolean | `true` | Enable usage statistics tracking |
| `statistics.statusBar.enabled` | boolean | `true` | Show statistics in status bar |
| `statistics.modelTooltip.enabled` | boolean | `true` | Show usage in model tooltips |
Custom Provider Configuration
The customProviders setting supports the following fields:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `id` | string | ✅ | Unique provider identifier (e.g., `deepseek`) |
| `name` | string | ✅ | Display name (e.g., `DeepSeek`) |
| `baseUrl` | string | ✅ | API base URL (e.g., `https://api.deepseek.com/v1`) |
| `family` | string | ✅ | Model family identifier (e.g., `deepseek`) |
| `supportsTools` | boolean | ❌ | Whether tool calling is supported (default: `true`) |
| `defaultMaxTokens` | number | ❌ | Default max output tokens (default: `8192`) |
| `defaultContextLength` | number | ❌ | Default context length (default: `128000`) |
Example Configuration:
```json
{
  "chatglmRouter.customProviders": [
    {
      "id": "deepseek",
      "name": "DeepSeek",
      "baseUrl": "https://api.deepseek.com/v1",
      "family": "deepseek",
      "supportsTools": true,
      "defaultMaxTokens": 8192,
      "defaultContextLength": 128000
    }
  ]
}
```
For detailed statistics settings, see Statistics Settings below.
Usage Statistics
Track your API usage with built-in statistics:
Visualized Statistics
- Run "ChatGLM Router: Show Usage Statistics" and select "显示可视化统计" (Show Visual Statistics) to open a webview with charts.
- View token usage per model for each provider.
Real-time Status Bar
- Weekly and monthly token usage displayed in VS Code status bar
- Auto-updates after each conversation request
- Hover to see detailed statistics
- Click to view full statistics
- Hover over models in the picker to see historical usage
- Shows total tokens, request count, and last used time
- Helps you track which models you use most
View Statistics
- Run "ChatGLM Router: Show Usage Statistics" command
- View total requests and tokens per provider
- See detailed per-model usage
- Refresh statistics with confirmation feedback
Reset Statistics
- Run "ChatGLM Router: Reset Usage Statistics" command
- Clears all stored usage data
Statistics in Output
- Run "ChatGLM Router: Show Statistics in Output" command
- Displays detailed statistics in an output channel
Note: Statistics are stored locally in VS Code's global state and are estimates (4 chars ≈ 1 token).
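The 4-chars-per-token heuristic can be sketched as follows (a simplified illustration; the extension's actual accounting in `src/statistics.ts` may differ, and real tokenizers vary by model):

```typescript
// Rough token estimate using the 4-characters-per-token heuristic.
// This is an approximation, not a real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Example: a 26-character string is estimated at 7 tokens.
console.log(estimateTokens("Hello, ChatGLM Router demo"));
```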
Statistics Settings {#statistics-settings}
Configure in VS Code Settings under chatglmRouter.statistics:
| Setting | Options | Default | Description |
|---------|---------|---------|-------------|
| `statusBar.enabled` | boolean | `true` | Show statistics in status bar |
| `statusBar.displayMode` | `normal`, `compact`, `minimal` | `normal` | Status bar display mode |
| `statusBar.timeRange` | `week`, `month`, `both` | `both` | Time range to display |
| `statusBar.showRequestCount` | boolean | `true` | Show request count in status bar |
| `modelTooltip.enabled` | boolean | `true` | Show usage in model tooltips |
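Put together in settings.json, a compact weekly status-bar setup might look like this (the dotted key names are assumed to follow the table above; values are chosen for illustration):

```json
{
  "chatglmRouter.statistics.statusBar.enabled": true,
  "chatglmRouter.statistics.statusBar.displayMode": "compact",
  "chatglmRouter.statistics.statusBar.timeRange": "week",
  "chatglmRouter.statistics.statusBar.showRequestCount": false
}
```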
Development
```bash
git clone https://github.com/OrientLuna/ChatGLM-vscode-chat
cd ChatGLM-vscode-chat
npm install
npm run compile
```
Press F5 to launch an Extension Development Host for testing.
Common Scripts
- Build: `npm run compile`
- Watch: `npm run watch`
- Lint: `npm run lint`
- Format: `npm run format`
- Test: `npm run test`
- Package: `npm run package` (generates a `.vsix` file)
Architecture
- Multi-Provider Design: Supports ChatGLM Coding, ChatGLM General, and custom providers
- Provider Preset System: Built-in presets for popular providers with quick configuration
- Provider Registry: Built-in providers configured in `src/config.ts`
- Custom Providers: Add via VS Code settings or command palette
- Statistics Tracking: Usage data tracked in `src/statistics.ts`
- API-First Model List: Automatically fetches latest models from provider APIs
- Streaming Response: SSE-like streaming with tool call support
- Internationalization: Multi-language support (English and Chinese)
Troubleshooting
Models not appearing
ChatGLM models not appearing
- Check that your ChatGLM API key is configured correctly
- Run "ChatGLM Router: Manage" to verify API key
- Check VS Code developer console for errors (Help → Toggle Developer Tools)
Custom provider models not appearing
- Check provider configuration: Ensure the provider is correctly added to the `customProviders` setting
- Configure API Key: Run "ChatGLM Router: Manage Custom Providers" to configure API key
- Verify API connection: When configuring API key, the extension will automatically verify and display available model count
- Check API format: Ensure the API endpoint is OpenAI-compatible (supports the `/models` endpoint)
- Check console: Look for error messages in the developer console
API Errors
ChatGLM API errors
- Verify your API key has the required permissions
- Check that the selected model is available on your chosen endpoint
- Ensure you have sufficient API credits/quotas
Custom provider API errors
- Check API Key: Ensure the API key is correct and valid
- Check endpoint URL: Verify that `baseUrl` is configured correctly (it usually needs a `/v1` suffix)
- Check model format: Ensure the API returns model list in OpenAI format
- View error details: The model picker will display specific error messages
- Check network connection: Ensure you can reach the API endpoint
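An OpenAI-format `/models` endpoint returns a JSON list of model objects. The sketch below shows the response shape a compatible provider is expected to return and how model IDs can be read out of it (the sample model IDs are illustrative, not a guarantee of what any given provider serves):

```typescript
// Shape of an OpenAI-format model list response from GET {baseUrl}/models.
interface ModelListResponse {
  object: string;           // typically "list"
  data: { id: string }[];   // one entry per available model
}

// Extract the model IDs from a parsed response body.
function listModelIds(response: ModelListResponse): string[] {
  return response.data.map((m) => m.id);
}

// Sample response body with illustrative model IDs.
const sample: ModelListResponse = {
  object: "list",
  data: [{ id: "deepseek-chat" }, { id: "deepseek-reasoner" }],
};
console.log(listModelIds(sample));
```

If your provider's `/models` response does not match this shape, the model picker cannot populate and will surface an error instead.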
Custom providers not working after adding
- Reload window: Configuration requires running "Developer: Reload Window"
- Check configuration: Ensure the configuration format in `settings.json` is correct
- Check provider ID: Ensure the `id` field is unique and doesn't conflict with built-in providers
Local providers (Ollama, LM Studio) connection issues
- Ensure service is running: Make sure Ollama or LM Studio is running
- Check port: Verify the port is configured correctly (Ollama default 11434, LM Studio default 1234)
- Check URL: Local addresses are typically `http://localhost:11434/v1` or `http://localhost:1234/v1`
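As a concrete local example, an Ollama instance on its default port can be registered using the `customProviders` schema described above (a sketch; the `id` and `family` values are arbitrary choices, and whether to set `supportsTools` depends on the local model you run):

```json
{
  "chatglmRouter.customProviders": [
    {
      "id": "ollama-local",
      "name": "Ollama (Local)",
      "baseUrl": "http://localhost:11434/v1",
      "family": "ollama",
      "supportsTools": false
    }
  ]
}
```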
ChatGLM Coding vs General
- Use ChatGLM Coding for code-related tasks (recommended for VS Code)
- Use ChatGLM General for conversational AI and non-coding tasks
- Enable ChatGLM General in settings: `chatglmRouter.enabledProviders` → add `chatglm-general`
Requirements
License
MIT License © OrientLuna
Support