Overview
A powerful VS Code extension that provides model support for GitHub Copilot Chat, seamlessly integrating 20+ AI providers including ZhipuAI, MiniMax, MoonshotAI, DeepSeek, Codex (OpenAI), Chutes, OpenCode, Blackbox, Vercel AI, Cline, and custom OpenAI/Anthropic compatible models.
Supported Providers
| Provider | Description | Key Models | Highlights |
| :------- | :---------- | :--------- | :--------- |
| Codex | OpenAI Codex | GPT-5.2 Codex, GPT-5.3 Codex | 400K context, Image Input, OAuth, Reasoning Modes |
| ZhipuAI | GLM Coding Plan | GLM-4.5, GLM-4.6, GLM-4.7, GLM-5, GLM-4.7-Flash | 256K context, Web Search, MCP SDK, Free Tier |
| MiniMax | Coding Plan | MiniMax-M2.5, MiniMax-M2.1 | 205K context, Web Search, Thinking Mode |
| MoonshotAI | Kimi For Coding | Kimi-K2-Thinking, Kimi-K2-0905-Preview | 256K context, Agentic Coding, Thinking Mode |
| DeepSeek | DeepSeek AI | DeepSeek-V3.2, DeepSeek-V3.2 Reasoner | 128K context, GPT-5 Level Reasoning |
| Chutes | Chutes AI | Various models | Global Request Limit |
| OpenCode | OpenCode AI | Claude 4.5, GPT-5 | Multi-model Access |
| Blackbox | Blackbox AI | kimi-k2.5, blackbox-base-2 | Official API, API Key Required |
| Vercel AI | Vercel AI Gateway | Dynamic language models | Dynamic Models, Vision Tags, Context Metadata |
| Cline | Cline API | Dynamic provider/model IDs | Dynamic Models, OpenAI SDK, Authenticated Discovery |
| DeepInfra | DeepInfra | OpenAI-compatible models | LLM & Image Models |
| Kilo AI | Kilo AI | Dynamic model fetching | High Performance |
| Zenmux | Zenmux AI | Dynamic model fetching | OpenAI-compatible |
| Lightning AI | Lightning AI | Various models | Dynamic Models |
| Hugging Face | Hugging Face | Various models | Router Integration |
| Mistral AI | Mistral AI | Mistral models | OpenAI-compatible |
| NVIDIA NIM | NVIDIA NIM | NVIDIA models | 40 RPM Throttle, Model Discovery |
| Ollama Cloud | Ollama | Local & Cloud models | OpenAI-compatible |
| Qwen CLI | Qwen Code CLI | Qwen models | OAuth via CLI |
| Compatible | Custom API | User-defined models | OpenAI/Anthropic Compatible |
Key Features
🔄 Multi-Account Management
Manage multiple accounts per provider with ease
- Add unlimited accounts for each AI provider
- Quick switch between accounts with Ctrl+Shift+Q / Cmd+Shift+Q
- Visual account status in the status bar
- Secure credential storage using VS Code Secret Storage
⚖️ Load Balancing & Auto-Switching
Automatic load distribution across accounts
- Auto-switch when hitting rate limits or quota exhaustion
- Intelligent retry with exponential backoff strategy
- Real-time quota monitoring and usage statistics
- Seamless failover without interrupting your workflow
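The retry behavior described above can be sketched roughly as follows. This is an illustrative exponential-backoff helper in TypeScript, not the extension's actual implementation; the function names and default values are made up for the example:

```typescript
// Illustrative sketch of retry with exponential backoff, similar in spirit
// to the auto-switching described above (not the extension's real code).
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  // Delay doubles each attempt: 500ms, 1s, 2s, 4s, ... capped at capMs.
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // e.g. a 429 rate-limit response
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
  throw lastError;
}
```

In the extension, failed requests are additionally rerouted to another configured account when load balancing is enabled, so a single rate-limited key does not stall the chat session.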
🔐 OAuth Authentication
Secure login for supported providers
| Provider | Auth Method | Command |
| :------- | :---------- | :------ |
| Codex | OpenAI OAuth | Aether: Codex Login |
| Qwen CLI | Alibaba OAuth | qwen auth login (CLI) |
🌐 Web Search Integration
Real-time information retrieval
| Tool | Provider | Description |
| :--- | :------- | :---------- |
| #zhipuWebSearch | ZhipuAI | Multi-engine search (Sogou, Quark, Standard) |
| #minimaxWebSearch | MiniMax | Coding Plan web search |
Example usage in Copilot Chat:
```
@workspace #zhipuWebSearch What are the latest features in TypeScript 5.5?
```
✨ Advanced Code Completion
Smart code completion features
| Feature | Description | Default |
| :------ | :---------- | :------ |
| FIM (Fill In the Middle) | Intelligent code completion based on context | Disabled |
| NES (Next Edit Suggestions) | Predictive editing suggestions | Disabled |
Enable in Settings:
```json
{
  "chp.fimCompletion.enabled": true,
  "chp.nesCompletion.enabled": true
}
```
Keybindings:
| Action | Windows/Linux | macOS |
| :----- | :------------ | :---- |
| Trigger inline suggestion | Alt+/ | Alt+/ |
| Toggle NES manual mode | Shift+Alt+/ | Shift+Alt+/ |
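These defaults can be rebound in your keybindings.json. The command ID below is a placeholder for illustration only — look up the extension's real command IDs in VS Code's Keyboard Shortcuts editor (search for "Aether") before copying:

```json
[
  {
    "key": "ctrl+alt+space",
    "command": "aether.triggerInlineSuggestion",
    "when": "editorTextFocus"
  }
]
```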
Installation
📦 From VS Code Marketplace (Recommended)
- Open VS Code
- Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
- Search for "Aether"
- Click Install
Or visit the Marketplace page directly: Aether on Visual Studio Marketplace
📁 From .vsix File
- Download the .vsix file from Releases
- In VS Code, press Ctrl+Shift+P / Cmd+Shift+P
- Type "Extensions: Install from VSIX..."
- Select the downloaded file
🔨 Build from Source
```bash
# Clone the repository
git clone https://github.com/OEvortex/aether.git
cd aether

# Install dependencies
npm install

# Build the extension
npm run compile

# Package as .vsix
npm run package

# Install the packaged extension
code --install-extension aether-*.vsix
```
Quick Start
Step 1: Configure a Provider
Run the configuration command for the provider you want to use:
| Provider | Command |
| :------- | :------ |
| Codex (OpenAI) | Cmd+Shift+P → Aether: Codex Login |
| ZhipuAI | Cmd+Shift+P → Aether: ZhipuAI Configuration Wizard |
| MiniMax | Cmd+Shift+P → Aether: MiniMax Configuration Wizard |
| MoonshotAI | Cmd+Shift+P → Aether: MoonshotAI Configuration Wizard |
| DeepSeek | Cmd+Shift+P → Aether: Configure DeepSeek |
| Chutes | Cmd+Shift+P → Aether: Configure Chutes |
| Zenmux | Cmd+Shift+P → Aether: Configure Zenmux |
| OpenCode | Cmd+Shift+P → Aether: Configure OpenCode |
| Blackbox | Cmd+Shift+P → Aether: Configure Blackbox |
| Vercel AI | Cmd+Shift+P → Aether: Configure Vercel AI |
| Cline | Cmd+Shift+P → Aether: Configure Cline |
| Hugging Face | Cmd+Shift+P → Aether: Configure Hugging Face |
| Kilo AI | Cmd+Shift+P → Aether: Configure Kilo AI |
| Lightning AI | Cmd+Shift+P → Aether: Lightning AI Configuration Wizard |
| DeepInfra | Cmd+Shift+P → Aether: Configure DeepInfra |
| NVIDIA NIM | Cmd+Shift+P → Aether: Configure NVIDIA NIM |
| Mistral AI | Cmd+Shift+P → Aether: Configure Mistral AI |
| Ollama Cloud | Cmd+Shift+P → Aether: Configure Ollama Cloud |
| Custom Models | Cmd+Shift+P → Aether: Compatible Provider Settings |
Step 2: Select Your Model
- Open GitHub Copilot Chat
- Click the model dropdown
- Select a model from your configured provider (e.g., ⦿ ZhipuAI > glm-4.5)
Step 3: Add Accounts (Optional)
Cmd+Shift+P → "Aether: Settings"
→ Select a provider
→ Add accounts with API keys
Step 4: Enable Load Balancing
Cmd+Shift+P → "Aether: Settings"
→ Select a provider
→ Toggle "Load Balance" for automatic account switching
Detailed Guide: Managing Providers
Follow these simple steps to add and manage providers using the Settings page:
Step 1: Open Settings
Press Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows/Linux) and type:
Aether: Settings
Step 2: Select Your Provider
Click on the provider you want to configure (e.g., ZhipuAI, MiniMax, MoonshotAI, etc.)
Step 3: Add Account Credentials
Enter your API key and configure provider settings:
- API Key: Your provider's API key
- Base URL: Custom API endpoint (optional)
- Additional Settings: Provider-specific settings exposed by the extension manifest, such as endpoint, SDK mode, and other provider options
Step 4: Enable Load Balancing (Optional)
Toggle the "Load Balance" switch to enable automatic account switching when rate limits are hit.
Provider Management Features
- Add Multiple Accounts: Add multiple API keys per provider for load balancing
- Edit Settings: Click the edit icon to modify provider details and provider-specific settings
- Delete Account: Remove accounts you no longer need
- Switch Account: Use Ctrl+Shift+Q / Cmd+Shift+Q for quick switching
- Load Balance: Automatically distribute requests across accounts
- Quota Tracking: Monitor usage and remaining quota in real-time
Configuration Reference
Global Settings
| Setting | Type | Default | Description |
| :------ | :--- | :------ | :---------- |
| `chp.temperature` | number | 0.1 | Controls output randomness (0-2) |
| `chp.topP` | number | 1 | Controls output diversity (0-1) |
| `chp.maxTokens` | number | 8192 | Maximum output tokens (32-256000) |
| `chp.rememberLastModel` | boolean | true | Remember last used model |
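These can be set together in your settings.json; the values below are arbitrary illustrations, not recommended defaults:

```json
{
  "chp.temperature": 0.2,
  "chp.topP": 1,
  "chp.maxTokens": 16384,
  "chp.rememberLastModel": true
}
```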
Provider-Specific Settings
ZhipuAI
| Setting | Type | Default | Description |
| :------ | :--- | :------ | :---------- |
| `chp.zhipu.search.enableMCP` | boolean | true | Enable MCP SDK mode for web search |
| `chp.zhipu.endpoint` | string | "open.bigmodel.cn" | API endpoint (open.bigmodel.cn or api.z.ai) |
| `chp.zhipu.plan` | string | "coding" | Plan type (coding or normal) |
| `chp.zhipu.thinking` | string | "auto" | Thinking mode (enabled, disabled, auto) |
| `chp.zhipu.clearThinking` | boolean | true | Clear thinking context between turns |
MiniMax
| Setting | Type | Default | Description |
| :------ | :--- | :------ | :---------- |
| `chp.minimax.endpoint` | string | "minimaxi.com" | API endpoint (minimaxi.com or minimax.io) |
Completion Settings
FIM (Fill In the Middle)
| Setting | Type | Default | Description |
| :------ | :--- | :------ | :---------- |
| `chp.fimCompletion.enabled` | boolean | false | Enable FIM completion |
| `chp.fimCompletion.debounceMs` | number | 500 | Debounce delay (50-1000ms) |
| `chp.fimCompletion.timeoutMs` | number | 5000 | Request timeout (1000-30000ms) |
NES (Next Edit Suggestions)
| Setting | Type | Default | Description |
| :------ | :--- | :------ | :---------- |
| `chp.nesCompletion.enabled` | boolean | false | Enable NES completion |
| `chp.nesCompletion.manualOnly` | boolean | false | Only trigger manually (Alt+/) |
| `chp.nesCompletion.debounceMs` | number | 500 | Debounce delay (50-1000ms) |
| `chp.nesCompletion.timeoutMs` | number | 5000 | Request timeout (1000-30000ms) |
Available Models
ZhipuAI (GLM Coding Plan)
| Model | Input | Output | Features |
| :---- | :---- | :----- | :------- |
| GLM-4.5 | 98K | 32K | Tool Calling |
| GLM-4.5-air | 98K | 32K | Tool Calling |
| GLM-4.6 | 229K | 32K | Tool Calling |
| GLM-4.7 | 229K | 32K | Tool Calling |
| GLM-5 | 229K | 32K | Tool Calling |
| GLM-4.7-Flash | 229K | 32K | Free |
MiniMax
| Model | Input | Output | Features |
| :---- | :---- | :----- | :------- |
| MiniMax-M2.5 | 172K | 32K | Thinking, Tool Calling |
| MiniMax-M2.5-highspeed | 172K | 32K | ~100 TPS, Thinking |
| MiniMax-M2.1 | 172K | 32K | Thinking, Tool Calling |
MoonshotAI (Kimi)
| Model | Input | Output | Features |
| :---- | :---- | :----- | :------- |
| Kimi For Coding | 224K | 32K | Tool Calling |
| Kimi-K2-Thinking | 224K | 32K | Thinking, Agentic |
| Kimi-K2-Thinking-Turbo | 224K | 32K | Thinking, Fast |
| Kimi-K2-0905-Preview | 224K | 32K | Agentic Coding |
DeepSeek
| Model | Input | Output | Features |
| :---- | :---- | :----- | :------- |
| DeepSeek-V3.2 | 128K | 16K | Tool Calling |
| DeepSeek-V3.2 Reasoner | 128K | 16K | Thinking, Tool Calling |
Codex (OpenAI)
| Model | Input | Output | Features |
| :---- | :---- | :----- | :------- |
| GPT-5.2 Codex | 344K | 65K | Image Input, Tool Calling |
| GPT-5.3 Codex | 344K | 65K | Image Input, Tool Calling |
| GPT-5.3 Codex (Low) | 344K | 65K | Low Reasoning |
| GPT-5.3 Codex (Medium) | 344K | 65K | Medium Reasoning |
| GPT-5.3 Codex (High) | 344K | 65K | High Reasoning |
Keybindings
| Action | Windows/Linux | macOS |
| :----- | :------------ | :---- |
| Trigger inline suggestion | Alt+/ | Alt+/ |
| Toggle NES manual mode | Shift+Alt+/ | Shift+Alt+/ |
| Attach selection to Copilot | Ctrl+Shift+A | Cmd+Shift+A |
| Insert handle reference | Ctrl+Shift+H | Cmd+Shift+H |
| Insert handle (full path) | Ctrl+Alt+Shift+H | Cmd+Alt+Shift+H |
| Quick switch account | Ctrl+Shift+Q | Cmd+Shift+Q |
Commands Reference
Provider Configuration
| Command | Description |
| :------ | :---------- |
| Aether: Configure ZhipuAI | Set ZhipuAI API key |
| Aether: ZhipuAI Configuration Wizard | Full ZhipuAI setup with MCP mode |
| Aether: Configure MiniMax | Set MiniMax API key |
| Aether: MiniMax Configuration Wizard | Full MiniMax setup |
| Aether: Configure MoonshotAI | Set MoonshotAI API key |
| Aether: MoonshotAI Configuration Wizard | Full MoonshotAI setup |
| Aether: Configure DeepSeek | Set DeepSeek API key |
| Aether: Configure Chutes | Set Chutes API key |
| Aether: Configure Zenmux | Set Zenmux API key |
| Aether: Configure OpenCode | Set OpenCode API key |
| Aether: Configure Blackbox | Set Blackbox API key |
| Aether: Configure Vercel AI | Set Vercel AI API key |
| Aether: Configure Cline | Set Cline API key |
| Aether: Configure Hugging Face | Set Hugging Face API key |
| Aether: Configure Kilo AI | Set Kilo AI API key |
| Aether: Lightning AI Configuration Wizard | Full Lightning AI setup |
| Aether: Configure DeepInfra | Set DeepInfra API key |
| Aether: Configure NVIDIA NIM | Set NVIDIA NIM API key |
| Aether: Configure Mistral AI | Set Mistral AI API key |
| Aether: Configure Ollama Cloud | Set Ollama Cloud API key |
| Aether: Compatible Provider Settings | Configure custom models |
OAuth Authentication
| Command | Description |
| :------ | :---------- |
| Aether: Codex Login | Login to OpenAI Codex |
| Aether: Codex Logout | Logout from Codex |
Account Management
| Command | Description |
| :------ | :---------- |
| Aether: Add Account | Add a new account |
| Aether: Switch Account | Switch to another account |
| Aether: Quick Switch Account | Quick switch with Ctrl+Shift+Q |
| Aether: Remove Account | Remove an account |
| Aether: View All Accounts | List all configured accounts |
Utilities
| Command | Description |
| :------ | :---------- |
| Aether: Toggle NES Manual Trigger Mode | Toggle NES manual mode |
| Aether: Attach Selection to Copilot Chat | Attach selected code to chat |
| Aether: Insert Handle Reference | Insert #file:filename:L1-L100 |
| Aether: Insert Handle Reference with Full Path | Insert #handle:path/to/file:L1-L100 |
| Aether: Open Aether Settings | Open settings page |
Custom Models (Compatible Provider)
Add your own OpenAI or Anthropic compatible models:
- Run Aether: Compatible Provider Settings
- Click "Add Model"
- Configure your model:
```json
{
  "id": "my-custom-model",
  "name": "My Custom Model",
  "baseUrl": "https://api.example.com/v1",
  "apiKey": "your-api-key",
  "model": "model-name",
  "maxInputTokens": 128000,
  "maxOutputTokens": 8192,
  "sdkMode": "openai",
  "capabilities": {
    "toolCalling": true,
    "imageInput": false
  }
}
```
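Before saving an entry, it can help to sanity-check the JSON's shape. The sketch below is a standalone TypeScript validator mirroring the fields of the example above; it is illustrative only and not part of the extension:

```typescript
// Illustrative validator for a Compatible Provider entry (field names mirror
// the example above; this is not the extension's own validation logic).
type CompatibleModel = {
  id: string;
  name: string;
  baseUrl: string;
  apiKey: string;
  model: string;
  maxInputTokens: number;
  maxOutputTokens: number;
  sdkMode: "openai" | "anthropic";
  capabilities?: { toolCalling?: boolean; imageInput?: boolean };
};

function validateModel(entry: Partial<CompatibleModel>): string[] {
  const errors: string[] = [];
  for (const key of ["id", "name", "baseUrl", "apiKey", "model"] as const) {
    const value = entry[key];
    if (typeof value !== "string" || value.length === 0) {
      errors.push(`missing or empty "${key}"`);
    }
  }
  if (entry.sdkMode !== "openai" && entry.sdkMode !== "anthropic") {
    errors.push('"sdkMode" must be "openai" or "anthropic"');
  }
  if (entry.baseUrl && !/^https?:\/\//.test(entry.baseUrl)) {
    errors.push('"baseUrl" should start with http:// or https://');
  }
  return errors; // empty array means the entry looks well-formed
}
```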
Requirements
| Requirement | Version |
| :---------- | :------ |
| VS Code | >= 1.104.0 |
| Node.js | >= 20.0.0 |
| npm | >= 9.0.0 |
| GitHub Copilot Chat | Required (extension dependency) |
Development
Build & Test
```bash
# Install dependencies
npm install

# Build in development mode
npm run compile:dev

# Build for production
npm run compile

# Watch mode
npm run watch

# Run linting
npm run lint

# Format code
npm run format

# Package extension
npm run package
```
Project Structure
```
copilot-helper/
├── src/
│   ├── extension.ts            # Entry point
│   ├── accounts/               # Multi-account management
│   ├── copilot/                # Core Copilot integration
│   ├── providers/              # AI provider implementations
│   │   ├── providerRegistry.ts # Provider registry
│   │   ├── zhipu/              # ZhipuAI provider
│   │   ├── minimax/            # MiniMax provider
│   │   ├── moonshot/           # MoonshotAI provider
│   │   ├── codex/              # OpenAI Codex
│   │   └── ...                 # Other providers
│   ├── tools/                  # Web search tools
│   ├── types/                  # TypeScript definitions
│   ├── ui/                     # Settings pages
│   └── utils/                  # Shared utilities
├── dist/                       # Compiled output
├── package.json                # Extension manifest
└── tsconfig.json               # TypeScript config
```
Credits
Special thanks to these amazing projects:
Get in Touch
Have questions or suggestions? Reach out on Telegram:

License
This project is licensed under the MIT License - see the LICENSE file for details.