Select one of the NVIDIA NIM models returned by your account.
You can also run NVIDIA NIM: Manage NVIDIA NIM API Key from the Command Palette. The extension
will migrate that key into VS Code's language model provider group so the model picker can resolve
NVIDIA NIM models. The VS Code model settings flow is recommended for new setups.
Supported Models
The extension dynamically fetches available models from https://integrate.api.nvidia.com/v1/models.
It does not ship a hardcoded fallback model catalog; the Copilot Chat model picker shows the models
returned by your NVIDIA NIM account.
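The discovery flow can be sketched as follows. This is a minimal illustration, not the extension's actual implementation; it assumes the endpoint returns the common OpenAI-compatible `/v1/models` shape (`{ data: [{ id: string }, ...] }`), and the function names are hypothetical.

```typescript
interface ModelsResponse {
  data: { id: string }[];
}

// Pull the model ids out of an OpenAI-compatible /v1/models payload.
function extractModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// Fetch the live catalog; requires a valid NVIDIA NIM API key.
async function listNimModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://integrate.api.nvidia.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Model discovery failed: HTTP ${res.status}`);
  }
  return extractModelIds((await res.json()) as ModelsResponse);
}
```

Because the catalog is fetched per account, two users of the extension may see different model lists in the picker.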
When NVIDIA's /models response omits tool-calling capability metadata, chat models are treated as
tool-capable so they remain selectable in Copilot Chat Agent mode. Models that explicitly report
tool_calling: false are treated as non-tool models.
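The fallback rule above amounts to "absent metadata means capable, only an explicit false opts out." A minimal sketch, assuming a simplified model record with the `tool_calling` field named in this README:

```typescript
// Simplified model record; real /models entries carry more fields.
interface NimModel {
  id: string;
  tool_calling?: boolean; // may be absent from the /models response
}

// A model is tool-capable unless it explicitly reports tool_calling: false.
function isToolCapable(model: NimModel): boolean {
  return model.tool_calling !== false;
}
```

This keeps models with incomplete metadata usable in Agent mode rather than silently hiding them.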
Usage
Open Copilot Chat (Cmd/Ctrl + Alt + I).
Select NVIDIA NIM from the provider selector.
Choose one of the dynamically discovered NVIDIA NIM models and start chatting.
Development
bun install --ignore-scripts
bun run compile
bun run lint
bun run test -- --runInBand
Press F5 in VS Code to launch the Extension Development Host.
Available Scripts
bun run compile – compile the TypeScript sources
bun run watch – compile with file-change watching
bun run test – run the test suite
bun run lint – run ESLint checks
bun run lint:fix – apply ESLint auto-fixes
bun run format – format with Prettier
bun run package:vsix – build the VSIX package
Marketplace Packaging
bun run package:vsix
The command above produces a .vsix file that can be uploaded to the VS Code Marketplace publisher portal.
Privacy
Your API key is stored securely through VS Code's language model provider configuration and, for
legacy command-palette setup, VS Code SecretStorage.
Chat completions and model discovery requests are sent to https://integrate.api.nvidia.com/v1.