NVIDIA NIM Provider

by Hidenobu Nagai

VS Code extension that adds an NVIDIA NIM provider to Copilot Chat.

Requirements

  • VS Code 1.104.0 or later
  • GitHub Copilot extension installed and active
  • An NVIDIA NIM API key from build.nvidia.com/models

Installation

From Source

  1. Clone this repository.
  2. Run bun install --ignore-scripts && bun run compile.
  3. Press F5 in VS Code to launch the Extension Development Host.

From VSIX

  1. Run bun install --ignore-scripts && bun run package:vsix.
  2. Install the generated .vsix file via the Extensions view (Install from VSIX...).

Setup

  1. Open Copilot Chat and open the model picker.
  2. Select Manage Models, then add/configure NVIDIA NIM.
  3. Paste the API key obtained from build.nvidia.com/models.
  4. Select one of the NVIDIA NIM models returned by your account.

You can also run NVIDIA NIM: Manage NVIDIA NIM API Key from the Command Palette. The extension will migrate that key into VS Code's language model provider group so the model picker can resolve NVIDIA NIM models. The VS Code model settings flow is recommended for new setups.

Supported Models

The extension dynamically fetches available models from https://integrate.api.nvidia.com/v1/models. It does not ship a hardcoded fallback model catalog; the Copilot Chat model picker shows the models returned by your NVIDIA NIM account.
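Assuming the /v1/models endpoint returns an OpenAI-compatible list shape (`{ "object": "list", "data": [ { "id": ... }, ... ] }`), extracting the model ids the picker shows can be sketched as follows; `listModelIds` and the interfaces are illustrative names, not the extension's actual API:

```typescript
// Minimal sketch: turn a /v1/models response into a list of model ids.
// The OpenAI-compatible response shape here is an assumption for illustration.
interface ModelEntry {
  id: string;
  [key: string]: unknown; // other metadata fields are passed through untouched
}

interface ModelListResponse {
  data: ModelEntry[];
}

function listModelIds(response: ModelListResponse): string[] {
  return response.data.map((model) => model.id);
}
```

In the extension this response would be fetched with an `Authorization: Bearer <API key>` header; since nothing is hardcoded, the resulting list is exactly what your NVIDIA NIM account returns.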

When NVIDIA's /models response omits tool-calling capability metadata, chat models are treated as tool-capable so they remain selectable in Copilot Chat Agent mode. Models that explicitly report tool_calling: false are still treated as non-tool models.
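The defaulting rule above can be sketched as a single predicate; the shape of the `capabilities` object is an assumption for illustration, but the logic mirrors the described behavior:

```typescript
// A chat model counts as tool-capable unless its metadata explicitly
// reports tool_calling: false. Missing metadata defaults to tool-capable
// so the model stays selectable in Copilot Chat Agent mode.
interface ModelCapabilities {
  tool_calling?: boolean;
}

function isToolCapable(capabilities: ModelCapabilities): boolean {
  return capabilities.tool_calling !== false;
}
```

This err-on-the-side-of-capable default matters because hiding a model from Agent mode is a worse failure than letting a non-tool model attempt (and refuse) a tool call.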

Usage

  1. Open Copilot Chat (Cmd/Ctrl + Alt + I).
  2. Select NVIDIA NIM from the provider selector.
  3. Choose one of the dynamically discovered NVIDIA NIM models and start chatting.
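Behind these steps, chat completion requests go to https://integrate.api.nvidia.com/v1 (see Privacy below). A hedged sketch of an OpenAI-style chat completions request body follows; `buildChatRequest` is an illustrative helper, not the extension's code, and the streaming flag is an assumption:

```typescript
// Illustrative OpenAI-style chat completions request body, as sent to
// the /v1/chat/completions endpoint on integrate.api.nvidia.com.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;      // one of the ids discovered via /v1/models
  messages: ChatMessage[];
  stream: boolean;    // assumed: chat UIs typically stream tokens
}

function buildChatRequest(model: string, messages: ChatMessage[]): ChatRequest {
  return { model, messages, stream: true };
}
```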

Development

bun install --ignore-scripts
bun run compile
bun run lint
bun run test -- --runInBand

Press F5 in VS Code to launch the Extension Development Host.

Available Scripts

  • bun run compile – compile the TypeScript sources
  • bun run watch – compile with file watching
  • bun run test – run the test suite
  • bun run lint – run ESLint checks
  • bun run lint:fix – auto-fix ESLint issues
  • bun run format – format with Prettier
  • bun run package:vsix – build the VSIX package

Marketplace Packaging

bun run package:vsix

The command above produces a .vsix file that can be uploaded through the VS Code Marketplace publisher portal.

Privacy

  • Your API key is stored securely through VS Code's language model provider configuration and, for legacy command-palette setup, VS Code SecretStorage.
  • Chat completions and model discovery requests are sent to https://integrate.api.nvidia.com/v1.