Sidekick - AI Code Assistant

By Lydia | Free

A starter VS Code extension with a configurable LLM chat panel.
Sidekick (VS Code Extension)

This is a minimal first-step scaffold for a Copilot/Cline-like extension:

  • VS Code extension project initialization
  • Configurable LLM settings
  • Chat panel UI in a Webview
  • Round-trip chat requests to a Chat Completions-compatible API
  • Built-in tool-calling loop with local tool execution

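The features above map onto standard VS Code contribution points declared in package.json. The sketch below is illustrative, not the repository's actual manifest: the command IDs (`sidekick.openChat`, `sidekick.configureModel`) and defaults are assumptions inferred from the commands and settings documented later in this README.

```json
{
  "contributes": {
    "commands": [
      { "command": "sidekick.openChat", "title": "Sidekick: Open Chat" },
      { "command": "sidekick.configureModel", "title": "Sidekick: Configure Model" }
    ],
    "configuration": {
      "title": "Sidekick",
      "properties": {
        "sidekick.apiBaseUrl": { "type": "string", "default": "https://api.openai.com/v1" },
        "sidekick.apiKey": { "type": "string", "default": "" },
        "sidekick.model": { "type": "string", "default": "gpt-4o-mini" }
      }
    }
  }
}
```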
1) Install dependencies

npm install

2) Build the extension

npm run compile

3) Run in Extension Development Host

  1. Open this project in VS Code.
  2. Press F5 (or run Run Sidekick in Debug panel).
  3. In the new Extension Development Host window, run the command Sidekick: Open Chat.


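The F5 flow relies on a launch configuration in .vscode/launch.json. A typical one for an extension host looks like the following; the exact contents of this repository's launch config may differ, and the preLaunchTask name is an assumption based on the compile step above.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Run Sidekick",
      "type": "extensionHost",
      "request": "launch",
      "args": ["--extensionDevelopmentPath=${workspaceFolder}"],
      "preLaunchTask": "npm: compile"
    }
  ]
}
```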
4) Configure your model

You can configure the model in two ways:

  1. Run command Sidekick: Configure Model (recommended).
  2. Or edit Sidekick settings manually.

In VS Code settings, search Sidekick and set:

  • sidekick.apiBaseUrl (default: https://api.openai.com/v1)
  • sidekick.apiKey
  • sidekick.model
  • sidekick.promptCacheKey (required by some routed/custom models)
  • sidekick.extraHeadersJson (JSON object, optional)
  • sidekick.extraBodyJson (JSON object, optional, for provider-specific required params)
  • sidekick.apiMode (auto | chatCompletions | responses)
  • sidekick.systemPrompt

You can also put them in settings.json:

{
  "sidekick.apiBaseUrl": "https://api.openai.com/v1",
  "sidekick.apiKey": "<YOUR_API_KEY>",
  "sidekick.model": "gpt-4o-mini",
  "sidekick.promptCacheKey": "<OPTIONAL_CACHE_KEY>",
  "sidekick.extraHeadersJson": "{}",
  "sidekick.extraBodyJson": "{}",
  "sidekick.apiMode": "auto",
  "sidekick.systemPrompt": "You are a helpful coding assistant."
}
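To illustrate how the `extraHeadersJson` and `extraBodyJson` settings could feed into a request, here is a minimal TypeScript sketch. The function name, settings interface, and merge order are my assumptions, not Sidekick's actual source; note that, as described in the Notes below, no `temperature` is included by default.

```typescript
// Sketch: folding extraHeadersJson / extraBodyJson settings into a
// Chat Completions request. Names and shapes are illustrative.

interface SidekickSettings {
  apiBaseUrl: string;
  apiKey: string;
  model: string;
  extraHeadersJson: string; // JSON object as a string, e.g. "{}"
  extraBodyJson: string;    // JSON object as a string, e.g. "{}"
}

function buildChatRequest(settings: SidekickSettings, messages: object[]) {
  // Parse the optional JSON strings; fall back to empty objects on bad input.
  const parse = (s: string): Record<string, unknown> => {
    try { return JSON.parse(s || "{}"); } catch { return {}; }
  };
  return {
    url: `${settings.apiBaseUrl}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${settings.apiKey}`,
      ...parse(settings.extraHeadersJson), // provider-specific headers win on conflict
    },
    body: {
      model: settings.model,
      messages,
      stream: true, // streamed via SSE; see Notes
      ...parse(settings.extraBodyJson), // provider-specific required params
    },
  };
}
```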

Notes

  • Current request format targets OpenAI-compatible /chat/completions.
  • Sidekick now supports SSE streaming and sends stream: true for providers that require streaming mode.
  • Sidekick no longer sends temperature by default to avoid strict-provider validation failures.
  • Sidekick now exposes functions.* and multi_tool_use.parallel tools to the model, executes tool calls locally, and feeds results back into the next model turn.
  • This is the first-step foundation; next you can add context injection and code actions.
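The SSE streaming mentioned above follows the standard OpenAI-compatible wire format: lines of `data: <json>` carrying token deltas, terminated by a `data: [DONE]` sentinel. The helper below is a minimal sketch of parsing that format; the function name is mine, not Sidekick's.

```typescript
// Accumulate the assistant text from a raw SSE response body.
function collectSseContent(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and keep-alive comments
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (typeof delta === "string") text += delta; // append this token delta
  }
  return text;
}
```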
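The tool-calling loop described in the Notes (execute tool calls locally, feed results back into the next model turn) can be sketched as follows. This is a simplified illustration, not Sidekick's implementation: `askModel` stands in for the real API round trip, and the message shapes are abbreviated.

```typescript
type Msg = { role: string; content: string; tool_call_id?: string };
type ToolCall = { id: string; name: string; arguments: string };
type ModelTurn = { content?: string; toolCalls?: ToolCall[] };

// Ask the model repeatedly; run any requested tools locally and append their
// results as `tool` messages until the model replies with plain content.
async function runToolLoop(
  askModel: (msgs: Msg[]) => Promise<ModelTurn>,
  tools: Record<string, (args: any) => Promise<string>>,
  messages: Msg[],
  maxTurns = 5,
): Promise<string> {
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await askModel(messages);
    if (!reply.toolCalls?.length) return reply.content ?? ""; // final answer
    for (const call of reply.toolCalls) {
      // Execute the requested tool locally and feed the result back.
      const result = await tools[call.name](JSON.parse(call.arguments));
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
  return ""; // gave up after maxTurns
}
```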