ZenCoder AI

Divya Bairavarasu
AI coding assistant. 🧘 The best code flows from a calm mind; ZenCoder keeps you in flow.
Installation
Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter.

ZenCoder

AI coding assistant - Best things in life are almost free :P

Intelligent model routing · Agent mode · BYOK · Streaming


Overview

ZenCoder is an AI coding assistant that connects to the local ZenCoder daemon (zencoderd) and routes your requests to the best available model: local models first, cloud models when the task demands it.

No subscription. No lock-in. Your keys, your costs.


Features

| Feature | Description |
| --- | --- |
| 🤖 Auto-Routing | Dispatches each request to the optimal model (local vs. cloud) |
| 🛠 Agent Mode | Reads files, proposes multi-file edits, iterates in your workspace |
| 💬 Ask Mode | Single-turn Q&A, fast and focused |
| 📋 Plan Mode | Generates a step-by-step implementation plan |
| 📚 Skills | Auto-detects task type: code review, test gen, refactor, debug |
| ⚡ Streaming | Token-by-token streaming across all backends |
| 🔗 Context | Attach files/folders to include as codebase context |
| 📊 Metrics | Token usage, latency, and iteration count per response |

Requirements

ZenCoder requires the zencoderd daemon running locally. Install via the one-line installer:

```shell
# Requires GitHub CLI - authenticate first:
gh auth login

# Then install (auto-detects OS/chip, starts zencoderd as a background service):
curl -fsSL https://raw.githubusercontent.com/divyabairavarasu/zencoder-releases/main/install.sh | bash
```

You also need at least one model available: a local runtime (Ollama, LM Studio) or a configured cloud key.
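Before opening the chat panel, you can confirm the daemon is up by probing the default address from `zenCoder.agentAddress`. This is a minimal sketch: the root path is an assumption (zencoderd may expose a dedicated health route), so any HTTP response here only means something is answering on the port.

```shell
# Probe the default daemon address (zenCoder.agentAddress).
# NOTE: the root path "/" is an assumption; zencoderd may have a
# dedicated health endpoint -- this only checks the port is answering.
ADDR="http://127.0.0.1:7777"
if curl -sS --max-time 2 -o /dev/null "$ADDR" 2>/dev/null; then
  echo "zencoderd reachable at $ADDR"
else
  echo "zencoderd not reachable at $ADDR -- is the service running?"
fi
```

If the daemon is down, re-run the installer above or check your service manager's logs.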


Getting Started

  1. Start zencoderd (see above)
  2. Open the Command Palette (Ctrl+Shift+P / Cmd+Shift+P)
  3. Run ZenCoder: Chat
  4. Select a model from the dropdown
  5. Choose a mode: Agent / Ask / Plan
  6. Type your request and press Enter

Commands

| Command | Description |
| --- | --- |
| ZenCoder: Chat | Open the chat panel |
| ZenCoder: Explain Selection | Explain selected code |
| ZenCoder: New Chat Session | Start a fresh session |
| ZenCoder: Export Chat Session | Save session as Markdown |
| ZenCoder: Clear Chat | Clear the current session |
| ZenCoder: Detect Local Models | Refresh the model list |
| ZenCoder: Check Service Status | Verify daemon connectivity |

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| Enter | Send message |
| Shift+Enter | New line in input |
| Ctrl+U / Cmd+U | Clear chat |

Chat Modes

  • Agent: Full agentic loop. ZenCoder reads workspace files, proposes edits, and iterates. Requires an open workspace folder.
  • Ask: Standard Q&A. Fast and focused.
  • Plan: Generates a structured implementation plan without writing code.

Routing Modes

  • Auto: ZenCoder's router picks local vs. cloud based on the task.
  • Local: Only use locally running models (Ollama, LM Studio).
  • Cloud: Route to cloud providers only.
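For the Local routing mode to have anything to route to, a local runtime must be serving models. With Ollama, for example, its standard REST API lists what is installed. This sketch assumes Ollama's default port 11434; LM Studio exposes a different, OpenAI-compatible endpoint instead.

```shell
# List models served by a local Ollama instance (default port 11434).
# /api/tags is Ollama's documented endpoint for installed models.
curl -s --max-time 2 http://localhost:11434/api/tags \
  || echo "Ollama is not running on the default port"
```

After pulling or removing models, run ZenCoder: Detect Local Models to refresh the extension's model list.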

Settings

| Setting | Default | Description |
| --- | --- | --- |
| `zenCoder.agentAddress` | `http://127.0.0.1:7777` | Daemon address |
| `zenCoder.maxMessages` | `100` | Chat history retained per session |
| `zenCoder.maxContextBytes` | `200000` | Max bytes of file context per request |
| `zenCoder.contextExtensions` | `.ts .tsx .js .go .py ...` | Extensions scanned for folder context |
| `zenCoder.contextExcludeFolders` | `node_modules dist .git` | Folders excluded from context scan |
| `zenCoder.includeWorkspaceByDefault` | `true` | Auto-include current workspace as context |
| `zenCoder.autoDetectSkills` | `true` | Auto-apply AI skills based on conversation |
| `zenCoder.lastModel` | `""` | Persists last selected model per workspace |
| `zenCoder.favoriteModels` | `[]` | Pinned models for quick access |
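These options live in VS Code's settings.json. A minimal sketch with illustrative (not recommended) values, assuming the types implied by the defaults above:

```json
{
  // Illustrative only: point the extension at a non-default daemon port
  "zenCoder.agentAddress": "http://127.0.0.1:8888",
  // Keep less history and trim context for smaller local models
  "zenCoder.maxMessages": 50,
  "zenCoder.maxContextBytes": 100000,
  // Opt out of auto-including the workspace as context
  "zenCoder.includeWorkspaceByDefault": false
}
```

VS Code parses settings.json as JSONC, so the comments above are valid there.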

Chat Sessions

Sessions are stored under .zencoder/chat/ in your workspace.

  • New Session: Start fresh; previous messages are preserved in history.
  • Export Session: Writes a .md file to .zencoder/chat/exports/.

Development

```shell
cd vscode
npm install
npm run build
# Press F5 in VS Code to launch the Extension Development Host
```

Run tests:

```shell
npm test           # Unit tests
npm run test:e2e   # End-to-end tests (requires daemon)
```

Privacy

ZenCoder sends your messages and file context to the model you select. With local models (Ollama, LM Studio), nothing leaves your machine. With cloud models, your data is subject to the respective provider's terms. ZenCoder itself does not collect or transmit any data.


Made with 🧘 - "Best things in life are almost free :P"
