# Jinn Code

Multi-agent AI coding assistant for VS Code — powered by AWS Bedrock and local models.

Built by Jinn Systems · Sharjah, UAE
## What is Jinn Code?
Jinn Code is a VS Code extension that brings a four-agent AI pipeline directly into your sidebar. Unlike single-model copilots, Jinn Code routes each part of a coding task to the model best suited for it — a Planner to break down the problem, a Thinker to reason through complexity, a Coder to write the implementation, and a Reviewer to catch issues before they hit your files.
It works with AWS Bedrock (Claude, Llama, Nova, DeepSeek) and any local model server (Ollama, LM Studio, or any OpenAI-compatible endpoint) — and you can mix providers per agent.
## Features
- Three modes — Ask (Q&A), Plan (step-by-step breakdown), Agent (full pipeline with file writes)
- Four-agent pipeline — Thinker · Planner · Coder · Reviewer, each independently configurable
- Inline diff — AI-generated code appears as Keep/Discard hunks in your editor, Cursor-style
- Kernel Scheduler — route tasks to models by preset (Performance / Balanced / Custom)
- Live model fetch — pull your available Bedrock models or local server models directly into Settings
- Codebase indexing — keyword + semantic search over your workspace for relevant context injection
- MCP support — connect any Model Context Protocol server for extended tool use
- Export — save chat sessions as `.txt` or `.json`; copy debug logs for sharing
- Cross-platform — Windows, macOS, Linux
## Quick Start
### Requirements
- VS Code 1.85 or later
- Node.js 18 or later
- An AWS account with Bedrock access and/or a local model server (Ollama, LM Studio)
### Install
Search "Jinn Code" in the VS Code Extensions marketplace and click Install.
Or install from the command line:

```bash
code --install-extension jinn-systems.jinn-code
```
### Open the panel
Press `Ctrl+Shift+J` (Windows/Linux) or `Cmd+Shift+J` (macOS), or click the ✦ icon in the Activity Bar.
## Setup
### AWS Bedrock
- Open the panel → click the 🔑 key icon in the header
- Enter your AWS Access Key ID and Secret Access Key
- Open Settings (⚙ gear icon) → AWS Bedrock tab
- Set your region (e.g. `us-east-1`) and Auth Mode
- Click Fetch Models to pull your available inference profiles
- Go to General tab → assign models to each agent
Auth modes:
| Mode | When to use |
|------|-------------|
| IAM / AWS CLI | Local dev with `~/.aws/credentials` or instance role |
| Access Key + Secret | Explicit credentials entered via the UI |
| Endpoint URL | Bedrock Gateway or custom proxy |
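If you choose IAM / AWS CLI mode, credentials come from the standard AWS shared credentials file (assuming the extension relies on the default AWS SDK credential chain). A minimal `~/.aws/credentials` with placeholder values:

```ini
; ~/.aws/credentials — values below are placeholders, not real keys
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

The matching region can live in `~/.aws/config` or be set in the extension's AWS Bedrock tab.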
### Local Models (Ollama / LM Studio)
```bash
# Ollama
ollama pull llama3.1:8b
ollama serve

# LM Studio
# Start the local server from the LM Studio UI (default port 1234)
```
Then in Settings → Local Models tab:
- Enter your endpoint URL (e.g. `http://localhost:11434`)
- Click Fetch Models — your running models appear in a dropdown
- Select a model and click Save Local Config
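If Fetch Models comes back empty, you can query the server directly. Ollama lists installed models at `GET /api/tags`; a small sketch that parses that response (the response shape is reduced to just the `name` field used here, and the sample model names are illustrative):

```python
import json
from urllib.request import urlopen

def parse_model_names(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    return [model["name"] for model in payload.get("models", [])]

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch and parse the installed-model list from a running Ollama server."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))

# Example response body in the shape /api/tags returns
sample = {"models": [{"name": "llama3.1:8b"}, {"name": "qwen2.5-coder:7b"}]}
print(parse_model_names(sample))  # → ['llama3.1:8b', 'qwen2.5-coder:7b']
```

If this prints nothing, the server has no models pulled; if it fails to connect, check the port and that `ollama serve` (or the LM Studio server) is running.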
## Modes
### Ask
Single-agent Q&A. The Thinker reasons through your question with full workspace context. No file writes. Best for understanding code, debugging, and explaining concepts.
### Plan
The Planner breaks your task into numbered steps. You review the plan before anything is written. Use Execute Plan to hand it to the Agent pipeline, Edit to refine in the prompt box, or Discard to start over.
### Agent
Full pipeline. The Planner outlines the task, the Thinker reasons through complexity (triggered automatically for refactors, architecture changes, and migrations), the Coder writes the implementation, and the Reviewer checks it. Code appears as inline diffs in your editor — accept or discard each hunk individually or use Keep All / Discard All.
## Agent Roles
| Agent | Default model | Purpose |
|-------|---------------|---------|
| Thinker | Claude Sonnet | Deep reasoning, chain-of-thought analysis |
| Planner | Claude Haiku | Task decomposition, step sequencing |
| Coder | Claude Sonnet | Code generation, file editing |
| Reviewer | Claude Haiku | Correctness, edge cases, security check |
Each agent can be independently assigned any provider and model via Settings → General.
## Kernel Scheduler
The Kernel Scheduler decides which model handles each task type. Switch presets in Settings → Kernel:
| Preset | Description |
|--------|-------------|
| Performance | All tasks use Claude Sonnet 4.6. Maximum quality. |
| Balanced | Heavy tasks (Thinker, Coder) use Sonnet; fast tasks (Planner, Reviewer) use Haiku. Best value. |
| Custom | Assign any model to any task manually. |
Changes take effect on the next message — no reload required.
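Conceptually, the Custom preset is a per-task model map. As an illustration only — the `jinnCode.kernel.*` keys below are hypothetical, and the real assignments are made in Settings → Kernel, not hand-edited:

```jsonc
// Hypothetical sketch of a Custom kernel mapping; key names and model IDs are illustrative
"jinnCode.kernel.preset": "custom",
"jinnCode.kernel.assignments": {
  "thinker": "anthropic.claude-sonnet",
  "planner": "ollama/llama3.1:8b",
  "coder": "anthropic.claude-sonnet",
  "reviewer": "anthropic.claude-haiku"
}
```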
## Inline Diff (Keep / Discard)
When the Coder writes to an existing file, changes appear as highlighted hunks in your editor:
- Green lines — AI's new content
- Red ghost text — what was there before
- ✔ Keep / ✖ Discard CodeLens buttons above each hunk
- Keep All / Discard All bar above the chat input
Clicking Keep or Discard automatically saves the file — no manual Ctrl+S needed.
## Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `Ctrl+Shift+J` / `Cmd+Shift+J` | Open Jinn Code panel |
## MCP Servers
Jinn Code supports the Model Context Protocol. Add servers via `settings.json`:
"jinnCode.mcp": [
{
"name": "filesystem",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
},
{
"name": "git",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-git"]
}
]
Servers start automatically when the extension activates.
## Export & Logs
Chat export — click 📤 in the header:
- Copy to Clipboard — paste full chat as formatted text
- Save as Text (.txt) — timestamped text file
- Save as JSON (.json) — structured export with agent names, timing, and timestamps
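The JSON export lends itself to post-processing with ordinary tooling. A sketch that tallies messages per agent — note the `agent`, `durationMs`, and `messages` field names are assumptions about the export schema, not documented keys:

```python
from collections import Counter

def messages_per_agent(export: dict) -> Counter:
    """Count exported chat messages by agent name (export schema assumed)."""
    return Counter(msg.get("agent", "user") for msg in export.get("messages", []))

# Illustrative export shaped like the assumed schema
sample = {
    "messages": [
        {"agent": "Planner", "durationMs": 812, "text": "1. ..."},
        {"agent": "Coder", "durationMs": 4301, "text": "..."},
        {"agent": "Coder", "durationMs": 1250, "text": "..."},
    ]
}
print(messages_per_agent(sample))  # Counter({'Coder': 2, 'Planner': 1})
```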
Log panel — click 📄 in the header:
- Filter by level (Error / Warning / Info / Debug) or search by keyword
- 📋 Copy — copies a debug summary (errors first, then last 30 entries) for sharing
- Export logs as `.txt` or `.json`
- Red dot badge appears on the log icon when an error occurs
## Requirements
- VS Code `^1.85.0`
- Node.js 18+ (for `fetch` and `AbortSignal` builtins)
- AWS account with Bedrock model access enabled, or a running local model server
## Privacy
Credentials are stored in VS Code's SecretStorage and never written to `settings.json` or disk. No telemetry is collected by this extension.
## License
MIT © 2026 Jinn Systems