# Altru Coder

An open-source AI coding agent for serious engineering work.
Altru Coder is a VS Code extension, CLI, local agent server, and TypeScript SDK built for agentic development. It can generate code from natural language, inspect and edit files, run terminal commands, stream tool results, manage model providers, and keep coding sessions organized around the active workspace.
This repository is a branded fork of OpenCode with Altru Coder-specific extension UI, provider onboarding, local model saving, prompt enhancement, mode behavior, and model selection work.
Need setup help or model-provider support? Join the WhatsApp community at https://chat.whatsapp.com/J0XfY7LF9RDDNQN54CQOTY or email altru.coder.ai@gmail.com.
## Quick Start

```shell
git clone https://github.com/kartkbhalodiya/Altru-Coder.git
cd Altru-Coder
bun install
bun run extension
```

For a faster relaunch after one successful build:

```shell
bun run extension -- --no-build
```
## Core Features

| Area | Capability |
| --- | --- |
| AI chat coding | Ask for fixes, refactors, file creation, project explanations, and implementation plans from the VS Code sidebar. |
| Local agent server | Runs a bundled `altru-coder serve` process and streams HTTP/SSE events into the extension. |
| Multi-mode workflow | Switch between Code, Ask, Debug, Orchestrator, and Plan modes with mode-specific prompt behavior. |
| Provider setup | Add OpenAI-compatible providers, save API keys locally, fetch remote models, and expose saved models in chat. |
| Model picker | Keeps the chat model selector focused on locally saved models instead of flooding users with every remote model. |
| Prompt enhancement | Improves rough prompts through the active model before sending work to the agent. |
| Tool execution | Lets the agent read files, edit code, run shell commands, inspect diagnostics, search the web, and summarize changes. |
| Agent Manager | Coordinates isolated sessions and worktrees for larger engineering tasks. |
| Mermaid support | Repairs common flowchart label issues so Mermaid diagrams render more reliably in chat. |
| Glass UI | Uses a clean glass-style webview surface with rounded chat boxes, visible borders, provider logos, and a polished welcome state. |
| SDK access | Ships generated TypeScript clients for the same local server API used by the extension. |
| Local-first workflow | Works from your machine and your workspace; remote services are only used for the model/provider you configure. |
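The Mermaid support feature above can be pictured as a small repair pass over diagram source. The sketch below is illustrative only, not the extension's actual implementation: it quotes node labels containing characters that commonly break flowchart parsing, such as parentheses.

```typescript
// Hypothetical sketch of a Mermaid label-repair pass (NOT the real
// Altru Coder code). Node labels like `A[read(file)]` fail to parse
// in Mermaid flowcharts unless quoted, so unsafe labels get wrapped
// in double quotes.
const UNSAFE_LABEL = /[(){};#&]/;

function repairMermaidLabels(diagram: string): string {
  // Match node definitions of the form `Id[label]` and quote the label
  // when it contains characters Mermaid's parser chokes on.
  return diagram.replace(
    /(\w+)\[([^\]"]+)\]/g,
    (match, id: string, label: string) =>
      UNSAFE_LABEL.test(label) ? `${id}["${label}"]` : match,
  );
}
```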
Altru Coder's agent runtime is useful because it can observe, reason, and act through explicit tools instead of only returning text.
| Tool | What it is used for |
| --- | --- |
| `read` | Read source files with line-aware context. |
| `write` | Create or replace files when the user asks for generated output. |
| `edit` | Apply focused edits to existing files. |
| `apply_patch` | Apply structured code patches. |
| `bash` | Run shell commands, scripts, package managers, and local checks. |
| `glob` | Find files by path patterns. |
| `grep` | Search text and symbols across the workspace. |
| `warpgrep` | Run faster codebase search flows where available. |
| `diagnostics` | Inspect editor/compiler diagnostics. |
| `lsp` | Query language-server information. |
| `webfetch` | Fetch a specific web page when context is required. |
| `websearch` | Search the web when current external information is needed. |
| `task` | Delegate structured sub-work inside the agent runtime. |
| `todo` | Track multi-step work during longer tasks. |
| `plan` | Enter or exit planning flows. |
| `question` | Ask the user for a required decision. |
| `suggest` | Present selectable suggestions inside the UI. |
| `skill` | Load reusable local guidance for specialized work. |
| `recall` | Retrieve relevant remembered context. |
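Conceptually, each tool above is a named contract the agent runtime can invoke. The real definitions live in `packages/plugin`; the sketch below is an assumption-laden illustration of what such a contract could look like, with a toy in-memory stand-in for the `glob` tool.

```typescript
// Illustrative only: the actual tool interfaces ship in packages/plugin,
// and these names and fields are assumptions, not the real API.
interface ToolDefinition<A, R> {
  name: string;
  description: string;
  execute: (args: A) => Promise<R>;
}

// A toy "glob"-style tool that filters an in-memory file list instead
// of walking the real workspace.
const globTool: ToolDefinition<{ pattern: RegExp; files: string[] }, string[]> = {
  name: "glob",
  description: "Find files by path patterns.",
  execute: async ({ pattern, files }) => files.filter((f) => pattern.test(f)),
};
```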
## Product Surfaces

| Surface | Description |
| --- | --- |
| VS Code sidebar | Primary chat interface with model picker, modes, settings, and tool output. |
| Settings UI | Provider setup, custom model saving, API-key entry, and local model preferences. |
| CLI TUI | Terminal-first agent interface for developers who prefer command-line workflows. |
| Local server | Headless API process used by the extension and SDK. |
| TypeScript SDK | Programmatic client for integrations and automation. |
| Agent Manager | Multi-session orchestration surface for worktree-backed work. |
## System Map

The diagram uses one muted color family so the data flow stays readable instead of decorative.

```mermaid
%%{init: {"theme":"base","themeVariables":{"primaryColor":"#f3f4f6","primaryTextColor":"#111827","primaryBorderColor":"#6b7280","lineColor":"#6b7280","secondaryColor":"#f9fafb","tertiaryColor":"#ffffff","fontFamily":"Inter,Segoe UI,Arial"}}}%%
flowchart LR
  U["Developer"] --> V["VS Code Sidebar"]
  U --> C["CLI TUI"]
  V --> S["Local altru-coder serve"]
  C --> S
  S --> A["Agent Runtime"]
  A --> T["Tools: read, edit, bash, git, diagnostics"]
  A --> M["Model Provider"]
  M --> R["Streaming response"]
  T --> R
  R --> V
  S --> K["Generated SDK"]
  S --> G["Gateway and telemetry packages"]
```
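The "streaming response" edge in the map is carried over server-sent events: the local server emits `data:` frames that the sidebar and SDK consume. As a minimal sketch of that wire format (the extension's real event handling may differ), a chunk of SSE text can be split into JSON events like this:

```typescript
// Minimal SSE frame parser sketch (illustrative; event payload shapes
// are assumptions). Per the SSE wire format, each event is one or more
// `data:` lines terminated by a blank line.
function parseSseChunk(chunk: string): unknown[] {
  return chunk
    .split("\n\n") // frames are separated by a blank line
    .flatMap((frame) => frame.split("\n"))
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)));
}
```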
## Repository Layout

| Path | Purpose |
| --- | --- |
| `packages/opencode` | Core CLI, local HTTP and SSE server, tools, prompts, sessions, and agent runtime. |
| `packages/altru-coder-vscode` | VS Code extension, bundled CLI launcher, sidebar webview, settings UI, model picker, and Agent Manager. |
| `packages/sdk/js` | Generated TypeScript SDK for the local server API. |
| `packages/altru-coder-ui` | Shared SolidJS UI components used by Altru Coder views. |
| `packages/altru-coder-gateway` | Gateway-facing auth and provider routing package. |
| `packages/altru-coder-telemetry` | PostHog and OpenTelemetry integration. |
| `packages/altru-coder-i18n` | Translation and localization utilities. |
| `packages/plugin` | Plugin and tool interface definitions. |
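The generated SDK in `packages/sdk/js` talks to the same local server the extension uses. As an illustration only, a client call can be thought of as building an HTTP request against that server; the endpoint path, port, and body fields below are assumptions, not the real API surface — use the generated SDK for actual integrations.

```typescript
// Hypothetical request builder for the local server API. The path
// `/session/:id/prompt`, the port, and the body shape are all
// assumptions for illustration.
function buildPromptRequest(baseUrl: string, sessionId: string, text: string) {
  return {
    url: `${baseUrl}/session/${encodeURIComponent(sessionId)}/prompt`,
    init: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ parts: [{ type: "text", text }] }),
    },
  };
}

// A real client would then do: fetch(req.url, req.init)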
## Provider Logo Matrix

These are the provider logo assets currently shipped in the VS Code extension and used by the model setup flow.
| Icon | Provider | Provider type |
| --- | --- | --- |
|  | Alibaba DashScope | Hosted OpenAI-compatible |
|  | Anthropic | Hosted model provider |
|  | Ask Sage | Hosted model provider |
|  | Azure OpenAI | Hosted enterprise endpoint |
|  | Cerebras | Hosted inference |
|  | ClawRouter | Local router |
|  | Cohere | Hosted OpenAI-compatible |
|  | CometAPI | Hosted router |
|  | DeepSeek | Hosted reasoning and chat |
|  | Docker Model Runner | Local model runtime |
|  | Fireworks AI | Hosted inference |
|  | Function Network | Hosted OpenAI-compatible |
|  | Gemini | Google model family |
|  | Google | Hosted Gemini endpoint |
|  | Groq | Hosted low-latency inference |
|  | Inception Labs | Hosted OpenAI-compatible |
|  | Lemonade | Local model runtime |
|  | llama.cpp | Local model runtime |
|  | llamacpp | Local model runtime |
|  | Llamafile | Local model runtime |
|  | LM Studio | Local model runtime |
|  | Mimo | Hosted OpenAI-compatible |
|  | Mistral | Hosted model provider |
|  | Moonshot AI | Hosted Kimi models |
|  | Morph | Hosted apply models |
|  | NCompass | Hosted OpenAI-compatible |
|  | Nebius AI Studio | Hosted inference |
|  | Novita AI | Hosted OpenAI-compatible |
|  | NVIDIA NIM | Hosted NIM endpoint |
|  | Ollama | Local model runtime |
|  | OpenAI | Hosted model provider |
|  | OpenRouter | Hosted router |
|  | OVHcloud | Hosted OpenAI-compatible |
|  | Qwen | Alibaba model family |
|  | Replicate | Hosted model provider |
|  | SambaNova | Hosted inference |
|  | Scaleway | Hosted generative APIs |
|  | SiliconFlow | Hosted OpenAI-compatible |
|  | Tensorix | Hosted OpenAI-compatible |
|  | Together AI | Hosted open models |
|  | Venice | Hosted OpenAI-compatible |
|  | xAI | Hosted Grok models |
| Z.ai | Z.ai | Hosted GLM models |
## Built-In Model Presets

The Add Model dialog ships with provider presets so a user can select a provider, enter an API key, optionally fetch remote models, and save the chosen model locally. Saved models are the ones shown in the chat model picker.
| Provider | Seed models |
| --- | --- |
| OpenAI | GPT-5 Codex, GPT-5.2, GPT-5, GPT-4.1, o3 |
| OpenRouter | OpenRouter Auto, Claude Sonnet 4.5, GPT-5, Gemini 2.5 Pro, DeepSeek R1 |
| ClawRouter | BlockRun Free, Auto Router |
| CometAPI | GPT-5 Chat Latest, Claude Sonnet 4.6, Gemini 3.1 Pro Preview, DeepSeek V3.1, Qwen3 Coder Plus |
| Google Gemini | Gemini 3.1 Pro Preview, Gemini 3 Flash Preview, Gemini 2.5 Pro, Gemini 2.5 Flash |
| Groq | Llama 3.3 70B Versatile, Llama 3.1 8B Instant, Mixtral 8x7B, Gemma 2 9B |
| Mistral | Mistral Large, Devstral Medium, Magistral Medium, Codestral |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner |
| Inception Labs | Mercury 2, Mercury Edit 2, Mercury Coder Small |
| xAI | Grok Code Fast 1, Grok 4.1 Fast Reasoning, Grok 4 Fast, Grok 3 |
| Cerebras | Llama 3.1 70B, Llama 4 Scout |
| Fireworks AI | StarCoder 7B, Llama 3.1 405B |
| Function Network | DeepSeek R1, Qwen2.5 Coder 32B |
| Together AI | Llama 3.1 405B Turbo, Llama 3.1 70B Turbo, CodeLlama 70B, Mixtral 8x7B |
| DeepInfra | DeepSeek R1, DeepSeek V3, Qwen2.5 Coder 32B, Llama 3.1 405B |
| Perplexity | Sonar Deep Research, Sonar Reasoning Pro, Sonar Pro |
| Cohere | Command A Reasoning, Command A, Command A Vision |
| Moonshot AI | Moonshot 128K, Moonshot 32K |
| MiniMax | MiniMax M2.7, MiniMax M2.7 Highspeed, MiniMax M2.5 |
| Mimo | Mimo VL 7B, Mimo Embedding |
| Novita AI | DeepSeek R1, DeepSeek V3, Llama 3.1 405B, Llama 3.3 70B |
| Nebius AI Studio | DeepSeek R1, DeepSeek V3, Llama 3.1 405B |
| NVIDIA NIM | Qwen3 Coder 480B A35B, GLM-4.7, Kimi K2 Instruct 0905 |
| NCompass | Qwen2.5 Coder 32B, Qwen2.5 72B, Llama 3.3 70B |
| SambaNova | Llama 4 Maverick, Llama 3.3 70B, DeepSeek R1, QwQ 32B |
| SiliconFlow | DeepSeek R1, DeepSeek V3, Qwen2.5 Coder 32B, Nemotron 70B |
| Tensorix | GLM-5, GLM-4.7, MiniMax M2.5, DeepSeek V3.1 |
| OVHcloud | Qwen2.5 Coder 32B, Qwen3 Coder 30B, GPT OSS 120B, DeepSeek R1 Distill 70B |
| Nous Research | Hermes 3 Llama 405B, DeepHermes 3 Mistral 24B |
| Kindo | GPT-4o, Claude 3.5 Sonnet, Gemini Pro |
| Alibaba DashScope | Qwen3 Coder Plus, Qwen3 Max, Qwen Plus, Qwen Turbo |
| Z.ai | GLM-5, GLM-4.7, GLM-4 Plus, GLM-4.5 |
| Morph | Auto, Morph V3 Fast, Morph V3 Large |
| Hugging Face | GPT OSS 120B, DeepSeek R1, Qwen2.5 Coder 32B |
| Venice | Llama 3.3 70B, Qwen2.5 Coder 32B, DeepSeek R1 |
| Scaleway | Qwen3 Coder 30B, Qwen2.5 Coder 32B, GPT OSS 120B, DeepSeek R1 Distill 70B |
| LM Studio | Qwen2.5 Coder 32B, Llama 3.1 8B |
| Ollama | Llama 3.1 8B, Qwen2.5 Coder 7B, Qwen2.5 Coder 32B |
| llama.cpp | Qwen2.5 Coder 7B, Llama 3.1 8B |
| Llamafile | Llama 3.1 8B, CodeLlama 7B |
| Lemonade | Llama 3.1 8B, Qwen2.5 Coder 7B |
| Text Generation WebUI | Mistral 7B, CodeLlama 34B |
| Llama Stack | Llama 3.1 8B, Llama 3.1 70B |
| vLLM | Qwen2.5 Coder 32B, Llama 3.1 70B |
| Docker Model Runner | Llama 3.1, Qwen2.5 Coder |
## Local Model Saving

Altru Coder treats models as local extension settings:

1. Open Settings in the Altru Coder sidebar.
2. Open Models.
3. Select Add model.
4. Pick a provider preset.
5. Choose a built-in seed model or fetch the provider model list from the API.
6. Enter the API key and save.
7. The saved model appears in the chat model picker.
The key design rule is simple: the chat picker should show models the user saved, not every possible provider model in the world.
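That rule can be sketched as a simple filter. The record shapes below are illustrative assumptions, not the extension's real settings schema: the picker intersects the provider's remote catalog with what the user explicitly saved.

```typescript
// Toy illustration of the "saved models only" rule; the real extension
// persists saved models in local settings, and these shapes are assumptions.
interface SavedModel {
  providerId: string;
  modelId: string;
}

function pickerModels(remote: string[], saved: SavedModel[], providerId: string): string[] {
  const savedIds = new Set(
    saved.filter((m) => m.providerId === providerId).map((m) => m.modelId),
  );
  // Show only the remote models the user explicitly saved for this provider.
  return remote.filter((id) => savedIds.has(id));
}
```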
## Modes

| Mode | Default behavior |
| --- | --- |
| Code | Implements only when the user asks for implementation or file creation. |
| Ask | Explains, compares, and answers without changing files by default. |
| Debug | Investigates failures, reads logs, and proposes or applies fixes. |
| Orchestrator | Breaks broad work into ordered steps and coordinates execution. |
| Plan | Produces implementation plans before code changes. |
## Development

Install dependencies with Bun:

```shell
bun install
```

Run the CLI:

```shell
bun run dev
```

Build and launch the VS Code extension:

```shell
bun run extension
```

Fast launch after a successful build:

```shell
bun run extension -- --no-build
```

Useful checks:

```shell
bun run typecheck
bun run lint
```

Package-scoped tests should be run from the package directory. Do not run `bun test` at the repository root; the root test script intentionally exits.
## Extension Launch Notes

On Windows, a previous Extension Development Host can keep `packages/altru-coder-vscode/bin/altru-coder.exe` locked. If the build fails with `EACCES` while removing that file, close the old dev host or stop the stale `altru-coder.exe serve` process, then rerun the extension command.
## Quality Bar

This repo expects production-grade changes:

| Expectation | Meaning |
| --- | --- |
| Type safety | Run the smallest relevant typecheck before declaring work ready. |
| Narrow edits | Keep Altru Coder-specific changes in Altru Coder paths when possible. |
| Local persistence | Extension settings and saved models should work without a remote dashboard. |
| Merge discipline | Shared OpenCode files should be touched only when necessary and marked when required. |
| Real verification | Prefer real implementation tests over mocks. |
## Support

Need setup help or model-provider support? Join the WhatsApp community at https://chat.whatsapp.com/J0XfY7LF9RDDNQN54CQOTY or email altru.coder.ai@gmail.com.
## License

MIT. See LICENSE for the license text.