# Elixium Forge ⚒️

**Your AI. Your Hardware. Your Code Stays Home.**

Elixium Forge is an autonomous AI coding agent that lives inside VS Code. It reads your codebase, plans changes, writes files, and executes commands — powered entirely by local models on your own hardware. No cloud. No API keys. No data leaving your network.

Part of the Elixium ecosystem.
## The Elixium Ecosystem

| Product | What | Where |
|---|---|---|
| Elixium | Cloud AI platform | elixium.ai |
| Elixium Companion | VS Code → Cloud | Elixium marketplace |
| Elixium Forge | VS Code → Local models | ← you are here |
## Quick Start

**You need:**

- A model server — Ollama, LM Studio, or any OpenAI-compatible endpoint
- A model — `qwen2.5-coder:14b`, `gemma-4`, `deepseek-coder`, or your pick

**Setup:**

1. Install Elixium Forge from the VS Code Marketplace
2. Settings → search "Elixium"
3. Set **Server URL** → `http://localhost:11434` (or your LAN server, with `/v1` for OpenAI-compatible)
4. Set **Default Model** → your model name
5. Open the Elixium Forge sidebar → start building

Forge auto-detects Ollama native vs. OpenAI-compatible from the URL and works with any model that speaks either protocol.
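The URL-based detection can be sketched like this (a minimal illustration, not Forge's actual code; it assumes any endpoint whose path ends in `/v1` speaks the OpenAI-compatible protocol, and `detectProtocol` is a hypothetical name):

```typescript
// Sketch of URL-based protocol auto-detection (illustrative).
// Assumption: a path ending in /v1 means OpenAI-compatible; anything else
// is treated as Ollama's native API.
type Protocol = "ollama" | "openai";

function detectProtocol(serverUrl: string): Protocol {
  const path = new URL(serverUrl).pathname.replace(/\/+$/, "");
  return path.endsWith("/v1") ? "openai" : "ollama";
}

// detectProtocol("http://localhost:11434")      -> "ollama"
// detectProtocol("http://192.168.1.20:8080/v1") -> "openai"
```

This is why a plain Ollama URL needs no suffix, while LAN servers exposing an OpenAI-compatible API are entered with `/v1`.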
## How It Works

### Three-Pass Agent System

Most AI coding tools ask the model to think AND write structured tool calls in the same breath. Local models struggle with this. Forge separates the concerns:

- **Pass 1 — Think.** A ReAct loop reads files, greps code, and plans the change. The model does what it's good at: reasoning.
- **Pass 2 — Apply.** A separate prompt: "Here is the file. Here are the changes. Output ONLY the new file." No XML tags needed.
- **Pass 3 — Audit.** Read the file back. Verify it exists and has content. Report the result.

Inspired by Cursor's two-model architecture, adapted for single-model local inference.
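Passes 2 and 3 can be sketched as follows (an illustrative sketch, not Forge's actual implementation; `LLM` and `FS` are stand-ins for the model call and workspace file access, and Pass 1's ReAct loop is assumed to have already produced `plan`):

```typescript
type LLM = (prompt: string) => string;

interface FS {
  read(path: string): string | null;
  write(path: string, content: string): void;
}

// Pass 1 (the ReAct loop) has already produced `plan`; this sketch covers
// Pass 2 (apply) and Pass 3 (audit).
function applyAndAudit(llm: LLM, fs: FS, filePath: string, plan: string): string {
  // Pass 2 — Apply: no structured tool call, just "output the whole new file".
  const original = fs.read(filePath) ?? "";
  const newFile = llm(
    `Here is the file:\n${original}\n\nHere are the changes:\n${plan}\n\n` +
      `Output ONLY the new file.`
  );
  fs.write(filePath, newFile);

  // Pass 3 — Audit: read the file back and verify it exists and has content.
  const written = fs.read(filePath);
  return written !== null && written.length > 0 ? "ok" : "audit failed";
}
```

The point of the split: the model never has to emit a tool-call envelope around a file body, which is exactly where small local models tend to break.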
### Tools

| | Tool | What It Does |
|---|---|---|
| 🔍 | `GREP` | Search file contents (ripgrep with Node.js fallback) |
| 📂 | `GLOB` | Find files by pattern |
| 📖 | `READ_FILE` | Read with line numbers; optional offset/limit for large files |
| 📁 | `LIST_DIR` | Workspace file tree |
| ✏️ | `EDIT` | Safe edit — requires read-first, enforces uniqueness |
| 📝 | `CREATE_FILE` | Write new files with path traversal protection |
| 🔄 | `SEARCH_REPLACE` | Legacy find-and-replace |
| 🗑️ | `DELETE_FILE` | Moves to trash; permission required |
| ▶️ | `RUN_COMMAND` | Shell execution with permission system |
| 🔧 | `INIT_REPO` | Git init + optional GitHub publish |
| 🎨 | `GENERATE_IMAGE` | Fooocus API integration (optional) |
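The `EDIT` uniqueness rule can be illustrated with a short sketch (a hypothetical function, not the extension's actual code): the edit is refused unless the target text appears exactly once in the file.

```typescript
// Illustrative sketch of the EDIT uniqueness check: refuse an edit unless the
// target text matches exactly one location, so the wrong spot can't be patched.
function applyUniqueEdit(file: string, oldText: string, newText: string): string {
  const matches = file.split(oldText).length - 1;
  if (matches === 0) throw new Error("EDIT rejected: target text not found");
  if (matches > 1) throw new Error("EDIT rejected: target text is ambiguous");
  return file.replace(oldText, newText);
}
```

An ambiguous match usually means the model quoted too little context; re-quoting a longer unique span fixes it.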
Agent Modes
| Mode |
Tools |
Use Case |
| Agent |
All 11 |
Autonomous multi-step tasks |
| Plan |
Read-only |
Analysis without modifications |
| Ask |
None |
Conversational Q&A |
## What Makes Forge Different

**vs. GitHub Copilot / Cursor** — Forge runs 100% offline on your hardware. Your code never leaves your network.
**vs. Cline / Roo Code** — Forge doesn't demand perfect XML tool calls from local models. The fuzzy parser catches whatever format the model throws (`call:TOOL{...}`, JSON blocks, Python-style calls), and the three-pass system means the model never needs to produce structured output for file writes.
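As an illustration of the idea (a simplified sketch, not Forge's actual parser, which handles more formats), a fuzzy parser tries several shapes before giving up:

```typescript
// Simplified sketch of fuzzy tool-call parsing: accept `call:TOOL{...}` first,
// then fall back to any JSON object with a "tool" field, rather than demanding
// one exact XML shape from the model.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

function fuzzyParse(output: string): ToolCall | null {
  // Format 1: call:TOOL{...}
  const m = output.match(/call:([A-Z_]+)\s*(\{[\s\S]*\})/);
  if (m) {
    try {
      return { tool: m[1], args: JSON.parse(m[2]) };
    } catch {
      /* malformed JSON — fall through to the next format */
    }
  }
  // Format 2: a bare JSON block such as {"tool": "GREP", "args": {...}}
  const j = output.match(/\{[\s\S]*\}/);
  if (j) {
    try {
      const obj = JSON.parse(j[0]);
      if (typeof obj.tool === "string") return { tool: obj.tool, args: obj.args ?? {} };
    } catch {
      /* not JSON */
    }
  }
  return null;
}
```

Each fallback trades strictness for recall — better to recover a slightly malformed call than to force the model into a retry loop.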
**vs. ChatGPT / Claude in browser** — Forge has direct workspace access. It reads your actual files, runs your actual commands, writes to your actual disk. It's not a chat window — it's an agent.
## Safety

- **Permission system** — writes, commands, and destructive actions require approval
- **Dangerous command blocking** — `rm`, `sudo`, `chmod` blocked by default
- **Read-first enforcement** — must `READ_FILE` before `EDIT`
- **Edit uniqueness check** — refuses edits that match multiple locations
- **Path traversal protection** — can't write outside the workspace
- **Checkpoint system** — asks "keep going?" every 50 turns
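The path traversal check can be sketched as follows (illustrative, not the extension's actual code; it assumes every requested path is resolved against the workspace root before any write):

```typescript
import * as path from "node:path";

// Illustrative sketch: resolve the requested path against the workspace root
// and refuse anything that would land outside it (e.g. "../../etc/passwd").
function resolveInsideWorkspace(root: string, requested: string): string {
  const resolved = path.resolve(root, requested);
  const rootPrefix = path.resolve(root) + path.sep;
  if (!resolved.startsWith(rootPrefix)) {
    throw new Error(`blocked: "${requested}" escapes the workspace`);
  }
  return resolved;
}
```

Resolving first matters: a naive substring check on the raw input misses `..` segments and absolute paths.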
## Configuration

Settings → search "Elixium":

| Setting | What | Example |
|---|---|---|
| Server URL | Model server endpoint | `http://localhost:11434` |
| API Key | Optional, for authenticated servers | leave blank for Ollama |
| Default Model | Main coding model | `qwen2.5-coder:14b` |
| Fast Model | Chat, quick tasks | `qwen2.5:7b` |
| Deep Model | Planning, complex reasoning | `qwen2.5:32b` |
"elixium.commandExecutionMode": "interactive",
"elixium.agentMode": "react",
"elixium.maxAgentTurns": 100,
"elixium.allowDangerousCommands": false
## Troubleshooting

| Problem | Fix |
|---|---|
| Server unreachable | Check Server URL in settings. Use IP for LAN servers. |
| Model not responding | Verify the model is loaded: `curl http://server:11434/api/tags` |
| Agent claims it edited but didn't | The three-pass system should handle this. Check the Output panel. |
| Extension not updating | Uninstall → Cmd+Q → reinstall the VSIX |
| Chat stuck | Click 🗑️ in the sidebar header |
## License

MIT · Built by IndirectTek for the Elixium ecosystem.