# 🔦 Lumen

AI coding assistant for VSCode — smart chat, plan mode, autopilot, MCP support, and live diffs.

Lumen embeds a full-featured AI coding agent into VSCode. It works out of the box with felgof (get an API key at felgof.ru) or any OpenAI-compatible endpoint.

The UI is available in English and Russian.
## ✨ Features

- 🎚 Three work modes — Assistant / Plan / Autopilot — one click to switch.
- 🛡 Multi-layer permission gate — configurable glob-based auto-approve; hardcoded protection for `.env`, `.git/`, `*.pem`, `*.key` and destructive operations.
- 🔌 Model Context Protocol (MCP) — plug in your own MCP servers; tools are auto-discovered.
- 📋 Project rules — `.md` files with frontmatter, filtered by `applyTo` glob per active file.
- 🌐 Web search — Jina + DuckDuckGo; agents can hit the internet when needed.
- 💬 Streaming chat — live thinking, real-time todos with a progress bar, expandable agent timelines.
- ✍️ Live file-write preview — watch the model stream code directly into the target file with per-hunk Accept / Revert.
- 📌 Sticky todo panel — multi-step plans stay pinned above the message list so you never lose track.
- 🧠 Prompt caching — compatible providers are billed at cache-read rates (~10% of input), cutting costs on long sessions.
- 🗜 Automatic context compaction — when the window fills up, the session is summarized and trimmed automatically (or on demand via `/compact`).
- 💰 Detailed usage breakdown — the cost indicator shows input / output / cache-read / cache-write tokens separately, straight from provider usage frames.
- ⏮ Checkpoints — snapshot the workspace before bulk changes and roll back with one click.
- 📎 Attachments — files, images, code selections; `Ctrl+Shift+L` sends the current selection to chat.
- 🔄 Network resilience — up to 30 retries with backoff, online-event wake-up, connection-lost banner.
- 🔐 Security — API key in VSCode `SecretStorage`, outbound secret redactor, SSRF guard, obfuscated production bundle.
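The retry behavior in the network-resilience bullet can be sketched as exponential backoff with jitter. This is an illustrative helper, not Lumen's actual implementation; the function name, delays, and the use of `ConnectionError` are assumptions:

```python
import random
import time

def retry_with_backoff(fn, max_retries=30, base_delay=0.5, max_delay=30.0):
    """Retry fn() with capped exponential backoff and jitter (sketch)."""
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Double the delay each attempt, cap it, and add jitter so many
            # clients reconnecting at once do not hammer the server together.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))

# Example: a call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network down")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # → ok
```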
## 🚀 Quick Start

- Get an API key at felgof.ru.
- Install the extension from the VS Code Marketplace.
- Set your API key — two ways:
  - Command Palette: `Ctrl+Shift+P` → `Lumen: Set API Key` → paste.
  - Or open Settings → Provider → API Key and paste there.
- Click the Lumen icon in the Activity Bar (left sidebar).
- Ask a question — the agent will answer.

The key is stored in the system `SecretStorage` (Keychain on macOS, Credential Manager on Windows, libsecret on Linux) and is never written to disk in plaintext.
## ⚙️ Settings

All settings live under `Lumen: Settings` (or the gear icon in the chat header).

| Tab | What it controls |
| --- | --- |
| Provider | felgof (default) or a custom OpenAI-compatible URL + API key |
| Behavior | Default mode, permissions, notifications, interface language |
| MCP | Your MCP servers (stdio / http / sse) |
| Rules | Project `.md` rules with frontmatter |
| Diagnostics | Logs, telemetry, network |
| About | Version, license, links |
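Project rules (the Rules tab) are plain `.md` files whose frontmatter controls when they apply. A minimal hypothetical rule file follows; only the `applyTo` glob key is taken from this README, while the file location and body are illustrative:

```markdown
---
applyTo: "src/**/*.ts"
---

Prefer async/await over raw promise chains.
Never log secrets or API keys.
```

When the active file matches the `applyTo` glob, the rule's body is included in the agent's context.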
## 🎯 Work Modes

### Assistant (default)

Safe, interactive mode. Before every:

- shell command,
- file write / edit,
- MCP tool call,

Lumen asks for your confirmation. Auto-approve rules are configured via a glob map (`**/*.ts: allow`, `**/secret.ts: deny`). Sensitive paths (`.env`, `.git/`, `*.pem`, `*.key`) always require explicit consent — even if your auto-approve map says `**`.
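The precedence described above (hardcoded sensitive paths beat any user rule, first matching glob wins otherwise) can be sketched in a few lines. This is a simplified model, not Lumen's actual code, and uses Python's `fnmatch` glob semantics as an approximation:

```python
from fnmatch import fnmatch

# Hardcoded protection: these always require explicit consent.
SENSITIVE = [".env", ".git/*", "*.pem", "*.key"]

def is_auto_approved(path, rules):
    """rules: ordered {glob: "allow" | "deny"}. First matching rule wins."""
    if any(fnmatch(path, pat) for pat in SENSITIVE):
        return False  # sensitive paths win over any user rule, even "**"
    for pattern, action in rules.items():
        if fnmatch(path, pattern):
            return action == "allow"
    return False  # no rule matched: fall back to asking the user

rules = {"**/secret.ts": "deny", "**/*.ts": "allow"}
print(is_auto_approved("src/app.ts", rules))     # → True
print(is_auto_approved("src/secret.ts", rules))  # → False
print(is_auto_approved("deploy.key", rules))     # → False (sensitive path)
```

Note that the deny rule is listed before the broader allow rule, so the more specific glob is checked first.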
### Plan

Mode for larger tasks. Lumen drafts a detailed plan in `.lumen/plans/<date>-<slug>.md`, shows it in a VSCode preview tab, and waits for your approval.

- Approved → automatically switches to Assistant and starts executing the plan.
- Not approved → edit the plan directly in the file or in the chat.
### Autopilot

Fully autonomous mode. Lumen executes the entire plan without step-by-step confirmation. Ideal for long tasks, migrations, and bulk refactors.

Destructive operations are blocked even in Autopilot:

- `rm -rf /`, `rm -rf ~`,
- `git push --force` to protected branches,
- deletion of `.git/`, `.env`, and system directories,
- arbitrary execution of suspicious shell pipelines.

For those, the agent must return control to the user.
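A destructive-operation gate of this kind typically pattern-matches the command before execution. A minimal sketch in the spirit of the blocklist above; the regexes are illustrative and far less thorough than a real guard would be:

```python
import re

# Hypothetical patterns covering the examples listed above.
DESTRUCTIVE = [
    r"\brm\s+-[a-z]*r[a-z]*f[a-z]*\s+(/|~)(\s|$)",  # rm -rf / or rm -rf ~
    r"\bgit\s+push\s+.*--force\b",                   # force-push
    r":\(\)\s*\{.*\};\s*:",                          # classic fork bomb
]

def is_destructive(cmd):
    """Return True if the shell command matches a known dangerous pattern."""
    return any(re.search(p, cmd) for p in DESTRUCTIVE)

print(is_destructive("rm -rf /"))          # → True
print(is_destructive("git push --force"))  # → True
print(is_destructive("ls -la"))            # → False
```

Real guards also normalize quoting and aliases, which simple regexes miss; that is why sensitive operations still return control to the user.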
## 🔌 Custom Provider

Lumen supports any OpenAI-compatible API. In Provider settings pick Custom and fill in:

- Base URL — the endpoint (`https://.../v1` or equivalent),
- Model — the model ID provided by the service,
- API Key — authorization (stored in `SecretStorage`),
- Models URL (optional) — if the provider exposes its model list at a separate URL.
### Example configurations

| Provider | Base URL | Notes |
| --- | --- | --- |
| felgof (default) | (built-in) | Works right after you enter the key |
| Custom proxy | `https://your-proxy.example.com/v1` | Any OpenAI-Chat-Completions-compatible server |
| Local LLM | `http://localhost:<port>/v1` | Ollama, LM Studio, vLLM, etc. |
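To sanity-check an endpoint before pointing Lumen at it, you can build a plain chat-completions request yourself. A minimal sketch; the port (Ollama's default) and model name are placeholders, and the `/chat/completions` path is the standard OpenAI-compatible route:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "http://localhost:11434/v1", "unused-for-local", "llama3",
    [{"role": "user", "content": "Say hi"}],
)
print(req.full_url)  # → http://localhost:11434/v1/chat/completions
# To actually send it (requires the server to be running):
#   urllib.request.urlopen(req)
```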
## 💰 Cost & Context

The indicator in the chat header shows:

- Context fill — how much of the model's window is used (turns warning at 60%, danger at 85%).
- Usage breakdown — input, output, cache-read and cache-write tokens separately, taken directly from the provider's usage frames.
- One-click compaction — when context gets tight, run `/compact` or press the indicator to summarize the session and free up space. Lumen can also compact automatically when you cross the threshold.

Prompt caching is supported transparently for compatible providers: repeated prompt prefixes are billed at cache-read rates (~10% of input cost), which noticeably lowers the bill on long conversations.
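The savings are easy to estimate. Using hypothetical prices (only the 10% cache-read ratio comes from this README; the dollar rates and session shape are made up for illustration):

```python
INPUT_PRICE = 3.00       # hypothetical $ per 1M fresh input tokens
CACHE_READ_PRICE = 0.30  # 10% of the input rate, per the ratio quoted above

def turn_cost(fresh_tokens, cached_tokens):
    """Cost of one turn given fresh vs cache-read input tokens."""
    return (fresh_tokens * INPUT_PRICE + cached_tokens * CACHE_READ_PRICE) / 1_000_000

# A long session: a 100k-token prefix reused over 20 turns, 2k fresh tokens each.
without_cache = sum(turn_cost(102_000, 0) for _ in range(20))
with_cache = turn_cost(102_000, 0) + sum(turn_cost(2_000, 100_000) for _ in range(19))
print(f"${without_cache:.2f} vs ${with_cache:.2f}")  # → $6.12 vs $0.99
```

The repeated prefix dominates the bill, which is why caching helps most on long sessions with a stable system prompt and history.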
## 🔐 Security & Privacy

- API keys live in VSCode `SecretStorage` — never in `settings.json`, never in the shipped VSIX.
- Redactor scans outgoing prompts for token shapes (`sk-…`, `ghp_…`, `AKIA…`) and substitutes `[REDACTED]` — even if you accidentally paste a key into chat.
- SSRF guard blocks agent requests to private IP ranges (`10.0.0.0/8`, `127.0.0.0/8`, `169.254.0.0/16`, …) and cloud metadata endpoints.
- Destructive-op guard blocks dangerous shell patterns (`rm -rf /`, fork bombs, force-push to protected branches) in every mode, including Autopilot.
- Obfuscated production bundle — the extension JavaScript is processed by `javascript-obfuscator` before publishing.
- Consent-based telemetry — no code, prompts, or responses are sent to third-party servers without your explicit consent.
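A redactor of this kind is essentially substitution over known token shapes. A minimal sketch matching the three shapes listed above; the exact regexes are assumptions (real key formats are stricter and more numerous):

```python
import re

# Token shapes from the list above; a real redactor matches many more formats.
SECRET_PATTERNS = [
    r"sk-[A-Za-z0-9_-]{8,}",  # OpenAI-style API keys
    r"ghp_[A-Za-z0-9]{8,}",   # GitHub personal access tokens
    r"AKIA[0-9A-Z]{16}",      # AWS access key IDs
]

def redact(text):
    """Replace anything shaped like a known secret with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

print(redact("my key is sk-abc123def456ghi"))  # → my key is [REDACTED]
```

Running the scan on every outbound prompt means a pasted key never leaves the machine, even before the model sees it.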
## ⌨️ Commands & Hotkeys

| Command | Hotkey | Description |
| --- | --- | --- |
| Lumen: Set API Key | — | Enter / update the API key |
| Lumen: New Chat | — | Start a new chat |
| Lumen: History | — | Open chat history |
| Lumen: Changes | — | List all AI-made changes in the workspace |
| Lumen: Checkpoints | — | Restore the workspace to a prior state |
| Lumen: Checkpoint Now | — | Create a manual workspace checkpoint |
| Lumen: Settings | — | Open extension settings |
| Lumen: Add Selection to Chat | `Ctrl+Shift+L` (macOS: `Cmd+Shift+L`) | Send selected code to chat |
| Lumen: Accept ALL AI Changes (all files) | — | Accept every AI change across all files |
| Lumen: Revert ALL AI Changes (all files) | — | Revert every AI change |
| Lumen: Restore File Backup | — | Restore a single file from its backup |

Full list: Command Palette → `Ctrl+Shift+P` → type `Lumen:`.
## 📚 Resources

Questions, bug reports, feature requests, and discussions happen on our Telegram channel: 📢 t.me/felgof

Release announcements, usage guides, and examples live there as well.
## 📝 License

Proprietary. Copyright © 2026 Felgof. All rights reserved.
## 🇷🇺 For Russian speakers

The extension's interface is fully translated — switch the language in Settings → Behavior → Language to "Русский". Community and support are on Telegram: t.me/felgof.