LeanPrompt

stillbuild


Write better AI prompts from inside VS Code. Stop burning tokens.

Ctrl+Shift+K → structured prompt → copy → paste. Done.


Why

When you paste raw code into Claude or ChatGPT, you get raw answers. Long chats hit token limits. Vague prompts waste context. You start over and explain everything again.

LeanPrompt fixes the workflow — not the AI.


How It Works

Step 1 — Pick what you need

Mode — When to use
🔴 Error — Code won't compile or crashes
⚠️ Wrong output — Runs, but gives the wrong result
✅ Review — Works; you want a quick confirm plus one tip
💡 Concept — Type your understanding; the AI flags gaps only
❓ Quiz — One question at a time to test your understanding
↩ Continue — Mid-chat; skips re-sending context to save tokens

Step 2 — Extract your code

Click Extract from Editor to pull your active file directly. Up to two files are supported at once. Large files are truncated at logical boundaries (not mid-function), with a token estimate shown per file.
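A minimal sketch of what the estimate-and-truncate step could look like. The extension's actual heuristics aren't published; the ~4-characters-per-token ratio and the blank-line boundary rule below are assumptions for illustration:

```typescript
// ASSUMPTION: roughly 4 characters per token, a common rough heuristic
// for English text and code. LeanPrompt's real estimator may differ.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// ASSUMPTION: cutting at the last blank line before the limit keeps us
// from truncating mid-function in most source files.
function truncateAtBoundary(code: string, maxTokens: number): string {
  if (estimateTokens(code) <= maxTokens) return code;
  const maxChars = maxTokens * 4;
  const slice = code.slice(0, maxChars);
  const cut = slice.lastIndexOf("\n\n");
  return cut > 0 ? slice.slice(0, cut) : slice;
}
```

A character-count heuristic avoids shipping a real tokenizer with the extension, at the cost of some accuracy per model.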

Step 3 — Add error or output

Paste your error message or wrong output. LeanPrompt includes it in the prompt with the right framing for the mode you picked.
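The per-mode framing could be as simple as a lookup table of lead-in instructions. The template strings below are hypothetical; LeanPrompt's real templates aren't published:

```typescript
// ASSUMPTION: illustrative framing text per mode, not the extension's
// actual templates.
const framing: Record<string, string> = {
  error:
    "The following code fails with this error. Explain the root cause and the minimal fix:",
  wrongOutput:
    "The code runs but produces the wrong result shown below. Identify why:",
  review:
    "Briefly confirm this code is sound and give one improvement tip:",
};

// Assemble code plus error/output under the chosen framing.
function framePrompt(mode: string, code: string, detail: string): string {
  return `${framing[mode]}\n\n${code}\n\n${detail}`;
}
```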

Step 4 — Add your hypothesis (optional)

What do you think is going wrong? Even one sentence here makes AI responses noticeably better.

Generate → Copy → Paste

Hit Generate Prompt. Copy it. Paste into Claude, ChatGPT, or Gemini.


Features

Token Meter — live estimate as you build, color-coded green → yellow → red. Warns before you hit limits and suggests what to trim.
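The color coding is presumably a threshold check against the target model's context limit. The 50% and 80% cutoffs below are assumptions, not the extension's documented values:

```typescript
type MeterColor = "green" | "yellow" | "red";

// ASSUMPTION: green below 50% of the limit, yellow below 80%, red above.
function meterColor(tokens: number, limit: number): MeterColor {
  const ratio = tokens / limit;
  if (ratio < 0.5) return "green";
  if (ratio < 0.8) return "yellow";
  return "red";
}
```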

Follow-up detection — if you're continuing the same problem, LeanPrompt generates a shorter prompt that skips repeated context.

Session Handoff — when a chat gets too long, generate a compact summary (under 200 words) to paste into a fresh chat. Resets your counter automatically.

Chat counter — tracks how many prompts you've sent this session. Turns yellow at 15, red at 20, with a reminder to hand off.
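The counter thresholds stated above (yellow at 15, red at 20) map directly to a small state function; only the function and return-type names here are invented:

```typescript
// Thresholds from the feature description: yellow at 15 prompts, red at 20.
function counterColor(count: number): "default" | "yellow" | "red" {
  if (count >= 20) return "red";
  if (count >= 15) return "yellow";
  return "default";
}
```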

Model toggle — click the badge to switch between Sonnet (fast, efficient, recommended) and Opus (higher quality, slower); a warning appears when Opus is selected.

Open AI directly — buttons to open Claude, ChatGPT, or Gemini in your browser without leaving VS Code.

Session persistence — chat count and prompt number survive VS Code restarts.
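Persistence across restarts is most likely done through VS Code's `context.globalState` Memento API. The sketch below uses a minimal in-memory stand-in for that interface so it runs outside VS Code; the key name and helper are assumptions:

```typescript
// Minimal stand-in for the shape of VS Code's Memento (context.globalState).
// The real API's update() returns a Thenable; this sketch simplifies it.
interface Memento {
  get<T>(key: string, defaultValue: T): T;
  update(key: string, value: unknown): void;
}

class InMemoryMemento implements Memento {
  private store = new Map<string, unknown>();
  get<T>(key: string, defaultValue: T): T {
    return (this.store.has(key) ? this.store.get(key) : defaultValue) as T;
  }
  update(key: string, value: unknown): void {
    this.store.set(key, value);
  }
}

// ASSUMPTION: hypothetical key name. Read the saved count (0 on first run),
// increment, and write it back so it survives a restart.
function bumpChatCount(state: Memento): number {
  const next = state.get("leanprompt.chatCount", 0) + 1;
  state.update("leanprompt.chatCount", next);
  return next;
}
```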


Installation

VS Code Marketplace

Search LeanPrompt in the Extensions panel (Ctrl+Shift+X).

Manual

git clone https://github.com/Adi-gyt/leanprompt
cd leanprompt
npm install
npm run compile
npx vsce package

Extensions → ··· → Install from VSIX → select the .vsix file.


Keyboard Shortcut

Shortcut — Action
Ctrl+Shift+K / Cmd+Shift+K — Open LeanPrompt

Privacy

Runs entirely inside VS Code. No data leaves your machine. No telemetry, no API calls, no tracking.


Roadmap

  • [ ] Token usage history across sessions
  • [ ] Custom prompt templates
  • [ ] VHDL / SystemVerilog specific modes

License

MIT


About

Started as a personal tool to stop wasting Claude tokens while learning RTL and FPGA design. Built by an ECE student using TypeScript and the VS Code Extension API.
