Summarize the active file with the LLM of your choice. Markdown, XML, HTML, JSON, and plain text stay structurally intact — only the prose is condensed.
Providers: OpenAI · Anthropic · Google Gemini · Ollama (local)
Privacy: zero telemetry. The only network calls are to the provider you configure. Your API key lives in VS Code's SecretStorage, encrypted on your machine.
Format preservation: headings, list markers, table scaffolding, XML tags, and JSON keys are preserved verbatim. Hard facts (numbers, dates, proper nouns, URLs, quoted text) are kept losslessly.
## Quick start

1. Install the extension.
2. On first run, the setup walkthrough opens automatically: pick a provider, paste your API key, and choose a model.
3. Open any file and run `Summarizeme: Summarize current file` from the Command Palette (⇧⌘P, or Ctrl+Shift+P on Windows/Linux) or the editor context menu.
4. The summary opens alongside the original: `notes.md` → `notes.summary.md`.
## Settings

| Setting | Default | What it does |
|---|---|---|
| `summarizeme.provider` | `anthropic` | Which LLM provider to use |
| `summarizeme.model` | (provider default) | Model identifier |
| `summarizeme.ollamaBaseUrl` | `http://localhost:11434` | Ollama server URL |
| `summarizeme.customInstructions` | (empty) | Style preferences appended to the prompt |
| `summarizeme.compressionPercent` | `50` | Percent of prose to remove: 10 keeps ≈90% of the original length; 70 keeps ≈30% |
| `summarizeme.outputLocation` | `side` | Where to open the summary |
All settings are workspace-overridable.
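For example, a workspace override in `.vscode/settings.json` might look like this (the setting keys are the ones listed above; the model name `llama3.1` is illustrative, substitute whatever model your Ollama server has pulled):

```jsonc
{
  // Point the extension at a local Ollama server instead of a hosted provider
  "summarizeme.provider": "ollama",
  "summarizeme.ollamaBaseUrl": "http://localhost:11434",
  "summarizeme.model": "llama3.1", // illustrative model name

  // 70 removes ~70% of the prose, so the summary is ≈30% of the original length
  "summarizeme.compressionPercent": 70,
  "summarizeme.outputLocation": "side"
}
```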
## Commands

- `Summarizeme: Summarize current file`
- `Summarizeme: Run setup walkthrough`
- `Summarizeme: Test connection`
- `Summarizeme: Clear stored credentials`
## Custom instructions

`summarizeme.customInstructions` lets you add style preferences such as "use Oxford commas" or "prefer bullet lists." The text is appended to the system prompt inside a sandboxed block, so it cannot override the built-in summarization rules (format preservation, lossless handling of facts, no new information).
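A minimal example in `settings.json` (the instruction text here is illustrative):

```jsonc
{
  // Appended to the system prompt; cannot override the built-in rules
  "summarizeme.customInstructions": "Use Oxford commas. Prefer bullet lists over paragraphs."
}
```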