Token Explorer · Copilot Ready

Mohit Ghodke · 1 install · Free

Visualise tokenization of Markdown / plain-text files for the AI model of your choice. See live token counts, context-window usage, and estimated cost before sending a file to Copilot.
Installation

Launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.

Token Explorer · Copilot Ready

Know your token budget before you hit Send.
Visualise exactly how any Markdown or plain-text file is tokenized — live, as you type — for the AI model of your choice. See token counts, context-window usage, and estimated API cost before handing a file to GitHub Copilot or any other LLM.


What's New in v0.2.0

Workspace Files view — browse, count & budget across all your files at once

A new Workspace Files panel appears in the Token Explorer sidebar as soon as you open any workspace. It:

  • Auto-discovers every Markdown and text file in the workspace (.md, .markdown, .mdx, .txt, .text, .rst, .log) and shows the token count + context-window % next to each file name — no need to open files one by one.
  • Splits files into two sections — Markdown files and Text files — each with its own aggregate (total tokens + % of context window).
  • Multi-file selection via checkboxes — tick any combination of files. A live Selection row at the top instantly aggregates:
    • Combined token count
    • Combined context-window percentage for the active model
    • Estimated input cost (USD) for the selection
    • Warnings when the selection exceeds 90% or 100% of the context window
  • The view reacts to model changes — switching to a different model re-tokenizes every file and updates every number automatically.
  • A Refresh button and a Clear Selection button appear in the view title bar.
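The two-section grouping described above reduces to an extension lookup. A minimal sketch in TypeScript; the constant and function names are illustrative, not the extension's actual internals:

```typescript
// Group workspace files into the view's two sections by extension.
// The extension lists exactly these extensions; the grouping logic
// here is an illustrative guess, not the shipped implementation.
const MARKDOWN_EXTS = new Set([".md", ".markdown", ".mdx"]);
const TEXT_EXTS = new Set([".txt", ".text", ".rst", ".log"]);

type Section = "markdown" | "text" | null;

function classify(fileName: string): Section {
  const dot = fileName.lastIndexOf(".");
  if (dot === -1) return null;
  const ext = fileName.slice(dot).toLowerCase();
  if (MARKDOWN_EXTS.has(ext)) return "markdown";
  if (TEXT_EXTS.has(ext)) return "text";
  return null; // not shown in the Workspace Files view
}
```

Files that classify to `null` simply never appear in the view, which matches the Markdown/text-only scope described above.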

Features

Live Token Count in the Status Bar

A persistent status-bar item shows ⟨model⟩ · ⟨N⟩ tokens for the active file, updating automatically as you edit.
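Producing that label is a one-liner; this formatter is a hypothetical sketch (with an explicit locale for a stable thousands separator), not the extension's code:

```typescript
// Sketch of the status-bar label: "<model> · <N> tokens", with a
// locale-aware thousands separator pinned to en-US for determinism.
function statusBarText(model: string, tokens: number): string {
  return `${model} · ${tokens.toLocaleString("en-US")} tokens`;
}
```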

Activity-Bar Sidebar

A dedicated Token Explorer panel in the activity bar gives you three views:

  • Dashboard – Model name, token count, context-window usage %, estimated input cost (USD), tokenizer algorithm, and any accuracy caveats
  • Workspace Files – Every .md / .txt file in the workspace with live per-file token counts, context %, and a multi-file selection budget calculator
  • Tokens – Tokenizer algorithm, vocab size, context limit, and a link to the token preview

Workspace Files — Multi-file Budget Calculator

The Workspace Files view (new in v0.2.0) lets you plan prompt budgets across multiple files:

  1. Open a workspace — the view populates automatically and tokenizes every Markdown and text file in the background.
  2. Each file shows ⟨N⟩ tok · ⟨X.XX⟩% alongside its name. Hover for a detailed tooltip (tokens, context %, estimated cost, file size).
  3. Tick the checkboxes next to the files you plan to pass to a prompt. The Selection row at the top updates in real time:
    • Number of files selected
    • Total token count
    • Percentage of the active model's context window that the selection consumes
    • Estimated input cost
  4. Switch models via Token Explorer: Select Model — all counts and percentages update to reflect the new model.

Example: Tick four Markdown docs you want to include in a Copilot prompt. The selection row immediately shows that they consume 18,420 tokens — 1.84% of GPT-4.1's 1M context window, costing an estimated $0.0368 in API input tokens.
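The numbers in the example above drop out of simple arithmetic over the per-file counts. A minimal sketch, assuming GPT-4.1's published rate of $2.00 per million input tokens; the type and function names are illustrative, not the extension's internals:

```typescript
// Sketch of the Selection-row maths: combined tokens, context-window
// percentage, estimated input cost, and the budget warnings.
interface FileTokens {
  name: string;
  tokens: number;
}

interface SelectionSummary {
  files: number;
  totalTokens: number;
  contextPercent: number;   // % of the model's context window
  estimatedCostUsd: number; // input-side cost only
  warning?: "near-limit" | "over-limit";
}

function summarizeSelection(
  selected: FileTokens[],
  contextWindow: number,       // e.g. 1_000_000 for GPT-4.1
  inputCostPerMTokUsd: number, // e.g. 2.00 (assumed GPT-4.1 input rate)
): SelectionSummary {
  const totalTokens = selected.reduce((sum, f) => sum + f.tokens, 0);
  const contextPercent = (totalTokens / contextWindow) * 100;
  const summary: SelectionSummary = {
    files: selected.length,
    totalTokens,
    contextPercent,
    estimatedCostUsd: (totalTokens / 1_000_000) * inputCostPerMTokUsd,
  };
  if (contextPercent >= 100) summary.warning = "over-limit";
  else if (contextPercent >= 90) summary.warning = "near-limit";
  return summary;
}
```

Feeding in files totalling 18,420 tokens against a 1M-token window at $2.00/MTok reproduces the 1.84% and ~$0.0368 figures from the example.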

Inline Editor Highlighting

Enable Token Explorer: Toggle Inline Highlighting to colour each token directly inside the editor using a six-colour cycling palette — ideal for understanding exactly where token boundaries fall.
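A cycling palette of this kind reduces to an index modulo the palette length. A sketch with placeholder colours (the extension's actual palette is not documented here):

```typescript
// Six-colour cycling palette: token i gets colour i mod 6.
// The hex values below are illustrative placeholders.
const PALETTE = [
  "#ffadad", "#ffd6a5", "#fdffb6",
  "#caffbf", "#9bf6ff", "#bdb2ff",
];

function tokenColour(tokenIndex: number): string {
  return PALETTE[tokenIndex % PALETTE.length];
}
```

Because adjacent tokens always receive different colours, every token boundary in the editor is visible even when tokens abut with no whitespace between them.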

Token Preview Webview

Open a side panel that renders the document as coloured token chips, one chip per token. Newlines and tabs are shown as visible glyphs (⏎ / →). Per-model caveats are listed below the chips.
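The glyph substitution the preview performs amounts to a small string replacement. A hypothetical helper, not the extension's code:

```typescript
// Render a token's text as a chip label, making newlines and tabs
// visible as glyphs, as the Token Preview does.
function chipLabel(token: string): string {
  return token.replace(/\n/g, "⏎").replace(/\t/g, "→");
}
```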

26-Model Coverage

Switch models at any time from the command palette or settings. Every model in the list below is selectable:

OpenAI / GPT family — GPT-5.5, GPT-5.4, GPT-5.4 mini, GPT-5.3 Codex, GPT-5.2, GPT-5.2 Codex, GPT-5 mini, GPT-4.1, GPT-4o, GPT-3.5 Turbo
Anthropic / Claude family — Claude Opus 4.7, Claude Sonnet 4.6, Claude Sonnet 4.5, Claude Sonnet 4, Claude Haiku 4.5, Claude 3.5 Sonnet
Google / Gemini family — Gemini 3.1 Pro, Gemini 3 Flash, Gemini 2.5 Pro, Gemini 1.5 Pro
Other — Grok Code Fast 1, Raptor mini (GitHub Copilot internal), LLaMA 3 70B, LLaMA 2 70B, Mistral 7B v0.1, Cohere Command R+

Real tokenizer libraries are used wherever possible:

  • js-tiktoken (OpenAI BPE) – GPT-4.1, GPT-4o, GPT-3.5 Turbo, and all GPT-5 variants
  • @anthropic-ai/tokenizer – All Claude variants
  • @xenova/transformers (Gemma) – All Gemini variants (within ~1% of real Gemini counts)
  • Heuristic BPE proxy – LLaMA, Mistral, Cohere, Grok, Raptor, and newer models without a public tokenizer

Accuracy caveats are shown inline whenever a proxy tokenizer or estimated pricing is used, so you always know exactly how much to trust a number.
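A heuristic proxy of the kind the table mentions is often a characters-per-token estimate. The 4-characters-per-token rule below is a common rough rule for English text and purely an assumption for illustration, not the extension's actual heuristic; it shows why such counts can only ever be approximate:

```typescript
// Hypothetical heuristic proxy: estimate tokens as ceil(chars / 4).
// NOT the extension's real heuristic; real BPE counts depend on the
// vocabulary, so any fixed ratio is only an approximation.
function approxTokenCount(text: string): number {
  if (text.length === 0) return 0;
  return Math.ceil(text.length / 4);
}
```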

Copilot Chat Participant — @tokens

Ask questions about any open file directly inside Copilot Chat:

  • @tokens /count – Token count, context %, and estimated cost for the active file
  • @tokens /cost – Estimated API input cost only
  • @tokens /compare – Side-by-side table of token counts across all 26 models
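Routing the three slash commands can be sketched as a plain dispatch. The real participant is built on the VS Code Chat API, which is omitted here so the sketch stays self-contained; the response strings and the `TokenStats` shape are placeholders:

```typescript
// Sketch of dispatching @tokens slash commands to formatted replies.
// TokenStats is a hypothetical shape, not the extension's internals.
type TokenStats = { tokens: number; contextPercent: number; costUsd: number };

function respond(command: string, stats: TokenStats): string {
  switch (command) {
    case "/count":
      return `${stats.tokens} tokens (${stats.contextPercent.toFixed(2)}% of context, ~$${stats.costUsd.toFixed(4)})`;
    case "/cost":
      return `~$${stats.costUsd.toFixed(4)} estimated input cost`;
    case "/compare":
      return "comparison across all 26 models";
    default:
      return `unknown command: ${command}`;
  }
}
```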

Getting Started

  1. Install the extension.
  2. Open any workspace folder — the Workspace Files view immediately starts scanning and tokenizing every Markdown and text file.
  3. Open any Markdown (.md) or plain-text (.txt) file for live status-bar updates and the Dashboard view.
  4. The status bar shows the token count for the default model (gpt-4.1).

Plan a multi-file prompt budget

  1. Click the Token Explorer icon in the activity bar.
  2. Open the Workspace Files section.
  3. Tick the files you intend to include in a prompt.
  4. Read the Selection row — it shows the total token count and exactly how much of the model's context window your selection consumes.

Change the Active Model

  • Command palette → Token Explorer: Select Model
  • Settings → tokenExplorer.defaultModel

All views (Dashboard, Workspace Files) update automatically.

Open the Token Preview

Command palette → Token Explorer: Open Token Preview
A panel opens beside your editor showing every token as a coloured chip.

Ask Copilot

With a Markdown file open, type @tokens /count in the Copilot Chat panel.


Commands

  • Token Explorer: Select Model – Pick the active model / tokenizer
  • Token Explorer: Open Token Preview – Open the side webview with coloured token chips
  • Token Explorer: Toggle Inline Highlighting – Toggle per-token background colours in the editor
  • Token Explorer: Refresh – Force a re-tokenization of the active file and workspace files list
  • Token Explorer: Copy Token Count – Copy the current count to the clipboard
  • Token Explorer: Compare All Models for This File – Print a comparison table to the Output channel
  • Token Explorer: Refresh Workspace Files – Rescan and re-tokenize all workspace files
  • Token Explorer: Clear Selection – Deselect all checked files in the Workspace Files view

Settings

  • tokenExplorer.defaultModel (default: gpt-4.1) – Model used when opening a file
  • tokenExplorer.inlineHighlighting (default: false) – Highlight every token directly in the editor
  • tokenExplorer.debounceMs (default: 250) – Delay in milliseconds before re-tokenizing after an edit
  • tokenExplorer.showStatusBar (default: true) – Show the live status-bar item
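The behaviour behind tokenExplorer.debounceMs corresponds to a classic trailing debounce: every edit resets a timer, and re-tokenization runs only after edits pause for the configured delay. A generic sketch, not the extension's code:

```typescript
// Trailing debounce: fn runs only once edits stop for `ms` milliseconds.
// Each call cancels any pending run and schedules a fresh one.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  ms: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

With the default of 250 ms, a burst of keystrokes triggers a single re-tokenization a quarter of a second after typing stops, rather than one per keystroke.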

Requirements

  • VS Code 1.95.0 or later
  • No external API keys required — all tokenizers run fully offline inside the extension

Known Limitations

  • Only Markdown and plain-text files are supported in the Workspace Files view. Support for additional languages (Python, TypeScript, etc.) is planned.
  • Tokenization of very large files (>1 MB) may be slow when inline highlighting is on.
  • Gemini, Grok, Raptor, and several newer GPT-5 variants use proxy or heuristic tokenizers. Token counts are approximate. Accuracy caveats are always displayed in the UI.
  • Cost estimates for Copilot-bundled models (Raptor, GPT-5.x) are shown as $0 or indicative only; these models are not billed per token through the standard API.
  • The Workspace Files view excludes node_modules, .git, dist, out, build, .next, and .vscode-test folders.
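The folder exclusions amount to a path-segment check. A sketch, assuming (as an illustration) that exclusion applies to any path segment matching one of the listed directory names:

```typescript
// A workspace-relative path is skipped if any of its segments is one
// of the excluded directory names. Hypothetical helper, not the
// extension's actual scanner.
const EXCLUDED_DIRS = new Set([
  "node_modules", ".git", "dist", "out", "build", ".next", ".vscode-test",
]);

function isExcluded(relativePath: string): boolean {
  return relativePath
    .split(/[\\/]/)
    .some((segment) => EXCLUDED_DIRS.has(segment));
}
```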

Changelog

v0.2.0

  • New: Workspace Files view — auto-discover and tokenize every Markdown / text file in the workspace; token count and context % shown inline next to each file name.
  • New: Multi-file selection budget calculator — check multiple files to see combined token count, context-window %, and estimated input cost in real time.
  • New: Separate Markdown / Text sections — files are grouped by type in the Workspace Files view.
  • New: Workspace Files toolbar buttons — Refresh and Clear Selection in the view title bar.
  • Activation now triggers on onStartupFinished so the Workspace Files view populates immediately on any workspace open.

v0.1.0

  • Initial release: live status-bar token count, Dashboard view, Tokens view, inline highlighting, Token Preview webview, 26-model coverage, and @tokens Copilot chat participant.

License

MIT
