Ellavox AI

Automatically detect prompts in your files and show token counts with cost estimates for GPT-4o, GPT-4, Claude, and Llama 3.

Features

  • Auto-detect prompts in any file type:
    • Markdown — headers (## System Prompt), fenced code blocks, XML tags (<system>...</system>)
    • Code files — variables named prompt, systemPrompt, userMessage, etc. in JS/TS/Python
    • Plaintext / .prompt — treats the entire file as a prompt
  • Token counts via CodeLens annotations above each detected block
  • Cost estimates based on current model pricing (per 1M input tokens)
  • Hover details — hover over any prompt block for a full breakdown: tokens, characters, words, lines, cost, and context window usage
  • Model switcher — click the status bar or any CodeLens to switch between tokenizers
  • Count selection — select any text and run the command to count tokens

Supported Models

Model               Tokenizer               Price (input / 1M)   Context window
GPT-4o / o1 / o3    o200k_base              $2.50                128K
GPT-4 / GPT-3.5     cl100k_base             $30.00               128K
Claude              cl100k_base (approx.)   $3.00                200K
Llama 3 / 3.1+      Llama 3 BPE             $0.59                128K

Claude token counts are approximate — Anthropic does not publish a standalone tokenizer for Claude 3+.
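The cost column translates directly into the estimate shown in each CodeLens: token count divided by one million, times the per-million input price. A minimal sketch of that arithmetic, using the prices from the table above (the function and map names are illustrative, not the extension's actual API):

```typescript
// Per-1M-token input prices (USD), taken from the table above.
const INPUT_PRICE_PER_1M: Record<string, number> = {
  "gpt-4o": 2.5,
  "gpt-4": 30.0,
  "claude": 3.0,
  "llama-3": 0.59,
};

// Estimated input cost for a prompt of `tokens` tokens.
function estimateCost(model: string, tokens: number): number {
  const price = INPUT_PRICE_PER_1M[model];
  if (price === undefined) throw new Error(`Unknown model: ${model}`);
  return (tokens / 1_000_000) * price;
}
```

So a 1,000-token prompt under GPT-4o costs roughly a quarter of a cent, while the same prompt under GPT-4 costs about 3 cents.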

Usage

  1. Open any file containing prompts
  2. Token counts appear as CodeLens above each detected prompt block
  3. Hover over a prompt block for a detailed breakdown
  4. Click the status bar item (bottom-right) or any CodeLens to switch models
  5. Select text and run Ellavox AI: Count Tokens in Selection from the command palette

Commands

Command                                   Description
Ellavox AI: Select Model                  Switch the active tokenizer model
Ellavox AI: Count Tokens in Selection     Count tokens in the current text selection

Settings

Setting                      Default   Description
llmTokenCount.defaultModel   gpt-4o    Default tokenizer model
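In settings.json this might look like the following (gpt-4o is the documented default; identifiers for the other models are assumptions based on the table above, so check the setting's dropdown for the exact accepted values):

```jsonc
{
  // Default tokenizer model; other identifiers from the model table
  // (e.g. for GPT-4, Claude, Llama 3) may differ from what is shown here.
  "llmTokenCount.defaultModel": "gpt-4o"
}
```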

What Gets Detected

In Markdown / plaintext:

  • Sections under prompt-related headers (## System, ## User, ## Prompt, etc.)
  • Fenced code blocks
  • XML-style tags (<system>, <user>, <assistant>, <prompt>, etc.)
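A detector for the XML-style case can be expressed as a single regex with a backreference, so an opening tag only matches its own closing tag. This is a sketch using the tag names listed above, not the extension's actual implementation:

```typescript
// Match <system>…</system>-style blocks; the \1 backreference ensures
// the closing tag name matches the opening one.
const TAG_RE = /<(system|user|assistant|prompt)>([\s\S]*?)<\/\1>/g;

// Return every tagged prompt block found in `text`.
function findTaggedPrompts(text: string): { tag: string; body: string }[] {
  const out: { tag: string; body: string }[] = [];
  for (const m of text.matchAll(TAG_RE)) {
    out.push({ tag: m[1], body: m[2].trim() });
  }
  return out;
}
```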

In code files (JS/TS/Python/etc.):

  • Variables whose name contains prompt, system, user, assistant, message, instruction, or context
  • Supports template literals, triple-quoted strings, and single-line strings
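The variable-name check reduces to a keyword test over the names listed above. A minimal sketch of that predicate (the keyword list comes from the bullet above; the function name is illustrative):

```typescript
// Keywords from the list above; a variable whose name contains any of
// them is treated as a candidate prompt.
const PROMPT_KEYWORDS = [
  "prompt", "system", "user", "assistant", "message", "instruction", "context",
];

// True if a variable name suggests it holds a prompt.
function isPromptVariable(name: string): boolean {
  const lower = name.toLowerCase();
  return PROMPT_KEYWORDS.some((kw) => lower.includes(kw));
}
```

The match is case-insensitive, so camelCase (`systemPrompt`), snake_case (`user_message`), and plain lowercase names are all caught by the same check.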

Development

npm install
npm run build    # one-time build
npm run watch    # rebuild on changes

Press F5 to launch the Extension Development Host for testing.

License

MIT
