Verba

Talent Factory GmbH

The Developer's Dictation Extension – Voice dictation with AI-powered post-processing for VS Code

Verba records speech via your microphone, transcribes it with OpenAI Whisper, and post-processes the transcript with Claude — all directly inside VS Code. Filler words are removed, sentences are smoothed, and the result is inserted at your cursor position.


Features

  • Dictation in Editor and Terminal -- Cmd+Shift+D (Mac) / Ctrl+Shift+D (Windows/Linux) starts and stops recording. Text is inserted contextually in the editor or terminal.
  • Prompt Templates -- Choose a template before each recording: Free Text, Commit Message, JavaDoc, Markdown, or Email. The template controls how Claude post-processes the transcript.
  • Fully Configurable -- Templates are defined in settings.json and freely extensible. Add custom templates with any prompt.
  • Bring Your Own Key -- Use your own OpenAI and Anthropic API keys. No subscription costs, full data control. Keys are stored securely in VS Code's SecretStorage.

Prerequisites

  • ffmpeg must be installed (audio recording)
  • OpenAI API Key (Whisper transcription)
  • Anthropic API Key (Claude post-processing)

Installing ffmpeg

macOS:

brew install ffmpeg

Linux (Debian/Ubuntu):

sudo apt install ffmpeg

Linux (Fedora):

sudo dnf install ffmpeg

Windows:

Download from ffmpeg.org and add to PATH, or via Chocolatey:

choco install ffmpeg

Platform-Specific Notes

Platform   Audio Backend   Microphone Selection
macOS      AVFoundation    Default microphone
Linux      PulseAudio      Default microphone
Windows    DirectShow      Configurable via verba.audioDevice or Quick Pick

Linux: PulseAudio must be running (default on Ubuntu, Fedora, and most desktop distributions).

Windows: On first use, a Quick Pick dialog lets you select the microphone. You can change it anytime with the command Verba: Select Audio Device or by setting verba.audioDevice in Settings. Verba detects devices via ffmpeg (v7 and v8+ formats) with a PowerShell fallback.
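As a rough illustration of the device-detection step described above, the sketch below parses audio device names out of ffmpeg's DirectShow device listing (the output of `ffmpeg -list_devices true -f dshow -i dummy`). The function name and the exact log format are assumptions for illustration; this is not Verba's actual source.

```typescript
// Hypothetical sketch (not Verba's internals): extract audio device names
// from ffmpeg's DirectShow listing, where device lines look roughly like:
//   [dshow @ 000001f4] "Microphone (Realtek Audio)" (audio)
function parseDshowAudioDevices(stderr: string): string[] {
  const devices: string[] = [];
  for (const line of stderr.split("\n")) {
    // Capture the quoted device name, but only on lines tagged "(audio)"
    const match = line.match(/"([^"]+)"\s*\(audio\)/);
    if (match) {
      devices.push(match[1]);
    }
  }
  return devices;
}
```

A real implementation would also need to handle the newer listing format (the README notes ffmpeg v7 and v8+ differ) and fall back to PowerShell when parsing fails.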

Installation

Install from the VS Code Marketplace:

ext install talent-factory.verba

Or search for "Verba" in the VS Code Extensions sidebar.

Quick Start

  1. Cmd+Shift+D (Ctrl+Shift+D on Windows/Linux) -- Quick Pick with template selection appears
  2. Choose a template (e.g., "Free Text") -- recording starts
  3. Speak
  4. Cmd+Shift+D -- recording stops, text is transcribed and processed
  5. Result appears at your cursor position

On first use, you will be prompted for your API keys, which are stored securely.

Terminal Mode

When the integrated terminal is focused, dictated text is inserted there instead. With verba.terminal.executeCommand: true, the text is additionally submitted with Enter.
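The routing rule above can be sketched as a small pure function. The names here are illustrative, not Verba's actual API: dictated text goes to the focused terminal, otherwise to the editor, and Enter is only pressed in the terminal when verba.terminal.executeCommand is enabled.

```typescript
// Hypothetical sketch of the insertion rule described above.
interface InsertTarget {
  where: "terminal" | "editor";
  pressEnter: boolean;
}

function resolveInsertTarget(terminalFocused: boolean, executeCommand: boolean): InsertTarget {
  if (terminalFocused) {
    // Enter is only sent when the user opted in via verba.terminal.executeCommand
    return { where: "terminal", pressEnter: executeCommand };
  }
  // In the editor, text is always inserted without submitting anything
  return { where: "editor", pressEnter: false };
}
```

In an actual extension, the terminal branch would map to the VS Code API's `Terminal.sendText`, which takes the text and a flag controlling whether a newline is sent.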

Configuration

Custom Templates

Define custom templates in settings.json:

{
  "verba.templates": [
    {
      "name": "Free Text",
      "prompt": "Clean up the transcript: remove filler words, smooth broken sentence starts, fix transcription errors. Keep the original language and meaning. Return only the cleaned text."
    },
    {
      "name": "Code Review",
      "prompt": "Convert this transcript into structured code review feedback with bullet points for issues found and suggestions. Keep the original language."
    }
  ]
}

Each template consists of a name (displayed in the Quick Pick) and a prompt (the instruction sent to Claude for post-processing).
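The template shape implied by the settings above, and one way the post-processing instruction might be assembled from it, can be sketched like this. The interface and function names are assumptions for illustration, not Verba's internals.

```typescript
// Hypothetical sketch: the template shape from verba.templates, plus a
// simple way to combine a template's prompt with a raw transcript.
interface VerbaTemplate {
  name: string;   // shown in the Quick Pick
  prompt: string; // instruction sent to Claude for post-processing
}

function buildCleanupInstruction(template: VerbaTemplate, transcript: string): string {
  // The template's prompt leads, followed by the transcript to be cleaned
  return `${template.prompt}\n\nTranscript:\n${transcript}`;
}
```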

Settings

Setting                        Type     Default               Description
verba.audioDevice              String   ""                    Audio input device name (Windows); leave empty for auto-detection
verba.templates                Array    5 built-in templates  Prompt templates for post-processing
verba.terminal.executeCommand  Boolean  false                 Submit dictated text in the terminal with Enter

Architecture

Microphone --> ffmpeg (WAV) --> Whisper API --> Claude API --> Editor/Terminal
                                                (Template)
Module                   Purpose
recorder.ts              ffmpeg child process for audio recording
transcriptionService.ts  OpenAI Whisper API integration
cleanupService.ts        Anthropic Claude API integration
pipeline.ts              Processing stage orchestration
templatePicker.ts        Quick Pick menu for template selection
insertText.ts            Text insertion into editor or terminal
statusBarManager.ts      Status bar display (Idle/Recording/Transcribing)
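The stage orchestration above can be sketched with each stage injected as a function, so the flow is visible without the real ffmpeg, Whisper, or Claude integrations. All names and signatures here are assumptions for illustration, not Verba's actual interfaces.

```typescript
// Hypothetical sketch of the dictation pipeline: record -> transcribe ->
// cleanup -> insert, mirroring the module table above.
interface PipelineStages {
  record: () => Promise<Uint8Array>;                          // recorder.ts: mic -> WAV bytes
  transcribe: (wav: Uint8Array) => Promise<string>;           // transcriptionService.ts: Whisper
  cleanup: (text: string, prompt: string) => Promise<string>; // cleanupService.ts: Claude + template
  insert: (text: string) => Promise<void>;                    // insertText.ts: editor or terminal
}

async function runDictationPipeline(stages: PipelineStages, templatePrompt: string): Promise<string> {
  const wav = await stages.record();
  const raw = await stages.transcribe(wav);
  const cleaned = await stages.cleanup(raw, templatePrompt);
  await stages.insert(cleaned);
  return cleaned;
}
```

Injecting the stages this way also makes the flow easy to unit-test with stubs, which may be why the project separates pipeline.ts from the individual service modules.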

Development

npm run compile     # Compile TypeScript
npm run watch       # Watch mode
npm run test:unit   # Unit tests
npm run test        # All tests (compile + unit + integration)

Contributing

Found a bug or have a feature request? Open an issue.

License

MIT
