# HDL Wave AI

AI-assisted hardware verification for VS Code. Connect your active simulation waveform to an LLM — ask questions about signal behavior, debug logic errors, and cross-reference transitions against your HDL source in natural language. Built as a companion to the VaporView waveform viewer.

## How it works
## Requirements
## Quick Start

### Using Claude (Anthropic)
### Using a local model via Ollama
Set in VS Code Settings:
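For example, a minimal `settings.json` fragment for a local Ollama model might look like this (values taken from the defaults documented under Extension Settings; adjust the model name to whatever you have pulled):

```json
{
  "hdlWaveAi.provider": "openai-compatible",
  "hdlWaveAi.openaiCompatible.baseUrl": "http://localhost:11434/v1",
  "hdlWaveAi.openaiCompatible.apiKey": "ollama",
  "hdlWaveAi.openaiCompatible.model": "qwen2.5-coder:32b"
}
```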
Or run Ollama via Docker Compose — see the docker-compose example in the repo.

## Usage

### Workflow
### Tips
## Generating a VCD for testing

If you don't have a VCD handy, Icarus Verilog can generate one from any Verilog testbench:
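A minimal sketch, assuming `iverilog` and `vvp` are on your PATH (the testbench and file names here are placeholders):

```verilog
// tb.v — hypothetical testbench; $dumpfile/$dumpvars tell the
// simulator where to write the VCD and which scopes to record.
module tb;
  reg clk = 0;
  always #5 clk = ~clk;
  initial begin
    $dumpfile("wave.vcd");
    $dumpvars(0, tb);   // dump everything under tb
    #100 $finish;
  end
endmodule
```

Then compile and run:

```shell
iverilog -o tb.vvp tb.v   # compile
vvp tb.vvp                # simulate; writes wave.vcd
```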
## Tool-Use (RAG) Mode

For large designs with millions of signal transitions, the extension uses a tool-calling approach instead of dumping all transitions into the LLM context. The LLM receives a compact waveform summary and queries signal data on-demand through tools. This is enabled by default (`hdlWaveAi.waveform.useToolMode`).

## MCP Server

The extension includes a standalone MCP (Model Context Protocol) server that exposes waveform query tools to any MCP-compatible client — Claude Code, Claude Desktop, Cursor, and others. This runs independently of VS Code.

### Setup

Build the server (if not already built):
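The exact command depends on the repo's build scripts; with a standard npm setup it would typically be:

```shell
npm install
npm run build
```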
This produces the standalone server build.

### Claude Code
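One way to register the server with Claude Code is via `claude mcp add` (the server name `hdl-wave` and the script path are placeholders — substitute whatever the build actually produced):

```shell
claude mcp add hdl-wave -- node /path/to/dist/mcp-server.js
```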
Or to pre-load a waveform on startup:
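Assuming the server accepts a waveform path as a startup argument (hypothetical — check the server's own help output), that might look like:

```shell
claude mcp add hdl-wave -- node /path/to/dist/mcp-server.js /path/to/design.vcd
```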
### Claude Desktop

Add to your `claude_desktop_config.json`:
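A sketch of the entry (the server name and script path are placeholders):

```json
{
  "mcpServers": {
    "hdl-wave": {
      "command": "node",
      "args": ["/path/to/dist/mcp-server.js"]
    }
  }
}
```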
Project-level config (the file location depends on the client) uses the same server entry.
| Tool | Description |
|---|---|
| `load_waveform` | Load a VCD or FST file (replaces any previously loaded waveform) |
| `list_signals` | List all signals with transition counts |
| `query_transitions` | Get transitions for a signal in a time range (capped at 150) |
| `get_value_at` | Get the value of a signal at a specific timestamp |
| `find_hdl_modules` | Search directories for HDL modules ranked by relevance to loaded waveform signals |
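As an illustration of the capped, range-filtered behavior described for `query_transitions`, here is a minimal sketch; the data model and the even-sampling strategy are assumptions, not the server's actual implementation:

```typescript
// Hypothetical model of query_transitions: filter a sorted transition
// list to a time range, then evenly sample down to the cap (150).
type Transition = { time: number; value: string };

function queryTransitions(
  all: Transition[],
  start: number,
  end: number,
  cap = 150
): Transition[] {
  const inRange = all.filter((t) => t.time >= start && t.time <= end);
  if (inRange.length <= cap) return inRange;
  // Evenly sample so the LLM still sees the whole range, endpoints included.
  const out: Transition[] = [];
  const step = (inRange.length - 1) / (cap - 1);
  for (let i = 0; i < cap; i++) {
    out.push(inRange[Math.round(i * step)]);
  }
  return out;
}
```

The even sampling preserves the first and last in-range transitions, so the LLM can always see the boundaries of the window it asked about.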
### Example Prompt
After loading a waveform, try:
> Load the waveform at /path/to/design.vcd, then analyze signal activity between t=4200000 and t=4220000. What instructions is the CPU fetching and are there any anomalies?
## FST Support
FST files require `fst2vcd` (part of GTKWave) to be installed and on your PATH.
## Extension Settings
| Setting | Default | Description |
|---|---|---|
| `hdlWaveAi.provider` | `anthropic` | LLM provider: `anthropic` or `openai-compatible` |
| `hdlWaveAi.anthropic.apiKey` | — | Anthropic API key |
| `hdlWaveAi.anthropic.model` | `claude-sonnet-4-6` | Anthropic model ID |
| `hdlWaveAi.openaiCompatible.baseUrl` | `http://localhost:11434/v1` | Base URL for OpenAI-compatible API |
| `hdlWaveAi.openaiCompatible.apiKey` | `ollama` | API key (any string works for Ollama) |
| `hdlWaveAi.openaiCompatible.model` | `qwen2.5-coder:32b` | Model name |
| `hdlWaveAi.waveform.useToolMode` | `true` | Use tool-calling (RAG) mode for waveform analysis |
| `hdlWaveAi.waveform.sampleStepSize` | `1` | Time step size for waveform sampling |
| `hdlWaveAi.waveform.maxTransitions` | `300` | Max transitions sent to the LLM in legacy mode (evenly sampled if exceeded) |
| `hdlWaveAi.waveform.defaultEndTime` | `10000` | Fallback end time when no VaporView markers are set |
| `hdlWaveAi.hdl.searchPaths` | `[]` | Extra absolute paths to search for HDL source files |
| `hdlWaveAi.hdl.maxModules` | `5` | Max HDL modules to include, ranked by relevance |
| `hdlWaveAi.hdl.maxCharsPerModule` | `4000` | Max characters per module before truncation |
| `hdlWaveAi.chat.conversational` | `true` | Keep prior exchanges in context |
| `hdlWaveAi.chat.maxHistory` | `20` | Max messages retained in conversational mode |
### Tuning for larger models

Models with bigger context windows (32b+) can handle more data. Increase these settings:

```jsonc
"hdlWaveAi.waveform.maxTransitions": 1000,
"hdlWaveAi.hdl.maxModules": 10,
"hdlWaveAi.hdl.maxCharsPerModule": 8000
```
## Commands

| Command | Description |
|---|---|
| `HDL Wave AI: Open Chat` | Open the AI chat panel |
| `HDL Wave AI: Debug VaporView State` | Dump VaporView state to the Output channel for troubleshooting |
## Troubleshooting
**No waveform context / "No signals tracked yet"**
Add signals to VaporView before opening the chat. The extension reads whatever is currently displayed in the signal list.
**HDL context not found**
Either open your RTL directory as the VS Code workspace root, or add the path to `hdlWaveAi.hdl.searchPaths` in settings.
**LLM not responding / very slow**
For large VCDs with no markers set, the extension may sample many time steps. Set markers in VaporView to limit the time range, or increase `hdlWaveAi.waveform.sampleStepSize`.
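For example, coarsening the sampling in `settings.json` (the value is illustrative; tune it to your timescale):

```json
{
  "hdlWaveAi.waveform.sampleStepSize": 100
}
```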
**Check the Output channel**
Run `HDL Wave AI: Debug VaporView State` and open the HDL Wave AI output channel (View → Output) to see what signals, URIs, and state are being read.
## License
AGPL-3.0 — see LICENSE.