# ChatDBG - A Prompt Engineer's Best Friend
ChatDBG is a VS Code extension that helps you debug and analyze your OpenAI chat completions. It provides a local proxy server that logs all your LLM interactions, letting you inspect, search, and analyze them directly in VS Code.
## Features
- 🔍 Log and Inspect: Automatically logs all OpenAI chat completions locally
- 🔄 Real-time Updates: View completions as they happen
- 📝 Rich History: Search and browse through your chat completion history
- 🔗 Source Code Links: Jump directly to the code referenced in messages
- 💾 Local Storage: Everything runs locally with SQLite - no external services needed
- 🔌 OpenAI Compatible: Works with any OpenAI-compatible endpoint (OpenAI, Ollama, Azure, etc.)
- ⚡ Completion Caching: Automatically caches and returns completions for identical prompts, speeding up tests and reducing costs
## Security & Privacy
ChatDBG takes your data privacy seriously:
- 🔒 Local Only: All data is stored locally on your machine in a SQLite database
- 🔑 No External Services: ChatDBG only acts as a proxy between your application and OpenAI - no data is sent to any third-party services
- 📝 Data Control: You have full control over your data as it's all stored locally on your machine
- ⏱️ Automatic Cleanup: All prompt and completion data is automatically purged after 24 hours
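As a sketch of what a 24-hour retention purge against a local SQLite store could look like (the `completions` table, `created_at` column, and `purge_old_completions` helper here are illustrative assumptions, not ChatDBG's actual schema or code):

```python
import sqlite3
import time

RETENTION_SECONDS = 24 * 60 * 60  # the 24-hour retention window

def purge_old_completions(conn: sqlite3.Connection, now: float) -> int:
    """Delete rows older than the retention window; return how many were removed."""
    cur = conn.execute(
        "DELETE FROM completions WHERE created_at < ?",
        (now - RETENTION_SECONDS,),
    )
    conn.commit()
    return cur.rowcount

# Demo against an in-memory database: one stale row, one fresh row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE completions (id INTEGER PRIMARY KEY, created_at REAL)")
now = time.time()
conn.execute("INSERT INTO completions (created_at) VALUES (?)", (now - 2 * RETENTION_SECONDS,))
conn.execute("INSERT INTO completions (created_at) VALUES (?)", (now,))
removed = purge_old_completions(conn, now)
print(removed)  # 1
```

The stale row is deleted while the fresh one survives, which is the behavior the cleanup guarantee describes.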
## Getting Started

1. Install the extension from the VS Code Marketplace.
2. Configure your application to use ChatDBG as a proxy:

   ```shell
   export OPENAI_BASE_URL=http://localhost:7777/v1
   ```

3. (Optional) Configure a custom LLM provider in VS Code settings:
   - Set `chatdbg.llmProviderUrl` to your provider's base URL (e.g., `http://localhost:11434` for Ollama).
   - Defaults to OpenAI's API if not specified.
4. Start making OpenAI API calls - they'll automatically appear in the ChatDBG panel.
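As a minimal sketch of what the proxy configuration does: OpenAI-compatible clients read their base URL from `OPENAI_BASE_URL`, so setting it reroutes every request through ChatDBG. The `chat_completions_url` helper below is purely illustrative, not part of any SDK:

```python
import os

# Point any OpenAI-compatible client at the local ChatDBG proxy.
os.environ["OPENAI_BASE_URL"] = "http://localhost:7777/v1"

def chat_completions_url() -> str:
    """Resolve the endpoint a client would call, honoring the proxy override."""
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return base.rstrip("/") + "/chat/completions"

print(chat_completions_url())  # http://localhost:7777/v1/chat/completions
```

With the override in place, requests hit `localhost:7777` (where ChatDBG logs them and forwards them upstream) instead of going to OpenAI directly.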
## Using ChatDBG

### Viewing Completions
- Open the ChatDBG panel in the VS Code activity bar (look for the Bug icon)
- Click on a logged chat/completion to view it
### Inspecting Messages
Click on any completion to see its details. Individual messages are
collapsed by default. Click on a message to expand it.
Click on the "Find Code" button next to any message to jump to the source code or template file that generated the message.
No more crawling through logs! All your LLM interactions are now in one place.
## License
This project is licensed under the MIT License - see the LICENSE.md file for details.
## Completion Caching
ChatDBG caches completions to speed up your tests and reduce their cost. When you re-run a test with an identical prompt and identical metadata (model, temperature, etc.), ChatDBG immediately returns the cached answer instead of calling the API again, saving time and resources.
### Benefits
- Speed: Get instant responses for repeated prompts, accelerating your test cycles.
- Cost Efficiency: Reduce API call costs by reusing cached completions.
### Trade-offs
- Variability: Identical prompts always return the same cached answer, so tests that rely on varied responses to the same prompt won't benefit from this feature. If your tests depend on, or specifically exercise, the non-determinism of your model, disable caching.
### How to Use

Caching is enabled by default. You can disable it by changing the `chatdbg.cachingEnabled` setting in your VS Code configuration.
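To illustrate the "identical prompt and metadata" condition, a cache lookup key could be derived by hashing the request's model, temperature, and messages together, so any change to one of them produces a new key and a fresh API call. This is a sketch of the mechanism, not ChatDBG's actual implementation:

```python
import hashlib
import json

def cache_key(model: str, temperature: float, messages: list) -> str:
    """Derive a stable cache key from the completion request's content and metadata."""
    payload = json.dumps(
        {"model": model, "temperature": temperature, "messages": messages},
        sort_keys=True,  # stable serialization so equivalent requests match
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

msgs = [{"role": "user", "content": "Summarize this diff."}]
k1 = cache_key("gpt-4o-mini", 0.0, msgs)
k2 = cache_key("gpt-4o-mini", 0.0, msgs)  # identical request -> same key (cache hit)
k3 = cache_key("gpt-4o-mini", 0.7, msgs)  # changed temperature -> new key (cache miss)
print(k1 == k2, k1 == k3)  # True False
```

Because the key covers metadata as well as the prompt, re-running a test unchanged hits the cache, while tweaking the model or sampling parameters bypasses it.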