Published on the VS Code Marketplace by nagendra080389.
⚡🧠 Prompt Builder Multi-LLM Debugger

A VS Code extension for debugging and comparing Salesforce Prompt Builder templates across multiple Large Language Models (LLMs) side-by-side.


Extension Screenshot

Prompt Execution Workflow

This diagram illustrates the end-to-end flow of how a prompt is executed, from the VS Code extension to the Salesforce Einstein APIs and back.


✨ Features

🔄 Multi-Model Comparison

Compare prompt responses from multiple LLMs simultaneously:

  • OpenAI Models: GPT-5.2, GPT-5.1, GPT-5, GPT-4.1, GPT-4o, GPT-4o mini, O3, O4 Mini
  • Anthropic Claude: Sonnet 4.5, Opus 4.5, Haiku 4.5 (via Amazon Bedrock)
  • Google Gemini: Gemini 3 Pro, Gemini 2.5 Pro, Gemini 2.0 Flash (via Vertex AI)
  • Amazon Models: Nova Pro, Nova Lite
  • Azure OpenAI: GPT-4, GPT-3.5 Turbo

🎯 Smart Features

  • Auto-Detect Inputs: Automatically detects and populates prompt template variables
  • Multiple Display Modes: View responses as HTML, Plain Text, Raw, or Formatted JSON
  • Performance Metrics: Response time, actual token usage (input/output breakdown), Einstein Requests calculation, and finish reason for each model
  • Accurate Token Counts: Real token counts from Models API (not estimates!), enabling precise cost calculations
  • Safety Scores: View AI safety analysis for generated content
  • Diff View: Side-by-side word-level comparison between two models with highlighted differences
  • Export Results: Download comparison results as JSON for further analysis
  • Copy Individual Results: Quick copy-to-clipboard functionality
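The word-level Diff View mentioned above can be sketched with a classic longest-common-subsequence (LCS) comparison. This is not the extension's actual implementation, just an illustration of how two model responses can be diffed word by word:

```typescript
// Illustrative word-level diff via LCS dynamic programming.
// Words that are not on the LCS path are reported as removed/added.
function wordDiff(a: string, b: string): { removed: string[]; added: string[] } {
  const wa = a.split(/\s+/).filter(Boolean);
  const wb = b.split(/\s+/).filter(Boolean);

  // dp[i][j] = length of the LCS of the first i words of a and j words of b.
  const dp: number[][] = Array.from({ length: wa.length + 1 }, () =>
    new Array<number>(wb.length + 1).fill(0)
  );
  for (let i = 1; i <= wa.length; i++) {
    for (let j = 1; j <= wb.length; j++) {
      dp[i][j] =
        wa[i - 1] === wb[j - 1]
          ? dp[i - 1][j - 1] + 1
          : Math.max(dp[i - 1][j], dp[i][j - 1]);
    }
  }

  // Walk back through the table; off-path words are the differences.
  const removed: string[] = [];
  const added: string[] = [];
  let i = wa.length, j = wb.length;
  while (i > 0 || j > 0) {
    if (i > 0 && j > 0 && wa[i - 1] === wb[j - 1]) { i--; j--; }
    else if (j > 0 && (i === 0 || dp[i][j - 1] >= dp[i - 1][j])) { added.unshift(wb[--j]); }
    else { removed.unshift(wa[--i]); }
  }
  return { removed, added };
}
```

For example, diffing "the quick brown fox" against "the slow brown fox" reports `quick` as removed and `slow` as added; a UI can then highlight those words side by side.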

⚙️ Configurable Settings

  • API version selection
  • Temperature control (0-2)
  • Max tokens limit
  • Request timeout
  • Maximum models for comparison (default: 3)
  • Default display mode

🎨 Modern UI

  • Clean, card-based interface
  • Responsive design (works on all screen sizes)
  • Real-time status updates
  • Visual model selection chips
  • Dark/Light theme support (follows VS Code theme)

📋 Requirements

  • VS Code: Version 1.80.0 or higher
  • Salesforce CLI: Installed and authenticated (sf command)
  • Salesforce Org: With Einstein/Agentforce enabled
  • Prompt Builder: At least one prompt template configured

Installing Salesforce CLI

# macOS
brew install salesforce-cli

# Windows
# Download from: https://developer.salesforce.com/tools/sfdxcli

# Linux
npm install -g @salesforce/cli

Authenticating with Salesforce

# Login to your org
sf org login web

# Verify authentication
sf org display
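The extension's SalesforceAuth service relies on this CLI authentication. As a rough sketch (assuming the documented `sf org display --json` output shape, with the token under `result.accessToken`), extracting credentials might look like:

```typescript
// Sketch: pull an access token out of `sf org display --json` output.
// Field names assume the CLI's documented JSON shape; this is not the
// extension's actual code.
interface SfOrgDisplay {
  status: number;
  result?: { accessToken?: string; instanceUrl?: string };
}

function parseOrgDisplay(json: string): { accessToken: string; instanceUrl: string } {
  const parsed = JSON.parse(json) as SfOrgDisplay;
  if (parsed.status !== 0 || !parsed.result?.accessToken || !parsed.result.instanceUrl) {
    throw new Error("No access token found -- run `sf org login web` first");
  }
  return { accessToken: parsed.result.accessToken, instanceUrl: parsed.result.instanceUrl };
}

// In an extension, the JSON string would come from spawning the CLI, e.g.
//   child_process.execSync("sf org display --json", { encoding: "utf8" })
```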

🚀 Installation

From VS Code Marketplace (Coming Soon)

  1. Open VS Code
  2. Go to Extensions (Cmd+Shift+X / Ctrl+Shift+X)
  3. Search for "Prompt Builder Debugger"
  4. Click Install

From VSIX File

  1. Download the latest .vsix from Releases
  2. Open VS Code
  3. Go to Extensions → "..." menu → Install from VSIX
  4. Select the downloaded file

From Source

# Clone the repository
git clone https://github.com/Nagendra080389/prompt-builder-multi-debugger_tracker.git
cd prompt-builder-multi-debugger

# Install dependencies
npm install

# Compile TypeScript
npm run compile

# Package extension
npm run package

# Install the generated .vsix file
code --install-extension prompt-builder-multi-debugger-0.0.1.vsix

📖 Usage

Quick Start

  1. Open a Salesforce Project

    • Ensure you have an sfdx-project.json file
    • The extension auto-activates for Salesforce projects
  2. Launch the Debugger

    • Click the ⚡🧠 icon in the Activity Bar (left sidebar)
    • Or use Command Palette: "Prompt Builder: Open Multi-LLM Debugger"
  3. Select Prompt Template

    • Choose from your org's prompt templates
    • Input variables auto-populate
  4. Select Models

    • Click to select up to 3 models
    • See selected models as visual chips
  5. Execute Comparison

    • Click "Execute Comparison"
    • View side-by-side results
  6. Analyze Results

    • Compare response quality
    • Check performance metrics
    • Review safety scores
    • Switch display modes (HTML/Plain/Raw/Formatted)

Display Modes

Toggle between different view modes for each result:

  • HTML: Renders HTML formatting (links, bold, lists)
  • Plain: Strips HTML, shows plain text only
  • Raw: Shows raw response with visible HTML tags
  • Formatted: Auto-formats JSON with syntax highlighting
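The "Plain" mode above amounts to stripping markup from the model's HTML response. A minimal sketch (regex-based for illustration; the extension's real code is not shown here, and a production implementation would likely use a proper HTML parser):

```typescript
// Minimal "Plain" display mode: drop tags and decode a few common entities.
function toPlainText(html: string): string {
  return html
    .replace(/<[^>]*>/g, "")   // remove all HTML tags
    .replace(/&amp;/g, "&")
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&nbsp;/g, " ")
    .trim();
}
```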

Configuration

Access settings via: VS Code Settings → Extensions → Prompt Builder Debugger

| Setting | Default | Description |
| --- | --- | --- |
| API Version | v65.0 | Salesforce API version |
| Temperature | 0.7 | LLM temperature (0-2) |
| Max Tokens | 500 | Maximum response tokens |
| Request Timeout | 30000 ms | API request timeout |
| Max Models | 3 | Maximum models for comparison |
| Default Display Mode | html | Default view mode for results |
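The defaults in the table can be captured as a typed object. The property names below are illustrative (the real setting keys live in the extension's package.json contribution points):

```typescript
// Typed mirror of the settings table; names are hypothetical.
interface DebuggerSettings {
  apiVersion: string;
  temperature: number;          // valid range 0-2
  maxTokens: number;
  requestTimeoutMs: number;
  maxModels: number;
  defaultDisplayMode: "html" | "plain" | "raw" | "formatted";
}

const defaultSettings: DebuggerSettings = {
  apiVersion: "v65.0",
  temperature: 0.7,
  maxTokens: 500,
  requestTimeoutMs: 30000,
  maxModels: 3,
  defaultDisplayMode: "html",
};
```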

🎯 Use Cases

1. Prompt Engineering

Test different models to find the best one for your use case:

  • Compare response quality across providers
  • Identify which model understands your prompt best
  • Optimize for speed vs. quality trade-offs

2. Cost Optimization

Compare actual token usage across models:

  • Find the most token-efficient model
  • View accurate Einstein Request calculations (175 tokens = 1 request)
  • Balance cost vs. performance with real data
  • Make informed vendor decisions based on precise token counts
  • Models API provides actual token usage (not estimates!)
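Given the 175-tokens-per-request ratio above, the Einstein Request figure follows directly from total token usage. A sketch (rounding up to whole requests is our assumption; Salesforce's actual billing may differ):

```typescript
// Convert total token usage to Einstein Requests at 175 tokens per request.
// Ceiling to whole requests is an assumption for this illustration.
function einsteinRequests(totalTokens: number): number {
  const TOKENS_PER_REQUEST = 175;
  return Math.ceil(totalTokens / TOKENS_PER_REQUEST);
}
```

So a response consuming 350 tokens in total would count as 2 Einstein Requests under this model.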

3. Quality Assurance

Test prompt templates before deployment:

  • Verify consistent responses across models
  • Check safety scores for harmful content
  • Validate output formatting

4. Debugging

Troubleshoot prompt template issues:

  • See auto-detected input variables
  • Test different input values
  • View raw API responses

5. Documentation

Export results to share with your team:

  • Generate comparison reports
  • Document model performance
  • Share findings with stakeholders

🛠️ Development

Project Structure

prompt-builder-multi-debugger/
├── src/
│   ├── extension.ts              # Extension entry point
│   ├── services/
│   │   ├── SalesforceAuth.ts     # SF CLI authentication
│   │   ├── PromptBuilderAPI.ts   # Prompt Builder API calls
│   │   └── LLMService.ts         # LLM execution service
│   ├── webview/
│   │   └── WebviewProvider.ts    # UI logic and HTML
│   └── types/
│       └── index.ts              # TypeScript type definitions
├── models-config.json            # Available LLM models
├── icon.svg                      # Extension icon
├── package.json                  # Extension manifest
└── tsconfig.json                # TypeScript configuration

Building from Source

# Install dependencies
npm install

# Compile TypeScript
npm run compile

# Watch mode (auto-compile on save)
npm run watch

# Run tests
npm run test

# Lint code
npm run lint

# Package extension
npm run package

Running in Development

  1. Open the project in VS Code
  2. Press F5 to launch Extension Development Host
  3. A new VS Code window opens with the extension loaded
  4. Test your changes

🤝 Feedback & Issues

We welcome your feedback and bug reports!

How to Report Issues

  1. Check if the issue already exists: View Issues
  2. Create a new issue: Report Bug
  3. Provide detailed information:
    • Extension version
    • VS Code version
    • Steps to reproduce
    • Expected vs actual behavior
    • Screenshots (if applicable)

Feature Requests

Have an idea for a new feature? Open a discussion or create a feature request issue!

📊 Model Support

The extension supports 25+ LLM models from multiple providers. See models-config.json for the complete list.

Adding New Models

To add a new model, edit models-config.json:

{
  "fallbackModels": [
    {
      "id": "sfdc_ai__DefaultNewModel",
      "name": "New Model Name",
      "provider": "Provider Name"
    }
  ]
}
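When hand-editing models-config.json, a small validator can catch malformed entries before the extension loads them. A sketch based on the entry shape shown above (the validator itself is not part of the extension):

```typescript
// Typed mirror of a models-config.json entry, plus a structural validator.
interface ModelEntry {
  id: string;
  name: string;
  provider: string;
}

function isValidModelEntry(value: unknown): value is ModelEntry {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" && v.id.length > 0 &&
    typeof v.name === "string" &&
    typeof v.provider === "string"
  );
}
```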

🐛 Troubleshooting

Extension Not Activating

  • Ensure you have sfdx-project.json in your workspace
  • Check VS Code version (must be 1.80+)

"No access token found"

  • Run: sf org login web
  • Verify: sf org display

"No templates found"

  • Ensure your org has Prompt Builder enabled
  • Check you have at least one prompt template
  • Verify API access permissions

Models Not Loading

  • Check Salesforce org has Einstein enabled
  • Verify API version in settings
  • Try refreshing (close/reopen debugger)

API Timeout Errors

  • Increase timeout in settings
  • Check network connection
  • Verify Salesforce org is accessible

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built with VS Code Extension API
  • Uses Salesforce CLI
  • Powered by Salesforce Einstein API
  • Model data from Salesforce Supported Models

📞 Support

  • Issues: GitHub Issues
  • Discussions: GitHub Discussions
  • Documentation: Wiki

📈 Changelog

See CHANGELOG.md for version history and release notes.

⚠️ Disclaimer

This is an independent open-source project and is not officially affiliated with or endorsed by Salesforce, OpenAI, Anthropic, Google, or Amazon. All trademarks belong to their respective owners.

The extension uses publicly available Salesforce APIs and requires proper authentication and permissions.


⭐ If you find this extension helpful, please star the tracker repo!
