AutoPromptLab

ChinaAIInfra | 31 installs | (3) | Free
Installation
Launch VS Code Quick Open (Ctrl+P), paste the install command from the Marketplace listing, and press Enter.

AutoPromptLab

A multi-agent engine that automates prompt tuning to deliver peak quality.

AutoPromptLab runs an automated optimization loop — evaluate, score, analyze, optimize, commit — powered by Microsoft AutoGen agents. This VS Code extension brings the full workflow into your editor with real-time monitoring and control.
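The loop described above can be sketched in a few lines. This is a minimal illustration, not AutoPromptLab's actual agent code: the `evaluate`, `analyze`, and `optimize` callables here are stand-ins for the real AutoGen agents.

```python
# Minimal sketch of the evaluate -> score -> analyze -> optimize -> commit loop.
# The agent callables are hypothetical stand-ins, not AutoPromptLab internals.

def run_optimization_loop(prompt, evaluate, analyze, optimize,
                          max_iters=5, target=0.9):
    """Iteratively refine `prompt` until `target` score or `max_iters` is hit."""
    history = []
    for _ in range(max_iters):
        score, cases = evaluate(prompt)      # evaluate + score the current prompt
        history.append((prompt, score))      # commit this iteration's result
        if score >= target:
            break
        findings = analyze(cases)            # analyze low-scoring cases
        prompt = optimize(prompt, findings)  # propose a revised prompt
    return prompt, history
```

Each `(prompt, score)` pair in `history` corresponds to one iteration, which is what the iteration history views in the dashboard track.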

Features

Session Management

Create, start, stop, and resume optimization sessions directly from the sidebar. Import completed CLI runs to review results in the UI.

Real-time Agent Monitoring

Watch the multi-agent orchestrator in action. See routing decisions, agent messages, and tool calls as they happen via WebSocket streaming.
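Streamed events of this kind are typically JSON messages dispatched by type. A small sketch, assuming hypothetical event shapes (the real AutoPromptLab wire format may differ):

```python
import json

def dispatch_event(raw, handlers):
    """Parse one streamed JSON event and route it to a handler by its type."""
    event = json.loads(raw)
    handler = handlers.get(event.get("type"))
    if handler is None:
        return None  # unknown event types are ignored
    return handler(event)

# Illustrative handlers for the three event kinds the monitor displays.
handlers = {
    "routing":       lambda e: f"route -> {e['target_agent']}",
    "agent_message": lambda e: f"{e['agent']}: {e['content']}",
    "tool_call":     lambda e: f"tool {e['tool']}({e['args']})",
}
```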

Metrics Dashboard

Track five evaluation dimensions — completeness, clarity, engagement, factual accuracy, and groundedness — with score cards, radar charts, and iteration history.
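Per-iteration summaries over these five dimensions reduce to a simple aggregation. A sketch, assuming unweighted 0-1 scores (the extension's actual scoring weights are not documented here):

```python
# The five dimensions tracked by the dashboard; equal weighting is an assumption.
DIMENSIONS = ("completeness", "clarity", "engagement",
              "factual_accuracy", "groundedness")

def iteration_summary(case_scores):
    """Average per-dimension scores across cases and compute an overall mean.

    `case_scores` is a list of dicts mapping each dimension to a 0-1 score.
    The per-dimension means are what a radar chart would plot, one axis per
    dimension; the overall mean is a single score-card number.
    """
    means = {d: sum(c[d] for c in case_scores) / len(case_scores)
             for d in DIMENSIONS}
    overall = sum(means.values()) / len(DIMENSIONS)
    return means, overall
```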

Bad Case Browser

Explore low-scoring evaluation cases with filtering and clustering. Drill into failure patterns to understand where prompts fall short.
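Filter-and-cluster over bad cases can be approximated as grouping below-threshold cases by a failure tag. The `tag` field is an assumption for illustration; the extension's clustering is likely richer:

```python
from collections import defaultdict

def bad_case_clusters(cases, threshold=0.6):
    """Group low-scoring cases by failure tag, largest clusters first.

    Each case is a dict with a 'score' and a 'tag' naming its failure mode
    (both field names are illustrative, not AutoPromptLab's schema).
    """
    clusters = defaultdict(list)
    for case in cases:
        if case["score"] < threshold:
            clusters[case["tag"]].append(case)
    # Sort so the dominant failure patterns surface first.
    return dict(sorted(clusters.items(), key=lambda kv: -len(kv[1])))
```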

Prompt Diff Viewer

Compare prompt changes across optimization iterations with side-by-side diffs. Review exactly what the optimizer modified and why.
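Diffing two prompt versions is standard-library territory. A sketch using `difflib` to produce a unified diff between iterations (the extension renders this side by side; the labels here are illustrative):

```python
import difflib

def prompt_diff(old_prompt, new_prompt):
    """Return a unified diff between two prompt versions."""
    return "\n".join(difflib.unified_diff(
        old_prompt.splitlines(), new_prompt.splitlines(),
        fromfile="iteration_1", tofile="iteration_2", lineterm=""))
```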

Copilot Integration

Use VS Code's built-in Copilot API as the LLM backend — no separate API keys needed. The extension proxies LLM requests through Copilot's language model API with automatic retry and rate limiting.
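The retry behavior mentioned above usually means exponential backoff around each model request. A simplified stand-in for what the proxy does, not its actual implementation:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Call `fn`, retrying with exponential backoff on failure.

    A simplified sketch of retry/rate-limit handling; the Copilot proxy's
    real policy (which errors it retries, its delays) is not documented here.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```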

Multiple LLM Backends

Choose from Azure OpenAI, Papyrus, local LLM services, or VS Code Copilot as the model backend for optimization agents.

Getting Started

  1. Install the extension
  2. Click the AutoPromptLab icon in the Activity Bar
  3. The backend server starts automatically
  4. Create a new session and configure your optimization parameters
  5. Start the session to begin the optimization loop
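Step 4's session configuration amounts to a small settings payload. A sketch with hypothetical field names (the extension's actual session schema is not documented here); the backend names match the options listed under Multiple LLM Backends:

```python
# Hypothetical session payload - field names are illustrative, not the
# extension's actual REST schema.
VALID_BACKENDS = {"azure_openai", "papyrus", "local", "copilot"}

def build_session_config(name, backend="copilot", max_iterations=10,
                         target_score=0.85):
    """Validate the backend choice and assemble a session config dict."""
    if backend not in VALID_BACKENDS:
        raise ValueError(f"unknown backend: {backend}")
    return {
        "name": name,
        "backend": backend,
        "max_iterations": max_iterations,
        "target_score": target_score,
    }
```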

Commands

Command                        Description
AutoPromptLab: Start Server    Start the backend server
AutoPromptLab: Stop Server     Stop the backend server
AutoPromptLab: Restart Server  Restart the backend server
AutoPromptLab: Open Sidebar    Focus the AutoPromptLab sidebar

Extension Settings

Setting                              Default                Description
autopromptlab.serverMode             auto                   auto starts a local server; manual connects to an existing one
autopromptlab.serverUrl              http://localhost:8080  Backend URL (manual mode)
autopromptlab.serverPort             8237                   Backend port (auto mode)
autopromptlab.braintrustApiKey       (none)                 Braintrust API key for non-pme org
autopromptlab.braintrustAzureApiKey  (none)                 Braintrust API key for bt-azure org
autopromptlab.picassoPath            (none)                 Path to Picasso repository
autopromptlab.picassoSessionRoot     (none)                 Root for session Picasso clones
autopromptlab.aipaEvalPath           (none)                 Path to AIPA evals repository
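For example, connecting to an already-running backend uses the manual-mode settings above in settings.json (the path value here is a placeholder, not a real default):

```json
{
  "autopromptlab.serverMode": "manual",
  "autopromptlab.serverUrl": "http://localhost:8080",
  "autopromptlab.aipaEvalPath": "/path/to/aipa-evals"
}
```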

Requirements

  • Python 3.12+ with uv package manager
  • Active Azure credentials (for Azure OpenAI backend) or VS Code Copilot access

Architecture

VS Code Extension
├── Sidebar Webview (React + Zustand)
├── Server Manager (FastAPI lifecycle)
├── Copilot Proxy (LLM request forwarding)
└── Status Bar (connection state)

FastAPI Backend
├── Orchestrator (point-to-point agent routing)
├── 11 AutoGen Agents (evaluate → analyze → optimize)
├── WebSocket (real-time event streaming)
└── REST API (session CRUD, metrics, prompts)
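Point-to-point routing, as in the orchestrator above, means each agent names its successor and the orchestrator relays messages until no next hop remains. A sketch with three toy agents standing in for the evaluate → analyze → optimize chain (agent names and message shapes are assumptions, not AutoPromptLab internals):

```python
def run_pipeline(agents, start, payload):
    """Relay `payload` agent to agent, following each agent's routing choice.

    Each agent maps payload -> (new_payload, next_agent_name_or_None).
    """
    trace = [start]
    current = start
    while current is not None:
        payload, current = agents[current](payload)
        if current is not None:
            trace.append(current)
    return payload, trace

# Toy agents illustrating the evaluate -> analyze -> optimize hand-off.
agents = {
    "evaluate": lambda p: ({**p, "score": 0.4}, "analyze"),
    "analyze":  lambda p: ({**p, "finding": "too vague"}, "optimize"),
    "optimize": lambda p: ({**p, "prompt": p["prompt"] + " Be specific."}, None),
}
```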

License

MIT
