Agent Instructor for VS Code

A powerful VS Code extension designed to help developers create, analyze, and refine instructions for declarative AI agents.

Features

1. Analyze Instructions

Evaluates existing agent instructions for clarity and potential improvements:

  • Generates a clarity score (0-100)
  • Identifies ambiguous phrases
  • Provides specific improvement suggestions
  • Offers one-click corrections
  • Visualizes analysis results with interactive charts
  • Displays results in a responsive, single-screen layout
  • Scrollable corrections table with sticky headers


2. Generate Instructions

Creates comprehensive instructions for new agents:

  • Interactive agent description input
  • AI-powered instruction generation
  • Automatic formatting and organization
  • Maintains existing content when adding new instructions
  • Supports multiple instruction sets per file


3. Multi-Platform Support

Works seamlessly with multiple AI platforms:

  • OpenAI: Direct integration with the OpenAI API
  • Azure OpenAI (Traditional): Classic chat completions endpoint
  • Azure OpenAI (Responses API): New endpoint supporting GPT-5-mini and newer models
  • Automatic endpoint detection and format handling
  • Intelligent request/response parsing for different API versions (see the sketch after this list)
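
To illustrate what the format handling involves, here is a rough sketch of the two request shapes (simplified, not the extension's exact payloads). The traditional chat completions endpoint expects a messages array and max_tokens; the Responses API takes an input field, uses max_output_tokens, and carries the deployment in a model field (which is presumably where the deploymentName setting ends up):

Chat Completions request body (simplified):

{
    "messages": [
        { "role": "user", "content": "Analyze these agent instructions ..." }
    ],
    "max_tokens": 1000
}

Responses API request body (simplified):

{
    "model": "gpt-5-mini",
    "input": "Analyze these agent instructions ...",
    "max_output_tokens": 1000
}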

Installation

  1. Open VS Code
  2. Go to Extensions (Ctrl+Shift+X)
  3. Search for "Agent Instructor"
  4. Click Install

Configuration

Configure the extension in VS Code settings (File > Preferences > Settings > Extensions > Agent Instructor):

Setting        | Description                                | Default
apiKey         | Your LLM service API key                   | ""
endpointType   | Choose "openai" or "azure"                 | "openai"
endpointUrl    | LLM API endpoint URL                       | ""
model          | Model name (OpenAI only)                   | "gpt-4"
deploymentName | Deployment name (Azure Responses API only) | ""
maxTokens      | Maximum tokens in the response (1-4096)    | 1000


Endpoint Configuration Examples

OpenAI

{
    "agentInstructor.endpointType": "openai",
    "agentInstructor.endpointUrl": "",
    "agentInstructor.apiKey": "your-openai-api-key",
    "agentInstructor.model": "gpt-4"
}

Azure OpenAI (Traditional Chat Completions)

{
    "agentInstructor.endpointType": "azure",
    "agentInstructor.endpointUrl": "https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview",
    "agentInstructor.apiKey": "your-azure-api-key"
}

Azure OpenAI (New Responses API - GPT-5-mini)

{
    "agentInstructor.endpointType": "azure",
    "agentInstructor.endpointUrl": "https://your-resource.openai.azure.com/openai/responses?api-version=2025-04-01-preview",
    "agentInstructor.apiKey": "your-azure-api-key",
    "agentInstructor.deploymentName": "gpt-5-mini"
}

Usage

Analyzing Instructions

  1. Open your instruction.txt file (a sample file is shown after these steps)
  2. Open the Command Palette (Ctrl+Shift+P)
  3. Select "Agent Instructor: Analyze Instructions"
  4. Review the analysis in the sidebar:
    • Clarity Score
    • Identified Issues
    • Suggested Improvements
  5. Click "Apply Correction" to implement suggestions
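
For illustration only, a small instruction.txt might look like this (the content below is invented for this example):

    You are a support agent for the Contoso billing portal.
    Always ask for the customer's account ID before discussing charges.
    Handle things quickly and keep answers fairly short.

Deliberately vague wording such as "handle things quickly" and "fairly short" is the kind of phrasing the analysis is designed to flag as ambiguous.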

Generating Instructions

  1. Create or open an instruction.txt file
  2. Open the Command Palette (Ctrl+Shift+P)
  3. Select "Agent Instructor: Generate Instructions"
  4. Enter your agent description (an example is shown after these steps)
  5. Review and edit generated instructions
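
For example, the agent description entered at step 4 could be as simple as (invented for illustration):

    An agent that triages incoming GitHub issues for a TypeScript project, labels them by component, and drafts a polite first reply asking for reproduction steps.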

Best Practices

  • Keep instruction files named instruction.txt
  • Use clear, specific agent descriptions when generating
  • Review and customize generated instructions
  • Regularly analyze existing instructions for clarity
  • Apply suggested improvements selectively based on your needs
  • For Azure OpenAI (a combined settings example follows this list):
    • Use traditional endpoints for GPT-4 and earlier models
    • Use Responses API endpoint for GPT-5-mini and newer models
    • Set appropriate deploymentName when using Responses API
  • Increase maxTokens to 3000-4000 for comprehensive analysis results
  • Check the Developer Console for detailed error information if issues occur
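
Putting the Azure recommendations together, a settings block along these lines covers the Responses API case (values are placeholders; agentInstructor.maxTokens is assumed to follow the same prefix as the other settings):

{
    "agentInstructor.endpointType": "azure",
    "agentInstructor.endpointUrl": "https://your-resource.openai.azure.com/openai/responses?api-version=2025-04-01-preview",
    "agentInstructor.apiKey": "your-azure-api-key",
    "agentInstructor.deploymentName": "gpt-5-mini",
    "agentInstructor.maxTokens": 3000
}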

Troubleshooting

Common issues and solutions:

  1. API Connection Failed

    • Verify API key is correct
    • Check endpoint URL format (see configuration examples above)
    • Ensure internet connectivity
    • For Azure, verify deployment name matches your resource
  2. Invalid File Type

    • Ensure file is named instruction.txt
    • Open file in editor before running commands
  3. Generation/Analysis Timeout

    • Try increasing maxTokens setting (recommended: 3000-4000 for complete responses)
    • Check internet connection stability
    • For Azure Responses API, ensure deployment name is configured
  4. Empty or Incomplete Responses

    • Increase maxTokens setting (GPT-5-mini may need 3000+)
    • Check Developer Console (Help > Toggle Developer Tools) for detailed error logs
    • Verify endpoint URL includes correct API version
  5. Azure Responses API Issues

    • Ensure you're using the correct endpoint format: /openai/responses?api-version=2025-04-01-preview
    • Set the deploymentName configuration to your deployment name
    • Note: The Responses API uses different parameters than traditional endpoints (see the sketch after this list)
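
For reference, the two response shapes also differ, which is why the parsing is endpoint-specific. A simplified sketch (not the extension's exact handling): chat completions returns the text under choices[0].message.content, while the Responses API nests it inside an output array.

Chat Completions response (simplified):

{
    "choices": [
        { "message": { "role": "assistant", "content": "..." } }
    ]
}

Responses API response (simplified):

{
    "output": [
        {
            "type": "message",
            "content": [
                { "type": "output_text", "text": "..." }
            ]
        }
    ]
}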

Development

Building from Source

git clone https://github.com/stephanbisser/agent-instructor.git
cd agent-instructor
npm install
npm run compile
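
To try a local build, open the folder in VS Code and press F5 to launch an Extension Development Host (assuming the repository ships the usual launch configuration). To produce an installable package with the standard tooling:

npm install -g @vscode/vsce
vsce package

vsce package creates a .vsix file that can be installed from the Extensions view via "Install from VSIX...".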

Running Tests

npm run test

Release Notes

0.1.0

  • NEW: Support for Azure OpenAI Responses API (GPT-5-mini compatible)
  • NEW: deploymentName configuration for Azure deployments
  • NEW: model configuration for OpenAI endpoints
  • IMPROVED: Responsive UI layout that fits on one screen
  • IMPROVED: Scrollable corrections table with sticky headers
  • IMPROVED: Enhanced error handling with detailed messages
  • IMPROVED: Automatic detection of endpoint types and formats
  • FIXED: Layout overflow issues
  • FIXED: JSON parsing for different API response formats

0.0.9

  • Enhanced maxTokens configuration
  • Improved error handling
  • Enhanced UI responsiveness

0.0.6

  • Added maxTokens configuration
  • Improved error handling
  • Enhanced UI responsiveness

0.0.1

  • Initial preview release
  • Basic analysis features
  • Instruction generation support

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Submit a pull request (a typical flow is sketched below)
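
A typical flow looks like this (the username and branch name are placeholders):

git clone https://github.com/<your-username>/agent-instructor.git
cd agent-instructor
git checkout -b feature/my-change
# make and commit your changes
git commit -am "Describe the change"
git push origin feature/my-change

Then open a pull request from your branch against stephanbisser/agent-instructor.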

License

This project is licensed under the MIT License.

Support

For issues and feature requests, please use the GitHub Issues page.
