| Setting | Description | Default |
|---|---|---|
| `apiKey` | Your LLM service API key | `""` |
| `endpointType` | Choose `openai` or `azure` | `"openai"` |
| `endpointUrl` | LLM API endpoint URL | `""` |
| `model` | Model name (OpenAI only) | `"gpt-4"` |
| `deploymentName` | Deployment name (Azure Responses API only) | `""` |
| `maxTokens` | Maximum tokens in response (1-4096) | `1000` |

### Endpoint Configuration Examples

#### OpenAI

```json
{
  "agentInstructor.endpointType": "openai",
  "agentInstructor.endpointUrl": "",
  "agentInstructor.apiKey": "your-openai-api-key",
  "agentInstructor.model": "gpt-4"
}
```

#### Azure OpenAI (Traditional Chat Completions)

```json
{
  "agentInstructor.endpointType": "azure",
  "agentInstructor.endpointUrl": "https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview",
  "agentInstructor.apiKey": "your-azure-api-key"
}
```

#### Azure OpenAI (New Responses API - GPT-5-mini)

```json
{
  "agentInstructor.endpointType": "azure",
  "agentInstructor.endpointUrl": "https://your-resource.openai.azure.com/openai/responses?api-version=2025-04-01-preview",
  "agentInstructor.apiKey": "your-azure-api-key",
  "agentInstructor.deploymentName": "gpt-5-mini"
}
```
## Usage

### Analyzing Instructions

- Open your `instruction.txt` file (an example file is shown below the steps)
- Open the Command Palette (Ctrl+Shift+P)
- Select "Agent Instructor: Analyze Instructions"
- Review the analysis in the sidebar:
  - Clarity Score
  - Identified Issues
  - Suggested Improvements
- Click "Apply Correction" to implement suggestions
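For reference, a minimal `instruction.txt` could look like the following. The content is purely illustrative and not a template shipped with the extension:

```text
You are a customer support agent for Contoso.
Answer questions about order status, returns, and shipping times.
If you are unsure about an answer, ask the user for their order number and escalate to a human agent.
```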
### Generating Instructions

- Create or open an `instruction.txt` file
- Open the Command Palette (Ctrl+Shift+P)
- Select "Agent Instructor: Generate Instructions"
- Enter your agent description (see the example below the steps)
- Review and edit the generated instructions
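The description can be a single sentence or a short paragraph. An illustrative example:

```text
An agent that helps employees submit expense reports and answers questions about the company travel policy.
```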
## Best Practices

- Keep instruction files named `instruction.txt`
- Use clear, specific agent descriptions when generating
- Review and customize generated instructions
- Regularly analyze existing instructions for clarity
- Apply suggested improvements selectively based on your needs
- For Azure OpenAI:
  - Use traditional endpoints for GPT-4 and earlier models
  - Use the Responses API endpoint for GPT-5-mini and newer models
  - Set an appropriate `deploymentName` when using the Responses API
- Increase `maxTokens` to 3000-4000 for comprehensive analysis results (see the example after this list)
- Check the Developer Console for detailed error information if issues occur
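For example, assuming `maxTokens` lives under the same `agentInstructor.` prefix as the endpoint settings above, you can raise the limit in your `settings.json`:

```json
{
  "agentInstructor.maxTokens": 4000
}
```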
## Troubleshooting

Common issues and solutions:

### API Connection Failed

- Verify the API key is correct
- Check the endpoint URL format (see the configuration examples above)
- Ensure internet connectivity
- For Azure, verify the deployment name matches your resource

### Invalid File Type

- Ensure the file is named `instruction.txt`
- Open the file in the editor before running commands

### Generation/Analysis Timeout

- Try increasing the `maxTokens` setting (recommended: 3000-4000 for complete responses)
- Check internet connection stability
- For the Azure Responses API, ensure the deployment name is configured

### Empty or Incomplete Responses

- Increase the `maxTokens` setting (GPT-5-mini may need 3000+)
- Check the Developer Console (Help > Toggle Developer Tools) for detailed error logs
- Verify the endpoint URL includes the correct API version

### Azure Responses API Issues

- Ensure you're using the correct endpoint format: `/openai/responses?api-version=2025-04-01-preview`
- Set the `deploymentName` configuration to your deployment name
- Note: the Responses API uses different request parameters than traditional endpoints (see the sketch after this list)
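As a rough illustration of that difference, based on the public Chat Completions and Responses API shapes rather than this extension's source: Chat Completions requests carry a `messages` array and `max_tokens`, while Responses API requests typically send a flat `input` plus `max_output_tokens`, with the deployment passed as `model`. A sketch of a Responses API request body:

```json
{
  "model": "gpt-5-mini",
  "input": "Analyze the following agent instructions ...",
  "max_output_tokens": 3000
}
```

This is only for reference when debugging; the extension builds these requests for you.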
## Development

### Building from Source

```bash
git clone https://github.com/stephanbisser/agent-instructor.git
cd agent-instructor
npm install
npm run compile
```

### Running Tests

```bash
npm run test
```
## Release Notes

### 0.1.0

- NEW: Support for the Azure OpenAI Responses API (GPT-5-mini compatible)
- NEW: `deploymentName` configuration for Azure deployments
- NEW: `model` configuration for OpenAI endpoints
- IMPROVED: Responsive UI layout that fits on one screen
- IMPROVED: Scrollable corrections table with sticky headers
- IMPROVED: Enhanced error handling with detailed messages
- IMPROVED: Automatic detection of endpoint types and formats
- FIXED: Layout overflow issues
- FIXED: JSON parsing for different API response formats

### 0.0.9

- Enhanced `maxTokens` configuration
- Improved error handling
- Enhanced UI responsiveness

### 0.0.6

- Added `maxTokens` configuration
- Improved error handling
- Enhanced UI responsiveness

### 0.0.1

- Initial preview release
- Basic analysis features
- Instruction generation support
## Contributing

- Fork the repository
- Create a feature branch
- Submit a pull request

## License

This project is licensed under the MIT License.

## Support

For issues and feature requests, please use the GitHub Issues page.

