Copilot Context Optimizer
A VS Code extension that provides specialized tools for GitHub Copilot to efficiently work with files and terminal commands without overwhelming chat context with large amounts of data.
🌐 Looking for universal AI assistant compatibility? Check out our MCP Server version that works with GitHub Copilot, Cursor AI, Claude Desktop, and other MCP-compatible assistants!
🎯 The Problem This Extension Solves
Have you ever experienced this with GitHub Copilot?
- 🔄 Copilot keeps summarizing conversations instead of helping with your actual questions
- 📄 Large files overwhelm the chat when you just need to check one specific thing
- 🖥️ Terminal outputs flood the context with hundreds of lines when you only need key information
- ⚠️ "Context limit reached" messages interrupting your workflow
- 🧠 Copilot "forgets" earlier parts of your conversation due to context overflow
The Root Cause: Copilot's chat context has limited space. When you paste large files, dump terminal outputs, or hold long conversations, Copilot runs out of context space and then:
- Starts summarizing/truncating the conversation history
- Loses track of earlier context and decisions
- Becomes less helpful as it focuses on managing context rather than solving your problems
The Solution: This extension provides Copilot with specialized tools that extract only the specific information you need, keeping your chat context clean and focused on productive problem-solving rather than data management.
🚀 Features
- Seamless Copilot Integration: All tools appear as built-in tools in GitHub Copilot Chat
- Context-Efficient Operations: Get specific information without loading entire files or command outputs into chat
- Multi-Format File Support: Works with text files, code files, images, PDFs, and more
- Intelligent Terminal Processing: Execute commands and extract only relevant information
- Multi-LLM Support: Choose between Google Gemini, Claude (Anthropic), or OpenAI models
- Smart Configuration: Guided first-time setup with provider and model selection
- Conversational Follow-ups: Ask follow-up questions about terminal executions
- Robust Architecture: Clean, modular codebase with comprehensive error handling and validation
- Progress Reporting: Real-time feedback during long-running operations
- Secure Configuration: Safe API key storage with VS Code's secret management
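To make the integration concrete, here is a minimal, illustrative sketch of how a tool like "Ask About File" could be surfaced to Copilot via the Language Model Tools API. The tool name, input shape, and `analyzeFile` helper are hypothetical stand-ins, not the extension's actual identifiers:

```typescript
import * as vscode from 'vscode';

// Hypothetical helper standing in for the extension's file-analysis pipeline.
declare function analyzeFile(filePath: string, question: string): Promise<string>;

export function activate(context: vscode.ExtensionContext) {
  // Register the tool so GitHub Copilot Chat can invoke it on demand.
  context.subscriptions.push(
    vscode.lm.registerTool<{ filePath: string; question: string }>(
      'contextOptimizer_askAboutFile',
      {
        async invoke(options, _token) {
          // Only the focused answer is returned to chat, never the whole file.
          const answer = await analyzeFile(options.input.filePath, options.input.question);
          return new vscode.LanguageModelToolResult([
            new vscode.LanguageModelTextPart(answer),
          ]);
        },
      }
    )
  );
}
```

In practice a tool registered this way also has to be declared under the `languageModelTools` contribution point in `package.json` so Copilot can discover it.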
🌐 Universal Compatibility: MCP Server Version Available
Want to use these tools with other AI assistants and IDEs?
We also provide a Context Optimizer MCP Server that brings the same powerful context optimization functionality to any AI assistant that supports the Model Context Protocol (MCP).
Supported AI Assistants & IDEs
- GitHub Copilot (VS Code with native MCP support)
- Cursor AI
- Claude Desktop
- Other MCP-compatible assistants
Key Benefits of the MCP Server Version
- Universal Compatibility: Works across different AI assistants and development environments
- Same Powerful Tools: File analysis, terminal execution, follow-up questions, and research tools
- Simple Configuration: Environment variable-based setup with no config files to manage
- Production Ready: Comprehensive testing and security controls
🔗 Get Started: Context Optimizer MCP Server Repository
Choose the VS Code extension for the best GitHub Copilot experience, or use the MCP server for universal compatibility across different AI assistants.
🔍 Ask About File
Extract specific information from files without reading their entire contents into chat context.
When Copilot Uses This Tool:
- Large files that would overwhelm chat context
- Specific queries about code functions, classes, or patterns
- Documentation extraction from specs or README files
- Image analysis or binary file inspection
- Quick checks for specific content without full file dumps
Example Use Cases:
- "Does the UserProfile.tsx file have a validateEmail function?"
- "What's the main purpose described in the API spec document?"
- "Get all import statements from utils.js"
- "What does this error screenshot show?"
🔧 Run Command & Extract Data
Execute terminal commands and intelligently extract specific information using natural language prompts. This tool runs commands using VS Code's Shell Integration API and processes the output with LLM analysis to provide focused, relevant results.
When Copilot Uses This Tool:
- Commands that produce verbose output requiring analysis
- Build outputs, logs, or diagnostic information processing
- Package management and dependency analysis
- Git history and repository analysis
- System information gathering
Example Use Cases:
- "Check what npm packages are outdated in this project"
- "Find any errors in the last git commits"
- "Analyze the build output for warnings"
- "Get a summary of disk usage in the project directory"
💬 Ask Follow-up Question
Continue conversations about previous terminal command executions without re-running commands.
When Available:
- Only after using "Run Command & Extract Data"
- Maintains conversation context from the specific terminal execution
- Allows deeper analysis of the same results
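For illustration, a minimal session cache along these lines, assuming the 30-minute expiry described under Technical Details (all names here are hypothetical):

```typescript
// Cached context from a previous "Run Command & Extract Data" execution.
interface TerminalSession {
  command: string;
  output: string;
  createdAt: number;
}

const SESSION_TTL_MS = 30 * 60 * 1000; // 30-minute expiry
const sessions = new Map<string, TerminalSession>();

function getSession(id: string): TerminalSession | undefined {
  const session = sessions.get(id);
  if (!session) return undefined;
  if (Date.now() - session.createdAt > SESSION_TTL_MS) {
    sessions.delete(id); // expired sessions are cleaned up lazily
    return undefined;
  }
  return session; // follow-up questions reuse this without re-running the command
}
```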
🔍 Research Topic
Conducts quick, focused web research using Exa.ai's powerful research capabilities.
When Copilot Uses This Tool:
- Software development topics requiring current information
- Framework comparisons and best practices
- API documentation and implementation guidance
- Technology trends and adoption patterns
- Quick technical clarifications
Example Use Cases:
- "What's the current best practice for React state management?"
- "Compare different Node.js testing frameworks"
- "How to implement OAuth2 with JWT tokens?"
🔬 Deep Research
Conducts comprehensive, in-depth research using Exa.ai's exhaustive analysis capabilities.
When Copilot Uses This Tool:
- Complex architectural decisions requiring thorough analysis
- Technology migration planning
- Security assessment and vulnerability research
- Performance optimization strategies
- Strategic technology adoption planning
Example Use Cases:
- "Comprehensive analysis of microservices vs monolith for e-commerce platforms"
- "In-depth security considerations for implementing payment processing"
- "Performance implications of different database choices for high-traffic applications"
Both research tools require an Exa.ai API key (get one at https://dashboard.exa.ai/api-keys)
🛠️ Installation
Prerequisites
- VS Code 1.101.0 or higher (required for Language Model Tools API)
- GitHub Copilot extension enabled
- API key for at least one supported LLM provider
Install the Extension
1. Download the latest `.vsix` file from releases
2. In VS Code: `Ctrl+Shift+P` → "Extensions: Install from VSIX"
3. Select the downloaded file
4. Reload VS Code
First-Time Setup
On first use, you'll be guided through:
- Provider Selection: Choose Gemini (free tier available), Claude, or OpenAI (the latter two are paid)
- Model Selection: Confirm default model or specify custom model
- API Key Entry: Secure input with automatic opening of provider's API key page
💬 Usage in Copilot Chat
All tools are used automatically by GitHub Copilot when you need their functionality. Simply ask Copilot natural language questions, and it will choose the appropriate tool.
File Analysis Examples
You: "Does the config file have any database settings?"
Copilot: Uses Ask About File tool with the config file and your specific question
You: "What are the main components exported from the utils folder?"
Copilot: Uses Ask About File tool to analyze relevant files in the utils folder
Terminal Processing Examples
You: "Show me what packages need updating"
Copilot: Uses Run Command & Extract Data with `npm outdated` and an extraction prompt for update information
You: "Are there any recent commits that fixed bugs?"
Copilot: Uses Run Command & Extract Data with `git log` and searches for bug-related commits
Follow-up Conversations
After running a terminal command, you can ask follow-up questions:
You: "What about security vulnerabilities in those packages?"
Copilot: Uses Ask Follow-up Question to analyze the previous npm output for security issues
Research Examples
You: "Research the current state of AI regulation in the European Union"
Copilot: Uses Research Topic to conduct focused web research on EU AI regulation
You: "What are the latest performance improvements in Next.js 15?"
Copilot: Uses Research Topic to find and analyze current information about Next.js 15 features
⚙️ Configuration
Settings are found under Copilot Context Optimizer in VS Code Settings:
| Setting | Description | Default |
|---------|-------------|---------|
| `llmProvider` | AI provider (gemini/claude/openai) for file analysis and terminal processing | `""` (prompts on first use) |
| `geminiKey` | Google Gemini API key | `""` |
| `claudeKey` | Claude (Anthropic) API key | `""` |
| `openaiKey` | OpenAI API key | `""` |
| `exaKey` | Exa.ai API key for research tools | `""` |
| `modelName` | Model to use for processing | Auto-selected based on provider |
| `showProgressNotifications` | Show progress notifications (status bar progress is always shown) | `false` |
To use the Research Topic and Deep Research tools:
1. Get an Exa.ai API Key: Visit the Exa.ai Dashboard
2. Configure the Extension:
   - Open VS Code Settings (`Ctrl+,`)
   - Search for "Copilot Context Optimizer"
   - Enter your Exa.ai API key in the `exaKey` field
3. Start Researching: The tools will automatically be available to GitHub Copilot
Note: Research tools use Exa.ai exclusively and don't require the main LLM provider configuration.
Default Models
When `modelName` is not specified, the following default models are used:

| Provider | Default Model | Description |
|----------|---------------|-------------|
| Gemini | `gemini-2.5-flash` | Fast, efficient model with a generous free tier |
| Claude | `claude-3-5-sonnet-20241022` | High-quality reasoning and analysis |
| OpenAI | `gpt-4o-mini` | Cost-effective GPT-4 variant |
LLM Processing Parameters
All providers use consistent parameters for reliable, focused responses:
- Temperature: 0.1 (low randomness for consistent results)
- Max Tokens: 2000 (balancing detail with context efficiency)
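As an illustration, a request builder applying these shared parameters might look like the following sketch; the `LLMRequest` shape is hypothetical, not the extension's actual type:

```typescript
// Hypothetical request shape shared across all provider implementations.
interface LLMRequest {
  model: string;
  prompt: string;
  temperature: number;
  maxTokens: number;
}

function buildRequest(model: string, prompt: string): LLMRequest {
  return {
    model,
    prompt,
    temperature: 0.1, // low randomness for consistent, reproducible extraction
    maxTokens: 2000,  // cap response size to keep chat context lean
  };
}
```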
Getting API Keys
Google Gemini (Recommended - Free Tier Available):
- Visit Google AI Studio
- Create a new API key
- Free tier includes generous usage limits
Claude (Anthropic):
- Visit the Anthropic Console
- Create a new API key
- High-quality responses with excellent reasoning capabilities
OpenAI:
- Visit the OpenAI Platform
- Create a new API key
- Pay-as-you-go pricing with cost-effective models such as gpt-4o-mini
📁 Supported File Types
Text-Based Files
- Source code files (`.js`, `.ts`, `.py`, `.java`, `.cpp`, etc.)
- Configuration files (`.json`, `.yaml`, `.toml`, `.ini`, etc.)
- Documentation (`.md`, `.txt`, `.rst`, etc.)
- Web files (`.html`, `.css`, `.xml`, etc.)
- File Size Limit: 1MB maximum per file to maintain optimal LLM context usage (a minimal size-check sketch follows this list)
Binary & Media Files
- Images (`.png`, `.jpg`, `.gif`, `.svg`, etc.)
- PDF documents
- Office documents (limited support)
- Other formats supported by your chosen LLM provider
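A minimal sketch of the 1MB size gate mentioned above, assuming Node's `fs/promises` API; the extension's actual enforcement lives in its FileUtils module:

```typescript
import * as fs from 'fs/promises';

const MAX_FILE_SIZE = 1024 * 1024; // 1MB cap to keep LLM context usage sane

async function assertWithinSizeLimit(filePath: string): Promise<void> {
  const stats = await fs.stat(filePath);
  if (stats.size > MAX_FILE_SIZE) {
    throw new Error(`File exceeds the 1MB analysis limit: ${filePath}`);
  }
}
```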
🎯 Use Cases
Development Workflows
Code Analysis:
- Check for specific functions or classes without reading entire files
- Extract API endpoints from route files
- Analyze configuration files for specific settings
- Review test files for coverage of specific functionality
Debugging & Troubleshooting:
- Analyze error logs without overwhelming chat context
- Check build outputs for specific warnings or errors
- Review git history for bug-related changes
- Examine stack traces and error messages
Project Management:
- Analyze package dependencies for updates or vulnerabilities
- Check project structure and organization
- Review documentation for completeness
- Audit code for specific patterns or compliance
System Administration
Monitoring & Analysis:
- Process system logs for specific events
- Analyze disk usage and system resources
- Review network configurations
- Monitor application performance metrics
Maintenance Tasks:
- Check for system updates and package status
- Analyze backup logs and status
- Review security audit results
- Monitor service health and availability
⚠️ Terminal Command Failures
When a terminal command fails (non-zero exit code or abrupt exit), the extension disposes the terminal immediately and always returns a result to Copilot with a clear failure indicator. This minimizes VS Code system notifications for abnormal termination and ensures Copilot receives a proper tool output for all command executions.
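A hedged sketch of that behavior, with hypothetical names:

```typescript
// Even on failure, a structured result reaches Copilot rather than a dangling
// terminal or a silent exception.
function buildCommandResult(exitCode: number | undefined, output: string): string {
  if (exitCode !== undefined && exitCode !== 0) {
    return `❌ Command failed (exit code ${exitCode}).\nCaptured output:\n${output}`;
  }
  return output;
}
```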
🧪 Development Status
Current Status: ✅ STABLE AND MAINTAINED
This extension is stable with fully functional tools that are actively maintained. New tools and enhancements continue to be added regularly.
✅ Completed Core Features
- Five Language Model Tools: FileAnalysisTool, TerminalExtractorTool, TerminalFollowUpTool, ResearchTopicTool, and DeepResearchTool
- Dual Research System: Quick research and comprehensive deep research tools with Exa.ai integration
- Multi-LLM Provider Support: Complete implementation for Google Gemini, Claude, OpenAI, and Exa.ai
- Advanced Terminal Integration: VS Code Shell Integration API with intelligent fallback mechanisms
- Comprehensive Security: Input validation, path protection, and secure configuration management
- Robust Architecture: Manager classes, error handling, progress reporting, and resource management
- Extensive Testing: 34+ test files covering functionality, security, performance, and edge cases
🔄 Ongoing Development
- Stable Foundation: Core functionality is stable and production-ready
- Active Maintenance: Regular updates for compatibility and performance improvements
- Feature Enhancement: New tools and capabilities added based on user feedback
- Security Updates: Continuous security monitoring and improvement
🔧 Technical Details
Context Optimization Strategy
This extension helps Copilot work more efficiently by:
- File Analysis: Instead of loading entire files (which can be thousands of lines) into chat context, only the specific answer to your question is returned
- Terminal Processing: Instead of dumping raw command output (which can be hundreds of lines), only the extracted relevant information is provided using LLM analysis
- Session Management: Follow-up questions reuse previous context without re-executing commands
Terminal Integration Details
- VS Code Shell Integration: Uses VS Code's Shell Integration API for reliable command execution with real output capture
- Intelligent Fallback: Automatically falls back to standard terminal.sendText() when shell integration unavailable
- Cross-Platform Support: Works with any shell type supported by VS Code (PowerShell, CMD, Bash, Zsh, etc.)
- Timeout Management: Configurable timeouts for shell integration detection and command execution
- Resource Lifecycle: Proper terminal creation, management, and disposal with event handling
- Session Management: Terminal sessions with automatic expiration (30 minutes) and conversation context
- Progress Reporting: Real-time progress updates with VS Code's progress API integration
Configuration & Security Behavior
- Dynamic Provider Switching: Model name automatically resets when switching LLM providers
- Startup Validation: Configuration validated on extension startup to catch issues early
- Guided Setup: First-time users guided through provider selection and API key setup
- Secure Storage: API keys stored securely in VS Code's secret management system
- Error Context Preservation: Maintains error context while sanitizing sensitive information
- Input Sanitization: All file paths and commands validated against security threats
Security & Privacy
Comprehensive Security Measures:
- Path Traversal Protection: Prevents access to files outside the workspace with multiple validation layers (see the sketch after this list)
- Command Injection Prevention: Input validation against dangerous terminal commands and shell metacharacters
- API Key Security: Secure storage in VS Code's secret management with sanitized error messages
- Input Sanitization: All user inputs validated against injection attempts and malicious content
- Resource Isolation: Proper session management preventing cross-contamination of data
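A minimal sketch of workspace-scoped path validation, assuming a single workspace root; the extension's InputValidator and FileUtils apply additional layers beyond this:

```typescript
import * as path from 'path';

// Returns true only if requestedPath resolves to a location inside workspaceRoot.
function isInsideWorkspace(workspaceRoot: string, requestedPath: string): boolean {
  const resolved = path.resolve(workspaceRoot, requestedPath);
  const relative = path.relative(workspaceRoot, resolved);
  // Reject paths that escape the root ("../…") or resolve to another drive.
  return relative !== '' && !relative.startsWith('..') && !path.isAbsolute(relative);
}
```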
Privacy Guarantees:
- API keys are stored securely in VS Code's configuration system
- File contents are only sent to your chosen LLM provider when specifically requested
- Terminal outputs are processed locally and only relevant extracted data is retained
- No data is sent to third parties beyond your configured LLM provider
- Session data is automatically cleaned up and expires after 30 minutes of inactivity
🏗️ Architecture & Design
This extension is built with a clean, modular architecture designed for maintainability, security, and extensibility:
Core Architecture Components
Manager Classes:
- TerminalManager: Singleton class providing VS Code shell integration with fallback mechanisms, timeout handling, and comprehensive cancellation support
- ProgressManager: Unified progress reporting system with different UI modes and specialized options for various operations
Security & Validation:
- InputValidator: Comprehensive validation for file paths, terminal commands, and user inputs with injection prevention
- ErrorHandler: Centralized error management with user-friendly messages and context preservation
- FileUtils: Secure file operations with path traversal protection and size limit enforcement
Base Architecture:
- BaseTool: Abstract base class providing common functionality like cancellation checking, result creation, and error handling for all tools
- Provider Abstraction: Unified interface supporting multiple LLM providers through a factory pattern
- Configuration Management: Robust configuration system with validation, guided setup, and secure API key storage
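A condensed sketch of these abstractions; the class and method names are illustrative, not the actual source:

```typescript
import * as vscode from 'vscode';

// Unified provider interface resolved by the factory.
interface LLMProvider {
  complete(prompt: string): Promise<string>;
}

// Abstract base collecting the shared concerns described above.
abstract class BaseTool<TInput> implements vscode.LanguageModelTool<TInput> {
  protected checkCancellation(token: vscode.CancellationToken): void {
    if (token.isCancellationRequested) {
      throw new vscode.CancellationError();
    }
  }

  protected createResult(text: string): vscode.LanguageModelToolResult {
    return new vscode.LanguageModelToolResult([new vscode.LanguageModelTextPart(text)]);
  }

  abstract invoke(
    options: vscode.LanguageModelToolInvocationOptions<TInput>,
    token: vscode.CancellationToken
  ): Promise<vscode.LanguageModelToolResult>;
}

// Factory pattern resolving the configured provider (stubs shown for brevity).
class GeminiProvider implements LLMProvider {
  async complete(_prompt: string): Promise<string> { /* call the Gemini API */ return ''; }
}
class ClaudeProvider implements LLMProvider {
  async complete(_prompt: string): Promise<string> { /* call the Anthropic API */ return ''; }
}
class OpenAIProvider implements LLMProvider {
  async complete(_prompt: string): Promise<string> { /* call the OpenAI API */ return ''; }
}

function createProvider(name: 'gemini' | 'claude' | 'openai'): LLMProvider {
  switch (name) {
    case 'gemini': return new GeminiProvider();
    case 'claude': return new ClaudeProvider();
    case 'openai': return new OpenAIProvider();
  }
}
```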
Security Features
- Path Traversal Protection: Prevents access to files outside the workspace
- Command Injection Prevention: Input validation against dangerous terminal commands
- API Key Security: Secure storage and sanitized error messages to prevent key exposure
- Input Sanitization: Comprehensive validation of all user inputs and file paths
Technical Implementation
- Singleton Patterns: For session and terminal management ensuring resource efficiency
- Cancellation Token Support: Full cancellation support across all async operations
- Shell Integration: VS Code shell integration with intelligent fallback to standard terminal
- Resource Management: Proper cleanup and disposal patterns preventing memory leaks
- Error Resilience: Graceful handling of network issues, API errors, and edge cases
Modular Design
```
src/
├── shared/              # Shared utilities and abstractions
│   ├── providers/       # LLM provider implementations
│   ├── configuration/   # Configuration management system
│   ├── validation.ts    # Input validation utilities
│   └── errorHandling.ts # Centralized error handling
├── tools/               # Language Model Tools implementation
│   ├── base/            # Base tool architecture
│   ├── managers/        # Terminal and progress management
│   └── [tools].ts       # Individual tool implementations
└── utils/               # Utility functions and helpers
    ├── fileUtils.ts     # Secure file operations
    └── errorHandling.ts # Error handling utilities
```
Key Design Principles
- Single Responsibility: Each module has a clear, focused purpose
- Dependency Injection: Tools receive their dependencies rather than creating them
- Error Resilience: Graceful handling of network issues, API errors, and edge cases
- Resource Cleanup: Proper disposal of terminals, timers, and event listeners
- Extensibility: Easy to add new providers, tools, and features
🧪 Development & Testing
The extension is built with modern TypeScript and follows industry best practices:
Code Quality
- TypeScript: Full type safety with strict compiler settings
- ESLint: Comprehensive linting for code consistency
- Modular Architecture: Clean separation of concerns
- Error Handling: Comprehensive error management and user feedback
- Input Validation: Thorough validation of user inputs and file operations
Build System
- esbuild: Fast, modern bundling for optimal performance
- Source Maps: Full debugging support in development
- Type Checking: Continuous type validation during development
- Watch Mode: Automatic rebuilding during development
Development Setup
To run and debug the extension:
1. Clone the repository
2. Install dependencies: `npm install`
3. Press F5 to launch the Extension Development Host
   - This compiles the extension and opens a new VS Code window with the extension loaded
   - The "Run Extension" configuration is the default, which compiles once before running
   - Use "Run Extension (Watch Mode)" for continuous compilation during development
Available Debug Configurations
- Run Extension: Compile once and run the extension
- Run Extension (Watch Mode): Run with automatic recompilation on file changes
- Extension Tests: Run the test suite
Development Commands
- `npm run compile`: Build the extension once
- `npm run watch`: Build and watch for changes
- `npm run test`: Run the complete test suite
- `npm run lint`: Check code style and validation
- `npm run package`: Create a production build
Testing and Validation
- Test Command: Use `Ctrl+Shift+P` → "Test Context Optimizer Tools" to verify extension functionality
- Comprehensive Test Suite: 32 test files covering all aspects of functionality
- Continuous Integration: Automated testing for reliability and stability
- Security Testing: Dedicated security test scenarios for robust protection
Comprehensive Testing
The extension includes an extensive test suite with comprehensive coverage across all functionality:
Test Categories
- Core Functionality Tests: Extension lifecycle, tool registration, session management, base tool architecture
- Security & Validation Tests: Command injection prevention, API key security, path traversal protection, input sanitization
- Architecture Tests: Provider abstraction, configuration system, shared utilities, manager classes
- Performance Tests: Session performance, memory usage, concurrent operations, resource cleanup
- Integration Tests: End-to-end workflows, LLM integration, VS Code API integration, terminal manager integration
- Edge Case Tests: Network failures, terminal edge cases, resource exhaustion, data validation
Test Structure
```
src/test/
├── Core Tests (extension, configuration, session management)
├── Architecture Tests (baseTool, validation, error handling, providers)
├── Security Tests (comprehensive security validation scenarios)
├── Tool Tests (file analysis, terminal extractor, follow-up)
├── Manager Tests (terminal manager, progress manager)
├── Performance Tests (benchmarks and resource management)
├── e2e/ (end-to-end workflow testing)
└── edgeCases/ (network and terminal edge cases)
```
Testing Highlights
- Security Focus: Comprehensive security scenarios including command injection and prompt injection protection
- Performance Validation: Performance benchmarks with memory and resource management testing
- Edge Case Resilience: Terminal edge cases including binary data, ANSI sequences, and concurrent operations
- Multi-Provider Testing: Testing across Google Gemini, Claude, and OpenAI providers
- Error Recovery: Error handling scenarios with graceful recovery testing
- Architecture Validation: Testing of modular architecture and provider abstraction
Running Tests
- All Tests: `npm test` (runs the complete test suite)
- Compile Tests: `npm run compile-tests` (compiles test TypeScript)
- Lint: `npm run lint` (code style validation)
Test Quality Metrics
- 32 Test Files across comprehensive scenarios
- Comprehensive Coverage of core functionality, security, and performance
- Multiple Test Categories including unit, integration, and edge case testing
- Type Safety with full TypeScript validation
Error Handling Architecture
The extension uses a comprehensive error type system for robust error handling:
- Structured Error Types: All errors use the `ErrorCode` enum and `ExtensionError` class for consistency
- User-Friendly Messages: Errors include clear, formatted messages with emojis and markdown styling
- Error Categories:
- Configuration errors (missing API keys, providers, models)
- Input validation errors (missing topic, file path, question)
- File operation errors (not found, too large, access denied)
- Network/API errors (authentication, rate limits)
- Service-specific errors (Exa.ai timeouts, task failures)
- Terminal errors (no session, execution failures)
- LLM processing errors
Example Error Messages:
```
❌ **Missing Research Topic**: Please provide a research topic to investigate.
❌ **Configuration Required**: Please complete the extension configuration to use this tool.
❌ **File Too Large**: The file exceeds the size limit. This tool works best with smaller files.
```
Error Type Usage:
```typescript
// Creating specific errors
throw ExtensionError.missingInput('topic', 'ResearchTool');
throw ExtensionError.fileError('not_found', '/path/to/file');
throw ExtensionError.exaError('timeout');

// Getting user-friendly messages
const userMessage = error.getUserFriendlyMessage();
```
🐛 Troubleshooting
Common Issues
Tools not appearing in Copilot Chat:
- Ensure VS Code is version 1.101.0 or higher
- Verify GitHub Copilot extension is enabled and active
- Reload VS Code after installing the extension
- Test tools availability: `Ctrl+Shift+P` → "Test Context Optimizer Tools"
Configuration prompts not appearing:
- Check VS Code settings under "Copilot Context Optimizer"
- Manually configure provider and API key if needed
- Ensure API key format is correct for your chosen provider
- Restart VS Code after configuration changes
Terminal commands failing:
- Ensure the command works in your system's native terminal
- Check that you have necessary permissions for command execution
- Verify the command syntax is correct for your operating system
- Commands use VS Code's Shell Integration API with intelligent fallback support
- Check the terminal output panel for detailed error information
Extension functionality issues:
- Test extension status: `Ctrl+Shift+P` → "Test Context Optimizer Tools"
- Check the VS Code Developer Console (Help → Toggle Developer Tools) for errors
- Verify all five tools are registered and available to Copilot
- Ensure no conflicting extensions are interfering
LLM API errors:
- Verify API key is valid and has sufficient credits/quota
- Check network connectivity
- Try switching to a different model if available
- Check error message formatting for specific provider requirements
🐞 Known Issues & Recent Fixes
- Previously, the tool could return results before the terminal output was fully complete, especially if VS Code Shell Integration was not available or ready.
- Now, the tool always waits for the terminal process to fully complete and all output to be captured before returning results.
- If shell integration is unavailable, the tool clearly indicates that output cannot be captured and prompts the user to check the terminal manually.
- Improved error handling and logging for command execution and output capture.
- Integration tests now cover long-running and edge-case command scenarios.
📝 Changelog
See CHANGELOG.md for detailed version history.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🤝 Contributing
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
🙏 Acknowledgments
- GitHub Copilot team for the Language Model Tools API
- Google, Anthropic, and OpenAI for their excellent LLM APIs
- VS Code team for the extensibility platform
Exa.ai Service Implementation (Stage 4)
- Replaced PerplexityService with ExaService in `src/shared/exaService.ts`, using the exa-js SDK.
- ExaService provides static methods for initialization, quick and deep research, polling, and error handling.
- Added an ExaResponse interface and updated LLMResponse in `src/shared/types.ts` for Exa.ai JSON output.
- Comprehensive unit tests for ExaService live in `src/test/exaService.test.ts`, using Mocha and Sinon.
- All Exa.ai research requests use a strict schema: `{ result: string }` (a markdown report).
- Progress callbacks and error handling are supported.
See the ExaService source and tests for details.
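That strict schema corresponds to a very small type. A sketch of how it might be declared; the actual interface in `src/shared/types.ts` may carry additional fields:

```typescript
// Every Exa.ai research response is constrained to this shape.
interface ExaResponse {
  result: string; // the research report, formatted as markdown
}
```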
🟦 Enhanced Progress Reporting (2025-06-24)
The extension now supports advanced progress reporting for long-running async tasks, such as Exa.ai research polling:
- Polling Status Messages: Progress notifications now update dynamically with messages like "Creating research task...", "Research in progress...", "Processing results...", and "Finalizing report..." during research operations.
- Status Bar Integration: Progress is always shown in the status bar, even if notifications are disabled in settings.
- Notification Settings: The `showProgressNotifications` setting is respected for all progress operations.
- API: Developers can use `ProgressManager.pollStatus` to easily add polling-based progress to new tools.
Example Usage
```typescript
await ProgressManager.pollStatus(
  ProgressManager.createProgressOptions('Conducting research...'),
  async () => {
    // Return the current status message, or null to stop polling
    return getCurrentResearchStatus();
  },
  1000 // Poll every 1 second
);
```
See the ProgressManager and progressManager.test.ts for details.
Provider Selection
- Exa.ai is the only provider for the research tools (`ResearchTopicTool`, `DeepResearchTool`).
- There is no provider selection for these tools; Exa is used automatically.
- Provider selection (Gemini, Claude, OpenAI) applies only to the other tools (Run Command & Extract Data, Ask Follow-up Question, Ask About File). Exa is not offered as a selectable provider for those tools.
Integration Test Setup
To run integration tests with Exa.ai, Gemini, Claude, or OpenAI:
Integration Test Environment Loading
The test suite now robustly loads `.env.test` for integration tests, searching multiple locations:
- The project root (relative to where you run the test command)
- Relative to the compiled output directory (e.g., `out/test/helpers`)
- A custom path via the `TEST_ENV_PATH` environment variable
This ensures API keys are always loaded, regardless of how or where tests are run. Debug logging will indicate which path was used or if the file was not found.
Exa.ai Research Task Polling
- After creating a research task, the polling loop now checks the task status every 10 seconds (using the get-a-task endpoint) and continues until the status is "completed". This matches the intended Exa API usage pattern. See Exa API docs for details.
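A hedged sketch of that polling loop; the `getTask` accessor stands in for the exa-js get-a-task call, whose exact method name may differ:

```typescript
// Poll a research task every 10 seconds until Exa reports it completed.
async function waitForResearchTask(
  getTask: (id: string) => Promise<{ status: string }>,
  taskId: string
): Promise<void> {
  for (;;) {
    const task = await getTask(taskId);
    if (task.status === 'completed') return;
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // 10 s interval
  }
}
```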
Breaking Changes
- The Run Command & Extract Data tool now requires VS Code Shell Integration. If shell integration is not available, the tool will throw an error and will not attempt to run the command or capture output. This ensures reliability and avoids incomplete results.
Shell Integration Catastrophic Error Handling
- The test suite now expects a catastrophic error (ExtensionError with SHELL_INTEGRATION_ERROR) if VS Code Shell Integration is unavailable for terminal command execution. There is no fallback to sendText.
- The integration test for shell integration timeout was updated to simulate immediate unavailability and check for the correct error.
- See CHANGELOG for details.