A powerful VS Code extension that uses Phi-3, a compact language model that runs entirely on your machine, to enhance your coding experience with AI-powered code explanations, refactoring, and test generation.
## ✨ Features

- 🔍 **Code Explanation**: Get detailed explanations of selected code blocks to understand complex logic
- ♻️ **Intelligent Refactoring**: Automatically improve code quality, readability, and performance
- 🧪 **Test Generation**: Create comprehensive unit tests for your code with a single click
- 🧠 **Fully Local AI**: Runs entirely on your machine with no data sent to external servers
- 🖥️ **GPU Acceleration**: Uses CUDA for faster processing when available
- 🔋 **Automatic Environment Setup**: Handles all dependencies in an isolated virtual environment
## 🚀 Commands

- **Code2Assist: Generate Tests** – Generate unit tests for selected code
- **Code2Assist: Check Dependencies** – Verify that all required dependencies are installed
- **Code2Assist: Run GPU Diagnostics** – Check CUDA support and GPU capabilities
- **Code2Assist: Reset Environment** – Rebuild the Python virtual environment if issues occur
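What an environment reset does can be sketched in Python. The `reset_environment` helper and the single-directory layout below are assumptions for illustration, not the extension's actual implementation:

```python
import shutil
import venv
from pathlib import Path

def reset_environment(env_dir: Path, with_pip: bool = True) -> Path:
    """Delete and recreate an isolated virtual environment.

    Roughly the shape of a "Reset Environment" command: discard the
    possibly broken environment, then build a fresh one so
    dependencies can be reinstalled cleanly.
    """
    if env_dir.exists():
        shutil.rmtree(env_dir)  # remove the old environment entirely
    # with_pip=True lets dependencies be reinstalled afterwards
    venv.create(env_dir, with_pip=with_pip)
    return env_dir
```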
## 🛠️ How It Works

Code2Assist uses Microsoft's Phi-3-mini-4k-instruct model, a compact yet capable LLM well suited to code understanding and generation. The extension:

1. Runs the model locally using the Hugging Face Transformers library
2. Creates task-specific prompts based on your selected code
3. Processes the model's output to provide useful, contextual results
4. Manages all dependencies in an isolated Python virtual environment
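The prompt-building step above can be sketched as follows. The template wording and task names are illustrative assumptions, not the extension's actual prompts:

```python
# Illustrative task-specific prompt templates (assumed, not the
# extension's real templates).
PROMPT_TEMPLATES = {
    "explain": "Explain what the following code does:\n\n{code}",
    "refactor": "Refactor the following code for readability and performance:\n\n{code}",
    "test": "Write unit tests for the following code:\n\n{code}",
}

def build_prompt(task: str, code: str) -> str:
    """Turn a selected code block into a task-specific prompt."""
    try:
        template = PROMPT_TEMPLATES[task]
    except KeyError:
        raise ValueError(f"unknown task: {task!r}")
    return template.format(code=code)

# The prompt would then go to the locally loaded model, e.g.:
#   from transformers import pipeline
#   pipe = pipeline("text-generation",
#                   model="microsoft/Phi-3-mini-4k-instruct")
#   result = pipe(build_prompt("explain", selected_code))
```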
## 🛟 Troubleshooting

### GPU Not Detected

- Run **Code2Assist: Run GPU Diagnostics** to see detailed information
- Ensure you have the latest NVIDIA drivers installed
- Check that CUDA is properly set up on your system
- If detection fails on a machine that does have a GPU, you can force GPU usage in settings
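A minimal sketch of the kind of check a GPU diagnostics command might run, using PyTorch's CUDA probe; treating a missing `torch` install as "no GPU" is an assumption of this sketch:

```python
def cuda_available() -> bool:
    """Best-effort CUDA detection.

    Returns False rather than raising if torch is not installed,
    so the check is safe to run in any environment.
    """
    try:
        import torch  # optional dependency; may be absent
    except ImportError:
        return False
    return torch.cuda.is_available()
```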
### Dependency Issues

- Run **Code2Assist: Check Dependencies** to verify the environment
- If issues persist, use **Code2Assist: Reset Environment** to rebuild the virtual environment
- Check the output panel for specific error messages
## ⚡ Performance Optimization

- For faster processing, use a CUDA-capable GPU
- Close other GPU-intensive applications while using the extension
- Lower the maximum token length in settings for quicker responses
## ⚙️ Configuration

Configure Code2Assist through VS Code settings:

- `code2assist.useGPU`: Enable or disable GPU acceleration (default: `true` if a GPU is available)
- `code2assist.maxTokens`: Maximum token length for generated content (default: `1024`)
- `code2assist.forceGpuUsage`: Force GPU usage even if auto-detection fails (default: `false`)
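In your user or workspace `settings.json`, that configuration might look like this (the values shown are the documented defaults):

```jsonc
{
  "code2assist.useGPU": true,
  "code2assist.maxTokens": 1024,
  "code2assist.forceGpuUsage": false
}
```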
## 🔒 Privacy & Security

Code2Assist processes all code locally on your machine. No data is sent to external servers, ensuring:

- Complete privacy of your code
- The ability to work in air-gapped environments
- No dependency on internet connectivity for core functionality