Fight vibe coding. Understand your own code. Intellegode is a VS Code extension that leverages a local Large Language Model to quiz you on code you just wrote or AI-generated, ensuring deep comprehension before you move on. Everything runs strictly locally: no cloud telemetry, no subscriptions.

## How It Works
## Prerequisites

- VS Code
- Ollama installed locally

## Installation & Setup

### 1. Install the Extension

Search for Intellegode in the VS Code Extensions Marketplace and select Install.

### 2. Start Ollama

Ensure the Ollama server is running in the background. If you installed Ollama natively:
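A minimal way to start the server, using the standard Ollama CLI:

```shell
# Start the Ollama server in the foreground (listens on port 11434 by default).
ollama serve
```

If you installed Ollama via the desktop app, the server usually starts automatically in the background instead.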
### 3. Pull the Language Model

Intellegode requires the Qwen model to operate. Run the following command in your terminal:
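A sketch of the pull command, assuming the base `qwen` tag from the Ollama model library (check the extension's documentation for the exact tag it expects):

```shell
# Download the Qwen model into the local Ollama model store.
# The exact tag (size/quantization variant) is an assumption here.
ollama pull qwen
```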
### 4. Start Your First Session

Highlight a snippet of code in your editor and press `Ctrl+Alt+Q` to begin your session.

## Architecture
## Configuration

In VS Code, navigate to File > Preferences > Settings and search for Intellegode.
## Developer Environment Variables

When running the extension in development mode, the following flags are supported:
## Troubleshooting

### Extension Hangs / System Freezes

**Cause:** Ollama is attempting to load the language model into GPU VRAM but running out of allocation space.

**Solution:** If your hardware has 4 GB of VRAM or less, force CPU mode. You can set the environment variable before launching VS Code from the terminal:
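One possible approach, assuming NVIDIA hardware: hiding all CUDA devices forces Ollama to fall back to CPU inference. The variable must reach the Ollama server process, so restart the server with it set:

```shell
# Hide all CUDA devices so Ollama falls back to CPU inference.
# (Assumption: NVIDIA GPU; other vendors may require a different variable.)
export CUDA_VISIBLE_DEVICES=""
ollama serve
```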
**Note:** CPU-only inference is inherently slower but remains entirely stable on lower-end systems.

### Connection Refused to Ollama

**Solution:**
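A quick way to diagnose this, using Ollama's HTTP API on its default port (11434):

```shell
# Verify the Ollama server is reachable; this returns a version string
# when the server is up.
curl http://localhost:11434/api/version

# If the connection is refused, start the server and retry:
ollama serve
```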
### Model Not Found Error

**Solution:** The required model has not been downloaded to the Ollama runtime. Pull the required model:
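Assuming the same `qwen` tag used during setup (step 3):

```shell
# List locally available models to confirm what is missing.
ollama list

# Download the model Intellegode expects (tag is an assumption; see step 3).
ollama pull qwen
```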
### Request Timeout Error

**Cause:** The model's cold start (loading its weights into memory) exceeds the request timeout threshold.
**Solution:**
Wait a few seconds for the model to load into memory and try again. Alternatively, increase the timeout limit via the extension settings.

## Roadmap