The extension automatically analyzes open files in the background if they match the configured languages (or all languages if llmBugDetector.languages is empty).
After you change a supported file, it waits for the configured delay (llmBugDetector.analysisInterval) before running the analysis.
The extension then queries the configured Large Language Model via the VS Code LLM API to identify potential issues.
Diagnostics are displayed in the editor, highlighting potential bugs or areas for improvement identified by the LLM, with messages such as:
"Assigning a string to an integer variable."
"Storing address of vector element that may be invalidated."
Note: This extension relies on the VS Code Language Model API for its functionality. Ensure your VS Code version supports this API.
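For context, the sketch below shows roughly how an extension can request an analysis through the VS Code Language Model API and surface the result as diagnostics. The model family, prompt wording, and response parsing are illustrative assumptions, not this extension's exact implementation.

```typescript
import * as vscode from 'vscode';

const diagnostics = vscode.languages.createDiagnosticCollection('llmBugDetector');

async function analyzeDocument(doc: vscode.TextDocument, token: vscode.CancellationToken) {
  // Pick a chat model; the family string here is an illustrative assumption.
  const [model] = await vscode.lm.selectChatModels({ family: 'gpt-4o' });
  if (!model) {
    return; // No language model available (API unsupported or no access).
  }

  const messages = [
    vscode.LanguageModelChatMessage.User(
      'Report potential bugs in this code as "<line>: <message>", one per line:\n' + doc.getText()
    ),
  ];

  const response = await model.sendRequest(messages, {}, token);

  // Collect the streamed response text.
  let text = '';
  for await (const chunk of response.text) {
    text += chunk;
  }

  // Hypothetical parsing: each "<line>: <message>" becomes one diagnostic.
  const found: vscode.Diagnostic[] = [];
  for (const line of text.split('\n')) {
    const match = /^(\d+):\s*(.+)$/.exec(line.trim());
    if (!match) continue;
    const lineNo = Math.max(Math.min(Number(match[1]) - 1, doc.lineCount - 1), 0);
    const range = doc.lineAt(lineNo).range;
    found.push(new vscode.Diagnostic(range, match[2], vscode.DiagnosticSeverity.Warning));
  }
  diagnostics.set(doc.uri, found);
}
```

In practice a call like this would be debounced by llmBugDetector.analysisInterval and gated on llmBugDetector.enabled and llmBugDetector.languages.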
Configuration
You can adjust the extension settings in VS Code User or Workspace settings under "LLM Bug Detector":
Available Settings
llmBugDetector.enabled (boolean, default: true)
Enables or disables the automatic background analysis.
llmBugDetector.model (string or null, default: "gemini-2.5-pro")
Specifies the LLM model to use for analysis. If left empty, it defaults to gemini-2.5-pro. (Model selection takes effect only when the VS Code Language Model API allows choosing a model.)
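As an example, these settings can be placed in settings.json. The values below are purely illustrative, and the interval is assumed to be in milliseconds; llmBugDetector.analysisInterval and llmBugDetector.languages are the settings referenced above.

```jsonc
{
  // Illustrative values; check the extension's settings UI for exact defaults and units.
  "llmBugDetector.enabled": true,
  "llmBugDetector.model": "claude-3.7-sonnet",
  "llmBugDetector.analysisInterval": 2000,
  "llmBugDetector.languages": ["typescript", "cpp"]
}
```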
Available Models for llmBugDetector.model
GPT-3.5 Turbo : gpt-3.5-turbo
GPT-4o mini : gpt-4o-mini
GPT-4 : gpt-4
GPT-4 Turbo : gpt-4-0125-preview
GPT-4o : gpt-4o
o1 (Preview) : o1
o3-mini : o3-mini
Claude 3.5 Sonnet : claude-3.5-sonnet
Claude 3.7 Sonnet : claude-3.7-sonnet
Claude 3.7 Sonnet Thinking : claude-3.7-sonnet-thought
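One plausible way a configured ID from this list could be resolved at runtime is by matching it against a model family via the Language Model API. This is an assumption about the mechanism, not a statement of how the extension actually does it.

```typescript
import * as vscode from 'vscode';

async function pickConfiguredModel(): Promise<vscode.LanguageModelChat | undefined> {
  // Read the configured ID, falling back to the documented default.
  const id =
    vscode.workspace.getConfiguration('llmBugDetector').get<string>('model') || 'gemini-2.5-pro';

  // IDs such as "gpt-4o" or "claude-3.7-sonnet" are assumed to map to model families.
  const [model] = await vscode.lm.selectChatModels({ family: id });
  return model;
}
```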