# Trusys – LLM Security Scanner (VS Code Extension)

## What the extension does

VS Code extension that runs the LLM Security Scanner inside the editor: it shows AI/LLM security findings in the Problems panel and supports optional AI analysis and database upload.
Requirements: VS Code 1.74.0+, Python 3.11+. No manual setup: the extension installs the scanner and dependencies automatically when you open a folder or run a scan. You don’t need to run any commands or configure anything.

## Supported frameworks

The extension uses the same Semgrep rules as the CLI; see the main repository README for the full list of supported languages and LLM/framework rule sets. Generate Eval Tests supports FastMCP (MCP), LangChain, LlamaIndex, and LangGraph.

## Capabilities
## How to use

1. Install and run a scan
2. Change what gets scanned and shown
3. Use AI analysis (optional)
4. Upload results to your backend (optional)
5. Generate Eval Tests (optional)
6. Clear results
7. Reinstall or fix scanner/dependencies
## Commands (Command Palette: `Ctrl+Shift+P` / `Cmd+Shift+P`)

| Command | What it does |
|---|---|
| LLM Security: Scan Workspace | Scans the whole workspace; results only in Problems (no upload). |
| LLM Security: Scan Current File | Scans the active editor file; results only in Problems. |
| LLM Security: Scan and Upload to Database | Scans workspace and uploads results to your backend (needs upload settings). |
| LLM Security: Clear Results | Clears all extension diagnostics from the Problems panel. |
| LLM Security: Install Dependencies | Runs the extension’s installer for the scanner and Semgrep. |
| LLM Security: Generate Eval Tests | Generates eval test JSON for FastMCP, LangChain, LlamaIndex, or LangGraph (extracts tools, AI-generated prompts with eval_type mix; LangGraph includes graph_structure). Requires AI provider/model and API key. |
## Settings (search “LLM Security” in VS Code Settings)
### Scan behavior
- `llmSecurityScanner.enabled` – Turn the extension on/off.
- `llmSecurityScanner.scanOnSave` – Scan when a file is saved.
- `llmSecurityScanner.scanOnOpen` – Scan when a file is opened.
- `llmSecurityScanner.scanDelay` – Delay (ms) before running a scan after a change.
- `llmSecurityScanner.severityFilter` – Which severities to show (e.g. `["critical", "high", "medium"]`).
- `llmSecurityScanner.includePatterns` – Glob patterns for files to include (e.g. `["*.py"]`).
- `llmSecurityScanner.excludePatterns` – Glob patterns to exclude (e.g. `["**/__pycache__/**"]`).
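A `settings.json` fragment combining these options might look like the following (the values shown are illustrative, not defaults):

```json
{
  "llmSecurityScanner.enabled": true,
  "llmSecurityScanner.scanOnSave": true,
  "llmSecurityScanner.scanOnOpen": false,
  "llmSecurityScanner.scanDelay": 1000,
  "llmSecurityScanner.severityFilter": ["critical", "high", "medium"],
  "llmSecurityScanner.includePatterns": ["*.py"],
  "llmSecurityScanner.excludePatterns": ["**/__pycache__/**"]
}
```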
### Scanner
- `llmSecurityScanner.pythonPath` – Python executable used to run the scanner (e.g. `python3` or `venv/bin/python`).
- `llmSecurityScanner.rulesDirectory` – Path to rules (relative to workspace or absolute).
- `llmSecurityScanner.autoInstallDependencies` – Whether to auto-install scanner/Semgrep on activation.
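For example, to point the scanner at a project virtual environment (the paths here are illustrative):

```json
{
  "llmSecurityScanner.pythonPath": "venv/bin/python",
  "llmSecurityScanner.autoInstallDependencies": true
}
```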
### AI analysis and Eval Tests (optional)
- `llmSecurityScanner.enableAiAnalysis` – Enable/disable AI analysis for scans (when implemented).
- `llmSecurityScanner.aiProvider` – `openai` or `anthropic`. Used by Generate Eval Tests and AI analysis.
- `llmSecurityScanner.aiModel` – e.g. `gpt-4`, `gpt-3.5-turbo`, `claude-3-opus-20240229`. Used by Generate Eval Tests and AI analysis.
- `llmSecurityScanner.aiApiKey` – API key (or use `OPENAI_API_KEY`/`ANTHROPIC_API_KEY`). Used by Generate Eval Tests and AI analysis.
- `llmSecurityScanner.evalTestMaxPromptsPerTool` – Max prompts per tool when generating eval tests (default: 3).
- `llmSecurityScanner.evalFramework` – Framework for eval extraction: `mcp` (FastMCP), `langchain` (LangChain), `llamaindex` (LlamaIndex), or `langgraph` (LangGraph) (default: `mcp`).
- `llmSecurityScanner.aiConfidenceThreshold` – Minimum confidence for AI verdict (0–1).
- `llmSecurityScanner.aiMaxFindings` – Max number of findings to send to AI (limits cost).
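A sketch of an AI/eval configuration (model and values chosen from the examples above, for illustration only):

```json
{
  "llmSecurityScanner.enableAiAnalysis": true,
  "llmSecurityScanner.aiProvider": "openai",
  "llmSecurityScanner.aiModel": "gpt-4",
  "llmSecurityScanner.evalFramework": "langchain",
  "llmSecurityScanner.evalTestMaxPromptsPerTool": 3,
  "llmSecurityScanner.aiMaxFindings": 20
}
```

Leaving `llmSecurityScanner.aiApiKey` unset and exporting `OPENAI_API_KEY` (or `ANTHROPIC_API_KEY`) in the environment keeps the key out of `settings.json`.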
### Database upload (optional)
- `llmSecurityScanner.uploadEndpoint` – Backend URL (e.g. `https://api.example.com/api/v1/scans`).
- `llmSecurityScanner.applicationId` – Application ID in your backend.
- `llmSecurityScanner.apiKey` – API key for the upload endpoint.
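An illustrative upload configuration (the endpoint, application ID, and key are placeholders, not real values):

```json
{
  "llmSecurityScanner.uploadEndpoint": "https://api.example.com/api/v1/scans",
  "llmSecurityScanner.applicationId": "my-app-id",
  "llmSecurityScanner.apiKey": "your-api-key"
}
```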
## Viewing results in VS Code
- Problems panel: View → Problems, or `Ctrl+Shift+M`/`Cmd+Shift+M`. Each finding shows rule, message, file, and line.
- In editor: red/yellow/blue squiggles and gutter markers (by severity). Click to go to the line.
- Remediation: shown in the problem message or in the hover where supported.
## Troubleshooting (extension-only)
### Extension doesn’t run or “scanner not found”
- Ensure Python 3.11+ is installed and that `llmSecurityScanner.pythonPath` points to it.
- Run LLM Security: Install Dependencies or install manually: `pip install trusys-llm-scan` (and ensure Semgrep is available).
- Check Output → “LLM Security Scanner” for errors.
### “llm_scan package not found” or setup fails
- The extension normally sets up the scanner automatically when you run a scan. If you see this error, ensure a folder is open (File → Open Folder) and run Scan Workspace or Scan Current File again; setup will run and the scan will retry.
- If it still fails, run LLM Security: Install Dependencies from the Command Palette. You can also set `llmSecurityScanner.pythonPath` to a Python that already has the scanner (e.g. a venv with `pip install trusys-llm-scan`).
### No findings
- Confirm `severityFilter` includes the severities you expect.
- Check `includePatterns`/`excludePatterns` (e.g. the file might be excluded).
- Run Scan Workspace or Scan Current File manually and watch Output for scanner output.
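To rule out filtering as the cause, you can temporarily use a permissive configuration like this (illustrative; severity names are taken from the documented example):

```json
{
  "llmSecurityScanner.severityFilter": ["critical", "high", "medium"],
  "llmSecurityScanner.includePatterns": ["**/*.py"],
  "llmSecurityScanner.excludePatterns": []
}
```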
### AI analysis not running
- Ensure `enableAiAnalysis` is `true` and `aiProvider`/`aiModel` are set.
- Set `aiApiKey` or `OPENAI_API_KEY`/`ANTHROPIC_API_KEY`. Check Output for API errors.
### Generate Eval Tests fails
- Set `aiProvider` and `aiModel` in settings; set `aiApiKey` or `OPENAI_API_KEY`/`ANTHROPIC_API_KEY` in the environment.
- Ensure the workspace has Python files that define tools for the selected Eval Framework (e.g. FastMCP `@mcp.tool()`, LangChain `@tool`, LlamaIndex `FunctionTool.from_defaults`, LangGraph `StateGraph` + `ToolNode`). Check Output → “LLM Security Scanner” for API or timeout errors.
- To run evals on the generated JSON, use the CLI: `python -m llm_scan.eval --eval-json <path> --graph <module:attribute>` (see the main repo’s TEST_GENERATION.md).
### Database upload fails
- Confirm `uploadEndpoint`, `applicationId`, and `apiKey` are set and that the backend is reachable.
- Use Scan and Upload to Database (upload is not done by the normal scan commands).
### Performance
- Increase `scanDelay`, or disable `scanOnSave`/`scanOnOpen` and scan only via commands.
- Narrow `includePatterns` or add `excludePatterns`; set `aiMaxFindings` if using AI.
## Installing the extension
### From VSIX (e.g. local build)
```shell
cd vscode-extension
npm install
npm run compile
vsce package
```

Then in VS Code: Extensions → … → Install from VSIX → choose the `.vsix` file.
### Development (F5)
Open the `vscode-extension` folder in VS Code and press F5 to launch the Extension Development Host. Set `llmSecurityScanner.pythonPath` in the host so it uses a Python with the scanner installed.
## More information
- Scanner and rules: see the main repository README and TEST_GENERATION.md for the CLI, eval test generation, eval types (tool_selection, safety, prompt_injection, etc.), and running concrete evals (`python -m llm_scan.eval`).
- Backend/dashboard: see the main repo’s `backend/` documentation for server setup.