# TensorClad

**AI-Native Application Security Scanner for VS Code**

[Installation](#installation) · [Features](#features) · [Demo](#demo) · [Vulnerabilities](#detected-vulnerabilities) · [Configuration](#configuration) · [Contributing](#contributing)

## Demo

*TensorClad detecting security vulnerabilities in real-time as you code*
## Why TensorClad?
As AI applications become mainstream, a new class of security vulnerabilities has emerged. Traditional SAST tools excel at finding SQL injection and XSS, but they're blind to prompt injection, API key leakage in LLM configs, and unvalidated model outputs.
TensorClad fills this gap. It's a static analysis tool built specifically for developers working with OpenAI, LangChain, Anthropic, and other AI frameworks.
### The Problem

Consider this typical AI application code:

```python
# This code has 3 security issues
api_key = "sk-proj-abc123..."  # TC001: Hardcoded API key

def chat(user_input):
    prompt = f"Help the user with: {user_input}"  # TC010: Prompt injection risk
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content  # TC030: Unvalidated output
```
TensorClad detects all three issues as you type, showing warnings in the Problems panel with explanations and fix suggestions.
## Features

### Real-Time Scanning

No need to run a separate CLI tool. TensorClad analyzes your code on every keystroke and highlights issues inline with squiggly underlines, just like TypeScript errors.

### Security Dashboard

Run `TensorClad: Show Security Report` to see a summary of all detected vulnerabilities across your workspace, organized by severity and file.

### Framework-Aware Detection
Purpose-built detection rules for popular AI/LLM frameworks:
| Framework | Support |
|-----------|---------|
| OpenAI SDK | Full |
| LangChain | Full |
| Anthropic Claude | Full |
| Azure OpenAI | Full |
| LlamaIndex | Full |
| Google AI | Coming Soon |
## Installation

### From VS Code Marketplace

1. Open VS Code
2. Press `Ctrl+Shift+X` (Windows/Linux) or `Cmd+Shift+X` (Mac)
3. Search for "TensorClad"
4. Click **Install**

### From VSIX (Manual)

```bash
code --install-extension tensorclad-0.1.0.vsix
```
## Detected Vulnerabilities
TensorClad identifies security issues specific to AI/LLM applications. Each finding includes a code (e.g., TC001), severity level, and remediation guidance.
### API Key Exposure (TC001-003)

Hardcoded API keys are the most common security issue in AI applications.

```python
# ❌ Detected: TC001 - OpenAI API key in source code
openai.api_key = "sk-proj-abc123def456..."

# ✅ Secure: Load from environment
openai.api_key = os.getenv("OPENAI_API_KEY")
```
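In production the key should come from the environment or a secrets manager, and the app should fail fast when it is absent. A minimal loader might look like this (the `load_api_key` helper is illustrative, not part of TensorClad or the OpenAI SDK):

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, failing fast if it is missing."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running the app")
    return key
```

Failing at startup with a clear message beats a confusing HTTP 401 deep inside a request handler.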
### Prompt Injection (TC010)

Direct user input in prompts allows attackers to manipulate LLM behavior.

```python
# ❌ Detected: TC010 - User input directly in prompt
prompt = f"Summarize this text: {user_input}"

# ✅ Secure: Validate and sanitize input
prompt = f"Summarize this text: {sanitize_input(user_input)}"
```
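`sanitize_input` above is a placeholder; what it does should match your threat model. One possible sketch, where the length cap and the filtered phrases are assumptions rather than TensorClad behavior:

```python
import re

MAX_INPUT_LEN = 4000  # cap length to limit prompt-stuffing attacks

def sanitize_input(text: str) -> str:
    """Example sanitizer: cap length, strip control characters,
    and neutralize phrases commonly used to override instructions."""
    text = text[:MAX_INPUT_LEN]
    # Remove control characters that can smuggle hidden instructions
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    # Neutralize a well-known override phrase (a real filter needs many more)
    text = re.sub(r"(?i)ignore (all )?previous instructions", "[filtered]", text)
    return text.strip()
```

No sanitizer fully prevents prompt injection; treat this as defense in depth alongside output validation.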
### Hardcoded System Prompts (TC020)

System prompts in source code can leak business logic and are hard to update.

```python
# ⚠️ Warning: TC020 - Consider externalizing prompts
messages = [
    {"role": "system", "content": "You are a helpful assistant..."}
]

# ✅ Better: Load from configuration
messages = [
    {"role": "system", "content": load_prompt("assistant")}
]
```
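A `load_prompt` helper like the one above is not something TensorClad provides; a minimal version backed by a JSON file could look like this (file name and format are assumptions):

```python
import json
from pathlib import Path

# e.g. prompts.json contains: {"assistant": "You are a helpful assistant."}
DEFAULT_PROMPT_FILE = Path("prompts.json")

def load_prompt(name: str, path: Path = DEFAULT_PROMPT_FILE) -> str:
    """Look up a named system prompt from an external JSON file."""
    prompts = json.loads(path.read_text())
    return prompts[name]
```

Keeping prompts in configuration also lets you update them without a redeploy.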
### Unvalidated LLM Output (TC030)

LLM responses are untrusted. Never execute them directly.

```python
# ❌ Detected: TC030 - Executing unvalidated output
result = response.choices[0].message.content
exec(result)  # Remote code execution risk!

# ✅ Secure: Validate output before use
result = response.choices[0].message.content
if is_safe_output(result):
    process(result)
```
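`is_safe_output` is application-specific. If your app expects the model to return a JSON object, a conservative validator might look like this (the token deny-list and the JSON-object requirement are illustrative assumptions):

```python
import json

def is_safe_output(text: str) -> bool:
    """Example validator: reject output containing obviously dangerous tokens,
    then accept only text that parses as a JSON object."""
    banned = ("import os", "subprocess", "eval(", "exec(")
    if any(token in text for token in banned):
        return False
    try:
        return isinstance(json.loads(text), dict)
    except json.JSONDecodeError:
        return False
```

Validating against an expected structure (here, a JSON object) is far stronger than trying to deny-list every dangerous string.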
### PII Leakage (TC050)

Logging user data can violate privacy regulations.

```python
# ❌ Detected: TC050 - PII in logs
print(f"User email: {user.email}, Query: {query}")

# ✅ Secure: Redact sensitive data
print(f"User: [REDACTED], Query: {redact_pii(query)}")
```
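One way to implement a `redact_pii` helper. The patterns below cover only e-mail addresses and US SSN-shaped numbers; a real deployment needs a much fuller PII taxonomy:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace e-mail addresses and SSN-shaped numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text
```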
### Unsafe Code Execution (TC060)

AI agents that execute arbitrary code need strict validation.

```python
# ❌ Detected: TC060 - Dynamic code execution
eval(llm_response)

# ✅ Secure: Whitelist allowed operations
if operation in ALLOWED_OPERATIONS:
    execute_sandboxed(operation)
```
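The allow-list pattern can be as simple as a dict mapping operation names the model may request to vetted functions; `ALLOWED_OPERATIONS` and `execute_sandboxed` here are sketches, not TensorClad APIs:

```python
# Map operation names the model may request to pre-vetted implementations.
ALLOWED_OPERATIONS = {
    "get_greeting": lambda: "hello",
    "add": lambda a, b: a + b,
}

def execute_sandboxed(name: str, *args):
    """Dispatch only to pre-registered functions; never eval model text."""
    if name not in ALLOWED_OPERATIONS:
        raise ValueError(f"operation {name!r} is not on the allow-list")
    return ALLOWED_OPERATIONS[name](*args)
```

Because the model only ever supplies a *name*, not code, there is nothing for `eval` or `exec` to run.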
### Complete Rule Reference
| Code | Category | Severity | What It Detects |
|------|----------|----------|-----------------|
| TC001 | API Keys | Error | OpenAI API keys in source |
| TC002 | API Keys | Error | Anthropic API keys in source |
| TC003 | API Keys | Error | Azure/other API keys in source |
| TC010 | Prompt Injection | Error | User input concatenated into prompts |
| TC011 | Input Validation | Warning | Unsanitized input passed to LLM |
| TC020 | Configuration | Warning | Hardcoded system prompts |
| TC030 | Output Validation | Warning | LLM output used without validation |
| TC040 | RAG Security | Warning | Unsanitized vector DB queries |
| TC050 | Data Privacy | Error | PII in logs or LLM context |
| TC060 | Code Execution | Error | eval/exec with LLM output |
| TC070 | Token Security | Error | Credentials exposed in responses |
| TC080 | Rate Limiting | Warning | API calls without rate limits |
## Commands

Open the Command Palette (`Ctrl+Shift+P`) and type "TensorClad":
| Command | Description |
|---------|-------------|
| `TensorClad: Scan Current File` | Manually trigger a scan of the active file |
| `TensorClad: Scan Entire Workspace` | Scan all Python/JS/TS files in the workspace |
| `TensorClad: Show Security Report` | Open the security dashboard in a new tab |
| `TensorClad: Clear Diagnostics` | Remove all TensorClad warnings |
| `TensorClad: Install Git Hooks` | Install pre-commit and pre-push hooks |
| `TensorClad: Uninstall Git Hooks` | Remove TensorClad git hooks |
| `TensorClad: Check Git Hooks Status` | View current git hooks installation status |
## Configuration

Customize TensorClad in your VS Code settings (`settings.json`):
```json
{
  "tensorclad.enabled": true,
  "tensorclad.scanOnSave": true,
  "tensorclad.scanOnOpen": true,
  "tensorclad.excludePatterns": [
    "**/node_modules/**",
    "**/dist/**",
    "**/.venv/**"
  ],
  "tensorclad.gitHooks.enabled": true,
  "tensorclad.gitHooks.blockOnError": true,
  "tensorclad.gitHooks.blockOnWarning": false
}
```
### Configuration Options
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `tensorclad.enabled` | boolean | `true` | Enable/disable scanning |
| `tensorclad.scanOnSave` | boolean | `true` | Scan when files are saved |
| `tensorclad.scanOnOpen` | boolean | `true` | Scan when files are opened |
| `tensorclad.excludePatterns` | array | `[...]` | Glob patterns to exclude |
| `tensorclad.gitHooks.enabled` | boolean | `true` | Enable git hooks integration |
| `tensorclad.gitHooks.blockOnError` | boolean | `true` | Block push on security errors |
| `tensorclad.gitHooks.blockOnWarning` | boolean | `false` | Block push on security warnings |
## Git Hooks Integration
TensorClad can integrate with Git to prevent pushing code with security vulnerabilities. This provides a last line of defense before insecure code reaches your repository.
### How It Works
When installed, TensorClad adds pre-commit and pre-push hooks that scan staged/changed files for security issues:
- Pre-commit hook: Scans staged files before each commit
- Pre-push hook: Scans all changed files before pushing to remote
If critical vulnerabilities are found (based on your configuration), the commit or push is blocked with a detailed report.
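TensorClad installs its own hook scripts; conceptually, the check each hook performs reduces to pattern-matching the changed sources and exiting non-zero on a finding. A simplified stand-in (the rule codes and regexes here are illustrative, not TensorClad's actual rules):

```python
import re
import sys

# Simplified stand-ins for TC001/TC002-style hardcoded-key patterns.
KEY_PATTERNS = {
    "TC002": re.compile(r"sk-ant-[A-Za-z0-9-]{10,}"),  # Anthropic-style keys
    "TC001": re.compile(r"sk-[A-Za-z0-9-]{10,}"),      # OpenAI-style keys
}

def scan_text(text: str) -> list:
    """Return the rule codes whose patterns match the given source text."""
    return sorted(code for code, pattern in KEY_PATTERNS.items() if pattern.search(text))

if __name__ == "__main__":
    # A real pre-commit hook would collect staged files via
    # `git diff --cached --name-only` and scan each one; this sketch reads stdin.
    sys.exit(1 if scan_text(sys.stdin.read()) else 0)
```

A non-zero exit status from a pre-commit or pre-push hook is what makes Git abort the operation.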
### Installing Git Hooks

1. Open the Command Palette (`Ctrl+Shift+P`)
2. Run `TensorClad: Install Git Hooks`
3. The hooks are installed into your repository's `.git/hooks/` directory
### What Gets Blocked
By default, pushes are blocked when code contains:
| Code | Issue | Blocked by Default |
|------|-------|--------------------|
| TC001-003 | Hardcoded API keys | ✅ Yes |
| TC010 | Prompt injection vulnerabilities | ✅ Yes |
| TC050 | PII exposure | ✅ Yes |
| TC060 | Unsafe code execution | ✅ Yes |
You can configure whether warnings (non-error severity issues) also block pushes via `tensorclad.gitHooks.blockOnWarning`.
### Bypassing Hooks (Emergency Only)

If you need to push despite warnings (not recommended):

```bash
git push --no-verify
```
⚠️ Warning: This bypasses all security checks. Use only when absolutely necessary.
## Supported Languages
| Language | Extensions | Status |
|----------|------------|--------|
| Python | `.py` | ✅ Full support |
| JavaScript | `.js`, `.jsx` | ✅ Full support |
| TypeScript | `.ts`, `.tsx` | ✅ Full support |
| Java | `.java` | 🔜 Planned |
| Go | `.go` | 🔜 Planned |
| C# | `.cs` | 🔜 Planned |
## Contributing

Contributions are welcome, whether you're adding new detection rules, improving documentation, or fixing bugs.

### Quick Start
```bash
# Clone the repository
git clone https://github.com/santhoshravindran7/TensorClad.git
cd TensorClad

# Install dependencies
npm install

# Compile
npm run compile

# Watch mode (auto-recompile)
npm run watch

# Launch extension in debug mode
# Press F5 in VS Code
```
### Adding Custom Rules

Add detection rules in `src/rules/ruleEngine.ts`:
```typescript
{
  id: 'custom-rule',
  type: VulnerabilityType.CustomType,
  severity: vscode.DiagnosticSeverity.Warning,
  message: 'Description of the security issue',
  code: 'TC100',
  patterns: [/your-regex-pattern/g],
  languageIds: ['python', 'javascript', 'typescript'],
  documentation: 'How to fix this issue'
}
```
## Roadmap
Planned for upcoming releases:
- [ ] Custom rule builder (YAML/JSON configuration)
- [ ] Quick-fix code actions for common issues
- [ ] CI/CD integration (GitHub Actions, GitLab CI)
- [ ] Compliance reporting (OWASP LLM Top 10, NIST AI RMF)
- [ ] Team policy enforcement
- [ ] Additional language support (Java, Go, C#, Rust)
Have a feature request? Open an issue.
## License
MIT License - see LICENSE for details.
## Support

*Built for developers building the next generation of AI applications*

⭐ Star on GitHub