🛡️ AI Code Guard
Privacy-first security scanner for MERN stack applications
Created by Aman Gupta, a developer who believes security tools shouldn't compromise your code privacy.
👨‍💻 Why I Built This
As a developer working on sensitive projects, I was frustrated with security tools that required uploading code to cloud services. Many teams avoid security scanning altogether due to privacy concerns, NDA restrictions, or compliance requirements.
I created AI Code Guard to solve this problem:
- 🔒 100% local scanning - Your code never leaves your machine
- 🚀 Real-time feedback - Catch issues while coding, not during code review
- 🎯 Context-aware - Smart detection for frontend vs backend code
- 🛡️ Production-ready - 5 focused security rules for MERN stack code
Whether you're building a startup MVP, working on enterprise software, or contributing to open source, your code stays private while staying secure.
— Aman Gupta
🔐 Privacy-First Philosophy
AI Code Guard operates with absolute privacy guarantees:
✅ All analysis happens locally on your machine
✅ No code is ever uploaded, stored, or transmitted
✅ No network calls are made
✅ No AI model training on your code
✅ In-memory processing only
✅ Source code is discarded immediately after scanning
Only finding metadata is generated: file paths, line numbers, rule IDs, and severity levels.
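As an illustration, a finding might look like the object below. The field names mirror the report shape used by custom rules later in this README; the exact shape emitted by the engine may differ.

```javascript
// Illustrative finding object; metadata only, no source code is included.
const finding = {
  rule: 'hardcoded-secrets',
  severity: 'CRITICAL',            // CRITICAL | HIGH | MEDIUM
  message: 'Possible hardcoded API key',
  file: 'src/config.js',           // hypothetical path
  line: 12,
};

console.log(JSON.stringify(finding, null, 2));
```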
🚀 Features
5 Security Rules detecting critical vulnerabilities:
- Hardcoded secrets (API keys, tokens)
- Console logging of PII
- Sensitive data in JSX
- Dangerous eval() usage
- dangerouslySetInnerHTML
CLI Tool for CI/CD integration
VS Code Extension for real-time feedback
Minimal dependencies - the core engine relies only on the Babel parser
Strict by design - false positives are acceptable to ensure no vulnerabilities slip through
📦 Installation
Prerequisites
- Node.js 18+ and npm
- VS Code 1.75+ (for the extension)
Install Dependencies
cd ai-code-guard
npm install
cd engine && npm install
cd ../cli && npm install
cd ../vscode-extension && npm install
🔧 Usage
# Install CLI globally (from cli directory)
cd cli
npm link
# Scan a file
aicode scan src/components/Login.jsx
# Scan entire directory
aicode scan src/
# JSON output for CI/CD
aicode scan src/ --json
Exit Codes:
0 - No CRITICAL issues found
1 - CRITICAL issues found (blocks CI/CD)
VS Code Extension
Development Mode
- Open the vscode-extension folder in VS Code
- Press F5 to launch the Extension Development Host
- Open a JavaScript/React project
- Save a file to trigger a scan
- See red squiggly lines for issues
Install Locally
cd vscode-extension
npm install -g @vscode/vsce
vsce package
code --install-extension ai-code-guard-vscode-1.0.0.vsix
Commands
- AI Code Guard: Scan Current File - Manually scan active file
- AI Code Guard: Scan Entire Workspace - Scan all JS/TS files
Settings
{
  "aiCodeGuard.enableOnSave": true,
  "aiCodeGuard.showCriticalOnly": false
}
🧪 Testing Each Rule
Rule 1: Hardcoded Secrets
Create test file test-secrets.js:
// ❌ CRITICAL - Will be detected
const apiKey = "sk_live_1234567890abcdefghijklmnop";
const stripeKey = "pk_live_abcdefghijklmnopqrstuvwxyz";
// ❌ CRITICAL - Will be detected
const config = {
  secret: "my-super-secret-password-12345"
};
// ✅ OK - Placeholder value
const placeholderKey = "your_api_key_here";
Expected output: 2 CRITICAL findings
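The usual fix is to read secrets from the environment rather than the source. A minimal sketch (the variable name `STRIPE_KEY` is hypothetical):

```javascript
// Safe pattern: secrets come from the environment, never from source code.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const stripeKey = requireEnv('STRIPE_KEY'); // set via .env or CI secrets
```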
Rule 2: Console Logging PII
Create test file test-console.js:
// ❌ HIGH - Will be detected
const user = { email: "test@example.com", name: "John" };
console.log(user);
// ❌ HIGH - Will be detected
const account = { balance: 1200 };
console.log("User data:", account);
// ✅ OK
console.log("Button clicked");
Expected output: 2 HIGH findings
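One way to keep logs useful without leaking PII is to redact sensitive fields before printing. A sketch; the field list here is an assumption, not part of the tool:

```javascript
// Redact common PII fields before logging.
const SENSITIVE_KEYS = ['email', 'name', 'password', 'token', 'ssn'];

function redact(obj) {
  const safe = {};
  for (const [key, value] of Object.entries(obj)) {
    safe[key] = SENSITIVE_KEYS.includes(key) ? '[REDACTED]' : value;
  }
  return safe;
}

const user = { id: 42, email: 'test@example.com', name: 'John' };
console.log(redact(user)); // { id: 42, email: '[REDACTED]', name: '[REDACTED]' }
```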
Rule 3: Sensitive Data in JSX
Create test file test-jsx.jsx:
// ❌ CRITICAL - Will be detected
function Dashboard({ token }) {
  return <div>{token}</div>;
}

// ❌ CRITICAL - Will be detected
function Profile() {
  return <div>API Key: {user.apiKey}</div>;
}

// ✅ OK
function Welcome({ username }) {
  return <div>Hello {username}</div>;
}
Expected output: 2 CRITICAL findings
Rule 4: Dangerous eval()
Create test file test-eval.js:
// ❌ CRITICAL - Will be detected
eval("alert('XSS')");
// ❌ CRITICAL - Will be detected
const fn = new Function("x", "return x * 2");
// ❌ CRITICAL - Will be detected
setTimeout("console.log('bad')", 1000);
// ✅ OK
setTimeout(() => console.log('good'), 1000);
Expected output: 3 CRITICAL findings
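When eval() is only being used to parse data, JSON.parse is the safe replacement: it parses the string as data and never executes it.

```javascript
const input = '{"retries": 3, "debug": false}';

// ❌ const config = eval('(' + input + ')'); // executes arbitrary code
const config = JSON.parse(input);             // ✅ parses, never executes

console.log(config.retries); // 3
```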
Rule 5: dangerouslySetInnerHTML
Create test file test-innerhtml.jsx:
// ❌ HIGH - Will be detected
function RenderHtml({ html }) {
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}

// ✅ OK
function RenderText({ text }) {
  return <div>{text}</div>;
}
Expected output: 1 HIGH finding
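If untrusted input only needs to be shown as text, escaping it (or letting React render it as a text node, which it does by default) avoids injection. A minimal escaping sketch; for rendering actual rich HTML, a sanitizer such as DOMPurify is the usual choice rather than passing raw input to dangerouslySetInnerHTML:

```javascript
// Minimal HTML escaper for displaying untrusted strings as text.
function escapeHtml(str) {
  return str
    .replace(/&/g, '&amp;')   // must run first, before entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// &lt;img src=x onerror=alert(1)&gt;
```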
🔄 GitHub Actions Integration
Create .github/workflows/security-scan.yml:
name: Security Scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install AI Code Guard
        run: |
          cd path/to/ai-code-guard/cli
          npm install
          npm link
      - name: Run Security Scan
        run: |
          aicode scan src/ --json > scan-results.json
      - name: Upload Results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: security-scan-results
          path: scan-results.json
The scan will fail the build if any CRITICAL issues are found.
📝 Publishing VS Code Extension
To VS Code Marketplace
- Create publisher account at https://marketplace.visualstudio.com/manage
- Update publisher in package.json
- Get Personal Access Token from Azure DevOps
- Publish:
cd vscode-extension
npm install -g @vscode/vsce
vsce login your-publisher-name
vsce publish
To Open VSX (VS Codium)
npm install -g ovsx
ovsx publish
🏗️ Architecture
ai-code-guard/
├── engine/ # Core scanning engine (shared)
│ ├── parser.js # Babel AST parser
│ ├── scanner.js # Rule runner
│ ├── rules/ # Individual security rules
│ │ ├── hardcoded-secrets.js
│ │ ├── console-log-pii.js
│ │ ├── jsx-sensitive-data.js
│ │ ├── dangerous-eval.js
│ │ └── dangerous-innerhtml.js
│ └── index.js
├── cli/ # Command-line tool
│ ├── bin/aicode.js # CLI entry point
│ └── index.js
└── vscode-extension/ # VS Code extension
├── extension.js # Extension logic
└── package.json
🎯 Adding Custom Rules
Create a new rule file in engine/rules/:
const { traverse } = require('../parser');

const RULE_ID = 'my-custom-rule';

function check(ast, report) {
  traverse(ast, {
    // Visitor for specific AST node types
    CallExpression(node) {
      // Your detection logic; the callee name here is just an example
      if (node.callee && node.callee.name === 'someDangerousFn') {
        report({
          rule: RULE_ID,
          severity: 'CRITICAL', // or 'HIGH', 'MEDIUM'
          message: 'Description of the issue',
          line: node.loc?.start.line || 0,
        });
      }
    },
  });
}

module.exports = {
  id: RULE_ID,
  name: 'My Custom Rule',
  severity: 'CRITICAL',
  check,
};
Rules are automatically loaded from the rules/ directory.
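A rule can be exercised without the full engine by collecting its reports directly, roughly what scanner.js does. The harness below is hypothetical and uses a toy AST in place of a real Babel AST so the example is self-contained:

```javascript
// Hypothetical mini-harness: run a rule's check() and collect its findings.
function runRule(rule, ast) {
  const findings = [];
  rule.check(ast, (finding) => findings.push(finding));
  return findings;
}

// Toy rule over a toy AST (real rules walk a Babel AST via traverse()).
const toyRule = {
  id: 'toy-eval-rule',
  check(ast, report) {
    for (const node of ast.nodes) {
      if (node.type === 'CallExpression' && node.callee === 'eval') {
        report({ rule: 'toy-eval-rule', severity: 'CRITICAL',
                 message: 'eval() call', line: node.line });
      }
    }
  },
};

const toyAst = {
  nodes: [
    { type: 'CallExpression', callee: 'eval', line: 3 },
    { type: 'CallExpression', callee: 'log', line: 7 },
  ],
};

console.log(runRule(toyRule, toyAst)); // one CRITICAL finding at line 3
```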
🚫 What This Is Not
❌ Not a replacement for backend security
❌ Not a vulnerability scanner for dependencies
❌ Not runtime monitoring
❌ Not AI-powered (uses static AST analysis)
❌ Does not scan node_modules
This tool focuses on detecting human coding mistakes in frontend code, especially those introduced by AI code generators.
🤝 Contributing
This is a privacy-first tool. When contributing:
- Never add features that upload or store code
- Never add network calls
- Never add telemetry or analytics
- Keep rules focused on high-severity issues
- False positives are acceptable - false negatives are not
📄 License
MIT License - See LICENSE file
🧠 Philosophy
AI code generators are powerful but can introduce security vulnerabilities. This tool helps developers catch these issues before they reach production, with absolute privacy guarantees.
Trust is everything in security tools. We will never compromise on privacy.
🐛 Troubleshooting
Extension not working
- Check VS Code version (need 1.75+)
- Check file extension (.js, .jsx, .ts, .tsx)
- Check Output panel: "AI Code Guard"
CLI not finding issues
- Verify file extensions are supported
- Check if files are in node_modules (skipped by default)
- Run with single file to test:
aicode scan file.js
False positives
This is intentional - the tool is strict by design. Review each finding; if it is a false positive, consider whether clearer variable naming would avoid it.
📞 Support
- Issues: Open a GitHub issue
- Privacy concerns: We take privacy seriously - report any concerns immediately
Built with ❤️ for developers who care about security and privacy