# Adversarial Code Generator

Generate secure, verified code using adversarial AI testing, directly in VS Code.
## ⚠️ Requirements
This extension requires a local backend server to function. The backend performs adversarial security verification on generated code.
### Prerequisites

- Python 3.9 or higher
- Git
- An API key for at least one LLM provider:
  - OpenRouter (recommended)
  - OpenAI
  - Claude/Anthropic
  - Google Gemini
## 🚀 Quick Setup

### Step 1: Clone and Start the Backend Server
```bash
# Clone the repository
git clone https://github.com/a7madiv/adversarial-verification-framework
cd adversarial-verification-framework/backend

# Install dependencies
pip install -r requirements.txt

# Start the server
python api/server.py
```
The server will start on `http://localhost:5001`.
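Before moving on, you can confirm the backend is reachable from a short script. This is a minimal sketch that only assumes the server answers HTTP on the configured port; it does not rely on any particular endpoint the backend exposes:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def backend_is_reachable(url: str = "http://localhost:5001", timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status code)."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        return True   # The server responded (just not with 200) - it is up.
    except (URLError, OSError):
        return False  # Connection refused, timeout, or DNS failure.
```

If this returns `False`, double-check that `python api/server.py` is still running and that no firewall is blocking port 5001.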
### Step 2: Install and Configure the Extension

1. Install this extension from the Marketplace
2. Open VS Code Settings (`Ctrl+,`)
3. Search for "Secure LLM"
4. Configure:
   - **Server URL**: `http://localhost:5001` (default)
   - **AI Provider**: your chosen provider (`openrouter`, `openai`, `claude`, or `gemini`)
   - **API Key**: the API key for that provider
### Step 3: Start Generating Secure Code

1. Click the "Adversarial" button in the status bar, or press `Ctrl+Shift+L` (Windows/Linux) / `Cmd+Shift+L` (Mac)
2. Enter your code requirement
3. Get secure, verified code along with a security report
## ✨ Features

- **Adversarial Verification**: code is tested against security attacks before delivery
- **Multi-Provider Support**: works with OpenAI, Claude, OpenRouter, and Gemini
- **Security Reports**: every generation includes a detailed security analysis
- **Multiple Verification Layers**: static analysis, fuzzing, and dynamic testing
- **Confidence Scoring**: know how secure your generated code is
## 📝 Example Requirements

Try these prompts:

- "Write a function to validate email format using regex"
- "Create a secure password validation function"
- "Generate a function for safe file upload handling"
- "Write a function to sanitize SQL query inputs"
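To illustrate the kind of result the first prompt aims at, here is a hand-written email validator (not actual tool output) using a deliberately conservative regex:

```python
import re

# Conservative pattern: exactly one "@", no whitespace, and a dot in the
# domain part. Full RFC 5322 validation is far more complex; this simply
# rejects obviously malformed input.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if `address` looks like a plausible email address."""
    return bool(_EMAIL_RE.fullmatch(address))
```

For example, `is_valid_email("user@example.com")` returns `True`, while `is_valid_email("not-an-email")` and inputs containing whitespace return `False`.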
## 🔒 How It Works

1. **Blue Agent**: generates code using your chosen LLM
2. **Static Gate**: analyzes the code for security issues (Bandit, Semgrep, Pylint)
3. **Red Agent**: generates adversarial test cases and exploits
4. **Dynamic Gate**: executes the code in a sandboxed environment
5. **Adjudicator**: issues the final security verdict (PASS/WARN/FAIL)
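The final adjudication step can be sketched as follows. The result shape and thresholds here are assumptions for illustration only, not the framework's actual logic:

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    # Hypothetical shape: each gate reports a violation count and a 0..1 confidence.
    name: str
    violations: int
    confidence: float

def adjudicate(results: list[GateResult]) -> str:
    """Combine gate results into a PASS/WARN/FAIL verdict (illustrative thresholds)."""
    total_violations = sum(r.violations for r in results)
    min_confidence = min(r.confidence for r in results)
    if total_violations == 0 and min_confidence >= 0.8:
        return "PASS"
    if total_violations == 0 or min_confidence >= 0.5:
        return "WARN"
    return "FAIL"
```

The key design idea is that PASS requires every gate to agree: a single low-confidence gate or any violation downgrades the verdict.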
## 📊 What You Get

Each code generation produces:

- **Main Code File**: the secure, verified implementation
- **Security Report**: a detailed analysis including:
  - Verification status (PASS/WARN/FAIL)
  - Confidence score
  - Security violations found
  - Evidence and recommendations
## ⚙️ Configuration Options

| Setting | Default | Description |
| --- | --- | --- |
| `secureLlm.serverUrl` | `http://localhost:5001` | Backend server URL |
| `secureLlm.timeout` | `120000` | Request timeout (ms) |
| `secureLlm.aiProvider` | `openrouter` | AI provider to use |
| `secureLlm.openrouterApiKey` | `""` | OpenRouter API key |
| `secureLlm.openaiApiKey` | `""` | OpenAI API key |
| `secureLlm.claudeApiKey` | `""` | Claude API key |
| `secureLlm.geminiApiKey` | `""` | Gemini API key |
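For example, using the setting keys above, a typical `settings.json` entry might look like this (the API key value is a placeholder):

```json
{
  "secureLlm.serverUrl": "http://localhost:5001",
  "secureLlm.timeout": 120000,
  "secureLlm.aiProvider": "openrouter",
  "secureLlm.openrouterApiKey": "<your-openrouter-api-key>"
}
```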
## 🛠️ Troubleshooting

**"Cannot connect to server"**

- Make sure the backend server is running: `python api/server.py`
- Check that the server URL in settings matches `http://localhost:5001`

**"API key not found"**

- Configure your API key in VS Code settings, or
- Set the corresponding environment variable in the backend `.env` file
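If you prefer keeping keys out of VS Code settings, a `.env` file in the backend directory is an alternative. The variable names below are assumptions for illustration; check the backend repository for the exact names it expects:

```
# backend/.env  (variable names are illustrative assumptions)
OPENROUTER_API_KEY=<your-openrouter-api-key>
OPENAI_API_KEY=<your-openai-api-key>
```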
**"Request timeout"**

- Increase `secureLlm.timeout` in settings (default: `120000` ms, i.e. 120 seconds)
- Complex requirements may take longer to verify
## 📚 Documentation

See the main repository for full documentation.
## 🤝 Contributing

Contributions are welcome! See the main repository for details.
## 📄 License

MIT License. See the LICENSE file for details.
## 🔗 Links

- Repository: https://github.com/a7madiv/adversarial-verification-framework
> **Note:** This extension is designed for developers who want to generate secure code with confidence. The adversarial verification process ensures code is tested against real security attacks before you use it.