# RespCode - Multi-Architecture Code Generation and Verification

Generate code with 11+ AI models and execute it on x86, ARM64, RISC-V, and Verilog/VHDL simulators.

## ✨ What's New in v2.3.0

- Dynamic Model Loading - models are fetched from the API, so new models appear automatically
- 11+ AI Models - Claude, GPT-4o, Gemini, DeepSeek, Llama, Qwen, and more
- Improved Caching - faster model selection with a 5-minute cache
- Better Error Handling - clearer error messages and fallbacks
## Features

- Generate - AI generates code and executes it instantly
- Compete - Compare 4 AI models side-by-side
- Collaborate - Models refine each other's code in a pipeline
- Consensus - Models vote and merge the best solution
- Execute - Run your own code on any architecture
## Supported Architectures

| Architecture | Provider | Description |
|---|---|---|
| 💻 x86_64 | Daytona | Intel/AMD servers |
| 🔥 ARM64 | Firecracker | Apple Silicon, AWS Graviton |
| ⚡ RISC-V 64 | QEMU | RISC-V development |
| 🔧 ARM32 | Firecracker | Embedded ARM |
| 🔌 Verilog/VHDL | Icarus/GHDL | HDL simulation |
## Supported AI Models

### Anthropic

| Model | Credits | Description |
|---|---|---|
| 🟣 Claude Sonnet 4.5 | 6 | Best quality, complex tasks |
| 🟣 Claude Haiku 3.5 | 1 | Fast & affordable |

### OpenAI

| Model | Credits | Description |
|---|---|---|
| 🟢 GPT-4o | 5 | OpenAI flagship |
| 🟢 GPT-4o Mini | 1 | Budget option |

### Google

| Model | Credits | Description |
|---|---|---|
| 🟡 Gemini 2.5 Pro | 7 | Premium quality |
| 🟡 Gemini 2.5 Flash | 2 | Fast & efficient |

### DeepSeek

| Model | Credits | Description |
|---|---|---|
| 🔵 DeepSeek Coder | 2 | Code specialist |
| 🔵 DeepSeek Chat | 2 | General purpose |

### Groq (Open Source)

| Model | Credits | Description |
|---|---|---|
| 🟠 Llama 3.3 70B | 2 | Meta's best open model |
| 🟠 Qwen 3 32B | 2 | Alibaba's code model |
| 🟠 Llama 3.1 8B | 1 | Fastest inference |

Note: Models are fetched dynamically from the API. New models appear automatically!
## Getting Started

- Install the extension from the VS Code Marketplace
- Get an API key at respcode.com/settings/api-keys
- Set the API key: `Ctrl+Shift+P` → `RespCode: Set API Key`
- Start generating: `Ctrl+Shift+R` → select an action
## Commands

| Command | Shortcut | Description |
|---|---|---|
| RespCode: Menu | `Ctrl+Shift+R` | Open the command menu |
| RespCode: Execute | `Ctrl+Shift+E` | Run the current file |
| RespCode: Generate | - | Single-model generation |
| RespCode: Compete | - | Compare 4 models |
| RespCode: Collaborate | - | Pipeline refinement |
| RespCode: Consensus | - | Vote for the best solution |
| RespCode: History | - | View past prompts |
| RespCode: Credits | - | Check your balance |
| RespCode: Set API Key | - | Configure authentication |
## Examples

### Generate Code

- Press `Ctrl+Shift+R`
- Select "Generate"
- Enter: "Fibonacci function in Rust"
- Select architecture: ARM64
- Select model: Claude Sonnet 4.5
- The code appears in the editor, the output in the panel
### Execute Your Code

- Open a C/Python/Rust file
- Press `Ctrl+Shift+E`
- Select the target architecture
- View the output in the RespCode panel
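For instance, you might open a small Python file like the following (a hypothetical `fib.py`, written here purely as sample user code) and run it with `Ctrl+Shift+E`:

```python
# fib.py - a minimal example file to execute via RespCode
def fib(n: int) -> int:
    """Return the n-th Fibonacci number iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    # Print the first ten Fibonacci numbers
    print([fib(i) for i in range(10)])
```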
### Compare AI Models (Compete)

- Press `Ctrl+Shift+R`
- Select "Compete"
- Enter your prompt
- All 4 models generate code simultaneously
- See results side-by-side with execution output
### Collaborate Mode

- Press `Ctrl+Shift+R`
- Select "Collaborate"
- Choose 2-4 models for the pipeline
- The first model generates; subsequent models improve on it
- The final refined code appears in the editor
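Conceptually, the Collaborate pipeline chains models so each one refines its predecessor's output. A minimal Python sketch of that idea (the `Model` type and the toy stand-ins are illustrative, not the extension's real API):

```python
# Hypothetical sketch of a Collaborate-style pipeline (illustrative only).
from typing import Callable, List

# A "model" takes (prompt, previous_code) and returns refined code.
Model = Callable[[str, str], str]

def collaborate(prompt: str, pipeline: List[Model]) -> str:
    """The first model generates from scratch; each later model refines the previous output."""
    code = ""
    for model in pipeline:
        code = model(prompt, code)
    return code

# Toy stand-ins for real models:
generator = lambda prompt, prev: f"# {prompt}\nresult = 42"
refiner = lambda prompt, prev: prev + "\nprint(result)"

final = collaborate("compute the answer", [generator, refiner])
```

The key design point is that later models see the accumulated code, not just the prompt, so each stage can fix or extend earlier work.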
### Consensus Mode

- Press `Ctrl+Shift+R`
- Select "Consensus"
- All models generate independently
- AI merges the best parts from each
- The synthesized code appears in the editor
## Credit Costs

| Action | Cost |
|---|---|
| Execute only | 1 credit |
| Generate (varies by model) | 1-7 credits |
| Compete (4 models) | ~15 credits |
| Collaborate (2-4 models) | Sum of the selected models' credits |
| Consensus (4 models + merge) | ~20 credits |
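Since Collaborate charges the sum of the selected models' credits, you can estimate a run's cost from the model tables above. A quick sketch (the dictionary keys are illustrative names, not official model slugs):

```python
# Per-model generation credits, taken from the tables above.
CREDITS = {
    "claude-sonnet-4.5": 6,
    "claude-haiku-3.5": 1,
    "gpt-4o": 5,
    "gpt-4o-mini": 1,
    "gemini-2.5-pro": 7,
    "gemini-2.5-flash": 2,
    "deepseek-coder": 2,
    "llama-3.3-70b": 2,
}

def collaborate_cost(models):
    """Collaborate cost is the sum of each selected model's generation credits."""
    return sum(CREDITS[m] for m in models)

# Example: a Claude Sonnet 4.5 -> GPT-4o Mini pipeline costs 6 + 1 = 7 credits.
cost = collaborate_cost(["claude-sonnet-4.5", "gpt-4o-mini"])
```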
## Requirements
## Troubleshooting

### Missing or invalid API key

Run `RespCode: Set API Key` and enter your key from respcode.com/settings/api-keys.

### "Insufficient credits"

Check your balance at respcode.com/credits and purchase more if needed.

### Models not loading

The extension caches the model list for 5 minutes. Restart VS Code or wait for the cache to expire.
## Links

## Support
## Changelog

### v2.3.0

- Dynamic model fetching from the API
- Added support for 11+ AI models
- 5-minute model caching
- Fixed Gemini model slug
- Improved error handling

### v2.1.2

- Fixed compete/consensus display
- Progress bars for all operations
- Bug fixes

### v2.0.0

- Initial public release
- 4 generation modes
- Support for 5 architectures
## License

MIT License - see LICENSE for details.