Vortex Developer Assistant

Guy Lerner

Intelligent AI routing extension for VS Code - automatically selects optimal AI models for development tasks, with cost optimization and seamless integration.
Intelligent AI routing with built-in verification for developers who need both cost optimization and trustworthy outputs.

(Recommendation: Create a short GIF showing the Smart vs. Manual toggle, the chat interface, and a command in action, then replace this link.)

The Problem: The AI Trust Crisis

AI developer tools promise massive productivity gains, yet they often fall short.

66% of developers are frustrated with "almost right" AI solutions that require extensive manual debugging.

Hidden API costs from inefficient model usage can quickly spiral out of control.

The fear of sending sensitive code to the cloud creates a barrier to adoption.

Vortex was built to solve this. It's an intelligent AI assistant that you can actually trust—to give you the best results, at the best price, with absolute control over your privacy.

Key Features

Vortex integrates seamlessly into your VS Code workflow, replacing guesswork with intelligent, transparent, and controllable AI assistance.

🧠 Smart & 🎯 Manual Routing

Smart Mode (Default): Vortex analyzes your request (e.g., "generate code," "find bugs") and uses a research-backed configuration to route it to the model best suited to the task (such as Claude for coding or Gemini for documentation).

Manual Mode: Want to use a specific provider? Simply toggle to Manual Mode and lock the extension to OpenAI, Anthropic, Google, or your local Ollama instance for all subsequent requests.
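The Smart/Manual behavior can be pictured as a small lookup table with a manual override. This is an illustrative sketch only - `pickProvider`, the task names, and the routing table are hypothetical, not Vortex's actual internals:

```typescript
type Provider = "openai" | "anthropic" | "google" | "ollama";

// Hypothetical Smart Mode routing table (the real one is research-backed and internal).
const smartRoutes: Record<string, Provider> = {
  "generate code": "anthropic", // e.g. Claude for coding
  "write docs": "google",       // e.g. Gemini for documentation
  "find bugs": "openai",
};

let manualProvider: Provider | null = null; // set when Manual Mode is toggled on

function pickProvider(task: string): Provider {
  if (manualProvider !== null) return manualProvider; // Manual Mode locks one provider
  return smartRoutes[task] ?? "openai";               // Smart Mode lookup with a default
}
```

In Manual Mode, setting the override once pins every subsequent request to that provider, matching the "lock the extension" behavior described above.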

🔒 Trust-First Privacy Mode

Gain absolute control over your code with a simple, always-visible toggle in the status bar.

Cloud Mode ☁️: Enables smart routing to the best cloud providers.

Private Mode 🔒: Guarantees 100% local-only processing. All requests are sent to your local Ollama instance, and no data ever leaves your machine.
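The toggle acts as a final gate on top of whatever the router chose. A minimal sketch, assuming a hypothetical `applyPrivacyMode` helper (not Vortex's real code):

```typescript
type Provider = "openai" | "anthropic" | "google" | "ollama";

// In Private Mode every request is pinned to local Ollama, regardless of
// what the router chose; in Cloud Mode the routed provider passes through.
function applyPrivacyMode(privateMode: boolean, routed: Provider): Provider {
  return privateMode ? "ollama" : routed;
}
```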

📊 Transparent & Actionable UI

No Black Box: The status bar always shows you which provider is being used, the cost of the request, token usage, and processing time.

Session-Only Chat: A clean, native VS Code chat panel that keeps your conversation history only for the current session. All history is cleared on restart, ensuring zero persistence and maximum privacy.

On-Demand Panels: Get deeper insights when you need them with header buttons to open temporary panels for Session Performance, Trust Details, and Settings.

🚀 Latest AI Models & Accurate Costs

Vortex comes pre-configured with the latest high-performance models from OpenAI (GPT-4o, o1), Anthropic (Claude 3.5), and Google (Gemini 1.5).

Because Vortex counts tokens with provider-specific tokenizers, its cost estimates land within 5% of your actual bill - no billing surprises.
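Conceptually the estimate is just token counts multiplied by per-model rates. A sketch with placeholder numbers - the prices below are made up for illustration, not real provider rates:

```typescript
// Hypothetical USD prices per 1M tokens - placeholders, not actual rates.
const pricePerMTok: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "claude-3.5": { input: 3, output: 15 },
};

function estimateCostUSD(model: string, inTokens: number, outTokens: number): number {
  const p = pricePerMTok[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  // Accuracy in practice depends on counting tokens with the provider's own tokenizer.
  return (inTokens * p.input + outTokens * p.output) / 1_000_000;
}
```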

💻 Comprehensive Local Model Support

Full integration with Ollama for privacy-first local processing.

Extensive support for top coding models like CodeLlama, DeepSeek-Coder, and Qwen2.5-Coder, including different parameter sizes and quantization formats (GGUF, GPTQ, AWQ).

User-Friendly Onboarding: If Ollama isn't running, Vortex provides clear, actionable guidance to help you get started.

🔐 Secure Credential Management

Environment Variables (Recommended): The most secure method. Your API keys are never visible to the extension, providing zero-trust security.

VS Code SecretStorage (Fallback): A user-friendly alternative that stores your keys securely within VS Code's encrypted storage.

Setup & Configuration

Get started in under a minute. Vortex works out-of-the-box with local models or can be configured with your cloud API keys.

For Local Use (100% Private):

  1. Install Ollama on your machine
  2. Pull your desired models (e.g., ollama pull codellama:7b)
  3. Toggle Vortex to 🔒 Private Mode. That's it!

For Cloud Use (Smart Routing):

Method 1 (Recommended): Set your API keys as environment variables. Vortex will automatically detect them on startup.

```shell
# Add these to your shell profile (e.g. ~/.bashrc or ~/.zshrc) so VS Code inherits them:
export VORTEX_OPENAI_KEY="sk-..."
export VORTEX_ANTHROPIC_KEY="sk-ant-..."
export VORTEX_GOOGLE_AI_KEY="..."
```

Method 2: Open the command palette (Ctrl+Shift+P) and run Vortex: Setup Credentials for a guided flow to save your keys to VS Code's encrypted storage.
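The precedence between the two methods (environment variable first, SecretStorage as fallback) can be sketched as follows, where `getSecret` is a stand-in for VS Code's SecretStorage lookup and the function name is illustrative:

```typescript
// Environment variables win; SecretStorage is only consulted when the
// variable is absent. "getSecret" is a stand-in for the SecretStorage API.
function resolveKey(
  envName: string,
  env: Record<string, string | undefined>,
  getSecret: (name: string) => string | undefined,
): string | undefined {
  return env[envName] ?? getSecret(envName);
}
```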

Commands

Access all features through the Command Palette (Ctrl+Shift+P):

  • Vortex: Analyze Code - Deep analysis of selected code
  • Vortex: Generate Code - AI-powered code generation
  • Vortex: Review Code - Comprehensive code review
  • Vortex: Generate Tests - Create unit/integration tests
  • Vortex: Toggle Privacy Mode - Switch between 🔒/☁️ modes
  • Vortex: Toggle Routing Mode - Switch between 🧠/🎯 modes
  • Vortex: Manage Local Models - Ollama model management

...and many more!

Installation

  1. Open VS Code
  2. Go to Extensions (Ctrl+Shift+X)
  3. Search for "Vortex"
  4. Install the extension
  5. Follow the setup guide above for configuration

Architecture

Vortex is built on a trust-first architecture with these core components:

  • SimplifiedVortexRouter - Fast routing decisions with emergency fallbacks
  • PrivacyModeManager - Toggle authority with persistent state
  • LocalModelManager - Ollama integration with status monitoring
  • OllamaAdapter - Local model execution with timeout protection
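As an illustration of the "emergency fallbacks" idea: try the routed provider first, then walk a fallback chain until something is available. The logic and names below are hypothetical, not the actual `SimplifiedVortexRouter`:

```typescript
type Provider = "openai" | "anthropic" | "google" | "ollama";

// Try the preferred provider first; if it is unavailable, walk an
// emergency fallback chain until one answers.
function routeWithFallback(
  preferred: Provider,
  available: Set<Provider>,
  fallbacks: Provider[] = ["ollama", "openai"],
): Provider {
  if (available.has(preferred)) return preferred;
  for (const p of fallbacks) {
    if (available.has(p)) return p;
  }
  throw new Error("no provider available");
}
```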

Requirements

  • VS Code 1.80.0 or higher
  • For local models: Ollama installation
  • For cloud providers: Valid API keys

License

MIT License - See LICENSE file for details.
