# Vortex Developer Assistant

Intelligent AI routing with built-in verification for developers who need both cost optimization and trustworthy outputs.

*(Recommendation: Create a short GIF showing the Smart vs. Manual toggle, the chat interface, and a command in action, then replace this link.)*

## The Problem: The AI Trust Crisis

AI developer tools promise massive productivity gains, yet they often fall short:

- 66% of developers are frustrated with "almost right" AI solutions that require extensive manual debugging.
- Hidden API costs from inefficient model usage can quickly spiral out of control.
- The fear of sending sensitive code to the cloud creates a barrier to adoption.

Vortex was built to solve this. It is an intelligent AI assistant you can actually trust to give you the best results, at the best price, with full control over your privacy.

## Key Features

Vortex integrates seamlessly into your VS Code workflow, replacing guesswork with intelligent, transparent, and controllable AI assistance.

### 🧠 Smart & 🎯 Manual Routing

- **Smart Mode (default):** Vortex analyzes your request (e.g., "generate code," "find bugs") and uses a research-backed configuration to route it to the best AI model for the job (such as Claude for coding or Gemini for documentation).
- **Manual Mode:** Want to use a specific provider? Toggle to Manual Mode and lock the extension to OpenAI, Anthropic, Google, or your local Ollama instance for all subsequent requests.

### 🔒 Trust-First Privacy Mode

Gain full control over your code with a simple, always-visible toggle in the status bar.

- **Cloud Mode ☁️:** Enables smart routing to the best cloud providers.
- **Private Mode 🔒:** Guarantees 100% local-only processing. All requests go to your local Ollama instance, and no data ever leaves your machine.

### 📊 Transparent & Actionable UI

- **No black box:** The status bar always shows which provider is in use, the cost of the request, token usage, and processing time.
- **Session-only chat:** A clean, native VS Code chat panel that keeps your conversation history only for the current session. All history is cleared on restart, ensuring zero persistence and maximum privacy.
- **On-demand panels:** Get deeper insights when you need them with header buttons that open temporary panels for Session Performance, Trust Details, and Settings.

### 🚀 Latest AI Models & Accurate Costs

- Vortex comes pre-configured with the latest high-performance models from OpenAI (GPT-4o, o1), Anthropic (Claude 3.5), and Google (Gemini 1.5).
- Using provider-specific tokenizers, cost estimates are accurate to within 5% of your actual bill, eliminating surprises.

### 💻 Comprehensive Local Model Support

- Full integration with Ollama for privacy-first local processing.
- Extensive support for top coding models such as CodeLlama, DeepSeek-Coder, and Qwen2.5-Coder, including different parameter sizes and quantization formats (GGUF, GPTQ, AWQ).
- **User-friendly onboarding:** If Ollama isn't running, Vortex provides clear, actionable guidance to help you get started.

### 🔐 Secure Credential Management

- **Environment variables (recommended):** The most secure method. Your API keys are never visible to the extension, providing zero-trust security.
- **VS Code SecretStorage (fallback):** A user-friendly alternative that stores your keys securely in VS Code's encrypted storage.

## Setup & Configuration

Get started in under a minute. Vortex works out of the box with local models or can be configured with your cloud API keys.

### For Local Use (100% Private)
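A typical local setup installs Ollama and pulls a coding model for Vortex to route to. The commands below are a sketch using the standard Ollama CLI; the model tag is an illustrative choice, not a Vortex requirement:

```shell
# Install Ollama (macOS/Linux; see https://ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a local coding model (illustrative; any Ollama-supported model works)
ollama pull qwen2.5-coder

# Start the local server that Vortex connects to (default: http://localhost:11434)
ollama serve
```

With the server running, enable Private Mode 🔒 in the status bar and all requests stay on your machine.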
### For Cloud Use (Smart Routing)

**Method 1 (recommended):** Set your API keys as environment variables. Vortex automatically detects them on startup.
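For example, in your shell profile. The variable names below are the conventional ones for each provider; confirm the exact names Vortex expects:

```shell
# Conventional provider key variables (placeholders shown; use your real keys)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
```

Restart VS Code after setting these so the extension host picks up the new environment.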
**Method 2:** Open the Command Palette (Ctrl+Shift+P) and run the Vortex command for entering your keys, which stores them in VS Code SecretStorage.

## Commands

Access all features through the Command Palette (Ctrl+Shift+P):
...and many more!

## Installation
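If Vortex is distributed through the VS Code Marketplace, a command-line install typically looks like this (the extension ID below is a placeholder, not the real identifier):

```shell
# Install via the VS Code CLI (placeholder extension ID)
code --install-extension publisher.vortex
```

Alternatively, search for "Vortex" in the Extensions view (Ctrl+Shift+X) and install from there.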
## Architecture

Vortex is built on a trust-first architecture with these core components:
## Requirements
## License

MIT License. See the LICENSE file for details.