Velox

ARNAV TYAGI

Local RAG + Ollama assistant with in-IDE chat, diagrams, and security audit.
🌘 Velox

Your Local, Offline, AI-Powered Pair Programmer.

Velox is a premium, locally-hosted VS Code extension designed to bring the power of RAG (Retrieval-Augmented Generation) and architectural visualization directly into your IDE—without sending a single line of your code to the cloud.


✨ Features

  1. Local RAG Chat (Ask): Ask your AI assistant to explain functions, debug logic, or rewrite code. The extension intelligently parses your active file and answers accurately.
  2. Mermaid Flowchart Generator (Visualize): Need to understand a massive 500-line file? Click "Visualize" to generate a comprehensive, crash-proof architectural map of every component and router directly inside your chat panel.
  3. Automated Security Audits (Audit): Instantly scan your active file for vulnerabilities using local heuristic evaluation.
  4. 100% Offline & Private: Powered by your own local instances of Ollama and Python. Zero cloud APIs, absolute privacy.
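
As an illustration of the kind of diagram the Visualize feature produces, a generated Mermaid flowchart for a small router-based file might look like this (the file and component names here are hypothetical, not output from the extension):

```mermaid
flowchart TD
    A[app.js] --> B[authRouter]
    A --> C[userRouter]
    B --> D[loginHandler]
    C --> E[getUserHandler]
```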

🚀 Quick Start Guide

Since this extension runs the AI engine locally on your hardware, there are two brief prerequisite installations.

1. Install Ollama (The Brain)

We use Ollama to power the chat and visualization engine.

  1. Download Ollama and install it on your machine.
  2. Open your terminal and pull the default lightweight coding model:
    ollama pull codellama:7b-instruct
    

(You can switch to a different model later via the extension's settings.)
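
If you want to confirm the model responds before using it from the extension, you can call Ollama's local HTTP API directly. The sketch below is an independent check, not part of Velox itself; it assumes Ollama's default port (11434) and uses only the Python standard library against the documented non-streaming `/api/generate` endpoint.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """POST the payload and return the model's full response text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires Ollama running locally):
# ask_ollama("codellama:7b-instruct", "Explain closures in one sentence.")
```

If the call returns a response, the extension's chat and visualization features should be able to reach the same model.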

2. Install Python Dependencies (The Vector Memory)

The RAG memory system uses a lightweight local FastAPI backend bundled inside the extension's files.

  1. Ensure you have Python installed.
  2. Navigate to the extension's python-backend directory and install the requirements:
    cd ~/.vscode/extensions/velox/python-backend
    pip install -r requirements.txt
    

(Note: Adjust the path above to match wherever you cloned or installed the extension.)
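
To give a feel for what the "vector memory" does conceptually, here is a minimal, dependency-free sketch of retrieval by cosine similarity over bag-of-words vectors. The real backend presumably uses proper embeddings and a vector store; this only illustrates the retrieve-the-most-relevant-chunk idea behind RAG.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


chunks = [
    "def login(user): checks the password hash",
    "def render_chart(data): draws a mermaid diagram",
    "def connect_db(): opens the sqlite connection",
]
print(retrieve("how does the password login work", chunks))
```

The retrieved chunk is then injected into the model's prompt as context, which is what lets a small local model answer accurately about your specific file.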


💻 Usage

  • Open any code file (.js, .ts, .py, etc.).
  • The extension's icon will appear on your VS Code Activity Bar (Sidebar).
  • Click Open Chat.
  • Ask: Ask questions about the current file's logic.
  • Visualize: Click the Visualize button to see a beautifully generated interactive Mermaid flowchart summarizing the file's layout.
  • Audit: Click Audit to run a security and bug scan of the active file.

⚙️ Optimization & Settings

You can tweak the AI's intelligence via File > Preferences > Settings > Extensions > Velox:

  • Context Window (ollamaNumCtx): Default is 4096. If your graphics card has 12GB+ of VRAM, try bumping this to 8192 so the AI can read multi-thousand-line files in one pass.
  • Temperature: Default is 0.2 to keep answers grounded in reality, but you can raise it for more creative code generation.
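
In settings.json form, the two options above might look like this. The exact setting IDs are an assumption based on the names shown in the list; verify them against the extension's settings UI:

```jsonc
{
  // Hypothetical setting IDs -- confirm in the Velox settings page.
  "velox.ollamaNumCtx": 8192,   // larger context window; ~12GB+ VRAM recommended
  "velox.temperature": 0.2      // low temperature keeps answers grounded
}
```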

Built entirely locally, ensuring your proprietary code never leaves your desk.
