Lore — AI Codebase Intelligence
Your codebase has a story. Now you can read it.
Ask plain English questions about any codebase and get accurate, sourced answers — without sending a single line of code to the cloud. Lore runs entirely on your own hardware using local AI models via Ollama.
Privacy-first. Air-gap compatible. No API keys. No telemetry. Ever.
Features
Ask Questions About Your Codebase
Type any question in the Lore sidebar and get a streaming, word-by-word answer sourced from your actual code — with file references.
- "How does authentication work?"
- "Where is the payment processing logic?"
- "What calls this function?"
In-Editor Analysis
- Smell score — see the risk score for the file you're currently editing
- Change impact — find out what breaks if you change the current file
- Architecture diagrams — interactive module, dependency, and data flow diagrams inside VS Code
Keyboard Shortcuts
| Shortcut | Action |
| --- | --- |
| `Ctrl+Shift+L` | Ask a question about the codebase |
| `Ctrl+Shift+Alt+L` | Ask about selected code |
Select any code and right-click to ask Lore about it directly from the editor context menu.
Setup
Step 1 — Install the Lore backend
git clone https://github.com/smithbuilds/lore
cd lore
python -m venv venv
source venv/Scripts/activate # Windows Git Bash
# source venv/bin/activate # Mac/Linux
pip install -e .
Step 2 — Run the setup wizard
# Auto-detects your stack, pulls the right AI model for your hardware,
# indexes your codebase, and installs the git auto-reindex hook
lore init --path /path/to/your/codebase --wizard
First time? Run `lore doctor` to check that all dependencies are in place before indexing.
Step 3 — Open VS Code
- Install this extension
- Open your codebase as a workspace folder
- Click the Lore icon in the activity bar
- Start asking questions
Requirements
- Ollama — runs the local AI model (auto-installed by the wizard)
- Python 3.10+ with the Lore CLI installed
- Or: Docker, with `docker compose up lore-server -d`
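If you take the Docker route, a minimal compose file might look like the sketch below. Only the `lore-server` service name and the default port 8000 come from this README; the image name and volume mount are illustrative assumptions, not part of the official distribution:

```yaml
services:
  lore-server:
    # Hypothetical image tag -- substitute the image you build from the Lore repo
    image: lore-server:local
    ports:
      - "8000:8000"   # matches the default lore.serverUrl (http://localhost:8000)
    volumes:
      # Mount the codebase read-only; indexing stays on this machine
      - /path/to/your/codebase:/workspace:ro
```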
Configuration
| Setting | Default | Description |
| --- | --- | --- |
| `lore.serverUrl` | `http://localhost:8000` | URL of the Lore server |
| `lore.cliPath` | `lore` | Path to the `lore` CLI executable |
| `lore.codebasePath` | (workspace root) | Override the codebase path |
| `lore.preferServer` | `true` | Use the server if available, fall back to the CLI |
| `lore.autoSmellOnSave` | `false` | Show the smell score on every file save |
Privacy
Lore runs entirely on your own hardware. No code ever leaves your network.
- No cloud API calls — the AI model runs via Ollama locally
- No telemetry or usage data — Lore does not phone home
- Air-gap compatible — works with zero internet connectivity
- On-premise vector database — embeddings stored locally in ChromaDB
- SBOM generation is fully offline — CVE checks use a local database
Suitable for defense contractors, healthcare, government, and any organization with strict data compliance requirements.
Full CLI
The Lore VS Code extension connects to the same backend that powers the full CLI. From the terminal you can also run:
lore deps --format html # Shareable HTML security report with CVE badges
lore deps --format sbom # CycloneDX 1.4 SBOM for compliance
lore changelog --last 10 # Plain English changelog from git history
lore dead --path . # Dead code detection
lore onboard --role backend # Onboarding guide for new engineers
lore diagram --format html # Interactive architecture diagrams
Full command reference →
License
Business Source License 1.1 — free for individual, non-commercial use.
Converts to Apache 2.0 on April 2, 2030.
Team and enterprise licensing: get-lore.com
Built by SmithBuilds LLC