# Lattice
Instead of sending entire files or relying on text search, Lattice gives your AI assistant precisely the functions, classes, and relationships it needs — about 40% fewer tokens on average.

The biggest savings come from using `get_skeleton`/`get_context_capsule` to avoid loading entire files, and from graph lookups that replace broad `rg` + manual-read loops. Approximate savings by workload:

- Broad codebase exploration: 60-80%
- Deep-dive discovery tasks: 35-55%
- Targeted known-file edits: 0-15%
- Mixed workload average: ~40%
## How It Works
- Indexes your codebase — tree-sitter parses every source file into symbols (functions, classes, interfaces) and edges (calls, imports, extends)
- Builds a dependency graph — petgraph stores the full call graph with centrality scores, cross-directory relationships, and IDF-weighted keyword indices
- Serves Context Capsules — when an LLM asks "how does authentication work?", the query engine returns the most relevant pivot symbols (full source) and context symbols (signatures only), within a token budget
- Remembers across sessions — observations, decisions, and patterns persist in SQLite and surface automatically when relevant
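The budgeted pivot/context selection in step three can be sketched as a greedy loop. This is an illustrative sketch only — `Symbol`, `build_capsule`, and the 4-characters-per-token estimate are assumptions, not Lattice's real API:

```rust
// Hypothetical sketch of capsule assembly — names and the token
// estimate are illustrative assumptions, not Lattice's real code.

struct Symbol {
    name: String,
    signature: String, // e.g. "fn login(user: &str) -> Token"
    source: String,    // full body; larger than the signature
    score: f64,        // relevance to the query
}

/// Crude token estimate: roughly 4 characters per token.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4 + 1
}

/// Greedy capsule builder: the top-ranked symbols become pivots and
/// contribute full source; the rest contribute signatures only; stop
/// as soon as the token budget would be exceeded.
fn build_capsule(mut symbols: Vec<Symbol>, budget: usize, pivot_count: usize) -> Vec<String> {
    symbols.sort_by(|a, b| b.score.partial_cmp(&a.score).unwrap());
    let mut used = 0;
    let mut capsule = Vec::new();
    for (i, sym) in symbols.iter().enumerate() {
        let text = if i < pivot_count { &sym.source } else { &sym.signature };
        let cost = estimate_tokens(text);
        if used + cost > budget {
            break;
        }
        used += cost;
        capsule.push(format!("{} => {}", sym.name, text));
    }
    capsule
}
```

A real implementation would rank with the query engine's combined score and count tokens with the model's tokenizer; the greedy loop only shows the shape of the budgeted selection.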
## Supported Languages
Python, TypeScript, JavaScript, Rust, Go, Java
## Architecture

```
VS Code Extension (TypeScript)
        │
        │ JSON-RPC over stdio
        ▼
Lattice Daemon (Rust)
  ├── tree-sitter parser (6 languages)
  ├── petgraph dependency graph
  ├── query engine (keyword + graph scoring)
  ├── memory store (SQLite)
  └── file watcher (incremental re-indexing)
```
## MCP Server Setup

Lattice exposes its tools via MCP. To connect it to Claude Code, Codex, or any MCP-compatible client, add to your project's `.mcp.json`:

```json
{
  "mcpServers": {
    "lattice": {
      "type": "stdio",
      "command": "/path/to/lattice",
      "args": ["--stdio", "--workspace", "/path/to/your/project"]
    }
  }
}
```
## Tools

- `get_context_capsule` — when exploring unfamiliar code or broad questions (use `mode: "focused"` for targeted lookups)
- `get_impact_graph` — before refactoring, to understand blast radius
- `search_symbols` — when looking for a symbol by name across the project
- `get_skeleton` — for a quick overview of a large file's structure
- `search_logic_flow` — to trace call chains between functions
- `save_observation` / `get_session_context` / `search_memory` — persist and recall insights across sessions
- `list_observations` — to review stored memories and clean up stale ones
- `update_observation` — to edit an existing observation's content in place
- `delete_observation` — to remove obsolete or incorrect memories
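Once connected, an MCP client invokes these tools through the standard MCP `tools/call` request. The argument name `query` below is an assumption for illustration; `mode: "focused"` is the mode mentioned above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_context_capsule",
    "arguments": {
      "query": "how does authentication work?",
      "mode": "focused"
    }
  }
}
```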
## Query Engine
The query engine (v31) combines keyword matching with graph-based scoring to find relevant symbols. Key mechanisms:
- IDF-weighted keyword scoring — per-word inverse document frequency with a 30% floor
- Graph traversal — follows call/import edges from seed hits, with cross-directory decay
- Hub dampening — log-compressed centrality prevents infrastructure functions from dominating
- Keyword coherence gate — graph-traversed nodes must share at least one query word
- Negative keyword signal — symbols with 2+ strong name parts absent from the query get capped (prevents wrong-subsystem matches)
- Word-boundary matching — `split_identifier` prevents "dispatch" from matching "patch"
- Intent detection — adjusts budget and scoring weights for Explore/FixBug/Refactor/AddFeature queries
Benchmarked at 96.5% average precision.
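Two of the mechanisms above can be illustrated in miniature. This is a hedged sketch, not the engine's actual implementation — the function names mirror the list (`split_identifier`, IDF with a 30% floor), but the bodies are assumptions:

```rust
// Illustrative sketch of word-boundary matching and floored IDF
// weighting — NOT Lattice's actual code; the bodies are assumptions.

/// Split an identifier on underscores and camelCase humps, lowercased.
/// A query word must then match a whole part, so "patch" never matches
/// the substring buried inside "dispatchEvent".
fn split_identifier(ident: &str) -> Vec<String> {
    let mut parts = Vec::new();
    let mut cur = String::new();
    for ch in ident.chars() {
        if ch == '_' {
            if !cur.is_empty() {
                parts.push(cur.to_lowercase());
                cur.clear();
            }
        } else if ch.is_uppercase() && !cur.is_empty() {
            parts.push(cur.to_lowercase());
            cur = ch.to_string();
        } else {
            cur.push(ch);
        }
    }
    if !cur.is_empty() {
        parts.push(cur.to_lowercase());
    }
    parts
}

/// IDF weight with a 30% floor: rare words score higher, but no
/// matched word contributes less than 30% of the maximum weight.
fn idf_weight(total_docs: usize, docs_with_word: usize) -> f64 {
    let max = (total_docs as f64 + 1.0).ln();
    let raw = ((total_docs as f64 + 1.0) / (docs_with_word as f64 + 1.0)).ln();
    raw.max(0.30 * max)
}

/// Score a symbol name against a query: sum the IDF weights of query
/// words that match a whole identifier part. `df` looks up how many
/// indexed symbols contain a word (hypothetical signature).
fn keyword_score(
    name: &str,
    query: &[&str],
    total_docs: usize,
    df: impl Fn(&str) -> usize,
) -> f64 {
    let parts = split_identifier(name);
    query
        .iter()
        .filter(|w| parts.iter().any(|p| p == &w.to_lowercase()))
        .map(|w| idf_weight(total_docs, df(w)))
        .sum()
}
```

With this shape, "patch" scores zero against `dispatch` (no whole-part match), while a rare query word matching an identifier part contributes its full inverse-document-frequency weight.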
## Running Tests

```sh
cd daemon && cargo test --workspace
# 89 tests: 85 core + 4 daemon
```
## LLM Memory Instructions

Add the following to your LLM assistant's project memory (`CLAUDE.md`, `AGENTS.md`, Codex instructions, or equivalent) when working on codebases with Lattice enabled:
```
Lattice Context Engine — Available Tools

Lattice provides a dependency graph and context engine for this codebase.
Use these tools when they're the best fit:

get_context_capsule — when exploring unfamiliar code or broad questions (use mode: "focused" for targeted lookups)
get_impact_graph — before refactoring to understand blast radius
search_symbols — when looking for a symbol by name across the project
get_skeleton — for a quick overview of a large file's structure
search_logic_flow — to trace call chains between functions
save_observation / get_session_context / search_memory — persist and recall insights across sessions
list_observations — to review stored memories and clean up stale ones
update_observation — to edit an existing observation's content in-place
delete_observation — to remove obsolete or incorrect memories

For targeted edits to known files, Read/Grep/Edit are fine.
Lattice adds the most value when you don't already know where to look.
```
## License
MIT