# Memix — The Autonomous AI Memory Bridge

Memix is an autonomous engineering intelligence layer that runs silently in the background of your IDE, continuously observing your codebase, building a live structural model of it, and preparing precisely the right context for your AI assistant before you even open the chat. While other AI memory tools store notes about what you've told them, Memix understands your code structurally — at the AST level, the dependency graph level, and the function call level — through a continuously updated index that no cloud service can replicate.

## Requirements

A local Redis instance (the project brain and the skeleton index mirror are Redis-backed).
## Quick Start
For building from source:
## What Memix Does

### Persistent Project Brain (Redis-backed)

Stores identity, session state, tasks, decisions, patterns, file map, and known issues across every working session. Your AI assistant resumes with full project context rather than starting from zero each time.

### Three-Layer Structural Intelligence

Every file save triggers three successive analysis passes, each adding a different kind of understanding. The first layer uses tree-sitter AST parsing across 13 supported languages (TypeScript, JavaScript, Rust, Python, Go, Java, C/C++, C#, Ruby, Swift, Kotlin, PHP) to extract function signatures, types, exports, call sites, cyclomatic complexity, and structural patterns. The second layer uses OXC semantic analysis for TypeScript and JavaScript files to resolve import statements to their actual file paths, build a resolved call graph where each edge carries the callee's file and line number, and detect dead imports pointing to files that no longer exist. This turns a nominal dependency graph into a resolved one. The third layer uses the AllMiniLM-L6-v2 embedding model — bundled into the daemon binary, no network required — to compute 384-dimensional vector representations of every skeleton entry. These embeddings power semantic similarity search across the entire codebase structure.

### Code Skeleton Index

Maintains a continuously updated structural map of the project. The File Skeleton Index (FSI) provides one entry per source file capturing its shape: language, types, functions with signatures and complexity, exports, imports, and dependency relationships. The Function Symbol Index (FuSI) provides per-function entries for hot files, enriched with call graph data. Both layers are stored in an isolated Redis hash and persisted to a local binary file with a write-through Redis mirror for cross-IDE sharing.
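As a rough illustration of how embedding-based search over the skeleton index works, the sketch below ranks entries by cosine similarity to a query vector. The file names and the tiny 4-dimensional vectors are hypothetical stand-ins for real 384-dimensional AllMiniLM-L6-v2 embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_entries(query: list[float], index: dict[str, list[float]],
                 top_k: int = 2) -> list[tuple[str, float]]:
    """Rank skeleton entries by embedding similarity to the query."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in index.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Hypothetical 4-dimensional vectors standing in for 384-dimensional
# AllMiniLM-L6-v2 embeddings; file names are made up.
skeleton_index = {
    "src/auth/login.ts": [0.9, 0.1, 0.0, 0.2],
    "src/db/pool.ts":    [0.1, 0.8, 0.3, 0.0],
    "src/auth/token.ts": [0.7, 0.3, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05, 0.25]  # e.g. the embedding of "user authentication"
print(rank_entries(query, skeleton_index))
```

The same ranking generalizes directly to 384 dimensions; only the vector length changes.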
### Background Indexer

Builds the complete skeleton index for the entire workspace automatically at daemon startup, running at a throttled pace (10 files/second by default) so it never disrupts active development. After the first run, the index is restored from the binary file in milliseconds on every subsequent start.

### 7-Pass Context Compiler

Assembles a token-budget-fitted context packet from the structural index, dependency graph, brain entries, session history, and project rules. The compilation pipeline runs dead context elimination (BFS from the active file), skeleton extraction, brain deduplication, history compaction, rules pruning, skeleton index injection with betweenness-centrality priority boosting, and optimal budget fitting via a 0/1 dynamic programming knapsack. The result is not a dump of raw files — it is a precisely curated selection of the most relevant structural information for the current task.

### Token Intelligence

Tracks three distinct token dimensions: tokens consumed by AI models, tokens compiled by the context compiler, and tokens estimated as saved through structural compression versus naive full-file injection. Session and lifetime totals are both maintained, with the estimated savings expressed as both a token count and an approximate dollar figure. The compression ratio — compiled tokens divided by the naive paste estimate — shows how much leverage Memix is providing on each context compilation.

### Dependency Graph with Structural Importance

Maintains a live directed dependency graph updated on every file save. Betweenness centrality (Brandes' algorithm) and PageRank are computed from the graph and applied as priority boosts in the context compiler. Blast radius analysis uses forward BFS through the reverse dependency graph to compute all files transitively affected when a given file changes, including critical path reconstruction.

### Resolved Call Graph

Tracks function-to-function call relationships with a dual-index architecture.
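A dual-index design of this kind can be illustrated with a toy lookup that consults a resolved index first and falls back to name-only entries. The class names, fields, and example call sites below are hypothetical, not Memix's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallTarget:
    name: str
    file: Optional[str] = None  # populated only when resolution succeeded
    line: Optional[int] = None

class CallIndex:
    """Toy two-tier call index: resolved entries consulted first,
    nominal name-only entries as fallback."""

    def __init__(self) -> None:
        self.resolved: dict[str, CallTarget] = {}
        self.nominal: set[str] = set()

    def record_resolved(self, callee: str, file: str, line: int) -> None:
        self.resolved[callee] = CallTarget(callee, file, line)

    def record_nominal(self, callee: str) -> None:
        self.nominal.add(callee)

    def lookup(self, callee: str) -> Optional[CallTarget]:
        # Callers see the best available tier without knowing which answered.
        if callee in self.resolved:
            return self.resolved[callee]
        if callee in self.nominal:
            return CallTarget(callee)  # name-only fallback
        return None

index = CallIndex()
index.record_resolved("parseConfig", "src/config.ts", 42)
index.record_nominal("fetch")  # dynamic dispatch / external library: name only

print(index.lookup("parseConfig"))  # resolved: file and line known
print(index.lookup("fetch"))        # nominal fallback: no location
```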
When OXC resolves a call target to a specific file and line, the exact callee location is stored. When resolution is unavailable (dynamic dispatch, external libraries), a nominal name-only entry serves as a fallback. Both indexes are queried transparently — callers see the best available information without needing to know which tier answered the query.

### Autonomous Codebase Observation

The file watcher runs continuously on the workspace root. File saves trigger semantic diffs (not just line diffs), breaking-signature detection across the old and new AST, intent classification using a multi-factor weighted voting engine, and dependency graph updates. Warning entries are created automatically for breaking signature changes, unresolved imports, and security scanner findings.

### AGENTS.md Protocol Support

Generates and maintains an AGENTS.md file for the workspace.

### Proactive Risk Analysis

Before invasive edits, scores a file's risk using its dependents in the dependency graph, Code DNA stability metrics, known issues, and git archaeology churn data. Available through the API and visible in the debug panel.

### Configurable Security Scanner

Loads rules from a user-editable configuration file.

### Code DNA + Architecture Explainability

Computes a project-wide Code DNA summary from AST patterns across all supported languages: architecture style inference, hot and stable zone identification, circular dependency detection, type coverage ratio, language breakdown, and an explainability summary suitable for direct injection into AI context.

### Git Archaeology

Maintains deep git history context: file churn analysis (identifying hot and stable zones), recent author tracking, commit summaries, and structural change patterns. Combined with Code DNA, this provides a complete picture of which areas need careful handling versus which are safe for aggressive refactoring.

### Intelligent Decision Detection

Automatically observes code changes and records architectural decisions — capturing the why behind code evolution, not just the what.
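A recorded decision of this kind could be modeled as a small record carrying its evidence chain and a feedback-adjustable confidence score. This is a hypothetical sketch; the field names, rule ID format, and feedback step size are invented, not Memix's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical shape for an auto-detected architectural decision."""
    summary: str        # the "why" being captured
    evidence: list[str] # observations that triggered detection
    confidence: float   # 0.0 to 1.0, adjustable via user feedback
    rule_id: str        # provenance: which detection rule fired

def apply_feedback(decision: Decision, accepted: bool, step: float = 0.1) -> Decision:
    """Nudge confidence up on acceptance, down on rejection, clamped to [0, 1]."""
    delta = step if accepted else -step
    decision.confidence = min(1.0, max(0.0, decision.confidence + delta))
    return decision

d = Decision(
    summary="Switched HTTP client from axios to fetch",
    evidence=["import removed: axios", "call sites rewritten to fetch()"],
    confidence=0.7,
    rule_id="toml:http-client-migration",
)
apply_feedback(d, accepted=True)
print(round(d.confidence, 2))  # 0.8 after one positive feedback
```

Rejections would nudge confidence down by the same step, which is one simple way per-rule confidence could self-adjust over time.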
Uses a three-layer detection system: TOML-based rules (70+ pre-defined), AST pattern matching (tree-sitter), and embedding similarity (AllMiniLM-L6-v2). Decisions are stored in the brain with evidence chains, confidence scores, and rule provenance. User feedback adjusts rule confidence over time for self-improving detection accuracy.

### Decision Feedback Loop

Users can provide feedback on auto-detected decisions through the API.

### Learning Layer + Cross-Project Profile

Records prompt outcomes, compares model performance by task type, and derives a developer profile from patterns across projects. Surfaces context optimization suggestions specific to the current developer's working style.

### Brain Hierarchy (Monorepo Support)

Layer-based memory resolution supports parent-child context inheritance for monorepo structures. Child packages can override or extend inherited context from parent workspace layers.

### Team Sync

CRDT-based brain synchronization for teams. Push, pull, and merge architectural decisions, patterns, and shared context across team members using a conflict-free replicated data type foundation.

### Daemon-Managed JSON Mirror

Every brain write is atomically mirrored to a JSON file on disk.

## Hard Advantages Over Existing Memory Tools

**Structural understanding, not just notes.** Other memory tools store what you've told them. Memix continuously derives structural facts from your code — call relationships, dependency depth, complexity distribution, export surfaces — that no amount of manual note-taking could replicate.

**Token-efficient context, not file dumps.** The 7-pass context compiler with DP knapsack fitting produces a context packet that is typically 5-10× smaller than a naive file paste with the same information content. The Token Intelligence system shows exactly how much is being saved.

**Offline, private, machine-local.** No code leaves your machine except what you explicitly paste into an AI chat.
The AST analysis, embedding computation, dependency graph, and all structural indexes run entirely on your local hardware.

**Works with any AI model.** Because Memix pre-assembles context structurally before it reaches the AI, even cheap or local models perform well — they receive precisely curated, structurally complete prompts rather than being overwhelmed by raw file content.

**Survives IDE restarts and session breaks.** The brain persists across restarts in Redis. The skeleton index persists across restarts in the binary embedding file. Token intelligence lifetime totals persist in the workspace data directory. Nothing resets when you close and reopen the IDE.

## Documentation
## License

The VS Code extension is licensed under the MIT License. The Rust daemon is licensed under the Business Source License 1.1, which converts to Apache 2.0 after four years.

Made with ❤️ by Soufiane Loudaini · DigitalVize LLC