# CogniVision

## Overview

Building Augmented Reality experiences is hard. You're juggling spatial anchors, tracking algorithms, shader code, device-specific APIs, and real-time rendering pipelines, all at once. Most AI assistants give you generic code that breaks the moment it touches an AR runtime.

CogniVision is different. It's an AI coding agent built specifically for AR and spatial computing development, embedded directly in Visual Studio Code. Describe the AR experience you want to create (a markerless anchor, a face filter, a WebXR scene, a LiDAR-based occlusion mesh) and CogniVision writes accurate, production-ready code, runs it through your terminal, reads the output, and iterates until the result is right.

Every file it edits, every command it runs, and every decision it makes is visible to you in real time. You stay in control: you approve changes before they land.

## What CogniVision Can Build

- Markerless anchors and plane detection
- Face filters and facial-tracking effects
- WebXR scenes
- LiDAR-based occlusion meshes
- Fixes for spatial bugs: anchor drift, tracking loss, occlusion gaps, and broken world transforms
## Key Features

### 🏗️ AR-First Code Generation

CogniVision knows the APIs, lifecycle hooks, and spatial math behind real AR frameworks. When you ask it to implement plane detection, it doesn't give you a stub; it gives you working `ARPlaneAnchor` delegates, session configuration, and a `sceneView` setup wired together correctly.

### 🔍 Deep Project Understanding

Before writing anything, CogniVision reads your project: file structure, scene graphs, existing shaders, manifest files, and build configs. It understands what you already have, so it builds on top of your work, not over it.

### 🐛 Spatial Debugging

AR apps fail in ways general-purpose debuggers don't anticipate: anchor drift, tracking loss, occlusion gaps, incorrect world transforms, or frame-rate drops in render passes. CogniVision runs your app in the terminal, reads crash logs and console output, and proposes targeted fixes based on what it sees.

### 👁️ Real-Time Transparency

Every action CogniVision takes is shown as it happens in the chat panel. You can see which files it's reading, what code it's writing, and what commands it's running, all before they're finalized. Nothing happens without your awareness.

### ✅ Human-in-the-Loop by Design

Every file write and terminal command surfaces a confirmation step. You can accept, modify, or reject any change. Workspace snapshots are taken at each step, so you can roll back to any previous state at any time.

### 🧩 Extensible Tool System

CogniVision supports the Model Context Protocol (MCP), letting you connect external data sources (device sensor feeds, AR cloud services, spatial databases) directly into the agent's toolchain.

## How It Works

1. You describe the AR task in the chat panel.
2. CogniVision reads your project to understand its structure and existing code.
3. It writes code and runs commands in your terminal, reading the output and iterating until the result is right.
4. You review every diff and command, approving, modifying, or rejecting each one before it lands.
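Many of the spatial bugs mentioned above, such as incorrect world transforms, come down to matrix conventions. As a taste of the math involved, here is a minimal TypeScript sketch of column-major 4x4 transforms (the layout WebXR's `XRRigidTransform.matrix` uses). The helper names are our own for illustration, not actual CogniVision output:

```typescript
// Column-major 4x4 matrix helpers (WebXR / OpenGL convention).
// Hypothetical illustration code, not generated by CogniVision.
type Mat4 = Float32Array; // 16 elements, column-major
type Vec3 = [number, number, number];

// Transform a point from anchor-local space into world space.
// Treats the input as a position (implicit w = 1); assumes no
// perspective row, which holds for rigid transforms.
function transformPoint(m: Mat4, [x, y, z]: Vec3): Vec3 {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Compose two transforms: result = a * b (apply b first, then a).
function multiply(a: Mat4, b: Mat4): Mat4 {
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++) {
    for (let r = 0; r < 4; r++) {
      let s = 0;
      for (let k = 0; k < 4; k++) s += a[k * 4 + r] * b[c * 4 + k];
      out[c * 4 + r] = s;
    }
  }
  return out;
}
```

For example, with a pure translation matrix whose last column is `(2, 3, 4, 1)`, `transformPoint` maps the anchor-local origin to `(2, 3, 4)` in world space. Getting row- versus column-major order wrong here is a classic source of "objects render in the wrong place" bugs.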
## Installation

### From the VS Code Marketplace

1. Open the Extensions view in VS Code (`Ctrl+Shift+X`, or `Cmd+Shift+X` on macOS).
2. Search for "CogniVision".
3. Click **Install**, then reload VS Code if prompted.
### Manual Install

Download the latest `.vsix` package from the project's releases page, then run **Extensions: Install from VSIX...** from the Command Palette and select the downloaded file.
## Usage

### Starting an AR Task

Open the CogniVision panel from the Activity Bar (the AR eye icon), type what you want to build or fix in the chat input, and press Enter. Examples:

- "Add horizontal plane detection and place a model on the first detected plane."
- "Create a WebXR scene with hit-test-based anchor placement."
- "My occlusion mesh has gaps on LiDAR devices. Find out why and fix it."
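To give a feel for the output, a prompt like "smooth out my anchor jitter" might yield a small helper along these lines. This is a hypothetical sketch with illustrative names (`driftMeters`, `smoothPosition`), not actual CogniVision output:

```typescript
// Hypothetical helpers for anchor-drift handling (illustration only).
type Vec3 = [number, number, number];

// How far tracking has moved an anchor from where it was placed.
function driftMeters(placed: Vec3, current: Vec3): number {
  return Math.hypot(
    current[0] - placed[0],
    current[1] - placed[1],
    current[2] - placed[2],
  );
}

// Exponential smoothing to damp per-frame jitter without ignoring
// genuine tracking corrections. alpha is in (0, 1]; higher values
// trust the newest reading more.
function smoothPosition(prev: Vec3, next: Vec3, alpha = 0.2): Vec3 {
  return [
    prev[0] + alpha * (next[0] - prev[0]),
    prev[1] + alpha * (next[1] - prev[1]),
    prev[2] + alpha * (next[2] - prev[2]),
  ];
}
```

A typical pattern is to log `driftMeters` per frame to confirm the drift is real, then apply `smoothPosition` to the rendered pose so small tracking corrections don't make anchored content visibly shake.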
### Providing Context

CogniVision reads your project automatically, but prompts work best when you name the specific files, scenes, or target devices involved. The more precisely you describe the framework and hardware you're targeting, the fewer iterations it needs.
### Reviewing Changes

CogniVision shows a diff for every file it modifies. You can:

- **Accept** the change as proposed
- **Modify** the diff before it's applied
- **Reject** the change entirely
### Checkpoints and Rollback

CogniVision snapshots your workspace before each significant change. Use the **Compare** button to see what changed, or **Restore** to roll back to any previous checkpoint; this is useful when testing different approaches to a spatial rendering problem.

## Unique Differentiators

- **AR-native, not generic.** It knows the APIs, lifecycle hooks, and spatial math of real AR frameworks, so generated code works against actual AR runtimes.
- **Transparent by default.** Every file read, edit, and terminal command is visible as it happens.
- **Human-in-the-loop.** Nothing lands without your approval, and every step is snapshotted for rollback.
- **Extensible via MCP.** Device sensor feeds, AR cloud services, and spatial databases can plug directly into the agent's toolchain.
## Contributing

Pull requests and issues are welcome. See `CONTRIBUTING.md` for guidelines.

## License