# Tzara Physical AI Agent

VS Code Extension · Telekinesis-powered · LLM + VLM · Human-in-the-loop
Build and ship robotics + computer vision workflows faster with a chat-native Physical AI Agent inside VS Code.
Tzara combines reasoning, documentation grounding, code editing tools, and optional image analysis to help you go from intent -> pipeline -> code.
## Why This Is Different
### Grounded by Telekinesis
- Uses bundled Telekinesis documentation and skills for targeted, practical answers
- Orchestrates recommendations around real skill families (Cornea, Retina, Pupil, Vitreous, Cortex)
### Built to Actually Ship
- Reads and edits your repository through tools
- Uses approval gates for file writes (safe-by-default workflow)
- Supports optional image-to-pipeline reasoning via local VLM
### Agentic, but Controlled
- Orchestrator + specialized subagents
- Web search only when needed
- Stronger context than generic "autocomplete" flows
## Feature Grid

| Capability | What it gives you |
| --- | --- |
| Docs-grounded planning | Faster pipeline design with Telekinesis context |
| Code-aware generation | Real edits across your workspace |
| Image-aware reasoning | Better segmentation/detection strategy from visual context |
| Human-in-the-loop writes | Review and approve before write actions |
| VS Code-native UX | One pane for intent, planning, and implementation |
## Quickstart

**1) Install the extension**

- Install **Tzara Physical AI Agent** from the VS Code Extensions view
- Or install a provided `.vsix` via **Extensions -> Install from VSIX...**

**2) Launch the agent**

- Open the Command Palette (`Ctrl+Shift+P` / `Cmd+Shift+P`)
- Run **Tzara: Open Chat**
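If you prefer the terminal, a `.vsix` can also be installed with the `code` CLI. The filename below is an illustrative placeholder, not the actual artifact name:

```shell
# Install a local VSIX from the command line
# (equivalent to Extensions -> Install from VSIX...).
# Replace the filename with the .vsix you were actually provided.
code --install-extension tzara-physical-ai-agent.vsix
```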
## Config

Set these in Settings -> Cortex:

| Setting | Description |
| --- | --- |
| `cortex.anthropicApiKey` | Anthropic API key (or the `ANTHROPIC_API_KEY` environment variable) |
| `cortex.userCodePath` | Workspace path (defaults to the open folder) |
| `cortex.tavilyApiKey` | Tavily API key for optional web search |
| `cortex.model` | Claude model ID |
| `cortex.ollamaBaseUrl` | Ollama URL for image analysis |
| `cortex.ollamaModel` | Ollama VLM model name |
| `cortex.iconPath` | Optional custom extension panel icon |
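The same settings can be entered directly in `settings.json`. The values below are placeholders; only the keys come from the table above (`http://localhost:11434` is Ollama's default endpoint):

```json
{
  "cortex.anthropicApiKey": "sk-ant-...",
  "cortex.userCodePath": "/absolute/path/to/your/workspace",
  "cortex.tavilyApiKey": "tvly-...",
  "cortex.model": "<claude-model-id>",
  "cortex.ollamaBaseUrl": "http://localhost:11434",
  "cortex.ollamaModel": "<ollama-vlm-model-name>"
}
```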
## Prompt Starters
- "Use Telekinesis docs and propose a robust pallet segmentation pipeline."
- "Analyze `image.png` and suggest preprocessing + detection steps."
- "Read my workspace and implement Retina + Cornea inference end-to-end."
- "Compare two pipeline options and explain tradeoffs for runtime and accuracy."
## Telekinesis Skill Domains
- Cornea - Image segmentation
- Retina - Object detection
- Pupil - Image processing
- Vitreous - 3D point cloud processing
- Cortex - Physical AI orchestration
Explore the full ecosystem at docs.telekinesis.ai.
## Architecture Snapshot
- Orchestrator Agent: plans and routes tasks
- Docs Subagent: Telekinesis docs + skill lookup
- Code Editor Subagent: repository inspection/edit planning
- Image Analyst Subagent: image-context reasoning
- Approval Interrupts: user confirmation before writes
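The control flow implied by this snapshot can be sketched as follows. This is purely illustrative: every name and type here is an assumption for exposition, not the extension's actual source.

```typescript
// Illustrative sketch of the orchestrator + approval-gate flow.
// All names and types are assumptions, not the extension's real API.

type Task = { kind: "docs" | "code" | "image"; prompt: string };
type WriteAction = { path: string; contents: string };

// The Orchestrator Agent routes each task to a specialized subagent.
function route(task: Task): string {
  switch (task.kind) {
    case "docs":
      return "Docs Subagent";
    case "code":
      return "Code Editor Subagent";
    case "image":
      return "Image Analyst Subagent";
  }
}

// Approval interrupt: proposed writes are applied only when the user
// confirms each one; rejected writes are dropped, never applied.
function applyWrites(
  proposed: WriteAction[],
  approve: (action: WriteAction) => boolean
): WriteAction[] {
  return proposed.filter(approve);
}
```

Here `approve` stands in for the interactive confirmation step; in the extension, the user reviews each proposed write in the chat pane before it touches the workspace.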
## Dev Notes
Implementation details, project layout, and extension internals are documented separately.