# SYNAPTIC Expert

AI-powered development assistant with protocol enforcement, persistent learning, and full project memory for Visual Studio Code.
SYNAPTIC Expert transforms how you interact with AI models during software development. Instead of unstructured chat, every AI response follows a validated protocol with Decision Gates, graduated enforcement, and a learning engine that accumulates project intelligence from cycle to cycle, ensuring consistent, high-quality outputs throughout your project lifecycle.
## Features
### Multi-Provider Support (6 Providers)
Use your preferred AI provider with your own API keys (BYOK):
| Provider | Models | Capabilities |
| --- | --- | --- |
| Anthropic | Claude Opus 4, Sonnet 4.5, Haiku | Full multimodal (images, PDF, text) + extended thinking |
| OpenAI | GPT-4.1, GPT-4o, o3, o4-mini | Images + text + reasoning models |
| Google Gemini | Gemini 2.5 Pro/Flash, 2.0 Flash | Full multimodal (1M context) |
| OpenRouter | 80+ models (curated to Tier 1+2) | Varies by model |
| Grok (xAI) | Grok 3, Grok 3 Mini, Grok 2 | Images + text (131K context) |
| Claude Code CLI | Local Claude installation | Built-in tools (no API key needed) |
Curated Model Selector: The model picker only shows SYNAPTIC-tested models (Tier 1 recommended + Tier 2 compatible). Experimental models are available via Settings for advanced users.
### Protocol Enforcement
Every AI response is automatically validated against the SYNAPTIC protocol:
- Response Validation: Structural checks ensure responses follow the required template
- Graduated Enforcement: Starts informational (cycles 1-5), progresses to soft (6-15), then standard (16+)
- Compliance Scoring: Real-time grade (A-F) displayed per response
- Auto-Regeneration: Non-compliant responses are automatically reformulated
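The graduated enforcement schedule above can be sketched as a simple cycle-to-level mapping. This is an illustrative sketch; the type and function names are assumptions, not the extension's actual API.

```typescript
// Graduated enforcement: informational (cycles 1-5), soft (6-15), standard (16+).
// Names are illustrative, not the extension's real internals.
type EnforcementLevel = "informational" | "soft" | "standard";

function enforcementLevel(cycle: number): EnforcementLevel {
  if (cycle <= 5) return "informational"; // cycles 1-5: report only
  if (cycle <= 15) return "soft";         // cycles 6-15: warn and suggest fixes
  return "standard";                      // cycles 16+: full enforcement
}
```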
### Decision Gates
Architectural decisions are never made silently. When the AI identifies a choice point:
- 3 Options are always presented (A, B, C) with trade-offs, risk assessment, and confidence
- User approval is required before proceeding
- Decisions are persisted to INTELLIGENCE.json for cross-cycle memory and injected into future prompts
- Smart detection: Understands "go with B", "the first one", "the conservative one", "let's do option C" and many colloquial patterns
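A minimal sketch of the kind of colloquial answer matching described above. The real matcher is more sophisticated (it also handles phrases like "the conservative one"); the patterns and function name here are assumptions for illustration only.

```typescript
// Illustrative Decision Gate reply parser: maps colloquial answers to A/B/C.
// Patterns are a simplified assumption, not the extension's actual matcher.
function detectOption(reply: string): "A" | "B" | "C" | null {
  const text = reply.trim().toLowerCase();
  const direct = text.match(/\boption\s+([abc])\b|\bgo with\s+([abc])\b|^([abc])$/);
  if (direct) {
    const letter = direct[1] || direct[2] || direct[3] || "";
    return letter.toUpperCase() as "A" | "B" | "C";
  }
  if (/\bfirst\b/.test(text)) return "A";
  if (/\bsecond\b/.test(text)) return "B";
  if (/\bthird\b/.test(text)) return "C";
  return null; // ambiguous: ask the user to confirm
}
```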
### Project Intelligence & Learning
SYNAPTIC Expert accumulates project knowledge across cycles:
- Decision Memory: Every Decision Gate selection is recorded with rationale and persists across sessions
- Tech Stack Detection: Automatically learns your project's languages, frameworks, dependencies, and architecture patterns
- LLM Learning Extraction: Post-cycle micro-call extracts structured insights (tech stack, patterns, conventions) from every response
- Confidence Engine: Learnings are scored, reinforced on repetition, and decayed when stale
- INTELLIGENCE.json: Master memory file with decisions, roadmap, learnings, implementation state — all injected into the LLM prompt
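The confidence lifecycle (scored, reinforced on repetition, decayed when stale) could look roughly like the sketch below. The field names, reinforcement step, and half-life constant are assumptions, not the extension's actual Confidence Engine.

```typescript
// Hedged sketch of learning confidence: reinforced when seen again,
// exponentially decayed when stale. All constants are illustrative.
interface Learning {
  insight: string;
  confidence: number;     // 0..1
  lastSeenCycle: number;
}

function reinforce(l: Learning, cycle: number): Learning {
  // Repetition bumps confidence, capped at 1.
  return { ...l, confidence: Math.min(1, l.confidence + 0.1), lastSeenCycle: cycle };
}

function decay(l: Learning, cycle: number, halfLife = 20): Learning {
  // Confidence halves every `halfLife` cycles without reinforcement.
  const stale = cycle - l.lastSeenCycle;
  return { ...l, confidence: l.confidence * Math.pow(0.5, stale / halfLife) };
}
```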
### Two Modes
- Architect Mode: Analysis and planning only — the AI examines your codebase without making changes
- SYNAPTIC Mode: Full protocol with enforcement, Decision Gates, tool execution, and file operations
- Execute Immediately (Ctrl+Shift+Enter): Skip Decision Gates and execute tools directly
### File Attachments
Attach files directly to your prompts for the AI to analyze:
- Images: JPEG, PNG, GIF, WebP
- Documents: PDF (native on Anthropic/Gemini, text extraction on OpenAI/OpenRouter/Grok)
- Code/Text: Any text file (.ts, .js, .py, .md, .csv, etc.)
- Limits: 5 MB per file, 10 MB total, up to 10 attachments
- Compatibility warnings: Automatic alerts when attachments are incompatible with the current provider
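The limits above translate directly into a validation check. This is a sketch under the documented limits (5 MB per file, 10 MB total, 10 attachments); the function name and error strings are illustrative.

```typescript
// Attachment limit check mirroring the documented limits.
// Returns an error message, or null when the attachment set is acceptable.
const MAX_FILE_BYTES = 5 * 1024 * 1024;   // 5 MB per file
const MAX_TOTAL_BYTES = 10 * 1024 * 1024; // 10 MB total
const MAX_COUNT = 10;                     // up to 10 attachments

function validateAttachments(sizes: number[]): string | null {
  if (sizes.length > MAX_COUNT) return `Too many attachments (max ${MAX_COUNT})`;
  if (sizes.some((s) => s > MAX_FILE_BYTES)) return "A file exceeds the 5 MB per-file limit";
  const total = sizes.reduce((a, b) => a + b, 0);
  if (total > MAX_TOTAL_BYTES) return "Attachments exceed the 10 MB total limit";
  return null;
}
```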
### Agent Loop
The AI agent executes multi-step tasks autonomously:
- Unlimited Iterations: No artificial cap — the agent runs until the task is complete
- Built-in Tools: Read, write, and edit files, search by pattern or content, run shell commands, fetch web resources, edit notebooks, and more
- Cancel Button: Stop execution at any time
- Activity Log: Real-time log showing each tool call, iteration number, and elapsed time
- Full Streaming: All iterations stream token-by-token
- Extended Thinking: Anthropic's extended thinking blocks displayed in the UI
- Tool Approval: Choose between selective (approve each tool) or auto-approve mode
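The loop described above (uncapped iterations, tool execution, cancel at any time) can be sketched as follows. The interfaces and function signatures are assumptions for illustration; the real agent also streams tokens and logs activity per iteration.

```typescript
// Minimal agent-loop sketch: run until the model reports completion or the
// user cancels. Types and tool plumbing are illustrative assumptions.
interface ToolCall { name: string; args: unknown }
interface StepResult { done: boolean; toolCall?: ToolCall }

async function runAgentLoop(
  step: () => Promise<StepResult>,              // one LLM turn
  execTool: (call: ToolCall) => Promise<void>,  // tool execution (after approval)
  isCancelled: () => boolean,                   // wired to the Cancel button
): Promise<number> {
  let iterations = 0;
  while (!isCancelled()) {
    iterations++;                               // no artificial cap on iterations
    const result = await step();
    if (result.done) break;                     // task complete
    if (result.toolCall) await execTool(result.toolCall);
  }
  return iterations;
}
```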
### Chat History
Conversations persist across sessions:
- Per-user, per-project: Each user's history is stored separately for each workspace
- Automatic restore: Previous messages loaded when you reopen the chat panel
- 500 message limit: FIFO buffer keeps history manageable
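A FIFO buffer of this kind drops the oldest messages once the limit is reached. A minimal sketch, with the 500-message limit from the text and illustrative names:

```typescript
// FIFO history buffer: appending past the limit evicts the oldest entries.
const HISTORY_LIMIT = 500;

function appendMessage<T>(history: T[], msg: T, limit = HISTORY_LIMIT): T[] {
  const next = [...history, msg];
  return next.length > limit ? next.slice(next.length - limit) : next;
}
```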
### Subscription Plans
| Plan | Cycles/Month | Price |
| --- | --- | --- |
| Free | 25 | $0 |
| Pro | 400 | $20/mo |
| Full | Unlimited | $100/mo |
All plans include the same features — only the monthly cycle count differs. A tier badge shows your current plan, and an upgrade banner appears when usage exceeds 80%.
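The 80% threshold for the upgrade banner is a one-line check. Function name is illustrative:

```typescript
// Upgrade banner appears once monthly usage exceeds 80% of the plan's quota.
function shouldShowUpgradeBanner(cyclesUsed: number, monthlyQuota: number): boolean {
  return monthlyQuota > 0 && cyclesUsed / monthlyQuota > 0.8;
}
```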
### Sidebar

The sidebar shows real-time metrics:
- Current cycle number and active model with tier badge (★/☆/⚠)
- Monthly quota usage with progress bar
- Quick action buttons (Provider, Model, API Keys, Settings)
- Intelligence section with expandable learning cards (Boost/Degrade/Forget/Restore)
- Enforcement score and compliance grade
### Configurable Context Budget
Control how much project context is injected into the LLM prompt:
| Setting | Default | Description |
| --- | --- | --- |
| `synaptic.budget.total` | 30,000 | Total character budget for director files |
| `synaptic.budget.mantra` | 10,000 | Max chars for MANTRA.md |
| `synaptic.budget.rules` | 10,000 | Max chars for RULES.md |
| `synaptic.budget.designDoc` | 10,000 | Max chars for DESIGN_DOC.md |
| `synaptic.budget.contextDocs` | 20,000 | Max chars for context/ documents |
Budgets start at maximum — the system uses as much context as possible. Reduce one section to free budget for another (cascading allocation).
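One plausible reading of "cascading allocation" is that each section takes up to its own max, and whatever budget it leaves unused remains available to later sections, all capped by the total. The semantics below are an assumption based on that description, not the extension's actual algorithm:

```typescript
// Hedged sketch of cascading context-budget allocation: each section is capped
// by its own max and by the remaining total; unused budget cascades onward.
function allocateBudgets(
  sectionMax: Record<string, number>,  // per-section character caps
  contentLen: Record<string, number>,  // actual content lengths
  total: number,                       // overall budget (synaptic.budget.total)
): Record<string, number> {
  const out: Record<string, number> = {};
  let remaining = total;
  for (const [name, max] of Object.entries(sectionMax)) {
    const want = Math.min(max, contentLen[name] ?? 0);
    out[name] = Math.min(want, remaining);
    remaining -= out[name]; // unused budget cascades to later sections
  }
  return out;
}
```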
## Getting Started
- Install the extension from the VS Code Marketplace
- Sign in with Google, GitHub, or email/password
- Add an API key for your preferred provider (Ctrl+Shift+P > "SYNAPTIC: Manage API Keys")
- Open the Chat Panel with Ctrl+Shift+Y (or Cmd+Shift+Y on Mac)
- Send your first prompt and experience protocol enforcement in action
## Keyboard Shortcuts
| Shortcut | Action |
| --- | --- |
| Ctrl+Shift+Y / Cmd+Shift+Y | Open Chat Panel |
| Ctrl+Enter | Submit prompt |
| Ctrl+Shift+Enter | Submit with immediate execution (skip Decision Gates) |
## Extension Settings
| Setting | Default | Description |
| --- | --- | --- |
| `synaptic.defaultProvider` | anthropic | Default LLM provider |
| `synaptic.defaultModel` | claude-sonnet-4-5 | Default model (curated models via Model selector, experimental via this field) |
| `synaptic.toolApproval` | selective | Tool execution approval mode (selective or auto-approve) |
| `synaptic.agentMaxIterations` | 0 | Max tool iterations per cycle (0 = unlimited) |
| `synaptic.enforcementBlocking` | false | Block non-compliant responses after validation fails (opt-in) |
| `synaptic.budget.*` | See above | Context injection budget controls |
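For reference, the defaults above would look like this in a workspace `settings.json` (values taken from the table; this fragment is illustrative):

```json
{
  "synaptic.defaultProvider": "anthropic",
  "synaptic.defaultModel": "claude-sonnet-4-5",
  "synaptic.toolApproval": "selective",
  "synaptic.agentMaxIterations": 0,
  "synaptic.enforcementBlocking": false,
  "synaptic.budget.total": 30000
}
```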
## Requirements
- VS Code 1.85.0 or later
- Google, GitHub, or email account for authentication
- API key from at least one supported provider (except Claude Code CLI)
## Architecture
SYNAPTIC Expert is built as a monorepo with 4 packages:
- @synaptic-sre/shared: Types, constants, protocol loader (zero dependencies)
- @synaptic-sre/enforcement: Response validation, compliance scoring, regeneration engine (zero dependencies)
- @synaptic-sre/workspace: Director files, session persistence, BITACORA service, intelligence normalization (zero dependencies)
- vscode-extension: Extension host + React webview
The shared, enforcement, and workspace packages have zero external dependencies: pure TypeScript for maximum portability.
## Privacy & Data
- API keys are stored locally in VS Code's secure storage (never sent to our servers)
- Authentication uses OAuth PKCE for Google/GitHub and Firebase Auth for email
- Usage quota is tracked via Firebase Firestore (monthly cycles per plan)
- No telemetry beyond quota tracking
- All LLM calls go directly from your machine to the provider's API
- Project memory (INTELLIGENCE.json) stays local in your workspace
## License
MIT
## Publisher
Built by GoLab — Conexiones Sorprendentes.