Predict — AI Coding Calibration
Pre-release — Core functionality is stable. Features may evolve based on early feedback.
Are you learning from your AI coding assistant, or just accepting its suggestions?
Predict adds a single intervention to your workflow: before you see what the AI wrote, you predict what it will say. Then you see the comparison. Over time, your calibration data reveals whether you're building genuine expertise or drifting toward passive consumption.
The distinction matters. Early research suggests that passively delegating to AI tools is associated with measurable skill degradation, while active engagement preserves and strengthens capability. Predict makes that interaction pattern visible and measurable.
How It Works
- Predict — When your AI assistant generates a completion, Predict pauses and asks what you expected
- Compare — Side-by-side reveal shows your prediction against the actual output with overlap scoring
- Update — Your calibration metrics track whether your intuition is improving across sessions
No context switching. No separate app. A lightweight feedback loop embedded where you already work.
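The compare step scores the overlap between your prediction and the actual completion. Predict's exact metric isn't documented here; one plausible sketch is a token-level Jaccard similarity (the function below is illustrative, not the extension's actual scoring code):

```typescript
// Illustrative overlap score: Jaccard similarity over
// whitespace-delimited tokens. Predict's real metric may differ.
function overlapScore(prediction: string, actual: string): number {
  const tokens = (s: string) =>
    new Set(s.trim().split(/\s+/).filter(t => t.length > 0));
  const a = tokens(prediction);
  const b = tokens(actual);
  if (a.size === 0 && b.size === 0) return 1; // both empty: perfect match
  let shared = 0;
  for (const t of a) if (b.has(t)) shared++;
  // |A ∩ B| / |A ∪ B|
  return shared / (a.size + b.size - shared);
}
```

A score of 1 means every token matched; 0 means no shared tokens at all.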
Installation
- Open VS Code and go to the Extensions view (Ctrl+Shift+X / Cmd+Shift+X)
- Search for "Predict — AI Coding Calibration"
- Click Install
- Make sure you have an AI coding assistant installed (e.g., GitHub Copilot, Codeium)
- Start coding — Predict will prompt you to predict before showing AI suggestions
- Open the Predict sidebar panel to see your calibration dashboard
Alternatively, install from the command line:
```
code --install-extension az8tlab.predict-ai-calibration
```
Tier Progression
Accuracy determines your tier, and each tier adjusts the challenge:
| Tier | Mode | What You Predict |
| --- | --- | --- |
| Observer | Structural | The category of code the AI will generate (function, loop, conditional, etc.) |
| Analyst | Text | The exact text of AI completions |
| Architect | Text | The exact text of AI completions; reached through sustained high accuracy with broad pattern coverage |
Tiers unlock through demonstrated accuracy — not time spent.
Calibration Dashboard
A sidebar panel tracks your development across multiple dimensions:
- Learning curve with trend visualization
- Sensitivity trajectory — d-prime signal detection metric tracking your discrimination ability
- Memory system breakdown — which cognitive patterns you rely on when predicting
- Cognitive load distribution — germane (skill-building), intrinsic (complexity), and extraneous (noise) with contextual guidance on learning value
- Per-language accuracy — calibration quality across different programming languages
- Per-file and per-project tracking — identify where your mental models are strongest
- AI provider detection — works with GitHub Copilot, Codeium, Tabnine, Cursor, Continue, CodeGPT, Supermaven, and others
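The sensitivity trajectory above is based on d′ from signal detection theory: d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch of the computation (using an Abramowitz–Stegun erf approximation and bisection to invert the CDF; Predict's internal implementation may differ):

```typescript
// Standard normal CDF via the Abramowitz–Stegun erf approximation.
function normalCdf(x: number): number {
  const t = 1 / (1 + 0.3275911 * Math.abs(x) / Math.SQRT2);
  const poly =
    t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
    t * (-1.453152027 + t * 1.061405429))));
  const y = poly * Math.exp(-(x * x) / 2);
  return x >= 0 ? 1 - y / 2 : y / 2;
}

// Invert the CDF by bisection: find z such that normalCdf(z) = p.
function probit(p: number): number {
  let lo = -6, hi = 6;
  for (let i = 0; i < 80; i++) {
    const mid = (lo + hi) / 2;
    if (normalCdf(mid) < p) lo = mid; else hi = mid;
  }
  return (lo + hi) / 2;
}

// d' = z(hit rate) - z(false-alarm rate); higher values mean better
// discrimination between correct and incorrect predictions.
function dPrime(hitRate: number, falseAlarmRate: number): number {
  return probit(hitRate) - probit(falseAlarmRate);
}
```

At chance performance (equal hit and false-alarm rates) d′ is 0; a hit rate of 0.84 against a false-alarm rate of 0.16 gives d′ ≈ 2.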
Screenshots
Screenshots will be added with the stable release. In the meantime, install the pre-release to see the prediction prompt, comparison reveal, and calibration dashboard in action.
What you'll see:
- Prediction prompt — A non-intrusive inline panel asking what you expect the AI to generate
- Comparison reveal — Side-by-side view of your prediction vs. the actual AI output with overlap scoring
- Calibration dashboard — Sidebar panel with learning curves, sensitivity trajectory, cognitive load breakdown, and per-language accuracy
Cognitive Load Awareness
Not all predictions are equal. Predict classifies the cognitive demand of each AI completion:
- Germane — Novel logic, algorithmic reasoning. These build lasting mental models.
- Intrinsic — Inherent code complexity. Challenging but learnable with scaffolding.
- Extraneous — Boilerplate, imports, repetitive patterns. Low learning value.
Routine completions are suppressed by default so your calibration effort goes where it matters most.
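How the classifier works internally isn't documented; as a rough illustration, a heuristic along these lines could separate the three categories (all patterns and thresholds below are invented for the example):

```typescript
type CognitiveLoad = "germane" | "intrinsic" | "extraneous";

// Toy heuristic, not Predict's actual classifier: boilerplate-looking
// completions are extraneous; heavily branched code is intrinsic;
// everything else is treated as germane (novel logic worth predicting).
function classifyLoad(completion: string): CognitiveLoad {
  const lines = completion.split("\n").map(l => l.trim()).filter(Boolean);
  const boilerplate = /^(import |from |#include|package |using )/;
  if (lines.length > 0 && lines.every(l => boilerplate.test(l))) {
    return "extraneous";
  }
  // Crude complexity proxy: count branching/looping keywords.
  const branches =
    (completion.match(/\b(if|for|while|switch|match|try)\b/g) ?? []).length;
  return branches >= 3 ? "intrinsic" : "germane";
}
```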
Language-Specific Profiles
Calibration adapts to what you're writing. Complexity scoring accounts for nesting depth, generics, concurrency patterns, and framework conventions across Python, TypeScript, JavaScript, Go, Rust, and Java.
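The scoring formula itself isn't published. As one illustration, nesting depth (one of the inputs mentioned above) can be measured by tracking bracket depth; the sketch below is a simplification that ignores string literals and comments, which a real scorer would have to skip:

```typescript
// Maximum brace/paren/bracket nesting depth of a code snippet.
// Simplified: does not skip strings or comments.
function maxNestingDepth(code: string): number {
  let depth = 0, max = 0;
  for (const ch of code) {
    if (ch === "{" || ch === "(" || ch === "[") {
      depth++;
      max = Math.max(max, depth);
    } else if (ch === "}" || ch === ")" || ch === "]") {
      depth = Math.max(0, depth - 1); // tolerate unbalanced input
    }
  }
  return max;
}
```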
Configuration
| Setting | Default | Description |
| --- | --- | --- |
| `predict.frequency` | `balanced` | How often predictions are prompted: `aggressive`, `balanced`, `gentle`, or `manual` |
| `predict.predictionMode` | `auto` | Prediction mode: `auto` (tier-based), `structure` (category), or `text` (exact) |
| `predict.enabledLanguages` | `[]` | Limit to specific languages (empty = all) |
| `predict.minimumCompletionLength` | `10` | Minimum characters in an AI completion to trigger a prompt |
| `predict.aiProvider` | `auto` | AI assistant: `auto`, `copilot`, `codeium`, `tabnine`, `cursor`, `continue`, `codegpt`, `supermaven` |
| `predict.sessionCooldownMinutes` | `0` | Minutes between prompts within a session |
| `predict.comparisonTimeoutSeconds` | `15` | Seconds before the comparison panel auto-dismisses |
| `predict.languageProfiles` | `{}` | Per-language prompt frequency modifiers |
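For example, to prompt less often, restrict Predict to two languages, and enforce a cooldown, your settings.json could contain (values chosen purely for illustration):

```json
{
  "predict.frequency": "gentle",
  "predict.enabledLanguages": ["typescript", "python"],
  "predict.sessionCooldownMinutes": 10
}
```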
Keyboard Shortcuts
| Shortcut | Action |
| --- | --- |
| Ctrl+Shift+/ (Cmd+Shift+/) | Submit a prediction manually |
| Ctrl+Alt+Shift+P (Cmd+Alt+Shift+P) | Toggle Predict on/off |
| Ctrl+Alt+Shift+D (Cmd+Alt+Shift+D) | Open Calibration Dashboard |
| Escape | Skip current prediction |
Commands
- Predict: Toggle Active — Pause or resume calibration prompts
- Predict: Submit Prediction — Manually predict the next AI completion
- Predict: Skip Current Prediction — Dismiss the current prompt
- Predict: Open Calibration Dashboard — Focus the sidebar panel
- Predict: Reset Current Session — Start a fresh calibration session
- Predict: Show Walkthrough — Replay the onboarding introduction
Privacy & Data
Predict collects zero telemetry. No data leaves your machine.
- All prediction data is encrypted locally using AES-256-GCM with machine-bound key derivation
- Session records are HMAC-signed for integrity verification
- No network requests, no cloud sync, no analytics, no usage tracking
- No account required — the extension works entirely offline
- Uninstalling removes all stored data
This is a structural constraint, not a policy decision. There is no telemetry infrastructure in the codebase. The extension makes zero outbound network connections.
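The storage scheme described above can be sketched with Node's built-in crypto module. The key-derivation inputs, salt, and record format below are illustrative assumptions, not the extension's actual implementation:

```typescript
import * as crypto from "crypto";
import * as os from "os";

// Illustrative machine-bound key: derived from the hostname via scrypt.
// Predict's actual derivation inputs are not documented here.
const key = crypto.scryptSync(os.hostname(), "predict-example-salt", 32);

interface SealedRecord { iv: string; tag: string; data: string; }

function encrypt(plaintext: string): SealedRecord {
  const iv = crypto.randomBytes(12); // standard 96-bit GCM nonce
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"), // integrity tag from GCM
    data: data.toString("hex"),
  };
}

function decrypt(box: SealedRecord): string {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, Buffer.from(box.iv, "hex"));
  decipher.setAuthTag(Buffer.from(box.tag, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(box.data, "hex")),
    decipher.final(), // throws if the record was tampered with
  ]).toString("utf8");
}
```

Because the key is derived from machine-local inputs, records copied to another machine would not decrypt, which is what "machine-bound" implies.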
Requirements
- VS Code 1.85.0 or later
- An AI coding assistant that provides inline completions (GitHub Copilot, Codeium, Tabnine, Cursor, or similar)
Release Notes
1.2.1
Bundle optimization, cognitive load commentary, provider confidence weighting, per-file and per-project accuracy tracking.
1.2.0
Cognitive load classification (germane/intrinsic/extraneous), language-specific calibration profiles for 6 languages, multi-AI provider detection with confidence weighting.
1.1.0
Tier 2 structural prediction engine with 10 code categories, adjacency scoring for near-miss predictions, tier-based automatic mode selection.
1.0.0
Core predict-compare-update cycle, calibration dashboard, tier progression (Observer/Analyst/Architect), privacy-first encrypted local storage.
Published by az8T Lab