Real-time AI readiness analysis in VS Code. Detect issues that confuse AI models before they become problems.
## Why AIReady?

- **AI coding assistants giving bad suggestions?** AIReady finds out why.
- **Context window costs too high?** AIReady shows where to optimize.
- **Code reviews catching AI-generated duplicates?** AIReady prevents them.
- **Want to make your codebase AI-native?** AIReady shows you how.
## Features

- 🛡️ **Real-time Analysis** - See your AI readiness score in the status bar
- 📊 **Issue Explorer** - Browse detected issues in the sidebar
- ⚡ **Quick Scan** - Analyze the current file with a single command
- 🔬 **10-Metric Methodology** - Deep dive into 10 dimensions of AI-readiness
- 🔧 **Configurable** - Set thresholds, severity levels, and more
- 🤖 **MCP Server** - Expose AIReady capabilities to MCP-compliant AI agents (Cursor, Windsurf, Claude)
## MCP Server Integration

AIReady includes a built-in Model Context Protocol (MCP) server that you can integrate directly into your AI coding assistants. This allows your agent to analyze your codebase context dynamically.
### Cursor IDE

1. Open Cursor Settings.
2. Navigate to **Features -> MCP Servers**.
3. Add a new server with the command: `npx -y @aiready/mcp-server`
### Windsurf IDE

1. Open settings and add a new MCP Server.
2. Set the command to: `npx -y @aiready/mcp-server`
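Most MCP-compatible editors ultimately store servers as a JSON entry. A typical configuration (the file location and exact schema vary by editor and version, and the `aiready` key is just a label) looks like:

```json
{
  "mcpServers": {
    "aiready": {
      "command": "npx",
      "args": ["-y", "@aiready/mcp-server"]
    }
  }
}
```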
## Installation

### From VS Code Marketplace

1. Open VS Code
2. Go to Extensions (`Cmd+Shift+X`)
3. Search for "AIReady"
4. Click **Install**
### Manual Installation

```shell
# Install from VSIX
code --install-extension aiready-vsix
```
## Usage

### Commands

| Command | Description |
|---------|-------------|
| `AIReady: Scan Workspace` | Run full AI readiness analysis |
| `AIReady: Quick Scan (Current File)` | Analyze only the active file |
| `AIReady: Show Report` | Open the output panel with details |
| `AIReady: Open Settings` | Configure AIReady options |
| `AIReady: Show Methodology` | Deep dive into the 10 metrics |
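Commands can also be bound to shortcuts in VS Code's `keybindings.json`. The internal command ID below is hypothetical — check the extension's Feature Contributions tab for the real IDs:

```json
[
  {
    "key": "ctrl+alt+a",
    "command": "aiready.quickScan",
    "when": "editorTextFocus"
  }
]
```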
## Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| `aiready.threshold` | `70` | Minimum score to pass |
| `aiready.failOn` | `critical` | Severity level to fail on |
| `aiready.tools` | `["patterns", "context", "consistency", ...]` | Tools to run |
| `aiready.autoScan` | `false` | Auto-scan on file save |
| `aiready.showStatusBar` | `true` | Show score in status bar |
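These options live in the standard VS Code `settings.json`. A minimal example using the settings from the table above (the values shown are illustrative):

```json
{
  "aiready.threshold": 75,
  "aiready.failOn": "critical",
  "aiready.autoScan": true,
  "aiready.showStatusBar": true
}
```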
## Status Bar

The extension shows your AI readiness score in the status bar:

- ✅ **70+** - Good AI readiness
- ⚠️ **50-69** - Needs improvement
- ❌ **<50** - Critical issues detected
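The banding above can be sketched as a simple mapping (an illustration of the documented thresholds, not the extension's actual source):

```typescript
// Map an AI readiness score to the status-bar badge described above.
// Illustrative sketch; not the extension's real implementation.
function statusBadge(score: number): string {
  if (score >= 70) return "✅"; // good AI readiness
  if (score >= 50) return "⚠️"; // needs improvement
  return "❌"; // critical issues detected
}
```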
## The 10 Dimensions of AI-Readiness

AIReady measures your codebase against 10 critical metrics that determine how well AI agents can understand and maintain your code:

1. **Semantic Duplicates** - Logic repeated in different ways that confuses AI context.
2. **Context Fragmentation** - How scattered related logic is across the codebase.
3. **Naming Consistency** - Unified naming patterns that help AI predict your intent.
4. **Dependency Health** - Stability and freshness of your project dependencies.
5. **Change Amplification** - Ripple effects when a single requirement evolves.
6. **AI Signal Clarity** - Ratio of actual logic (signal) to boilerplate/dead code (noise).
7. **Documentation Health** - Accuracy and freshness of docstrings and READMEs.
8. **Agent Grounding** - Ease of navigation for autonomous AI agents.
9. **Testability Index** - Ability for AI to write and run reliable tests for your code.
10. **Contract Enforcement** - Structural type contracts that prevent defensive coding cascades.
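As an example of the first dimension, a semantic duplicate is the same behavior written in two different styles. The snippet below is a hypothetical illustration of what such a pair looks like, not output from the tool:

```typescript
// Same computation, two styles — a semantic duplicate that doubles the
// context an AI model must read to understand "order total".
function totalPrice(items: { price: number }[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

function calcOrderTotal(order: { price: number }[]): number {
  let total = 0;
  for (const entry of order) {
    total += entry.price;
  }
  return total; // identical result to totalPrice
}
```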
## Methodology & Deep Dives

Click on any tool score in the sidebar's Summary view to open the AIReady Methodology deep dive. This view provides:

- **Technical "How"**: The engineering logic behind each metric.
- **Scoring Thresholds**: What constitutes a pass vs. a fail.
- **Refactoring Playbook**: Actionable steps to improve your score.
- **Good vs. Bad Examples**: Visual code comparisons.
## Requirements

- VS Code 1.85.0 or higher
- Node.js 18+ (for CLI execution)
## Release Notes

### 0.3.32

- **New 10-Metric Methodology**: Integrated full deep-dive support for all 10 AI-readiness metrics.
- **Methodology Webview**: Added a detailed view explaining detection logic, thresholds, and examples.
- **Interactive Summary**: Click tool scores to see how they are calculated and how to fix them.
- **Refined UI**: Improved issue grouping and visualization.