# Versus for Copilot
IACDM (Iterative Adversarial Convergence Development Methodology) orchestrator for GitHub Copilot Agent Mode.
An original methodology created by Jasmine Moreira that transforms LLMs from reactive code generators into disciplined agents following a structured 8-phase process — from problem discovery to delivery with tests and post-review.
## Why?
AI coding tools (Lovable, Cursor, Claude Code, ChatGPT) all share the same fundamental limitation: zero internal verification capability. They generate statistically plausible output without distinguishing correct code from incorrect code. The METR study (2025) showed experienced developers using AI tools were actually 19% slower — despite believing they were 20% faster.
The real distinction isn't between tools — it's between process and no process.
Versus implements the AG/AV (Generative Agent / Verification Agent) model: the AI generates, external agents (automated tests + human operator) verify at discrete gates. Errors are caught at the earliest possible phase, not in production.
## Features
- 8 sequential phases with verifiable exit criteria — no skipping steps
- 8 safeguards (S0-S7) that prevent common AI agent errors (premature convergence, scope creep, reimplementation)
- Convergence score (0-100) that gates advancement until the problem is truly understood
- 15 MCP tools for state management, decision recording, and phase transitions
- Persistent decision registry that survives context switches
- Gateway guard that detects when models skip loading context
- Adversarial critique with 7 universal + 8 conditional specialized lenses
- VS Code sidebar showing methodology state in real time
- Multi-session testing protocol for Phase 6 continuity across sessions
- Meta-iteration (v1→v2): `start_new_cycle()` resets to Phase 0 while preserving all decisions, `specs/`, and history
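The meta-iteration reset can be pictured as a small state transition: only the phase counter goes back to zero, while decisions and history carry over. This is a minimal TypeScript sketch under an assumed `CycleState` shape; the extension's actual `state.json` schema is not documented here.

```typescript
// Hypothetical shape of the persisted methodology state (an assumption,
// not the extension's real schema).
interface CycleState {
  cycle: number;       // v1, v2, ...
  phase: number;       // 0-7
  decisions: string[]; // persistent decision registry
  history: string[];   // phase/transition log
}

function startNewCycle(state: CycleState): CycleState {
  return {
    ...state,               // decisions and history are preserved
    cycle: state.cycle + 1, // v1 -> v2
    phase: 0,               // restart at Problem Discovery
  };
}
```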
## How It Works
- Phase 0: Problem Discovery → HSA 5-level exploration (score >= 90 to advance)
- Phase 1: Architecture → Define modules, interfaces, patterns
- Phase 2: Adversarial Critique → Attack the architecture with specialized lenses
- Phase 3: Simplification → Address criticals, simplify (loops with Phase 2)
- Phase 4: Convergence Gate → Validate all exit criteria + safeguards
- Phase 5: Implementation → Code the final architecture
- Phase 6: Tests → 100% passing + exploratory testing
- Phase 7: Post-Review → Double-loop learning: evaluate product AND process
## Phase 0: Hierarchical Semantic Analysis (HSA)
HSA is the methodology's most distinctive contribution. It structures problem exploration in 5 levels, each building on the previous:
| Level | Focus | What is sought |
| --- | --- | --- |
| 1. Domain | Universe where the problem exists | Vocabulary, theoretical/technical field, tools, state of the art |
| 2. Problem | What needs to be solved | 5W1H: who, what, when, where, why, how solved today |
| 3. Elements | Parts composing the problem | Components, entities, actors, constraints |
| 4. Processes | How parts relate | Flows, transformations, dependencies, feedback cycles |
| 5. Product | Expected outcome | Deliverables, acceptance criteria, success metrics |
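One way to picture the Phase 0 gate is a convergence score aggregated over the five levels. The averaging formula below is an illustrative assumption; the README only specifies the threshold (score >= 90).

```typescript
// The five HSA levels from the table above.
const HSA_LEVELS = ["Domain", "Problem", "Elements", "Processes", "Product"];

// coverage[i]: how completely level i has been explored, on a 0-100 scale.
// Averaging is an assumed aggregation, not the extension's actual formula.
function convergenceScore(coverage: number[]): number {
  const sum = coverage.reduce((a, b) => a + b, 0);
  return sum / coverage.length;
}

// S0: the agent may not leave Phase 0 until the score clears the gate.
function canAdvanceFromPhase0(coverage: number[]): boolean {
  return convergenceScore(coverage) >= 90;
}
```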
## Context Efficiency
The methodology maximizes E = I₀/C (relevant information / total context consumed) through granularization: each session focuses on one module with just its interface signatures, keeping context efficiency near 1.0 instead of letting it degrade toward 0.3 in monolithic sessions.
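The metric is simple to state in code; the token counts below are invented for illustration.

```typescript
// E = I0 / C: relevant tokens over total context tokens consumed.
function contextEfficiency(relevantTokens: number, totalTokens: number): number {
  if (totalTokens === 0) return 0;
  return relevantTokens / totalTokens;
}

// Granular session: one module plus just its interface signatures.
const granular = contextEfficiency(1800, 2000);    // ~0.9, near 1.0
// Monolithic session: the whole codebase loaded into context.
const monolithic = contextEfficiency(3000, 10000); // ~0.3
```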
## Quick Start
1. Install the extension
2. Open the Command Palette (`Ctrl+Shift+P`) and run **Versus Copilot: Initialize Project**
3. Enter a project name and description when prompted
4. Open a new Copilot Agent Mode conversation
5. Type `start` — the agent loads methodology state and begins Phase 0
Important: After initializing, always start a new conversation so Copilot picks up the MCP server and hooks. If the agent doesn't use MCP tools, run Developer: Reload Window from the Command Palette.
## The 8 Safeguards
| ID | Name | Protects Against |
| --- | --- | --- |
| S0 | Problem Convergence | Advancing without understanding the problem |
| S1 | Anti-Bug | Simplification that introduces bugs |
| S2 | Stopping Criterion | AI deciding when to stop (user's decision) |
| S3 | Premature Convergence | Stopping iteration too early |
| S4 | Explicit Verification | Skipping human validation at gates |
| S5 | Scope Preservation | Scope creep during critique-simplification |
| S6 | Do Not Reimplement | Recreating what already exists |
| S7 | Sequence Discipline | Starting tangential discussions during implementation |
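Conceptually, each safeguard is a predicate over the methodology state that must hold before the agent may proceed. Below is a hypothetical sketch of three of them; the `State` shape and the check logic are assumptions, only the IDs and their intents come from the table above.

```typescript
// Assumed state shape for illustration (not the extension's schema).
interface State {
  phase: number;
  convergenceScore: number; // 0-100
  humanApproved: boolean;   // explicit operator sign-off at the gate
  scopeItems: string[];     // what the agent currently plans to build
  originalScope: string[];  // what was agreed before critique began
}

const safeguards: { [id: string]: (s: State) => boolean } = {
  // S0: don't leave Phase 0 without understanding the problem.
  S0: s => (s.phase === 0 ? s.convergenceScore >= 90 : true),
  // S4: gates require explicit human validation.
  S4: s => s.humanApproved,
  // S5: no scope creep - every item must trace back to the original scope.
  S5: s => s.scopeItems.every(item => s.originalScope.indexOf(item) !== -1),
};

function violations(s: State): string[] {
  return Object.keys(safeguards).filter(id => !safeguards[id](s));
}
```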
## The AG/AV Model
```
F₀ → G₀ → F₁ → G₁ → ... → Fₙ → Gₙ → delivery
      ↑         ↑               ↑
 AV evaluates  AV evaluates    AV evaluates
```
- AG (Generative Agent): The LLM. Produces artifacts without verification capability.
- AV-automatic: Tests, linters, compilers — binary verdict on formalizable properties.
- AV-human: The operator — evaluates semantic adequacy, usability, domain correctness.
- Gate (Gₖ): Discrete point where progression requires AV approval. Rejection feeds concrete error information back to AG.
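The generate-verify-feedback loop can be sketched as follows. All names here are illustrative assumptions, not the extension's API, and only the automatic verifiers (tests, linters) are modeled; in the extension the AV-human step is interactive.

```typescript
// Sketch of one gate Gk: the generative agent (AG) produces an artifact,
// verification agents (AV) evaluate it, and a rejection feeds concrete
// error information back into the next generation attempt.
type Verdict = { pass: boolean; errors: string[] };
type Verifier = (artifact: string) => Verdict; // e.g. a test suite or linter

function runGate(
  generate: (feedback: string[]) => string, // the AG, given AV feedback
  verifiers: Verifier[],                    // the AV-automatic battery
  maxIterations = 3,
): { artifact: string; approved: boolean } {
  let feedback: string[] = [];
  let artifact = "";
  for (let i = 0; i < maxIterations; i++) {
    artifact = generate(feedback);                    // Fk: AG generates
    const verdicts = verifiers.map(v => v(artifact)); // Gk: AVs evaluate
    feedback = verdicts.reduce<string[]>((acc, v) => acc.concat(v.errors), []);
    if (verdicts.every(v => v.pass)) {
      return { artifact, approved: true };
    }
  }
  // Per S2, the operator (AV-human) decides what happens after
  // repeated rejection; the loop never declares success on its own.
  return { artifact, approved: false };
}
```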
## Requirements
| Requirement | Minimum | Why |
| --- | --- | --- |
| VS Code | >= 1.99.0 | Extension API compatibility |
| Node.js | >= 18.0.0 | Runs the MCP server (`node .versus/server.js`) |
| GitHub Copilot | Latest | Agent Mode for MCP tools and hooks |
Warning: Node.js is mandatory. Without it, the MCP server cannot start and the agent will ignore the methodology entirely. Install from nodejs.org.
## Companion Extension
Use Versus for Claude for Claude Code integration. Both extensions share the same `.versus/state.json` format.
## License
MIT - Created by Jasmine Moreira.