Lightwork Studio
Enforced orchestration runtime for AI-assisted development.
32 rules. 20 hooks. 7 agents. Quality enforcement by code, not hopes.
Works with: Claude Code | VS Code | Cursor | Gemini CLI | Codex | GitHub Copilot
What Is Lightwork?
AI coding tools generate code 10x faster. But 66% of developers say the output is "almost right, but not quite," 45% of it contains known security flaws, and each iteration cycle adds 37.6% more vulnerabilities.
Speed without quality is just faster failure.
Lightwork Studio transforms Claude Code into a rigorously enforced production system. Install it, scaffold the framework, and every AI interaction runs through 32 battle-tested rules enforced by 20 hooks.
Who Is Lightwork For?
Non-technical founders: You're the CEO with an idea. Lightwork is your CTO — it handles engineering discipline, quality enforcement, and technical decision-making so you can focus on what to build, not how to build it.
Solo developers: You know the discipline is important but context-switching between coding and quality checking kills your flow. Lightwork enforces the discipline automatically so you stay in the zone.
Senior engineers who use AI tools: You've seen the quality problems firsthand. Lightwork gives you the enforcement hooks to keep AI-generated code at your standards without manually reviewing every line.
Features
Free Tier
- Build Lightwork — One command scaffolds the entire framework into any project
- Pipeline Visualizer — See current phase, task, session count in the sidebar
- Status Bar — Live indicator showing Lightwork state
- 32 Rules — Browse all rules with categories in the sidebar
- 3 Agents — Researcher, Developer, Reviewer
- State Management — Automatic state file creation and tracking
Pro ($29/month — save with quarterly, 6-month, or yearly plans)
- 20 Enforcement Hooks — Rules enforced by code. Can't stop without tests. Can't skip steps.
- All 7 Agents — Adds Planner, Tester, Security, Architect with strict tool separation
- Constraint Snapshot — Evaluate decisions against all constraint fields before implementing
- Quality Dashboard — Live visualization of knowledge connections, weights, and field health
- Intent Capture — Structured intent extraction saved to CONTEXT.md
- Knowledge System — UUID-indexed entries with weight-based retrieval
- Task Triage — PATCH/SMALL/MEDIUM/LARGE classification scales the cycle to complexity
- Haiku Semantic Evaluation — Each /proceed cycle fires an independent evaluation of all 32 rules. Uses your Claude API quota (~1 short message per cycle). Degrades gracefully if unavailable.
- PR Quality Gate — GitHub Actions workflow that enforces Lightwork standards on every PR
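The enforcement-hook bullet above ("can't stop without tests") maps onto Claude Code's hook mechanism, where a hook script can veto an action by exiting with code 2 and having its stderr fed back to the model. A minimal sketch of such a check, assuming a hypothetical `.lightwork/state.json` with a `tests_passed` flag (not Lightwork's actual schema):

```python
import json
import sys

def check_stop(state: dict) -> tuple[int, str]:
    """Decide whether the agent may stop. In Claude Code, a hook that
    exits with code 2 blocks the action and returns its stderr to the
    model. The 'tests_passed' flag is a hypothetical field, used here
    only to illustrate the enforce-by-code idea."""
    if not state.get("tests_passed", False):
        return 2, "BLOCKED: tests have not passed this cycle; run them before stopping."
    return 0, "ok"

def main() -> None:
    # A real hook would also read the hook event JSON from stdin;
    # here we only consult the (hypothetical) local state file.
    with open(".lightwork/state.json") as f:
        state = json.load(f)
    code, message = check_stop(state)
    print(message, file=sys.stderr)
    sys.exit(code)
```

The key design point is that the rule lives in code, not in a prompt: the model cannot talk its way past an exit code.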
Enterprise ($199/user/month — save with quarterly, 6-month, or yearly plans)
- Everything in Pro
- Cognition Engine — Dynamic knowledge graph with Hebbian learning
- Team Dashboard — Aggregated enforcement metrics across developers (trends, not surveillance)
- Branch-Scoped Enforcement — Cycle history audit trail across branches
- Cross-Repo Knowledge Sharing — Export/import knowledge entries between projects
- Role-Based CONTEXT.md — Product Owner, Tech Lead, Designer sections with ownership
- Team Onboarding — Complete TEAM.md with shared vs per-developer file management
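Hebbian learning, as named in the Cognition Engine bullet, can be summarized as "entries that fire together wire together": edges between co-activated knowledge entries strengthen, while unused edges slowly decay. An illustrative sketch (the rates and the edge-dict layout are assumptions, not the engine's internals):

```python
def hebbian_update(weights: dict, activated: list, rate: float = 0.1, decay: float = 0.01) -> dict:
    """One illustrative Hebbian step over an edge-weight dict keyed by
    (entry_id, entry_id) pairs. Rates are hypothetical defaults."""
    for pair in weights:
        weights[pair] *= (1 - decay)          # all edges decay slightly each step
    for a in activated:
        for b in activated:
            if a < b:                         # undirected edge, canonical key order
                pair = (a, b)
                weights[pair] = weights.get(pair, 0.0) + rate  # co-activation strengthens
    return weights
```

Repeated co-activation drives an edge's weight up, so weight-based retrieval naturally surfaces the entries most often used together.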
- Dedicated Support
The 32 Rules
Every rule was earned through failure in production development:
- ONE CYCLE PER PROCEED
- ZERO MANUAL STEPS
- MAXIMUM EFFORT
- OVERKILL GATE
- R/T/P/B PIPELINE
- SESSION BOOTSTRAP
- STATE FILES CURRENT
- ONE FEATURE PER CYCLE
- CONTEXT.MD SACRED
- RESEARCH IS A LIBRARY
- GIT DISCIPLINE
- REAL SCIENCE
- SELF-IMPROVEMENT
- VERIFY BEFORE BUILD
- DISAGREE WHEN EVIDENCE DEMANDS
- SELF-CRITIQUE
- SPEC COMPLETENESS GATE
- UNIVERSAL SECURITY
- TRACE THE FULL PATH
- TEST WITH THE HUMAN
- DOMAIN RESEARCH FIRST
- PLAYTEST FEEDBACK OVERRIDES
- DEPENDENCY COMPLETENESS
- ITERATION-AWARE SECURITY
- LOOP DETECTION
- COMPLETION VERIFICATION
- INTENT VERSIONING
- RATIONALIZATION DETECTION
- DEPLOYMENT IS BUILD
- REGRESSION PREVENTION
- VALIDATE BEFORE BUILD
- DOCUMENTATION INTEGRITY
Why Lightwork Exists
Measured AI code quality without an orchestration framework:
- 1.7x more issues per PR (CodeRabbit 2025-26)
- 37.6% vulnerability increase per iteration cycle (IEEE-ISTAS 2025)
- 45% of AI code has known security flaws (Veracode 2025)
- 31.7% fail in clean environments (Reproducibility Study 2026)
- +41% complexity increase with AI agents (GitClear/Cortex 2025)
Every Lightwork rule, hook, and agent exists to address a specific measured failure mode. Each was earned through failure, not theory.
Prerequisites
- VS Code (or a compatible editor such as Cursor)
- Claude Code installed and signed in
Quick Start
- Install from VS Code Marketplace (or Open VSX for Cursor)
- Open any project
- Run the command Lightwork: Build Lightwork from the Command Palette
- Open Claude Code and run /proceed for your first enforced cycle
Research Basis
Grounded in 20+ peer-reviewed sources: DORA/Accelerate, Reflexion (Shinn et al. 2023), Swiss Cheese Model, Praetorian 8-layer architecture, CodeRabbit AI Code Report, IEEE-ISTAS Security, Veracode GenAI Security, and more.
Privacy
Lightwork Studio collects zero data by default. Optional telemetry (opt-in only) collects anonymous usage patterns — never code, file names, specs, or project content. Your CONTEXT.md never leaves your machine.
License
Extension: MIT | Framework methodology: CC BY-NC-SA 4.0
Built in Arkansas. By a human and an AI. Who decided that "good enough" isn't.