# Spec-Engine

Transform your business logic into product documentation.

## 🎯 Overview

Spec-Engine is a VS Code extension that automatically generates detailed Product Requirements Documents (PRDs) from your business logic code. Simply write your logic in a `.spec` file, and the extension turns it into a structured PRD.

### Key Benefits
## ✨ Features

### 🚀 Core Features

### 🎨 UI/UX

### 🛠️ Developer Experience

## 🎬 Demo

### Workflow Preview
## 📖 .spec Language Reference

### Basic Syntax

#### Variable Assignment

#### Data Types

#### Comments

#### Operators

### Built-in Functions

#### Aggregation Functions

#### Array Functions

### Control Flow

#### If-Else Conditional

Syntax:

Examples:

### Complete Examples

#### Example 1: E-commerce Pricing

#### Example 2: Grade Calculation

#### Example 3: Inventory Management

### Language Limitations
## 📦 Installation

### Method 1: VS Code Marketplace (Recommended)

### Method 2: Manual Installation

### Prerequisites

## 🚀 Quick Start

### 1. Create a `.spec` file
| Resource | Minimum | Recommended |
|---|---|---|
| OS | macOS 10.15+ / Windows 10+ / Linux | Any modern OS |
| RAM | 2GB | 4GB+ |
| Disk Space | 2GB (for Ollama + model) | 5GB+ |
| CPU | Dual-core | Quad-core+ |
### Dependencies

- Ollama (auto-installed)
- Gemma 2B model (~1.6GB, auto-downloaded on first use)

Note: The extension will:

- Detect if Ollama is missing
- Show installation instructions
- Auto-download the model when needed
## 🔧 Extension Settings

This extension contributes the following settings:

### `spec-engine.aiProvider`

- Type: `string`
- Default: `"ollama"`
- Options: `"ollama"` | `"gemini"`
- Description: Select the AI provider (local or cloud)

### `spec-engine.localModelName`

- Type: `string`
- Default: `"gemma2"`
- Description: Ollama model name (e.g., `gemma2`, `llama3`, `qwen`)

### `spec-engine.apiKey`

- Type: `string`
- Default: `""`
- Description: Google Gemini API key (only needed if the provider is `gemini`)

### `spec-engine.modelName`

- Type: `string`
- Default: `"gemini-1.5-flash"`
- Description: Gemini model version (only used with the Gemini provider)
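For reference, the settings above go in your VS Code `settings.json`. A minimal example, assuming the default local Ollama provider (the commented lines show the cloud alternative):

```jsonc
{
  // Local provider (default): Ollama with the Gemma 2 model
  "spec-engine.aiProvider": "ollama",
  "spec-engine.localModelName": "gemma2"

  // Cloud alternative (requires an API key):
  // "spec-engine.aiProvider": "gemini",
  // "spec-engine.apiKey": "<your-gemini-api-key>",
  // "spec-engine.modelName": "gemini-1.5-flash"
}
```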
## 🏗️ How It Works

### Architecture Overview

```
┌─────────────┐
│  .spec File │
└──────┬──────┘
       │
       ▼
┌─────────────────┐
│   SpecEngine    │ ← Real-time calculation
│ (Logic Parser)  │
└──────┬──────────┘
       │
       ▼
┌─────────────────┐
│  Ollama Server  │ ← Local AI (Gemma 2B)
│ (Auto-managed)  │
└──────┬──────────┘
       │
       ▼
┌─────────────────┐
│   AI Service    │ ← Stream API + Progress
│  (Prompt Eng.)  │
└──────┬──────────┘
       │
       ▼
┌─────────────────┐
│   PRD Output    │ ← Markdown with GitHub style
└─────────────────┘
```
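The Ollama step in this pipeline streams newline-delimited JSON from its local HTTP API (`POST /api/generate`), where each line carries a text fragment in a `response` field. A minimal sketch of consuming that stream — `parseChunks` and `generatePrd` are illustrative names, not the extension's actual internals, and the fetch is simplified to collect the whole body rather than process chunks incrementally:

```typescript
// Shape of one NDJSON line from Ollama's /api/generate endpoint.
interface OllamaChunk {
  response: string; // text fragment
  done: boolean;    // true on the final chunk
}

// Join the text fragments from a newline-delimited JSON payload.
function parseChunks(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => (JSON.parse(line) as OllamaChunk).response)
    .join("");
}

// Simplified, non-incremental call; the real extension reads the
// response body chunk-by-chunk to drive the progress bar.
async function generatePrd(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model: "gemma2", prompt, stream: true }),
  });
  return parseChunks(await res.text());
}
```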
### Prompt Engineering

- Persona: 10-year senior Product Manager
- Few-Shot Learning: Input/output examples included in the prompt
- Structured Output: Predefined sections (Overview, Goals, Logic, Edge Cases, etc.)
- Detail-Enhanced: Explicit instructions for depth and specificity

### Stream API + Progress Estimation

- Streaming: Ollama sends text chunks in real-time
- Throttle: Updates at most every 300ms (smooth performance)
- Estimation: Progress based on text length ÷ 3000 chars (average PRD length)
- Increments: 5% steps (not 1%) for a better UX
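The estimation scheme above can be sketched in a few lines. This is an illustrative reconstruction from the numbers in this README (300ms throttle, 3000-char average, 5% steps, plus the 95% cap mentioned under Troubleshooting), not the extension's actual source:

```typescript
const AVG_PRD_CHARS = 3000; // assumed average PRD length
const STEP = 5;             // report progress in 5% increments
const THROTTLE_MS = 300;    // update the UI at most every 300ms
const CAP = 95;             // hold at 95% until generation completes

// Map characters received so far to a stepped, capped percentage.
function estimateProgress(charsSoFar: number): number {
  const raw = (charsSoFar / AVG_PRD_CHARS) * 100;
  const stepped = Math.floor(raw / STEP) * STEP; // snap down to 5% steps
  return Math.min(stepped, CAP);
}

// Throttled reporter: only forwards an update every THROTTLE_MS.
let lastUpdate = 0;
function onChunk(totalChars: number, report: (pct: number) => void): void {
  const now = Date.now();
  if (now - lastUpdate >= THROTTLE_MS) {
    lastUpdate = now;
    report(estimateProgress(totalChars));
  }
}
```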
## 📊 Performance

### Speed Benchmarks (Gemma 2B on Apple Silicon M2)

| Code Length | Generation Time | PRD Length |
|---|---|---|
| 10 lines | ~8 seconds | ~1,500 chars |
| 30 lines | ~15 seconds | ~3,000 chars |
| 50 lines | ~25 seconds | ~5,000 chars |

### Resource Usage

- Memory: ~1.8GB (Ollama + Gemma 2B)
- CPU: 30-50% during generation
- Disk: ~1.6GB for model download

### User Experience Metrics

- Logic Calculation: < 100ms (instant)
- Progress Updates: Every 300ms (5% increments)
- Rendering: Smooth, no flickering
- Cancellation: Immediate abort
## 🛠️ Troubleshooting

### Common Issues

#### 1. "Ollama not found"

Solution:

```bash
# macOS
brew install ollama

# Windows
winget install Ollama.Ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh
```
#### 2. "Model 'gemma2' not found"

Solution:

- Click "다운로드 시작" ("Start Download") when prompted
- Or run manually: `ollama pull gemma2`
#### 3. "Connection refused (localhost:11434)"

Solution:

- Restart VS Code
- Manually start Ollama: `ollama serve`
- Check firewall settings
#### 4. Generation takes too long

Possible causes:

- Large input file (50+ lines)
- Slow machine / low RAM
- Model not fully loaded

Solution:

- Use shorter `.spec` files
- Close other applications
- Wait for first-time model loading (~30s)
#### 5. Progress bar stuck at 95%

This is normal: progress is capped at 95% until generation completes, so the bar never reports a false 100% before the text is ready.
## 🚀 Roadmap

- [ ] Syntax Highlighting: `.spec` file colorization
- [ ] Language Support: English PRD output option
- [ ] Custom Prompts: User-defined PRD templates
- [ ] Model Selector: In-app model switcher (Gemma, Llama, Qwen, etc.)
- [ ] PRD History: Version control for generated documents
- [ ] Diff View: Compare PRD versions side-by-side
- [ ] Export Options: PDF, DOCX, HTML
- [ ] Collaborative Mode: Share PRDs via URL
## 🤝 Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Setup

```bash
# Clone the repo
git clone https://github.com/cow-coding/spec-engine.git
cd spec-engine

# Install dependencies
npm install

# Compile TypeScript
npm run compile

# Run in VS Code: press F5 to open the Extension Development Host
```
### Code of Conduct

Please be respectful and constructive. See CODE_OF_CONDUCT.md.

## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

## 🙏 Acknowledgments

### Inspiration

- Original idea inspired by: YouTube

### Built With

- Ollama: Local LLM runtime
- Google Gemma 2: Base AI model (2B parameters)
- VS Code Extension API: Extension framework
- Marked.js: Markdown rendering

### Co-Authors

- Claude Sonnet 4.5: Pair programming partner for this project

## 💬 Support

- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Discussions
- ⭐ Star this repo if you find it useful!

Made with ❤️ and AI