# 🐾 PuPu Editor — AI-Powered IDE
An AI-powered development environment built on top of VS Code, similar to Cursor, Windsurf, and Antigravity. Connects to free models hosted on Hugging Face, NVIDIA, Groq, OpenRouter, or your local machine via Ollama.
## ✨ Features

### 🤖 Autonomous Agent System

- **Plan → Execute → Verify loop** — the AI plans tasks, executes them with tools, and verifies the results
- **Tool-calling** — the agent can read/write files, search code, and run terminal commands
- **Multi-step tasks** — handles complex multi-file operations autonomously
| Tool | Description |
|------|-------------|
| `read_file` | Read file contents with line ranges |
| `write_file` | Create or overwrite files |
| `list_directory` | List directory contents |
| `grep_search` | Search for patterns across files |
| `run_command` | Execute shell commands |
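The tool loop above can be sketched as follows. This is a minimal illustration of how an orchestrator might execute model-proposed tool calls, not PuPu's actual `AgentOrchestrator` implementation — the tool bodies are stubs and all names besides the documented tool names are hypothetical.

```typescript
// A tool takes string arguments and returns a string result (stubbed here;
// real tools would touch the filesystem or spawn a terminal).
type Tool = (args: Record<string, string>) => string;

const tools: Record<string, Tool> = {
  read_file: ({ path }) => `contents of ${path}`,
  list_directory: ({ path }) => `entries of ${path}`,
};

interface ToolCall { name: string; args: Record<string, string>; }

// One agent run: the model proposes a tool call, the orchestrator executes it
// and appends the result to history, until the model stops proposing calls
// or the iteration cap (cf. pupu.agent.maxIterations) is reached.
function runAgentLoop(
  propose: (history: string[]) => ToolCall | null,
  maxIterations = 25,
): string[] {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const call = propose(history);
    if (!call) break; // model finished: no more tool calls
    const tool = tools[call.name];
    const result = tool ? tool(call.args) : `unknown tool: ${call.name}`;
    history.push(`${call.name} -> ${result}`);
  }
  return history;
}

// Example: a scripted "model" that reads one file, then stops.
const trace = runAgentLoop(h =>
  h.length === 0 ? { name: "read_file", args: { path: "src/extension.ts" } } : null,
);
// trace: ["read_file -> contents of src/extension.ts"]
```

The iteration cap is what keeps a confused model from looping forever; PuPu exposes it as the `pupu.agent.maxIterations` setting.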
### 🔌 Model Providers

- **Ollama (Local)** — Free, private, no API key needed
- **Groq** — Ultra-fast inference, free tier
- **OpenRouter** — 300+ models, free models available
- **Hugging Face** — Free inference API
### 💬 Chat Panel

- Sidebar chat with streaming responses
- Tool-call visualization
- Context-aware (automatically includes the active file's context)
- Keyboard shortcut: `Ctrl+Shift+L`
### ✍️ Inline Completion (Ghost Text)

- Automatic code suggestions as you type
- FIM (fill-in-the-middle) support
- Debounced for performance
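FIM means the model sees the code both before and after the cursor, not just a prefix. A minimal sketch of assembling such a prompt is shown below — the sentinel tokens follow the Qwen-style FIM convention and are an assumption here; the actual tokens depend on the model, and this is not PuPu's real prompt builder.

```typescript
// Build a fill-in-the-middle prompt from the text around the cursor.
// Token names (<|fim_prefix|> etc.) vary by model family; these are one
// common convention, used here purely for illustration.
function buildFimPrompt(text: string, cursorOffset: number): string {
  const prefix = text.slice(0, cursorOffset);
  const suffix = text.slice(cursorOffset);
  return `<|fim_prefix|>${prefix}<|fim_suffix|>${suffix}<|fim_middle|>`;
}
```

The model is asked to generate what belongs at `<|fim_middle|>`, which is why FIM completions can respect code that comes *after* the cursor.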
### 🔗 MCP Server Support

- Connect to external tools via the Model Context Protocol
- Stdio and HTTP transports
- Configure via `.pupu/mcp.json`
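The exact schema of `.pupu/mcp.json` is not documented here; as a rough sketch, a config following the common `mcpServers` convention (server names, commands, and the URL below are illustrative) might look like:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    },
    "remote-tools": {
      "url": "http://localhost:3001/mcp"
    }
  }
}
```

The first entry is a stdio server launched as a subprocess; the second is an HTTP endpoint, matching the two transports listed above.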
### 📚 Skills System

- Define reusable instruction sets in `.pupu/skills/`
- YAML frontmatter + Markdown format
- Auto-injected into the agent's context
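A skill file might look like the following — the frontmatter field names are illustrative, since PuPu's exact schema isn't shown here:

```markdown
---
name: commit-style
description: How commit messages are written in this repo
---

Use the imperative mood ("Add X", not "Added X") and keep the
subject line under 50 characters.
```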
### 📋 Project Rules

- Create a `.pupurules` file in your project root
- Rules are automatically included in the agent's system prompt
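For example, a `.pupurules` file might contain plain-language instructions like these (contents are illustrative):

```
Always use TypeScript strict mode.
Prefer async/await over raw promise chains.
Never commit secrets or API keys.
```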
## 🚀 Getting Started

### Prerequisites

- VS Code 1.85+
- Node.js 18+
- (Recommended) Ollama installed locally

### Install & Run
```bash
# Clone the project
cd PuPuEditor

# Install dependencies
npm install

# Build
npm run compile:dev

# Run in VS Code
# Press F5 to open the Extension Development Host
```
Then configure a provider:

1. Press `Ctrl+Shift+P` → **PuPu: Configure Model Providers**
2. Select a provider and enter your API key

API keys are stored securely via VS Code's secret storage (OS keychain).
### For Ollama (Local, Free)

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# That's it! PuPu auto-connects to localhost:11434
```
## 📁 Project Structure

```
src/
├── extension.ts                       # Entry point
├── PuPuExtension.ts                   # Main extension class
├── types/index.ts                     # All TypeScript interfaces
├── agents/
│   └── AgentOrchestrator.ts           # Agent with tool-calling loop
├── completion/
│   └── InlineCompletionProvider.ts    # Ghost text completions
├── config/
│   └── ConfigurationManager.ts        # Settings & secrets
├── managers/
│   ├── ContextManager.ts              # Code context extraction
│   └── ModelRouter.ts                 # Provider selection & FIM
├── mcp/
│   └── MCPServerManager.ts            # MCP protocol support
├── providers/
│   ├── OllamaProvider.ts              # Local Ollama
│   ├── OpenAICompatibleProvider.ts    # Groq, OpenRouter, HuggingFace
│   └── ProviderRegistry.ts            # Provider lifecycle
├── skills/
│   └── SkillManager.ts                # Skill discovery & loading
├── tools/
│   ├── ToolRegistry.ts                # Tool registration & execution
│   ├── file/                          # File tools
│   ├── search/                        # Search tools
│   └── terminal/                      # Terminal tools
└── ui/
    ├── ChatViewProvider.ts            # Sidebar chat UI
    ├── ChatPanel.ts                   # Panel wrapper
    └── StatusBarManager.ts            # Status bar
```
## ⌨️ Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+Shift+L` | Open Chat Panel |
| `Ctrl+Shift+Space` | Complete Code |
| `Ctrl+Shift+E` | Explain Selected Code |
## 🔧 Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| `pupu.defaultProvider` | `ollama` | Active model provider |
| `pupu.providers.ollama.baseUrl` | `http://localhost:11434` | Ollama URL |
| `pupu.providers.ollama.model` | `llama3.2` | Default model |
| `pupu.maxTokens` | `2048` | Max generation tokens |
| `pupu.temperature` | `0.7` | Generation temperature |
| `pupu.inlineCompletion.enabled` | `true` | Enable ghost text |
| `pupu.agent.maxIterations` | `25` | Max agent tool-loop iterations |
| `pupu.agent.requireApproval` | `true` | Require approval for destructive actions |
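These settings live in VS Code's `settings.json`. The snippet below simply restates the documented defaults — override only the keys you want to change:

```json
{
  "pupu.defaultProvider": "ollama",
  "pupu.providers.ollama.baseUrl": "http://localhost:11434",
  "pupu.providers.ollama.model": "llama3.2",
  "pupu.maxTokens": 2048,
  "pupu.temperature": 0.7,
  "pupu.inlineCompletion.enabled": true,
  "pupu.agent.maxIterations": 25,
  "pupu.agent.requireApproval": true
}
```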
## 📄 License

MIT