# PiAgent - AI Coding Agent for VSCode

A full-featured AI coding agent that lives inside VSCode's Chat panel. Powered by pi-coding-agent — the same engine behind the pi CLI.
## Why PiAgent?
## Quick Start
That's it. PiAgent initializes a session, picks the best available model, and starts working.

## Features

### Chat Participant

### Tool Execution

PiAgent has four tools:
All tool output streams to the Chat panel in real time. Full output is available in the Output channel (PiAgent).

### Multi-Provider LLM Support

PiAgent supports every provider that pi-coding-agent supports:
Switch models at any time by clicking the model name in the status bar.

### OAuth Login

Use `/login` to authenticate with an existing subscription instead of an API key. Supported OAuth providers:
### Session Management
Sessions are stored on disk at `~/.pi/agent/sessions/` and `<project>/.pi/sessions/`.

### Automatic Context Compaction

When the conversation approaches the model's context window limit, PiAgent automatically compacts the history — summarizing earlier messages while preserving recent context. This happens transparently; you see a brief progress indicator and can keep working.

### Slash Commands
### Command Palette

Open the command palette (Ctrl+Shift+P, or Cmd+Shift+P on macOS).
### Keyboard Shortcuts
## Configuration

PiAgent shares most configuration with the pi CLI — API keys, models, and sessions are stored in `~/.pi/agent/`.

### VSCode Settings

PiAgent has VSCode-specific settings for status bar display and agent behavior.
#### piagent.statusBar.show

| Value | Status bar | Description |
|---|---|---|
| `inputTokens` | `↑141` | Cumulative input tokens sent to the model |
| `outputTokens` | `↓26k` | Cumulative output tokens received |
| `cacheRead` | `R7.8M` | Tokens read from prompt cache |
| `cacheWrite` | `W99k` | Tokens written to prompt cache |
| `cost` | `$5.159 (sub)` | Session cost in USD. Shows `(sub)` for OAuth subscriptions |
| `contextUsage` | `49.7%/200k (auto)` | Context window usage %. Shows `(auto)` when auto-compaction is on |
Default — show everything:

```json
"piagent.statusBar.show": [
  "inputTokens",
  "outputTokens",
  "cacheRead",
  "cacheWrite",
  "cost",
  "contextUsage"
]
```
Show only cost and context usage:

```json
"piagent.statusBar.show": ["cost", "contextUsage"]
```
Show only the model name (hide all stats):

```json
"piagent.statusBar.show": []
```
The tooltip (hover over the status bar) always shows the full breakdown regardless of this setting.
#### piagent.autoCompaction
Automatically compact the conversation context when approaching the model's token limit. When enabled (default: true), older messages are summarized to free up space while preserving recent context.
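The compaction trigger can be sketched roughly as follows. This is an illustrative sketch only — the 90% threshold, the message shape, and the fixed-size summary are assumptions, not PiAgent's actual internals:

```typescript
// Illustrative model of auto-compaction. Message shape, threshold, and
// summary size are assumptions for the sketch, not PiAgent internals.
interface Message {
  role: string;
  tokens: number;
}

// Trigger when total history tokens approach the context window.
function shouldCompact(
  history: Message[],
  contextWindow: number,
  threshold = 0.9,
): boolean {
  const used = history.reduce((sum, m) => sum + m.tokens, 0);
  return used >= contextWindow * threshold;
}

// Naive compaction: collapse everything except the last `keep` messages
// into a single summary message, preserving recent context.
function compact(history: Message[], keep = 4): Message[] {
  if (history.length <= keep) return history;
  const summary: Message = { role: "system", tokens: 200 }; // summarized prefix
  return [summary, ...history.slice(-keep)];
}
```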
#### piagent.autoRetry
Automatically retry failed API requests with exponential backoff (default: true). Helps handle transient network errors and rate limits.
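A minimal sketch of retry with exponential backoff, in the spirit of this setting; the attempt count, base delay, and error handling are illustrative assumptions, not the extension's actual implementation:

```typescript
// Sketch of retry-with-exponential-backoff. Attempt count and delays are
// illustrative assumptions, not PiAgent's actual values.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // attempts exhausted
      // Delay doubles each attempt: 500ms, 1s, 2s, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
}
```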
#### piagent.blockImages
Block image attachments from being sent to the model (default: false). Useful for reducing token usage or when using models that don't support vision.
#### piagent.thinkingLevel
Default thinking level for reasoning models like Claude with extended thinking or o1 (default: "medium"). Options: "off", "minimal", "low", "medium", "high". Higher levels allow more thorough reasoning but use more tokens.
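Taken together, the agent-behavior settings above might appear in VSCode's settings.json like this (the values here are only an example):

```json
{
  "piagent.statusBar.show": ["cost", "contextUsage"],
  "piagent.autoCompaction": true,
  "piagent.autoRetry": true,
  "piagent.blockImages": false,
  "piagent.thinkingLevel": "high"
}
```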
### API Keys
Set environment variables before launching VSCode, or store them in `~/.pi/agent/auth.json`:

```shell
# Environment variables
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GEMINI_API_KEY=...
export DEEPSEEK_API_KEY=...
export MISTRAL_API_KEY=...
export GROQ_API_KEY=...
export XAI_API_KEY=...
```
### Settings
Global settings live at `~/.pi/agent/settings.json`. Project-local overrides go in `.pi/settings.json` at the project root.
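The override behavior can be pictured as a key-by-key merge where project-local values win; the shallow-merge semantics here are an assumption for illustration, not a guarantee about how pi resolves settings:

```typescript
// Illustrative settings resolution: project-local keys override global ones.
// Shallow-merge semantics are an assumption for this sketch.
type Settings = Record<string, unknown>;

function effectiveSettings(
  globalSettings: Settings,
  projectSettings: Settings,
): Settings {
  // Later spread wins, so project-local keys take precedence.
  return { ...globalSettings, ...projectSettings };
}
```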
### Custom Models
Add or override models via `~/.pi/agent/models.json`. Any provider that speaks the OpenAI, Anthropic, or Google API can be added as a custom provider. See the pi documentation for details.
## Output
- Chat Panel — Streaming markdown responses and inline tool call summaries
- Output Channel — Full untruncated tool output, useful for long bash output or large file reads. Access via View → Output → PiAgent
- Status Bar — Shows the active model, token usage, cost, and context window usage. Click to switch models. Hover for a detailed breakdown. Customize visible items via `piagent.statusBar.show`
## Requirements
- VSCode 1.100.0 or later
- Node.js 20.0.0 or later
- An API key for at least one supported provider
## FAQ
### How is this different from GitHub Copilot?
PiAgent is an autonomous coding agent with file system access and shell execution. It can read your codebase, make multi-file edits, run tests, and iterate on errors. Copilot is primarily an autocomplete and chat tool. PiAgent also lets you bring any model from any provider.
### Do I need the pi CLI installed?
No. PiAgent bundles pi-coding-agent as a dependency. However, if you already use the pi CLI, they share the same configuration directory (`~/.pi/agent/`), so API keys, settings, sessions, and custom models work in both places.
### Can I use my Anthropic/OpenAI subscription instead of an API key?
Yes. Use `/login` directly in VSCode to authenticate with your existing subscription (Anthropic Claude Pro/Max, ChatGPT Plus/Pro, GitHub Copilot, or Google Gemini). PiAgent opens your browser, completes the OAuth flow, and saves credentials to `~/.pi/agent/auth.json`. If you've already authenticated via `pi /login` in the CLI, PiAgent picks up the stored credentials automatically.
### Does PiAgent ask for permission before running tools?
No. There is no permission system — no "allow/deny" dialogs for file edits or shell commands. The agent executes tools directly as requested. This matches the behavior of the pi CLI. If you want to review changes before they happen, ask the agent to show you a plan first, or use git to review and revert.
### Where are sessions stored?
Sessions are stored at `~/.pi/agent/sessions/` and `<project>/.pi/sessions/`. They are plain JSON files and are fully portable between PiAgent and the pi CLI.
## License
## Credits
Built on pi-coding-agent and pi-ai by Mario Zechner. These libraries provide all the core functionality — multi-provider LLM support, tool execution, session management, OAuth authentication, and more. This extension is a VSCode integration layer that brings that power to the Chat panel.