AI Agent Notebook

Jupyter-style .aanb notebooks for vibe coding and agent workflows: Prompt cells send instructions to your LLM (Ollama, OpenAI, Codex, Client / OpenAI-compatible servers, Claude, Gemini); Markdown cells hold curriculum and notes; optional Code cells run small local JavaScript. Chat via @AI-Agent uses the same LLM settings as the notebook.
Summary: teach and ship agentic notebooks in VS Code — ordered prompts, streaming cell output, cancellation, session resume, and a built-in education guide.
Features
- .aanb format — YAML frontmatter + markdown cell headers; the serializer round-trips with the VS Code notebook APIs.
- Cell execution — streaming output and interrupt/cancel for Prompt cells (agentNotebook.streamNotebookOutput).
- Notebook UX defaults — line numbers, cell toolbar placement; .aanb opens in the AI Agent Notebook editor.
- Chat participant — @AI-Agent with shared LLM configuration (the short name matches VS Code's allowed pattern for handles; fullName stays a readable title in Chat).
- Commands (category: AI Agent Notebook) — Open .aanb as Notebook (.aanb를 노트북으로 열기; Explorer context on .aanb); Open Vibe Coding Education Guide (바이브 코딩 교육 가이드 열기); Resume Last .aanb Notebook (최근 .aanb 노트북 이어서 열기); Open Sample Notebook (샘플 노트북 열기; calculator.aanb); Execute Cell (when an AI Agent Notebook is focused); Clean Run Results (실행 결과 정리; removes Output cells and clears run outputs on Prompt/Code/Markdown cells while keeping their source — available from the notebook toolbar, editor title, cell menu, Command Palette, and the Explorer context on .aanb).
- Shortcuts (optional): Ctrl+Shift+Alt+R (resume last), Ctrl+Shift+Alt+S (sample).
- Samples — samples/simple-calculator/calculator.aanb and samples/prompt-first/demo.aanb.
- Template AI Agent Notebook — bundled templates/template_*.aanb files bootstrap vibe-coding and creative workflows. Pick a template under File → New File → Start AI Agent Notebook from Template… (Template으로 AI Agent Notebook 시작하기…), or open a template_*.aanb and use Save for New Work… (Template) (새 작업용으로 저장… (템플릿)) from the notebook toolbar or editor title to make a copy. Concept and design: education/TEMPLATE_AI_AGENT_NOTEBOOK.md.
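The feature list describes .aanb as YAML frontmatter plus markdown cell headers. A minimal file might look like the sketch below; the frontmatter fields and header keywords shown here are illustrative assumptions, not the documented schema — see the bundled samples for the real format:

```
---
title: My first agent notebook
---

## Markdown

Notes and curriculum text go here.

## Prompt

Write a JavaScript function that adds two numbers.
```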
Privacy
Prompts and context go only to the LLM you configure. Do not put secrets in .aanb files you plan to share. This extension does not send telemetry to the publisher; details: PRIVACY.md.
Quick start
- Install — from Releases, download the latest agent-notebook-*.vsix, then VS Code → Extensions → … → Install from VSIX…. Reload the window once.
- New notebook — Command Palette: AI Agent Notebook: New AI Agent Notebook (새 AI Agent Notebook; Untitled .aanb). You get a Jupyter-style layout: a Markdown explanation cell and a Prompt cell. Saved .aanb files that contain no cells open with the same starter.
- Open a notebook — Command Palette: AI Agent Notebook: Open Sample Notebook (샘플 노트북 열기), or open samples/simple-calculator/calculator.aanb. If it opens as plain text, run AI Agent Notebook: Open .aanb as Notebook (.aanb를 노트북으로 열기) or right-click the file in Explorer.
- Configure LLM — Settings → agentNotebook (Ollama URL; API keys for OpenAI / Codex; optional client.baseUrl + client.apiKey for LM Studio–style servers; Claude; Gemini).
- Run cells — use the cell Run button or the notebook toolbar Run All Cells (전체 셀 실행). The toolbar also has + Markdown / + Prompt / + Code, Clear All Outputs (모든 출력 지우기; clears outputs on every cell), and Clean Run Results (실행 결과 정리; removes Output-type cells and clears Prompt/Code/Markdown outputs; asks for confirmation in a dialog).
- Teaching — Command Palette → Open Vibe Coding Education Guide (바이브 코딩 교육 가이드 열기).
- Templates — File → New File → Start AI Agent Notebook from Template… (Template으로 AI Agent Notebook 시작하기…; also in the Command Palette under the same name). After saving, the guidance notice disappears automatically after 3 seconds or can be dismissed immediately with Confirm.
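As a concrete example of the Configure LLM step, a local Ollama setup might look like this in settings.json. The keys come from the settings summary below; the model name and port are placeholders for your own setup:

```json
{
  "agentNotebook.defaultLLM.provider": "ollama",
  "agentNotebook.defaultLLM.model": "llama3",
  "agentNotebook.ollama.baseUrl": "http://localhost:11434",
  "agentNotebook.streamNotebookOutput": true
}
```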
Settings (summary)
| Key | Purpose |
| --- | --- |
| agentNotebook.defaultLLM.provider / model | Default provider and model |
| agentNotebook.ollama.baseUrl | Ollama base URL |
| agentNotebook.openai.apiKey | OpenAI API key |
| agentNotebook.claude.apiKey / claude.baseUrl | Anthropic Claude API key; optional custom API base (see setting description) |
| agentNotebook.gemini.apiKey / gemini.baseUrl | Google Gemini (AI Studio) API key; optional API root URL |
| agentNotebook.codex.apiKey / codex.baseUrl | OpenAI-platform key for the Codex provider (same /v1/chat/completions as OpenAI; optional custom base URL) |
| agentNotebook.client.baseUrl / client.apiKey | Client provider: required OpenAI-compatible API root (…/v1); optional Bearer token |
| agentNotebook.streamNotebookOutput | Stream LLM text into the cell while generating |
| agentNotebook.streamProgressIntervalMs | Minimum ms between streaming UI updates |
| agentNotebook.streamChatOutput | Stream @AI-Agent Chat replies |
| agentNotebook.streamChatProgressIntervalMs | Throttle for Chat streaming updates |
| agentNotebook.showWelcomeOnActivate | One-time welcome after first activation |
| agentNotebook.promptContextIncludePriorOutputs | Include prior cells' last run outputs in each Prompt (sequential build) |
| agentNotebook.promptContextBudgetChars / promptContextMaxOutputCharsPerCell | Size limits for that transcript |
| agentNotebook.promptContextUserMessageMaxChars | Cap for the notebook block inside the LLM user message |
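The promptContext* size settings interact roughly as follows: each prior cell's last output is truncated to the per-cell cap, and cells are appended until the total budget runs out. This is an illustrative sketch under those assumptions, not the extension's actual implementation:

```javascript
// Illustrative only: how a per-cell cap and a total character budget could combine.
function buildPromptContext(cells, budgetChars, maxCharsPerCell) {
  const parts = [];
  let used = 0;
  for (const cell of cells) {
    // Truncate each prior output to the per-cell cap.
    const out = (cell.lastOutput || "").slice(0, maxCharsPerCell);
    if (used + out.length > budgetChars) break; // total budget exhausted
    parts.push(out);
    used += out.length;
  }
  return parts.join("\n");
}
```

Raising promptContextBudgetChars admits more prior cells; raising promptContextMaxOutputCharsPerCell keeps more of each one.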
Azure OpenAI (Chat Completions–compatible): set defaultLLM.provider to client, point client.baseUrl at your resource (including the API-version segment and /deployments/<deployment-id> as required by Microsoft's REST shape), set defaultLLM.model to the deployment name, and set client.apiKey to the API key. Paths differ from https://api.openai.com/v1; adjust baseUrl until POST …/chat/completions matches your endpoint.
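Put into settings.json terms, an Azure configuration might look like the following. The resource name, deployment name, and key are placeholders, and the exact URL shape (including any api-version segment) should be verified against Microsoft's REST documentation:

```json
{
  "agentNotebook.defaultLLM.provider": "client",
  "agentNotebook.defaultLLM.model": "my-gpt4o-deployment",
  "agentNotebook.client.baseUrl": "https://<resource>.openai.azure.com/openai/deployments/my-gpt4o-deployment",
  "agentNotebook.client.apiKey": "<azure-api-key>"
}
```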
Build & version
- Shipped artifact — a single bundled out/extension.js (esbuild). Runtime dependencies in package.json are empty; axios and js-yaml are compiled into the bundle.
- npm run compile — increments the semver patch in VERSION and package.json (build counter: 0.1.n → 0.1.n+1), then runs esbuild.
- npm run bundle — esbuild only; no version bump (used by tests, vscode:prepublish, and CI).
- npm run watch — esbuild watch mode.
- npm run typecheck — tsc --noEmit.
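The bump that npm run compile performs can be pictured with a small sketch. This is not the repository's actual script; bumpPatch and the file handling are hypothetical:

```javascript
// Hypothetical sketch of the patch bump npm run compile performs:
// 0.1.n -> 0.1.n+1, written back to VERSION and package.json.
function bumpPatch(version) {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

// e.g. (illustrative): read package.json, bump, write back.
// const pkg = JSON.parse(fs.readFileSync("package.json", "utf8"));
// pkg.version = bumpPatch(pkg.version);
// fs.writeFileSync("VERSION", pkg.version + "\n");
```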
Windows release script
From the repo root, release.bat runs npm ci → compile (bump + bundle) → test:coverage → packages dist\agent-notebook-<version>.vsix → creates or updates a GitHub release with gh when authenticated (gh auth login or GH_TOKEN).
CI
GitHub Actions (.github/workflows/ci.yml): npm ci → typecheck → lint → bundle → vsce package sanity → pytest gate (tests_python, runs Jest + line coverage check). If that job fails on a push to main in yskwon-wayforyou/agent_notebook, a follow-up job files a CI … failed GitHub Issue, posts a comment with steps to download the ci-failure-debug-logs artifact (workflow log ZIP + failure-context.txt), and uploads that artifact on the same run. To verify issue automation without breaking the build, run Actions → GitHub issue automation smoke (github-issue-automation-smoke.yml). Tag push v* triggers Release workflow (bundle, tests, VSIX, release notes).
Extension E2E (slow, downloads VS Code): run manually via Actions → E2E → Run workflow (.github/workflows/e2e.yml). Same as locally: npm run test:e2e.
Pytest gate (venv)
```
python -m venv .venv
. .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements-ci.txt
pytest tests_python -q
```
Development
```
npm install
npm run typecheck
npm run lint
npm test               # Jest (pretest: typecheck + bundle, no version bump)
npm run test:coverage  # Jest + coverage thresholds
npm run test:e2e       # extension host smoke test (bundle + @vscode/test-electron; downloads VS Code on first run)
```
Install this checkout into Cursor / VS Code on your machine (runs tests, builds the VSIX, installs with --force, then reload the window):
```
npm run apply:local
```
Requires cursor and/or code on your PATH. More on VSIX, optional skips, and releases: PUBLISHING.md; history: CHANGELOG.md.
License
MIT — see LICENSE.
Author
yskwon — publisher wayforyou — GitHub