# Learning Extension
Turn your AI chat conversations into structured tutorials and learning materials — automatically.
## Features
### Generate from Chat
Select any recent chat session from the sidebar, pick specific questions or use the entire conversation, choose a difficulty level (Beginner, Intermediate, or Advanced), and generate a polished tutorial in one click.
### Batch Generate
Combine multiple chat sessions from a configurable time window into separate tutorials at once. You choose how far back to look and what difficulty level to use.
### Scheduled Auto-Generation
Set up multiple schedules — like phone alarms — to automatically generate tutorials at specific times of day. Each schedule has:
- Time — when to run (24-hour format, e.g. `18:00`)
- Lookback window — how many hours of chat history to consider
- Difficulty level — Beginner, Intermediate, or Advanced
Each chat within the lookback window gets its own tutorial. Schedules are off by default and only run while the IDE is open.
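As a sketch, a schedules entry in `settings.json` might look like the following. The field names (`time`, `lookbackHours`, `level`) are assumptions based on the description above, not confirmed property names:

```json
{
  "learningExtension.schedules": [
    { "time": "18:00", "lookbackHours": 24, "level": "intermediate" },
    { "time": "08:30", "lookbackHours": 12, "level": "beginner" }
  ]
}
```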
### Chat Detail View
Click on any chat in the Available Chats tree to open a detail view where you can:
- See all the questions you asked in that session
- Select specific questions to include in the tutorial
- Choose the difficulty level before generating
### Multiple LLM Providers
| Provider | Setup |
| --- | --- |
| VS Code LM (default) | Works out of the box with Copilot or Cursor's built-in models |
| OpenAI | Run `Learning: Set API Key`, select OpenAI, paste your key |
| Anthropic | Run `Learning: Set API Key`, select Anthropic, paste your key |
| Google Gemini | Run `Learning: Set API Key`, select Gemini, paste your key |
| Ollama | No key needed — just have Ollama running locally |
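For example, switching to a local Ollama model could be done in `settings.json`. This is a sketch: the provider value `"ollama"` is an assumed enum value (the documented default is `vscode-lm`):

```json
{
  "learningExtension.llmProvider": "ollama",
  "learningExtension.ollamaModel": "llama3",
  "learningExtension.ollamaBaseUrl": "http://localhost:11434"
}
```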
### Tutorial Difficulty Levels
Every generation mode supports three levels:
- Beginner — simplified explanations, step-by-step walkthroughs, minimal jargon
- Intermediate — balanced depth with practical examples and best practices
- Advanced — in-depth analysis, edge cases, performance considerations, and architectural patterns
### @learn Chat Participant
Type `@learn` in the VS Code chat panel followed by any question:

```
@learn How do JavaScript closures work?
```
The extension will:
- Answer your question using the available language model, streaming the response in the chat
- Automatically generate a structured tutorial from the Q&A
- Save it as `learn-how-do-javascript-closures-work.md` in your output folder
- Show it in the Generated Materials panel
This works in VS Code when Copilot or another language model is active. No extra steps needed — ask and learn.
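The saved filename follows a slug pattern derived from the question. A hypothetical sketch of that transformation — this is illustrative only, not the extension's actual code:

```typescript
// Hypothetical sketch: turn a chat question into a tutorial filename slug.
function toTutorialFilename(question: string): string {
  const slug = question
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `learn-${slug}.md`;
}

console.log(toTutorialFilename("How do JavaScript closures work?"));
// → learn-how-do-javascript-closures-work.md
```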
### Chat Source Detection
The extension automatically discovers your AI chat history from multiple sources:
| Source | How it works |
| --- | --- |
| Cursor Transcripts | Reads agent transcripts from all your Cursor projects (only when running in Cursor) |
| GitHub Copilot Chat | Reads Copilot chat session files directly from VS Code's workspace storage |
| @learn Chat Participant | Ask a question in VS Code's chat panel — it gets answered and turned into a tutorial automatically |
The Actions panel shows a live count of detected chats per source (e.g. "5 Cursor + 8 Copilot chats").
### Status Bar Notifications
- A flashing book icon appears in the status bar after new chat activity, prompting you to generate learning content
- A spinning indicator shows when a tutorial is being generated
- A checkmark briefly confirms when generation is complete
### Sidebar Panels
The Learning Materials activity bar panel has three sections:
- Actions — Quick buttons for Generate from Chat, Batch Generate, schedule management, and settings
- Available Chats — Tree view of all detected chat sessions across sources
- Generated Materials — Tree view of all your generated tutorials with inline rename and delete
### Rename & Delete
Right-click (or use the inline icons) on any generated tutorial to rename or delete it directly from the sidebar.
### Cancellable Generation
Every generation task shows a progress notification with a cancel button. Aborting stops the LLM request immediately.
### Visual Settings Page
Run `Learning: Open Settings` to open a dedicated webview where you can configure your LLM provider, model, API keys, and output folder — all in one place.
## Getting Started
1. Install the extension from the marketplace
2. Open the Learning Materials panel in the activity bar (book icon)
3. Configure your LLM provider via `Learning: Open Settings` or the gear icon
## Settings
| Setting | Default | Description |
| --- | --- | --- |
| `learningExtension.llmProvider` | `vscode-lm` | LLM provider to use |
| `learningExtension.openaiModel` | `gpt-4o` | OpenAI model name |
| `learningExtension.anthropicModel` | `claude-sonnet-4-20250514` | Anthropic model name |
| `learningExtension.geminiModel` | `gemini-2.0-flash` | Google Gemini model name |
| `learningExtension.ollamaModel` | `llama3` | Ollama model name |
| `learningExtension.ollamaBaseUrl` | `http://localhost:11434` | Ollama server URL |
| `learningExtension.outputFolder` | `.learning` | Output folder for generated materials |
| `learningExtension.schedules` | `[]` | Auto-generation schedules (array of time + lookback + level) |
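Putting a few of these together, a typical `settings.json` fragment might look like this. It is a sketch: `"openai"` as the provider value is an assumption inferred from the provider table, not a documented enum value:

```json
{
  "learningExtension.llmProvider": "openai",
  "learningExtension.openaiModel": "gpt-4o",
  "learningExtension.outputFolder": ".learning"
}
```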
## Output Format
Each generated file is a Markdown document with YAML frontmatter:
```markdown
---
title: "Understanding TypeScript Generics"
generated: 2026-02-21T14:30:00.000Z
mode: chat
level: intermediate
sources: ["abc-123"]
---

# Understanding TypeScript Generics

## Overview
...

## Core Concepts
...

## Step-by-Step Guide
...

## Key Takeaways
...

## Further Reading
...
```
## License
MIT