# MD Chat

Chat with AI directly inside Markdown files. No side panels, no separate apps. Just write.

## Why MD Chat?
## Quick Start

**Option A: Use a CLI provider (no API key needed)**

**Option B: Use an API provider**
That's it.
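A chat file is plain Markdown. Assuming the template uses `## User` / `## Assistant` headings (as the Fork Chat section suggests), a minimal session might look like this sketch:

```markdown
## User

What does YAML frontmatter do in a Markdown file?
```

Press Ctrl+Enter and the response streams in below as an `## Assistant` section.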
## Providers

**Fallback chain**: if the default provider is unavailable, MD Chat automatically tries the next one in the chain.

**Per-file override**: control behavior per file with YAML frontmatter:
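A sketch of what such a frontmatter block could look like. The `provider` and `model` key names are assumptions inferred from the settings below, not confirmed keys:

```yaml
---
provider: openai
model: gpt-4o-mini
---
```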
## Commands

| Command | Shortcut | Description |
|---|---|---|
| Send to AI | Ctrl+Enter | Send the conversation to the AI provider |
| Cancel Generation | Escape | Stop the streaming response |
| Extract Code Block | Ctrl+Shift+E | Save the code block at cursor to a file |
| Fork Chat | Ctrl+Shift+F | Branch the conversation into a new file |
| New Chat File | — | Create a new `.md` chat from template |
| Select Provider | — | Switch between providers |
| Select Model | — | Set model for the current file |
| Reset Session | — | Clear the session to start a fresh conversation |
### Extract Code Block

When the AI gives you a code block, press Ctrl+Shift+E to save it to a file. The file extension is auto-detected from the language label (e.g. `` ```typescript `` → `.ts`).
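A label-to-extension lookup like the one described could be sketched as follows. The mapping entries and the `.txt` fallback are illustrative assumptions, not MD Chat's actual table:

```typescript
// Sketch: map a fenced-code language label to a file extension.
const EXT_BY_LANG: Record<string, string> = {
  typescript: ".ts",
  javascript: ".js",
  python: ".py",
  rust: ".rs",
  go: ".go",
  markdown: ".md",
};

function extForLang(label: string): string {
  // Normalize the label; fall back to .txt for unknown languages.
  return EXT_BY_LANG[label.trim().toLowerCase()] ?? ".txt";
}
```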
### Fork Chat

Press Ctrl+Shift+F to branch the conversation at the current position. This creates a new file with the history up to that point, clears the session, and adds an empty `## User` section, ready for a different direction.
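For instance, forking mid-conversation might produce a new file ending like this sketch (section headings assumed from the description above; the content is illustrative):

```markdown
## User

How do I parse JSON safely?

## Assistant

Wrap `JSON.parse` in a try/catch ...

## User

```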
## Settings

### General
| Setting | Default | Description |
|---|---|---|
| `mdChat.defaultProvider` | `claudeCli` | Default provider |
| `mdChat.fallbackProviders` | `["codexCli"]` | Fallback provider chain |
| `mdChat.showMetadata` | `true` | Show provider/model/time after each response |
| `mdChat.showThinking` | `true` | Show thinking/reasoning trace blocks when the provider emits them |
| `mdChat.showToolUse` | `true` | Show tool-use trace blocks when the provider emits them |
| `mdChat.streamUpdateInterval` | `80` | Streaming flush interval (ms) |
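For example, switching the default provider and fallback chain in your `settings.json` (the values here are illustrative, not recommendations):

```jsonc
{
  "mdChat.defaultProvider": "openai",
  "mdChat.fallbackProviders": ["claudeCli", "codexCli"],
  "mdChat.streamUpdateInterval": 50
}
```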
### CLI Providers

| Setting | Default | Description |
|---|---|---|
| `mdChat.providers.claudeCli.command` | `claude` | Path to Claude CLI |
| `mdChat.providers.claudeCli.defaultModel` | `sonnet` | Default Claude model |
| `mdChat.providers.codexCli.command` | `codex` | Path to Codex CLI |
| `mdChat.providers.codexCli.defaultModel` | — | Default Codex model |
### API Providers

| Setting | Default | Description |
|---|---|---|
| `mdChat.providers.openai.apiKey` | — | OpenAI API key |
| `mdChat.providers.openai.baseUrl` | `https://api.openai.com/v1` | OpenAI endpoint |
| `mdChat.providers.openai.defaultModel` | `gpt-4o-mini` | Default OpenAI model |
| `mdChat.providers.anthropic.apiKey` | — | Anthropic API key |
| `mdChat.providers.anthropic.baseUrl` | `https://api.anthropic.com/v1` | Anthropic endpoint |
| `mdChat.providers.anthropic.defaultModel` | `claude-sonnet-4-6` | Default Anthropic model |
| `mdChat.providers.openrouter.apiKey` | — | OpenRouter API key |
| `mdChat.providers.openrouter.baseUrl` | `https://openrouter.ai/api/v1` | OpenRouter endpoint |
| `mdChat.providers.openrouter.defaultModel` | `openai/gpt-4o-mini` | Default OpenRouter model |
| `mdChat.providers.ollama.baseUrl` | `http://127.0.0.1:11434/v1` | Ollama endpoint |
| `mdChat.providers.ollama.defaultModel` | `qwen2.5:7b` | Default Ollama model |
## Requirements

- VS Code 1.85+
- At least one of:
  - Claude Code (`claude` on PATH)
  - Codex CLI (`codex` on PATH)
  - An API key for OpenAI, Anthropic, or OpenRouter
  - Ollama running locally
## License

MIT