Convo-VSCode
VSCode tools for Convo-Lang

Learn Convo-Lang with Live Demos - https://learn.convo-lang.ai
GitHub - https://github.com/convo-lang/convo-lang
VSCode Marketplace - https://marketplace.visualstudio.com/items?itemName=iyio.convo-lang-tools
NPM - https://www.npmjs.com/package/@convo-lang/convo-lang-tools
What is Convo-VSCode?
Convo-VSCode is the VSCode extension for working with Convo-Lang directly inside the editor.
It provides syntax highlighting, inline and file-based convo execution, code block tooling, Convo-Make integration, markdown preview support, and reviewable AI-generated file and shell outputs.
The extension is designed to keep AI workflows inside normal source files so prompts, generated outputs, script execution, and project scaffolding remain visible and reviewable.
What you can do with the extension
- Author and run .convo files directly in VSCode
- Execute selected embedded Convo snippets from Markdown, JavaScript, TypeScript, and Python
- Review generated FILE_CONTENT blocks before writing files
- Run RUNNABLE_SCRIPT blocks and append SCRIPT_OUTPUT back into the conversation
- Parse, flatten, and convert conversations; inspect messages and vars
- Build and manage Convo-Make targets from the editor
- Use imports to create reusable file-based agent harnesses
- Work with the same runtime model used by the Convo CLI and core libraries
Install
Install the extension from the VSCode Marketplace:
https://marketplace.visualstudio.com/items?itemName=iyio.convo-lang-tools
After installing, configure an OpenAI, OpenRouter, AWS Bedrock, or compatible API key in VSCode settings if you want to run prompts directly in the editor.
Common settings include:
convo.defaultModel
convo.openAiBaseUrl
convo.openAiApiKey
convo.openRouterBaseUrl
convo.openRouterApiKey
convo.awsBedrockProfile
convo.awsBedrockApiKey
convo.awsBedrockRegion
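As a minimal sketch, a User Settings (JSON) entry using two of the keys above might look like the following (the model name and key value are placeholders, not recommendations):

{
  "convo.defaultModel": "gpt-4o",
  "convo.openAiApiKey": "sk-___YOUR_KEY___"
}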

Quick Start
1. Create a .convo file
> system
You are a concise engineering assistant.
> user
Summarize the tradeoffs between REST and GraphQL.
2. Run the conversation
Use one of the following:
- Click the play button in the top-right of the editor
- Run the Complete Convo Conversation command
- Press Cmd+R on macOS or Ctrl+R on Windows/Linux
3. Continue iterating in-place
The extension appends assistant output directly into the file so you can keep a readable conversation history in source control.
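For illustration, after running the Quick Start example the file might look something like the following. The assistant text here is a hypothetical sample, not guaranteed model output:

> system
You are a concise engineering assistant.

> user
Summarize the tradeoffs between REST and GraphQL.

> assistant
REST is simpler to cache and reason about; GraphQL avoids over-fetching at the cost of added server-side complexity.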
4. Inspect runtime details when needed
Useful commands include:
Parse Convo
Flatten Convo
Flat Message Objects
Message Objects
Vars
Convert Convo
List Convo Models
Main Features
Syntax highlighting
The extension provides Convo-Lang syntax highlighting for:
.convo
.convo-make
.convo-make-target
It also supports embedded Convo syntax in:
- JavaScript
- TypeScript
- JSX / TSX
- Python
- Markdown
Run conversations in the editor
You can run full convo files or selected embedded convo snippets directly from VSCode.
This is useful for:
- prompt iteration
- testing tools and functions
- inspecting agent behavior
- prototyping task harnesses
- continuing durable conversations stored in files
Parse, flatten, inspect, and convert
The extension exposes commands for runtime inspection and debugging, including:
- parse source
- flatten evaluated conversations
- inspect message objects
- inspect evaluated vars
- convert convo to provider-native request formats
- list known models
- inspect modules
These tools are especially useful when debugging imports, graph behavior, transforms, and tool exposure.
Import-aware authoring
Convo-VSCode supports path autocomplete and clickable links for imports.
That makes it easy to build reusable file-based harnesses such as:
- shared system prompts
- reusable tools
- policy files
- review rules
- task-specific wrappers
Example:
@import ./shared/system-prompt.convo
@import ./shared/tools.convo
> define
targetFile="./src/app.ts"
> user
Review and improve {{targetFile}} and return a FILE_CONTENT block.
Code Blocks view
The extension parses recognized output blocks from convo files and shows them in a dedicated Code Blocks view.
This helps you:
- inspect generated file outputs
- open related messages
- diff generated changes
- copy generated content
- run scripts
- apply all actions from a message
Output block code lenses
Recognized output blocks get code lenses with actions such as:
- Open Output
- Write Output
- Open Diff
- Copy Output
- Run Script
- Run Script and Complete
Working with output blocks
A major strength of the extension is that generated outputs remain reviewable before you apply them.
File outputs
When a convo response includes a FILE_CONTENT block, the extension can open, diff, copy, or write the proposed file.
Example:
(Note: file blocks actually use triple backticks; the examples below show only two due to Markdown formatting limitations.)
<FILE_CONTENT name="example.ts" target-output-path="./src/example.ts">
`` ts
export const value='hello';
``
</FILE_CONTENT>
This keeps AI-assisted editing review-first instead of write-first.
Runnable shell scripts
The extension can also execute RUNNABLE_SCRIPT blocks.
Example:
<RUNNABLE_SCRIPT script-name="list-project" cwd="." target-shell-type="bash">
`` bash
# list project files
ls -la
``
</RUNNABLE_SCRIPT>
When run, stdout and stderr are appended back into the conversation as a structured SCRIPT_OUTPUT block. You can then continue the conversation with the result in context.
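Conceptually, the appended result resembles something like the following. This is a hypothetical sketch; the block's exact attributes and layout are best confirmed by running a script yourself:

<SCRIPT_OUTPUT script-name="list-project">
`` txt
total 24
drwxr-xr-x  5 user  staff  160 src
``
</SCRIPT_OUTPUT>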
This is useful for:
- project inspection
- guided refactoring
- environment checks
- generation plus validation loops
Convo-Make in VSCode
Convo-VSCode includes support for Convo-Make workflows.
From the editor you can:
- inspect make targets
- build or rebuild a target
- review-build a target
- sync outputs
- open associated target convo files
- delete generated outputs
- stop active builds
This makes the extension a practical front end for AI-powered scaffolding and build-like project generation flows.
Model Support
OpenAI
https://platform.openai.com/docs/models
- gpt-5
- gpt-5-mini
- gpt-5-nano
- gpt-4.1
- gpt-4
- gpt-4-turbo
- gpt-4o
- gpt-4o-mini
- o4-mini-deep-research
- o4-mini
- o3
- o3-pro
- o3-mini
- o3-deep-research
- o1-mini
- o1-preview
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
Local LLMs & OpenAI Compatible
Any OpenAI chat-completions-compatible API can be used with Convo-Lang, including locally hosted LLMs.
OpenRouter
Convo-Lang can be used with OpenRouter's 400+ models - https://openrouter.ai/models
AWS Bedrock
https://aws.amazon.com/bedrock/
- us.amazon.nova-lite-v1:0
- us.amazon.nova-micro-v1:0
- us.amazon.nova-pro-v1:0
- us.anthropic.claude-3-5-haiku-20241022-v1:0
- us.anthropic.claude-3-5-sonnet-20240620-v1:0
- us.anthropic.claude-3-5-sonnet-20241022-v2:0
- us.anthropic.claude-3-7-sonnet-20250219-v1:0
- us.anthropic.claude-3-haiku-20240307-v1:0
- us.anthropic.claude-opus-4-20250514-v1:0
- us.anthropic.claude-sonnet-4-20250514-v1:0
- us.deepseek.r1-v1:0
- us.meta.llama3-1-70b-instruct-v1:0
- us.meta.llama3-1-8b-instruct-v1:0
- us.meta.llama3-2-11b-instruct-v1:0
- us.meta.llama3-2-1b-instruct-v1:0
- us.meta.llama3-2-3b-instruct-v1:0
- us.meta.llama3-2-90b-instruct-v1:0
- us.meta.llama3-3-70b-instruct-v1:0
- us.meta.llama4-maverick-17b-instruct-v1:0
- us.meta.llama4-scout-17b-instruct-v1:0
- us.mistral.pixtral-large-2502-v1:0
Using Convo-Lang in JavaScript
Convo-VSCode is built for authoring and running Convo-Lang in the editor, but many users also use the core package directly in application code.
Install Convo-Lang packages:
npm install @convo-lang/convo-lang
// example.mjs
import { convo, initOpenAiBackend } from '@convo-lang/convo-lang';
initOpenAiBackend();
const planets=await convo`
> system
You are a super smart and funny astronomer that loves to make funny quotes
> define
Planet = struct(
name: string
moonCount: number
quote: string
)
@json Planet[]
> user
List the planets in our solar system
`;
console.log(planets);
# Set the OPENAI_API_KEY env var however you see fit
export OPENAI_API_KEY=sk-___YOUR_KEY___
node example.mjs
The VSCode extension also provides syntax highlighting for Convo-Lang embedded in JavaScript and TypeScript template literals tagged with convo.
Using the CLI
The Convo CLI can be used alongside the VSCode extension to execute convo scripts from the command line.
Install the CLI:
npm i -g @convo-lang/convo-lang-cli
Basic usage:
# Results will be printed to stdout
convo talky-time.convo
# Results will be written to a new file named something.convo
convo talky-time.convo --out something.convo
# Result will be written to the source input file for a continuous conversation
convo talky-time.convo --out .
Useful CLI features for extension users include:
- running the same
.convo files outside VSCode
- converting or parsing conversations in scripts
- listing supported models
- using Convo-Make in automation
- keeping local editor workflows aligned with CI or shell workflows
CLI Arguments
| argument | multi | description |
|---|---|---|
| --config | N | ConvoCliConfig object or path to a ConvoCliConfig file |
| --inline-config | N | Inline configuration as JSON |
| --source | N | Path to a source convo file |
| --stdin | N | If present, the source will be read from stdin |
| --inline | N | Inline convo code |
| --source-path | N | Used to set or overwrite the source path of executed files |
| --cmd-mode | N | If present, the CLI operates in command mode for function calling via stdin/stdout |
| --repl | N | If present, the CLI enters REPL mode for chat |
| --prefix-output | N | If present, each output line is prefixed to indicate its relation |
| --print-state | N | If present, prints the shared variable state |
| --print-flat | N | If present, prints the flattened messages |
| --print-messages | N | If present, prints the messages |
| --parse | N | If present, parses convo code and outputs it as JSON instead of executing |
| --parse-format | N | JSON formatting used if the parse option is present |
| --convert | N | If present, converts input to the target LLM format and writes it as output |
| --out | N | Function or path for output; if ".", writes to the source path |
| --buffer-output | N | If present, buffers executor output for later use |
| --allow-exec | N | Controls shell command execution permissions |
| --prepend | N | Conversation content to prepend to the source |
| --exe-cwd | N | Current working directory for context execution |
| --sync-ts-config | Y | Path(s) to tsconfig files for TypeScript project synchronization |
| --sync-watch | N | If present, updates TypeScript projects in real time during scanning |
| --sync-out | N | Directory for generated sync output files |
| --spawn | N | Command line to run in parallel with actions like sync watching |
| --spawn-dir | N | Directory where the spawn command runs |
| --create-next-app | N | If present, creates a new Next.js app using the template |
| --create-app-dir | N | Directory where apps will be created |
| --create-app-working-dir | N | Directory where the create app command will be run |
| --list-models | N | If present, lists all known models as JSON |
| --var | Y | Adds a named variable that can be used by executed convo-lang. To use spaces and other special characters, enclose the variable name and value in double or single quotes. By default variables are strings, but a colon followed by a type can be used to set the variable's type. Variables with dots in their name can be used to override deeply nested values in objects loaded using the --vars or --vars-path options. Vars that don't assign a value are given a value of boolean true. Variables are assigned in the following order: --vars-path, --vars, --var |
| --vars | Y | A JSON object containing variables that can be used by executed convo-lang |
| --vars-path | Y | Path to a JSON or .env file that defines variables that can be used by executed convo-lang. Variables in .env files follow the same rules as vars defined by the --var argument, allowing them to use types and nested value assignment names |
| --make | N | If present, make targets will be built |
| --make-targets | N | If present, make targets will be printed as JSON |
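As a hypothetical illustration of the --var forms described above (a plain string value, a typed value, and a bare flag that becomes boolean true; the file and variable names here are examples, not real arguments of any project):

# string var, typed var, and a bare boolean flag
convo chat.convo --var name=Ricky --var "maxItems:number=3" --var verbose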
CLI configuration
To allow the Convo CLI and VSCode extension to access OpenAI and other model providers, configure the appropriate VSCode settings and optionally create a JSON file at ~/.config/convo/convo.json for CLI usage.
Example CLI config:
{
"env":{
"openAiApiKey":"{API key for using OpenAI models}",
"awsBedrockApiKey":"{API key for using AWS Bedrock models}"
},
"defaultModel":"{Default LLM model to use - gpt-5, gpt-4.1, claude, llama, deepseek, etc}"
}
Learn More
Email - doc@convo-lang.ai
Join our sub Reddit - https://www.reddit.com/r/ConvoLang/
Join our Discord Server - https://discord.gg/GyXp8Dsa
X - https://x.com/ConvoLang
Change Log
See repository history and releases for the latest extension changes.