# LOCO VS Code Extension (MCP-ready)
This extension demonstrates a low-code AI workflow in VS Code:
- Generate Python/C++ stubs from natural language (local mock provider)
- Inline generation in the editor via a trigger like `lc:`
  - Tip: type `lc: class MyNode base=myframework.base.NodeBase params=threshold:float=0.5:阈值` to inline-generate a class
- Built-in HTTP endpoint for demos (`/generate`) — see the request sketch below
- Embedded MCP stdio server for tools like Cursor
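The `/generate` endpoint can be exercised from any HTTP client. A minimal sketch in TypeScript (Node 18+ global `fetch`), assuming a JSON body of `{ prompt, language }` and a `{ code }` response — check the extension's handler for the actual schema:

```typescript
// Call the extension's demo endpoint on the default port (loco.mcp.port = 8787).
// The request/response shape used here is an assumption for illustration.
async function demoGenerate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8787/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, language: "python" }),
  });
  if (!res.ok) throw new Error(`generate failed: HTTP ${res.status}`);
  const data = (await res.json()) as { code?: string };
  return data.code ?? "";
}

demoGenerate("parse a JSON config file").then(console.log).catch(console.error);
```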
No external MCP server is required anymore. The previous `mcp-server/` folder has been removed; everything is embedded in the extension.
## Structure
- Root — VS Code extension (TypeScript). Build with `npm run compile`.
## Quick start (Windows PowerShell)
- Install dependencies and run tests:

  ```powershell
  cd "d:\1.speedcode\gitlab\kunwu\loco_vscode_extension_mcp"; npm install; npm run test
  ```

- Press F5 in VS Code to launch the Extension Development Host.
Try these commands in the Command Palette:
- "LOCO: 生成拓展代码" — Prompt for a description and open the generated code
- "LOCO: 使用选区生成并插入" — Use the selected text as the prompt and insert the generated code
- "LOCO: 切换 Inline 设置" — Toggle inline generation and the backend (local/MCP/Tongyi)
- "LOCO: 复制 Cursor stdio 配置(绝对路径)" — Copy a ready-to-paste MCP stdio config with an absolute path to the embedded server
- Inline generation: in any editor, type a trigger like `# lc: parse JSON` (or `// lc: parse args`) and accept the suggestion. A minimal provider sketch follows this list.
- Class intent: start with `lc: class ...` (or include 类, "class") and add optional keys: `base=...`, `params=...`, `lang=python|cpp`.
- If `loco.inline.generateClass` is enabled (default true), descriptions without an explicit `def`/`function` will prefer generating a class.
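Under the hood, inline suggestions like these are typically served through VS Code's inline-completion API. A minimal sketch, with the trigger regex and `generateCode()` helper as hypothetical stand-ins for the extension's real parsing and backend routing:

```typescript
import * as vscode from "vscode";

// Sketch: watch for "# lc: ..." / "// lc: ..." at the cursor and offer the
// generated code as an inline completion the user can accept with Tab.
export function registerInlineTrigger(context: vscode.ExtensionContext) {
  const provider: vscode.InlineCompletionItemProvider = {
    async provideInlineCompletionItems(document, position) {
      const line = document.lineAt(position.line).text.slice(0, position.character);
      const match = /(?:#|\/\/)\s*lc:?\s*(.+)$/.exec(line);
      if (!match) return [];
      const prompt = match[1].trim();
      const code = await generateCode(prompt); // hypothetical backend call
      return [new vscode.InlineCompletionItem("\n" + code)];
    },
  };
  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider({ pattern: "**" }, provider)
  );
}

// Placeholder; the real extension routes this to the local/mcp/tongyi backends.
async function generateCode(prompt: string): Promise<string> {
  return `def generated():\n    """${prompt}"""\n    pass`;
}
```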
## Built-in services
- HTTP server exposing the demo `/generate` endpoint (controlled by `loco.mcp.enableBuiltInServer` / `loco.mcp.port` below)
- Embedded MCP stdio server for stdio clients such as Cursor

## Settings
- `loco.provider` — Default model provider label (mocked)
- `loco.language` — Default target language: `python` | `cpp`
- `loco.inline.enabled` — Enable/disable inline generation
- `loco.inline.trigger` — Trigger text, default `lc:`
- `loco.inline.backend` — Use `local` (mock), `mcp` (HTTP), or `tongyi`
- `loco.inline.generateClass` — Prefer generating a class for inline triggers (default true)
- `loco.tongyi.model` / `loco.tongyi.apiBase` — Tongyi model and optional base URL
  - Use the command "LOCO: 设置通义千问 API Key" to store your API key securely
- `loco.mcp.baseUrl` — Base URL for the HTTP backend (default `http://localhost:8787`)
- `loco.mcp.enableBuiltInServer` — Start the embedded HTTP server
- `loco.mcp.port` — HTTP server port
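Inside the extension, these keys would be read through the standard configuration API. A sketch (the fallback defaults passed to `get()` mirror this README and are otherwise assumptions):

```typescript
import * as vscode from "vscode";

// Read the "loco.*" settings documented above into a plain object.
function readLocoConfig() {
  const cfg = vscode.workspace.getConfiguration("loco");
  return {
    language: cfg.get<string>("language", "python"),
    inlineEnabled: cfg.get<boolean>("inline.enabled", true),
    trigger: cfg.get<string>("inline.trigger", "lc:"),
    backend: cfg.get<"local" | "mcp" | "tongyi">("inline.backend", "local"),
    generateClass: cfg.get<boolean>("inline.generateClass", true),
    mcpBaseUrl: cfg.get<string>("mcp.baseUrl", "http://localhost:8787"),
  };
}
```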
## Notes
- This is a demo using a local mock generator. You can wire real providers (e.g., Tongyi/Qwen/OpenAI) inside `src/generator.ts`; a hypothetical provider shape is sketched below.
- For Cursor MCP stdio mode, the copied config runs `node <absolute-path-to>/mcpStdioServer.js` directly and works on Windows/macOS/Linux.
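One way to structure such wiring — the `CodeProvider` interface, `TongyiProvider` class, endpoint path, and model name below are all illustrative assumptions, not the extension's actual types:

```typescript
// Hypothetical provider abstraction for src/generator.ts.
interface CodeProvider {
  generate(prompt: string, language: "python" | "cpp"): Promise<string>;
}

class TongyiProvider implements CodeProvider {
  constructor(private apiKey: string, private apiBase: string) {}

  async generate(prompt: string, language: "python" | "cpp"): Promise<string> {
    // Call the provider's chat/completions API and return the generated code.
    // The exact endpoint, payload, and response shape depend on the provider.
    const res = await fetch(`${this.apiBase}/chat/completions`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "qwen-coder", // assumed model name; see loco.tongyi.model
        messages: [{ role: "user", content: `Write ${language} code: ${prompt}` }],
      }),
    });
    const data = (await res.json()) as any;
    return data.choices?.[0]?.message?.content ?? "";
  }
}
```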
## Cursor-only distribution (single-file executable)
If you want to ship a standalone MCP server for Cursor users (no Node/VS Code required), you can build single-file executables using `pkg`:
- Build binaries (from the project root; a guess at the underlying `pkg` call follows the artifact list):

  ```powershell
  cd "d:\1.speedcode\gitlab\kunwu\loco_vscode_extension_mcp"; npm install; npm run build:stdio:exe
  ```

- Artifacts will be output to `../release/` as:
  - Windows: `loco-mcp-stdio-win.exe`
  - macOS: `loco-mcp-stdio-macos`
  - Linux: `loco-mcp-stdio-linux`
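The `build:stdio:exe` script is presumably a thin wrapper around a `pkg` invocation along these lines — the entry path, Node version, and target triples are assumptions; check `package.json` for the actual script:

```powershell
npx pkg dist/mcpStdioServer.js --targets node18-win-x64,node18-macos-x64,node18-linux-x64 --out-path ../release
```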
- Cursor config example (Windows):

  ```json
  {
    "mcpServers": {
      "loco-lowcode": {
        "command": "C:/Path/To/release/loco-mcp-stdio-win.exe",
        "args": [],
        "disabled": false,
        "description": "LOCO single-file MCP stdio server"
      }
    }
  }
  ```
- macOS/Linux users: point `command` to the corresponding binary in `release/`.
- Alternatively, publish as an npm CLI (`bin` pointing to `dist/mcpStdioServer.js`) and let users set `command: "loco-mcp-stdio"` with empty `args`; a minimal `package.json` sketch follows.
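For that npm CLI route, the manifest needs a `bin` entry along these lines (package name and version are placeholders, and `dist/mcpStdioServer.js` must start with a `#!/usr/bin/env node` shebang to be runnable as a command):

```json
{
  "name": "loco-mcp-stdio",
  "version": "0.1.0",
  "bin": { "loco-mcp-stdio": "dist/mcpStdioServer.js" }
}
```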
## Publish to VS Marketplace (brief)
- Prereqs: create a publisher and install `vsce`.
- Ensure `package.json` has the required fields (`name`, `displayName`, `publisher`, `version`, `engines.vscode`, `icon`).
- Build and package:

  ```powershell
  cd "d:\1.speedcode\gitlab\kunwu\loco_vscode_extension_mcp"; npm run compile; npx vsce package
  ```

- Publish:

  ```powershell
  npx vsce publish
  ```