Publisher: LabourLM Tech
LabourLM

LabourLM is a Visual Studio Code extension, written in TypeScript, that integrates large language model (LLM) APIs directly with your project files. It is designed for VS Code users who want to define, use, and share custom multi-step LLM API workflows with their team.

Features

  • LLM API Integration: Easily run large language model API calls on your project files inside VSCode.
  • Custom Workflows: Load external workflows defined in TypeScript from a Git URL.
  • Team Collaboration: Share and reuse complex LLM API workflows with your team.
  • Intuitive UI: Choose workflows from a dropdown, drag project files, run workflows, and view results all within VSCode.

VS Code Extension Installation

LabourLM is available on the Visual Studio Code Marketplace (link to be added). Search for "LabourLM" in the Extensions view and click Install.

Usage

  • Open VSCode and locate the LabourLM icon in the activity bar.
  • Click on the icon to open the LabourLM view.
  • Select your desired workflow from the dropdown menu — workflows are loaded from external Git URLs you or your team have defined.
  • Click "Chat" to execute the workflow. The extension will make necessary API calls in sequence.
  • View the results displayed at the bottom of the LabourLM view.

Configuration

The LabourLM extension has the following VSCode settings:

  • labourlm.openaiApiKey: Your OpenAI API key for GPT models
  • labourlm.geminiApiKey: Your Google Gemini API key for Gemini models
  • labourlm.anthropicApiKey: Your Anthropic API key for Claude models
  • labourlm.repositories: List of GitHub repository URLs containing workflow plugins
  • labourlm.singlecalltimeout: Chat request timeout in seconds (default: 120)
  • labourlm.skipLoadingSamples: Skip loading sample workflows during extension startup

You can configure these settings through VSCode's settings UI or settings.json.
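For example, a minimal settings.json fragment might look like the following. The API key and repository URL shown here are placeholders, not real values:

```json
{
  "labourlm.openaiApiKey": "sk-...",
  "labourlm.repositories": [
    "https://github.com/your-org/your-labourlm-workflows"
  ],
  "labourlm.singlecalltimeout": 120,
  "labourlm.skipLoadingSamples": false
}
```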

LabourLM Customisation

Using Existing Workflows

You can use publicly available workflows by adding their Git repository URLs to the labourlm.repositories setting in VSCode. The extension will automatically load all workflow classes found in the src/jobs directory of these repositories.

Developing Custom Workflows

To create your own workflow plugin:

  1. Create a new npm project:
mkdir my-labourlm-workflow
cd my-labourlm-workflow
npm init
  2. Set up the project structure:
mkdir src
mkdir src/jobs
  3. Install required dependencies:
npm install --save labourlm-common openai
npm install --save-dev typescript @types/node
  4. Create a tsconfig.json file:
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es2020",
    "lib": ["es2020"],
    "outDir": "out",
    "sourceMap": true,
    "strict": true,
    "rootDir": "src"
  },
  "exclude": ["node_modules", ".vscode-test"]
}
  5. Add build scripts to package.json:
{
  "scripts": {
    "compile": "tsc -p ./",
    "watch": "tsc -watch -p ./"
  }
}

Workflow Implementation

Create your workflow class in src/jobs/. Here are two implementation approaches:

Basic File Content Processing

This example demonstrates a simple workflow that processes file content with OpenAI:

import { ChatCompletionCreateParams } from "openai/resources/chat/completions";
import { ChatMessageParamBowl, IWorkflow, WorkflowField } from 'labourlm-common';

export default class BasicFileProcessor extends IWorkflow {
    private model: string = "gpt-4.1";
    private fileContent: string = "";
    private basePrompt: string = "Review the following content:";
    private description: string = "Process file content with OpenAI API and provide analysis.";

    constructor() {
        super();
    }

    /**
     * Returns a unique identifier for this workflow
     * This UUID helps the VS Code extension track and manage workflows
     * 
     * Generate your own UUID using:
     * - UUID generator: https://www.uuidgenerator.net/
     * - Node.js: npm install uuid && node -e "console.log(require('uuid').v4())"
     * 
     * @returns {string} A unique UUID v4 string
     */
    public getWorkflowID(): string {
        return "00000000-0000-0000-0000-000000000000";
    }

    public getName(): string { return "Basic File Processor"; }
    public getModel(): string { return this.model; }

    public getFields(): WorkflowField[] {
        return [
            new WorkflowField("description", 5, this.description, true, "Description"),
            new WorkflowField("fileContent", 3, "fileContent", true, "Source File"),
            new WorkflowField("basePrompt", 1, this.basePrompt, true, "Review Prompt")
        ];
    }

    public async handleAction(sessionId: string, data: any, chatMessageParamBowl: ChatMessageParamBowl, isResultReady: boolean): Promise<void> {
        // Check if OpenAI client is initialized
        if (this.openaiClient === null) {
            this.markErrorResult("OpenAI Library not initialized");
            return;
        }

        // Set up API parameters with message history and prompt
        const completionParams: ChatCompletionCreateParams = {
            model: this.model,
            messages: chatMessageParamBowl.getMessageParam(),
        };

        // Add user's base prompt with file content
        completionParams.messages.push({
            role: "user",
            content: `${this.basePrompt}\n\n${this.fileContent}`
        });

        try {
            // Make the API call
            const response = await this.openaiClient.chat.completions.create(completionParams);

            // Extract token usage from the response
            const promptTokens = response.usage?.prompt_tokens || 0;
            const completionTokens = response.usage?.completion_tokens || 0;

            // Mark the result as ready and refreshable
            this.markResultAndRefreshable(
                completionParams.model,
                response.choices[0].message,
                true,
                sessionId,
                chatMessageParamBowl,
                promptTokens,
                completionTokens
            );
        } catch (err) {
            this.markErrorResult(`OpenAI request failed: ${err}`);
        }
    }
}

Publishing Your Workflow

  1. Push your workflow to a Git repository
  2. Users can then add your repository URL to their labourlm.repositories setting

Field Types

Available field types for workflow inputs:

  • WorkflowType.richText (1): Multi-line text input
  • WorkflowType.text (2): Single-line text input
  • WorkflowType.singleFile (3): File selector
  • WorkflowType.choice (4): Dropdown/selection input
  • WorkflowType.readonlydescription (5): Read-only description field
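The numeric values above can be pictured as a TypeScript enum. This is a sketch reconstructed from the list, not the actual labourlm-common source:

```typescript
// Sketch of the WorkflowType values described above; the real enum
// is defined in labourlm-common, so treat this as illustrative only.
enum WorkflowType {
    richText = 1,            // multi-line text input
    text = 2,                // single-line text input
    singleFile = 3,          // file selector
    choice = 4,              // dropdown/selection input
    readonlydescription = 5, // read-only description field
}

// The numeric value is what the examples in this README pass as the
// second argument to WorkflowField, e.g. new WorkflowField("model", 4, ...).
console.log(WorkflowType.choice);
```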

Key Concepts

  • IWorkflow: Abstract base class for implementing workflow plugins
  • SingleFilePromptWorkFlow: Ready-to-use base class for simple file processing workflows. Provides:
    • Built-in OpenAI client setup
    • Pre-configured model selection
    • File content handling
    • Base prompt management
    • Automatic tool integration for the get_article function
    • Easy version and model management
  • WorkflowField: Represents configurable fields in your workflow
  • WorkflowResult: Handles the result state and output
  • ChatMessageParamBowl: Manages chat message history and state

Here's a minimal example using SingleFilePromptWorkFlow:

import { SingleFilePromptWorkFlow } from 'labourlm-common';

export default class SimpleCodeReviewer extends SingleFilePromptWorkFlow {
    constructor() {
        super();
        this.model = "gpt-4";
        this.basePrompt = `Review this code and suggest improvements:
1. Code structure
2. Performance
3. Best practices
Use bullet points for suggestions.`;
        this.acceptModels = ["gpt-4", "gpt-3.5-turbo"];
    }

    public getWorkflowID(): string {
        return "your-unique-uuid-here";
    }

    public getName(): string {
        return "Simple Code Reviewer";
    }
}

API Reference

IWorkflow Abstract Methods

All workflow plugins must extend the IWorkflow class and implement these abstract methods:

constructor

constructor () {
    super();
}

The constructor allows you to initialize custom fields and configurations for your workflow. Common customizations include:

  • Setting default model and acceptable model list
  • Defining base prompts and descriptions
  • Initializing workflow-specific parameters

Example:

acceptModels: string[] = ["gemini-2.0-flash", "gemini-2.5-flash-preview-04-17", "gemini-2.5-pro-preview-05-06"];
model: string = "gpt-4";
basePrompt: string = "Review this code...";

constructor() {
    super();
}

getWorkflowID

getWorkflowID(): string

Returns a unique UUID v4 identifier for the workflow, which the VS Code extension uses to track and manage workflows.

Generate your own UUID using:

  • UUID generator: https://www.uuidgenerator.net/
  • Node.js: npm install uuid && node -e "console.log(require('uuid').v4())"
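Alternatively, Node's built-in crypto module (Node 14.17+) can generate a v4 UUID without installing any package:

```typescript
import { randomUUID } from "node:crypto";

// randomUUID() returns an RFC 4122 version 4 UUID string,
// e.g. "3b241101-e2bb-4255-8caf-4136c566a962", which can be
// returned directly from getWorkflowID().
const workflowId: string = randomUUID();
console.log(workflowId);
```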

getFields

getFields(): WorkflowField[]

Returns an array of configurable fields that define the workflow's UI and input parameters. Each field is defined using the WorkflowField class with the following parameters:

  • name: Field identifier
  • type: Field type (1-5, see WorkflowType enum)
  • value: Default value
  • required: Whether the field is required
  • caption: Display label in UI

For fields with choices (type 4), use setChoices() to define available options:

public getFields(): WorkflowField[] {
    const choices: string[] = this.acceptModels;
    return [
        new WorkflowField("description", 5, this.description, true, "Description"),
        new WorkflowField("fileContent", 3, "fileContent", true, "File"),
        new WorkflowField("basePrompt", 1, this.basePrompt, true, "User Prompt"),
        new WorkflowField("model", 4, this.model, true, "Model").setChoices(choices),
    ];
}

getModel

getModel(): string

Returns the default AI model name to be used by this workflow. It should be one of the supported models.

handleAction

handleAction(
  sessionId: string,
  data: any,
  chatMessageParamBowl: ChatMessageParamBowl,
  isResultReady: boolean
): Promise<void>

Implements the main workflow logic. Called when the workflow is executed.

  • sessionId: Unique identifier for the current chat session
  • data: Input data from workflow fields
  • chatMessageParamBowl: Message history manager
  • isResultReady: Indicates if previous results are available

IWorkflow Helper Methods

These methods are available in the base class and can be used or overridden:

handleRefresh

handleRefresh(
  message: any,
  chatMessageParamBowl: ChatMessageParamBowl
): Promise<void>

Optional override to implement refresh behavior for the workflow.

markResultAndNotRefreshable

markResultAndNotRefreshable(
  model: string,
  message: ChatCompletionMessage,
  isResultReady: boolean,
  sessionId: string,
  chatMessageParamBowl: ChatMessageParamBowl | null,
  promptTokens: number,
  completionTokens: number
): void

Marks the workflow result as complete but not refreshable.

markResultAndRefreshable

markResultAndRefreshable(
  model: string,
  message: ChatCompletionMessage,
  isResultReady: boolean,
  sessionId: string,
  chatMessageParamBowl: ChatMessageParamBowl,
  promptTokens: number,
  completionTokens: number
): void

Marks the workflow result as complete and refreshable.

markErrorResult

markErrorResult(message: string): void

Sets the workflow result to an error state with the given message. Return from the routine immediately after this call.
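The guard-and-return pattern looks like this in practice. WorkflowSketch below is a hypothetical stand-in for a real IWorkflow subclass, used only to illustrate the control flow:

```typescript
// Hypothetical stand-in, not the real labourlm-common IWorkflow:
// it records the error message so the control flow can be observed.
class WorkflowSketch {
    lastError: string | null = null;

    markErrorResult(message: string): void {
        this.lastError = message;
    }

    async handleAction(client: unknown): Promise<string | null> {
        if (client === null) {
            this.markErrorResult("OpenAI Library not initialized");
            return null; // end the routine right after markErrorResult
        }
        // ...normal processing would continue here...
        return "ok";
    }
}
```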

Properties

openaiClient

openaiClient: OpenAI | null

OpenAI API client instance. Automatically initialized by the extension.

genaiClient

genaiClient: GoogleGenAI | null

Google Gemini API client instance. Automatically initialized by the extension.

anthropicClient

anthropicClient: Anthropic | null

Anthropic API client instance. Automatically initialized by the extension.

License

MIT

Attribution

  • Extension icon from FLATICON, Busy icons created by Awicon - Flaticon
  • Extension setting icon from FLATICON, Administrator icons created by SumberRejeki - Flaticon