ML Productionizer

Gourav Sahu

Convert ML scripts into production-ready projects using local or cloud LLMs
Installation
Launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.

ML Productionizer is a powerful VS Code extension designed to instantly transform your raw, single-file Python Machine Learning training scripts into robust, production-ready project scaffolds.

Powered by Large Language Models (LLMs)—including Gemini, Hugging Face Router, Ollama, and other local/remote backends—this extension analyzes your script and automatically generates a complete project structure with industry-standard best practices.

Features

  • Seamless VS Code Integration: Load any active .py file directly into the interactive webview panel.
  • Multiple LLM Backends: Support for leading cloud and local models:
    • Gemini 2.5 Pro (Google AI Studio)
    • Hugging Face Router API (e.g., Qwen-Coder)
    • Ollama
    • llama.cpp
    • LM Studio
    • text-generation-webui
  • Modular Production Scaffolding: Select exactly what your project needs:
    • [L] Logging: Standardized Python logging setups.
    • [M] MLflow: Experiment tracking integration.
    • [T] Tests: Pytest scaffolds for your models.
    • [D] Docker: Dockerfile and docker-compose.yml.
    • [CI] CI/CD: GitHub Actions workflows.
    • [V] DVC: Data Version Control initialization.
    • [C] Config: YAML/JSON configuration files.
    • [O] Monitoring: Observability and metrics hooks.
  • Real-Time Streaming: Watch as your project structure is generated live in the panel.
  • Interactive File Viewer: Review, copy, and selectively browse generated files before committing them.
  • One-Click Export: Write all generated files directly into your VS Code workspace with a single click.
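For a sense of what the scaffolding options produce, the [L] Logging feature might generate a module along these lines (an illustrative sketch with hypothetical names, not the extension's exact output):

```python
# logging_setup.py — example of the standardized logging scaffold the
# [L] option could generate (illustrative sketch, not the exact output).
import logging
import sys


def setup_logging(name: str = "ml_project",
                  level: int = logging.INFO) -> logging.Logger:
    """Configure and return a console logger with a consistent format."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid attaching duplicate handlers on re-import
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter(
                "%(asctime)s | %(name)s | %(levelname)s | %(message)s"
            )
        )
        logger.addHandler(handler)
    return logger
```

A training script would then replace scattered `print` calls with `logger = setup_logging()` followed by `logger.info(...)`.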

Usage Guide

  1. Open any Python ML project folder in VS Code.
  2. Right-click a .py file containing your ML script and choose ML Productionizer: Open Panel, or trigger it from the Command Palette (Ctrl+Shift+P / Cmd+Shift+P).
  3. In the Connection Settings tab, select your preferred backend and paste your API key (stored securely in VS Code's SecretStorage).
  4. Click Test Connection to ensure the backend is reachable.
  5. Review the script in the ML training script text area.
  6. Select your desired Production features (Docker, MLflow, CI/CD, etc.).
  7. Click Generate production project.
  8. Preview the output in the tab viewer and click Write all to workspace to save the files!

Supported Backends & Endpoints

You can customize the endpoint URL and model name for any of the supported backends right in the extension panel:

  • Gemini: https://generativelanguage.googleapis.com/v1beta
  • Hugging Face Router: https://router.huggingface.co/v1
  • Ollama: http://localhost:11434
  • llama.cpp server: http://localhost:8080
  • LM Studio: http://localhost:1234
  • text-generation-webui: http://localhost:5000
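Before generating a project against a local backend, it can help to confirm the server is reachable, which is essentially what the Test Connection button checks. A minimal sketch against Ollama's `/api/tags` endpoint (the helper name is ours; other backends expose different health routes):

```python
# Quick reachability check for a local Ollama server (hypothetical helper,
# approximating what the extension's "Test Connection" button does).
import json
import urllib.error
import urllib.request


def backend_reachable(endpoint: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server at `endpoint` responds to /api/tags."""
    try:
        with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
            models = json.load(resp).get("models", [])
            print(f"Reachable; {len(models)} model(s) installed")
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, verify the server is running and that the endpoint URL in the panel matches the one above.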

Extension Settings

Default connection settings can be configured directly in your VS Code settings under the mlProductionizer prefix:

  • mlProductionizer.backend: Choose your default backend.
  • mlProductionizer.endpoint: The base URL of the LLM server.
  • mlProductionizer.model: The default model name (e.g., gemini-2.5-pro or qwen2.5-coder:32b).
  • mlProductionizer.maxTokens: The maximum number of tokens to generate.
  • mlProductionizer.temperature: The sampling temperature.
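Put together, a defaults block in your user or workspace settings.json might look like this (the backend, model name, and values are illustrative):

```json
{
  "mlProductionizer.backend": "ollama",
  "mlProductionizer.endpoint": "http://localhost:11434",
  "mlProductionizer.model": "qwen2.5-coder:32b",
  "mlProductionizer.maxTokens": 4096,
  "mlProductionizer.temperature": 0.2
}
```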

Privacy & Security

API keys (such as Gemini or Hugging Face) are stored securely using VS Code's native SecretStorage API, ensuring your credentials remain safe, encrypted, and strictly local to your development environment.


Built to accelerate the transition from ML research to production.
