
ProCommit

📝 A customizable VS Code extension for AI-generated commit messages.

Features

  • Emoji support in commit messages.
  • Custom generator, endpoint, and API key.
  • Commit message generation in different languages.
  • Multiple generated results to choose from.
  • Highly customizable settings.

Requirements

To use this extension, you need an API Key:

  • Obtain an API key from OpenAI (default endpoint), or
  • Use an API key for your own custom endpoint.

Install

  • Download ProCommit Extension From Marketplace

Install (Manually)

  • Download ProCommit Extension From Direct Link or VSIX Registry
  • In Visual Studio Code, open the Extensions view from the Activity Bar, choose Install from VSIX... from the view's menu, select the ProCommit.vsix file, and click Install.
  • You're done!
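Alternatively, if the `code` command-line tool is on your PATH, the VSIX can be installed from a terminal (the file path below assumes the VSIX was downloaded to the current directory):

```shell
# Install the downloaded VSIX from the command line
code --install-extension ./ProCommit.vsix
```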

Extension Settings

The ProCommit extension contributes the following settings:

General

  • procommit.general.generator: Generator used to create commit messages. Available options: ChatGPT, Custom.
  • procommit.general.messageApproveMethod: Method used to approve generated commit messages. Available options: Quick pick, Message file.
  • procommit.general.language: Controls which language is used for commit messages.
  • procommit.general.showEmoji: Include emojis in commit messages.
  • procommit.general.useMultipleResults: Generate multiple results to choose from.

OpenAI

  • procommit.openAI.apiKey: API Key needed for generating AI commit messages.
  • procommit.openAI.modelVersion: Version of the AI model to use.
  • procommit.openAI.customEndpoint: Custom endpoint URL.
  • procommit.openAI.temperature: Controls randomness. Lower values result in less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive.
  • procommit.openAI.maxTokens: The maximum number of tokens to generate. Requests can use up to 2048 tokens shared between prompt and completion.
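As an illustration, the settings above could be set in your settings.json (VS Code settings files allow comments); the values shown here are placeholders chosen for the example, not documented defaults:

```json
{
  // General
  "procommit.general.generator": "ChatGPT",
  "procommit.general.messageApproveMethod": "Quick pick",
  "procommit.general.language": "English",
  "procommit.general.showEmoji": true,
  "procommit.general.useMultipleResults": false,

  // OpenAI
  "procommit.openAI.apiKey": "sk-...",            // your OpenAI API key (placeholder)
  "procommit.openAI.modelVersion": "gpt-3.5-turbo", // assumed model name
  "procommit.openAI.customEndpoint": "",          // leave empty to use the default endpoint
  "procommit.openAI.temperature": 0.2,            // low temperature for more deterministic messages
  "procommit.openAI.maxTokens": 196
}
```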

License

Released under the MIT License by @Kochan.

Contributing

If you want more languages to be supported, please open an issue on our GitHub repository.
