Codellm: Use Ollama and OpenAI to write code

ekbanasolutions

Use local LLM models or OpenAI right inside the IDE to enhance and automate your coding with AI-powered assistance

Codellm: Open-source LLM and OpenAI extension for VSCode

# Visual Studio Code Extension for Large Language Models

This Visual Studio Code extension integrates with OLLAMA, an open-source tool for running large language models locally, as well as the OpenAI API, offering both offline and online functionality. It's designed to simplify generating code or answering queries directly within the editor.

  • Embeddings Support: Use local or OpenAI embeddings via Redis for enhanced context in your prompts.
  • File Type Support: Upload and embed content from PDF, DOCX, JSON, and TXT files.
  • Codebase Embedding: Embed your entire codebase in Redis vector storage or OpenAI for relevant response generation.
  • OpenAI API Integration: Access OpenAI's official API to utilize GPT-3, GPT-4, or ChatGPT models for code generation and natural language processing.

This extension makes it easy for developers at any skill level to integrate advanced language models into their development process, enhancing productivity and creativity.

## Features

  • 💡 Ask general questions or use code snippets from the editor to query locally available LLM models or OpenAI models via an input box in the sidebar
  • 🖱️ Right-click on a code selection and run one of the context menu shortcuts
    • automatically write documentation for your code
    • explain the selected code
    • refactor or optimize it
    • find problems with it
    • copy the code to the clipboard
    • download the conversation as a JSON file
    • upload a conversation from a JSON file
  • 📚 View the history of your conversations with the AI
  • 📌 Pin conversations to keep them in the sidebar
  • 📦 Save conversations to a file
  • 📤 Export conversations to a file
  • 📥 Import conversations from a file
  • 💻 View the AI's responses in a panel next to the editor
  • 📝 Insert code snippets from the AI's response into the active editor by clicking on them
  • 🎛️ Customize the prompt that will be sent to the AI for each command

## Usage

You can select some code in the editor, right-click on it, and choose one of the following shortcuts from the context menu:

Commands:

  • Ask Codellm: provides a prompt for you to enter any query
  • Codellm: Explain selection: explains what the selected code does
  • Codellm: Refactor selection: tries to refactor the selected code
  • Codellm: Find problems: looks for problems/errors in the selected code, fixes them, and explains the fixes
  • Codellm: Optimize selection: tries to optimize the selected code
  • Codellm: Download conversation history: saves the conversation history as a JSON file
  • Codellm: Upload conversation history: loads a previously saved conversation history from a JSON file

Ask Codellm is also available when nothing is selected. For the other four code commands (Explain, Refactor, Find problems, Optimize), you can customize the exact prompt that will be sent to the AI by editing the extension settings in VSCode Preferences.

There, you can also change the model used for the requests. The default is ChatGPT, which is the smartest and currently free, but you can switch to another model (text-davinci-003 is the best of the paid models, code-davinci-002 of the free ones) if it doesn't work. You can also change the temperature and the maximum number of tokens returned by the AI; the default values are 0.5 and 1024, respectively.
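As a rough illustration of what the temperature and token settings control, here is a minimal sketch of an OpenAI chat completion request using those defaults. This is not the extension's actual code, just the shape of the underlying API call:

```python
import os
import requests

# Minimal sketch of an OpenAI chat completion request using the
# extension's default sampling settings (temperature 0.5, 1024 tokens).
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Explain this code: ..."}],
        "temperature": 0.5,  # default temperature in the extension settings
        "max_tokens": 1024,  # default number of returned tokens
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```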


## Installation of the extension (manual)

Step 1: Download the VSIX Extension File

Obtain the VSIX package of Codellm. This file usually has the extension ".vsix".

Step 2: Open VSCode Extensions View

Launch Visual Studio Code and go to the Extensions view. You can do this by clicking on the Extensions icon in the Activity Bar on the side of the window, or by going to the Extensions option in the View menu in the top menu bar.

Step 3: Install from VSIX

In the Extensions view, click on the ellipsis (three dots) in the top right corner and select "Install from VSIX" from the drop-down menu.

Step 4: Browse and Select VSIX File

After clicking "Install from VSIX," a file picker dialog appears. Navigate to the location where you downloaded the VSIX file in Step 1, select it, and click "Open" to start the installation process.

Step 5: Review Extension Details

After you select the VSIX file, VSCode will display the extension details, including the extension's name, publisher, and version. Review these details to ensure that you are installing the correct extension.

Step 6: Confirm Extension Installation

Click the "Install" button to confirm the installation of the VSIX extension. VSCode will then install the extension and display a notification once the installation is complete.

Step 7: Restart VSCode

After the installation is complete, VSCode may prompt you to restart the editor. If prompted, click the "Reload" button to restart VSCode with the installed extension.

Step 8: Use the Custom Extension

Once VSCode has restarted, you should be able to use the installed custom extension just like any other extension. You can access its features and settings through the Extensions view, and customize it according to your needs.

## Locating the extension

On the left side of VSCode, click on the Extensions icon to see the installed extensions, then click on the Codellm icon to open the extension.

(Screenshot: Step 1)

After clicking on the icon, you will see the extension. Click on it to open it.

(Screenshot: Step 2)

## Configure Local LLM

### Install an LLM locally with OLLAMA

Visit OLLAMA and download an LLM model; instructions for installing models can be found in the OLLAMA documentation. After installing a model, you can use the local LLM in the extension.
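Before pointing the extension at OLLAMA, you can confirm the local server is up by sending it a one-off prompt over its REST API. A minimal sketch, assuming the default endpoint and a model you have already pulled ("codellama" here is only an example):

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

# Send a single prompt to a locally installed model; "codellama" is an
# example name -- use any model you have pulled with OLLAMA.
reply = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": "codellama",
        "prompt": "Write a hello world program in Python.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
).json()
print(reply["response"])
```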

### Configuration in the extension

To configure the local LLM, click on the gear icon at the bottom left corner of VSCode. Steps:

  1. Click on the gear icon at the bottom left corner of the VSCode.
  2. Choose "ollama" from the LLM Provider dropdown.
  3. Input your custom ollama URL as the API endpoint, or keep the default.
  4. Do not change anything in the API key field, since a local LLM does not require an API key.
  5. When you choose a local LLM, the extension automatically fetches the available models from the local server into the extension's main panel, and you can select one from the dropdown as shown in the image below.

(Screenshot: Step 3)
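The model list in step 5 comes straight from the local server. A sketch of the equivalent request (the extension presumably does something similar internally, assuming the default endpoint):

```python
import requests

# Ask the local Ollama server which models are installed; the extension
# populates its model dropdown from the same information.
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
print([model["name"] for model in tags.get("models", [])])
```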

## Configure OpenAI API

To configure the OpenAI API, click on the gear icon at the bottom left corner of VSCode. Steps:

  1. Click on the gear icon at the bottom left corner of the VSCode.
  2. Choose "openai" from the LLM Provider dropdown.
  3. Input your API key in the key field.
  4. Choose a model from the dropdown as shown in the image below.

(Screenshot: Step 4)
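If the model dropdown stays empty, you can check the key itself outside the extension. A minimal sketch that lists the models your key can access via OpenAI's official API:

```python
import os
import requests

# Verify an OpenAI API key by listing the models it can access.
resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()  # a 401 here means the key is invalid
print([model["id"] for model in resp.json()["data"]][:10])
```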

## Write a query

To write a query, click on the "Ask Codellm" textarea at the bottom of the extension panel, write your query, and click the "Send" button. The LLM's response appears in the panel.

(Screenshot: Step 5)

## Edit/Delete/Insert/Copy conversation

To edit a conversation, click the edit icon at the top right corner of the conversation. From the same place you can delete the conversation (delete icon), insert it into the editor (insert icon), or copy it to the clipboard (copy icon).

(Screenshot: Step 6)

## View Chat History

To view the chat history, click on the "Chat" button at the top right corner of the extension panel. The chat history appears in the panel.

(Screenshot: Step 7)

## Pin/Unpin conversation history

To pin a conversation, click the pin icon at the top right corner of the chat history; click the unpin icon to unpin it. You can also search conversations by clicking the search icon and entering a query in the search bar.

## Edit/Delete conversation history

To edit a conversation in the history, click the edit icon at the top right corner of the conversation; click the delete icon to delete it.

(Screenshot: Step 8)

## Export conversation history

To export the conversation history, right-click in your editor and choose the "Codellm: Download conversation history" option.

(Screenshot: Step 9)

## Import conversation history

To import a conversation history, right-click in your editor and choose the "Codellm: Upload conversation history" option.

(Screenshot: Step 10)
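The history travels as a JSON file. Its exact schema isn't documented here, but as a rough, hypothetical illustration you could inspect an exported file like this (the "messages", "role", and "content" field names are assumptions, not the extension's confirmed format):

```python
import json

# Inspect an exported conversation file. The field names used below are
# hypothetical; open a real export to see the actual schema.
with open("conversation-history.json") as f:
    history = json.load(f)

for message in history.get("messages", []):
    print(message.get("role", "?"), ":", message.get("content", "")[:80])
```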

## Embeddings

When using OLLAMA as your Large Language Model (LLM) provider through this extension, it leverages Redis for storing embeddings. Additionally, the extension supports using OpenAI embeddings, offering the flexibility to combine OpenAI with the Redis vector store for enhanced embedding capabilities. These embeddings are crucial for grasping the context of the code, improving the relevance of generated responses.

(Screenshot: upload file for embeddings)

### Redis-Stack-Server Requirement

To use the embedding features, you must have redis-stack-server installed; the extension works exclusively with this build and not with the standalone Redis server. Installation instructions for redis-stack-server can be found in the Redis documentation. This setup is necessary to enable the advanced embedding features offered by the extension.
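To check whether you are running redis-stack-server rather than a plain Redis server, you can probe for the RediSearch module it bundles. A minimal sketch using the redis-py client, assuming the default localhost:6379:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
try:
    # FT._LIST is provided by the RediSearch module that ships with
    # redis-stack-server; a plain Redis server rejects the command.
    r.execute_command("FT._LIST")
    print("redis-stack-server detected (search module available)")
except redis.exceptions.ResponseError:
    print("Search module missing: install redis-stack-server, not plain Redis.")
```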

### Session-Specific Context

Please be aware that the context generated from your codebase is only available for the current chat session. If you initiate a new chat session, you will need to re-upload your codebase to regenerate the context. This ensures that each session's suggestions are as accurate and relevant as possible.

### Conversation-Specific Context

In the main chat panel, you'll find the 'Rescan Workspace' button, providing the ability to upload your entire codebase to a Redis server to create these embeddings. This process is crucial for tailoring the model's responses to your specific coding environment or project.
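Under the hood, scanning means chunks of your files end up as vectors in Redis. The extension's actual schema isn't documented here; as a hedged sketch of the general technique (the index name, field names, key prefix, and vector dimension below are all hypothetical), storing and KNN-searching embeddings with redis-py looks roughly like this:

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)
DIM = 384  # hypothetical embedding size; depends on the embedding model

# Create a vector index over hashes prefixed "chunk:" (illustrative names,
# not the extension's actual schema).
r.ft("code_idx").create_index(
    [
        TextField("path"),
        VectorField("embedding", "FLAT",
                    {"TYPE": "FLOAT32", "DIM": DIM, "DISTANCE_METRIC": "COSINE"}),
    ],
    definition=IndexDefinition(prefix=["chunk:"], index_type=IndexType.HASH),
)

# Store one embedded chunk (the vector would come from an embedding model).
vec = np.random.rand(DIM).astype(np.float32)
r.hset("chunk:1", mapping={"path": "src/app.py", "embedding": vec.tobytes()})

# Retrieve the 3 stored chunks nearest to a query vector.
query = (
    Query("*=>[KNN 3 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("path", "score")
    .dialect(2)
)
results = r.ft("code_idx").search(query, query_params={"vec": vec.tobytes()})
for doc in results.docs:
    print(doc.path, doc.score)
```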

### Rescan Workspace

We understand that your codebase may contain unnecessary files, so by default, the system takes into account the .gitignore file and ignores the directories and files mentioned in it. Additionally, you can specify more files and directories to ignore by adding them to the .codellmignore file in the root of your workspace.

For example, to ignore the following files and directories, create a .codellmignore file in the root of your workspace and add the listed lines:

```
.vscode/
.husky/
.github/
bin/
config
coverage/
dist/
node_modules/
public/
i18n/
```

If you make any changes in your codebase, you can rescan the workspace by clicking the 'Rescan Workspace' button in the main chat panel. This action updates the embeddings with the latest changes in your codebase by comparing checksums of the files.
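A hedged sketch of that checksum idea (an illustration of the technique, not the extension's actual code): hash each file's contents and re-embed only the files whose hash changed since the last scan:

```python
import hashlib
import json
from pathlib import Path

# Illustration of checksum-based change detection: hash every tracked
# file and compare against the hashes saved during the previous scan.
def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

store = Path("checksums.json")
previous = json.loads(store.read_text()) if store.exists() else {}
current = {str(p): checksum(p) for p in Path(".").rglob("*.py")}

changed = [p for p, h in current.items() if previous.get(p) != h]
print("files to re-embed:", changed)

store.write_text(json.dumps(current))
```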

Now, with the embeddings, you can generate more accurate and relevant responses from the model.

To use the workspace file embeddings, type '@file' in the prompt. You'll then see a dropdown list of files in the workspace from which you can select to utilize the embeddings, as shown in the image below:

(Screenshot: Choose Conversation Context)

### Refresh Option

Click on the 'Refresh' button in the main chat panel to refresh the models and embedded files in the workspace.

## Contributors

  • Roshan Ranabhat (Extension)
  • Sandeep Risal (Design)

## Acknowledgements

  • CodeGPT: GPT3 and ChatGPT extension for VSCode
  • Ekbana
  • OpenAI
  • OLLAMA
  • VSCode
  • Code icons created by Azland Studio - Flaticon