# Codellm: Open-source LLM and OpenAI extension for VSCode

This Visual Studio Code extension integrates with OLLAMA, an open-source runtime for running large language models locally, as well as the OpenAI API, offering both offline and online functionality. It's designed to simplify generating code and answering queries directly within the editor.
This extension makes it easy for developers at any skill level to integrate advanced language models into their development process, enhancing productivity and creativity.

## Features
## Usage

Select some code in the editor, right-click it, and choose one of the following shortcuts from the context menu.

Commands:
There, you can also change the model that will be used for the requests. The default is

## Installation of extension (Manual)

### Step 1: Download the VSIX Extension File

Obtain the VSIX package of Codellm. This file usually has the extension ".vsix".

### Step 2: Open VSCode Extensions View

Launch Visual Studio Code and go to the Extensions view. You can do this by clicking the Extensions icon in the Activity Bar on the side of the window, or by choosing Extensions from the View menu in the top menu bar.

### Step 3: Install from VSIX

In the Extensions view, click the ellipsis (three dots) in the top right corner and select "Install from VSIX" from the drop-down menu.

### Step 4: Browse and Select VSIX File

In the file picker dialog that appears after clicking "Install from VSIX", navigate to the location where you downloaded the VSIX file in Step 1. Select the file and click "Open" to start the installation.

### Step 5: Review Extension Details

After you select the VSIX file, VSCode displays the extension details, including its name, publisher, and version. Review these details to make sure you are installing the correct extension.

### Step 6: Confirm Extension Installation

Click the "Install" button to confirm the installation. VSCode installs the extension and displays a notification once the installation is complete.

### Step 7: Restart VSCode

After the installation completes, VSCode may prompt you to restart the editor. If prompted, click the "Reload" button to restart VSCode with the installed extension.

### Step 8: Use the Custom Extension

Once VSCode has restarted, you can use the installed extension just like any other. You can access its features and settings through the Extensions view and customize it to your needs.

## Locating the extension

On the left side of VSCode, click the Extensions icon to see the installed extensions, then click the icon below to open the extension.

*(extension icon image from the examples folder)*

After clicking the icon, the extension panel opens.

## Configure Local LLM

### Install an LLM locally with OLLAMA

Visit OLLAMA and download an LLM model. You can find instructions for installing models in the OLLAMA documentation. After installing a model, you can use the local LLM in the extension; a quick sanity check is sketched after the steps below.

### Configuration in Extension

To configure the local LLM, click the gear icon at the bottom left corner of VSCode.

Steps:
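Before configuring the extension, it can help to confirm that OLLAMA is serving a model locally. This is a minimal sketch assuming OLLAMA's standard CLI and its default API endpoint at `http://localhost:11434`; the model name `codellama` is only an example.

```sh
# Pull a model with the OLLAMA CLI (the model name is just an example).
ollama pull codellama

# List locally installed models via OLLAMA's default REST endpoint.
curl http://localhost:11434/api/tags
```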
## Configure OpenAI API

To configure the OpenAI API, click the gear icon at the bottom left corner of VSCode.

Steps:
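If you prefer editing your VSCode `settings.json` directly rather than using the gear-icon UI, the entries would look roughly like the sketch below. The setting keys shown are hypothetical placeholders, not the extension's actual identifiers; check the configuration UI for the real names.

```jsonc
// Hypothetical settings.json entries; the real keys are defined by the
// extension's configuration UI, so adjust these names to match it.
{
  "codellm.provider": "openai",          // hypothetical key
  "codellm.openai.apiKey": "sk-...",     // your OpenAI API key
  "codellm.openai.model": "gpt-4"        // any model your account can access
}
```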
## Write a query

To write a query, click the "Ask Codellm" textarea at the bottom of the extension, type your query, and click the "Send" button. The response from the LLM appears in the panel.

## Edit/Delete/Insert/Copy conversation

To edit a conversation, click the edit icon at the top right corner of the conversation. You can also delete the conversation with the delete icon, insert it into the editor with the insert icon, or copy it to the clipboard with the copy icon.

## View Chat History

To view chat history, click the "Chat" button at the top right corner of the extension panel. The chat history appears in the panel.

### Pin/Unpin conversation history

To pin a conversation, click the pin icon at the top right corner of the chat history; click the unpin icon to unpin it. You can search conversations by clicking the search icon and entering a query in the search bar.

### Edit/Delete conversation history

To edit a conversation, click the edit icon at the top right corner of the conversation. You can also delete it with the delete icon.

### Export conversation history

To export conversation history, right-click in your editor and choose the "Codellm: Download conversation history" option.

### Import conversation history

To import conversation history, right-click in your editor and choose the "Codellm: Upload conversation history" option.

## Embeddings

When you use OLLAMA as your Large Language Model (LLM) provider through this extension, it leverages Redis for storing embeddings. The extension also supports OpenAI embeddings, offering the flexibility to combine OpenAI with the Redis vector store for enhanced embedding capabilities. These embeddings are crucial for grasping the context of your code and improving the relevance of generated responses.

### Redis-Stack-Server Requirement

To use the embedding features, you must have redis-stack-server installed; the extension works exclusively with this version, not with the standalone Redis server. Installation instructions for redis-stack-server can be found here. Note that this setup is required to enable the extension's advanced embedding features.
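redis-stack-server can be installed natively or run as a container. As one illustration (not the only supported setup), the official Docker image exposes Redis on its default port:

```sh
# Run redis-stack-server in Docker, exposing the default Redis port 6379.
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```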
### Session-Specific Context

Please be aware that the context generated from your codebase is only available for the current chat session. If you start a new chat session, you will need to re-upload your codebase to regenerate the context. This ensures that each session's suggestions are as accurate and relevant as possible.

### Conversation-Specific Context

In the main chat panel, you'll find the 'Rescan Workspace' button, which uploads your entire codebase to a Redis server to create these embeddings. This process is crucial for tailoring the model's responses to your specific coding environment or project. Because your codebase may contain unnecessary files, the system respects your .gitignore by default and skips the directories and files listed there. You can specify additional files and directories to ignore by adding them to a .codellmignore file in the root of your workspace.

For example, to ignore particular files and directories, create a .codellmignore file in the root of your workspace and add the lines you need:
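A hypothetical illustration (the entries below are placeholders, assuming the same .gitignore-style patterns the extension honors from .gitignore):

```gitignore
# Hypothetical .codellmignore; adjust the entries to your project.
node_modules/
dist/
*.log
.env
```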
If you make changes to your codebase, you can rescan the workspace by clicking the 'Rescan Workspace' button in the main chat panel. This updates the embeddings with the latest changes by comparing file checksums. With the embeddings in place, the model can generate more accurate and relevant responses. To use the workspace file embeddings, type '@file' in the prompt; you'll then see a dropdown list of workspace files from which you can select the files whose embeddings you want to use, as shown below:

*(screenshot of the @file dropdown)*

## Refresh Option

Click the 'Refresh' button in the main chat panel to refresh the models and embedded files in the workspace.

## Contributors

## Acknowledgements