DevHelperAI

FoxProgrammer


devhelperai README

Developer productivity is significantly affected by the tools developers use. Estimates suggest that developers spend 30% or more of their time on routine, manual tasks, which leads to decreased productivity, burnout, and dissatisfaction.

Recently, it has become evident that LLMs (Large Language Models) can help address several of the productivity challenges faced by developers in software development tasks. As a result, there is a growing need for AI-based coding assistants to enhance developer productivity. This is akin to having an experienced software developer available at all times.

We present DevHelperAI, a chat-based, context-aware, and flexible AI assistant extension for VS Code, packed with productivity-enhancing features.

This documentation is in progress and will be updated soon! Thanks for your patience.

Features

1. Codebase parsing and storage
2. Question answering
3. Function dependency discovery
4. Context-aware code and test generation
5. Code review assistant
6. Full control over the LLM

Requirements

Please install Ollama by visiting https://ollama.com/download and following the installation instructions for your operating system. Running Ollama locally is required, as the extension relies on it to handle model downloads, deployment, and resource allocation for inference.
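Since the extension delegates inference to the local Ollama server, a minimal sketch of what a request to Ollama's documented REST API looks like may help when verifying your setup. This is an illustration only, not the extension's actual code; the model name and prompt are placeholders, and the endpoint shown is Ollama's default local address.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def build_generate_request(model: str, prompt: str, temperature: float = 0.7):
    """Build an HTTP request against Ollama's /api/generate endpoint."""
    payload = {
        "model": model,          # a model ID as it appears on Ollama
        "prompt": prompt,
        "stream": False,         # return a single JSON response
        "options": {"temperature": temperature},
    }
    return request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


req = build_generate_request("qwen2.5:1.5b", "Explain this function.")
# With Ollama running locally (and the model pulled via `ollama pull`),
# the request can be sent with: request.urlopen(req)
```

If the request fails, confirm the Ollama daemon is running and the model has been downloaded first.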

Extension Settings

devhelperai.ollama_max_retries
    Maximum number of Ollama API retries. Default: 2

devhelperai.ollama_models
    Array of model IDs, exactly as they appear on Ollama. Default: ["gemma3", "llama3-chatqa:8b", "qwen2.5:1.5b", "deepseek-coder:1.3b-instruct"]

devhelperai.ollama_temperature
    Ollama inference temperature; lower is more deterministic, higher is more random. Default: 0.7

devhelperai.retrieval_threshold
    Threshold that decides between the RAG inference workflow and plain LLM inference; lower is more permissive, higher is stricter. Default: 0.65
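To illustrate how a retrieval threshold like this typically works, here is a hedged sketch of similarity-based routing: if no indexed codebase chunk is similar enough to the query, fall back to plain LLM inference. The function names and routing logic below are hypothetical, not the extension's published implementation.

```python
import math

RETRIEVAL_THRESHOLD = 0.65  # devhelperai.retrieval_threshold default


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def should_use_rag(query_embedding, chunk_embeddings, threshold=RETRIEVAL_THRESHOLD):
    """Route to the RAG workflow only if some indexed chunk matches well enough."""
    best = max(
        (cosine_similarity(query_embedding, c) for c in chunk_embeddings),
        default=0.0,
    )
    return best >= threshold
```

A lower threshold admits weaker matches, so more queries take the RAG path; a higher one falls back to plain LLM inference more often.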

Known Issues

This extension has not been tested on Apple Silicon; please use it with caution. Further development is in progress.

Release Notes

0.0.5

Update README

0.0.3

Some bug fixes

0.0.1

Initial release of DevHelperAI


Enjoy!
