DevGists Code Assistant

A powerful VS Code extension that brings local LLM-powered code analysis to your development workflow. It runs local Llama models, so your code never leaves your machine and responses involve no network round trips.

Features

🔍 Code Analysis

  • Get instant answers about your code
  • Understand complex functions and patterns
  • Deep code analysis without leaving your IDE

💡 Improvement Suggestions

  • Receive AI-powered code recommendations
  • Learn best practices
  • Identify potential issues

💬 Interactive Chat

  • Natural language conversations about your code
  • Context-aware responses
  • Step-by-step guidance

🔎 Codebase Search

  • Semantic code search capabilities
  • Find similar patterns
  • Navigate large codebases efficiently

Requirements

  • Python 3.8+
  • 16GB RAM minimum (32GB recommended)
  • A local Llama server running (server code: https://github.com/mkyazze/devgists_code_assistant)

Getting Started

  1. Install the Local Server
pip install llama-cpp-python fastapi "uvicorn[standard]"
  2. Download a Llama Model
  • Recommended: CodeLlama-7B-Instruct-GGUF
  • Place it in your models directory
  3. Start the Server (a minimal sketch of such a server follows this list)
python code_assistant.py
  4. Use the Extension
  • Open any code file
  • Press Cmd+Shift+P (Mac) or Ctrl+Shift+P (Windows/Linux)
  • Type "Code Assistant" to see the available commands
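
The actual code_assistant.py ships with the server repository linked above. As a rough orientation only, a minimal server along these lines can be built from the installed packages; the model filename, the /analyze endpoint, and the request shape below are illustrative assumptions, not the extension's documented API:

# code_assistant.py -- illustrative sketch, not the shipped server.
# Model path, /analyze endpoint, and request shape are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()

# Load the GGUF model once at startup (expect 30-60 seconds).
llm = Llama(model_path="models/codellama-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

class AnalyzeRequest(BaseModel):
    code: str
    question: str

@app.post("/analyze")
def analyze(req: AnalyzeRequest):
    # Build a plain completion prompt and run it through the local model.
    prompt = (
        "Analyze the following code and answer the question.\n\n"
        f"Code:\n{req.code}\n\nQuestion: {req.question}\n\nAnswer:"
    )
    out = llm(prompt, max_tokens=512)
    return {"answer": out["choices"][0]["text"]}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)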

Commands

  • Code Assistant: Analyze Current File - Get insights about your code
  • Code Assistant: Suggest Improvements - Receive enhancement suggestions
  • Code Assistant: Start Chat - Begin an interactive chat session

Extension Settings

This extension contributes the following settings:

  • codeAssistant.serverUrl: URL of the local Llama server (default: "http://localhost:8000")
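
For example, to point the extension at a server on a different host or port, set the value in your VS Code settings.json (shown here with the default):

{
  "codeAssistant.serverUrl": "http://localhost:8000"
}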

Performance Notes

Typical timings:

  • Model loading: 30-60 seconds
  • First request: 10-20 seconds
  • Subsequent requests: 2-5 seconds

Troubleshooting

Server Won't Start

  • Check your Python version (3.8 or newer, per the requirements above)
  • Verify the model path
  • Ensure port 8000 is available (a quick check follows this list)
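
As a quick check for the first and third items, this standard-library snippet verifies the Python version and whether port 8000 is free:

import socket
import sys

# Requirement from above: Python 3.8 or newer.
print("Python OK" if sys.version_info >= (3, 8) else "Python is too old (need 3.8+)")

# Try to bind port 8000; failure means another process already holds it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("127.0.0.1", 8000))
        print("Port 8000 is free")
    except OSError:
        print("Port 8000 is already in use")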

Slow Responses

  • Reduce the size of the file being analyzed
  • Adjust the server's thread count
  • Check system resources (free RAM and CPU load)

Connection Issues

  • Verify the server is running (a quick connectivity check follows this list)
  • Check that localhost is reachable
  • Restart VS Code if needed
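
To verify the first two items without depending on any particular endpoint, a plain TCP connection test is enough:

import socket

# Succeeds only if something is listening on localhost:8000.
try:
    with socket.create_connection(("localhost", 8000), timeout=2):
        print("Server is reachable on localhost:8000")
except OSError:
    print("Cannot reach localhost:8000 -- is the server running?")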

Release Notes

1.0.0

Initial release of DevGists Code Assistant:

  • Local LLM integration
  • Code analysis features
  • Interactive chat interface
  • Codebase search capabilities

License

This software is provided under a dual license:

  • The server component is open source under the MIT license
  • The VS Code extension is proprietary but free to use

Support

Email michael@devgists.com for:

  • Documentation
  • Tutorials
  • Support

Enjoy coding with AI assistance!
