DeepSeek VS is a Visual Studio Code extension that provides a local DeepSeek-powered AI coding assistant directly in your editor. Leverage the power of DeepSeek R1 models to get intelligent code assistance while keeping all your data private and secure on your own machine.
Developer
Brendan Choi | 최만승
I am a Korean computer science student based in New Zealand, studying at the University of Auckland. In addition to my studies, I work as a personal trainer at CITYFITNESS NZ (Queen Street branch), so I can help you stay fit if too much programming leaves you feeling out of shape. I'm active on various social platforms, so please follow me to keep up with my latest projects and insights!
This software is designed to empower everyone to code without worrying about sending sensitive code or information to external LLM servers. One of the key benefits is that you can work seamlessly without switching between multiple tabs. The release of DeepSeek was a game changer for me: it unlocked a new way for small LLMs (sLLMs) to run on slower hardware while still delivering quality responses. I am dedicated to continually updating the app to keep it competitive with other chatbots. If you appreciate what I'm doing, buy me a coffee ☕️🫠
PAYPAL: brendanchoi0626@gmail.com
NZ BANK ACCOUNT: 12-3401-0083103-50
Features
Local LLM: Run DeepSeek R1 models locally on your machine
Code Assistance: Get help with writing, explaining, and improving your code
Chat Management: Organize conversations in folders for better workflow
Multiple Model Options: Choose from different DeepSeek model variants
Direct Code Selection: Select code snippets and perform actions via context menu
Fast Responses: Get AI help without sending your code to external servers
Installation
Install Ollama (required to run the models locally; see Requirements below)
Install the extension from the VS Code Marketplace
Download at least one DeepSeek R1 model variant through the extension interface
Requirements
Hardware: A modern CPU with at least 8GB RAM is required. For optimal performance, a machine with 16GB+ RAM and a dedicated GPU is recommended. The (GB) figure next to each model is the VRAM required to run that model. You can check your VRAM as follows:
Windows: Task Manager -> Performance -> select your GPU and check its dedicated VRAM
macOS: Apple logo in the top-left corner -> About This Mac -> max VRAM = memory x 0.75 (e.g., a 16GB machine has roughly 12GB of usable VRAM)
Models with higher parameter counts run slower and require more resources, but provide better responses. Models with lower parameter counts run faster with fewer resources, but won't be as capable.
Ollama: The extension requires Ollama to run the LLM models locally (a quick verification sketch follows this list).
Disk Space: Depending on which models you install, you'll need the amount of free disk space specified for each model.
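If you want to confirm that Ollama is reachable before loading the extension, you can query its local HTTP API directly. Below is a minimal sketch that lists the models you have installed; it assumes Ollama's default address (http://localhost:11434) and its documented GET /api/tags endpoint, and needs Node 18+ for the built-in fetch:

```typescript
// Minimal sketch: confirm a local Ollama server is reachable and list the
// models it has installed. Assumes Ollama's default address
// (http://localhost:11434) and its documented GET /api/tags endpoint.
async function listLocalModels(): Promise<void> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    if (!res.ok) {
      throw new Error(`Ollama responded with HTTP ${res.status}`);
    }
    const data = (await res.json()) as {
      models: { name: string; size: number }[];
    };
    if (data.models.length === 0) {
      console.log("Ollama is running, but no models are installed yet.");
      return;
    }
    for (const model of data.models) {
      // size is reported in bytes; convert to GB for readability
      console.log(`${model.name} (${(model.size / 1e9).toFixed(1)} GB)`);
    }
  } catch {
    console.log("Could not reach Ollama. Is it installed and running?");
  }
}

listLocalModels();
```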
Usage
Starting a Chat
Right-click anywhere in the editor
Select "Load DeepSeek Assistant" from the context menu
A chat panel will open where you can start interacting with the model
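Under the hood, an assistant like this talks to the Ollama server running on your machine. The sketch below shows what a single chat turn against Ollama's documented /api/chat endpoint looks like; the deepseek-r1:7b tag is just an example and assumes you have pulled that variant:

```typescript
// Minimal sketch: one chat turn against a locally running DeepSeek R1 model
// through Ollama's documented /api/chat endpoint. The "deepseek-r1:7b" tag is
// an example; use whichever variant you downloaded.
async function askDeepSeek(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:7b",
      messages: [{ role: "user", content: prompt }],
      stream: false, // one complete JSON response instead of a token stream
    }),
  });
  const data = (await res.json()) as { message: { content: string } };
  return data.message.content;
}

askDeepSeek("Explain what a closure is in JavaScript.").then(console.log);
```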
Getting Help with Code
Select the code you want help with
Right-click and choose from options like:
"Explain This Code"
"Improve This Function"
"Find Potential Bugs"
"Ask with Custom Prompt..."
Managing Chats
Create folders to organize related conversations
Save and revisit past chats
Delete or rename chats and folders as needed
Known Issues
Currently only supports DeepSeek R1 model family
No conversation memory between chat sessions (context window support coming soon)
May have slower performance on machines with limited resources
Some behavior may be OS-specific (a fix is planned soon)
Roadmap
[ ] Add conversation memory/context window
[ ] Support for more LLM models
[ ] Code action suggestions
[ ] Enhanced UI with syntax highlighting in responses
[ ] Prompt templates for different coding tasks
Release Notes
1.0.5
Fixed a styling bug
Added support for Windows and macOS
Privacy
DeepSeek VS runs all models locally on your machine. No code or queries are sent to external servers, ensuring your code and intellectual property remain private.