neuralMate
Enhance your coding experience with neuralMate, an AI-powered autocompletion tool that suggests intelligent, context-aware code snippets in real-time. Self-host your model for privacy and performance!
Installation

Launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.

NeuralMate README

neuralMate is a powerful VS Code extension that provides AI-powered code autocompletion and inline suggestions. Designed with privacy and flexibility in mind, neuralMate allows you to run models locally on your machine, ensuring that your code never leaves your device. Alternatively, you can configure it to connect with your own server where your custom model is deployed.

With neuralMate, you have complete control over your AI-powered coding experience, enhancing productivity while maintaining maximum security.

Features

•	Inline Code Suggestions: Get real-time inline suggestions as you type, powered by customizable AI models.
•	Local and Remote Model Support:
	•	Run your model locally on your own machine; no code is sent outside your device, ensuring maximum privacy.
	•	Optionally, connect to a remote server where your model is deployed, giving you the flexibility to use more powerful hardware or cloud-based solutions.
•	Multiple Model Configurations: Easily switch between multiple models configured in your settings.
•	Custom Parameters: Fine-tune model behavior with custom parameters such as temperature and max_tokens.
•	Dynamic Model Updates: Update the active model directly from VS Code settings without restarting.
•	Secure by Design: neuralMate treats security and privacy as top priorities. When running locally, your code never leaves your machine.

Requirements

•	A custom model API running locally or remotely (for example, an OLLAMA instance). neuralMate supports multiple model configurations.

Setting Up an AI Model Locally
    1.	Install OLLAMA:
        Download and install OLLAMA from https://ollama.com/.
    2.	Pull and Run Your Preferred Model:
        Choose a model that suits your needs. In this example, we'll use the DeepSeek-Coder model.
        To pull and run the model, open your terminal and execute the following command:

        ```
        ollama run deepseek-coder
        ```

    3. Enjoy NeuralMate
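Once the model is running, OLLAMA exposes an HTTP API on localhost (port 11434 by default). As a rough sketch of what a completion request against that local server looks like, assuming a generic client that talks to OLLAMA's /api/generate endpoint (this README does not document neuralMate's actual wire format, so treat the payload shape as illustrative):

```python
# Sketch: querying a locally running OLLAMA server from Python.
# The endpoint and payload follow OLLAMA's /api/generate API; the extension's
# own request format may differ -- this only illustrates the local-model idea.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # OLLAMA's default address


def build_completion_request(model: str, prompt: str, temperature: float = 0.2,
                             max_tokens: int = 128) -> dict:
    """Build a JSON payload for OLLAMA's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single response instead of a token stream
        "options": {"temperature": temperature, "num_predict": max_tokens},
    }


def complete(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the generated text."""
    payload = json.dumps(build_completion_request(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama run deepseek-coder` to be active locally.
    print(complete("deepseek-coder", "def fib(n):"))
```

Because everything goes to localhost, the prompt (your code) never leaves the machine, which is the privacy property the extension advertises.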

Extension Settings

This extension contributes the following settings:

•	neuralMate.enable: Enable or disable neuralMate.
•	neuralMate.modelName: Select the default model to use for autocompletion.
•	neuralMate.models: Configure multiple models with custom parameters.
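A multi-model setup might look like the following in settings.json. The exact schema of neuralMate.models is an assumption built from the setting names and parameters (temperature, max_tokens) mentioned in this README, so adjust the field names to whatever the extension actually accepts:

```json
{
  "neuralMate.enable": true,
  "neuralMate.modelName": "deepseek-coder",
  "neuralMate.models": {
    "deepseek-coder": {
      "endpoint": "http://localhost:11434",
      "temperature": 0.2,
      "max_tokens": 128
    },
    "remote-model": {
      "endpoint": "https://my-model-server.example.com",
      "temperature": 0.7,
      "max_tokens": 256
    }
  }
}
```

Switching neuralMate.modelName between the configured entries selects which model serves completions, without restarting VS Code.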

Privacy and Security

•	Local Processing: By default, neuralMate runs models locally, ensuring that your code never leaves your machine. This guarantees maximum privacy and security.
•	Customizable Remote Connections: If you choose to connect to a remote server, you have full control over the server configuration and model deployment.

neuralMate is designed to empower developers with enhanced productivity without compromising code security.

Known Issues

Release Notes


For more information

  • email: issues@neuralmate.dev

Enjoy!
