# Agaze AI Helper
Agaze AI Helper is a Visual Studio Code extension that provides AI-powered assistance for developers. This extension allows users to interact with AI models, submit queries, and receive responses directly within the VS Code environment.
## Features
- Open Agaze AI Helper Panel
- Select AI Model
- Submit queries to AI models
- Display chat history
- Fetch and update available AI models
## Installation
To use Agaze AI Helper, you need to have Ollama installed locally and a model running.
### 1. Install Ollama Locally
**On macOS & Linux:**

```sh
curl -fsSL https://ollama.com/install.sh | sh
```
**On Windows:**

Download and run the Ollama installer from https://ollama.com/download.
Verify the installation:

```sh
ollama --version
```
### 2. Download and Run a Model
After installing Ollama, pull a model to use with Agaze AI Helper.
For example, to pull and run DeepSeek-R1:

```sh
ollama pull deepseek-r1
ollama run deepseek-r1
```
To list available models:

```sh
ollama list
```
### 3. Run Ollama as a Local API

To make the model accessible to Agaze AI Helper, start the Ollama server:

```sh
ollama serve
```
Ollama will now be available at http://localhost:11434.
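To confirm the API is reachable before connecting the extension, you can query Ollama's model-list endpoint (the output depends on which models you have pulled):

```sh
# Quick sanity check: list locally available models via Ollama's HTTP API
curl http://localhost:11434/api/tags
```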
## Using Agaze AI Helper in VS Code
- Open VS Code.
- Install the Agaze AI Helper extension.
- Ensure Ollama is running locally (`ollama serve`).
- Open the Command Palette (`Ctrl+Shift+P`, or `Cmd+Shift+P` on macOS).
- Search for **Agaze AI** and select **Open Agaze AI Helper Panel** or **Select AI Model**.
- Start chatting with your locally running AI model!
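If you want to see what a chat exchange with the local model looks like outside the extension, you can call Ollama's `/api/chat` endpoint directly. This is only a reference sketch; the exact payload Agaze AI Helper sends may differ.

```sh
# Manual chat request against Ollama's /api/chat endpoint
# (illustrative only; the extension's actual request format may differ)
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "Explain async/await in TypeScript." }],
  "stream": false
}'
```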
## Accessing Ollama from a Remote VM or Another Device

If you want to access Ollama from another machine or a VM, follow these steps:

### 1. Allow Remote Access

By default, Ollama only listens on localhost. To allow remote access:

```sh
OLLAMA_HOST=0.0.0.0 ollama serve
```
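If Ollama was installed as a systemd service (the default on most Linux installs), you can make this setting persistent with a service override instead of exporting the variable each time. A minimal sketch, assuming the service is named `ollama`:

```sh
# Persist OLLAMA_HOST for the systemd-managed Ollama service
sudo systemctl edit ollama
# In the override file that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
# Then apply the change:
sudo systemctl restart ollama
```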
### 2. Open Firewall Ports

Ensure your firewall allows incoming connections on port 11434.

**For Ubuntu/Debian (UFW):**

```sh
sudo ufw allow 11434/tcp
```

**For CentOS/RHEL (firewalld):**

```sh
sudo firewall-cmd --add-port=11434/tcp --permanent
sudo firewall-cmd --reload
```
**For AWS EC2:**

- Go to **Security Groups → Edit Inbound Rules**.
- Add a rule for TCP, port 11434, with your IP (or `0.0.0.0/0` for public access).
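If you prefer the AWS CLI over the console, the same inbound rule can be added with `authorize-security-group-ingress`. The security group ID and CIDR below are placeholders; substitute your own values.

```sh
# Hypothetical example: open port 11434 on an EC2 security group via the AWS CLI
aws ec2 authorize-security-group-ingress \
  --group-id <SECURITY_GROUP_ID> \
  --protocol tcp \
  --port 11434 \
  --cidr 0.0.0.0/0   # restrict this to your own IP range where possible
```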
### 3. Connect to Ollama from Another Machine

On another device, replace `<VM_IP>` with your server's IP:

```sh
curl -X POST http://<VM_IP>:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Hello, how are you?",
  "stream": false
}'
```
If using SSH tunneling for security:

```sh
ssh -L 11434:localhost:11434 user@remote-server
```

Now, you can access http://localhost:11434 from your local machine.
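With the tunnel open, the same sanity check from earlier should work against the forwarded port:

```sh
# Through the SSH tunnel, localhost:11434 now reaches the remote Ollama instance
curl http://localhost:11434/api/tags
```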
## Summary

- Install Ollama on your system.
- Pull and run an AI model (`ollama pull deepseek-r1`).
- Start the Ollama API (`ollama serve`).
- Open VS Code and use Agaze AI Helper.
- If needed, configure remote access to use Ollama from other devices.
Now, you're ready to use Agaze AI Helper with your locally running AI model! 🚀