# Binny.AI - VS Code Extension
🤖 Binny.AI is an AI-powered pair-programming assistant for Visual Studio Code that integrates with local Ollama models to provide coding assistance, debugging help, and code analysis.
## Features
### 🚀 AI-Powered Autocomplete
- Intelligent code completions powered by local AI models
- Context-aware suggestions based on your current code
- Configurable delay and token limits
- Works with all programming languages
### 🔍 Code Analysis & Debugging
- Explain Code: Get clear explanations of selected code snippets
- Debug Code: Identify potential bugs, issues, and vulnerabilities
- Improve Code: Receive suggestions for code optimization and best practices
- Generate Tests: Automatically create unit tests for your functions
### 💬 Interactive Chat Interface
- Built-in chat panel for conversational AI assistance
- Persistent conversation history
- Beautiful, VS Code-themed interface
- Real-time typing indicators
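
For a rough idea of how a chat panel like this can be hosted inside VS Code, the sketch below uses the standard webview API. The view type, placeholder HTML, and echo reply are illustrative assumptions, not the extension's actual implementation.

```typescript
import * as vscode from 'vscode';

// Illustrative sketch: open a webview-based chat panel beside the editor.
function openChatPanel(context: vscode.ExtensionContext) {
  const panel = vscode.window.createWebviewPanel(
    'binnyChat',             // view type (placeholder name)
    'Binny.AI Chat',         // tab title
    vscode.ViewColumn.Beside,
    { enableScripts: true }  // a real chat UI runs scripts in the webview
  );

  // Placeholder HTML; the real panel ships a full chat UI.
  panel.webview.html = '<html><body><h3>Binny.AI Chat</h3></body></html>';

  // Messages posted from the webview (e.g. a user prompt) arrive here; a real
  // handler would forward them to the local model and post back the reply.
  panel.webview.onDidReceiveMessage(
    message => panel.webview.postMessage({ role: 'assistant', text: `Echo: ${message.text}` }),
    undefined,
    context.subscriptions
  );
}
```
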
### ⚙️ Local AI Integration
- Connects to your local Ollama installation
- No data sent to external servers - complete privacy
- Support for multiple AI models (CodeLlama, Llama2, etc.)
- Configurable model selection for different tasks
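
The snippet below is a minimal sketch of what a request to the local Ollama server looks like, using Ollama's public REST API (`POST /api/generate`). The `generate` function name and error handling are illustrative, and a runtime with a global `fetch` (Node 18+) is assumed.

```typescript
// Minimal, non-streaming request to a local Ollama server.
// The endpoint and field names follow Ollama's public REST API;
// everything else here is illustrative.
export async function generate(prompt: string, model = 'codellama:7b'): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });

  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }

  // With stream: false, Ollama returns one JSON object whose
  // "response" field holds the generated text.
  const data = (await res.json()) as { response: string };
  return data.response;
}
```
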
## Prerequisites
Before using Binny.AI, you need to have Ollama installed and running on your system.
### Installing Ollama

- Download Ollama from https://ollama.ai
- Install recommended models:

```bash
# For code assistance
ollama pull codellama:7b

# For chat conversations
ollama pull llama2:7b

# Alternative models you can try
ollama pull codellama:13b
ollama pull llama2:13b
ollama pull mistral:7b
```

- Start Ollama (it usually runs automatically after installation):

```bash
ollama serve
```

## Installation

- Clone or download this extension
- Open the extension folder in VS Code
- Install dependencies: `npm install`
- Compile the extension: `npm run compile`
- Press `F5` to run the extension in a new Extension Development Host window
## Configuration

Configure Binny.AI through VS Code settings:

```json
{
  "binny-ai.ollamaUrl": "http://localhost:11434",
  "binny-ai.model": "codellama:7b",
  "binny-ai.chatModel": "llama2:7b",
  "binny-ai.enableAutoComplete": true,
  "binny-ai.autoCompleteDelay": 500,
  "binny-ai.maxTokens": 2048
}
```

### Settings Description

- `ollamaUrl`: URL where your Ollama server is running (default: `http://localhost:11434`)
- `model`: AI model to use for code assistance (default: `codellama:7b`)
- `chatModel`: AI model to use for chat conversations (default: `llama2:7b`)
- `enableAutoComplete`: Enable/disable AI-powered autocomplete (default: `true`)
- `autoCompleteDelay`: Delay in milliseconds before triggering autocomplete (default: `500`)
- `maxTokens`: Maximum number of tokens for AI responses (default: `2048`)
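
Inside the extension, settings like these are typically read through the VS Code configuration API. The sketch below shows one way to do it, with keys and defaults taken from the list above; the `getBinnySettings` helper name is illustrative.

```typescript
import * as vscode from 'vscode';

// Illustrative helper that reads the settings listed above.
// Keys and defaults mirror the Configuration section.
function getBinnySettings() {
  const cfg = vscode.workspace.getConfiguration('binny-ai');
  return {
    ollamaUrl: cfg.get<string>('ollamaUrl', 'http://localhost:11434'),
    model: cfg.get<string>('model', 'codellama:7b'),
    chatModel: cfg.get<string>('chatModel', 'llama2:7b'),
    enableAutoComplete: cfg.get<boolean>('enableAutoComplete', true),
    autoCompleteDelay: cfg.get<number>('autoCompleteDelay', 500),
    maxTokens: cfg.get<number>('maxTokens', 2048),
  };
}
```
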
## Usage
### Commands

Access Binny.AI features through the Command Palette (`Ctrl+Shift+P` / `Cmd+Shift+P`):

- **Binny.AI: Open Chat** - Open the interactive chat panel
- **Binny.AI: Explain Selected Code** - Get an explanation of selected code
- **Binny.AI: Debug Code Issues** - Analyze selected code for bugs and issues
- **Binny.AI: Suggest Code Improvements** - Get suggestions for code optimization
- **Binny.AI: Generate Unit Tests** - Create unit tests for selected code

Right-click on selected code to access Binny.AI features directly from the context menu.
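
As a rough illustration, a command such as "Binny.AI: Explain Selected Code" could be wired up as shown below. The command ID `binny-ai.explainCode` and the `askOllama` helper are assumed names for this sketch, not the extension's exact identifiers.

```typescript
import * as vscode from 'vscode';

// Sketch only: register an "explain code" command that reads the current
// selection, asks the local model, and shows the answer beside the editor.
export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.commands.registerCommand('binny-ai.explainCode', async () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor || editor.selection.isEmpty) {
        vscode.window.showWarningMessage('Select some code first.');
        return;
      }
      const code = editor.document.getText(editor.selection);
      const explanation = await askOllama(`Explain this ${editor.document.languageId} code:\n\n${code}`);
      const doc = await vscode.workspace.openTextDocument({ content: explanation, language: 'markdown' });
      await vscode.window.showTextDocument(doc, vscode.ViewColumn.Beside);
    })
  );
}

// Stand-in for the Ollama request shown earlier in this README.
declare function askOllama(prompt: string): Promise<string>;
```
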
### Autocomplete
Simply start typing, and Binny.AI will provide intelligent code completions after the configured delay.
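
Under the hood, a provider along the following lines can implement this behaviour: it waits for the configured delay, bails out if typing continued, and otherwise asks the model to complete the text before the cursor. The prompt format and the `askOllama` helper are assumptions for this sketch, not the extension's exact code.

```typescript
import * as vscode from 'vscode';

// Rough sketch of a delay-gated completion provider. A real provider would
// also honour enableAutoComplete and maxTokens.
const provider: vscode.CompletionItemProvider = {
  async provideCompletionItems(document, position, token) {
    const delay = vscode.workspace.getConfiguration('binny-ai').get<number>('autoCompleteDelay', 500);

    // Wait for the configured delay; give up if the user kept typing.
    await new Promise(resolve => setTimeout(resolve, delay));
    if (token.isCancellationRequested) {
      return [];
    }

    // Use the text before the cursor as context for the model.
    const prefix = document.getText(new vscode.Range(new vscode.Position(0, 0), position));
    const suggestion = await askOllama(`Complete the following code:\n\n${prefix}`);

    return [new vscode.CompletionItem(suggestion, vscode.CompletionItemKind.Snippet)];
  },
};

// Registered for any file-based document, matching the "all languages" claim above.
vscode.languages.registerCompletionItemProvider({ scheme: 'file' }, provider);

// Stand-in for the Ollama request shown earlier in this README.
declare function askOllama(prompt: string): Promise<string>;
```
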
## Supported Languages
Binny.AI works with all programming languages supported by VS Code, including:
- JavaScript/TypeScript
- Python
- Java
- C/C++
- C#
- Go
- Rust
- PHP
- Ruby
- And many more!
## Troubleshooting

### Common Issues

1. "Cannot connect to Ollama" error
   - Ensure Ollama is installed and running
   - Check that Ollama is accessible at the configured URL
   - Verify the Ollama service is not blocked by a firewall

2. "Model not found" error
   - Make sure you have pulled the required models using `ollama pull <model-name>`
   - Check that the model names in settings match the installed models exactly

3. Slow autocomplete responses
   - Try using smaller models (7B instead of 13B)
   - Increase the `autoCompleteDelay` setting
   - Reduce the `maxTokens` setting

4. Extension not activating
   - Check the VS Code Developer Console for error messages
   - Ensure all dependencies are installed (`npm install`)
   - Try recompiling the extension (`npm run compile`)
### Checking Ollama Status

You can verify Ollama is working by running:

```bash
curl http://localhost:11434/api/tags
```

This should return a list of installed models.
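
The same check can be done programmatically. The sketch below queries the same `/api/tags` endpoint from TypeScript and prints the installed model names; a runtime with global `fetch` (Node 18+) is assumed.

```typescript
// Programmatic equivalent of the curl command above: /api/tags returns a
// JSON object whose "models" array lists the installed models.
async function checkOllama(url = 'http://localhost:11434'): Promise<void> {
  try {
    const res = await fetch(`${url}/api/tags`);
    const data = (await res.json()) as { models: { name: string }[] };
    console.log('Installed models:', data.models.map(m => m.name).join(', '));
  } catch (err) {
    console.error('Cannot reach Ollama at', url, err);
  }
}

checkOllama();
```
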
## Development
### Building from Source

- Clone the repository
- Install dependencies: `npm install`
- Compile TypeScript: `npm run compile`
- Run in development: press `F5` in VS Code
### Project Structure

```
binny-ai-extension/
├── src/
│   ├── extension.ts          # Main extension entry point
│   ├── ollamaClient.ts       # Ollama API client
│   ├── completionProvider.ts # Autocomplete provider
│   ├── chatProvider.ts       # Chat interface
│   └── codeAnalyzer.ts       # Code analysis features
├── package.json              # Extension manifest
├── tsconfig.json             # TypeScript configuration
└── README.md                 # This file
```

## Privacy & Security
- 100% Local: All AI processing happens on your machine
- No Data Collection: No code or personal data is sent to external servers
- Open Source: Full source code available for inspection
- Secure: Uses local Ollama installation with no external dependencies
## Contributing
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
## License
This project is licensed under the MIT License.
## Acknowledgments
- Built with Ollama for local AI model hosting
- Powered by CodeLlama and other open-source language models
- Inspired by the need for privacy-focused AI coding assistants
Happy Coding with Binny.AI! 🚀