# Vishwa Autocomplete

AI-powered code completions that learn your style — locally or in the cloud.

## Features

- **Works with any language** — Python, TypeScript, Go, Rust, Java, and more
- **Local or cloud models** — run Ollama locally (free, private) or use Anthropic, OpenAI, Novita, or OpenRouter
- **Learns as you code** — reinforcement learning adapts suggestions to your accept/reject patterns
- **Zero-config start** — install, run the setup wizard, and start coding
- **Privacy first** — no telemetry, API keys encrypted in your OS keychain, and local models keep all data on your machine
## Quick Start

1. Install the extension from the VS Code Marketplace
2. `Ctrl+Shift+P` > **Vishwa: Setup** — pick a model and enter your API key (if using a cloud provider)
3. Start typing — suggestions appear inline

Python 3.10+ is required. The backend installs automatically on first launch into `~/.vishwa-autocomplete/venv/` (it never touches your project).

For local models, install Ollama and pull a model:

```shell
ollama pull gemma3:4b
```
## Supported Models

**Local (free, private):**

| Model | Description |
|---|---|
| `gemma3:4b` | Google Gemma 3 4B — lightweight, fast (default) |
| `qwen2.5-coder:7b` | Qwen 2.5 Coder 7B — code-specialized |
| `deepseek-coder` | DeepSeek Coder — code-specialized |
**Cloud (requires API key):**

| Model | Provider |
|---|---|
| `claude-haiku-4-5` | Anthropic |
| `claude-sonnet-4-6` | Anthropic |
| `gpt-5.2-2025-12-11` | OpenAI |
| `moonshotai/kimi-k2.5` | OpenRouter |

Add your own models by editing `models.json` — they show up in the setup wizard automatically.
## Commands

All commands are available via `Ctrl+Shift+P`:

| Command | Description |
|---|---|
| **Vishwa: Setup** | Model, API key, and license wizard |
| **Vishwa: Toggle Autocomplete** | Enable/disable suggestions |
| **Vishwa: Enter License Key** | Activate a purchased license |
| **Vishwa: Purchase License** | Open the checkout page |
| **Vishwa: Show RL Stats** | View reinforcement learning policy stats |
## Configuration

Search for "vishwa" in VS Code settings (`Ctrl+,`):

```jsonc
{
  // Model to use (any model from models.json)
  "vishwa.autocomplete.model": "gemma3:4b",

  // Delay before fetching a suggestion (ms)
  "vishwa.autocomplete.debounceDelay": 500,

  // Lines of code context sent to the model
  "vishwa.autocomplete.contextLines": 20,

  // Python executable path ("auto" = auto-detect)
  "vishwa.autocomplete.pythonPath": "auto"
}
```
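To illustrate what `debounceDelay` controls, here is a minimal Python sketch of debouncing (the extension implements this in TypeScript; the `Debouncer` class below is purely illustrative, not Vishwa's actual code): every keystroke resets a timer, and a completion request fires only after the configured quiet period.

```python
import threading

class Debouncer:
    """Run `callback` only after `delay_ms` of inactivity (illustrative sketch)."""

    def __init__(self, delay_ms, callback):
        self.delay = delay_ms / 1000.0
        self.callback = callback
        self._timer = None

    def trigger(self, *args):
        # Each new keystroke cancels the pending request and restarts the
        # clock, so only the final state after a pause reaches the model.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.callback, args)
        self._timer.start()
```

With the default 500 ms delay, rapid typing produces no requests at all; a single request fires half a second after the last keystroke.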
## How It Works

The extension has two parts:

- **VS Code extension (TypeScript)** — inline completions, license management, UI
- **Python backend** — LLM calls, context building, caching, reinforcement learning

The backend runs as an isolated child process that communicates via JSON-RPC over stdio. No network ports are opened.
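A minimal sketch of that request/response shape in Python (the actual method names and message framing in Vishwa's backend are not documented here; `echo` and the one-JSON-object-per-line framing below are assumptions for illustration): each request arrives as a JSON line on stdin, and the matching response is written to stdout.

```python
import json
import sys

def handle_request(handlers, line):
    """Dispatch one JSON-RPC-style request line to a handler and
    return the response as a JSON string (illustrative sketch)."""
    req = json.loads(line)
    handler = handlers.get(req["method"])
    if handler is None:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": handler(req.get("params", {}))})

def serve(handlers):
    # Read requests line by line from stdin and answer on stdout;
    # no sockets are ever opened.
    for line in sys.stdin:
        sys.stdout.write(handle_request(handlers, line) + "\n")
        sys.stdout.flush()
```

Because the transport is the child process's own stdio pipes, nothing on the machine can connect to the backend from outside.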
**Reinforcement learning:** Vishwa uses Thompson Sampling to learn which context strategy works best in different code situations. It tracks whether you accept or reject each suggestion and converges on the best-performing strategy automatically.
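The core of Thompson Sampling over accept/reject feedback fits in a few lines. A Python sketch (the strategy names and the `ThompsonBandit` class are hypothetical, not Vishwa's actual internals): each context strategy gets a Beta posterior over its acceptance rate; to choose, sample once from each posterior and take the strategy with the highest draw.

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson Sampling over context strategies (sketch)."""

    def __init__(self, strategies):
        # Beta(1, 1) prior: one [accepts + 1, rejects + 1] pair per strategy
        self.stats = {s: [1, 1] for s in strategies}

    def choose(self):
        # Sample an acceptance rate from each strategy's posterior
        # and pick the strategy with the highest draw.
        draws = {s: random.betavariate(a, b) for s, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, strategy, accepted):
        # An accepted suggestion is a reward of 1, a rejected one is 0.
        if accepted:
            self.stats[strategy][0] += 1
        else:
            self.stats[strategy][1] += 1
```

Because strategies with uncertain posteriors still occasionally win a draw, the policy keeps exploring while steadily favoring whatever you accept most often.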
## Licensing

25 free completions are included on install — no sign-up required. Once they're used, you can:

- **Start a 3-day free trial** — unlimited completions, no credit card required
- **Purchase a license** — unlimited completions, permanent

To activate:

1. `Ctrl+Shift+P` > **Vishwa: Purchase License** (or click the status bar item)
2. Copy your key from the confirmation email
3. `Ctrl+Shift+P` > **Vishwa: Enter License Key**

License keys are encrypted in your OS keychain. The extension works offline for up to 7 days after a successful license validation.
## Security

- **No telemetry** — zero data collection
- **API keys** — encrypted in the OS keychain, never logged or written to disk
- **Local models** — all inference happens on your machine; nothing leaves your network
- **Process isolation** — the backend communicates via stdio, not network sockets
## Troubleshooting

**Suggestions not appearing?**

- Check the status bar — do you have completions remaining or an active license?
- For local models, is Ollama running? (`ollama serve`)
- For cloud models, is your API key set? Run **Vishwa: Setup**
- Check View > Output > "Vishwa Autocomplete" for errors

**Python backend not starting?**

- Install Python 3.10+ from python.org
- Or set `vishwa.autocomplete.pythonPath` to your Python executable
Report Issues · Source