Visual Studio Marketplace · Publisher: RemanKhanal · Free
Memosk - AI Coding Assistant for VS Code

Memosk is your AI-powered coding sidekick that lives in the VS Code Activity Bar. Chat, explain code, fix bugs, and run tests, all with a privacy-first local Ollama fallback.

🚀 Quick Start

  1. Install dependencies:
npm install
  2. Build:
npm run build
  3. Run in dev mode (F5)
  4. Load in VS Code: Extensions > ... > Install from VSIX (package after build)

🧠 Recommended Ollama Setup (Free/Local)

Primary: cline + qwen2.5-coder:7b (best agent+model combo)

ollama pull qwen2.5-coder:7b
ollama pull cline

Alternatives:

  • opencode + qwen2.5-coder:7b
  • deepseek-coder-v2
  • codestral
  • qwen2.5-coder:14b (heavier/better)

Set memosk.defaultOllamaModel in settings.
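
A minimal settings.json sketch, using the model recommended above (the setting name comes from this README; the value is whichever model you pulled):

```json
{
  "memosk.defaultOllamaModel": "qwen2.5-coder:7b"
}
```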

🔌 Providers

| Provider          | Setup          | Settings              |
| ----------------- | -------------- | --------------------- |
| Ollama (default)  | `ollama serve` | `memosk.ollamaHost`   |
| OpenAI            | API key        | `memosk.openaiApiKey` |
| Gemini            | API key        | `memosk.googleApiKey` |
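
As a sketch, the corresponding settings.json entries might look like this (the host value is Ollama's default; the API keys are placeholders, not real credentials):

```json
{
  "memosk.ollamaHost": "http://localhost:11434",
  "memosk.openaiApiKey": "<your-openai-key>",
  "memosk.googleApiKey": "<your-gemini-key>"
}
```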

📱 Features

  • Activity Bar Chat (Memosk: Open Chat)
  • Explain Selection/File (Ctrl+Shift+P)
  • Improve Code (select → command)
  • Terminal/Problems Inspection (auto-capture errors)
  • Tagging (label files/terminals for context)
  • Run Tests (memosk.testCommand)
  • Streaming responses
  • Privacy: No uploads unless explicitly enabled+confirmed
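
For example, the test runner can be pointed at your project's own script via the setting named above (the command value here is just an assumption, use whatever your project runs):

```json
{
  "memosk.testCommand": "npm test"
}
```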

🛡️ Privacy

  • Local first (Ollama)
  • Workspace files never sent without:
    1. memosk.privacy.uploadWorkspaceFiles: true
    2. Per-session confirmation dialog
  • Terminal outputs sanitized (paths/tokens stripped)
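
Memosk's actual sanitizer is not part of this README, but the idea of stripping paths and tokens from captured terminal output can be sketched like this (the function name and regex patterns are illustrative, not Memosk's API):

```typescript
// Illustrative sketch only: not Memosk's real implementation.
// Strips common secrets and absolute paths from captured terminal
// output before it is ever considered for a prompt.
function sanitizeTerminalOutput(text: string): string {
  return text
    // Bearer/Authorization tokens
    .replace(/Bearer\s+[A-Za-z0-9._-]+/g, "Bearer [REDACTED]")
    // key=value style secrets (token, key, secret, password)
    .replace(/\b(token|api[_-]?key|secret|password)\s*[=:]\s*\S+/gi, "$1=[REDACTED]")
    // absolute Unix paths (two or more segments)
    .replace(/(?:\/[\w.-]+){2,}/g, "[PATH]")
    // Windows drive paths
    .replace(/[A-Za-z]:\\(?:[\w.-]+\\?)+/g, "[PATH]");
}

// Example: an error line containing a home path and a secret
console.log(sanitizeTerminalOutput(
  "Error in /home/alice/project/src/index.ts: token=abc123"
));
// → "Error in [PATH]: token=[REDACTED]"
```

Redacting before the text leaves the machine is the key design point: the replacement runs on everything captured from the terminal, regardless of which provider is later selected.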

🧪 Test Checklist

  • [ ] Activity Bar shows Memosk icon
  • [ ] Chat view opens
  • [ ] memosk.ask quick input works
  • [ ] Commands: explain/improve work with selection
  • [ ] npm run build succeeds
  • [ ] F5 loads extension without errors
  • [ ] Ollama responds (with model pulled)

Troubleshooting

  • Ollama not responding: run ollama serve and check that http://localhost:11434 is reachable
  • No model: ollama pull qwen2.5-coder:7b
  • CORS errors: ensure Ollama allows browser requests (its OLLAMA_ORIGINS environment variable controls allowed origins)

Recommended Combos (Settings → Model Routing)

  1. Daily: cline + qwen2.5-coder:7b
  2. Heavy: gpt-4o-mini (OpenAI)
  3. Code-only: deepseek-coder-v2

Happy coding! 🚀
