OpsPilot AI — DevOps Copilot (Beta)

Enterprise DevOps Copilot — policy-driven, autonomous infrastructure automation with BYOK support

Beta — actively developed. Feedback welcome via DM or Docker Hub.

Submit infrastructure tasks in plain English. The agent plans, validates, simulates, and either executes or routes through an approval workflow — all from inside VS Code.


Requirements

  • VS Code 1.90 or later
  • Docker Desktop installed and running
  • One LLM provider: Anthropic Claude, Azure OpenAI, or Ollama (free, local)
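
Before installing anything, you can sanity-check the prerequisites from a terminal. The sketch below is illustrative and not part of OpsPilot: `version_ge` is a hypothetical helper that compares dotted version strings, and the usage lines assume `docker` and `code` are on your PATH.

```shell
# Hypothetical helper (not part of OpsPilot): returns success when the
# first dotted version string is >= the second, compared field by field.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Usage against the live tools (assumes docker and code are on PATH):
#   version_ge "$(code --version | head -n1)" "1.90" && echo "VS Code OK"
#   docker info >/dev/null 2>&1 && echo "Docker daemon OK"
```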

Step 1 — Install Ollama (free, no API key needed)

Skip if you have an Anthropic or Azure OpenAI key.

macOS

brew install ollama
ollama serve
ollama pull llama3.2

Windows — Download from ollama.com/download, then open PowerShell:

ollama pull llama3.2

Linux

curl -fsSL https://ollama.com/install.sh | sh
ollama serve &
ollama pull llama3.2

Verify:

curl http://localhost:11434/api/tags
# Expected: {"models":[{"name":"llama3.2",...}]}
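
If you script your setup, a small readiness check saves guessing when Ollama is up. This is a sketch of my own, not shipped with OpsPilot: `wait_for_ollama` and `tags_contain_model` are hypothetical helpers, and the JSON matching assumes the compact tag-list shape shown above.

```shell
# Hypothetical helpers, not part of OpsPilot.

# Poll the Ollama API until it answers (up to ~30 s).
wait_for_ollama() {
  url="${1:-http://localhost:11434}"
  for _ in $(seq 1 30); do
    curl -fsS "$url/api/tags" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}

# Pure check: does a /api/tags JSON payload mention the given model?
tags_contain_model() {
  printf '%s' "$1" | grep -q "\"name\":\"$2"
}

# Usage:
#   wait_for_ollama && \
#     tags_contain_model "$(curl -fsS http://localhost:11434/api/tags)" "llama3.2"
```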

Step 2 — Create your .env file

Create a file named .env in any empty folder:

# Required
JWT_SECRET_KEY=testing-secret-key-replace-this-32c
REDIS_PASSWORD=changeme
POSTGRES_PASSWORD=opspilot

# Pick ONE LLM provider:

# Option A — Anthropic Claude (recommended)
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
CLAUDE_MODEL=claude-sonnet-4-6

# Option B — Azure OpenAI / AI Foundry
# LLM_PROVIDER=azure
# AZURE_FOUNDRY_ENDPOINT=https://YOUR_RESOURCE.cognitiveservices.azure.com/
# AZURE_FOUNDRY_API_KEY=your-key
# AZURE_FOUNDRY_DEPLOYMENT=gpt-4o

# Option C — Ollama (free, local, no API key — see Step 1)
# LLM_PROVIDER=ollama
# OLLAMA_BASE_URL=http://host.docker.internal:11434   ← use this, NOT localhost
# OLLAMA_MODEL=llama3.2
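
A quick pre-flight check on the .env file can catch the most common mistakes (a missing key, or two uncommented providers) before the containers start. `check_env` below is a hypothetical helper of my own, assuming only the key names shown in this step:

```shell
# Hypothetical helper (my own, not shipped with OpsPilot): checks that a
# .env file defines the three required keys and exactly one uncommented
# LLM_PROVIDER line.
check_env() {
  f="$1"; bad=0
  for key in JWT_SECRET_KEY REDIS_PASSWORD POSTGRES_PASSWORD; do
    grep -q "^${key}=" "$f" || { echo "missing: $key"; bad=1; }
  done
  [ "$(grep -c '^LLM_PROVIDER=' "$f")" -eq 1 ] ||
    { echo "need exactly one uncommented LLM_PROVIDER"; bad=1; }
  return "$bad"
}

# Usage: check_env .env && echo ".env looks complete"
```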

Step 3 — Start the backend

Run from the same folder as your .env:

docker pull dskystar/opspilot-ai:latest

curl -o docker-compose.hub.yml \
  https://gist.githubusercontent.com/Iam-anirudhdeshmukh/34af785c93aa730866cb16560fd77898/raw/eb1a84e81a1cf9140aa2f6e945f48d4b7927c5fa/docker-compose.hub.yml

docker compose -f docker-compose.hub.yml up -d

Verify:

curl http://localhost:8000/health
# {"status":"ok","version":"1.0.0-beta.1"}
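
The containers can take a few seconds to come up, so a polling loop is handier than a single curl. The sketch below is an assumption on my part, not shipped tooling: `health_status` and `wait_for_api` are hypothetical names, and the parsing assumes the compact reply shape shown above.

```shell
# Hypothetical helpers (assumed reply shape, not shipped tooling).

# Pull the "status" field out of a compact {"status":"..."} JSON reply.
health_status() {
  printf '%s' "$1" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p'
}

# Poll /health until it reports ok (up to ~60 s).
wait_for_api() {
  url="${1:-http://localhost:8000}"
  for _ in $(seq 1 60); do
    body=$(curl -fsS "$url/health" 2>/dev/null) &&
      [ "$(health_status "$body")" = "ok" ] && return 0
    sleep 1
  done
  return 1
}

# Usage: wait_for_api && echo "backend ready"
```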

Step 4 — Install the VS Code Extension

Easiest: Open VS Code → Extensions (Ctrl+Shift+X / Cmd+Shift+X) → search "OpsPilot AI — DevOps Copilot" → Install

Quick Open: Press Ctrl+P (Windows/Linux) or Cmd+P (Mac) → paste:

ext install anirudhdeshmukh.opspilotai

Step 5 — Connect the Extension

  1. Click the OpsPilot AI icon in the VS Code activity bar (left sidebar)
  2. Set Server URL to http://localhost:8000
  3. Run OpsPilot: Set Password (Cmd+Shift+P / Ctrl+Shift+P) → enter admin123
  4. Click Check API Health — status bar turns green when connected

Demo credentials (built-in for local testing):

Username             Password   Role
admin@opspilot.dev   admin123   Admin — run + approve
dev@opspilot.dev     dev123     Developer — run only

Step 6 — Run Test Tasks

Type these tasks in the extension's task input box.

Terraform — Azure VM

Create an Azure VM with a VNet and subnets for production

Expected: status: files_written — the LLM generates a modular Terraform structure.

Next steps in the extension:

  1. Click Validate Plan → runs terraform init + plan
  2. Review plan output inline
  3. Click Apply to execute

Terraform — AWS

Create an S3 bucket with versioning and lifecycle policies on AWS

Terraform — GCP

Set up a GKE cluster with a custom VPC on GCP

Kubernetes — Get pods

Get all pods in the default namespace

You can use any namespace — just mention it in the task. OpsPilot extracts it automatically. No config change needed.

Kubernetes — Scale (dev, immediate)

Scale the nginx deployment to 5 replicas in opspilot-test namespace

Kubernetes — Scale (prod, approval required)

Scale the nginx deployment to 5 replicas in opspilot-test namespace on production

Expected: status: pending_approval — approval panel appears in sidebar.

Helm — Install

Install the redis Helm chart as release opspilot-redis in the opspilot-test namespace

Helm — List releases

List all Helm releases in the opspilot-test namespace

Helm — Rollback (prod, approval required)

Rollback the opspilot-redis release to revision 1 in opspilot-test namespace on production

Dry run (simulate only, no changes)

Select Simulate mode in the extension before clicking Run Task. Expected: status: simulated — nothing executed.


Approval Matrix

Environment   Risk                       Approvals Required
dev           LOW                        0 — executes immediately
dev           MEDIUM+                    1 approval
qa            Any                        1 approval
prod          Any                        1 approval
prod          HIGH (destroy/rollback)    2 approvals
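
For reference, the matrix above can be encoded as a small lookup. This is purely illustrative (my own encoding of the table, not OpsPilot's policy engine); `approvals_required` is a hypothetical function name:

```shell
# Illustrative only: my own encoding of the approval matrix above, not
# OpsPilot's policy engine. Maps environment + risk to approvals required.
approvals_required() {
  case "$1:$2" in
    dev:LOW)   echo 0 ;;
    dev:*)     echo 1 ;;
    qa:*)      echo 1 ;;
    prod:HIGH) echo 2 ;;
    prod:*)    echo 1 ;;
    *)         echo "unknown environment" >&2; return 1 ;;
  esac
}

# Example: approvals_required prod HIGH   # prints 2
```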

Troubleshooting

Problem                              Fix
Extension shows "API unreachable"    Set the Server URL to http://localhost:8000 (no trailing slash), then run curl http://localhost:8000/health to verify
401 Unauthorized                     Re-run OpsPilot: Set Password — the JWT expires after 60 minutes
status: failed with an LLM error     Check the API key in .env, then restart: docker compose -f docker-compose.hub.yml restart api
Ollama not reachable                 Set OLLAMA_BASE_URL to http://host.docker.internal:11434, not localhost
kubectl / helm not found             Install them on your host machine — OpsPilot invokes them via subprocess
Panel blank or stale                 Cmd+Shift+P / Ctrl+Shift+P → Developer: Reload Window
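
When several things fail at once, a one-shot diagnostic is quicker than working the table row by row. The script below is a hypothetical sketch, assuming the ports and endpoints documented above:

```shell
# Hypothetical one-shot diagnostic, assuming the ports and endpoints
# documented above. Prints one OK/FAIL line per dependency.
check() { printf '%-28s' "$1"; shift; "$@" >/dev/null 2>&1 && echo OK || echo FAIL; }

check "backend /health"  curl -fsS http://localhost:8000/health
check "ollama /api/tags" curl -fsS http://localhost:11434/api/tags
check "docker daemon"    docker info
check "kubectl on PATH"  command -v kubectl
check "helm on PATH"     command -v helm
```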

Beta Resources

  • Docker Hub: hub.docker.com/r/dskystar/opspilot-ai
  • Full Setup + Testing Guide: gist.github.com/Iam-anirudhdeshmukh/34af785c93aa730866cb16560fd77898
  • Publisher: marketplace.visualstudio.com/publishers/anirudhdeshmukh