# 🎙️ Conic AI - Voice-Powered Coding Assistant

A VS Code extension that brings hands-free AI coding assistance to your editor. Just say "Hey Conic" and start coding with your voice!

✨ Features

  • 🎯 Copilot-like Sidebar - Beautiful sidebar interface similar to GitHub Copilot
  • 🎙️ Always Listening - Wake word detection ("Hey Conic") for hands-free operation
  • 🤖 Gemini Integration - Powered by Google's Gemini AI for code generation, refactoring, and problem-solving
  • 🔊 ElevenLabs Voice Replies - Natural AI voice responses using ElevenLabs TTS
  • 🧠 Smart Context Awareness - Understands your current file, selection, and codebase (see the sketch after this list)
  • ⚡ Real-time Processing - Instant voice command processing and code changes
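
To give a flavour of what "Smart Context Awareness" means in practice, here is a minimal sketch of how the current file and selection could be gathered before a request is sent to Gemini. The helper name is hypothetical; the real logic lives in the extension source (src/extension.ts / src/geminiService.ts) and may differ.

    import * as vscode from 'vscode';

    // Hypothetical helper: collect the editor context sent along with a voice command.
    function getEditorContext(): { language: string; fileText: string; selection: string } | undefined {
      const editor = vscode.window.activeTextEditor;
      if (!editor) { return undefined; }

      return {
        language: editor.document.languageId,                  // e.g. "typescript"
        fileText: editor.document.getText(),                   // the whole current file
        selection: editor.document.getText(editor.selection),  // highlighted code, if any
      };
    }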

🚀 Quick Start

Prerequisites

  • VS Code 1.74.0 or higher
  • Node.js 16.x or higher
  • Microphone access

Installation

  1. Clone this repository:

     git clone <your-repo-url>
     cd ConicAi

  2. Install dependencies:

     npm install

  3. Compile the extension:

     npm run compile

  4. Press F5 in VS Code to open a new Extension Development Host window

Configuration

All API keys are stored in a .env file for security.

  1. Create .env file:

    # Copy the example file
    copy .env.example .env
    

    (On Mac/Linux: cp .env.example .env)

  2. Open .env and add your API keys:

    • GEMINI_API_KEY - Your Google Gemini API key (Required)

      • Get it from: https://makersuite.google.com/app/apikey
      • Example: GEMINI_API_KEY=your_actual_api_key_here
    • ELEVENLABS_API_KEY - Your ElevenLabs API key (Optional)

      • Get it from: https://elevenlabs.io/
      • If not provided, the extension will use system TTS as fallback
      • Example: ELEVENLABS_API_KEY=your_actual_api_key_here
    • ELEVENLABS_VOICE_ID - ElevenLabs voice ID (Optional)

      • Default: 21m00Tcm4TlvDq8ikWAM
      • Example: ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM
    • CONIC_WAKE_WORD - Customize wake word (Optional)

      • Default: hey conic
      • Example: CONIC_WAKE_WORD=hey conic

Note: The .env file is automatically ignored by git (it's already listed in .gitignore), so your keys won't be committed.

Alternative: You can still use VS Code settings as a fallback, but the .env file is recommended for security.
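
For reference, here is a minimal sketch of how that lookup could be implemented, matching the order described under "API Keys Setup" below (workspace folder first, then the extension directory). It assumes the dotenv package and a hypothetical conicAi.geminiApiKey setting; the actual implementation may differ.

    import * as fs from 'fs';
    import * as path from 'path';
    import * as vscode from 'vscode';
    import * as dotenv from 'dotenv';

    // Load .env from the workspace folder first, then from the extension directory.
    function loadEnv(context: vscode.ExtensionContext): void {
      const candidates = [
        vscode.workspace.workspaceFolders?.[0]?.uri.fsPath, // workspace root
        context.extensionPath,                              // extension install dir
      ].filter((dir): dir is string => !!dir);

      for (const dir of candidates) {
        const envPath = path.join(dir, '.env');
        if (fs.existsSync(envPath)) {
          dotenv.config({ path: envPath }); // populates process.env
          break;
        }
      }
    }

    // Prefer the .env value, then fall back to a (hypothetical) VS Code setting.
    function getGeminiKey(): string | undefined {
      return process.env.GEMINI_API_KEY
        ?? vscode.workspace.getConfiguration('conicAi').get<string>('geminiApiKey');
    }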

📖 Usage

Basic Usage

  1. Start the extension - The extension auto-starts listening when activated

  2. Say the wake word - Say "Hey Conic" to activate

  3. Give your command - Speak your coding request, for example:

    • "Create a function to calculate factorial"
    • "Refactor this code to use async/await"
    • "Add error handling to this function"
    • "Explain what this code does"
    • "Generate a React component for a login form"
  4. Review and apply - The AI shows its response in the sidebar and asks whether you want to apply the code changes (see the sketch below)
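
For the curious, here is a minimal sketch of what that "apply" step could look like on the extension side: confirm with the user, then insert the generated code at the current selection. The helper is hypothetical and simplified; the extension's real apply logic may differ.

    import * as vscode from 'vscode';

    // Ask before touching the document, then insert the generated code at the selection.
    async function applyGeneratedCode(code: string): Promise<void> {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        vscode.window.showWarningMessage('Open a file before applying code.');
        return;
      }

      const choice = await vscode.window.showInformationMessage(
        'Apply the suggested code changes?', 'Yes', 'No'
      );
      if (choice !== 'Yes') { return; }

      await editor.edit(editBuilder => {
        editBuilder.replace(editor.selection, code); // replaces the selection (or inserts at the cursor)
      });
    }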

Voice Command Examples

  • Code Generation: "Create a Python function to sort a list"
  • Refactoring: "Refactor this function to be more efficient"
  • Debugging: "Why is this code throwing an error?"
  • Documentation: "Add comments to explain this code"
  • Testing: "Generate unit tests for this function"
  • Code Review: "Review this code and suggest improvements"

Controls

  • Start/Stop Listening - Use the button in the sidebar or command palette
  • Toggle Listening - Press Ctrl+Shift+P and run "Conic AI: Toggle Listening" (see the sketch below)
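
Under the hood, the toggle is an ordinary VS Code command. A minimal sketch of how it could be registered is below; the command id conic-ai.toggleListening is an assumption, so check the extension's package.json for the real one.

    import * as vscode from 'vscode';

    export function activate(context: vscode.ExtensionContext) {
      let listening = false;

      // Hypothetical command id; the real id is declared in package.json.
      const toggle = vscode.commands.registerCommand('conic-ai.toggleListening', () => {
        listening = !listening;
        vscode.window.setStatusBarMessage(
          listening ? 'Conic AI: listening…' : 'Conic AI: stopped', 3000);
        // ...tell the sidebar webview to start or stop speech recognition here...
      });

      context.subscriptions.push(toggle);
    }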

🏗️ Project Structure

ConicAi/
├── src/
│   ├── extension.ts          # Main extension entry point
│   ├── sidebarProvider.ts    # Sidebar webview provider
│   ├── voiceRecognition.ts   # Voice recognition and wake word detection
│   ├── geminiService.ts      # Gemini AI integration
│   └── elevenLabsService.ts  # ElevenLabs TTS integration
├── .env.example              # Example environment variables file
├── .env                      # Your API keys (create from .env.example)
├── package.json              # Extension manifest
├── tsconfig.json             # TypeScript configuration
└── README.md                 # This file
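
As a rough sketch of the sidebar piece, src/sidebarProvider.ts presumably implements a WebviewViewProvider along these lines. The view id and HTML below are placeholders, not the extension's actual values.

    import * as vscode from 'vscode';

    // Minimal webview view provider for the Conic AI sidebar panel.
    export class ConicSidebarProvider implements vscode.WebviewViewProvider {
      // Hypothetical view id; it must match the "views" contribution in package.json.
      public static readonly viewId = 'conicAi.sidebar';

      resolveWebviewView(view: vscode.WebviewView): void {
        view.webview.options = { enableScripts: true }; // scripts are needed for speech + messaging
        view.webview.html = '<html><body><h3>Conic AI</h3><p>Say "Hey Conic"...</p></body></html>';

        // Messages posted from the webview (e.g. recognized voice commands) arrive here.
        view.webview.onDidReceiveMessage(msg => {
          if (msg.type === 'voiceCommand') {
            vscode.window.showInformationMessage(`Heard: ${msg.text}`);
          }
        });
      }
    }

    // In extension.ts the provider would be registered roughly like this:
    //   vscode.window.registerWebviewViewProvider(ConicSidebarProvider.viewId, new ConicSidebarProvider());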

🔧 Development

Building

    npm run compile

Watching for Changes

    npm run watch

Testing

  1. Press F5 to open Extension Development Host
  2. The extension will be loaded in the new window
  3. Open the sidebar to see the Conic AI panel
  4. Test voice commands with your microphone

🎯 How It Works

  1. Voice Recognition: The webview uses the browser SpeechRecognition API to listen continuously (see the sketch after this list)
  2. Wake Word Detection: Detects "Hey Conic" to activate command mode
  3. Command Processing: Sends voice command to Gemini AI with current editor context
  4. Code Generation: Gemini analyzes the request and generates/modifies code
  5. Voice Response: ElevenLabs converts the response to natural speech
  6. Code Application: User can review and apply suggested code changes
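
A rough sketch of the webview side of steps 1-3 is shown below, assuming the SpeechRecognition API is available in the webview as described above. The message shape and wake-word handling are illustrative, not the extension's exact code.

    // Runs inside the sidebar webview (a browser context), not in the extension host.
    declare function acquireVsCodeApi(): { postMessage(message: unknown): void };

    const vscodeApi = acquireVsCodeApi();
    const WAKE_WORD = 'hey conic';

    // SpeechRecognition is vendor-prefixed in Chromium-based webviews.
    const SpeechRecognitionCtor =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

    const recognizer = new SpeechRecognitionCtor();
    recognizer.continuous = true;       // keep listening across phrases
    recognizer.interimResults = false;  // only react to final transcripts

    recognizer.onresult = (event: any) => {
      const transcript: string =
        event.results[event.results.length - 1][0].transcript.trim().toLowerCase();

      if (transcript.includes(WAKE_WORD)) {
        // Everything after the wake word is treated as the command for Gemini.
        const command = transcript.split(WAKE_WORD).pop()?.trim() ?? '';
        vscodeApi.postMessage({ type: 'voiceCommand', text: command });
      }
    };

    recognizer.start();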

🔐 API Keys Setup

Using a .env File (Recommended)

  1. Copy the example file:

    copy .env.example .env
    

    (Mac/Linux: cp .env.example .env)

  2. Open the .env file and add your keys:

Gemini API Key (Required)

  1. Visit https://makersuite.google.com/app/apikey
  2. Sign in with your Google account
  3. Create a new API key
  4. Add to .env: GEMINI_API_KEY=your_api_key_here
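
Once the key is set, the calls made from src/geminiService.ts presumably look something like this sketch, which assumes the @google/generative-ai SDK; the model name and prompt wrapping are assumptions.

    import { GoogleGenerativeAI } from '@google/generative-ai';

    // Minimal sketch: send a voice command plus editor context to Gemini.
    async function askGemini(command: string, fileContext: string): Promise<string> {
      const apiKey = process.env.GEMINI_API_KEY;
      if (!apiKey) {
        throw new Error('GEMINI_API_KEY is not set (see the .env instructions above).');
      }

      const genAI = new GoogleGenerativeAI(apiKey);
      // Model name is an assumption; use whichever Gemini model your key has access to.
      const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

      const prompt = `You are a coding assistant.\n\nCurrent file:\n${fileContext}\n\nRequest: ${command}`;
      const result = await model.generateContent(prompt);
      return result.response.text();
    }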

ElevenLabs API Key (Optional)

  1. Visit https://elevenlabs.io/
  2. Sign up for an account
  3. Navigate to your profile settings
  4. Copy your API key
  5. Add to .env: ELEVENLABS_API_KEY=your_api_key_here

Note:

  • If you don't provide an ElevenLabs API key, the extension will use your system's built-in text-to-speech as a fallback.
  • The .env file is automatically ignored by git, so your keys are safe.
  • The extension looks for .env in your workspace folder first, then in the extension directory.

🐛 Troubleshooting

Microphone Not Working

  • Ensure microphone permissions are granted in your browser/system
  • Check that your microphone is not being used by another application
  • Try clicking the "Start Listening" button in the sidebar

Wake Word Not Detecting

  • Speak clearly: "Hey Conic"
  • Ensure microphone is working and not muted
  • Check the status indicator in the sidebar (should be green when listening)

API Errors

  • Verify your API keys are correct in your .env file (or VS Code settings)
  • Check your internet connection
  • Ensure you have API credits/quota available

Code Not Applying

  • Review the generated code in the sidebar
  • Click "Yes" when prompted to apply changes
  • Ensure you have write permissions to the file

📝 License

MIT License - feel free to use and modify as needed!

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

🙏 Acknowledgments

  • Google Gemini for AI code generation
  • ElevenLabs for natural voice synthesis
  • VS Code team for the amazing extension API

Made with ❤️ for developers who want to code hands-free!
