Cerebras Inference

Preview

Cerebras | 866 installs | Free

The world's fastest AI inference
Installation
Launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.
Cerebras VS Code Extension

Build with the world's fastest AI inference—directly in VS Code, powered by Cerebras.

Make GitHub Copilot run 10× faster with the world’s fastest inference API. Cerebras Inference powers the world’s top coding models at 2,000 tokens/sec, making code generation instant and enabling super-fast agentic flows. Get your free API key to get started today.

Get Started

API Key Setup

Here's how you can use Cerebras models in VS Code:

  1. Get your free API key from Cerebras Cloud.
  2. Install the Cerebras VS Code extension.
  3. Set up GitHub Copilot if you haven't already done so.
  4. In the GitHub Copilot chat interface, select Manage Models and choose Cerebras.
  5. Paste in your API key when prompted.
  6. Choose which models to enable.
  7. You're all set! Happy coding 🎉

Note: Bring-your-own-key is not supported for GitHub Copilot Enterprise subscriptions at this time.
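Once your key is set up, you can sanity-check it outside VS Code as well. The sketch below builds a request for an OpenAI-compatible chat-completions endpoint using only the Python standard library. The endpoint URL and the model identifier are assumptions for illustration — confirm both against the Cerebras Inference documentation before sending anything.

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against the Cerebras docs.
CEREBRAS_URL = "https://api.cerebras.ai/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) a chat-completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        CEREBRAS_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "qwen-3-32b" is a hypothetical model ID — check the docs for real ones.
req = build_chat_request("YOUR_API_KEY", "qwen-3-32b", "Say hello.")
# To actually send it: urllib.request.urlopen(req)  (requires a valid key)
```

Keeping request construction separate from the network call makes it easy to inspect exactly what will be sent before spending any quota.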

Supported Models

This extension provides support for Qwen 3 Coder in agent mode, as well as the following models in chat mode:

Model                            Token speed
-------------------------------  -----------------
OpenAI GPT OSS                   ~3,000 tokens/sec
Qwen 3 32B                       ~2,600 tokens/sec
Qwen 3 480B Coder (Preview)      ~2,000 tokens/sec
Qwen 3 235B Instruct (Preview)   ~1,400 tokens/sec
Qwen 3 235B Thinking (Preview)   ~1,700 tokens/sec
Llama 4 Scout                    ~2,600 tokens/sec
Llama 3.1 8B                     ~2,200 tokens/sec
Llama 3.3 70B                    ~2,100 tokens/sec
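If you want to choose among these models programmatically — say, always picking the fastest chat-mode option — the table translates directly into code. A minimal sketch (the speeds are the approximate figures from the table above; the keys are display names, not necessarily the API model identifiers):

```python
# Approximate advertised speeds (tokens/sec) from the table above.
CHAT_MODEL_SPEEDS = {
    "OpenAI GPT OSS": 3000,
    "Qwen 3 32B": 2600,
    "Qwen 3 480B Coder (Preview)": 2000,
    "Qwen 3 235B Instruct (Preview)": 1400,
    "Qwen 3 235B Thinking (Preview)": 1700,
    "Llama 4 Scout": 2600,
    "Llama 3.1 8B": 2200,
    "Llama 3.3 70B": 2100,
}

def fastest_model(speeds: dict[str, int]) -> str:
    """Return the model name with the highest advertised token speed."""
    return max(speeds, key=speeds.get)

print(fastest_model(CHAT_MODEL_SPEEDS))  # OpenAI GPT OSS
```

Advertised throughput is an upper bound; observed speed will vary with prompt length and load.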

Advanced Tips

Here's how you can accomplish more with Cerebras:

  • Get higher rate limits on Qwen 3 Coder with our Cerebras Code plans, starting at $50/month.
  • Generate code at top speed with Cerebras by installing the Cerebras Code MCP server.
  • Read our developer documentation to turbocharge your own AI products using Cerebras' Inference API.

What is Cerebras?

Cerebras Systems delivers the world's fastest AI inference for leading open models on top of its revolutionary AI hardware and software.

Cerebras consistently delivers chart-topping speeds for leading open models like Qwen 3 480B Coder and OpenAI's GPT OSS 120B, according to independent measurements by Artificial Analysis and OpenRouter.

At the heart of Cerebras' technology is the Wafer-Scale Engine (WSE), purpose-built for ultra-fast AI training and inference. The Cerebras WSE is the world's fastest processor for AI, delivering speeds that no cluster of GPUs can match. Learn more about our hardware architecture in the Cerebras documentation.

Related

  • Cerebras Inference Documentation
  • VS Code Extension API
  • Language Model API Documentation
  • VS Code Extension Samples
  • Contact us