AWS Bedrock Chat Provider for VS Code
Use AWS Bedrock models directly in VS Code chat via Mantle's OpenAI-compatible API.
Features
- 26+ Models: Access to OpenAI, Google, Mistral, Qwen, DeepSeek, Nvidia, and more
- Streaming Responses: Real-time chat with streaming support
- Tool Calling: Function calling support for capable models
- Multi-Region: Support for 12 AWS regions
- OpenAI Compatible: Uses familiar OpenAI SDK patterns via Mantle
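Since Mantle exposes an OpenAI-compatible API, a chat completion call can be sketched with plain `fetch`. The base-URL pattern and model id below come from this README; the request shape follows the standard OpenAI Chat Completions format, and the `Bearer` authorization scheme is an assumption for illustration.

```typescript
// Minimal sketch of an OpenAI-compatible chat completion request against
// the Mantle endpoint. Helpers are pure so the request can be inspected
// before sending; the network call itself requires a valid API key.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Regional Mantle base URL (pattern documented under Architecture below).
function mantleBaseUrl(region: string): string {
  return `https://bedrock-mantle.${region}.api.aws/v1`;
}

// Standard Chat Completions request body.
function buildChatRequest(model: string, messages: ChatMessage[], stream = true) {
  return { model, messages, stream };
}

// Example usage (not run here; assumes Bearer-token auth):
async function chat(apiKey: string): Promise<void> {
  const res = await fetch(`${mantleBaseUrl("us-east-1")}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(
      buildChatRequest("gpt-oss-120b", [{ role: "user", content: "Hello!" }], false)
    ),
  });
  console.log(await res.json());
}
```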
Available Models
OpenAI
- gpt-oss-20b, gpt-oss-120b
- Safeguard variants: gpt-oss-safeguard-20b/120b
Google
- Gemma 3: 4b, 12b, 27b variants
Mistral
- magistral-small-2509
- mistral-large-3-675b-instruct
- Ministral: 3b, 8b, 14b variants
- Voxtral: mini-3b, small-24b variants
Qwen
- General: qwen3-32b, qwen3-235b, qwen3-next-80b
- Vision: qwen3-vl-235b (multimodal)
- Coding: qwen3-coder-30b/480b
DeepSeek
- deepseek.v3.1
Nvidia
- nemotron-nano-9b-v2, nemotron-nano-12b-v2
Others
- MoonshotAI: kimi-k2-thinking
- Minimax: minimax-m2
- ZAI: glm-4.6
Prerequisites
- AWS Bedrock API Key: Obtain from the AWS Bedrock Console
- VS Code: Version 1.104.0 or later
Installation
From Source
1. Clone this repository:

```bash
git clone https://github.com/bedrock/bedrock-vscode-chat.git
cd bedrock-vscode-chat
```

2. Install dependencies:

```bash
npm install
```

3. Compile the extension:

```bash
npm run compile
```

4. Press F5 to open a new VS Code window with the extension loaded.
From VSIX (Coming Soon)
```bash
code --install-extension bedrock-vscode-chat-0.1.0.vsix
```
Setup
1. Configure API Key
Method 1: Via Command Palette
- Open the Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
- Run: Manage AWS Bedrock API Key
- Select "Enter API Key"
- Paste your API key from the AWS Bedrock Console
Method 2: On First Use
- The extension will prompt for your API key when you first try to use a model
- Your key is stored securely in VS Code's SecretStorage
2. Select Region (Optional)
Default region is us-east-1. To change:
- Open the Command Palette
- Run: Manage AWS Bedrock API Key
- Select "Change Region"
- Choose your preferred AWS region

Or set in Settings:

```json
{
  "aws-bedrock.region": "us-west-2"
}
```
3. Model Visibility (Optional)
Show or hide specialized models (such as the safeguard variants):

```jsonc
{
  "aws-bedrock.showAllModels": true  // default: true
}
```
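To illustrate what this setting controls, here is a hedged sketch of how a provider might filter its model list when `showAllModels` is false. The `isSpecializedModel` name check is an illustrative heuristic, not the extension's actual rule.

```typescript
// Hypothetical sketch of the showAllModels filter: when the setting is
// false, hide specialized variants (e.g. the safeguard models). The
// "safeguard" substring check below is only an example heuristic.

interface ModelInfo {
  id: string;
}

function isSpecializedModel(model: ModelInfo): boolean {
  return model.id.includes("safeguard");
}

function visibleModels(models: ModelInfo[], showAllModels: boolean): ModelInfo[] {
  return showAllModels ? models : models.filter((m) => !isSpecializedModel(m));
}
```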
Usage
Using in Chat
- Open VS Code Chat (Cmd+Shift+I / Ctrl+Shift+I)
- Click the model picker (top of chat panel)
- Select an AWS Bedrock model (e.g., "OpenAI GPT OSS 120B")
- Start chatting!
Using with Copilot Chat
- In any editor, use @workspace or other chat participants
- The model picker will include Bedrock models
- Select a Bedrock model for your conversation
Example Chat
You: What are the key features of Rust's ownership system?
Assistant (via Bedrock): [Streams response in real-time...]
Configuration
Settings
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| aws-bedrock.region | string | us-east-1 | AWS region for the Bedrock Mantle endpoint |
| aws-bedrock.showAllModels | boolean | true | Show all models, including specialized variants |
Supported Regions
- us-east-1 (N. Virginia), default
- us-east-2 (Ohio)
- us-west-2 (Oregon)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- eu-central-1 (Frankfurt)
- eu-north-1 (Stockholm)
- eu-south-1 (Milan)
- ap-south-1 (Mumbai)
- ap-northeast-1 (Tokyo)
- ap-southeast-3 (Jakarta)
- sa-east-1 (São Paulo)
Commands
| Command | Description |
|---------|-------------|
| Manage AWS Bedrock API Key | Configure API key, region, and settings |
| Clear AWS Bedrock API Key | Remove stored API key |
Architecture
This extension implements VS Code's LanguageModelChatProvider interface using AWS Bedrock's Mantle API, which provides OpenAI-compatible endpoints.
Key Components
- BedrockMantleProvider: Main provider implementing VSCode's chat interface
- Dynamic Model Discovery: Fetches available models from Mantle's Models API
- Streaming Support: Processes SSE (Server-Sent Events) for real-time responses
- Tool Calling: Buffers and parses streaming tool calls for function calling support
Requests are sent to the regional Mantle endpoint: `https://bedrock-mantle.<region>.api.aws/v1`
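The streaming component above processes Server-Sent Events. As a minimal sketch of that step, the helper below extracts the text delta from one SSE line, assuming the standard OpenAI-style chunk format (`data: {...}` events ending with a `data: [DONE]` sentinel); `parseSseLine` is a hypothetical name, not the extension's actual function.

```typescript
// Minimal sketch of SSE chunk handling for OpenAI-style streaming
// responses. Each event line looks like `data: {...json...}`; the final
// event is `data: [DONE]`. Returns the text delta for one line, or null.

function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;       // ignore comments/blank lines
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;             // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null; // text delta, if any
}
```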
Model Capabilities
Models with function calling capabilities:
- gpt-oss-120b
- mistral-large-3-675b-instruct
- magistral-small-2509
- deepseek.v3.1
- qwen3-235b and larger models
- qwen3-vl-235b (vision + tools)
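As noted under Key Components, streamed tool calls arrive as fragments that must be buffered before the arguments JSON can be parsed. The sketch below assumes the OpenAI-style delta format (an index per call, an optional function name, and argument-string fragments); it is illustrative, not the extension's implementation.

```typescript
// Hedged sketch of buffering streamed tool calls, assuming OpenAI-style
// deltas: each chunk carries an index identifying the call, plus optional
// id, function-name, and argument-string fragments to accumulate.

interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

interface BufferedToolCall {
  id: string;
  name: string;
  arguments: string; // accumulated JSON string, parsed once complete
}

function accumulateToolCalls(deltas: ToolCallDelta[]): BufferedToolCall[] {
  const calls: BufferedToolCall[] = [];
  for (const d of deltas) {
    const call = (calls[d.index] ??= { id: "", name: "", arguments: "" });
    if (d.id) call.id = d.id;
    if (d.function?.name) call.name += d.function.name;
    if (d.function?.arguments) call.arguments += d.function.arguments;
  }
  return calls;
}
```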
Vision Support
Models with multimodal (image) input:
- qwen3-vl-235b-a22b-instruct
Code Specialization
Models optimized for coding:
- qwen3-coder-30b-a3b-instruct
- qwen3-coder-480b-a35b-instruct
Reasoning/Thinking
Models with enhanced reasoning:
- kimi-k2-thinking
Troubleshooting
API Key Issues
Problem: "Invalid API key" error
Solution:
- Verify your API key in AWS Bedrock Console
- Run: Manage AWS Bedrock API Key → "Clear API Key"
- Re-enter your API key
Model Not Available
Problem: "Model not available in region" error
Solution:
- Check that the model is offered in your selected region
- Switch regions via Manage AWS Bedrock API Key → "Change Region"
Rate Limiting
Problem: "Rate limit exceeded" error
Solution:
- Wait a few moments and try again
- Consider using smaller models for testing
- Check your AWS Bedrock quotas in AWS Console
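One common way to "wait and try again" programmatically is exponential backoff. The sketch below computes an illustrative delay schedule; the base delay and cap are arbitrary example values, not documented Bedrock limits.

```typescript
// Illustrative exponential backoff schedule for retrying rate-limited
// calls: delays double each attempt (500 ms, 1 s, 2 s, ...) up to a cap.

function backoffDelaysMs(attempts: number, baseMs = 500, capMs = 8000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, capMs));
  }
  return delays;
}
```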
Connection Issues
Problem: Network or timeout errors
Solution:
- Check your internet connection
- Verify firewall/proxy settings allow access to `*.api.aws`
- Ensure the selected region is accessible from your location
Development
Building from Source
```bash
# Install dependencies
npm install

# Compile TypeScript
npm run compile

# Watch mode for development
npm run watch

# Run linting
npm run lint
```
Debugging
- Open the project in VS Code
- Press F5 to launch the Extension Development Host
- Set breakpoints in source files
- Test the extension in the new window
Project Structure
```
bedrock-vscode-chat/
├── src/
│   ├── extension.ts    # Extension entry point
│   ├── provider.ts     # Main provider implementation
│   ├── types.ts        # TypeScript type definitions
│   └── utils.ts        # Utility functions
├── package.json        # Extension manifest
├── tsconfig.json       # TypeScript configuration
└── README.md           # This file
```
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Resources
License
MIT License - See LICENSE file for details
Acknowledgments
Inspired by the HuggingFace VSCode Chat extension.
Support
Version: 0.1.0
Status: Beta
Last Updated: December 18, 2025