# AWS Bedrock Chat Provider for VS Code

Use AWS Bedrock models directly in VS Code chat via:

- **Mantle** (OpenAI-compatible API) for the OSS/OpenAI-style model catalog
- **Native Bedrock** (Converse API) for the full Bedrock foundation model catalog

## Features

- **Mantle + Native**: Use Mantle models and native Bedrock foundation models
- **Dynamic Model Discovery**: Mantle models are fetched from Mantle's Models API; native models are listed from AWS Bedrock
- **Streaming Responses**: Real-time chat with streaming support
- **Tool Calling**: Function calling support for capable models
- **Multi-Region**: Support for 12 AWS regions
- **OpenAI Compatible (Mantle)**: Uses familiar OpenAI SDK patterns via Mantle
- **Converse API (Native)**: Uses the unified Bedrock Converse API
## Available Models

### OpenAI

- gpt-oss-20b, gpt-oss-120b
- Safeguard variants: gpt-oss-safeguard-20b/120b

### Google

- Gemma 3: 4b, 12b, 27b variants

### Mistral

- magistral-small-2509
- mistral-large-3-675b-instruct
- Ministral: 3b, 8b, 14b variants
- Voxtral: mini-3b, small-24b variants

### Qwen

- General: qwen3-32b, qwen3-235b, qwen3-next-80b
- Vision: qwen3-vl-235b (multimodal)
- Coding: qwen3-coder-30b/480b

### DeepSeek

- deepseek.v3.1

### Nvidia

- nemotron-nano-9b-v2, nemotron-nano-12b-v2

### Others

- MoonshotAI: kimi-k2-thinking
- Minimax: minimax-m2
- ZAI: glm-4.6
## Prerequisites

You can use either backend (or both):

- **Mantle** (optional):
  - Option A: an AWS Bedrock API key from the AWS Bedrock Console (simpler)
  - Option B: AWS credentials/profile (better for existing AWS setups)
- **Native Bedrock** (optional): AWS credentials available to VS Code (environment variables, `~/.aws/credentials`, SSO, etc.). You can also set `aws-bedrock.awsProfile`.
- **VS Code**: version 1.104.0 or later
## Installation

### From VS Code Marketplace

1. Open VS Code
2. Go to Extensions (Cmd+Shift+X)
3. Search for "Bedrock LLMs for VS Code Chat"
4. Click Install

### From Source

1. Clone this repository:

   ```bash
   git clone https://github.com/easytocloud/bedrock-vscode-chat.git
   cd bedrock-vscode-chat
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Compile the extension:

   ```bash
   npm run compile
   ```

4. Press F5 to open a new VS Code window with the extension loaded
## Setup

### 1. Configure Mantle Authentication

Mantle supports two authentication methods:

#### Option A: API Key (Simpler)

Via Command Palette:

1. Open the Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
2. Run: `Manage AWS Bedrock`
3. Select "Enter API Key (Mantle)"
4. Paste your API key from the AWS Bedrock Console

On first use:

- The extension prompts for your API key the first time you use a Mantle model
- Your key is stored securely in VS Code's SecretStorage

#### Option B: AWS Credentials (Better for Existing AWS Setups)

1. Open the Command Palette
2. Run: `Manage AWS Bedrock`
3. Select "Configure Mantle Authentication"
4. Choose "AWS Credentials"
5. Optionally set a specific profile via "Set AWS Profile (Mantle)"

This method uses AWS Signature V4 authentication with your existing AWS credentials.

### 2. Set a Native Bedrock Profile (Optional)

If you're using native Bedrock models and want a specific named profile:

1. Run: `Manage AWS Bedrock`
2. Select "Set AWS Profile (Native)"
3. Enter a profile name (or leave blank to use the default credential chain)
### 3. Select Region (Optional)

The default region is us-east-1. To change it:

1. Open the Command Palette
2. Run: `Manage AWS Bedrock`
3. Select "Change Region"
4. Choose your preferred AWS region

Or set it in Settings:

```jsonc
{
  "aws-bedrock.region": "us-west-2",
  "aws-bedrock.mantleAuthMethod": "awsCredentials", // or "apiKey"
  "aws-bedrock.mantleAwsProfile": "my-profile",     // optional
  "aws-bedrock.awsProfile": "my-profile"            // for native Bedrock
}
```

Show or hide specialized models (such as safeguard variants):

```jsonc
{
  "aws-bedrock.showAllModels": true // default: true
}
```
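For illustration, the way overrides layer over the defaults can be sketched as a small merge helper. This is a hypothetical sketch, not the extension's actual code; the `BedrockSettings` shape and `resolveSettings` name are invented here, with defaults taken from the settings table in this README:

```typescript
// Hypothetical settings shape mirroring the aws-bedrock.* keys above.
interface BedrockSettings {
  region: string;
  mantleAuthMethod: "apiKey" | "awsCredentials";
  mantleAwsProfile?: string;
  awsProfile?: string;
  showAllModels: boolean;
}

// Defaults as documented in the Configuration section.
const DEFAULTS: BedrockSettings = {
  region: "us-east-1",
  mantleAuthMethod: "apiKey",
  showAllModels: true,
};

// User-provided overrides win over defaults, key by key.
function resolveSettings(overrides: Partial<BedrockSettings>): BedrockSettings {
  return { ...DEFAULTS, ...overrides };
}
```

So a settings file that only sets `"aws-bedrock.region": "eu-west-1"` still gets `apiKey` auth and `showAllModels: true` from the defaults.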
## Usage

### Using in Chat

1. Open VS Code Chat (Cmd+Shift+I / Ctrl+Shift+I)
2. Click the model picker (top of the chat panel)
3. Select an AWS Bedrock model (e.g., "OpenAI GPT OSS 120B")
4. Start chatting!

### Using with Copilot Chat

1. In any editor, use `@workspace` or other chat participants
2. The model picker will include Bedrock models
3. Select a Bedrock model for your conversation

### Example Chat

```
You: What are the key features of Rust's ownership system?
Assistant (via Bedrock): [Streams response in real-time...]
```
## Configuration

### Settings

| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `aws-bedrock.region` | string | `us-east-1` | AWS region for Bedrock (Mantle + native) |
| `aws-bedrock.enableMantle` | boolean | `true` | Enable Mantle (OpenAI-compatible) models |
| `aws-bedrock.enableNative` | boolean | `true` | Enable native Bedrock models via the Converse API |
| `aws-bedrock.mantleAuthMethod` | string | `apiKey` | Mantle auth method: `apiKey` or `awsCredentials` |
| `aws-bedrock.mantleAwsProfile` | string | empty | Optional AWS profile for Mantle when using credentials |
| `aws-bedrock.awsProfile` | string | empty | Optional AWS profile for native Bedrock |
| `aws-bedrock.showAllModels` | boolean | `true` | Show all models, including specialized variants |
| `aws-bedrock.debugLogging` | boolean | `false` | Enable verbose debug logging |
| `aws-bedrock.sendTools` | boolean | `true` | Send tool definitions to the model |
| `aws-bedrock.emitPlaceholders` | boolean | `true` | Emit placeholder text while waiting |
| `aws-bedrock.modelMetadataSource` | string | `litellm` | Metadata source for token/capability info |
| `aws-bedrock.modelMetadataUrl` | string | default URL | External metadata registry URL |
| `aws-bedrock.modelMetadataCacheHours` | number | `24` | Cache duration for external metadata |
### Supported Regions

- us-east-1 (N. Virginia) - default
- us-east-2 (Ohio)
- us-west-2 (Oregon)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- eu-central-1 (Frankfurt)
- eu-north-1 (Stockholm)
- eu-south-1 (Milan)
- ap-south-1 (Mumbai)
- ap-northeast-1 (Tokyo)
- ap-southeast-3 (Jakarta)
- sa-east-1 (São Paulo)
## Commands

| Command | Description |
|---------|-------------|
| `Manage AWS Bedrock` | Configure Mantle auth, native AWS profile, region, and settings |
| `Clear AWS Bedrock API Key (Mantle)` | Remove the stored Mantle API key |
| `Show AWS Bedrock Logs` | Open the extension output channel |
## Architecture

This extension implements VS Code's `LanguageModelChatProvider` interface on two backends: AWS Bedrock's Mantle API, which provides OpenAI-compatible endpoints, and the native Bedrock Converse API.

### Key Components

- **BedrockMantleProvider**: Main provider implementing VS Code's chat interface
- **Dynamic Model Discovery**: Fetches available models from Mantle's Models API
- **Streaming Support**: Processes SSE (Server-Sent Events) for real-time responses
- **Tool Calling**: Buffers and parses streaming tool calls for function calling support
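To make the tool-call buffering concrete: OpenAI-compatible streams deliver a tool call in fragments, where each chunk carries an index, possibly a function name, and a slice of the JSON arguments, so the arguments string must be accumulated before it can be parsed. The following is a minimal sketch of that pattern, not the extension's actual implementation; the `ToolCallBuffer` class and field names are illustrative:

```typescript
// Shape of one streamed tool-call fragment (illustrative, based on the
// OpenAI-compatible chunk format: index + optional id/name + argument slice).
interface ToolCallDelta {
  index: number;
  id?: string;
  name?: string;
  argumentsChunk?: string;
}

interface BufferedToolCall {
  id: string;
  name: string;
  args: string; // accumulated JSON text, parsed only once the stream ends
}

class ToolCallBuffer {
  private calls = new Map<number, BufferedToolCall>();

  // Merge each fragment into the call it belongs to, keyed by index.
  add(delta: ToolCallDelta): void {
    const existing = this.calls.get(delta.index) ?? { id: "", name: "", args: "" };
    if (delta.id) existing.id = delta.id;
    if (delta.name) existing.name = delta.name;
    if (delta.argumentsChunk) existing.args += delta.argumentsChunk;
    this.calls.set(delta.index, existing);
  }

  // When the stream signals completion, parse the accumulated argument JSON.
  finish(): Array<{ id: string; name: string; args: unknown }> {
    return Array.from(this.calls.values()).map((c) => ({
      id: c.id,
      name: c.name,
      args: JSON.parse(c.args),
    }));
  }
}
```

The key design point is that `JSON.parse` only runs in `finish()`: partial argument text is almost never valid JSON, so parsing per-chunk would fail.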
Mantle endpoint format: `https://bedrock-mantle.<region>.api.aws/v1`
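Deriving the base URL from the configured region is then a one-liner; this hypothetical helper (the `mantleBaseUrl` name is invented here) follows the endpoint format above:

```typescript
// Build the Mantle base URL for a given AWS region, per the format above.
function mantleBaseUrl(region: string): string {
  return `https://bedrock-mantle.${region}.api.aws/v1`;
}

// Model discovery would then GET `${mantleBaseUrl(region)}/models`.
```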
## Model Capabilities

Models with function calling capabilities:

- gpt-oss-120b
- mistral-large-3-675b-instruct
- magistral-small-2509
- deepseek.v3.1
- qwen3-235b and larger models
- qwen3-vl-235b (vision + tools)

### Vision Support

Models with multimodal (image) input:

- Mantle models: based on model naming and API behavior
- Native Bedrock models: based on Bedrock's reported input modalities

### Capability Detection

- **Token limits and initial capabilities**: The extension can optionally use an external model metadata registry (default: LiteLLM's public JSON) to populate `maxInputTokens`, `maxOutputTokens`, and initial tool/vision flags. Configure via `aws-bedrock.modelMetadataSource`, `aws-bedrock.modelMetadataUrl`, and `aws-bedrock.modelMetadataCacheHours`.
- **Native Bedrock models**: vision is derived from `ListFoundationModels` input modalities (reliable). Tool support is verified on demand by attempting a tool-enabled request and caching whether the model accepts tool config (this overrides external metadata if the runtime behavior differs).
- **Mantle models**: Mantle's `/v1/models` does not include tool/vision/token metadata, so the extension uses external metadata when enabled, plus runtime probing (tools) as a safety net.
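The probe-and-cache behavior described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the extension's actual code: `ToolSupportCache` and `ToolProbe` are invented names, and the probe stands in for "attempt one tool-enabled request against the model":

```typescript
// A probe attempts one tool-enabled request and reports whether the model
// accepted the tool config.
type ToolProbe = (modelId: string) => Promise<boolean>;

class ToolSupportCache {
  private cache = new Map<string, boolean>();

  constructor(private probe: ToolProbe) {}

  // Runtime result wins once known; external metadata is only a fallback
  // when the probe itself cannot complete.
  async supportsTools(modelId: string, metadataHint: boolean): Promise<boolean> {
    const cached = this.cache.get(modelId);
    if (cached !== undefined) return cached;
    try {
      const ok = await this.probe(modelId); // one tool-enabled test request
      this.cache.set(modelId, ok);
      return ok;
    } catch {
      // Probe failed outright (network, auth): fall back to the metadata hint.
      return metadataHint;
    }
  }
}
```

Caching the outcome means each model pays the probe cost at most once per session, and a runtime "rejected tool config" result overrides an optimistic metadata flag.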
### Code Specialization

Models optimized for coding:

- qwen3-coder-30b-a3b-instruct
- qwen3-coder-480b-a35b-instruct

### Reasoning/Thinking

Models with enhanced reasoning:

- kimi-k2-thinking
## Troubleshooting

### API Key Issues

**Problem**: "Invalid API key" error

**Solution**:

1. Verify your API key in the AWS Bedrock Console
2. Run: `Manage AWS Bedrock` → "Clear API Key (Mantle)"
3. Re-enter your API key

### Model Not Available

**Problem**: "Model not available in region" error

**Solution**:

- Switch to another supported region via `Manage AWS Bedrock` → "Change Region" and try again

### Rate Limiting

**Problem**: "Rate limit exceeded" error

**Solution**:

- Wait a few moments and try again
- Consider using smaller models for testing
- Check your AWS Bedrock quotas in the AWS Console

### Connection Issues

**Problem**: Network or timeout errors

**Solution**:

- Check your internet connection
- Verify firewall/proxy settings allow access to `*.api.aws`
- Ensure the selected region is accessible from your location
## Development

### Building from Source

```bash
# Install dependencies
npm install

# Compile TypeScript
npm run compile

# Watch mode for development
npm run watch

# Run linting
npm run lint
```

Or use the Makefile shortcuts:

```bash
make install
make compile
make watch
make lint
```

### Debugging

1. Open the project in VS Code
2. Press F5 to launch the Extension Development Host
3. Set breakpoints in source files
4. Test the extension in the new window
### Project Structure

```
bedrock-vscode-chat/
├── src/
│   ├── extension.ts              # Extension entry point
│   ├── provider.ts               # Main provider implementation
│   ├── bedrockNative.ts          # Native Bedrock Converse API
│   ├── externalModelMetadata.ts  # External model metadata loader
│   ├── types.ts                  # TypeScript type definitions
│   └── utils.ts                  # Utility functions
├── package.json                  # Extension manifest
├── tsconfig.json                 # TypeScript configuration
├── icon.svg                      # Source icon (editable)
├── icon.png                      # Extension icon (128x128)
├── README.md                     # This file
├── CONTRIBUTING.md               # Development guide
└── PLAN.md                       # Architecture details
```
## Contributing

Contributions are welcome! See CONTRIBUTING.md for detailed development guidelines.

Quick start for contributors:

1. Fork and clone the repository
2. Install dependencies: `npm install`
3. Compile: `npm run compile`
4. Press F5 to launch the Extension Development Host
5. See CONTRIBUTING.md for testing, logging, and publishing guidelines

Key development notes:

- Publisher name: `easytocloud` (lowercase)
- Use the Output Channel for logging, not `console.log`
- Include `node_modules` in the VSIX (required for the AWS SDK)
- Test in both F5 mode and the installed VSIX
- Use `rsvg-convert` for icon generation
## License

MIT License - see the LICENSE file for details.

## Credits

- Project Lead: easytocloud
- Development Assistant: GitHub Copilot

## Acknowledgments

Inspired by the HuggingFace VSCode Chat extension.

**Version**: 0.3.1
**Status**: Production
**Last Updated**: February 5, 2026