# Claude Carbon Tracker
A VSCode extension to track and visualize carbon emissions from Claude Code AI usage.
## Features
- Real-time tracking of carbon emissions from Claude Code usage
- Status bar widget showing current CO₂ emissions
- Detailed statistics view in the sidebar with:
  - Total CO₂ emissions
  - Token counts (input/output)
  - Request counts
  - Environmental impact equivalents
- Configurable emission factors to adjust calculations
- Persistent tracking across VSCode sessions
## Installation

### From Source

- Clone this repository
- Run `npm install`
- Run `npm run compile`
- Press `F5` to open a new VSCode window with the extension loaded

### From VSIX (once published)

- Download the `.vsix` file
- Run `code --install-extension claude-carbon-tracker-0.0.1.vsix`
## Usage
Once installed, the extension automatically starts tracking Claude Code usage:
- **View in Status Bar**: Look for the leaf icon (🌿) in the bottom-right status bar
- **Open Sidebar**: Click the Claude Carbon Tracker icon in the Activity Bar
- **View Statistics**: Click the status bar item or run `Claude Carbon: Show Statistics`
- **Reset Stats**: Run `Claude Carbon: Reset Statistics` from the Command Palette
## Configuration
Configure the extension in your VSCode settings:
```json
{
  "claudeCarbonTracker.emissionFactor": 0.0004,
  "claudeCarbonTracker.showInStatusBar": true
}
```

- `emissionFactor`: CO₂ emissions in kg per 1,000 tokens (default: `0.0004`)
- `showInStatusBar`: Show or hide the status bar widget
## How It Works

### Data Collection Methodology
The extension monitors Claude Code's local conversation data stored in JSONL files:
1. **File System Monitoring**: Scans Claude Code's data directory every 5 seconds
   - Windows: `%USERPROFILE%\.claude\projects\`
   - macOS/Linux: `~/.claude/projects/` or `~/.config/claude/projects/`
2. **Token Extraction**: Parses conversation files to extract:
   - `input_tokens`: Tokens sent to the model
   - `output_tokens`: Tokens generated by the model
   - Cache tokens are tracked but not counted (following Claude's billing model)
3. **Carbon Calculation**: Applies emission factors to token counts
   - Formula: `CO₂ (kg) = (total_tokens / 1000) × emission_factor`
   - Supports configurable emission factors for different scenarios
4. **Privacy-First**: All processing happens locally; no data leaves your machine
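For illustration, here is a minimal TypeScript sketch of this pipeline. The directory layout and the `input_tokens`/`output_tokens` field names match the description above, but the exact JSONL record shape (e.g. a `message.usage` object) is an assumption for this sketch, not the extension's actual implementation:

```typescript
import * as fs from "fs";
import * as path from "path";
import * as os from "os";

const EMISSION_FACTOR = 0.0004; // kg CO₂ per 1,000 tokens (default)

// Sum input/output tokens across all JSONL conversation files and
// apply the formula above. NOTE: the `message.usage` nesting is assumed.
function estimateEmissionsKg(projectsDir: string): number {
  let totalTokens = 0;
  const files = fs.readdirSync(projectsDir, { recursive: true, encoding: "utf8" });
  for (const file of files) {
    if (!file.endsWith(".jsonl")) continue;
    const text = fs.readFileSync(path.join(projectsDir, file), "utf8");
    for (const line of text.split("\n")) {
      if (!line.trim()) continue;
      try {
        const usage = JSON.parse(line)?.message?.usage;
        if (usage) {
          // Cache tokens are deliberately excluded, per Claude's billing model.
          totalTokens += (usage.input_tokens ?? 0) + (usage.output_tokens ?? 0);
        }
      } catch {
        // Skip malformed lines rather than failing the whole scan.
      }
    }
  }
  return (totalTokens / 1000) * EMISSION_FACTOR;
}

const dir = path.join(os.homedir(), ".claude", "projects");
console.log(`Estimated emissions: ${estimateEmissionsKg(dir).toFixed(6)} kg CO₂`);
```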
### Emission Factor Methodology

The default emission factor (0.0004 kg CO₂ per 1K tokens) is derived from recent research on the carbon footprint of LLM inference:
#### Calculation Basis

**Energy Consumption:**

- Research estimates 3-4 joules per token for large language models [1]
- For Claude 3.7 Sonnet: ~0.4 joules per token [2]
- Conversion: 1 kWh = 3,600,000 joules
- Energy per 1K tokens: (0.4 J/token × 1,000 tokens) / 3,600,000 J/kWh ≈ 0.00011 kWh ≈ 0.11 Wh

**Carbon Intensity:**

- GPT-4 operational emissions: ~0.3 gCO₂e per 1K tokens (Azure US West, 240.6 gCO₂e/kWh) [3]
- Claude's reported footprint: ~3.5 gCO₂ per query [4]
- Global data center average: ~400 gCO₂eq/kWh [5]
- Conservative estimate: 0.4 gCO₂ per 1K tokens = 0.0004 kg CO₂ per 1K tokens

**Factors Considered:**

- Server hardware power consumption (GPU/CPU TDP)
- Power Usage Effectiveness (PUE) of data centers (typically 1.2-1.5×)
- Grid carbon intensity (varies by region: 57-429 gCO₂eq/kWh) [5]
- Inference efficiency (Claude 3.7 Sonnet is noted as highly eco-efficient) [6]
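To make the arithmetic concrete, the sketch below (illustrative only, not part of the extension) derives a custom emission factor from energy per token, data-center PUE, and grid carbon intensity. The function name and example inputs are this README's assumptions; using the mid-range research figures cited above, the result lands near the 0.0004 default:

```typescript
const JOULES_PER_KWH = 3_600_000;

// Derive an emission factor (kg CO₂ per 1K tokens) from first principles:
// energy per token × overhead (PUE) × grid carbon intensity.
function emissionFactorKgPer1kTokens(
  joulesPerToken: number, // e.g. 3.5 J/token, mid-range of [1]
  pue: number,            // data-center overhead, typically 1.2-1.5 [5]
  gridGCo2PerKwh: number  // regional grid intensity, gCO₂eq/kWh [5]
): number {
  const kwhPer1kTokens = (joulesPerToken * 1000) / JOULES_PER_KWH;
  const gramsCo2 = kwhPer1kTokens * pue * gridGCo2PerKwh;
  return gramsCo2 / 1000; // grams → kilograms
}

// (3.5 × 1000 / 3.6e6) × 1.3 × 400 ≈ 0.51 gCO₂ ≈ 0.0005 kg per 1K tokens
console.log(emissionFactorKgPer1kTokens(3.5, 1.3, 400).toFixed(5)); // "0.00051"
```

The result can be plugged into `claudeCarbonTracker.emissionFactor` for users who prefer region-specific assumptions.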
### Model-Specific Considerations
Different Claude models have different computational requirements:
- **Haiku**: Faster and more efficient; lower emissions per token
- **Sonnet**: Balanced performance and efficiency (current default)
- **Opus**: More computational power; higher emissions per token
**Note**: This extension currently uses a single emission factor; future versions will support model-specific factors, along the lines of the sketch below.
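One hypothetical shape for that feature, purely as a sketch: the factor values are placeholders (not measured data), and matching on the model name assumes that name can be read from each conversation entry.

```typescript
// Hypothetical per-model factors (kg CO₂ per 1K tokens). The relative
// ordering follows the list above; the exact values are placeholders.
const MODEL_EMISSION_FACTORS: Record<string, number> = {
  haiku: 0.0002,
  sonnet: 0.0004, // current single default
  opus: 0.001,
};

function factorForModel(model: string): number {
  for (const [name, factor] of Object.entries(MODEL_EMISSION_FACTORS)) {
    if (model.toLowerCase().includes(name)) return factor;
  }
  return 0.0004; // fall back to the default factor
}
```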
### Environmental Impact Equivalents
The extension converts CO₂ emissions to relatable comparisons:
- **Trees needed for 1 year**: CO₂ (kg) / 21 kg per tree per year [7]
- **Kilometers driven**: CO₂ (kg) / 0.12 kg per km (average car) [8]
- **Smartphones charged**: CO₂ (kg) / 0.011 kg per charge [9]
- **60W light bulb hours**: CO₂ (kg) / 0.0006 kg per hour [10]
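As a sketch of these conversions (divisor values taken directly from the list above; the function name is illustrative):

```typescript
// Convert a CO₂ total (kg) into the relatable equivalents listed above.
function equivalents(co2Kg: number) {
  return {
    treeYears: co2Kg / 21,       // trees absorbing CO₂ for one year [7]
    kmDriven: co2Kg / 0.12,      // average passenger car [8]
    phoneCharges: co2Kg / 0.011, // smartphone full charges [9]
    bulbHours: co2Kg / 0.0006,   // 60W light bulb hours [10]
  };
}

// e.g. 0.05 kg CO₂ ≈ 0.002 tree-years, 0.42 km driven, 4.5 charges, 83 bulb-hours
console.log(equivalents(0.05));
```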
### Limitations and Assumptions
- **Inference Only**: Does not account for model training emissions (a one-time cost amortized across users)
- **Network Overhead**: Does not include data transmission emissions
- **Regional Variation**: Uses an average carbon intensity; actual emissions vary by data center location
- **Cache Benefits**: Claude's prompt caching reduces actual emissions but is not fully modeled
- **Hardware Diversity**: Assumes consistent hardware; actual GPU/CPU efficiency varies
### Uncertainty Range

Based on the literature reviewed, estimated emissions could range from:

- **Lower bound**: 0.0002 kg CO₂ per 1K tokens (optimal conditions: renewable energy, efficient hardware)
- **Default**: 0.0004 kg CO₂ per 1K tokens (conservative average)
- **Upper bound**: 0.001 kg CO₂ per 1K tokens (worst case: coal-powered grid, inefficient hardware)
Users can adjust the emission factor in settings based on their preferred assumptions or updated research.
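For example, to adopt the lower-bound assumption in your settings:

```json
{
  "claudeCarbonTracker.emissionFactor": 0.0002
}
```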
## Roadmap
- [x] Monitor Claude Code's local conversation files
- [x] Real-time token tracking and carbon calculation
- [ ] Support for different models (Sonnet, Opus, Haiku) with model-specific emission factors
- [ ] Detect model type from conversation files
- [ ] Export statistics to CSV/JSON
- [ ] Daily/weekly/monthly reports and trends
- [ ] Carbon offset recommendations
- [ ] Integration with carbon offset APIs
- [ ] Historical emissions visualization charts
## References
[1] Luccioni, A. S., et al. (2023). "Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model." Journal of Machine Learning Research, 24(253), 1-15. https://jmlr.org/papers/volume24/23-0069/23-0069.pdf
[2] FOSS Force. (2025). "What's Your Chatbot's Carbon Footprint?" https://fossforce.com/2025/04/whats-your-chatbots-carbon-footprint/
[3] "Quantifying the Energy Consumption and Carbon Emissions of LLM Inference via Simulations." (2025). arXiv:2507.11417. https://arxiv.org/html/2507.11417v1
[4] Fast Company. (2025). "The environmental impact of LLMs: Here's how OpenAI, DeepSeek, and Anthropic stack up." https://www.fastcompany.com/91336991/openai-anthropic-deepseek-ai-models-environmental-impact
[5] Li, Y. L., et al. (2024). "Towards Carbon-efficient LLM Life Cycle." HotCarbon '24: Workshop on Sustainable Computer Systems. https://hotcarbon.org/assets/2024/pdf/hotcarbon24-final154.pdf
[6] Clune, J. (2025). "Environmental Impact of AI." Arthur's Blog. https://clune.org/posts/environmental-impact-of-ai/
[7] U.S. Environmental Protection Agency. "Greenhouse Gas Equivalencies Calculator." https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator
[8] European Environment Agency. (2024). "CO2 emissions from passenger transport." https://www.eea.europa.eu/
[9] U.S. Department of Energy. "Energy Use Calculator." https://www.energy.gov/
[10] Carbon Footprint Ltd. "Carbon Footprint Calculator." https://www.carbonfootprint.com/
## Contributing
Contributions welcome! This is an early-stage project focusing on:
- Refining emission factor calculations with latest research
- Supporting model-specific emission factors
- Improving accuracy of environmental impact estimates
- Enhancing UI/UX and data visualization
- Adding export and reporting features
Please cite relevant research when proposing changes to emission factors.
## License
ISC
## Disclaimer

This extension provides estimates only, based on available research and reasonable assumptions. Actual carbon emissions may vary significantly depending on:
- **Data center location and energy sources**: Anthropic's infrastructure location and renewable energy usage
- **Model architecture and optimization**: Hardware efficiency, quantization, and other optimizations
- **Network infrastructure**: Data transmission emissions are not included in current calculations
- **Caching and efficiency measures**: Claude's prompt caching and other efficiency features
- **Temporal variations**: Time of day, grid mix changes, and seasonal factors
- **Hardware specifics**: GPU/CPU models, utilization rates, and cooling requirements
### Research Limitations
Current research on LLM carbon emissions is evolving, and many companies (including Anthropic) provide limited transparency about their environmental impact. The emission factors used in this extension are derived from:
- Peer-reviewed academic research
- Industry reports and benchmarks
- Reasonable engineering estimates
For authoritative data, refer to:
- Anthropic's official sustainability reports (when available)
- Peer-reviewed research on LLM carbon footprint
- Third-party audits and certifications
### Educational Purpose
This tool is designed to raise awareness about the environmental impact of AI usage and encourage more sustainable practices. It should not be used for:
- Carbon accounting for regulatory compliance
- Official carbon offset calculations
- Comparative claims without proper context
We encourage users to critically evaluate the assumptions and stay informed about the latest research in this rapidly evolving field.