# AI Contribution Tracker
Automatically tag every git commit with AI usage metadata — models, tokens, prompts, and cost signals. All local. Zero config.

📦 Install · 📖 How It Works · 🐛 Report Bug · 💡 Request Feature
Know exactly how AI shaped every commit — which models were used, how many tokens were consumed per model (including reasoning tokens and cache hits), how many prompts were exchanged, and which sub-agents were involved. All captured automatically in your git history.
## What Gets Recorded
Every AI-assisted commit automatically receives a detailed marker. Here are real examples:
Single model, one prompt:
```
feat: add dark mode toggle

Impacted by AI (Agent mode: new | Model: claude-sonnet-4.6 | Prompts: 1 | Tokens: claude-sonnet-4-6: 48k in/2k out (41k cached))
```
Multi-model session — Claude for reasoning, Gemini for search:
```
refactor: split auth service into separate module

Impacted by AI (Agent mode: new | Model: claude-sonnet-4.6, gemini-3.1-pro-preview | Prompts: 2 | Tokens: claude-sonnet-4-6: 296k in/5k out (243k cached) | gemini-3.1-pro-preview: 104k in/678 out (74k cached))
```
Reasoning model with thinking tokens:
```
fix: resolve race condition in async queue

Impacted by AI (Agent mode: copilot | Model: gpt-5.4 | Prompts: 1 | Tokens: gpt-5.4-2026-03-05: 143k in/1k out (118k cached) +333 reasoning)
```
Sub-agents + inline suggestions in the same session:
```
docs: rewrite contributing guide

Impacted by AI (Inline + Agent mode: new | Model: claude-sonnet-4.6 | Prompts: 3 | Sub-agents mode: Explore | sub-Agent prompts: 2 | Tokens: claude-sonnet-4-6: 180k in/4k out (155k cached))
```
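Since the marker is a single structured line, it can be parsed back into fields for downstream reporting. The sketch below is illustrative, not the extension's actual code; note that a multi-model `Tokens` field embeds additional ` | ` separators and would need extra handling.

```typescript
// Sketch: parse an "Impacted by AI (...)" marker into a field map.
// Field names follow the examples above. Illustrative only — a
// multi-model Tokens field would need smarter splitting.
function parseAiMarker(line: string): Map<string, string> {
  const fields = new Map<string, string>();
  const match = line.match(/^Impacted by AI \((.*)\)$/);
  if (!match) return fields;
  // Fields are separated by " | ", each shaped like "Name: value".
  for (const part of match[1].split(" | ")) {
    const idx = part.indexOf(": ");
    if (idx === -1) {
      fields.set(part, ""); // bare flags such as "Inline"
    } else {
      fields.set(part.slice(0, idx), part.slice(idx + 2));
    }
  }
  return fields;
}

const marker =
  "Impacted by AI (Agent mode: new | Model: claude-sonnet-4.6 | Prompts: 1 | " +
  "Tokens: claude-sonnet-4-6: 48k in/2k out (41k cached))";
const fields = parseAiMarker(marker);
```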
## How It Works
The extension uses three complementary mechanisms, all running locally.
### 1. Token Usage via OTEL
The extension activates Copilot's built-in local OpenTelemetry span exporter (`github.copilot.chat.otel.dbSpanExporter.enabled`), which writes real measured token counts to a local SQLite database (`agent-traces.db`). No network required, no third-party telemetry.
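The extension enables this setting for you; for reference, doing it by hand would look like the following in your VS Code `settings.json` (assuming the setting takes a boolean, which is how the name above reads):

```jsonc
{
  // Write Copilot OTEL spans to a local SQLite DB (agent-traces.db)
  "github.copilot.chat.otel.dbSpanExporter.enabled": true
}
```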
At the end of each session, the hook handler queries that database and records per-model token breakdowns directly in the commit marker:
| Token Field | Description |
| --- | --- |
| `NNNk in` | Input tokens sent to the model |
| `NNNk out` | Output tokens generated |
| `(NNNk cached)` | Prompt cache hits (billed at a reduced rate) |
| `+NNN reasoning` | Internal chain-of-thought tokens (reasoning models only) |
Token data is time-scoped to the current session — spans from previous sessions in the same VS Code window are excluded, so each commit reflects only the tokens consumed for that specific piece of work.
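The `NNNk` rendering seen in the markers can be sketched as a small formatter. This is an illustrative reimplementation of the output format shown above, not the extension's code; counts under 1,000 stay exact, as in the `678 out` example.

```typescript
// Sketch: format raw token counts the way the marker examples render them.
// Counts >= 1000 are rounded to "NNNk"; smaller counts stay exact
// (cf. "678 out" in the multi-model example). Illustrative only.
function formatCount(n: number): string {
  return n >= 1000 ? `${Math.round(n / 1000)}k` : `${n}`;
}

interface ModelUsage {
  input: number;
  output: number;
  cached: number;
  reasoning: number;
}

// One "model: NNNk in/NNNk out (NNNk cached) +NNN reasoning" segment.
function formatUsage(model: string, u: ModelUsage): string {
  let s = `${model}: ${formatCount(u.input)} in/${formatCount(u.output)} out`;
  if (u.cached > 0) s += ` (${formatCount(u.cached)} cached)`;
  if (u.reasoning > 0) s += ` +${u.reasoning} reasoning`;
  return s;
}
```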
### 2. Copilot Hooks (Agent Sessions & Sub-agents)
VS Code Copilot Hooks fire lifecycle events during every Copilot chat session. A lightweight Node.js handler listens to five events:
| Hook Event | What It Tracks |
| --- | --- |
| `SessionStart` | Records the session ID and agent mode (e.g., `new`, `copilot`) |
| `UserPromptSubmit` | Counts user prompts; ignores sub-agent delegated prompts |
| `SubagentStart` | Records the sub-agent type (e.g., `Explore`) and increments the count |
| `SubagentStop` | Decrements the active sub-agent counter |
| `Stop` | Queries the token DB, parses the log for model names, writes the flag file |
On `Stop`, the handler also parses the VS Code Copilot Chat log to extract model names, separated into user-selected models (`[panel/editAgent]` entries) and sub-agent models (`[tool/runSubagent*]` entries). Parsing is scoped by session ID and timestamp.
State accumulates in `.git/ai-tracker-state.json` until consumed by the `commit-msg` hook.
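The event handling described above reduces to a small state machine. This is a minimal sketch: the event names come from the table, but the state shape and event payload fields are hypothetical, not the extension's actual types.

```typescript
// Sketch: accumulate Copilot hook events into the state later consumed
// by the commit-msg hook. Event names match the table above; the state
// and payload shapes here are hypothetical.
interface TrackerState {
  sessionId?: string;
  agentMode?: string;
  prompts: number;          // user prompts
  subAgentPrompts: number;  // total sub-agent invocations
  subAgentModes: Set<string>;
  activeSubAgents: number;
}

interface HookEvent {
  name: string;
  sessionId?: string;
  agentMode?: string;
  subagentType?: string;
}

function applyHookEvent(state: TrackerState, event: HookEvent): void {
  switch (event.name) {
    case "SessionStart":
      state.sessionId = event.sessionId;
      state.agentMode = event.agentMode;
      break;
    case "UserPromptSubmit":
      // Prompts delegated by an active sub-agent are not user prompts.
      if (state.activeSubAgents === 0) state.prompts++;
      break;
    case "SubagentStart":
      if (event.subagentType) state.subAgentModes.add(event.subagentType);
      state.subAgentPrompts++;
      state.activeSubAgents++;
      break;
    case "SubagentStop":
      state.activeSubAgents = Math.max(0, state.activeSubAgents - 1);
      break;
  }
}
```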
### 3. Inline Suggestion Tracking (Deterministic)
For ghost-text completions, the extension intercepts acceptance keystrokes with zero false positives:
| Keybinding | Action |
| --- | --- |
| `Tab` | Accept full inline suggestion |
| `Ctrl+Right` | Accept next word |
| `Ctrl+Shift+Right` | Accept next line |
When an inline suggestion is accepted, the flag is written to `.git/AI_IMPACT_PENDING`. If an agent session also ran before the commit, both are merged: `Impacted by AI (Inline + Agent mode: ...)`.
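The merge rule can be sketched as a pure function over the two signals. This is illustrative, with wording taken from the examples above; whether a session with only inline acceptances renders a bare `Inline` clause is an assumption.

```typescript
// Sketch: derive the marker's leading clause from which signals fired
// before the commit. Wording follows the examples above; the bare
// "Inline" case is an assumption, not confirmed behavior.
function markerPrefix(inlineAccepted: boolean, agentMode?: string): string | null {
  if (inlineAccepted && agentMode) return `Inline + Agent mode: ${agentMode}`;
  if (agentMode) return `Agent mode: ${agentMode}`;
  if (inlineAccepted) return "Inline";
  return null; // no AI involvement recorded; no marker appended
}
```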
### 4. Git Integration

A global `commit-msg` hook (auto-installed via `git config --global core.hooksPath`) fires on every commit across all your repositories. It reads `AI_IMPACT_PENDING`, appends the marker to the commit message, then removes both the flag and the state file.
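Conceptually, the hook's job is small: check for the flag file, append, clean up. A minimal sketch of that flow, using the file names from the text above (the actual hook's marker assembly is more involved):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch of what the commit-msg hook does conceptually: if the flag
// file exists, append its marker to the commit message file and clean
// up both the flag and the accumulated state. File names follow the
// text above; illustrative only.
function applyCommitMsgHook(gitDir: string, commitMsgFile: string): void {
  const flagFile = path.join(gitDir, "AI_IMPACT_PENDING");
  const stateFile = path.join(gitDir, "ai-tracker-state.json");
  if (!fs.existsSync(flagFile)) return; // no AI involvement recorded
  const marker = fs.readFileSync(flagFile, "utf8").trim();
  fs.appendFileSync(commitMsgFile, `\n\n${marker}\n`);
  fs.rmSync(flagFile);
  fs.rmSync(stateFile, { force: true }); // state may not exist (inline-only)
}
```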
## Marker Field Reference

| Field | Description | Example |
| --- | --- | --- |
| `Agent mode` | Top-level agent type | `new`, `copilot`, `edit` |
| `Model` | User-selected model(s) for the main agent | `claude-sonnet-4.6`, `gpt-5.4` |
| `Prompts` | Number of user prompts (excludes sub-agent internal prompts) | `3` |
| `Sub-agents mode` | Types of sub-agents invoked | `Explore`, `Plan` |
| `sub-Agent models` | Models used internally by sub-agents | `claude-haiku-4.5` |
| `sub-Agent prompts` | Total sub-agent invocations | `4` |
| `Tokens` | Per-model token breakdown (input / output / cached / reasoning) | `claude-sonnet-4-6: 48k in/2k out (41k cached)` |
| `Inline` | Present when ghost-text completions were accepted | — |
## Features
- Automatic — Install once; every AI-assisted commit is tagged from that moment on. No per-repo setup.
- Token Tracking — Real measured token counts from Copilot's OTEL pipeline, not estimates.
- Per-Model Breakdown — Each model's input, output, cached, and reasoning tokens recorded separately — ready for cost calculation.
- Reasoning Tokens — Thinking tokens from reasoning models (GPT-5.x, o1, o3) are tracked and labeled `+NNN reasoning`.
- Session-Scoped — Token queries are time-bounded to the current session; previous commits in the same window don't bleed in.
- Multi-Session Accumulation — Multiple agent sessions before a single commit are merged and their token counts summed.
- Inline + Agent — Tracks both ghost-text acceptances and full chat sessions; merges them when both occur before a commit.
- Global Git Hooks — One hook covers all repositories. No per-repo initialization.
- Privacy First — Everything runs locally. No code, prompts, or token data leaves your machine.
## Requirements
- VS Code 1.100.0 or later (Copilot Hooks support)
- GitHub Copilot extension installed
- Git initialized in your repository
- Node.js 22+ (for the `node:sqlite` built-in, included with VS Code's bundled Node)
## Core Team

| Author | Author | Contributor |
| --- | --- | --- |
| @YoavLax | @davidexterman | @GitHub-Copilot |
## Development

```bash
npm run compile   # One-time build
npm run watch     # Watch mode (or press F5 in VS Code)
npm run test      # Run extension tests
```
Press F5 to launch the Extension Development Host for debugging.
## Key Files

| File | Purpose |
| --- | --- |
| `src/extension.ts` | Extension activation, global git hooks setup, Copilot hooks config, OTEL enablement |
| `src/hook-handler.ts` | Standalone Node.js hook handler: session tracking, token DB query, marker formatting |
| `src/tracker.ts` | Inline suggestion detection via deterministic command interception |
## License
MIT