# Claude Code Impact Toolkit

Track the environmental and financial cost of your Claude Code
conversations.

## What it does

A PreCompact hook that runs before each context compaction, capturing:

- Token counts (actual from transcript or heuristic estimate)
- Cache usage breakdown (creation vs. read)
- Energy consumption estimate (Wh)
- CO2 emissions estimate (grams)
- Financial cost estimate (USD)
- Model ID
- Automation ratio (AI output vs. user input — deskilling proxy)
- File churn (edits per file — code quality proxy)
- Test pass/fail counts
- Public push detection (data pollution risk flag)

Data is logged to a JSONL file for analysis over time.
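
Because the log is JSONL (one JSON object per line), it can be aggregated with a few lines of standard-library Python. The field names below are hypothetical illustrations, not necessarily the hook's actual schema:

```python
import json

# Hypothetical snapshot records as they might appear in the log,
# one JSON object per line (the JSONL convention).
lines = [
    '{"session_id": "abc123", "output_tokens": 8100, "cost_usd": 1.39}',
    '{"session_id": "abc123", "output_tokens": 2500, "cost_usd": 0.41}',
]

# Parse each line independently; malformed lines would fail in isolation.
records = [json.loads(line) for line in lines]

# Aggregate across snapshots, e.g. total estimated cost.
total_cost = sum(r["cost_usd"] for r in records)
print(f"total estimated cost: ${total_cost:.2f}")
```

The same pattern works on the real log file by iterating over `open(path)` instead of the in-memory list.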

## Install

```bash
# Project-level (recommended)
cd your-project
./path/to/impact-toolkit/install.sh

# Or user-level (applies to all projects)
./path/to/impact-toolkit/install.sh --user
```

Requirements: `bash`, `jq`, `python3`.

## View results

```bash
.claude/hooks/show-impact.sh                # all sessions
.claude/hooks/show-impact.sh <session_id>   # specific session
```

## How it works

The hook fires before Claude Code compacts your conversation context.
It reads the conversation transcript, extracts token usage data from
API response metadata, and calculates cost estimates using:

- **Energy**: 0.1 Wh/1K input tokens, 0.5 Wh/1K output tokens
  (midpoint of range calibrated against Google and Jegham et al., 2025)
- **PUE**: 1.2 (data center overhead)
- **CO2**: 325g/kWh (US grid average for cloud regions)
- **Cost**: $15/M input tokens, $75/M output tokens

Cache-read tokens are weighted at 10% of full cost (they skip most
computation).
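
Putting those constants together, the arithmetic can be sketched as follows. This is a simplified reimplementation of the formulas above, not the hook's actual code; in particular, applying the 10% cache-read weight to both energy and cost, and applying PUE as a flat multiplier, are assumptions:

```python
# Constants from the list above (estimates, not official figures).
ENERGY_WH_PER_1K_IN = 0.1    # Wh per 1K input tokens
ENERGY_WH_PER_1K_OUT = 0.5   # Wh per 1K output tokens
PUE = 1.2                    # data center overhead multiplier
CO2_G_PER_KWH = 325          # US grid average for cloud regions
USD_PER_M_IN = 15            # $ per million input tokens
USD_PER_M_OUT = 75           # $ per million output tokens
CACHE_READ_WEIGHT = 0.10     # cache reads weighted at 10% of full cost

def estimate(input_tokens, output_tokens, cache_read_tokens=0):
    """Return (energy_wh, co2_g, cost_usd) for one snapshot."""
    # Cache-read tokens count at 10% of a fresh input token.
    effective_in = input_tokens + CACHE_READ_WEIGHT * cache_read_tokens
    energy_wh = (effective_in / 1000 * ENERGY_WH_PER_1K_IN
                 + output_tokens / 1000 * ENERGY_WH_PER_1K_OUT) * PUE
    co2_g = energy_wh / 1000 * CO2_G_PER_KWH
    cost_usd = (effective_in / 1e6 * USD_PER_M_IN
                + output_tokens / 1e6 * USD_PER_M_OUT)
    return energy_wh, co2_g, cost_usd

# 50K fresh input, 10K output, 200K cache-read tokens:
energy_wh, co2_g, cost_usd = estimate(50_000, 10_000, 200_000)
# roughly 14.4 Wh, 4.7 g CO2, $1.80
```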

## Related tools

This toolkit measures a subset of the costs covered by
`impact-methodology.md`. For more precise environmental measurement,
consider these complementary tools:

- **[EcoLogits](https://ecologits.ai/)** — Python library that tracks
  per-query energy and CO2 for API calls to OpenAI, Anthropic, Mistral,
  and others. More precise than our estimates for environmental metrics.
- **[CodeCarbon](https://codecarbon.io/)** — Measures GPU/CPU energy for
  local training and inference workloads.
- **[Hugging Face AI Energy Score](https://huggingface.github.io/AIEnergyScore/)** —
  Benchmarks model energy efficiency. Useful for choosing between models.

These tools focus on environmental metrics only. This toolkit also
tracks financial cost and proxy metrics for social costs (automation
ratio, file churn, test outcomes, public push detection). The
accompanying methodology covers additional dimensions in depth.

## Limitations

- All numbers are estimates with low to medium confidence.
- Energy-per-token figures are calibrated against published research
  (Google, Aug 2025; Jegham et al., May 2025), not official Anthropic data.
- The hook only runs on context compaction, not at conversation end.
  Short conversations that never compact will not be logged.
- This toolkit only works with Claude Code. The methodology itself is
  tool-agnostic.
- See `impact-methodology.md` for the full methodology, uncertainty
  analysis, and non-quantifiable costs.

## Files

```
impact-toolkit/
  install.sh                     # installer
  hooks/pre-compact-snapshot.sh  # PreCompact hook
  hooks/show-impact.sh           # log viewer
  README.md                      # this file
```

## License

CC0 1.0 Universal (public domain). See LICENSE in the repository root.