The hook now records which files were edited and how many times,
enabling future comparison with committed code to measure human
review effort (Phase 2 of quantify-social-costs plan).
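The counting step reduces to grouping tool edits by path; a minimal
sketch, assuming a JSONL transcript whose Edit/Write entries carry an
input.file_path field (illustrative names, not the actual schema):

    # Edits per unique file (field names illustrative; real schema may differ)
    jq -r 'select(.type == "tool_use" and (.name == "Edit" or .name == "Write"))
           | .input.file_path' transcript.jsonl \
      | sort | uniq -c | sort -rn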
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New show-aggregate.sh script computes cross-session metrics:
monoculture index, spend concentration by provider, automation
profile distribution, code quality signals, and data pollution
risk summary. Integrated into toolkit installer and README.
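For the monoculture index, one plausible reading (assumed here, not
necessarily the script's actual definition) is the share of sessions
served by the single most common model. A sketch against a hypothetical
one-snapshot-per-session layout:

    # Share of sessions on the most common model (hypothetical snapshot dir)
    dir=~/.claude/impact
    total=$(ls "$dir"/*.json | wc -l)
    top=$(jq -r '.model' "$dir"/*.json | sort | uniq -c | sort -rn \
          | head -1 | awk '{print $1}')
    echo "scale=2; $top / $total" | bc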
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extend pre-compact-snapshot.sh to extract 5 new per-conversation
metrics from the transcript: automation ratio (deskilling proxy),
model ID (monoculture tracking), test pass/fail counts (code quality
proxy), file churn (edits per unique file), and public push detection
(data pollution risk flag). Update show-impact.sh to display them.
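As an example of the extraction, the automation ratio could be computed
as tool-use messages over total assistant messages (an assumed
definition; field names below are illustrative):

    # Automation ratio: tool-use turns / assistant turns (assumed definition)
    tools=$(jq -r 'select(.role == "assistant" and .type == "tool_use")
                   | .type' transcript.jsonl | wc -l)
    total=$(jq -r 'select(.role == "assistant") | .role' transcript.jsonl | wc -l)
    awk -v t="$tools" -v n="$total" 'BEGIN { printf "%.2f\n", (n ? t / n : 0) }'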
New plan: quantify-social-costs.md — roadmap for moving non-environmental
cost categories from qualitative to proxy-measurable.
Tasks 19-24 done. Task 25 (methodology update) pending.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Update energy values in hook scripts to match calibrated methodology
  (0.1/0.5 Wh per 1K tokens, was 0.003/0.015; see the worked example
  after this list)
- Fix license in toolkit README: CC0, not MIT
- Update H2 sharing framing to match "beyond carbon" positioning
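For scale, a worked example under the new rates, assuming the 0.1 Wh
figure applies to input tokens and 0.5 Wh to output tokens (the token
counts below are made up):

    # 200K input and 30K output tokens:
    # 200000/1000 * 0.1  +  30000/1000 * 0.5  =  20 + 15  =  35 Wh
    echo "scale=1; 200000/1000*0.1 + 30000/1000*0.5" | bc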
Task 12: Add Related Work section (Section 21) to methodology covering
EcoLogits, CodeCarbon, AI Energy Score, Green Algorithms, Google/Jegham
published data, UNICC framework, and social cost research.
Task 13: Add specific citations and links for cognitive deskilling
(CHI 2025, Springer 2025, endoscopy study), linguistic homogenization
(UNESCO), and algorithmic monoculture (Stanford HAI).
Task 14: Add Related Tools section to toolkit README linking EcoLogits,
CodeCarbon, and AI Energy Score. Also updated toolkit energy values to
match calibrated methodology.
CC0-licensed methodology for estimating the environmental and social
costs of AI conversations (20+ categories), plus a reusable toolkit
for automated impact tracking in Claude Code sessions.