Moved four categories from "Unquantifiable/No" to "Proxy": cognitive
deskilling, code quality degradation, data pollution, and algorithmic
monoculture. Added an explanation of what each proxy measures and its
limitations.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Task 12: Add Related Work section (Section 21) to methodology covering
EcoLogits, CodeCarbon, AI Energy Score, Green Algorithms, Google/Jegham
published data, UNICC framework, and social cost research.
Task 13: Add specific citations and links for cognitive deskilling
(CHI 2025, Springer 2025, endoscopy study), linguistic homogenization
(UNESCO), and algorithmic monoculture (Stanford HAI).
Task 14: Add Related Tools section to toolkit README linking EcoLogits,
CodeCarbon, and AI Energy Score. Also updated toolkit energy values to
match calibrated methodology.
Task 10: Add "How this was made" section to README disclosing AI
collaboration and project costs. Landing page updated separately.
Task 11: Calibrate energy-per-token against Google (Patterson et al.,
Aug 2025) and "How Hungry is AI" (Jegham et al., May 2025). Previous
values (0.003/0.015 Wh per 1K tokens) were ~10-100x too low. Updated
to 0.05-0.3/0.25-1.5 Wh per 1K tokens with model-dependent ranges.
Worked example now produces ~246 Wh, consistent with headline figures.
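The calibrated per-token arithmetic above can be sketched as a small Python function. This is an illustrative sketch, not the toolkit's actual API: the function name, defaults, and the token counts in the example are hypothetical, chosen so the high-end (large-model) rates reproduce the ~246 Wh figure; only the Wh-per-1K-token ranges come from the commit message.

```python
def estimate_energy_wh(input_tokens, output_tokens,
                       wh_per_1k_in=0.05, wh_per_1k_out=0.25):
    """Estimate energy (Wh) for one conversation.

    Defaults use the low end of the calibrated ranges
    (0.05-0.3 Wh per 1K input tokens, 0.25-1.5 Wh per 1K
    output tokens); larger models sit toward the high end.
    Function name and structure are illustrative only.
    """
    return (input_tokens / 1000) * wh_per_1k_in + \
           (output_tokens / 1000) * wh_per_1k_out

# Hypothetical long session at the high-end (large-model) rates:
# 300K input tokens and 104K output tokens land near the worked
# example's ~246 Wh (90 Wh + 156 Wh).
print(round(estimate_energy_wh(300_000, 104_000,
                               wh_per_1k_in=0.3,
                               wh_per_1k_out=1.5), 1))
```

For comparison, the same session under the previous, uncalibrated values (0.003/0.015 Wh per 1K tokens) would come out to roughly 2.5 Wh, which is the ~100x gap the calibration closes.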
CC0-licensed methodology for estimating the environmental and social
costs of AI conversations (20+ categories), plus a reusable toolkit
for automated impact tracking in Claude Code sessions.