Landing page, nginx config, analytics scripts, and cost updater
are now tracked in git. update-costs.sh writes to both the live
(/home/claude/www/) and repo copies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix stale compute cost estimate ($500-1000 → $50-100), update toolkit
descriptions to mention social cost proxies, aggregate dashboard, and
review delta tool.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New show-review-delta.sh compares AI-edited files (from impact log)
against git commits to show overlap percentage. High overlap means
most committed code was AI-generated with minimal human review.
Completes Phase 2 of the quantify-social-costs plan.
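The overlap check amounts to a set intersection between the two file lists. A minimal sketch, with sample data standing in for the impact log and `git diff --name-only` output (paths and log format here are assumptions, not the real schema):

```shell
# Sketch: compare AI-edited files against files touched in git commits.
# Assumes one edited path per line; the real impact log is structured JSON.
cat > /tmp/ai_edited.txt <<'EOF'
src/app.sh
src/util.sh
README.md
EOF
cat > /tmp/committed.txt <<'EOF'
src/app.sh
README.md
docs/notes.md
EOF

# comm requires sorted input; -12 keeps only lines common to both lists.
sort -u /tmp/ai_edited.txt > /tmp/ai_sorted.txt
sort -u /tmp/committed.txt > /tmp/commit_sorted.txt
overlap=$(comm -12 /tmp/ai_sorted.txt /tmp/commit_sorted.txt | grep -c '')
total=$(grep -c '' /tmp/commit_sorted.txt)
echo "overlap: $overlap / $total committed files"
```

`grep -c ''` is used instead of `wc -l` so the counts come back without padding.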
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The hook now records which files were edited and how many times,
enabling future comparison with committed code to measure human
review effort (Phase 2 of quantify-social-costs plan).
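The per-file tally is a classic sort-and-count. A sketch under the assumption that the hook appends one edited path per line (the real hook writes structured JSON, and the log path here is hypothetical):

```shell
# Sketch: count how many times each file was edited in a session.
cat > /tmp/edits.log <<'EOF'
src/main.sh
src/main.sh
lib/parse.sh
src/main.sh
lib/parse.sh
EOF

# sort | uniq -c yields an edit count per unique path; sort -rn puts
# the most-edited file first.
top=$(sort /tmp/edits.log | uniq -c | sort -rn | awk 'NR==1{print $1, $2}')
echo "most edited: $top"
```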
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New show-aggregate.sh script computes cross-session metrics:
monoculture index, spend concentration by provider, automation
profile distribution, code quality signals, and data pollution
risk summary. Integrated into toolkit installer and README.
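One of these metrics can illustrate the shape of the computation: a simple monoculture index, i.e. the share of sessions using the single most common model. The field name `"model"` and the log path are assumptions about the impact-log.jsonl schema, and `sed` stands in for a real JSON parser:

```shell
# Sketch: monoculture index from a JSONL log. An index of total/total
# (1.0) would mean every session used the same model.
cat > /tmp/impact-log.jsonl <<'EOF'
{"session":"a","model":"claude-opus-4"}
{"session":"b","model":"claude-opus-4"}
{"session":"c","model":"claude-sonnet-4"}
{"session":"d","model":"claude-opus-4"}
EOF

top=$(sed -n 's/.*"model":"\([^"]*\)".*/\1/p' /tmp/impact-log.jsonl \
      | sort | uniq -c | sort -rn | awk 'NR==1{print $1}')
total=$(grep -c '' /tmp/impact-log.jsonl)
echo "monoculture index: $top/$total"
```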
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Four categories moved from "Unquantifiable/No" to "Proxy": cognitive
deskilling, code quality degradation, data pollution, algorithmic
monoculture. Added an explanation of what each proxy measures and its
limitations.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extend pre-compact-snapshot.sh to extract 5 new per-conversation
metrics from the transcript: automation ratio (deskilling proxy),
model ID (monoculture tracking), test pass/fail counts (code quality
proxy), file churn (edits per unique file), and public push detection
(data pollution risk flag). Update show-impact.sh to display them.
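The file-churn metric is total edits divided by unique files touched. A minimal sketch with a hypothetical input format of one path per edit (the real hook extracts this from the transcript):

```shell
# Sketch: file churn = edits per unique file. Higher churn suggests
# repeated rework of the same files within a session.
cat > /tmp/session_edits.txt <<'EOF'
a.sh
b.sh
a.sh
a.sh
c.sh
b.sh
EOF

edits=$(grep -c '' /tmp/session_edits.txt)
files=$(sort -u /tmp/session_edits.txt | grep -c '')
# awk handles the floating-point division portably.
churn=$(awk -v e="$edits" -v f="$files" 'BEGIN{printf "%.1f", e/f}')
echo "file churn: $churn edits per unique file"
```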
New plan: quantify-social-costs.md — roadmap for moving non-environmental
cost categories from qualitative to proxy-measurable.
Tasks 19-24 done. Task 25 (methodology update) pending.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The impact hooks have now captured 3 sessions of real data (295 Wh,
95g CO2, $98). Update the plan to show tracked numbers as a lower
bound and revise the rough total estimate downward.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
update-costs.sh reads impact-log.jsonl, deduplicates by session,
sums energy/CO2/cost, and updates the "How this was made" section
of the landing page in place.
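The dedupe-and-sum step can be sketched as follows. The field names (`"session"`, `"wh"`) and the sample values are assumptions about the impact-log.jsonl schema, and `sed`/`awk` stand in for a real JSON parser:

```shell
# Sketch: keep the first record per session id, then sum the energy field.
cat > /tmp/impact-log.jsonl <<'EOF'
{"session":"a","wh":100}
{"session":"a","wh":100}
{"session":"b","wh":120}
{"session":"c","wh":75}
EOF

total_wh=$(sed -n 's/.*"session":"\([^"]*\)".*"wh":\([0-9]*\).*/\1 \2/p' \
             /tmp/impact-log.jsonl \
           | awk '!seen[$1]++ {sum += $2} END {print sum}')
echo "total: $total_wh Wh"
```

The `!seen[$1]++` idiom is the deduplication: it is true only the first time a given session id appears.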
- Update energy values in hook scripts to match calibrated methodology
(0.1/0.5 Wh per 1K tokens, was 0.003/0.015)
- Fix license in toolkit README: CC0, not MIT
- Update H2 sharing framing to match "beyond carbon" positioning
Task 12: Added Related Work section (Section 21) to methodology.
Task 13: Added specific citations/links for deskilling, monoculture.
Task 14: Added Related Tools section to toolkit README.
Task 15: Revised landing page to lead with breadth beyond carbon.
Task 16: Created analytics.sh (nginx logs) and repo-stats.sh (Forgejo API).
Task 12: Add Related Work section (Section 21) to methodology covering
EcoLogits, CodeCarbon, AI Energy Score, Green Algorithms, Google/Jegham
published data, UNICC framework, and social cost research.
Task 13: Add specific citations and links for cognitive deskilling
(CHI 2025, Springer 2025, endoscopy study), linguistic homogenization
(UNESCO), and algorithmic monoculture (Stanford HAI).
Task 14: Add Related Tools section to toolkit README linking EcoLogits,
CodeCarbon, and AI Energy Score. Also updated toolkit energy values to
match calibrated methodology.
Task 10: Add "How this was made" section to README disclosing AI
collaboration and project costs. Landing page updated separately.
Task 11: Calibrate energy-per-token against Google (Patterson et al.,
Aug 2025) and "How Hungry is AI" (Jegham et al., May 2025). Previous
values (0.003/0.015 Wh per 1K tokens) were ~10-100x too low. Updated
to 0.05-0.3/0.25-1.5 Wh per 1K tokens with model-dependent ranges.
Worked example now produces ~246 Wh, consistent with headline figures.
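The range calculation behind these figures is straightforward. A sketch using the calibrated per-1K-token rates (0.05-0.3 Wh input, 0.25-1.5 Wh output) on hypothetical token counts, not the worked example's actual numbers:

```shell
# Sketch: low and high energy bounds from the calibrated rates.
# Token counts below are illustrative only.
in_tokens=800000
out_tokens=120000
low=$(awk -v i="$in_tokens" -v o="$out_tokens" \
       'BEGIN{printf "%.0f", i/1000*0.05 + o/1000*0.25}')
high=$(awk -v i="$in_tokens" -v o="$out_tokens" \
       'BEGIN{printf "%.0f", i/1000*0.3 + o/1000*1.5}')
echo "estimated energy: $low-$high Wh"
```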
CC0-licensed methodology for estimating the environmental and social
costs of AI conversations (20+ categories), plus a reusable toolkit
for automated impact tracking in Claude Code sessions.