Commit graph

14 commits

eaf0a6cbeb Add review delta tool to measure human review effort
New show-review-delta.sh compares AI-edited files (from the impact log)
against git commits to show the overlap percentage. High overlap means
most committed code was AI-generated with minimal human review.
Completes Phase 2 of the quantify-social-costs plan.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:12:49 +00:00
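The overlap computation described above can be sketched roughly as follows. This is a hypothetical simplification, not the actual show-review-delta.sh: the real tool reads the impact log and git history, while the file lists here are made-up stand-ins.

```shell
# Stand-in file lists (the real script derives these from the impact
# log and from `git log --name-only`).
ai_edited='docs/c.md
src/a.sh
src/b.sh'
committed='README.md
docs/c.md
src/a.sh'
# Each list is sorted and duplicate-free, so lines duplicated across
# the concatenation are exactly the intersection of the two lists.
overlap=$(printf '%s\n%s\n' "$ai_edited" "$committed" | sort | uniq -d | grep -c '')
total=$(printf '%s\n' "$committed" | grep -c '')
echo "overlap: $((overlap * 100 / total))% of committed files were AI-edited"
```

The `sort | uniq -d` trick avoids process substitution, so the sketch runs under plain POSIX sh.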
ad06b12e50 Log edited file list in impact hook for review delta analysis
The hook now records which files were edited and how many times,
enabling future comparison with committed code to measure human
review effort (Phase 2 of quantify-social-costs plan).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:11:30 +00:00
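The per-file edit tally the hook records might look like this in miniature. A hedged sketch only: the real hook consumes tool-call events from the session, and these paths are invented.

```shell
# One line per edit event (stand-in data; the hook sees real paths).
edits='src/a.sh
src/a.sh
src/b.sh'
# Count edits per unique file, emitting "path:count" pairs.
counts=$(printf '%s\n' "$edits" | sort | uniq -c | awk '{print $2 ":" $1}')
echo "$counts"
```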
60eca18c85 Add aggregate dashboard for portfolio-level social cost metrics
New show-aggregate.sh script computes cross-session metrics:
monoculture index, spend concentration by provider, automation
profile distribution, code quality signals, and data pollution
risk summary. Integrated into toolkit installer and README.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:09:28 +00:00
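One of the cross-session metrics, the monoculture index, can be illustrated as the share of sessions attributable to the single most-used model. This is an assumed definition for illustration, not the actual show-aggregate.sh logic, and the model IDs are made up.

```shell
# One line per session: the model ID that served it (stand-in data).
models='claude-opus-4
claude-opus-4
claude-opus-4
gpt-5'
# Count sessions per model, take the largest count.
top=$(printf '%s\n' "$models" | sort | uniq -c | sort -rn | head -1 | awk '{print $1}')
total=$(printf '%s\n' "$models" | grep -c '')
echo "monoculture index: $((top * 100 / total))%"
```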
1b8f9a165e Update methodology confidence summary with proxy metrics
Four categories moved from "Unquantifiable/No" to "Proxy": cognitive
deskilling, code quality degradation, data pollution, algorithmic
monoculture. Added explanation of what each proxy measures and its
limitations.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:06:29 +00:00
af6062c1f9 Add social cost proxies to impact tracking hooks
Extend pre-compact-snapshot.sh to extract 5 new per-conversation
metrics from the transcript: automation ratio (deskilling proxy),
model ID (monoculture tracking), test pass/fail counts (code quality
proxy), file churn (edits per unique file), and public push detection
(data pollution risk flag). Update show-impact.sh to display them.

New plan: quantify-social-costs.md — roadmap for moving non-environmental
cost categories from qualitative to proxy-measurable.

Tasks 19-24 done. Task 25 (methodology update) pending.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:05:53 +00:00
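The automation-ratio proxy could work along these lines: the share of assistant turns that invoke a tool rather than produce prose. A simplified stand-in for the pre-compact-snapshot.sh extraction, with an invented flat transcript format (the real transcript is JSON).

```shell
# One line per assistant turn (stand-in format; real input is JSONL).
transcript='assistant tool_use Edit
assistant text
assistant tool_use Bash
assistant tool_use Edit
assistant text'
tool_turns=$(printf '%s\n' "$transcript" | grep -c 'tool_use')
all_turns=$(printf '%s\n' "$transcript" | grep -c '^assistant')
echo "automation ratio: $((tool_turns * 100 / all_turns))%"
```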
09840fa9d3 Task 18: automate project cost estimates on landing page
update-costs.sh reads impact-log.jsonl, deduplicates by session,
sums energy/CO2/cost, and updates the "How this was made" section
of the landing page in place.
2026-03-16 10:57:48 +00:00
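The dedupe-and-sum step can be sketched with awk: keep the last record per session ID, then total the cost column. Not the actual update-costs.sh, which parses impact-log.jsonl; the two-column format and figures here are invented.

```shell
# Stand-in log: "session_id cost_usd", one record per snapshot; later
# snapshots of the same session supersede earlier ones.
log='s1 0.10
s1 0.25
s2 0.40'
# cost[$1]=$2 overwrites on repeat, so only the last record per
# session survives; END sums the survivors.
total=$(printf '%s\n' "$log" | awk '{cost[$1]=$2} END {for (s in cost) t += cost[s]; printf "%.2f", t}')
echo "total cost: \$$total USD"
```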
663b0c3595 Add task 18: automate project cost estimates on landing page 2026-03-16 10:55:32 +00:00
2bfe786a6f Fix pre-launch inconsistencies
- Update energy values in hook scripts to match calibrated methodology
  (0.1/0.5 Wh per 1K tokens, was 0.003/0.015)
- Fix license in toolkit README: CC0, not MIT
- Update H2 sharing framing to match "beyond carbon" positioning
2026-03-16 10:49:58 +00:00
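A worked example of the calibrated rates above. Assigning 0.1 Wh/1K tokens to input and 0.5 Wh/1K tokens to output is my assumption, and the token counts are made up.

```shell
# 50K input tokens at 0.1 Wh/1K + 10K output tokens at 0.5 Wh/1K.
in_kt=50
out_kt=10
wh=$(awk -v i="$in_kt" -v o="$out_kt" 'BEGIN { printf "%.1f", i * 0.1 + o * 0.5 }')
echo "${wh} Wh"   # 5 + 5
```

Under the old rates (0.003/0.015) the same conversation would have come out at roughly 0.3 Wh, a ~33x difference, which is why the hook scripts had to be updated together with the methodology.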
7ac6225538 Tasks 12-16: related work, citations, tool links, landing page, analytics
Task 12: Added Related Work section (Section 21) to methodology.
Task 13: Added specific citations/links for deskilling, monoculture.
Task 14: Added Related Tools section to toolkit README.
Task 15: Revised landing page to lead with breadth beyond carbon.
Task 16: Created analytics.sh (nginx logs) and repo-stats.sh (Forgejo API).
2026-03-16 10:45:24 +00:00
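The nginx-log side of task 16 reduces to counting distinct client IPs. A hypothetical fragment, not the actual analytics.sh; the combined-format log lines are fabricated.

```shell
# Stand-in nginx combined-format access log (field 1 is the client IP).
log='1.2.3.4 - - [16/Mar/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512
5.6.7.8 - - [16/Mar/2026:10:05:00 +0000] "GET /methodology HTTP/1.1" 200 2048
1.2.3.4 - - [16/Mar/2026:11:00:00 +0000] "GET /toolkit HTTP/1.1" 200 1024'
uniq_ips=$(printf '%s\n' "$log" | awk '{print $1}' | sort -u | grep -c '')
echo "unique visitors: $uniq_ips"
```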
9653f69860 Mark tasks 10-11 as done 2026-03-16 10:38:29 +00:00
67e86d1b6b Add pre-launch tasks 10-17 derived from new plans
8 new tasks covering AI authorship transparency, estimate calibration,
related work, citations, landing page revision, analytics, and DOI.
2026-03-16 10:35:46 +00:00
974e52ae50 Add detailed H2 sharing instructions with framing and venues 2026-03-16 10:07:33 +00:00
b0afef0de3 Update plans and tasks to reflect completed publication
Forgejo instance is live at llm-impact.org with landing page.
H1 and H3 are done, H2 (external sharing) remains.
2026-03-16 10:04:32 +00:00
0543a43816 Initial commit: AI conversation impact methodology and toolkit
CC0-licensed methodology for estimating the environmental and social
costs of AI conversations (20+ categories), plus a reusable toolkit
for automated impact tracking in Claude Code sessions.
2026-03-16 09:46:49 +00:00