Commit graph

19 commits

ad06b12e50 Log edited file list in impact hook for review delta analysis
The hook now records which files were edited and how many times,
enabling future comparison with committed code to measure human
review effort (Phase 2 of quantify-social-costs plan).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:11:30 +00:00
60eca18c85 Add aggregate dashboard for portfolio-level social cost metrics
New show-aggregate.sh script computes cross-session metrics:
monoculture index, spend concentration by provider, automation
profile distribution, code quality signals, and data pollution
risk summary. Integrated into toolkit installer and README.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:09:28 +00:00
1b8f9a165e Update methodology confidence summary with proxy metrics
4 categories moved from "Unquantifiable/No" to "Proxy": cognitive
deskilling, code quality degradation, data pollution, algorithmic
monoculture. Added explanation of what each proxy measures and its
limitations.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:06:29 +00:00
af6062c1f9 Add social cost proxies to impact tracking hooks
Extend pre-compact-snapshot.sh to extract 5 new per-conversation
metrics from the transcript: automation ratio (deskilling proxy),
model ID (monoculture tracking), test pass/fail counts (code quality
proxy), file churn (edits per unique file), and public push detection
(data pollution risk flag). Update show-impact.sh to display them.

New plan: quantify-social-costs.md — roadmap for moving non-environmental
cost categories from qualitative to proxy-measurable.

Tasks 19-24 done. Task 25 (methodology update) pending.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 15:05:53 +00:00
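One of the five metrics above, file churn (edits per unique file), can be sketched as below. This is a hypothetical illustration only: it assumes an edited-files list with one path per line has already been extracted from the transcript, and the sample paths are made up; the real extraction logic inside pre-compact-snapshot.sh is not shown in this log.

```shell
# Hypothetical input: one line per edit event (paths are illustrative)
edited_files="src/a.sh
src/a.sh
src/b.sh
README.md"

# Total edit events vs. distinct files touched
total=$(printf '%s\n' "$edited_files" | wc -l | tr -d ' ')
unique=$(printf '%s\n' "$edited_files" | sort -u | wc -l | tr -d ' ')
# awk handles the floating-point division that plain shell arithmetic cannot
churn=$(printf '%s %s' "$total" "$unique" | awk '{printf "%.2f", $1 / $2}')
echo "edits=$total unique_files=$unique churn=$churn"
```

A churn well above 1.0 means the same files were rewritten repeatedly within a session, which is the signal the proxy is after.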
e6e0bf4616 Update README cost estimates to match tracked data
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 14:28:01 +00:00
b20d1cd9ff Update project cost estimates with tracked data
The impact hooks have now captured 3 sessions of real data (295 Wh,
95g CO2, $98). Update the plan to show tracked numbers as a lower
bound and revise the rough total estimate downward.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 12:08:24 +00:00
09840fa9d3 Task 18: automate project cost estimates on landing page
update-costs.sh reads impact-log.jsonl, deduplicates by session,
sums energy/CO2/cost, and updates the "How this was made" section
of the landing page in place.
2026-03-16 10:57:48 +00:00
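The deduplicate-then-sum step described above can be sketched with jq. The field names (session_id, energy_wh, co2_g, cost_usd) are assumptions, since the impact-log.jsonl schema is not shown in this log; the sample rows are constructed so their session-deduplicated totals match the tracked figures quoted later (295 Wh, 95 g, $98).

```shell
# Build a small stand-in for impact-log.jsonl (schema assumed, see above).
# Session "a" appears twice to exercise the dedup step.
log=$(mktemp)
cat > "$log" <<'EOF'
{"session_id":"a","energy_wh":100,"co2_g":32,"cost_usd":33}
{"session_id":"a","energy_wh":100,"co2_g":32,"cost_usd":33}
{"session_id":"b","energy_wh":195,"co2_g":63,"cost_usd":65}
EOF

# -s slurps the JSONL into one array; unique_by keeps one record per session
totals=$(jq -sc 'unique_by(.session_id)
  | {wh:  (map(.energy_wh) | add),
     co2: (map(.co2_g)     | add),
     usd: (map(.cost_usd)  | add)}' "$log")
echo "$totals"
rm -f "$log"
```

The real script would then splice these totals into the "How this was made" section of the landing page; that templating step is omitted here.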
663b0c3595 Add task 18: automate project cost estimates on landing page 2026-03-16 10:55:32 +00:00
2bfe786a6f Fix pre-launch inconsistencies
- Update energy values in hook scripts to match calibrated methodology
  (0.1/0.5 Wh per 1K tokens, was 0.003/0.015)
- Fix license in toolkit README: CC0, not MIT
- Update H2 sharing framing to match "beyond carbon" positioning
2026-03-16 10:49:58 +00:00
7ac6225538 Tasks 12-16: related work, citations, tool links, landing page, analytics
Task 12: Added Related Work section (Section 21) to methodology.
Task 13: Added specific citations/links for deskilling, monoculture.
Task 14: Added Related Tools section to toolkit README.
Task 15: Revised landing page to lead with breadth beyond carbon.
Task 16: Created analytics.sh (nginx logs) and repo-stats.sh (Forgejo API).
2026-03-16 10:45:24 +00:00
c619c31caf Tasks 12-14: Related work, citations, complementary tool links
Task 12: Add Related Work section (Section 21) to methodology covering
EcoLogits, CodeCarbon, AI Energy Score, Green Algorithms, Google/Jegham
published data, UNICC framework, and social cost research.

Task 13: Add specific citations and links for cognitive deskilling
(CHI 2025, Springer 2025, endoscopy study), linguistic homogenization
(UNESCO), and algorithmic monoculture (Stanford HAI).

Task 14: Add Related Tools section to toolkit README linking EcoLogits,
CodeCarbon, and AI Energy Score. Also updated toolkit energy values to
match calibrated methodology.
2026-03-16 10:43:51 +00:00
9653f69860 Mark tasks 10-11 as done 2026-03-16 10:38:29 +00:00
a9403fe128 Tasks 10-11: AI authorship transparency + calibrate energy estimates
Task 10: Add "How this was made" section to README disclosing AI
collaboration and project costs. Landing page updated separately.

Task 11: Calibrate energy-per-token against Google (Patterson et al.,
Aug 2025) and "How Hungry is AI" (Jegham et al., May 2025). Previous
values (0.003/0.015 Wh per 1K tokens) were ~10-100x too low. Updated
to 0.05-0.3/0.25-1.5 Wh per 1K tokens with model-dependent ranges.
Worked example now produces ~246 Wh, consistent with headline figures.
2026-03-16 10:38:12 +00:00
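The calibrated estimate has the shape sketched below. The per-1K-token rates (0.1 Wh input, 0.5 Wh output) are the mid-range point values the hook scripts use per the calibration above; the token counts are invented for illustration and are not the worked example's actual inputs, so the result here is not the ~246 Wh headline figure.

```shell
in_tokens=900000   # hypothetical input-token total for one session
out_tokens=180000  # hypothetical output-token total
in_rate=0.1        # Wh per 1K input tokens (mid-range of 0.05-0.3)
out_rate=0.5       # Wh per 1K output tokens (mid-range of 0.25-1.5)

# energy_Wh = (in_tokens/1000)*in_rate + (out_tokens/1000)*out_rate
energy_wh=$(awk -v i="$in_tokens" -v o="$out_tokens" \
                -v ri="$in_rate" -v ro="$out_rate" \
  'BEGIN { printf "%.0f", (i / 1000) * ri + (o / 1000) * ro }')
echo "${energy_wh} Wh"   # 900*0.1 + 180*0.5 = 90 + 90 = 180 Wh
```

Output tokens dominate at these rates, which is why the 10-100x correction to the old values moved the headline figures so much.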
67e86d1b6b Add pre-launch tasks 10-17 derived from new plans
8 new tasks covering AI authorship transparency, estimate calibration,
related work, citations, landing page revision, analytics, and DOI.
2026-03-16 10:35:46 +00:00
735ac1cc4b Add anticipated criticisms plan
Identifies 8 likely criticisms, prioritizes which must be addressed
before launch. AI authorship transparency is the highest priority.
2026-03-16 10:32:49 +00:00
f882b30030 Add pre-launch plans: competitive landscape, audience, impact measurement
- Competitive landscape: maps existing tools (CodeCarbon, EcoLogits, etc.)
  and research, identifies our unique positioning (breadth beyond carbon)
- Audience analysis: identifies 5 segments, recommends targeting ethics/
  governance professionals and developers first
- Project impact measurement: defines success thresholds and metrics to
  determine whether the project itself is net-positive
2026-03-16 10:21:00 +00:00
974e52ae50 Add detailed H2 sharing instructions with framing and venues 2026-03-16 10:07:33 +00:00
b0afef0de3 Update plans and tasks to reflect completed publication
Forgejo instance is live at llm-impact.org with landing page.
H1 and H3 are done, H2 (external sharing) remains.
2026-03-16 10:04:32 +00:00
0543a43816 Initial commit: AI conversation impact methodology and toolkit
CC0-licensed methodology for estimating the environmental and social
costs of AI conversations (20+ categories), plus a reusable toolkit
for automated impact tracking in Claude Code sessions.
2026-03-16 09:46:49 +00:00