# AI Conversation Impact
A framework for estimating the full cost of conversations with large language models — environmental, financial, social, and political — and tools for tracking that cost over time.
## Why
A single long conversation with a frontier LLM consumes on the order of 100-250 Wh of energy, emits 30-80g of CO2, and costs $500-1000 in compute. Most of this cost is invisible to the user. This project makes it visible.
## What's here

- `impact-methodology.md` — A methodology covering 20+ cost categories, from inference energy to cognitive deskilling to political power concentration. Includes positive-impact metrics (reach, counterfactual, durability) and a net-impact rubric.
- `impact-toolkit/` — A ready-to-install toolkit for Claude Code that automatically tracks token usage, energy, CO2, and cost on each context compaction. Includes a manual annotation tool for recording positive impact.
- `CLAUDE.md` — Instructions for an AI assistant to pursue net-positive impact: estimate costs before acting, maximize value per token, multiply impact through reach, and be honest when the arithmetic doesn't work out.
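The kind of per-conversation accounting the toolkit performs can be sketched as a simple token-to-impact conversion. The function name and all three constants below are illustrative assumptions for the sketch, not the toolkit's calibrated values — see `impact-methodology.md` for the real ranges and their (low) confidence levels:

```python
# Hypothetical sketch of a per-conversation impact estimate.
# The constants are placeholder assumptions, chosen only to show the shape
# of the arithmetic; they are not the toolkit's calibrated figures.

WH_PER_1K_TOKENS = 0.06     # assumed inference energy per 1k tokens
GRAMS_CO2_PER_WH = 0.35     # assumed grid carbon intensity (g CO2 per Wh)
USD_PER_1K_TOKENS = 0.015   # assumed blended API price per 1k tokens

def estimate_impact(input_tokens: int, output_tokens: int) -> dict:
    """Return rough energy, CO2, and cost figures for one conversation."""
    total_k = (input_tokens + output_tokens) / 1000
    energy_wh = total_k * WH_PER_1K_TOKENS
    return {
        "energy_wh": energy_wh,
        "co2_g": energy_wh * GRAMS_CO2_PER_WH,
        "cost_usd": total_k * USD_PER_1K_TOKENS,
    }

# Example: a long session with 800k input and 40k output tokens.
print(estimate_impact(800_000, 40_000))
```

The point of the sketch is that every figure is a product of uncertain factors, so the output inherits the uncertainty of whichever constant is least well known — which is why the methodology reports ranges rather than point estimates.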
## Install the toolkit

```sh
cd your-project
/path/to/impact-toolkit/install.sh
```

See `impact-toolkit/README.md` for details.
## Limitations
Most estimates have low confidence. Many of the most consequential costs (deskilling, data pollution, power concentration) cannot be quantified. The quantifiable costs are almost certainly the least important ones. This is a tool for honest approximation, not precise accounting.
## How this was made
This project was developed by a human directing Claude (Anthropic's AI assistant) across multiple conversations. The methodology was applied to itself: we estimate the project consumed ~$500-1,000 in compute, ~500-2,500 Wh of energy, and ~150-800g of CO2 across all sessions (3 tracked sessions account for ~295 Wh, ~95g CO2, ~$98). Whether it produces enough value to justify those costs is an open question we are tracking.
## Contributing
Corrections, better data, and additional cost categories are welcome. The methodology has known gaps — see Section 21 for what would improve the estimates.
## License
CC0 1.0 Universal — public domain. No restrictions on use.