

# AI Conversation Impact
A framework for estimating the full cost of conversations with large
language models — environmental, financial, social, and political — and
tools for tracking that cost over time.
## Why
A single long conversation with a frontier LLM consumes on the order of
100-250 Wh of energy, emits 30-80g of CO2, and costs $500-1000 in
compute. Most of this cost is invisible to the user. This project makes
it visible.
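The arithmetic behind these figures can be sketched as a back-of-envelope estimate. All constants below (per-token energy and grid carbon intensity) are illustrative assumptions for this sketch, not measured values from the methodology:

```python
def conversation_impact(input_tokens, output_tokens,
                        wh_per_1k_in=0.1,     # assumed Wh per 1K input tokens
                        wh_per_1k_out=1.0,    # assumed Wh per 1K output tokens
                        g_co2_per_kwh=300):   # assumed grid carbon intensity
    """Rough energy and CO2 estimate for one conversation."""
    energy_wh = ((input_tokens / 1000) * wh_per_1k_in
                 + (output_tokens / 1000) * wh_per_1k_out)
    co2_g = (energy_wh / 1000) * g_co2_per_kwh
    return {"energy_wh": round(energy_wh, 1), "co2_g": round(co2_g, 1)}

# A long conversation: 500K input tokens (context reprocessed on every
# turn) and 100K output tokens.
print(conversation_impact(500_000, 100_000))
# → {'energy_wh': 150.0, 'co2_g': 45.0}
```

Output tokens dominate because generation is assumed to cost roughly 10x as much energy per token as input processing; the result lands inside the 100-250 Wh and 30-80g ranges quoted above.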
## What's here
- **[impact-methodology.md](impact-methodology.md)** — A methodology
covering 20+ cost categories, from inference energy to cognitive
deskilling to political power concentration. Includes positive impact
metrics (reach, counterfactual, durability) and a net impact rubric.
- **[impact-toolkit/](impact-toolkit/)** — A ready-to-install toolkit
for [Claude Code](https://claude.ai/claude-code) that automatically
tracks token usage, energy, CO2, and cost on each context compaction.
Includes a manual annotation tool for recording positive impact.
- **[CLAUDE.md](CLAUDE.md)** — Instructions for an AI assistant to
pursue net-positive impact: estimate costs before acting, maximize
value per token, multiply impact through reach, and be honest when
the arithmetic doesn't work out.
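To picture what the toolkit records, the sketch below accumulates per-compaction token counts in an append-only log alongside manual annotations. The names and fields here are hypothetical, not the toolkit's actual format:

```python
import json

LOG = []  # stand-in for the toolkit's append-only usage log

def record_compaction(input_tokens, output_tokens, note=""):
    """Append one compaction event (hypothetical field names)."""
    LOG.append({"in": input_tokens, "out": output_tokens, "note": note})

def totals():
    """Sum token counts across all recorded compactions."""
    return {"in": sum(e["in"] for e in LOG),
            "out": sum(e["out"] for e in LOG)}

record_compaction(120_000, 30_000, note="drafted methodology section")
record_compaction(95_000, 22_000, note="reviewed install script")
print(json.dumps(totals()))
# → {"in": 215000, "out": 52000}
```

Cumulative totals like these are what the energy, CO2, and cost estimates are derived from, so the log only needs token counts plus whatever annotations record positive impact.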
## Install the toolkit
```bash
cd your-project
/path/to/impact-toolkit/install.sh
```
See [impact-toolkit/README.md](impact-toolkit/README.md) for details.
## Limitations
Most estimates have low confidence. Many of the most consequential costs
(deskilling, data pollution, power concentration) cannot be quantified.
The quantifiable costs are almost certainly the least important ones.
This is a tool for honest approximation, not precise accounting.
## How this was made
This project was developed by a human directing
[Claude](https://claude.ai) (Anthropic's AI assistant) across multiple
conversations. The methodology was applied to itself: we estimate the
project consumed ~$2,500-10,000 in compute, ~500-2,500 Wh of energy,
and ~150-800g of CO2 across all sessions. Whether it produces enough
value to justify those costs is [an open question we are tracking](plans/measure-project-impact.md).
## Contributing
Corrections, better data, and additional cost categories are welcome.
The methodology has known gaps; see Section 21 of
[impact-methodology.md](impact-methodology.md) for what would improve
the estimates.
## License
[CC0 1.0 Universal](LICENSE) — public domain. No restrictions on use.