CC0-licensed methodology for estimating the environmental and social costs of AI conversations (20+ categories), plus a reusable toolkit for automated impact tracking in Claude Code sessions.
Plan: Make the impact measurement tooling reusable
Target sub-goals: 7 (reach), 8 (teach), 9 (outlast the conversation)
Problem
The PreCompact hook, impact log, and show-impact script work but are hardcoded to this project's directory structure and Claude Code's hook system. Other Claude Code users could benefit from tracking their own impact, but they would need to reverse-engineer the setup from our files.
Actions
- Package the tooling as a standalone kit. Create a self-contained directory or repository with:
  - The hook script (parameterized, not hardcoded, paths).
  - The show-impact viewer.
  - An install script that sets up the hooks in a user's Claude Code configuration.
  - A README explaining what it measures, how, and what the numbers mean.
- Improve accuracy. Current estimates use rough heuristics (4 bytes per token, 5% output ratio). Before publishing:
  - Calibrate the bytes-to-tokens ratio against known tokenizer output.
  - Improve the output token estimate (currently a fixed fraction).
  - Add water usage estimates (currently missing from the tooling).
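The calibration step could be as simple as the sketch below: run sample texts through a real tokenizer once, record the counts, and derive a ratio to replace the hardcoded 4-bytes-per-token heuristic. The sample token counts shown in the usage note are illustrative placeholders, not real tokenizer output.

```python
def calibrate_bytes_per_token(samples: list[tuple[str, int]]) -> float:
    """Return total UTF-8 bytes divided by total tokens across samples.

    Each sample pairs a text with its token count as reported by an
    actual tokenizer run (counts must be measured, not guessed).
    """
    total_bytes = sum(len(text.encode("utf-8")) for text, _ in samples)
    total_tokens = sum(tokens for _, tokens in samples)
    return total_bytes / total_tokens
```

Usage would look like `calibrate_bytes_per_token([("Hello, world!", 4), ...])` with counts taken from the target tokenizer; pooling bytes and tokens across all samples weights longer texts more heavily, which matches how the ratio is applied to whole transcripts.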
- Publish as an open-source repository (it can share a repo with the methodology from publish-methodology.md).
Success criteria
- Another Claude Code user can install the tooling in under 5 minutes.
- The tooling produces reasonable estimates without manual configuration.
Honest assessment
Moderate leverage. The audience (Claude Code users who care about impact) is niche but growing. The tooling is simple enough that packaging cost is low. Main risk: the estimates are rough enough that they could convey false precision. Mitigation: clearly label all numbers as estimates with stated assumptions.