# Plan: Measure the positive impact of this project
Target sub-goals: 2 (measure impact), 12 (honest arithmetic)
## Problem
We built a framework for measuring AI conversation impact but have no plan for measuring the impact of the framework itself. Without this, we cannot know whether the project is net-positive.
## Costs of the project so far
Tracked data (3 sessions with impact hooks active):
- ~295 Wh energy, ~95g CO2, ~$98 compute
These numbers are a lower bound. Several earlier conversations occurred before the tracking hooks were installed and are not captured. Rough total estimate including untracked sessions:
- ~5-10 long conversations total × ~$50-100 compute each = ~$250-1,000
- ~500-2,500 Wh energy, ~150-800g CO2
- VPS + domain ongoing: ~$10-20/month
- Human time: significant (harder to quantify)
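The rough ranges above can be re-derived with a quick sketch, in the spirit of the "honest arithmetic" sub-goal. The per-session figures are the estimate's assumptions, not measurements:

```python
# Consistency check on the rough totals. Per-session ranges are the
# estimate's assumptions; the tracked totals are the measured lower bound.
sessions = (5, 10)                # assumed number of long conversations
usd_per_session = (50, 100)       # assumed compute cost per conversation (USD)
wh_per_session = (100, 250)       # assumed energy per conversation (Wh)

usd_range = (sessions[0] * usd_per_session[0], sessions[1] * usd_per_session[1])
wh_range = (sessions[0] * wh_per_session[0], sessions[1] * wh_per_session[1])
print(f"compute: ${usd_range[0]}-{usd_range[1]}")   # $250-1000
print(f"energy:  {wh_range[0]}-{wh_range[1]} Wh")   # 500-2500 Wh

# The tracked totals (3 sessions) must sit at or below the estimate's ceiling.
tracked_usd, tracked_wh = 98, 295
assert tracked_usd <= usd_range[1] and tracked_wh <= wh_range[1]
```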
## What "net-positive" would look like
The project is net-positive if the value it creates exceeds these costs. Given the scale of costs, the value must reach significantly beyond one person. Concretely:
### Threshold 1: Minimal justification
- 10+ people read the methodology and find it useful
- 1+ external correction improves accuracy
- 1+ other project adopts the toolkit or cites the methodology
### Threshold 2: Clearly net-positive
- 100+ unique visitors who engage (not just bounce)
- 5+ external contributions (issues, corrections, adaptations)
- Cited in 1+ academic paper or policy document
- 1+ organization uses the framework for actual reporting
### Threshold 3: High impact
- Adopted or referenced by a standards body or major org
- Influences how other AI tools report their environmental impact
- Methodology contributes to regulatory implementation (EU AI Act, etc.)
## What to measure
### Quantitative (automated where possible)
| Metric | How to measure | Tool |
|---|---|---|
| Unique visitors | Web server logs | nginx access log analysis |
| Page engagement | Time on page, scroll depth | Minimal JS or log analysis |
| Repository views | Forgejo built-in stats | Forgejo admin panel |
| Stars / forks | Forgejo API | Script or manual check |
| Issues opened | Forgejo API | Notification |
| External links | Referrer logs, web search | nginx logs + periodic search |
| Citations | Google Scholar alerts | Manual periodic check |
### Qualitative (manual)
| Metric | How to measure |
|---|---|
| Quality of feedback | Read issues, assess substance |
| Adoption evidence | Search for references to the project |
| Influence on policy/standards | Monitor EU AI Act implementation, NIST |
| Corrections received | Count and assess accuracy improvements |
## Implementation
### Phase 1: Basic analytics (before launch)
- Set up nginx access log rotation and a simple log analysis script (no third-party analytics — respect visitors, minimize infrastructure)
- Create a script that queries Forgejo API for repo stats (stars, forks, issues, unique cloners)
- Add a `project-impact-log.md` file to track observations manually
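One way the log-analysis script might look — a minimal sketch that counts unique client IPs per day, assuming nginx's default "combined" log format (the sample lines are illustrative; in production the input would be the rotated access log):

```python
import re
from collections import defaultdict

# Matches the client IP and date from nginx's default "combined" format, e.g.
# 203.0.113.5 - - [01/Jan/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4})')

def unique_visitors(lines):
    """Return {date: set of client IPs} from an iterable of access-log lines."""
    visitors = defaultdict(set)
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            ip, date = m.groups()
            visitors[date].add(ip)
    return visitors

sample = [
    '203.0.113.5 - - [01/Jan/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.5 - - [01/Jan/2025:10:01:00 +0000] "GET /methodology HTTP/1.1" 200 900',
    '198.51.100.7 - - [02/Jan/2025:08:30:00 +0000] "GET / HTTP/1.1" 200 512',
]
for date, ips in sorted(unique_visitors(sample).items()):
    print(f"{date}: {len(ips)} unique IPs")
```

Counting distinct IPs is a crude proxy (shared NATs undercount, crawlers inflate), but it gives a rough engagement signal without any third-party analytics, matching the privacy constraint above.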
### Phase 2: After launch
- Check metrics weekly for the first month, then monthly
- Record observations in `project-impact-log.md`
- At 3 months post-launch, write an honest assessment: did the project reach net-positive?
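The periodic metrics check could be scripted against Forgejo's Gitea-compatible REST API. This is a sketch: the instance URL and repository path are placeholders, and the field names (`stars_count`, `forks_count`, `open_issues_count`) follow the Gitea API that Forgejo inherits:

```python
import json
from urllib.request import urlopen

def fetch_repo(base_url, owner, repo):
    """Fetch repository metadata from a Forgejo/Gitea instance (Gitea v1 API)."""
    with urlopen(f"{base_url}/api/v1/repos/{owner}/{repo}") as resp:
        return json.load(resp)

def summarize(repo_data):
    """Keep only the fields worth recording in the impact log."""
    fields = ("stars_count", "forks_count", "open_issues_count")
    return {k: repo_data.get(k, 0) for k in fields}

# Example usage (placeholder instance and repo names):
# stats = summarize(fetch_repo("https://git.example.org", "owner", "llm-impact"))
# print(stats)
```

Running this on a schedule (e.g. a weekly cron job appending to `project-impact-log.md`) keeps the record up to date without manual effort.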
### Phase 3: Long-term
- Set up a Google Scholar alert for the methodology title
- Periodically search for references to llm-impact.org
- If the project is clearly net-negative at 6 months (no engagement, no corrections, no adoption), acknowledge it honestly in the README
## Honest assessment
The most likely outcome is low engagement. Most open-source projects get no traction. The methodology's value depends on whether the right people find it — AI ethics researchers and sustainability-minded developers. The landing page and initial sharing strategy are critical.
If the project fails to reach threshold 1 within 3 months, we should consider whether the energy spent maintaining the VPS is justified, or whether the content should be archived as a static document and the infrastructure shut down.