Add pre-launch plans: competitive landscape, audience, impact measurement
- Competitive landscape: maps existing tools (CodeCarbon, EcoLogits, etc.) and research, identifies our unique positioning (breadth beyond carbon)
- Audience analysis: identifies 5 segments, recommends targeting ethics/governance professionals and developers first
- Project impact measurement: defines success thresholds and metrics to determine whether the project itself is net-positive
Parent: 974e52ae50
Commit: f882b30030
4 changed files with 290 additions and 0 deletions
plans/measure-project-impact.md (new file, 102 lines)
# Plan: Measure the positive impact of this project

**Target sub-goals**: 2 (measure impact), 12 (honest arithmetic)

## Problem

We built a framework for measuring AI conversation impact but have no
plan for measuring the impact of the framework itself. Without this,
we cannot know whether the project is net-positive.

## Costs of the project so far

Rough estimates across all conversations:

- ~5-10 long conversations × ~$500-1000 compute each = **$2,500-10,000**
- ~500-2,500 Wh energy, ~150-800g CO2
- VPS + domain ongoing: ~$10-20/month
- Human time: significant (harder to quantify)

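In the spirit of sub-goal 12 (honest arithmetic), the ranges above can be sanity-checked in a few lines. This is a sketch: every input is one of the rough estimates from the list, not a measured value.

```python
# Sanity-check the rough cost ranges listed above.
# All inputs are the estimates from the list, not measured values.
conversations = (5, 10)         # low, high conversation count
compute_per_conv = (500, 1000)  # USD per conversation (estimate)

compute_low = conversations[0] * compute_per_conv[0]
compute_high = conversations[1] * compute_per_conv[1]
print(f"Compute: ${compute_low:,}-${compute_high:,}")

# Ongoing infrastructure over one year at $10-20/month
vps_low, vps_high = 10 * 12, 20 * 12
print(f"VPS + domain per year: ${vps_low}-${vps_high}")
```

Running this reproduces the **$2,500-10,000** compute range from the list and adds roughly $120-240/year of infrastructure on top.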
## What "net-positive" would look like

The project is net-positive if the value it creates exceeds these costs.
Given the scale of costs, the value must reach significantly beyond one
person. Concretely:

### Threshold 1: Minimal justification

- 10+ people read the methodology and find it useful
- 1+ external correction improves accuracy
- 1+ other project adopts the toolkit or cites the methodology

### Threshold 2: Clearly net-positive

- 100+ unique visitors who engage (not just bounce)
- 5+ external contributions (issues, corrections, adaptations)
- Cited in 1+ academic paper or policy document
- 1+ organization uses the framework for actual reporting

### Threshold 3: High impact

- Adopted or referenced by a standards body or major org
- Influences how other AI tools report their environmental impact
- Methodology contributes to regulatory implementation (EU AI Act, etc.)

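Thresholds 1 and 2 are mechanical enough to check in code; a hypothetical sketch follows. The metric names are invented here for illustration (nothing in the project defines them yet), and threshold 3 is qualitative, so it stays a manual judgment.

```python
# Hypothetical helper: decide which numeric threshold the observed
# metrics meet. Metric names and cutoffs mirror the lists above;
# threshold 3 is qualitative and assessed manually.
def highest_threshold(m: dict) -> int:
    """Return 2, 1, or 0 for the highest threshold clearly met."""
    t1 = (m.get("readers", 0) >= 10
          and m.get("corrections", 0) >= 1
          and m.get("adoptions", 0) >= 1)
    t2 = (m.get("engaged_visitors", 0) >= 100
          and m.get("contributions", 0) >= 5
          and m.get("citations", 0) >= 1
          and m.get("org_reporting", 0) >= 1)
    if t2:
        return 2
    return 1 if t1 else 0

print(highest_threshold({"readers": 12, "corrections": 1, "adoptions": 1}))
```

This would slot naturally into the 3-month assessment in Phase 2, fed from `project-impact-log.md`.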
## What to measure

### Quantitative (automated where possible)

| Metric | How to measure | Tool |
|--------|----------------|------|
| Unique visitors | Web server logs | nginx access log analysis |
| Page engagement | Time on page, scroll depth | Minimal JS or log analysis |
| Repository views | Forgejo built-in stats | Forgejo admin panel |
| Stars / forks | Forgejo API | Script or manual check |
| Issues opened | Forgejo API | Notifications |
| External links | Referrer logs, web search | nginx logs + periodic search |
| Citations | Google Scholar alerts | Manual periodic check |

### Qualitative (manual)

| Metric | How to measure |
|--------|----------------|
| Quality of feedback | Read issues, assess substance |
| Adoption evidence | Search for references to the project |
| Influence on policy/standards | Monitor EU AI Act implementation, NIST |
| Corrections received | Count and assess accuracy improvements |

## Implementation

### Phase 1: Basic analytics (before launch)

- [ ] Set up nginx access log rotation and a simple log analysis script
      (no third-party analytics — respect visitors, minimize infrastructure)
- [ ] Create a script that queries the Forgejo API for repo stats
      (stars, forks, issues, unique cloners)
- [ ] Add a `project-impact-log.md` file to track observations manually

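The "simple log analysis script" from the first checklist item could start as small as this sketch: count unique client IPs in an nginx access log. It assumes the default "combined" log format, where the client IP is the first whitespace-separated field; bot filtering and log paths are left out.

```python
# Sketch of a minimal nginx log analysis: count distinct client IPs.
# Assumes the default "combined" format (client IP is the first field).
def unique_visitors(log_lines):
    """Return the number of distinct client IPs seen in the log."""
    ips = set()
    for line in log_lines:
        parts = line.split()
        if parts:          # skip blank lines
            ips.add(parts[0])
    return len(ips)

sample = [
    '203.0.113.7 - - [01/Jan/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.7 - - [01/Jan/2025:10:00:05 +0000] "GET /methodology HTTP/1.1" 200 2048',
    '198.51.100.2 - - [01/Jan/2025:11:00:00 +0000] "GET / HTTP/1.1" 200 512',
]
print(unique_visitors(sample))  # 2
```

A real version would also need to exclude crawler user agents before the visitor count means anything for threshold 2.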
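The Forgejo stats script from the second checklist item might look like the sketch below. Forgejo exposes a Gitea-compatible REST API where `GET /api/v1/repos/{owner}/{repo}` returns repository metadata; the base URL and `OWNER/REPO` path here are placeholders, and the exact field names should be verified against the running Forgejo version.

```python
import json
from urllib.request import urlopen

# Placeholder endpoint — substitute the real instance and repo path.
API = "https://git.example.org/api/v1/repos/OWNER/REPO"

def extract_stats(repo: dict) -> dict:
    """Pull the tracked fields out of the repo metadata payload."""
    return {
        "stars": repo.get("stars_count", 0),
        "forks": repo.get("forks_count", 0),
        "open_issues": repo.get("open_issues_count", 0),
    }

def fetch_stats(url: str = API) -> dict:
    # Network call; untested sketch against a placeholder URL.
    with urlopen(url) as resp:
        return extract_stats(json.load(resp))
```

Unique cloners are not in this payload; Forgejo's admin panel (or a separate endpoint) would be needed for those.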
### Phase 2: After launch

- [ ] Check metrics weekly for the first month, then monthly
- [ ] Record observations in `project-impact-log.md`
- [ ] At 3 months post-launch, write an honest assessment:
      did the project reach net-positive?

### Phase 3: Long-term

- [ ] Set up a Google Scholar alert for the methodology title
- [ ] Periodically search for references to llm-impact.org
- [ ] If the project is clearly net-negative at 6 months (no engagement,
      no corrections, no adoption), acknowledge it honestly in the README

## Honest assessment

The most likely outcome is low engagement. Most open-source projects
get no traction. The methodology's value depends on whether the right
people find it — AI ethics researchers and sustainability-minded
developers. The landing page and initial sharing strategy are critical.

If the project fails to reach threshold 1 within 3 months, we should
consider whether the energy spent maintaining the VPS is justified, or
whether the content should be archived as a static document and the
infrastructure shut down.