# Plan: Competitive landscape analysis

**Target sub-goals**: 7 (multiply impact through reach), 12 (honest arithmetic)

## Problem

Before sharing the project, we need to know what already exists so we can
position honestly. If a better alternative exists, we should point people
to it rather than duplicating effort.

## Landscape (as of March 2026)

### Tools that measure energy/carbon

| Tool | Scope | Covers social costs? | Per-conversation? |
|------|-------|----------------------|-------------------|
| [CodeCarbon](https://codecarbon.io/) | Training energy/CO2 | No | No |
| [EcoLogits](https://ecologits.ai/) | Inference energy/CO2 via APIs | No | Yes |
| [ML CO2 Impact](https://mlco2.github.io/impact/) | Training carbon estimate | No | No |
| [Green Algorithms](https://www.green-algorithms.org/) | Any compute workload | No | No |
| [HF AI Energy Score](https://huggingface.github.io/AIEnergyScore/) | Model efficiency benchmark | No | No |


### Published research with per-query data

- **Google/Patterson et al. (Aug 2025)**: 0.24 Wh, 0.03 g CO2, and 0.26 mL
  of water per median Gemini text prompt. The most rigorous
  provider-published data to date, but environmental only.
  ([arXiv:2508.15734](https://arxiv.org/abs/2508.15734))
- **"How Hungry is AI?" (Jegham et al., May 2025)**: Cross-model
  benchmarks for 30 LLMs; o3 and DeepSeek-R1 consume >33 Wh on long
  prompts, while Claude 3.7 Sonnet ranked highest in eco-efficiency.
  ([arXiv:2505.09598](https://arxiv.org/abs/2505.09598))

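The per-prompt medians above can be turned into a rough per-conversation figure by simple scaling. A minimal sketch, assuming an illustrative 20-message conversation (the message count is our assumption, not a figure from either paper):

```python
# Back-of-envelope scaling of Google's published per-prompt medians
# (0.24 Wh, 0.03 g CO2, 0.26 mL water) to a whole conversation.
# The 20-message conversation length is an illustrative assumption.

PER_PROMPT = {"energy_wh": 0.24, "co2_g": 0.03, "water_ml": 0.26}

def conversation_footprint(n_messages: int) -> dict:
    """Naively scale per-prompt medians by message count.

    Real per-message cost grows with context length, so this
    understates long conversations; treat it as a floor.
    """
    return {k: round(v * n_messages, 2) for k, v in PER_PROMPT.items()}

print(conversation_footprint(20))
# → roughly 4.8 Wh, 0.6 g CO2, 5.2 mL water for a 20-message chat
```

Because cost per message grows with accumulated context, this linear scaling is a lower bound, which is exactly the kind of caveat the methodology should state.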

### Frameworks that go broader

- **UNICC/Frugal AI Hub (Dec 2025)**: TCO + SDG alignment. Portfolio-level,
  not per-conversation; no specific social cost categories.
- **CHI 2025 deskilling research**: Empirical evidence that AI assistance
  reduces critical thinking. An academic finding, not a measurement tool.
- **Oxford "Hidden Cost of AI" (2025)**: Descriptive survey of social costs;
  not quantitative or actionable.


### What no one else does

No existing tool or framework combines per-conversation environmental
measurement with social/cognitive/political cost categories. The tools
that measure well (CodeCarbon, EcoLogits) cover only environmental
dimensions. The research that names social costs is descriptive, not
actionable.


## Our positioning

**Honest differentiator**: We are the only framework that enumerates 20+
cost categories — environmental, financial, social, epistemic, political —
at per-conversation granularity.

**Honest weakness**: Our environmental estimates have lower confidence than
Google's or EcoLogits' because we don't have access to infrastructure data.
Our social cost categories are named and described but mostly not
quantified.

**We are not competing with**: CodeCarbon, EcoLogits, or AI Energy Score.
These are measurement tools for specific environmental metrics. We are a
taxonomy and framework. We should reference and link to them, not
position against them.

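One concrete way to honor the "lower confidence" caveat is to carry every environmental estimate as a range with a stated basis rather than a bare point value. A sketch of that representation; the class and field names are illustrative, not from any existing codebase, and the range bounds are simply the two published extremes cited above:

```python
# Represent an environmental estimate as a range plus its basis,
# never a bare point value. Names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Estimate:
    low: float
    high: float
    unit: str
    basis: str  # what the range is anchored to

    def midpoint(self) -> float:
        return (self.low + self.high) / 2

energy = Estimate(
    low=0.24,   # Google's published median per text prompt
    high=33.0,  # "How Hungry is AI?" long-prompt upper bound
    unit="Wh",
    basis="published per-query benchmarks, not infrastructure data",
)

print(f"{energy.low}-{energy.high} {energy.unit} ({energy.basis})")
```

Surfacing the basis string alongside every number keeps the framework honest about where each figure comes from.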

## Tasks

- [ ] Add a "Related work" section to `impact-methodology.md` citing the
  tools and research above, with an honest comparison
- [ ] Calibrate our energy estimates against Google's published data
  and the "How Hungry is AI?" benchmarks
- [ ] Link to EcoLogits and CodeCarbon from the toolkit README as
  complementary tools
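The calibration task can start as a trivial sanity check: does our per-prompt energy figure fall inside the range the published anchors span? A minimal sketch; `OUR_ESTIMATE_WH` is a placeholder, not a real figure from our framework:

```python
# Sanity-check a per-prompt energy estimate against the two published
# anchors cited in this plan. OUR_ESTIMATE_WH is a placeholder.

ANCHORS_WH = {
    "Google median Gemini text prompt (2025)": 0.24,
    "'How Hungry is AI?' long-prompt upper bound (o3/DeepSeek-R1)": 33.0,
}

OUR_ESTIMATE_WH = 2.0  # placeholder; replace with our framework's figure

def within_anchor_range(estimate_wh: float) -> bool:
    """True if the estimate lies between the published extremes."""
    lo, hi = min(ANCHORS_WH.values()), max(ANCHORS_WH.values())
    return lo <= estimate_wh <= hi

print(within_anchor_range(OUR_ESTIMATE_WH))
```

Passing this check doesn't validate the estimate, but failing it is a strong signal the methodology needs revisiting.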