# Tasks

Concrete, executable tasks toward net-positive impact. Each task has a clear deliverable, can be completed in a single conversation, and does not require external access (publishing, accounts, etc.).

Tasks that require human action (e.g., publishing to GitHub) are listed separately as handoffs.

## Task index

| # | Task | Plan | Status | Deliverable |
|---|------|------|--------|-------------|
| 1 | Clean up methodology for external readers | publish-methodology | DONE | Revised impact-methodology.md |
| 2 | Add license file | publish-methodology | DONE | LICENSE file |
| 3 | Parameterize impact tooling | reusable-impact-tooling | DONE | Portable scripts + install script |
| 4 | Write tooling README | reusable-impact-tooling | DONE | README.md for the tooling kit |
| 5 | Calibrate token estimates | reusable-impact-tooling | DONE | Updated estimation logic in hook (sketched below) |
| 6 | Write usage decision framework | usage-guidelines | DONE | Framework in CLAUDE.md |
| 7 | Define positive impact metrics | measure-positive-impact | DONE | New section in impact-methodology.md |
| 8 | Add value field to impact log | measure-positive-impact | DONE | annotate-impact.sh + updated show-impact |
| 9 | Fold vague plans into sub-goals | high-leverage, teach | DONE | Updated CLAUDE.md, removed 2 plans |
| 10 | Add AI authorship transparency | anticipated-criticisms | DONE | Updated landing page + README disclosing AI collaboration and project costs |
| 11 | Calibrate estimates against published data | competitive-landscape | DONE | Updated impact-methodology.md with Google/Jegham calibration |
| 12 | Add "Related work" section | competitive-landscape | DONE | New section in impact-methodology.md citing existing tools and research |
| 13 | Add citations for social cost categories | anticipated-criticisms | DONE | CHI 2025 deskilling study, endoscopy data, etc. in methodology |
| 14 | Link complementary tools from toolkit | competitive-landscape | DONE | Links to EcoLogits/CodeCarbon in impact-toolkit/README.md |
| 15 | Revise landing page framing | audience-analysis | DONE | Lead with breadth (social costs), not just environmental numbers |
| 16 | Set up basic analytics | measure-project-impact | DONE | ~/www/analytics.sh + ~/www/repo-stats.sh |
| 17 | Consider Zenodo DOI | anticipated-criticisms | TODO | Citable DOI for academic audiences |

## Handoffs

| # | Action | Status | Notes |
|---|--------|--------|-------|
| H1 | Publish repository | DONE | https://llm-impact.org/forge/claude/ai-conversation-impact |
| H2 | Share methodology externally | TODO | See H2 details below |
| H3 | Solicit feedback | DONE | Pinned issue #1 on Forgejo |

### H2: Share externally

Link to share: https://llm-impact.org

Suggested framing: "Most AI impact tools stop at carbon. I built a framework covering 20+ cost categories — including cognitive deskilling, data pollution, algorithmic monoculture, and power concentration — calibrated against Google's 2025 per-query data. CC0 (public domain), looking for corrections to the estimates."

Where to post (in rough order of relevance):

  1. Hacker News — Submit as https://llm-impact.org. Best time: weekday mornings US Eastern. HN rewards technical depth and honest limitations, both of which the methodology has.
  2. Reddit r/MachineLearning — Post as a [Project] thread. Lead with "beyond carbon" — what makes this different from CodeCarbon or EcoLogits.
  3. Reddit r/sustainability — Frame around the environmental costs. Lead with the numbers (100-250 Wh, 30-80g CO2 per conversation; see the sanity-check sketch after this list).
  4. Mastodon — Post on your account and tag #AIethics #sustainability #LLM. Mastodon audiences tend to engage with systemic critique.
  5. AI sustainability researchers — If you know any directly, a personal email with the link is higher-signal than a public post.
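
As a hedged sanity check on item 3's figures (not the methodology's own calculation): 1 kg CO2/kWh equals 1 g CO2/Wh, so grams of CO2 are just Wh times grid intensity. The ~0.3 kg CO2/kWh intensity below is an assumed world-average ballpark, not the methodology's calibrated figure:

```sh
# Back-of-envelope check of the 30-80g CO2 range quoted above.
# The 0.3 kg CO2/kWh grid intensity is an assumption.
for wh in 100 250; do
  awk -v wh="$wh" 'BEGIN { printf "%d Wh -> %.0fg CO2\n", wh, wh * 0.3 }'
done
```

That gives 30g and 75g, consistent with the quoted 30-80g range.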

What to expect: Most posts get no traction. That's fine. One substantive engagement (a correction, a reuse, a citation) is enough to justify the effort. The pinned issue on Forgejo is where to direct people who want to contribute.