# Tasks

Concrete, executable tasks toward net-positive impact. Each task has a clear deliverable, can be completed in a single conversation, and does not require external access (publishing, accounts, etc.).

Tasks that require human action (e.g., publishing to GitHub) are listed separately as handoffs.

## Task index

| # | Task | Plan | Status | Deliverable |
|---|------|------|--------|-------------|
| 1 | Clean up methodology for external readers | publish-methodology | DONE | Revised impact-methodology.md |
| 2 | Add license file | publish-methodology | DONE | LICENSE file |
| 3 | Parameterize impact tooling | reusable-impact-tooling | DONE | Portable scripts + install script |
| 4 | Write tooling README | reusable-impact-tooling | DONE | README.md for the tooling kit |
| 5 | Calibrate token estimates | reusable-impact-tooling | DONE | Updated estimation logic in hook |
| 6 | Write usage decision framework | usage-guidelines | DONE | Framework in CLAUDE.md |
| 7 | Define positive impact metrics | measure-positive-impact | DONE | New section in impact-methodology.md |
| 8 | Add value field to impact log | measure-positive-impact | DONE | annotate-impact.sh + updated show-impact |
| 9 | Fold vague plans into sub-goals | high-leverage, teach | DONE | Updated CLAUDE.md; removed 2 plans |
| 10 | Add AI authorship transparency | anticipated-criticisms | DONE | Updated landing page + README disclosing AI collaboration and project costs |
| 11 | Calibrate estimates against published data | competitive-landscape | DONE | Updated impact-methodology.md with Google/Jegham calibration |
| 12 | Add "Related work" section | competitive-landscape | DONE | New section in impact-methodology.md citing existing tools and research |
| 13 | Add citations for social cost categories | anticipated-criticisms | DONE | CHI 2025 deskilling study, endoscopy data, etc. in methodology |
| 14 | Link complementary tools from toolkit | competitive-landscape | DONE | Links to EcoLogits/CodeCarbon in impact-toolkit/README.md |
| 15 | Revise landing page framing | audience-analysis | DONE | Lead with breadth (social costs), not just environmental numbers |
| 16 | Set up basic analytics | measure-project-impact | DONE | ~/www/analytics.sh + ~/www/repo-stats.sh |
| 17 | Consider Zenodo DOI | anticipated-criticisms | TODO | Citable DOI for academic audiences |
| 18 | Automate project cost on landing page | measure-project-impact | DONE | ~/www/update-costs.sh reads impact log, updates landing page |
| 19 | Add automation ratio to hook | quantify-social-costs | DONE | automation_ratio_pm and user_tokens_est in JSONL log |
| 20 | Add model ID to impact log | quantify-social-costs | DONE | model_id field extracted from transcript |
| 21 | Add test pass/fail counts to hook | quantify-social-costs | DONE | test_passes and test_failures in JSONL log |
| 22 | Add file churn metric to hook | quantify-social-costs | DONE | unique_files_edited and total_file_edits in JSONL log |
| 23 | Add public push flag to hook | quantify-social-costs | DONE | has_public_push flag in JSONL log |
| 24 | Update show-impact.sh for new fields | quantify-social-costs | DONE | Social cost proxies displayed in impact viewer |
| 25 | Update methodology confidence summary | quantify-social-costs | DONE | 4 categories moved to "Proxy", explanation added |
| 26 | Build aggregate dashboard | quantify-social-costs | DONE | show-aggregate.sh: portfolio-level social cost metrics |
| 27 | Log edited file list in hook | quantify-social-costs | DONE | edited_files dict in JSONL (file path → edit count) |
| 28 | Build review delta tool | quantify-social-costs | DONE | show-review-delta.sh: AI vs. human code overlap in commits |
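Tasks 19–27 together define the per-conversation record the hook appends to the JSONL impact log. A minimal sketch of one such record, using the field names from the index above (the values, model string, and file paths are invented for illustration; the real hook's schema may differ):

```python
import json

# Hypothetical record built from the field names in tasks 19-27.
# All values here are made up for illustration.
record = {
    "model_id": "claude-example",     # task 20: model extracted from transcript
    "user_tokens_est": 1200,          # task 19: estimated human-typed tokens
    "automation_ratio_pm": 850,       # task 19: AI share of tokens, per mille
    "test_passes": 14,                # task 21: test results observed in session
    "test_failures": 1,               # task 21
    "unique_files_edited": 3,         # task 22: file churn
    "total_file_edits": 7,            # task 22
    "has_public_push": True,          # task 23: session pushed to a public remote
    "edited_files": {                 # task 27: file path -> edit count
        "impact-toolkit/show-impact.sh": 3,
        "impact-toolkit/show-aggregate.sh": 2,
        "README.md": 2,
    },
}

# JSONL means one JSON object per line, appended to the log file.
line = json.dumps(record)
parsed = json.loads(line)

# The churn summary fields should agree with the edited_files dict.
assert parsed["unique_files_edited"] == len(parsed["edited_files"])
assert parsed["total_file_edits"] == sum(parsed["edited_files"].values())
```

Downstream tools like show-aggregate.sh (task 26) and show-review-delta.sh (task 28) can then iterate over the log line by line, summing these proxies or diffing edited_files against git history.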

## Handoffs

| # | Action | Status | Notes |
|---|--------|--------|-------|
| H1 | Publish repository | DONE | https://llm-impact.org/forge/claude/ai-conversation-impact |
| H2 | Share methodology externally | DONE | See H2 details below |
| H3 | Solicit feedback | DONE | Pinned issue #1 on Forgejo |

### H2: Share externally

Link to share: https://llm-impact.org

Suggested framing: "Most AI impact tools stop at carbon. I built a framework covering 20+ cost categories — including cognitive deskilling, data pollution, algorithmic monoculture, and power concentration — calibrated against Google's 2025 per-query data. CC0 (public domain), looking for corrections to the estimates."

Where to post (in rough order of relevance):

  1. Hacker News — Submit as https://llm-impact.org. Best time: weekday mornings US Eastern. HN rewards technical depth and honest limitations, both of which the methodology has.
  2. Reddit r/MachineLearning — Post as a [Project] thread. Lead with "beyond carbon" — what makes this different from CodeCarbon or EcoLogits.
  3. Reddit r/sustainability — Frame around the environmental costs. Lead with the numbers (100–250 Wh, 30–80 g CO₂ per conversation).
  4. Mastodon — Post on your account and tag #AIethics #sustainability #LLM. Mastodon audiences tend to engage with systemic critique.
  5. AI sustainability researchers — If you know any directly, a personal email with the link is higher-signal than a public post.

What to expect: Most posts get no traction. That's fine. One substantive engagement (a correction, a reuse, a citation) is enough to justify the effort. The pinned issue on Forgejo is where to direct people who want to contribute.