Initial commit: AI conversation impact methodology and toolkit
CC0-licensed methodology for estimating the environmental and social costs of AI conversations (20+ categories), plus a reusable toolkit for automated impact tracking in Claude Code sessions.
Commit 0543a43816 (27 changed files with 2439 additions and 0 deletions)
tasks/01-clean-methodology.md (new file, 24 lines)
@@ -0,0 +1,24 @@

# Task 1: Clean up methodology for external readers

**Plan**: publish-methodology
**Status**: DONE
**Deliverable**: Revised `impact-methodology.md`

## What to do

1. Read `impact-methodology.md` fully.
2. Remove or generalize references specific to this project (e.g., "scan-secrets.sh", specific session IDs, "our conversation").
3. Add an introduction: what this document is, who it's for, how to use it.
4. Ensure every estimate cites a source or is explicitly marked as an assumption.
5. Add a "limitations" section summarizing known gaps and low-confidence areas.
6. Structure for standalone reading — someone finding this document with no context should be able to understand and use it.

## Done when

- The document reads as a standalone resource, not a project artifact.
- A reader unfamiliar with this project could use it to estimate the impact of their own AI usage.
tasks/02-add-license.md (new file, 16 lines)
@@ -0,0 +1,16 @@

# Task 2: Add a license file

**Plan**: publish-methodology
**Status**: DONE (MIT license chosen — covers both docs and scripts)
**Deliverable**: `LICENSE` file in project root

## What to do

1. Ask the user which license they prefer. Suggest CC-BY-4.0 for the methodology (allows reuse with attribution) and MIT for the tooling scripts (standard for small utilities).
2. Create the appropriate `LICENSE` file(s).

## Done when

- A license file exists that covers both the documentation and the scripts.
tasks/03-parameterize-tooling.md (new file, 36 lines)
@@ -0,0 +1,36 @@

# Task 3: Parameterize impact tooling

**Plan**: reusable-impact-tooling
**Status**: DONE
**Deliverable**: Portable hook script, viewer, and install script

## What to do

1. Refactor `pre-compact-snapshot.sh`:
   - Remove hardcoded project paths.
   - Use `$CLAUDE_PROJECT_DIR` or `cwd` from the hook input consistently.
   - Remove the debug trace line (`/tmp/precompact-debug.log`).

2. Refactor `show-impact.sh`:
   - Accept the log file path as an argument, or auto-detect it from the project directory.

3. Create `install.sh` that:
   - Copies the scripts to the user's `.claude/hooks/` directory.
   - Adds the PreCompact hook entry to `.claude/settings.json` (project or user level, user's choice).
   - Verifies `jq` is available (dependency).
   - Is idempotent (safe to run twice).

4. Organize into a self-contained directory structure:

   ```
   impact-toolkit/
     install.sh
     hooks/pre-compact-snapshot.sh
     hooks/show-impact.sh
     README.md
   ```

## Done when

- A user can clone the repo, run `install.sh`, and have impact tracking working in their Claude Code project.
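The idempotency and dependency-check requirements in step 3 can be sketched as follows. This is a hypothetical sketch, not the shipped `install.sh`: the target path, the environment variable, and the settings-merge detail are illustrative assumptions.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the idempotent parts of install.sh.
# IMPACT_TARGET and the hook file names are illustrative assumptions.
set -eu

TARGET="${IMPACT_TARGET:-/tmp/impact-toolkit-demo/.claude/hooks}"

# Dependency check: jq is required by the hook scripts.
command -v jq >/dev/null 2>&1 || echo "warning: jq not found" >&2

# mkdir -p and plain cp are both safe to run twice.
mkdir -p "$TARGET"
for f in hooks/pre-compact-snapshot.sh hooks/show-impact.sh; do
  if [ -f "$f" ]; then
    cp "$f" "$TARGET/"   # overwriting an identical copy is harmless
  fi
done
echo "installed to $TARGET"
```

A real installer would also need to merge the PreCompact entry into `.claude/settings.json` only when it is not already present (e.g., by testing for it with `jq` before appending), which is what makes the settings step rerunnable.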
tasks/04-tooling-readme.md (new file, 22 lines)
@@ -0,0 +1,22 @@

# Task 4: Write tooling README

**Plan**: reusable-impact-tooling
**Status**: DONE
**Depends on**: Task 3 (need final directory structure)
**Deliverable**: README for the impact toolkit

## What to do

1. Write a README covering:
   - What the toolkit does (tracks energy, CO2, and cost per conversation).
   - How to install (run `install.sh`).
   - What gets measured and how (brief summary with a pointer to the methodology).
   - How to view results (`show-impact.sh`).
   - Known limitations (estimates, not measurements).
   - Dependencies (`jq`, `bash`, Claude Code with hooks support).

2. Keep it short: under 100 lines.

## Done when

- A new user can understand and install the toolkit from the README alone.
tasks/05-calibrate-tokens.md (new file, 29 lines)
@@ -0,0 +1,29 @@

# Task 5: Calibrate token estimates

**Plan**: reusable-impact-tooling
**Status**: DONE (hook now extracts actual token counts from transcript `usage` fields; falls back to the heuristic; weights cache reads at 10% for energy estimates)
**Deliverable**: Updated estimation logic in `pre-compact-snapshot.sh`

## What to do

1. The current heuristic uses 4 bytes per token. Claude's tokenizer (BPE-based) averages roughly 3.5-4.5 bytes per token for English prose but varies for code, JSON, and non-English text. The transcript is mostly JSON with embedded code and English text.

2. Estimate a better ratio by:
   - Sampling a known transcript and comparing its byte count to the token count reported in API responses (if available in the transcript).
   - Using API token counts directly instead of estimating, wherever they are present in the transcript JSON.

3. The output token ratio (currently fixed at 5% of the transcript) is also rough. Check whether the transcript contains `usage` fields with actual output token counts.

4. Update the script with improved heuristics or direct extraction.

## Done when

- Token estimates are within ~20% of actual (if verifiable), or actual counts from the transcript are used when available.
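The two estimation paths above can be sketched like this. The demo transcript is fabricated, and the exact shape of the `usage` object is an assumption based on the description in step 3.

```shell
#!/usr/bin/env sh
# Sketch of both estimation paths; the demo transcript is made up.
set -eu

TRANSCRIPT="${TRANSCRIPT:-/tmp/demo-transcript.jsonl}"
cat > "$TRANSCRIPT" <<'EOF'
{"type":"assistant","usage":{"input_tokens":1200,"output_tokens":300}}
{"type":"user","text":"hello"}
EOF

# Preferred path: sum actual output tokens from `usage` fields.
# Lines without a usage object contribute 0.
if command -v jq >/dev/null 2>&1; then
  jq -s '[.[].usage.output_tokens // 0] | add' "$TRANSCRIPT"
fi

# Fallback heuristic: ~4 bytes per token over the whole transcript.
BYTES=$(wc -c < "$TRANSCRIPT")
echo "heuristic tokens: $(( BYTES / 4 ))"
```

Comparing the two numbers on a transcript that does carry `usage` fields is also a cheap way to calibrate the bytes-per-token ratio, per step 2.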
tasks/06-usage-framework.md (new file, 24 lines)
@@ -0,0 +1,24 @@

# Task 6: Write usage decision framework

**Plan**: usage-guidelines
**Status**: DONE
**Deliverable**: New section in `CLAUDE.md`

## What to do

1. Write a concise decision framework (checklist or flowchart) for deciding whether a task justifies an LLM conversation. Criteria:
   - Could a simpler tool do this? (grep, a man page, Stack Overflow)
   - Does this require generation or transformation beyond templates?
   - What is the expected reach of the output?
   - Is the task well-defined with clear success criteria?

2. Add it to `CLAUDE.md` as a quick-reference section, probably under sub-goal 1 or as a new sub-goal.

3. Keep it under 20 lines — it needs to be scannable, not an essay.

## Done when

- `CLAUDE.md` contains a practical checklist that can be evaluated in 10 seconds before starting a conversation.
tasks/07-positive-metrics.md (new file, 31 lines)
@@ -0,0 +1,31 @@

# Task 7: Define positive impact metrics

**Plan**: measure-positive-impact
**Status**: DONE
**Deliverable**: New section in `impact-methodology.md`

## What to do

1. Add a "Positive Impact" section to `impact-methodology.md` defining proxy metrics:
   - **Reach**: number of people affected by the output.
   - **Counterfactual**: would the result have been achieved without this conversation? (none / slower / not at all)
   - **Durability**: expected useful lifetime of the output.
   - **Severity**: for bug/security fixes, severity of the issue.
   - **Reuse**: was the output referenced or used again?

2. For each metric, document:
   - How to estimate it (with examples).
   - Known biases (e.g., tendency to overestimate reach).
   - Confidence level.

3. Add a "net impact" formula or rubric that combines cost and value estimates into a qualitative assessment (clearly net-positive / probably net-positive / uncertain / probably net-negative / clearly net-negative).

## Done when

- The methodology document covers both sides of the equation.
- A reader can apply the rubric to their own conversations.
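One possible shape for the rubric in step 3, taking a cost band and the counterfactual rating as inputs. Both the inputs and the mapping below are illustrative assumptions, not the published rubric.

```shell
#!/usr/bin/env sh
# Illustrative rubric: map (cost band, counterfactual) to one of the
# five qualitative labels. The mapping itself is an assumption.
assess() {
  # $1: cost band (low|high), $2: counterfactual (none|slower|impossible)
  case "$1/$2" in
    low/impossible)             echo "clearly net-positive" ;;
    low/slower|high/impossible) echo "probably net-positive" ;;
    high/slower)                echo "uncertain" ;;
    low/none)                   echo "probably net-negative" ;;
    *)                          echo "clearly net-negative" ;;
  esac
}

assess low impossible   # prints "clearly net-positive"
assess high none        # prints "clearly net-negative"
```

The useful property of a lookup table like this is that the judgment is made once, when the table is written, rather than per conversation.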
tasks/08-value-in-log.md (new file, 29 lines)
@@ -0,0 +1,29 @@

# Task 8: Add value field to impact log

**Plan**: measure-positive-impact
**Status**: DONE (added annotate-impact.sh for manual value annotation; show-impact.sh displays annotations)
**Depends on**: Task 7 (need the metrics defined first)
**Deliverable**: Updated hook and viewer scripts

## What to do

1. Add optional fields to the impact log JSON schema:
   - `value_summary`: free-text description of the value produced.
   - `estimated_reach`: number (people affected).
   - `counterfactual`: enum (none / slower / impossible).
   - `net_assessment`: enum (clearly-positive / probably-positive / uncertain / probably-negative / clearly-negative).

2. These fields cannot be filled automatically by the hook — they require human or LLM judgment. Options:
   - Add a post-session prompt (via a Stop hook?) that asks for a brief value assessment.
   - Accept manual annotation via a helper script.
   - Leave them optional; fill them in retrospectively.

3. Update `show-impact.sh` to display the value fields when present.

## Done when

- The log schema supports value data alongside cost data.
- `show-impact.sh` displays both.
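The manual-annotation option in step 2 might merge the optional fields into an existing log entry along these lines. The entry shape and values are illustrative, and this is not the actual `annotate-impact.sh`.

```shell
#!/usr/bin/env sh
# Illustrative annotation step; the log entry and values are made up.
set -eu

LOG="${LOG:-/tmp/impact-log-demo.json}"
printf '%s\n' '{"session":"demo","energy_wh":3.2}' > "$LOG"

# Merge the optional value fields into the cost-only entry with jq.
if command -v jq >/dev/null 2>&1; then
  jq '. + {
        value_summary: "fixed install bug",
        estimated_reach: 5,
        counterfactual: "slower",
        net_assessment: "probably-positive"
      }' "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
fi
cat "$LOG"
```

Because the fields are additive (`. + {...}`), re-annotating the same entry simply overwrites the value fields, and entries without annotations stay valid against the schema.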
tasks/09-fold-vague-plans.md (new file, 26 lines)
@@ -0,0 +1,26 @@

# Task 9: Fold vague plans into sub-goals

**Plan**: high-leverage-contributions, teach-and-document
**Status**: DONE
**Deliverable**: Updated `CLAUDE.md` and `plans/`

## What to do

1. The plans `high-leverage-contributions.md` and `teach-and-document.md` are behavioral norms, not executable plans. Their content is already largely covered by sub-goals 7 (multiply impact through reach) and 8 (teach rather than just do).

2. Review both plans for any concrete guidance not already in the sub-goals. Merge anything useful into the relevant sub-goal text in `CLAUDE.md`.

3. Remove the two plan files.

4. Update `plans/README.md` to reflect the reduced plan list.

## Done when

- No plan file exists that is just a restatement of a sub-goal.
- Any actionable content from the removed plans is preserved in `CLAUDE.md`.
tasks/README.md (new file, 30 lines)
@@ -0,0 +1,30 @@

# Tasks

Concrete, executable tasks toward net-positive impact. Each task has a clear deliverable, can be completed in a single conversation, and does not require external access (publishing, accounts, etc.).

Tasks that require human action (e.g., publishing to GitHub) are listed separately as handoffs.

## Task index

| # | Task | Plan | Status | Deliverable |
|---|------|------|--------|-------------|
| 1 | [Clean up methodology for external readers](01-clean-methodology.md) | publish-methodology | DONE | Revised `impact-methodology.md` |
| 2 | [Add license file](02-add-license.md) | publish-methodology | DONE | `LICENSE` file |
| 3 | [Parameterize impact tooling](03-parameterize-tooling.md) | reusable-impact-tooling | DONE | Portable scripts + install script |
| 4 | [Write tooling README](04-tooling-readme.md) | reusable-impact-tooling | DONE | `README.md` for the tooling kit |
| 5 | [Calibrate token estimates](05-calibrate-tokens.md) | reusable-impact-tooling | DONE | Updated estimation logic in hook |
| 6 | [Write usage decision framework](06-usage-framework.md) | usage-guidelines | DONE | Framework in `CLAUDE.md` |
| 7 | [Define positive impact metrics](07-positive-metrics.md) | measure-positive-impact | DONE | New section in `impact-methodology.md` |
| 8 | [Add value field to impact log](08-value-in-log.md) | measure-positive-impact | DONE | `annotate-impact.sh` + updated `show-impact.sh` |
| 9 | [Fold vague plans into sub-goals](09-fold-vague-plans.md) | high-leverage, teach | DONE | Updated `CLAUDE.md`, remove 2 plans |

## Handoffs (require human action)

| # | Action | Depends on tasks | Notes |
|---|--------|------------------|-------|
| H1 | Publish repository | 1, 2, 3, 4 | Needs a GitHub/GitLab account |
| H2 | Share methodology externally | 1, H1 | Blog post, forum, social media |
| H3 | Solicit feedback | H1 | Open issues, share with AI sustainability communities |