Initial commit: AI conversation impact methodology and toolkit
CC0-licensed methodology for estimating the environmental and social costs of AI conversations (20+ categories), plus a reusable toolkit for automated impact tracking in Claude Code sessions.
This commit is contained in: 0543a43816

27 changed files with 2439 additions and 0 deletions
plans/usage-guidelines.md (new file, 46 lines)
# Plan: Define when to use (and not use) this tool

**Target sub-goals**: 1 (estimate before acting), 3 (value per token), 12 (honest arithmetic)
## Problem

Not every task justifies the cost of an LLM conversation. A grep command costs ~0 Wh. A Claude Code session costs ~6-250 Wh. Many tasks that people bring to AI assistants could be done with simpler tools at a fraction of the cost. Without explicit guidelines, the default is to use the most powerful tool available, not the most appropriate one.
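The gap between those two figures can be made concrete with a little arithmetic. A minimal sketch, using the ~0 Wh and ~6-250 Wh estimates quoted above (the sessions-per-day figure is a hypothetical usage level, not data from the impact log):

```python
# Energy-cost comparison using the estimates from the Problem section.
# These are illustrative planning figures, not measurements.
GREP_WH = 0.0            # a local grep is effectively free
SESSION_WH_LOW = 6.0     # low end of the Claude Code session estimate
SESSION_WH_HIGH = 250.0  # high end of the estimate

sessions_per_day = 10    # hypothetical usage level, for scale
daily_low = sessions_per_day * SESSION_WH_LOW
daily_high = sessions_per_day * SESSION_WH_HIGH

print(f"{sessions_per_day} sessions/day: {daily_low}-{daily_high} Wh, "
      f"vs ~{GREP_WH} Wh if grep would have sufficed")
```

Even at the low end, the ratio between the two tools is what motivates the guidelines below.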
## Actions

1. **Create a decision framework.** A simple flowchart or checklist:
   - Can this be done with a shell command, a search engine query, or reading documentation? If yes, do that instead.
   - Does this task require generating or transforming text/code that a human would take significantly longer to produce? If yes, an LLM may be justified.
   - Will the output reach many people or prevent significant harm? If yes, the cost is more likely justified.
   - Is this exploratory/speculative, or targeted with clear success criteria? Prefer targeted tasks.

2. **Integrate into CLAUDE.md.** Add the framework as a quick-reference so it's loaded into every conversation.

3. **Track adherence.** When a conversation ends, note whether the task could have been done with a simpler tool. Feed this back into the impact log.
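The checklist in step 1 can be sketched as a function. This is a minimal illustration, not the plan's implementation; the function name and parameter names are hypothetical paraphrases of the four bullets:

```python
def llm_justified(simpler_tool_suffices: bool,
                  big_human_time_savings: bool,
                  broad_reach_or_harm_prevented: bool,
                  has_clear_success_criteria: bool) -> bool:
    """Walk the decision checklist; True means an LLM call may be justified."""
    if simpler_tool_suffices:
        return False  # a shell command / search query / docs read wins outright
    if not has_clear_success_criteria:
        return False  # prefer targeted tasks over open-ended exploration
    # Either a large time saving or a large downstream benefit can tip the scale.
    return big_human_time_savings or broad_reach_or_harm_prevented

# Renaming a variable across a repo: sed handles it, so the answer is no.
assert llm_justified(True, True, False, True) is False
# Drafting docs many users will read, with clear success criteria: plausibly yes.
assert llm_justified(False, True, True, True) is True
```

Encoding the checklist this way also makes the ordering explicit: the "simpler tool" question is a hard gate, while the remaining questions weigh cost against benefit.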
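Step 3's adherence tracking could be as small as appending one line per finished conversation to a log file. A sketch under assumptions: the file name, field layout, and helper name below are made up for illustration and are not specified by the plan:

```python
import csv
import datetime

def log_adherence(task: str, simpler_tool_would_have_worked: bool,
                  path: str = "impact-log.csv") -> None:
    """Append one adherence record per finished conversation."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            task,
            "simpler-tool" if simpler_tool_would_have_worked else "llm-needed",
        ])

log_adherence("rename variable across repo", True)
```

Periodically counting the `simpler-tool` rows then gives the "measurable reduction" the success criteria ask for.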
## Success criteria

- The user (and I) have a shared understanding of when the cost is justified.
- Measurable reduction in conversations spent on tasks that don't need an LLM.
## Honest assessment

High value but requires discipline from both sides. The framework itself is cheap to create. The hard part is actually following it, especially when the LLM is convenient even for tasks that don't need it. This plan is more about establishing a norm than building a tool.