Diffstat (limited to 'docs/sessions')
-rw-r--r--   docs/sessions/create-session.org    352
-rw-r--r--   docs/sessions/emacs-inbox-zero.org  338
-rw-r--r--   docs/sessions/refactor.org          617
3 files changed, 0 insertions, 1307 deletions
diff --git a/docs/sessions/create-session.org b/docs/sessions/create-session.org
deleted file mode 100644
index a0e4d2fe..00000000
--- a/docs/sessions/create-session.org
+++ /dev/null
@@ -1,352 +0,0 @@
-#+TITLE: Creating New Session Workflows
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes the meta-workflow for creating new session types. When we identify a repetitive workflow or collaborative pattern, we use this process to formalize it into a documented session that we can reference and reuse.
-
-Session workflows are living documents that capture how we work together on specific types of tasks. They build our shared vocabulary and enable efficient collaboration across multiple work sessions.
-
-* Problem We're Solving
-
-Without a formal session creation process, we encounter several issues:
-
-** Inefficient Use of Intelligence
-- Craig leads the process based solely on his knowledge
-- We don't leverage Claude's expertise to improve or validate the approach
-- Miss opportunities to apply software engineering and process best practices
-
-** Time Waste and Repetition
-- Craig must re-explain the workflow each time we work together
-- No persistent memory of how we've agreed to work
-- Each session starts from scratch instead of building on previous work
-
-** Error-Prone Execution
-- Important steps may be forgotten or omitted
-- No checklist to verify completeness
-- Mistakes lead to incomplete work or failed goals
-
-** Missed Learning Opportunities
-- Don't capture lessons learned from our collaboration
-- Can't improve processes based on what works/doesn't work
-- Lose insights that emerge during execution
-
-** Limited Shared Vocabulary
-- No deep, documented understanding of what terms mean
-- "Let's do a refactor session" has no precise definition
-- Can't efficiently communicate about workflows
-
-*Impact:* Inefficiency, errors, and lost opportunity to continuously improve our collaborative workflows.
-
-* Exit Criteria
-
-We know a session definition is complete when:
-
-1. *Information is logically arranged* - The structure makes sense and flows naturally
-2. *Both parties understand how to work together* - We can articulate the workflow
-3. *Agreement on effectiveness* - We both agree that following this session will lead to the exit criteria and resolve the stated problem
-4. *Tasks are clearly defined* - Steps are actionable, not vague
-5. *Problem resolution path* - Completing the tasks either:
- - Fixes the problem permanently, OR
- - Provides a process for keeping the problem at bay
-
-*Measurable validation:*
-- Can we both articulate the workflow without referring to the document?
-- Do we agree it will solve the problem?
-- Are the tasks actionable enough to start immediately?
-- Does the session get used soon after creation (validation by execution)?
-
-* When to Use This Session
-
-Trigger this session creation workflow when:
-
-- You notice a repetitive workflow that keeps coming up
-- A collaborative pattern emerges that would benefit from documentation
-- Craig says "let's create/define/design a session for [activity]"
-- You identify a new type of work that doesn't fit existing session types
-- An existing session type needs significant restructuring (treat as creating a new one)
-
-Examples:
-- "Let's create a session where we inbox zero"
-- "We should define a code review session"
-- "Let's design a session for weekly planning"
-
-* Approach: How We Work Together
-
-** Phase 1: Question and Answer Discovery
-
-Walk through these four core questions collaboratively. Take notes on the answers.
-
-*IMPORTANT: Save answers as you go!*
-
-The Q&A phase can take time—Craig may need to think through answers, and discussions can be lengthy. To prevent data loss from terminal crashes or process quits:
-
-1. Create a draft file at =docs/sessions/[name]-draft.org= after deciding on the name
-2. After each question is answered, save the Q&A content to the draft file
-3. If session is interrupted, you can resume from the saved answers
-4. Once complete, the draft becomes the final session document
-
-This protects against losing substantial thinking work if the session is interrupted.
-
-*** Question 1: What problem are we solving in this type of session?
-
-Ask Craig: "What problem are we solving in this type of session?"
-
-The answer reveals:
-- Overview and goal of the session
-- Why this work matters (motivation)
-- Impact/priority compared to other work
-- What happens if we don't do this work
-
-Example from refactor session:
-#+begin_quote
-"My Emacs configuration isn't resilient enough. There's lots of custom code, and I'm even developing some as Emacs packages. Yet Emacs is my most-used software, so when Emacs breaks, I become unproductive. I need to make Emacs more resilient through good unit tests and refactoring."
-#+end_quote
-
-*** Question 2: How do we know when we're done?
-
-Ask Craig: "How do we know when we're done?"
-
-The answer reveals:
-- Exit criteria
-- Results/completion criteria
-- Measurable outcomes
-
-*Your role:*
-- Push back if the answer is vague or unmeasurable
-- Propose specific measurements based on context
-- Iterate together until criteria are clear
-- Fallback (hopefully rare): "when Craig says we're done"
-
-Example from refactor session:
-#+begin_quote
-"When we've reviewed all methods, decided which to test and refactor, run all tests, and fixed all failures including bugs we find."
-#+end_quote
-
-Claude might add: "How about a code coverage goal of 70%+?"
-
-*** Question 3: How do you see us working together in this kind of session?
-
-Ask Craig: "How do you see us working together in this kind of session?"
-
-The answer reveals:
-- Steps or phases we'll go through
-- The general approach to the work
-- How tasks flow from one to another
-
-*Your role:*
-- As steps emerge, ask yourself:
- - "Do these steps lead to solving the real problem?"
- - "What is missing from these steps?"
-- If the answers aren't "yes" and "nothing", raise concerns
-- Propose additions based on your knowledge
-- Suggest concrete improvements
-
-Example from refactor session:
-#+begin_quote
-"We'll analyze test coverage, categorize functions by testability, write tests systematically using Normal/Boundary/Error categories, run tests, analyze failures, fix bugs, and repeat."
-#+end_quote
-
-Claude might suggest: "Should we install a code coverage tool as part of this process?"
-
-*** Question 4: Are there any principles we should be following while doing this?
-
-Ask Craig: "Are there any principles we should be following while doing this kind of session?"
-
-The answer reveals:
-- Principles to follow
-- Decision frameworks
-- Quality standards
-- When to choose option A vs option B
-
-*Your role:*
-- Think through all elements of the session
-- Consider situations that may arise
-- Identify what principles would guide decisions
-- Suggest decision frameworks from your knowledge
-
-Example from refactor session:
-#+begin_quote
-Craig: "Treat all test code as production code - same engineering practices apply."
-
-Claude suggests: "Since we'll refactor methods mixing UI and logic, should we add a principle to separate them for testability?"
-#+end_quote
-
-** Phase 2: Assess Completeness
-
-After the Q&A, ask together:
-
-1. *Do we have enough information to formulate steps/process?*
- - If yes, proceed to Phase 3
- - If no, identify what's missing and discuss further
-
-2. *Do we agree following this approach will resolve/mitigate the problem?*
- - Both parties must agree
- - If not, identify concerns and iterate
-
-** Phase 3: Name the Session
-
-Decide on a name for this session type.
-
-*Naming convention:* Action-oriented (verb form)
-- Examples: "refactor", "inbox-zero", "create-session", "review-code"
-- Why: Shorter, natural when saying "let's do a [name] session"
-- Filename: =docs/sessions/[name].org=
-
-** Phase 4: Document the Session
-
-Write the session file at =docs/sessions/[name].org= using this structure:
-
-*** Recommended Structure
-1. *Title and metadata* (=#+TITLE=, =#+AUTHOR=, =#+DATE=)
-2. *Overview* - Brief description of the session
-3. *Problem We're Solving* - From Q&A, with context and impact
-4. *Exit Criteria* - Measurable outcomes, how we know we're done
-5. *When to Use This Session* - Triggers, circumstances, examples
-6. *Approach: How We Work Together*
- - Phases/steps derived from Q&A
- - Decision frameworks
- - Concrete examples woven throughout
-7. *Principles to Follow* - Guidelines from Q&A
-8. *Living Document Notice* - Reminder to update with learnings
-
-*** Important Notes
-- Weave concrete examples into sections (don't separate them)
-- Use examples from actual sessions when available
-- Make tasks actionable, not vague
-- Include decision frameworks for common situations
-- Note that this is a living document
-
-** Phase 5: Update Project State
-
-Update =NOTES.org=:
-1. Add new session type to "Available Session Types" section
-2. Include brief description and reference to file
-3. Note creation date
-
-Example entry:
-#+begin_src org
-,** inbox-zero
-File: =docs/sessions/inbox-zero.org=
-
-Workflow for processing inbox to zero:
-1. [Brief workflow summary]
-2. [Key steps]
-
-Created: 2025-11-01
-#+end_src
-
-** Phase 6: Validate by Execution
-
-*Critical step:* Use the session soon after creating it.
-
-- Schedule the session type for immediate use
-- Follow the documented workflow
-- Note what works well
-- Identify gaps or unclear areas
-- Update the session document with learnings
-
-*This validates the session definition and ensures it's practical, not theoretical.*
-
-* Principles to Follow
-
-These principles guide us while creating new sessions:
-
-** Collaboration Through Discussion
-- Be proactive about collaboration
-- Suggest everything on your mind
-- Ask all relevant questions
-- Push back when something seems wrong, inconsistent, or unclear
-- Misunderstandings are learning opportunities
-
-** Reviewing the Whole as Well as the Pieces
-- It's easy to get into the weeds while identifying each step
-- Stop to look at the whole thing at the end
-- Ask the big questions: Does this actually solve the problem?
-- Verify all pieces connect logically
-
-** Concrete Over Abstract
-- Use examples liberally within explanations
-- Weave concrete examples into Q&A answers
-- Don't just describe abstractly
-- "When nil input crashes, ask..." is better than "handle edge cases"
-
-** Actionable Tasks Over Vague Direction
-- Steps should be clear enough to know what to do next
-- "Ask: how do you see us working together?" is actionable
-- "Figure out the approach" is too vague
-- Test: Could someone execute this without further explanation?
-
-** Validate Early
-- "Use it soon afterward" catches problems early
-- Don't let session definitions sit unused and untested
-- Real execution reveals gaps that theory misses
-- Update immediately based on first use
-
-** Decision Frameworks Over Rigid Steps
-- Sessions are frameworks (principles + flexibility), not recipes
-- Include principles that help case-by-case decisions
-- "When X happens, ask Y" is a decision framework
-- "Always do X" is too rigid for most sessions
-
-** Question Assumptions
-- If something doesn't make sense, speak up
-- If a step seems to skip something, point it out
-- Better to question during creation than discover gaps during execution
-- No assumption is too basic to verify
-
-* Living Document
-
-This is a living document. As we create new sessions and learn what works (and what doesn't), we update this file with:
-
-- New insights about session creation
-- Improvements to the Q&A process
-- Better examples
-- Additional principles discovered
-- Refinements to the structure
-
-Every time we create a session, we have an opportunity to improve this meta-process.
-
-** Updates and Learnings
-
-*** 2025-11-01: Save Q&A answers incrementally
-*Learning:* During the emacs-inbox-zero session creation, we discovered that Q&A discussions can be lengthy and require deep thought from Craig. Terminal crashes or process quits can lose substantial work.
-
-*Improvement:* Added guidance in Phase 1 to create a draft file and save Q&A answers after each question. This protects against data loss and allows resuming interrupted sessions.
-
-*Impact:* Reduces risk of losing 10-15 minutes of thinking work if session is interrupted.
-
-*** 2025-11-01: Validation by execution works!
-*Learning:* Immediately after creating the emacs-inbox-zero session, we validated it by actually running the workflow. This caught unclear areas and validated that the 10-minute target was realistic.
-
-*Key insight from validation:* When Craig provides useful context during workflows (impact estimates, theories, examples), that context should be captured in task descriptions. This wasn't obvious during session creation but became clear during execution.
-
-*Impact:* Validation catches what theory misses. Always use Phase 6 (validate by execution) soon after creating a session.
-
-* Example: Creating the "Create-Session" Session
-
-This very document was created using the process it describes (recursive!).
-
-** The Q&A
-- *Problem:* Time waste, errors, missed learning from informal processes
-- *Exit criteria:* Logical arrangement, mutual understanding, agreement on effectiveness, actionable tasks
-- *Approach:* Four-question Q&A, assess completeness, name it, document it, update NOTES.org, validate by use
-- *Principles:* Collaboration through discussion, review the whole, concrete over abstract, actionable tasks, validate early, decision frameworks, question assumptions
-
-** The Result
-We identified what was needed, collaborated on answers, and captured it in this document. Then we immediately used it to create the next session (validation).
-
-* Conclusion
-
-Creating session workflows is a meta-skill that improves all our collaboration. By formalizing how we work together, we:
-
-- Build shared vocabulary
-- Eliminate repeated explanations
-- Capture lessons learned
-- Enable continuous improvement
-- Make our partnership more efficient
-
-Each new session type we create adds to our collaborative toolkit and deepens our ability to work together effectively.
-
-*Remember:* Sessions are frameworks, not rigid recipes. They provide structure while allowing flexibility for case-by-case decisions. The goal is effectiveness, not perfection.
diff --git a/docs/sessions/emacs-inbox-zero.org b/docs/sessions/emacs-inbox-zero.org
deleted file mode 100644
index 4e046eba..00000000
--- a/docs/sessions/emacs-inbox-zero.org
+++ /dev/null
@@ -1,338 +0,0 @@
-#+TITLE: Emacs Inbox Zero Session
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This session processes the Emacs Config Inbox to zero by filtering tasks through the V2MOM framework. Items either move to active V2MOM methods, get moved to someday-maybe, or get deleted. This weekly discipline prevents backlog buildup and ensures only strategic work gets done.
-
-* Problem We're Solving
-
-Emacs is Craig's most-used software by a significant margin. It's the platform for email, calendar, task management, note-taking, programming, reading, music, podcasts, and more. When Emacs breaks, everything stops—including critical life tasks like family emails, doctor appointments, and bills.
-
-The V2MOM (Vision, Values, Methods, Obstacles, Metrics) framework provides strategic balance between fixing/improving Emacs versus using it for real work. But without weekly maintenance, the system collapses under backlog.
-
-** The Specific Problem
-
-Features and bugs get logged in the "Emacs Config Inbox" heading of =~/.emacs.d/todo.org=. If not sorted weekly:
-- Items pile up and become unmanageable
-- Unclear what's actually important
-- Method 1 ("Make Using Emacs Frictionless") doesn't progress
-- Two key metrics break:
- 1. *Active todo count:* Should be < 20 items
- 2. *Weekly triage consistency:* Must happen at least once per week by Sunday, no longer than 7 days between sessions
-
-** What Happens Without This Session
-
-Without weekly inbox zero:
-- Backlog grows until overwhelming
-- Can't distinguish signal from noise
-- V2MOM becomes theoretical instead of practical
-- Config maintenance competes with real work instead of enabling it
-- Discipline muscle (Method 6: ruthless prioritization) atrophies
-
-*Impact:* The entire V2MOM system fails. Config stays broken longer. Real work gets blocked more often.
-
-* Exit Criteria
-
-The session is complete when:
-- Zero todo items remain under the "* Emacs Config Inbox" heading in =~/.emacs.d/todo.org=
-- All items have been routed to: V2MOM methods, someday-maybe, or deleted
-- Can verify by checking the org heading (should be empty or show "0/0" in agenda)
-
-*IMPORTANT:* We are ONLY processing items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 headings have already been triaged and should NOT be touched during this session.
-
-*Measurable validation:*
-- Open =todo.org= and navigate to "* Emacs Config Inbox" heading
-- Confirm no child tasks exist under this heading only
-- Bonus: Check that active todo count is < 20 items across entire V2MOM
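
If you ever want to check this mechanically rather than by eye, a short script can count the child headings. This is a hedged sketch under stated assumptions: the inbox is a top-level =* Emacs Config Inbox= heading whose items are deeper =**= headings, and =inbox_item_count= is an illustrative name, not an existing tool:

```python
import re

def inbox_item_count(org_text):
    """Count child headings under '* Emacs Config Inbox' (assumed layout)."""
    inside = False
    count = 0
    for line in org_text.splitlines():
        if re.match(r"\* ", line):   # a top-level heading starts/ends a section
            inside = line.strip() == "* Emacs Config Inbox"
        elif inside and re.match(r"\*\*+ ", line):
            count += 1               # child entry still waiting to be triaged
    return count

sample = """* Emacs Config Inbox
** TODO [#A] Fix org-agenda slowness
** TODO Try minimap mode
* Method 1
** TODO Already triaged, not counted
"""
print(inbox_item_count(sample))  # the session is done when this reaches 0
```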
-
-* When to Use This Session
-
-Trigger this session when:
-- It's Sunday and you haven't triaged this week
-- 7 days have passed since last triage (hard deadline)
-- "Emacs Config Inbox" has accumulated items
-- You notice yourself avoiding looking at the inbox (a sign it's becoming overwhelming)
-- Before starting any new Emacs config work (ensures highest-priority work happens first)
-
-*Recommended cadence:* Every Sunday, 10 minutes, no exceptions.
-
-* Approach: How We Work Together
-
-** Phase 1: Sort by Priority
-
-First, ensure todo items are sorted by priority in =todo.org=:
-- A (highest priority)
-- B
-- C
-- No priority
-- D (lowest priority)
-
-This ensures we always look at the most important items first. If time runs short, at least the high-priority items got processed.
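
Inside Emacs, =org-sort-entries= can do this sort by priority directly. To make the ordering explicit, here is a minimal sketch assuming items are plain dicts with an optional =priority= field (the data layout is illustrative, not from the source):

```python
# Rank used by this session: A > B > C > no priority > D
PRIORITY_RANK = {"A": 0, "B": 1, "C": 2, None: 3, "D": 4}

def by_priority(item):
    """Sort key: items with no priority slot between C and D."""
    return PRIORITY_RANK.get(item.get("priority"), 3)

inbox = [
    {"title": "Try minimap mode", "priority": "C"},
    {"title": "Fix org-agenda slowness", "priority": "A"},
    {"title": "Old experiment", "priority": "D"},
    {"title": "Unlabeled idea"},
]
inbox.sort(key=by_priority)
print([i["title"] for i in inbox])
```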
-
-** Phase 2: Claude Rereads V2MOM
-
-Before processing any items, Claude rereads [[file:../EMACS-CONFIG-V2MOM.org][EMACS-CONFIG-V2MOM.org]] to have it fresh in mind. This ensures filtering decisions are grounded in the strategic framework.
-
-*What Claude should pay attention to:*
-- The 6 Methods and their concrete actions
-- The Values (Intuitive, Fast, Simple) and what they mean
-- The Metrics (especially active todo count < 20)
-- Method 6 discipline practices (ruthless prioritization, weekly triage, ship-over-research)
-
-** Phase 3: Process Each Item (in Priority Order)
-
-*IMPORTANT:* Process ONLY items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 have been triaged and should remain where they are.
-
-For each item under "* Emacs Config Inbox", work through these questions:
-
-*** Question 1: Does this task need to be done at all?
-
-*Consider:*
-- Has something changed?
-- Was this a mistake?
-- Do I disagree with this idea now?
-- Is this actually important?
-
-*If NO:* *DELETE* the item immediately. Don't move it anywhere. Kill it.
-
-*Examples of deletions:*
-- "Add Signal client to Emacs" - Cool idea, not important
-- "Try minimap mode" - Interesting, doesn't serve vision
-- "Research 5 different completion frameworks" - Already have Vertico/Corfu, stop researching
-
-*** Question 2: Is this task related to the Emacs Config V2MOM?
-
-*If NO:* *Move to* =docs/someday-maybe.org=
-
-These are tasks that might be good ideas but don't serve the current strategic focus. They're not deleted (might revisit later) but they're out of active consideration.
-
-*Examples:*
-- LaTeX improvements (no concrete need yet)
-- Elfeed dashboard redesign (unclear if actually used)
-- New theme experiments (side project competing with maintenance)
-
-*** Question 3: Which V2MOM method does this relate to?
-
-*If YES (related to V2MOM):*
-
-Claude suggests which method(s) this might relate to:
-- Method 1: Make Using Emacs Frictionless (performance, bug fixes, missing features)
-- Method 2: Stop Problems Before They Appear (package upgrades, deprecation removal)
-- Method 3: Make Fixing Emacs Frictionless (tooling, testing, profiling)
-- Method 4: Contribute to the Emacs Ecosystem (package maintenance)
-- Method 5: Be Kind To Your Future Self (new capabilities)
-- Method 6: Develop Disciplined Engineering Practices (meta-practices)
-
-*This is a conversation.* If the relationship is only tangential:
-- *Claude should push back* - "This seems tangential. Adding it would dilute focus and delay V2MOM completion. Are you sure this serves the vision?"
-- Help Craig realize it doesn't fit through questions
-- The more we add, the longer V2MOM takes, the harder it is to complete
-
-*If item relates to multiple methods:*
-Pick the *highest-priority method* (Method 1 > Method 2 > Method 3 > etc.)
-
-*IMPORTANT: Capture useful context!*
-During discussion, Craig may provide:
-- Impact estimates ("15-20 seconds × 12 times/day")
-- Theories about root causes
-- Context about why this matters
-- Examples of when the problem occurs
-
-*When moving items to methods, add this context to the task description.* This preserves valuable information for later execution and helps prioritize work accurately.
-
-*Then:* Move the item to the appropriate method section in the V2MOM or active todo list with enriched context.
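
The three-question filter above can be sketched as a small routing function. This is an illustrative sketch, not part of the documented workflow; the field names (=needed=, =v2mom_related=, =methods=) are hypothetical:

```python
def route(item):
    """Route one inbox item per the three-question filter.

    Returns 'delete', 'someday-maybe', or the target method.
    Fields are illustrative: needed, v2mom_related, methods (list of ints).
    """
    if not item["needed"]:                    # Q1: does this need doing at all?
        return "delete"
    if not item["v2mom_related"]:             # Q2: related to the V2MOM?
        return "someday-maybe"
    return f"Method {min(item['methods'])}"   # Q3: highest-priority method wins

print(route({"needed": False, "v2mom_related": True, "methods": [1]}))
print(route({"needed": True, "v2mom_related": False, "methods": []}))
print(route({"needed": True, "v2mom_related": True, "methods": [5, 1]}))
```

Note how "highest priority" maps to the lowest method number, matching the Method 1 > Method 2 > ... ordering above.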
-
-** Phase 4: Verify and Celebrate
-
-Once all items are processed:
-1. Verify "Emacs Config Inbox" heading is empty
-2. Check that active todo count is < 20 items
-3. Note the date of this triage session
-4. Acknowledge: You've practiced ruthless prioritization (Method 6 skill development)
-
-** Decision Framework: When Uncertain
-
-If you're uncertain whether an item fits V2MOM:
-
-1. *Ask: Does this directly serve the Vision?* (Work at speed of thought, stable config, comprehensive workflows)
-2. *Ask: Does this align with Values?* (Intuitive, Fast, Simple)
-3. *Ask: Is this in the Methods already?* (If not explicitly listed, probably shouldn't add)
-4. *Ask: What's the opportunity cost?* (Every new item delays everything else)
-
-*When in doubt:* Move to someday-maybe. You can always pull it back later if it proves critical. Better to be conservative than to dilute focus.
-
-* Principles to Follow
-
-** Claude's Role: "You're here to help keep me honest"
-
-Craig is developing discipline (Method 6: ruthless prioritization). Not making progress = not getting better.
-
-*Claude's responsibilities:*
-- If task clearly fits V2MOM → Confirm and move forward quickly
-- If task is unclear/tangential → *Ask questions* to help Craig realize it doesn't fit or won't lead to V2MOM success
-- Enable ruthless prioritization by helping Craig say "no"
-- Don't let good ideas distract from great goals
-
-*Example questions Claude might ask:*
-- "This is interesting, but which specific metric does it improve?"
-- "We already have 3 items in Method 1 addressing performance. Does this add something different?"
-- "This would be fun to build, but does it make using Emacs more frictionless?"
-- "If you had to choose between this and fixing org-agenda (30s → 5s), which serves the vision better?"
-
-** Time Efficiency: 10 Minutes Active Work
-
-Don't take too long on any single item. Splitting philosophical hairs = procrastination.
-
-*Target:* *10 minutes of active work time* (not clock time; interruptions are expected)
-
-*If spending > 1 minute on a single item:*
-- Decision is unclear → Move to someday-maybe (safe default)
-- Come back to it later if it proves critical
-- Keep moving
-
-*Why this matters:*
-- Weekly consistency requires low friction
-- Perfect categorization doesn't matter as much as consistent practice
-- Getting through all items > perfectly routing each item
-
-** Ruthless Prioritization Over Completeness
-
-The goal is not to do everything in the inbox. The goal is to identify and focus on what matters most.
-
-*Better to:*
-- Delete 50% of items and ship the other 50%
-- Than keep 100% and ship 0%
-
-*Remember:*
-- Every item kept is opportunity cost
-- V2MOM already has plenty of work
-- "There will always be cool ideas out there to implement and they will always be a web search away" (Craig's words)
-
-** Bias Toward Action
-
-When processing items that ARE aligned with V2MOM:
-- Move them to the appropriate method quickly
-- Don't overthink the categorization
-- Getting it 80% right is better than spending 5 minutes getting it 100% right
-- You can always recategorize later during regular triage
-
-* Example Session Walkthrough
-
-** Setup
-- Open =~/.emacs.d/todo.org=
-- Navigate to "Emacs Config Inbox" heading
-- Verify items are sorted by priority (A → B → C → none → D)
-- Claude rereads =EMACS-CONFIG-V2MOM.org=
-
-** Processing Example Items
-
-*** Example 1: [#A] Fix org-agenda slowness (30+ seconds)
-
-*Q1: Does this need to be done?* YES - Daily pain point blocking productivity
-
-*Q2: Related to V2MOM?* YES - Method 1 explicitly lists this
-
-*Q3: Which method?* Method 1: Make Using Emacs Frictionless
-
-*Action:* Move to Method 1 active tasks (or confirm already there)
-
-*Time:* 15 seconds
-
-*** Example 2: [#B] Add Signal client to Emacs
-
-*Q1: Does this need to be done?* Let's think...
-
-Claude: "What problem does this solve? Is messaging in Emacs part of the Vision?"
-
-Craig: "Not really, I already use Signal on my phone fine."
-
-*Action:* *DELETE* - Doesn't serve the vision, and a working solution already exists
-
-*Time:* 30 seconds
-
-*** Example 3: [#C] Try out minimap mode for code navigation
-
-*Q1: Does this need to be done?* Interesting idea, but not important
-
-*Action:* *DELETE* or move to someday-maybe - Interesting, not important
-
-*Time:* 10 seconds
-
-*** Example 4: [#B] Implement transcription workflow
-
-*Q1: Does this need to be done?* YES - Want to transcribe recordings for notes
-
-*Q2: Related to V2MOM?* Maybe... seems like new feature?
-
-Claude: "This seems like Method 5: Be Kind To Your Future Self - new capability you'll use repeatedly. Complete code already exists in old todo.org. But we're still working through Method 1 (frictionless) and Method 2 (stability). Should this wait, or is transcription critical?"
-
-Craig: "Actually yes, I record meetings and need transcripts. This is important."
-
-*Q3: Which method?* Method 5: Be Kind To Your Future Self
-
-*Action:* Move to Method 5 (but note: prioritize after Methods 1-3)
-
-*Time:* 45 seconds (good conversation, worth the time)
-
-** Result
-- 4 items processed in ~2 minutes
-- 1 moved to Method 1 (already there)
-- 1 deleted
-- 1 deleted or moved to someday-maybe
-- 1 moved to Method 5
-- Inbox is clearer, focus is sharper
-
-* Conclusion
-
-Emacs inbox zero is not about getting through email or org-capture. It's about *strategic filtering of config maintenance work*. By processing the inbox weekly, you:
-
-- Keep maintenance load manageable (< 20 active items)
-- Ensure only V2MOM-aligned work happens
-- Practice ruthless prioritization (Method 6 skill)
-- Prevent backlog from crushing future productivity
-- Build the discipline that makes all other methods sustainable
-
-*The session takes 10 minutes. Not doing it costs days of distracted, unfocused work on things that don't matter.*
-
-*Remember:* Inbox zero is not about having zero things to do. It's about knowing exactly what you're NOT doing, so you can focus completely on what matters most.
-
-* Living Document
-
-This is a living document. After each emacs-inbox-zero session, consider:
-- Did the workflow make sense?
-- Were any steps unclear or unnecessary?
-- Did any new situations arise that need decision frameworks?
-- Did the 10-minute target work, or should it adjust?
-
-Update this document with learnings to make future sessions smoother.
-
-** Updates and Learnings
-
-*** 2025-11-01: First validation session - Process works!
-
-*Session results:*
-- 5 items processed in ~10 minutes (target met)
-- 1 deleted (duplicate), 2 moved to Method 1, 2 moved to someday-maybe
-- Inbox cleared to zero
-- Priority sorting worked well
-- Three-question filter was effective
-- Caught duplicate task and perfectionism pattern in real-time
-
-*Key learning: Capture useful context during triage*
-When Craig provides impact estimates ("15-20 seconds × 12 times/day"), theories, or context during discussion, *Claude should add this information to the task description* when moving items to methods. This preserves valuable context for execution and helps with accurate prioritization.
-
-Example: "Optimize org-capture target building" was enriched with "15-20 seconds every time capturing a task (12+ times/day). Major daily bottleneck - minutes lost waiting, plus context switching cost."
-
-*Impact:* Better task descriptions → better prioritization → better execution.
diff --git a/docs/sessions/refactor.org b/docs/sessions/refactor.org
deleted file mode 100644
index 0cdb6841..00000000
--- a/docs/sessions/refactor.org
+++ /dev/null
@@ -1,617 +0,0 @@
-#+TITLE: Test-Driven Quality Engineering Session Process
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes a comprehensive test-driven quality engineering session process applicable to any source code module. The session demonstrates systematic testing practices, refactoring for testability, bug discovery through tests, and decision-making processes when tests fail.
-
-* Session Goals
-
-1. Add comprehensive unit test coverage for testable functions in your module
-2. Discover and fix bugs through systematic testing
-3. Follow quality engineering principles from =ai-prompts/quality-engineer.org=
-4. Demonstrate refactoring patterns for testability
-5. Document the decision-making process for test vs production code issues
-
-* Phase 1: Feature Addition with Testability in Mind
-
-** The Feature Request
-
-Add new functionality that requires user interaction combined with business logic.
-
-Example requirements:
-- Present user with options (e.g., interactive selection)
-- Allow cancellation
-- Perform an operation with the selected input
-- Provide clear success/failure feedback
-
-** Refactoring for Testability
-
-Following the "Interactive vs Non-Interactive Function Pattern" from =quality-engineer.org=:
-
-*Problem:* Directly implementing as an interactive function would require:
-- Mocking user interface components
-- Mocking framework-specific APIs
-- Testing UI functionality, not core business logic
-
-*Solution:* Split into two functions:
-
-1. *Helper Function* (internal implementation):
- - Pure, deterministic
- - Takes explicit parameters
- - No user interaction
- - Returns values or signals errors naturally
- - 100% testable, no mocking needed
-
-2. *Interactive Wrapper* (public interface):
- - Thin layer handling only user interaction
- - Gets input from user/context
- - Presents UI (prompts, selections, etc.)
- - Catches errors and displays messages
- - Delegates all business logic to helper
- - No tests needed (just testing framework UI)
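
A minimal sketch of the split, in Python since the document keeps its examples language-agnostic. The names (=rename_item=, its arguments) are hypothetical, not from the source:

```python
def rename_item(old_name, new_name):
    """Helper: pure business logic, explicit parameters, no UI.

    Signals errors naturally; fully testable with no mocking.
    """
    if not new_name:
        raise ValueError("new name must not be empty")
    return f"{old_name} -> {new_name}"

def rename_item_interactive():
    """Wrapper: thin UI layer that gathers input and reports outcomes."""
    new_name = input("New name: ")           # all user interaction lives here
    try:
        print("Renamed:", rename_item("current-item", new_name))
    except ValueError as err:
        print("Cancelled:", err)             # errors become user messages
```

Tests target only =rename_item=; the wrapper is deliberately too thin to need them.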
-
-** Benefits of This Pattern
-
-From =quality-engineer.org=:
-#+begin_quote
-When writing functions that combine business logic with user interaction:
-- Split into internal implementation and interactive wrapper
-- Internal function: Pure logic, takes all parameters explicitly
-- Dramatically simpler testing (no interactive mocking)
-- Code reusable programmatically without prompts
-- Clear separation of concerns (logic vs UI)
-#+end_quote
-
-This pattern enables:
-- Zero mocking in tests
-- Fast, deterministic tests
-- Easy reasoning about correctness
-- Reusable helper function
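
As a sketch of this split, here is a minimal Python version with hypothetical names (=rename_file= and its interactive wrapper are illustrations, not functions from any actual module):

```python
import os

# Pure helper: all inputs explicit, no UI, fully testable without mocks.
def rename_file(old_path, new_path):
    """Rename old_path to new_path; signal errors naturally via exceptions."""
    if not os.path.exists(old_path):
        raise FileNotFoundError(old_path)
    os.rename(old_path, new_path)
    return new_path

# Interactive wrapper: thin UI layer that delegates all logic to the helper.
def rename_file_interactive():
    old_path = input("File to rename: ")
    new_path = input("New name (empty to cancel): ")
    if not new_path:
        print("Cancelled.")
        return
    try:
        rename_file(old_path, new_path)
        print(f"Renamed to {new_path}")
    except OSError as err:
        print(f"Rename failed: {err}")
```

Only =rename_file= needs tests; the wrapper contains no business logic of its own.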
-
-* Phase 2: Writing the First Test
-
-** Test File Naming
-
-Following the naming convention from =quality-engineer.org=:
-- Pattern: =test-<module>-<function>.<ext>=
-- One test file per function for easy discovery when tests fail
-- Developer sees failure → immediately knows which file to open
-
-** Test Organization
-
-Following the three-category structure:
-
-*** Normal Cases
-- Standard expected inputs
-- Common use case scenarios
-- Happy path operations
-- Multiple operations in sequence
-
-*** Boundary Cases
-- Very long inputs
-- Unicode characters (中文, emoji)
-- Special characters and edge cases
-- Empty or minimal data
-- Maximum values
-
-*** Error Cases
-- Invalid inputs
-- Nonexistent resources
-- Permission denied scenarios
-- Wrong type of input
-
-** Writing Tests with Zero Mocking
-
-Key principle: "Don't mock what you're testing" (from =quality-engineer.org=)
-
-Example test structure:
-#+begin_src python
-def test_function_normal_case_expected_result():
-    setup()
-    try:
-        # Arrange
-        input_data = create_test_data()
-        expected_output = define_expected_result()
-
-        # Act
-        actual_output = function_under_test(input_data)
-
-        # Assert
-        assert actual_output == expected_output
-    finally:
-        teardown()
-#+end_src
-
-Key characteristics:
-- No mocks for the function being tested
-- Real resources (files, data structures) using test utilities
-- Tests actual function behavior
-- Clean setup/teardown
-- Clear arrange-act-assert structure
-
-** Result
-
-When helper functions are well-factored and deterministic, tests often pass on first run.
-
-* Phase 3: Systematic Test Coverage Analysis
-
-** Identifying Testable Functions
-
-Review all functions in your module and categorize by testability:
-
-*** Easy to Test (Pure/Deterministic)
-- Input validation functions
-- String manipulation/formatting
-- Data structure transformations
-- File parsing (read-only operations)
-- Configuration/option processing
-
-*** Medium Complexity (Need External Resources)
-- File I/O operations
-- Recursive algorithms
-- Data structure generation
-- Cache or state management
-
-*** Hard to Test (Framework/Context Dependencies)
-- Functions requiring specific runtime environment
-- UI/buffer/window management
-- Functions tightly coupled to framework internals
-- Functions requiring complex mocking setup
-
-*Decision:* Test easy and medium complexity functions. Skip framework-dependent functions that would require extensive mocking/setup (diminishing returns).
-
-** File Organization Principle
-
-From =quality-engineer.org=:
-#+begin_quote
-*Unit Tests*: One file per method
-- Naming: =test-<filename>-<methodname>.<ext>=
-- Example: =test-module--function.ext=
-#+end_quote
-
-*Rationale:* When a test fails in CI:
-1. Developer sees: =test-module--function-normal-case-returns-result FAILED=
-2. Immediately knows: Look for =test-module--function.<ext>=
-3. Opens file and fixes issue - *fast cognitive path*
-
-With combined test files:
-1. Test fails: =test-module--function-normal-case-returns-result FAILED=
-2. Which file? =test-module--helpers.<ext>=? =test-module--combined.<ext>=?
-3. Developer wastes time searching - *slower, frustrating*
-
-*The 1:1 mapping is a usability feature for developers under pressure.*
-
-* Phase 4: Testing Function by Function
-
-** Example 1: Input Validation Function
-
-*** Test Categories
-
-*Normal Cases:*
-- Valid inputs
-- Case variations
-- Common use cases
-
-*Boundary Cases:*
-- Edge cases in input format
-- Multiple delimiters or separators
-- Empty or minimal input
-- Very long input
-
-*Error Cases:*
-- Nil/null input
-- Wrong type
-- Malformed input
-
-*** First Run: Most Passed, Some FAILED
-
-*Example Failure:*
-#+begin_src
-test-module--validate-input-error-nil-input-returns-nil
-Expected: Returns nil gracefully
-Actual: (TypeError/NullPointerException) - CRASHED
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Read the test expectation: "nil input returns nil/false gracefully"
-2. Read the production code:
- #+begin_src
- function validate_input(input):
- extension = get_extension(input) # ← Crashes here on nil/null
- return extension in valid_extensions
- #+end_src
-3. Identify issue: Function expects string, crashes on nil/null
-4. Consider context: This is defensive validation code, called in various contexts
-
-*Decision: Fix production code*
-
-*Rationale:*
-- Function should be defensive (validation code)
-- Returning false/nil for invalid input is more robust than crashing
-- Common pattern in validation functions
-- Better user experience
-
-*Fix:*
-#+begin_src
-function validate_input(input):
- if input is None or not isinstance(input, str): # ← Guard added
- return False
- extension = get_extension(input)
- return extension in valid_extensions
-#+end_src
-
-Result: All tests pass after adding defensive checks.
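
The fixed validator, rendered as runnable Python; the extension set and the use of =os.path.splitext= are assumptions for this sketch, not details from the actual module:

```python
import os

# Illustrative extension set, not taken from the source module.
VALID_EXTENSIONS = {".mp3", ".ogg", ".flac"}

def validate_input(value):
    """Return True for a string whose extension is recognized; never crash."""
    if value is None or not isinstance(value, str):  # ← guard added
        return False
    extension = os.path.splitext(value)[1].lower()
    return extension in VALID_EXTENSIONS
```

With the guard in place, nil/None and wrong-type inputs return False instead of raising.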
-
-** Example 2: Another Validation Function
-
-*** First Run: Most Passed, Multiple FAILED
-
-*Failures:*
-1. Nil input crashed (same pattern as previous function)
-2. Empty string returned unexpected value (edge case not handled)
-
-*Fix:*
-#+begin_src
-function validate_resource(resource):
- # Guards added for nil/null and empty string
- if not resource or not isinstance(resource, str) or resource.strip() == "":
- return False
-
- # Original validation logic
- return is_valid_resource(resource) and meets_criteria(resource)
-#+end_src
-
-Result: All tests pass after adding comprehensive guards.
-
-** Example 3: String Sanitization Function
-
-*** First Run: Most Passed, 1 FAILED
-
-*Failure:*
-#+begin_src
-test-module--sanitize-boundary-special-chars-replaced
-Expected: "output__________" (10 underscores)
-Actual: "output_________" (9 underscores)
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Count special chars in test input: 9 characters
-2. Test expected 10 replacements, but input only has 9
-3. Production code is working correctly
-
-*Decision: Fix test code*
-
-*The bug was in the test expectation, not the implementation.*
-
-Result: All tests pass after correcting test expectations.
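
A hypothetical sanitizer of the kind tested above (the character class is an assumption); the lesson is to count the special characters in the input before writing the expected string:

```python
import re

def sanitize(name, replacement="_"):
    """Replace every character outside [A-Za-z0-9._-] with the replacement."""
    return re.sub(r"[^A-Za-z0-9._-]", replacement, name)
```

Here ="a/b:c"= contains exactly two special characters, so the expectation must contain exactly two underscores.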
-
-** Example 4: File/Data Parser Function
-
-This is where a *significant bug* was discovered through testing!
-
-*** Test Categories
-
-*Normal Cases:*
-- Absolute paths/references
-- Relative paths (expanded to base directory)
-- URLs/URIs preserved as-is
-- Mixed types of references
-
-*Boundary Cases:*
-- Empty lines ignored
-- Whitespace-only lines ignored
-- Comments ignored (format-specific)
-- Leading/trailing whitespace trimmed
-- Order preserved
-
-*Error Cases:*
-- Nonexistent file
-- Nil/null input
-
-*** First Run: Majority Passed, Multiple FAILED
-
-All failures related to URL/URI handling:
-
-*Failure Pattern:*
-#+begin_src
-Expected: "http://example.com/resource"
-Actual: "/base/path/http:/example.com/resource"
-#+end_src
-
-URLs were being treated as relative paths and corrupted!
-
-*** Root Cause Analysis
-
-*Production code:*
-#+begin_src
-if line.matches("^\(https?|mms\)://"): # Pattern detection
- # Handle as URL
-#+end_src
-
-*Problem:* Pattern matching is incorrect!
-
-The pattern/regex has an error:
-- Incorrect escaping or syntax
-- Pattern fails to match valid URLs
-- All URLs fall through to the "relative path" handler
-
-The pattern never matched, so URLs were incorrectly processed as relative paths.
-
-*Correct version:*
-#+begin_src
-if line.matches("^(https?|mms)://"): # Fixed pattern
- # Handle as URL
-#+end_src
-
-Common causes of this type of bug:
-- String escaping issues in the language
-- Incorrect regex syntax
-- Copy-paste errors in patterns
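
The fix can be demonstrated in Python, where a raw string keeps the grouping parentheses unescaped; =resolve_entry= and the base-directory expansion are illustrative, not the actual parser:

```python
import os
import re

# Unescaped groups in a raw string: this pattern matches real URLs.
# The bug was the escaped form "^\(https?|mms\)://", which never matches them.
URL_RE = re.compile(r"^(https?|mms)://")

def resolve_entry(line, base_dir):
    """Keep URLs as-is; expand everything else relative to base_dir."""
    line = line.strip()
    if URL_RE.match(line):
        return line
    return os.path.normpath(os.path.join(base_dir, line))
```

The buggy escaped pattern returns no match for =http://...= input, which is exactly why every URL fell through to the relative-path branch.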
-
-*** Impact Assessment
-
-*This is a significant bug:*
-- Remote resources (URLs) would be broken
-- Data corruption: URLs transformed into invalid paths
-- Function worked for local/simple cases, so bug went unnoticed
-- Users would see mysterious errors when using remote resources
-- Potential data loss or corruption in production
-
-*Tests caught a real production bug that could have caused user data corruption!*
-
-Result: All tests pass after fixing the pattern matching logic.
-
-* Phase 5: Continuing Through the Test Suite
-
-** Additional Functions Tested Successfully
-
-As testing continues through the module, patterns emerge:
-
-*Function: Directory/File Listing*
- - Learning: Directory listing order may be filesystem-dependent
- - Solution: Sort results before comparing in tests
-
-*Function: Data Extraction*
- - Keep as separate test file (don't combine with related functions)
- - Reason: Usability when tests fail
-
-*Function: Recursive Operations*
- - Medium complexity: Required creating test data structures/trees
- - Use test utilities for setup/teardown
- - Well-factored functions often pass all tests initially
-
-*Function: Higher-Order Functions*
- - Test functions that return functions/callbacks
- Early test expectations may misread framework/protocol behavior
- - Fix test expectations to match actual framework behavior
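
The filesystem-ordering lesson can be captured in a few lines of Python (=list_files= is a hypothetical stand-in for the directory-listing function):

```python
import os
import tempfile

def list_files(directory):
    """Return plain-file names in a directory; order is filesystem-dependent."""
    return [name for name in os.listdir(directory)
            if os.path.isfile(os.path.join(directory, name))]

# In the test, sort both sides so the assertion is order-independent.
d = tempfile.mkdtemp()
for name in ("b.txt", "a.txt", "c.txt"):
    open(os.path.join(d, name), "w").close()
assert sorted(list_files(d)) == ["a.txt", "b.txt", "c.txt"]
```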
-
-* Key Principles Applied
-
-** 1. Refactor for Testability BEFORE Writing Tests
-
-The Interactive vs Non-Interactive pattern from =quality-engineer.org= made testing trivial:
-- No mocking required
-- Fast, deterministic tests
-- Clear separation of concerns
-
-** 2. Systematic Test Organization
-
-Every test file followed the same structure:
-- Normal Cases
-- Boundary Cases
-- Error Cases
-
-This makes it easy to:
-- Identify coverage gaps
-- Add new tests
-- Understand what's being tested
-
-** 3. Test Naming Convention
-
-Pattern: =test-<module>-<function>-<category>-<scenario>-<expected-result>=
-
-Examples:
-- =test-module--validate-input-normal-valid-extension-returns-true=
-- =test-module--parse-data-boundary-empty-lines-ignored=
-- =test-module--sanitize-error-nil-input-signals-error=
-
-Benefits:
-- Self-documenting
-- Easy to understand what failed
-- Searchable/grepable
-- Clear category organization
-
-** 4. Zero Mocking for Pure Functions
-
-From =quality-engineer.org=:
-#+begin_quote
-DON'T MOCK WHAT YOU'RE TESTING
-- Only mock external side-effects and dependencies, not the domain logic itself
-- If mocking removes the actual work the function performs, you're testing the mock
-- Use real data structures that the function is designed to operate on
-- Rule of thumb: If the function body could be =(error "not implemented")= and tests still pass, you've over-mocked
-#+end_quote
-
-Our tests used:
-- Real file I/O
-- Real strings
-- Real data structures
-- Actual function behavior
-
-Result: Tests caught real bugs, not mock configuration issues.
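
A sketch of this no-mock style in Python: the test writes a real temporary file and calls a hypothetical parser on it, so a failure points at real behavior rather than at mock configuration:

```python
import os
import tempfile

def parse_lines(path):
    """Return non-empty, non-comment lines from a real file, trimmed."""
    with open(path) as handle:
        return [line.strip() for line in handle
                if line.strip() and not line.lstrip().startswith("#")]

# The test exercises real file I/O end to end -- no mocks anywhere.
def test_parse_lines_skips_blanks_and_comments():
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w") as handle:
            handle.write("first\n\n# comment\n  second  \n")
        assert parse_lines(path) == ["first", "second"]
    finally:
        os.remove(path)
```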
-
-** 5. Test vs Production Code Bug Decision Framework
-
-When a test fails, ask:
-
-1. *What is the test expecting?*
- - Read the test name and assertions
- - Understand the intended behavior
-
-2. *What is the production code doing?*
- - Read the implementation
- - Trace through the logic
-
-3. *Which is correct?*
- - Is the test expectation reasonable?
- - Is the production behavior defensive/robust?
- - What is the usage context?
-
-4. *Consider the impact:*
- - Defensive code: Fix production to handle edge cases
- - Wrong expectation: Fix test
- - Unclear spec: Ask user for clarification
-
-Examples from our session:
-- *Nil input crashes* → Fix production (defensive coding)
-- *Empty string treated as valid* → Fix production (defensive coding)
-- *Wrong count in test* → Fix test (test bug)
-- *Regex escaping wrong* → Fix production (real bug!)
-
-** 6. Fast Feedback Loop
-
-Pattern: "Write tests, run them all, report errors, and see where we are!"
-
-This became a mantra during the session:
-1. Write comprehensive tests for one function
-2. Run immediately
-3. Analyze failures
-4. Fix bugs (test or production)
-5. Verify all tests pass
-6. Move to next function
-
-Benefits:
-- Caught bugs immediately
-- Small iteration cycles
-- Clear progress
-- High confidence in changes
-
-* Final Results
-
-** Test Coverage Example
-
-*Multiple functions tested with comprehensive coverage:*
-1. File operation helper - ~10-15 tests
-2. Input validation function - ~15 tests
-3. Resource validation function - ~13 tests
-4. String sanitization function - ~13 tests
-5. File/data parser function - ~15 tests
-6. Directory listing function - ~7 tests
-7. Data extraction function - ~6 tests
-8. Recursive operation function - ~12 tests
-9. Higher-order function - ~12 tests
-
-Total: Comprehensive test suite covering all testable functions
-
-** Bugs Discovered and Fixed
-
-1. *Input Validation Function*
- - Issue: Crashed on nil/null input
- - Fix: Added nil/type guards
- - Impact: Prevents crashes in validation code
-
-2. *Resource Validation Function*
- - Issue: Crashed on nil, treated empty string as valid
- - Fix: Added guards for nil and empty string
- - Impact: More robust validation
-
-3. *File/Data Parser Function* ⚠️ *SIGNIFICANT BUG*
- - Issue: Pattern matching wrong - URLs/URIs corrupted as relative paths
- - Fix: Corrected pattern matching logic
- - Impact: Remote resources now work correctly
- - *This bug would have corrupted user data in production*
-
-** Code Quality Improvements
-
-- All testable helper functions now have comprehensive test coverage
-- More defensive error handling (nil guards)
-- Clear separation of concerns (pure helpers vs interactive wrappers)
-- Systematic boundary condition testing
-- Unicode and special character handling verified
-
-* Lessons Learned
-
-** 1. Tests as Bug Discovery Tools
-
-Tests aren't just for preventing regressions - they actively *discover existing bugs*:
-- Pattern matching bugs may exist in production
-- Nil/null handling bugs manifest in edge cases
-- Tests make these issues visible immediately
-- Bugs caught before users encounter them
-
-** 2. Refactoring Enables Testing
-
-The decision to split functions into pure helpers + interactive wrappers:
-- Made testing dramatically simpler
-- Enabled 100+ tests with zero mocking
-- Improved code reusability
-- Clarified function responsibilities
-
-** 3. Systematic Process Matters
-
-Following the same pattern for each function:
-- Reduced cognitive load
-- Made it easy to maintain consistency
-- Enabled quick iteration
-- Built confidence in coverage
-
-** 4. File Organization Aids Debugging
-
-One test file per function:
-- Fast discovery when tests fail
-- Clear ownership
-- Easy to maintain
-- Follows user's mental model
-
-** 5. Test Quality Equals Production Quality
-
-Quality tests:
-- Use real resources (not mocks)
-- Test actual behavior
-- Cover edge cases systematically
-- Find real bugs
-
-This is only possible with well-factored, testable code.
-
-* Applying These Principles
-
-When adding tests to other modules:
-
-1. *Identify testable functions* - Look for pure helpers, file I/O, string manipulation
-2. *Refactor if needed* - Split interactive functions into pure helpers
-3. *Write systematically* - Normal, Boundary, Error categories
-4. *Run frequently* - Fast feedback loop
-5. *Analyze failures carefully* - Test bug vs production bug
-6. *Fix immediately* - Don't accumulate technical debt
-7. *Maintain organization* - One file per function, clear naming
-
-* Reference
-
-See =ai-prompts/quality-engineer.org= for comprehensive quality engineering guidelines, including:
-- Test organization and structure
-- Test naming conventions
-- Mocking and stubbing best practices
-- Interactive vs non-interactive function patterns
-- Integration testing guidelines
-- Test maintenance strategies
-
-Note: =quality-engineer.org= evolves as we learn more quality best practices. This document captures principles applied during this specific session.
-
-* Conclusion
-
-This session process demonstrates how systematic testing combined with refactoring for testability can:
-- Discover real bugs before they reach users
-- Improve code quality and robustness
-- Build confidence in changes
-- Create maintainable test suites
-- Follow industry best practices
-
-A comprehensive test suite with multiple bug fixes represents a significant quality improvement to any module. A critical bug like the pattern-matching issue in the example can by itself justify the entire testing effort, since such bugs can corrupt data and break major features.
-
-*Testing is not just about preventing future bugs - it's about finding bugs that already exist.*