path: root/docs/sessions
Diffstat (limited to 'docs/sessions')
-rw-r--r--  docs/sessions/create-session.org   | 352
-rw-r--r--  docs/sessions/emacs-inbox-zero.org | 338
-rw-r--r--  docs/sessions/refactor.org         | 593
3 files changed, 0 insertions, 1283 deletions
diff --git a/docs/sessions/create-session.org b/docs/sessions/create-session.org
deleted file mode 100644
index a0e4d2fe..00000000
--- a/docs/sessions/create-session.org
+++ /dev/null
@@ -1,352 +0,0 @@
-#+TITLE: Creating New Session Workflows
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes the meta-workflow for creating new session types. When we identify a repetitive workflow or collaborative pattern, we use this process to formalize it into a documented session that we can reference and reuse.
-
-Session workflows are living documents that capture how we work together on specific types of tasks. They build our shared vocabulary and enable efficient collaboration across multiple work sessions.
-
-* Problem We're Solving
-
-Without a formal session creation process, we encounter several issues:
-
-** Inefficient Use of Intelligence
-- Craig leads the process based solely on his knowledge
-- We don't leverage Claude's expertise to improve or validate the approach
-- Miss opportunities to apply software engineering and process best practices
-
-** Time Waste and Repetition
-- Craig must re-explain the workflow each time we work together
-- No persistent memory of how we've agreed to work
-- Each session starts from scratch instead of building on previous work
-
-** Error-Prone Execution
-- Important steps may be forgotten or omitted
-- No checklist to verify completeness
-- Mistakes lead to incomplete work or failed goals
-
-** Missed Learning Opportunities
-- Don't capture lessons learned from our collaboration
-- Can't improve processes based on what works/doesn't work
-- Lose insights that emerge during execution
-
-** Limited Shared Vocabulary
-- No deep, documented understanding of what terms mean
-- "Let's do a refactor session" has no precise definition
-- Can't efficiently communicate about workflows
-
-*Impact:* Inefficiency, errors, and lost opportunity to continuously improve our collaborative workflows.
-
-* Exit Criteria
-
-We know a session definition is complete when:
-
-1. **Information is logically arranged** - The structure makes sense and flows naturally
-2. **Both parties understand how to work together** - We can articulate the workflow
-3. **Agreement on effectiveness** - We both agree that following this session will lead to exit criteria and resolve the stated problem
-4. **Tasks are clearly defined** - Steps are actionable, not vague
-5. **Problem resolution path** - Completing the tasks either:
- - Fixes the problem permanently, OR
- - Provides a process for keeping the problem at bay
-
-*Measurable validation:*
-- Can we both articulate the workflow without referring to the document?
-- Do we agree it will solve the problem?
-- Are the tasks actionable enough to start immediately?
-- Does the session get used soon after creation (validation by execution)?
-
-* When to Use This Session
-
-Trigger this session creation workflow when:
-
-- You notice a repetitive workflow that keeps coming up
-- A collaborative pattern emerges that would benefit from documentation
-- Craig says "let's create/define/design a session for [activity]"
-- You identify a new type of work that doesn't fit existing session types
-- An existing session type needs significant restructuring (treat as creating a new one)
-
-Examples:
-- "Let's create a session where we inbox zero"
-- "We should define a code review session"
-- "Let's design a session for weekly planning"
-
-* Approach: How We Work Together
-
-** Phase 1: Question and Answer Discovery
-
-Walk through these four core questions collaboratively. Take notes on the answers.
-
-*IMPORTANT: Save answers as you go!*
-
-The Q&A phase can take time—Craig may need to think through answers, and discussions can be lengthy. To prevent data loss from terminal crashes or process quits:
-
-1. Create a draft file at =docs/sessions/[name]-draft.org= after deciding on the name
-2. After each question is answered, save the Q&A content to the draft file
-3. If session is interrupted, you can resume from the saved answers
-4. Once complete, the draft becomes the final session document
-
-This protects against losing substantial thinking work if the session is interrupted.
-
-*** Question 1: What problem are we solving in this type of session?
-
-Ask Craig: "What problem are we solving in this type of session?"
-
-The answer reveals:
-- Overview and goal of the session
-- Why this work matters (motivation)
-- Impact/priority compared to other work
-- What happens if we don't do this work
-
-Example from refactor session:
-#+begin_quote
-"My Emacs configuration isn't resilient enough. There's lots of custom code, and I'm even developing some as Emacs packages. Yet Emacs is my most-used software, so when Emacs breaks, I become unproductive. I need to make Emacs more resilient through good unit tests and refactoring."
-#+end_quote
-
-*** Question 2: How do we know when we're done?
-
-Ask Craig: "How do we know when we're done?"
-
-The answer reveals:
-- Exit criteria
-- Results/completion criteria
-- Measurable outcomes
-
-*Your role:*
-- Push back if the answer is vague or unmeasurable
-- Propose specific measurements based on context
-- Iterate together until criteria are clear
-- Fallback (hopefully rare): "when Craig says we're done"
-
-Example from refactor session:
-#+begin_quote
-"When we've reviewed all methods, decided which to test and refactor, run all tests, and fixed all failures including bugs we find."
-#+end_quote
-
-Claude might add: "How about a code coverage goal of 70%+?"
-
-*** Question 3: How do you see us working together in this kind of session?
-
-Ask Craig: "How do you see us working together in this kind of session?"
-
-The answer reveals:
-- Steps or phases we'll go through
-- The general approach to the work
-- How tasks flow from one to another
-
-*Your role:*
-- As steps emerge, ask yourself:
- - "Do these steps lead to solving the real problem?"
- - "What is missing from these steps?"
-- If the answers aren't "yes" and "nothing", raise concerns
-- Propose additions based on your knowledge
-- Suggest concrete improvements
-
-Example from refactor session:
-#+begin_quote
-"We'll analyze test coverage, categorize functions by testability, write tests systematically using Normal/Boundary/Error categories, run tests, analyze failures, fix bugs, and repeat."
-#+end_quote
-
-Claude might suggest: "Should we install a code coverage tool as part of this process?"
-
-*** Question 4: Are there any principles we should be following while doing this?
-
-Ask Craig: "Are there any principles we should be following while doing this kind of session?"
-
-The answer reveals:
-- Principles to follow
-- Decision frameworks
-- Quality standards
-- When to choose option A vs option B
-
-*Your role:*
-- Think through all elements of the session
-- Consider situations that may arise
-- Identify what principles would guide decisions
-- Suggest decision frameworks from your knowledge
-
-Example from refactor session:
-#+begin_quote
-Craig: "Treat all test code as production code - same engineering practices apply."
-
-Claude suggests: "Since we'll refactor methods mixing UI and logic, should we add a principle to separate them for testability?"
-#+end_quote
-
-** Phase 2: Assess Completeness
-
-After the Q&A, ask together:
-
-1. **Do we have enough information to formulate steps/process?**
- - If yes, proceed to Phase 3
- - If no, identify what's missing and discuss further
-
-2. **Do we agree following this approach will resolve/mitigate the problem?**
- - Both parties must agree
- - If not, identify concerns and iterate
-
-** Phase 3: Name the Session
-
-Decide on a name for this session type.
-
-*Naming convention:* Action-oriented (verb form)
-- Examples: "refactor", "inbox-zero", "create-session", "review-code"
-- Why: Shorter, natural when saying "let's do a [name] session"
-- Filename: =docs/sessions/[name].org=
-
-** Phase 4: Document the Session
-
-Write the session file at =docs/sessions/[name].org= using this structure:
-
-*** Recommended Structure
-1. *Title and metadata* (=#+TITLE=, =#+AUTHOR=, =#+DATE=)
-2. *Overview* - Brief description of the session
-3. *Problem We're Solving* - From Q&A, with context and impact
-4. *Exit Criteria* - Measurable outcomes, how we know we're done
-5. *When to Use This Session* - Triggers, circumstances, examples
-6. *Approach: How We Work Together*
- - Phases/steps derived from Q&A
- - Decision frameworks
- - Concrete examples woven throughout
-7. *Principles to Follow* - Guidelines from Q&A
-8. *Living Document Notice* - Reminder to update with learnings
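-
-A minimal skeleton following this structure might look like the following (headings illustrative; adapt per session):
-
-#+begin_src org
-,#+TITLE: [Session Name]
-,#+AUTHOR: Craig Jennings & Claude
-,#+DATE: [date]
-
-,* Overview
-,* Problem We're Solving
-,* Exit Criteria
-,* When to Use This Session
-,* Approach: How We Work Together
-,* Principles to Follow
-,* Living Document
-#+end_src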
-
-*** Important Notes
-- Weave concrete examples into sections (don't separate them)
-- Use examples from actual sessions when available
-- Make tasks actionable, not vague
-- Include decision frameworks for common situations
-- Note that this is a living document
-
-** Phase 5: Update Project State
-
-Update =NOTES.org=:
-1. Add new session type to "Available Session Types" section
-2. Include brief description and reference to file
-3. Note creation date
-
-Example entry:
-#+begin_src org
-,** inbox-zero
-File: =docs/sessions/inbox-zero.org=
-
-Workflow for processing inbox to zero:
-1. [Brief workflow summary]
-2. [Key steps]
-
-Created: 2025-11-01
-#+end_src
-
-** Phase 6: Validate by Execution
-
-*Critical step:* Use the session soon after creating it.
-
-- Schedule the session type for immediate use
-- Follow the documented workflow
-- Note what works well
-- Identify gaps or unclear areas
-- Update the session document with learnings
-
-*This validates the session definition and ensures it's practical, not theoretical.*
-
-* Principles to Follow
-
-These principles guide us while creating new sessions:
-
-** Collaboration Through Discussion
-- Be proactive about collaboration
-- Suggest everything on your mind
-- Ask all relevant questions
-- Push back when something seems wrong, inconsistent, or unclear
-- Misunderstandings are learning opportunities
-
-** Reviewing the Whole as Well as the Pieces
-- We may get into the weeds while identifying individual steps
-- Stop to look at the whole thing at the end
-- Ask the big questions: Does this actually solve the problem?
-- Verify all pieces connect logically
-
-** Concrete Over Abstract
-- Use examples liberally within explanations
-- Weave concrete examples into Q&A answers
-- Don't just describe abstractly
-- "When nil input crashes, ask..." is better than "handle edge cases"
-
-** Actionable Tasks Over Vague Direction
-- Steps should be clear enough to know what to do next
-- "Ask: how do you see us working together?" is actionable
-- "Figure out the approach" is too vague
-- Test: Could someone execute this without further explanation?
-
-** Validate Early
-- "Use it soon afterward" catches problems early
-- Don't let session definitions sit unused and untested
-- Real execution reveals gaps that theory misses
-- Update immediately based on first use
-
-** Decision Frameworks Over Rigid Steps
-- Sessions are frameworks (principles + flexibility), not recipes
-- Include principles that help case-by-case decisions
-- "When X happens, ask Y" is a decision framework
-- "Always do X" is too rigid for most sessions
-
-** Question Assumptions
-- If something doesn't make sense, speak up
-- If a step seems to skip something, point it out
-- Better to question during creation than discover gaps during execution
-- No assumption is too basic to verify
-
-* Living Document
-
-This is a living document. As we create new sessions and learn what works (and what doesn't), we update this file with:
-
-- New insights about session creation
-- Improvements to the Q&A process
-- Better examples
-- Additional principles discovered
-- Refinements to the structure
-
-Every time we create a session, we have an opportunity to improve this meta-process.
-
-** Updates and Learnings
-
-*** 2025-11-01: Save Q&A answers incrementally
-*Learning:* During emacs-inbox-zero session creation, we discovered that Q&A discussions can be lengthy and demand deep thinking from Craig. A terminal crash or process quit can lose substantial work.
-
-*Improvement:* Added guidance in Phase 1 to create a draft file and save Q&A answers after each question. This protects against data loss and allows resuming interrupted sessions.
-
-*Impact:* Reduces risk of losing 10-15 minutes of thinking work if session is interrupted.
-
-*** 2025-11-01: Validation by execution works!
-*Learning:* Immediately after creating the emacs-inbox-zero session, we validated it by actually running the workflow. This caught unclear areas and validated that the 10-minute target was realistic.
-
-*Key insight from validation:* When Craig provides useful context during workflows (impact estimates, theories, examples), that context should be captured in task descriptions. This wasn't obvious during session creation but became clear during execution.
-
-*Impact:* Validation catches what theory misses. Always use Phase 6 (validate by execution) soon after creating a session.
-
-* Example: Creating the "Create-Session" Session
-
-This very document was created using the process it describes (recursive!).
-
-** The Q&A
-- *Problem:* Time waste, errors, missed learning from informal processes
-- *Exit criteria:* Logical arrangement, mutual understanding, agreement on effectiveness, actionable tasks
-- *Approach:* Four-question Q&A, assess completeness, name it, document it, update NOTES.org, validate by use
-- *Principles:* Collaboration through discussion, review the whole, concrete over abstract, actionable tasks, validate early, decision frameworks, question assumptions
-
-** The Result
-We identified what was needed, collaborated on answers, and captured it in this document. Then we immediately used it to create the next session (validation).
-
-* Conclusion
-
-Creating session workflows is a meta-skill that improves all our collaboration. By formalizing how we work together, we:
-
-- Build shared vocabulary
-- Eliminate repeated explanations
-- Capture lessons learned
-- Enable continuous improvement
-- Make our partnership more efficient
-
-Each new session type we create adds to our collaborative toolkit and deepens our ability to work together effectively.
-
-*Remember:* Sessions are frameworks, not rigid recipes. They provide structure while allowing flexibility for case-by-case decisions. The goal is effectiveness, not perfection.
diff --git a/docs/sessions/emacs-inbox-zero.org b/docs/sessions/emacs-inbox-zero.org
deleted file mode 100644
index 4e046eba..00000000
--- a/docs/sessions/emacs-inbox-zero.org
+++ /dev/null
@@ -1,338 +0,0 @@
-#+TITLE: Emacs Inbox Zero Session
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This session processes the Emacs Config Inbox to zero by filtering tasks through the V2MOM framework. Items either move to active V2MOM methods, get moved to someday-maybe, or get deleted. This weekly discipline prevents backlog buildup and ensures only strategic work gets done.
-
-* Problem We're Solving
-
-Emacs is Craig's most-used software by a significant margin. It's the platform for email, calendar, task management, note-taking, programming, reading, music, podcasts, and more. When Emacs breaks, everything stops—including critical life tasks like family emails, doctor appointments, and bills.
-
-The V2MOM (Vision, Values, Methods, Obstacles, Metrics) framework provides strategic balance between fixing/improving Emacs versus using it for real work. But without weekly maintenance, the system collapses under backlog.
-
-** The Specific Problem
-
-Features and bugs get logged in the "Emacs Config Inbox" heading of =~/.emacs.d/todo.org=. If not sorted weekly:
-- Items pile up and become unmanageable
-- Unclear what's actually important
-- Method 1 ("Make Using Emacs Frictionless") doesn't progress
-- Two key metrics break:
- 1. *Active todo count:* Should be < 20 items
- 2. *Weekly triage consistency:* Must happen at least once per week by Sunday, no longer than 7 days between sessions
-
-** What Happens Without This Session
-
-Without weekly inbox zero:
-- Backlog grows until overwhelming
-- Can't distinguish signal from noise
-- V2MOM becomes theoretical instead of practical
-- Config maintenance competes with real work instead of enabling it
-- Discipline muscle (Method 6: ruthless prioritization) atrophies
-
-*Impact:* The entire V2MOM system fails. Config stays broken longer. Real work gets blocked more often.
-
-* Exit Criteria
-
-The session is complete when:
-- Zero todo items remain under the "* Emacs Config Inbox" heading in =~/.emacs.d/todo.org=
-- All items have been routed to: V2MOM methods, someday-maybe, or deleted
-- Can verify by checking the org heading (should be empty or show "0/0" in agenda)
-
-*IMPORTANT:* We are ONLY processing items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 headings have already been triaged and should NOT be touched during this session.
-
-*Measurable validation:*
-- Open =todo.org= and navigate to "* Emacs Config Inbox" heading
-- Confirm no child tasks exist under this heading only
-- Bonus: Check that active todo count is < 20 items across entire V2MOM
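-
-The verification can also be scripted. A hedged sketch (=cj/inbox-item-count= is a hypothetical helper, not part of the existing config; it assumes the heading name shown above):
-
-#+begin_src elisp
-(defun cj/inbox-item-count ()
-  "Count entries under the \"Emacs Config Inbox\" heading in todo.org."
-  (with-current-buffer (find-file-noselect "~/.emacs.d/todo.org")
-    (org-with-wide-buffer
-     (goto-char (point-min))
-     (if (re-search-forward "^\\* Emacs Config Inbox" nil t)
-         (let ((count 0))
-           ;; Scope 'tree visits the subtree, including the heading itself.
-           (org-map-entries (lambda () (setq count (1+ count))) nil 'tree)
-           (1- count))
-       0))))
-#+end_src
-
-The session is done when =(cj/inbox-item-count)= returns 0.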
-
-* When to Use This Session
-
-Trigger this session when:
-- It's Sunday and you haven't triaged this week
-- 7 days have passed since last triage (hard deadline)
-- "Emacs Config Inbox" has accumulated items
-- You notice yourself avoiding looking at the inbox (sign it's becoming overwhelming)
-- Before starting any new Emacs config work (ensures highest-priority work happens first)
-
-*Recommended cadence:* Every Sunday, 10 minutes, no exceptions.
-
-* Approach: How We Work Together
-
-** Phase 1: Sort by Priority
-
-First, ensure todo items are sorted by priority in =todo.org=:
-- A (highest priority)
-- B
-- C
-- No priority
-- D (lowest priority)
-
-This ensures we always look at the most important items first. If time runs short, at least the high-priority items will have been processed.
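-
-In org-mode this can be a single command: with point on the inbox heading, =org-sort-entries= (interactively =C-c ^=, then =p=) sorts the child entries by priority cookie. A sketch:
-
-#+begin_src elisp
-;; With point on the "* Emacs Config Inbox" heading:
-(org-sort-entries nil ?p)  ; sort child entries by priority
-#+end_src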
-
-** Phase 2: Claude Rereads V2MOM
-
-Before processing any items, Claude rereads [[file:../EMACS-CONFIG-V2MOM.org][EMACS-CONFIG-V2MOM.org]] to have it fresh in mind. This ensures filtering decisions are grounded in the strategic framework.
-
-*What Claude should pay attention to:*
-- The 6 Methods and their concrete actions
-- The Values (Intuitive, Fast, Simple) and what they mean
-- The Metrics (especially active todo count < 20)
-- Method 6 discipline practices (ruthless prioritization, weekly triage, ship-over-research)
-
-** Phase 3: Process Each Item (in Priority Order)
-
-*IMPORTANT:* Process ONLY items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 have been triaged and should remain where they are.
-
-For each item under "* Emacs Config Inbox", work through these questions:
-
-*** Question 1: Does this task need to be done at all?
-
-*Consider:*
-- Has something changed?
-- Was this a mistake?
-- Do I disagree with this idea now?
-- Is this actually important?
-
-*If NO:* **DELETE** the item immediately. Don't move it anywhere. Kill it.
-
-*Examples of deletions:*
-- "Add Signal client to Emacs" - Cool idea, not important
-- "Try minimap mode" - Interesting, doesn't serve vision
-- "Research 5 different completion frameworks" - Already have Vertico/Corfu, stop researching
-
-*** Question 2: Is this task related to the Emacs Config V2MOM?
-
-*If NO:* **Move to** =docs/someday-maybe.org=
-
-These are tasks that might be good ideas but don't serve the current strategic focus. They're not deleted (might revisit later) but they're out of active consideration.
-
-*Examples:*
-- LaTeX improvements (no concrete need yet)
-- Elfeed dashboard redesign (unclear if actually used)
-- New theme experiments (side project competing with maintenance)
-
-*** Question 3: Which V2MOM method does this relate to?
-
-*If YES (related to V2MOM):*
-
-Claude suggests which method(s) this might relate to:
-- Method 1: Make Using Emacs Frictionless (performance, bug fixes, missing features)
-- Method 2: Stop Problems Before They Appear (package upgrades, deprecation removal)
-- Method 3: Make Fixing Emacs Frictionless (tooling, testing, profiling)
-- Method 4: Contribute to the Emacs Ecosystem (package maintenance)
-- Method 5: Be Kind To Your Future Self (new capabilities)
-- Method 6: Develop Disciplined Engineering Practices (meta-practices)
-
-*This is a conversation.* If the relationship is only tangential:
-- **Claude should push back** - "This seems tangential. Adding it would dilute focus and delay V2MOM completion. Are you sure this serves the vision?"
-- Help Craig realize it doesn't fit through questions
-- The more we add, the longer V2MOM takes, the harder it is to complete
-
-*If item relates to multiple methods:*
-Pick the **highest priority method** (Method 1 > Method 2 > Method 3 > etc.)
-
-*IMPORTANT: Capture useful context!*
-During discussion, Craig may provide:
-- Impact estimates ("15-20 seconds × 12 times/day")
-- Theories about root causes
-- Context about why this matters
-- Examples of when the problem occurs
-
-**When moving items to methods, add this context to the task description.** This preserves valuable information for later execution and helps prioritize work accurately.
-
-*Then:* Move the item to the appropriate method section in the V2MOM or active todo list with enriched context.
-
-** Phase 4: Verify and Celebrate
-
-Once all items are processed:
-1. Verify "Emacs Config Inbox" heading is empty
-2. Check that active todo count is < 20 items
-3. Note the date of this triage session
-4. Acknowledge: You've practiced ruthless prioritization (Method 6 skill development)
-
-** Decision Framework: When Uncertain
-
-If you're uncertain whether an item fits V2MOM:
-
-1. **Ask: Does this directly serve the Vision?** (Work at speed of thought, stable config, comprehensive workflows)
-2. **Ask: Does this align with Values?** (Intuitive, Fast, Simple)
-3. **Ask: Is this in the Methods already?** (If not explicitly listed, probably shouldn't add)
-4. **Ask: What's the opportunity cost?** (Every new item delays everything else)
-
-*When in doubt:* Move to someday-maybe. You can always pull it back later if it proves critical. Better to be conservative than to dilute focus.
-
-* Principles to Follow
-
-** Claude's Role: "You're here to help keep me honest"
-
-Craig is developing discipline (Method 6: ruthless prioritization). Not making progress = not getting better.
-
-*Claude's responsibilities:*
-- If task clearly fits V2MOM → Confirm and move forward quickly
-- If task is unclear/tangential → **Ask questions** to help Craig realize it doesn't fit or won't lead to V2MOM success
-- Enable ruthless prioritization by helping Craig say "no"
-- Don't let good ideas distract from great goals
-
-*Example questions Claude might ask:*
-- "This is interesting, but which specific metric does it improve?"
-- "We already have 3 items in Method 1 addressing performance. Does this add something different?"
-- "This would be fun to build, but does it make using Emacs more frictionless?"
-- "If you had to choose between this and fixing org-agenda (30s → 5s), which serves the vision better?"
-
-** Time Efficiency: 10 Minutes Active Work
-
-Don't take too long on any single item. Splitting philosophical hairs = procrastination.
-
-*Target:* **10 minutes active work time** (not clock time - interruptions expected)
-
-*If spending > 1 minute on a single item:*
-- Decision is unclear → Move to someday-maybe (safe default)
-- Come back to it later if it proves critical
-- Keep moving
-
-*Why this matters:*
-- Weekly consistency requires low friction
-- Perfect categorization doesn't matter as much as consistent practice
-- Getting through all items > perfectly routing each item
-
-** Ruthless Prioritization Over Completeness
-
-The goal is not to do everything in the inbox. The goal is to identify and focus on what matters most.
-
-*Better to:*
-- Delete 50% of items and ship the other 50%
-- Than keep 100% and ship 0%
-
-*Remember:*
-- Every item kept is opportunity cost
-- V2MOM already has plenty of work
-- "There will always be cool ideas out there to implement and they will always be a web search away" (Craig's words)
-
-** Bias Toward Action
-
-When processing items that ARE aligned with V2MOM:
-- Move them to the appropriate method quickly
-- Don't overthink the categorization
-- Getting it 80% right is better than spending 5 minutes getting it 100% right
-- You can always recategorize later during regular triage
-
-* Example Session Walkthrough
-
-** Setup
-- Open =~/.emacs.d/todo.org=
-- Navigate to "Emacs Config Inbox" heading
-- Verify items are sorted by priority (A → B → C → none → D)
-- Claude rereads =EMACS-CONFIG-V2MOM.org=
-
-** Processing Example Items
-
-*** Example 1: [#A] Fix org-agenda slowness (30+ seconds)
-
-*Q1: Does this need to be done?* YES - Daily pain point blocking productivity
-
-*Q2: Related to V2MOM?* YES - Method 1 explicitly lists this
-
-*Q3: Which method?* Method 1: Make Using Emacs Frictionless
-
-*Action:* Move to Method 1 active tasks (or confirm already there)
-
-*Time:* 15 seconds
-
-*** Example 2: [#B] Add Signal client to Emacs
-
-*Q1: Does this need to be done?* Let's think...
-
-Claude: "What problem does this solve? Is messaging in Emacs part of the Vision?"
-
-Craig: "Not really, I already use Signal on my phone fine."
-
-*Action:* **DELETE** - Doesn't serve vision, already have working solution
-
-*Time:* 30 seconds
-
-*** Example 3: [#C] Try out minimap mode for code navigation
-
-*Q1: Does this need to be done?* Interesting idea, but not important
-
-*Action:* **DELETE** or move to someday-maybe - Interesting, not important
-
-*Time:* 10 seconds
-
-*** Example 4: [#B] Implement transcription workflow
-
-*Q1: Does this need to be done?* YES - Want to transcribe recordings for notes
-
-*Q2: Related to V2MOM?* Maybe... seems like new feature?
-
-Claude: "This seems like Method 5: Be Kind To Your Future Self - new capability you'll use repeatedly. Complete code already exists in old todo.org. But we're still working through Method 1 (frictionless) and Method 2 (stability). Should this wait, or is transcription critical?"
-
-Craig: "Actually yes, I record meetings and need transcripts. This is important."
-
-*Q3: Which method?* Method 5: Be Kind To Your Future Self
-
-*Action:* Move to Method 5 (but note: prioritize after Methods 1-3)
-
-*Time:* 45 seconds (good conversation, worth the time)
-
-** Result
-- 4 items processed in ~2 minutes
-- 1 moved to Method 1 (already there)
-- 1 deleted
-- 1 deleted or moved to someday-maybe
-- 1 moved to Method 5
-- Inbox is clearer, focus is sharper
-
-* Conclusion
-
-Emacs inbox zero is not about getting through email or org-capture. It's about **strategic filtering of config maintenance work**. By processing the inbox weekly, you:
-
-- Keep maintenance load manageable (< 20 active items)
-- Ensure only V2MOM-aligned work happens
-- Practice ruthless prioritization (Method 6 skill)
-- Prevent backlog from crushing future productivity
-- Build the discipline that makes all other methods sustainable
-
-**The session takes 10 minutes. Not doing it costs days of distracted, unfocused work on things that don't matter.**
-
-*Remember:* Inbox zero is not about having zero things to do. It's about knowing exactly what you're NOT doing, so you can focus completely on what matters most.
-
-* Living Document
-
-This is a living document. After each emacs-inbox-zero session, consider:
-- Did the workflow make sense?
-- Were any steps unclear or unnecessary?
-- Did any new situations arise that need decision frameworks?
-- Did the 10-minute target work, or should it adjust?
-
-Update this document with learnings to make future sessions smoother.
-
-** Updates and Learnings
-
-*** 2025-11-01: First validation session - Process works!
-
-*Session results:*
-- 5 items processed in ~10 minutes (target met)
-- 1 deleted (duplicate), 2 moved to Method 1, 2 moved to someday-maybe
-- Inbox cleared to zero
-- Priority sorting worked well
-- Three-question filter was effective
-- Caught duplicate task and perfectionism pattern in real-time
-
-*Key learning: Capture useful context during triage*
-When Craig provides impact estimates ("15-20 seconds × 12 times/day"), theories, or context during discussion, **Claude should add this information to the task description** when moving items to methods. This preserves valuable context for execution and helps with accurate prioritization.
-
-Example: "Optimize org-capture target building" was enriched with "15-20 seconds every time capturing a task (12+ times/day). Major daily bottleneck - minutes lost waiting, plus context switching cost."
-
-*Impact:* Better task descriptions → better prioritization → better execution.
diff --git a/docs/sessions/refactor.org b/docs/sessions/refactor.org
deleted file mode 100644
index 11ff0a91..00000000
--- a/docs/sessions/refactor.org
+++ /dev/null
@@ -1,593 +0,0 @@
-#+TITLE: Test-Driven Quality Engineering Session: music-config.el
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes a comprehensive test-driven quality engineering session for =music-config.el=, an EMMS music player configuration module. The session demonstrates systematic testing practices, refactoring for testability, bug discovery through tests, and decision-making processes when tests fail.
-
-* Session Goals
-
-1. Add comprehensive unit test coverage for testable functions in =music-config.el=
-2. Discover and fix bugs through systematic testing
-3. Follow quality engineering principles from =ai-prompts/quality-engineer.org=
-4. Demonstrate refactoring patterns for testability
-5. Document the decision-making process for test vs production code issues
-
-* Phase 1: Feature Addition with Testability in Mind
-
-** The Feature Request
-
-Add functionality to append a track from the EMMS playlist to an existing M3U file by pressing ~A~ on the track.
-
-Requirements:
-- Show completing-read with available M3U playlists
-- Allow cancellation (C-g and explicit "(Cancel)" option)
-- Append track's absolute path to selected M3U
-- Provide clear success/failure feedback
-
-** Refactoring for Testability
-
-Following the "Interactive vs Non-Interactive Function Pattern" from =quality-engineer.org=:
-
-*Problem:* Directly implementing as an interactive function would require:
-- Mocking =completing-read=
-- Mocking =emms-playlist-track-at=
-- Testing Emacs UI functionality, not our business logic
-
-*Solution:* Split into two functions:
-
-1. *Helper Function* (=cj/music--append-track-to-m3u-file=):
- - Pure, deterministic
- - Takes explicit parameters: =(track-path m3u-file)=
- - No user interaction
- - Returns =t= on success, signals errors naturally
- - 100% testable with ERT, no mocking needed
-
-2. *Interactive Wrapper* (=cj/music-append-track-to-playlist=):
- - Thin layer handling only user interaction
- - Gets track at point
- - Shows completing-read
- - Catches errors and displays messages
- - Delegates all business logic to helper
- - No tests needed (just testing Emacs)
-
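-A simplified sketch of the split (the real definitions live in =music-config.el= and differ in detail; the playlist directory and messages here are assumptions for illustration):
-
-#+begin_src elisp
-;; Helper: pure logic, no user interaction, fully testable.
-(defun cj/music--append-track-to-m3u-file (track-path m3u-file)
-  "Append TRACK-PATH as a new line at the end of M3U-FILE.
-Signal an error if M3U-FILE does not exist.  Return t on success."
-  (unless (file-exists-p m3u-file)
-    (error "M3U file does not exist: %s" m3u-file))
-  (with-temp-buffer
-    (insert-file-contents m3u-file)
-    (goto-char (point-max))
-    (unless (or (bobp) (bolp))      ; file lacked a trailing newline
-      (insert "\n"))
-    (insert track-path "\n")
-    (write-region (point-min) (point-max) m3u-file))
-  t)
-
-;; Wrapper: thin interactive layer, delegates all logic to the helper.
-(defun cj/music-append-track-to-playlist ()
-  "Prompt for an M3U playlist and append the track at point to it."
-  (interactive)
-  (let* ((track (emms-playlist-track-at))
-         (path  (emms-track-name track))
-         (m3u   (completing-read
-                 "Append to playlist: "
-                 (directory-files "~/music/playlists" t "\\.m3u\\'"))))
-    (condition-case err
-        (progn
-          (cj/music--append-track-to-m3u-file path m3u)
-          (message "Appended %s" (file-name-nondirectory path)))
-      (error (message "Append failed: %s" (error-message-string err))))))
-#+end_src
-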
-** Benefits of This Pattern
-
-From =quality-engineer.org=:
-#+begin_quote
-When writing functions that combine business logic with user interaction:
-- Split into internal implementation and interactive wrapper
-- Internal function (prefix with =--=): Pure logic, takes all parameters explicitly
-- Dramatically simpler testing (no interactive mocking)
-- Code reusable programmatically without prompts
-- Clear separation of concerns (logic vs UI)
-#+end_quote
-
-This pattern enabled:
-- Zero mocking in tests
-- Fast, deterministic tests
-- Easy reasoning about correctness
-- Reusable helper function
-
-* Phase 2: Writing the First Test
-
-** Test File: =test-music-config--append-track-to-m3u-file.el=
-
-Following the naming convention from =quality-engineer.org=:
-- Pattern: =test-<module>-<function>.el=
-- One test file per function for easy discovery when tests fail
-- User sees failure → immediately knows which file to open
-
-** Test Organization
-
-Following the three-category structure:
-
-*** Normal Cases (4 tests)
-- Append to empty file
-- Append to file with trailing newline
-- Append to file without trailing newline (adds leading newline)
-- Multiple appends (allows duplicates)
-
-*** Boundary Cases (4 tests)
-- Very long paths (~500 chars)
-- Unicode characters (中文, emoji)
-- Spaces and special characters
-- M3U with comments/metadata
-
-*** Error Cases (3 tests)
-- Nonexistent file
-- Read-only file
-- Directory instead of file
-
-** Writing Tests with Zero Mocking
-
-Key principle: "Don't mock what you're testing" (from =quality-engineer.org=)
-
-Example test:
-#+begin_src elisp
-(ert-deftest test-music-config--append-track-to-m3u-file-normal-empty-file-appends-track ()
- "Append to brand new empty M3U file."
- (test-music-config--append-track-to-m3u-file-setup)
- (unwind-protect
- (let* ((m3u-file (cj/create-temp-test-file "test-playlist-"))
- (track-path "/home/user/music/artist/song.mp3"))
- (cj/music--append-track-to-m3u-file track-path m3u-file)
- (with-temp-buffer
- (insert-file-contents m3u-file)
- (should (string= (buffer-string) (concat track-path "\n")))))
- (test-music-config--append-track-to-m3u-file-teardown)))
-#+end_src
-
-Notice:
-- No mocks
-- Real file I/O using =testutil-general.el= helpers
-- Tests actual function behavior
-- Clean setup/teardown
-
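-The setup/teardown pair referenced above follows the usual ERT fixture shape. A minimal sketch (the directory naming and helper style are assumptions, not the actual =testutil-general.el= code):
-
-#+begin_src elisp
-;; Illustrative only: variable name and temp-dir strategy are assumed.
-(defvar test-music-config--append-tmpdir nil)
-
-(defun test-music-config--append-track-to-m3u-file-setup ()
-  ;; Third arg t makes a fresh temporary directory.
-  (setq test-music-config--append-tmpdir
-        (make-temp-file "music-config-tests-" t)))
-
-(defun test-music-config--append-track-to-m3u-file-teardown ()
-  (when (and test-music-config--append-tmpdir
-             (file-directory-p test-music-config--append-tmpdir))
-    (delete-directory test-music-config--append-tmpdir t)))
-#+end_src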
-** Result
-
-All 11 tests passed on first run. The pure, deterministic helper function worked correctly.
-
-* Phase 3: Systematic Test Coverage Analysis
-
-** Identifying Testable Functions
-
-Reviewed all functions in =music-config.el= and categorized:
-
-*** Easy to Test (Pure/Deterministic)
-- =cj/music--valid-file-p= - Extension validation
-- =cj/music--valid-directory-p= - Directory validation
-- =cj/music--safe-filename= - String sanitization
-- =cj/music--m3u-file-tracks= - M3U file parsing
-- =cj/music--get-m3u-basenames= - Basename extraction
-
-*** Medium Complexity (Need File I/O)
-- =cj/music--collect-entries-recursive= - Recursive directory traversal
-- =cj/music--get-m3u-files= - File discovery
-- =cj/music--completion-table= - Completion table generation
-
-*** Hard to Test (EMMS Buffer Dependencies)
-- =cj/music--ensure-playlist-buffer= - EMMS buffer creation
-- =cj/music--playlist-tracks= - EMMS buffer reading
-- =cj/music--playlist-modified-p= - EMMS buffer state
-- =cj/music--assert-valid-playlist-file= - Buffer-local state
-
-*Decision:* Test easy and medium complexity functions. Skip EMMS-dependent functions (would require extensive mocking/setup, diminishing returns).
-
-** File Organization Principle
-
-From =quality-engineer.org=:
-#+begin_quote
-*Unit Tests*: One file per method
-- Naming: =test-<filename>-<methodname>.el=
-- Example: =test-org-gcal--safe-substring.el=
-#+end_quote
-
-*Rationale:* When a test fails in CI:
-1. Developer sees: =test-music-config--get-m3u-files-normal-multiple-files-returns-list FAILED=
-2. Immediately knows: Look for =test-music-config--get-m3u-files.el=
-3. Opens file and fixes issue - *fast cognitive path*
-
-With a combined test file:
-1. Test fails: =test-music-config--get-m3u-files-normal-multiple-files-returns-list FAILED=
-2. Which file? =test-music-config--m3u-helpers.el=? =test-music-config--combined.el=?
-3. Developer wastes time searching - *slower, frustrating*
-
-*The 1:1 mapping is a usability feature for developers under pressure.*
-
-* Phase 4: Testing Function by Function
-
-** Function 1: =cj/music--valid-file-p=
-
-*** Test Categories
-
-*Normal Cases:*
-- Valid extensions (mp3, flac, etc.)
-- Case-insensitive matching (MP3, Mp3)
-
-*Boundary Cases:*
-- Dots in path (only last extension matters)
-- Multiple extensions (uses rightmost)
-- No extension
-- Empty string
-
-*Error Cases:*
-- Nil input
-- Non-music extensions
-
-*** First Run: 14/15 Passed, 1 FAILED
-
-*Failure:*
-#+begin_src
-test-music-config--valid-file-p-error-nil-input-returns-nil
-Expected: Returns nil gracefully
-Actual: (wrong-type-argument stringp nil) - CRASHED
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Read the test expectation: "nil input returns nil gracefully"
-2. Read the production code:
- #+begin_src elisp
- (defun cj/music--valid-file-p (file)
- (when-let ((ext (file-name-extension file))) ; ← Crashes here
- (member (downcase ext) cj/music-file-extensions)))
- #+end_src
-3. Identify issue: =file-name-extension= expects string, crashes on nil
-4. Consider context: This is defensive validation code, called in various contexts
-
-*Decision: Fix production code*
-
-*Rationale:*
-- Function should be defensive (validation code)
-- Returning nil for invalid input is more robust than crashing
-- Common pattern in Emacs Lisp validation functions
-
-*Fix:*
-#+begin_src elisp
-(defun cj/music--valid-file-p (file)
- (when (and file (stringp file)) ; ← Guard added
- (when-let ((ext (file-name-extension file)))
- (member (downcase ext) cj/music-file-extensions))))
-#+end_src
-
-Result: All 15 tests passed.
-
-** Function 2: =cj/music--valid-directory-p=
-
-*** First Run: 11/13 Passed, 2 FAILED
-
-*Failures:*
-1. Nil input crashed (same pattern as =valid-file-p=)
-2. Empty string returned non-nil (treated as current directory)
-
-*Fix:*
-#+begin_src elisp
-(defun cj/music--valid-directory-p (dir)
- (when (and dir (stringp dir) (not (string-empty-p dir))) ; ← Guards added
- (and (file-directory-p dir)
- (not (string-prefix-p "." (file-name-nondirectory
- (directory-file-name dir)))))))
-#+end_src
-
-Result: All 13 tests passed.
-
-** Function 3: =cj/music--safe-filename=
-
-*** First Run: 12/13 Passed, 1 FAILED
-
-*Failure:*
-#+begin_src
-test-music-config--safe-filename-boundary-special-chars-replaced
-Expected: "playlist__________" (10 underscores)
-Actual: "playlist_________" (9 underscores)
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Count special chars in input: =@#$%^&*()= = 9 characters
-2. Test expected 10, but input only has 9
-3. Production code is correct
-
-*Decision: Fix test code*
-
-*The bug was in the test expectation, not the implementation.*
-
-Result: All 13 tests passed.
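-For context, a function of this shape replaces each disallowed character with an underscore. This is a hypothetical sketch, not the actual implementation; the character class is an assumption:
-
-#+begin_src elisp
-;; Hypothetical sketch of the sanitizer under test.
-(defun cj/music--safe-filename (name)
-  "Replace characters unsafe for filenames in NAME with underscores."
-  (replace-regexp-in-string "[^[:alnum:]._-]" "_" name))
-
-(cj/music--safe-filename "playlist@#$%^&*()")
-;; => "playlist_________"  (nine specials in, nine underscores out)
-#+end_src
-
-Counting the replaced characters directly, as in the analysis above, is what exposed the off-by-one in the test expectation.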
-
-** Function 4: =cj/music--m3u-file-tracks= (M3U Parser)
-
-This is where we found a *significant bug* through testing!
-
-*** Test Categories
-
-*Normal Cases:*
-- Absolute paths
-- Relative paths (expanded to M3U directory)
-- HTTP/HTTPS/MMS URLs preserved
-- Mixed paths and URLs
-
-*Boundary Cases:*
-- Empty lines ignored (important for playlist robustness!)
-- Whitespace-only lines ignored
-- Comments ignored (#EXTM3U, #EXTINF)
-- Leading/trailing whitespace trimmed
-- Order preserved
-
-*Error Cases:*
-- Nonexistent file
-- Nil input
-
-*** First Run: 11/15 Passed, 4 FAILED
-
-All 4 failures related to URL handling:
-
-*Failure Pattern:*
-#+begin_src
-Expected: "http://example.com/stream.mp3"
-Actual: "/home/cjennings/.temp-emacs-tests/http:/example.com/stream.mp3"
-#+end_src
-
-HTTP/HTTPS/MMS URLs were being treated as relative paths and mangled!
-
-*** Root Cause Analysis
-
-*Production code (line 110):*
-#+begin_src elisp
-(string-match-p "\`\(https?\|mms\)://" line)
-#+end_src
-
-*Problem:* Regex escaping is wrong!
-
-In the string literal ="\`"=:
-- The backslash-backtick becomes a *literal backtick character*
-- Not the regex anchor =\`= (start of string)
-
-The regex never matched, so URLs were treated as relative paths.
-
-*Correct version:*
-#+begin_src elisp
-(string-match-p "\\`\\(https?\\|mms\\)://" line)
-#+end_src
-
-Double backslashes for string literal escaping → results in regex =\`\(https?\|mms\)://=
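-The difference is easy to verify interactively, e.g. with =M-x eval-expression=:
-
-#+begin_src elisp
-;; Buggy: "\`" in a string literal is just a backtick character,
-;; so the regex looks for a literal backtick and never matches.
-(string-match-p "\`\(https?\|mms\)://" "http://example.com/x.mp3")
-;; => nil
-
-;; Fixed: "\\`" reaches the regex engine as the \` anchor.
-(string-match-p "\\`\\(https?\\|mms\\)://" "http://example.com/x.mp3")
-;; => 0
-#+end_src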
-
-*** Impact Assessment
-
-*This is a significant bug:*
-- Radio stations (HTTP streams) would be broken
-- Any M3U with URLs would fail
-- Data corruption: URLs transformed into nonsensical file paths
-- Function worked for local files, so bug went unnoticed
-- Users would see mysterious errors when loading playlists with streams
-
-*Tests caught a real production bug that could have caused user data corruption!*
-
-Result: All 15 tests passed after fix.
-
-* Phase 5: Continuing Through the Test Suite
-
-** Functions Tested Successfully
-
-5. =cj/music--get-m3u-files= - 7 tests
- - Learned: Directory listing order is filesystem-dependent
- - Solution: Sort results before comparing in tests
-
-6. =cj/music--get-m3u-basenames= - 6 tests
- - Kept as separate file (not combined with get-m3u-files)
- - Reason: Usability when tests fail
-
-7. =cj/music--collect-entries-recursive= - 12 tests
- - Medium complexity: Required creating test directory trees
- - Used =testutil-general.el= helpers for setup/teardown
- - All tests passed first time (well-factored function)
-
-8. =cj/music--completion-table= - 12 tests
- - Tested higher-order function (returns lambda)
- - Initially misunderstood completion protocol behavior
- - Fixed test expectations to match actual Emacs behavior
-
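-Testing a returned completion lambda comes down to driving it through the standard completion API. A minimal sketch (not the module's actual table; the candidate list is invented):
-
-#+begin_src elisp
-;; A programmed completion table and the standard way to exercise it.
-(let ((table (lambda (string pred action)
-               (complete-with-action action '("ambient" "jazz" "rock")
-                                     string pred))))
-  (all-completions "ja" table))
-;; => ("jazz")
-#+end_src
-
-Misreading how =try-completion= and =all-completions= dispatch through the =action= argument was the source of the initial wrong expectations.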
-* Key Principles Applied
-
-** 1. Refactor for Testability BEFORE Writing Tests
-
-The Interactive vs Non-Interactive pattern from =quality-engineer.org= made testing trivial:
-- No mocking required
-- Fast, deterministic tests
-- Clear separation of concerns
-
-** 2. Systematic Test Organization
-
-Every test file followed the same structure:
-- Normal Cases
-- Boundary Cases
-- Error Cases
-
-This makes it easy to:
-- Identify coverage gaps
-- Add new tests
-- Understand what's being tested
-
-** 3. Test Naming Convention
-
-Pattern: =test-<module>-<function>-<category>-<scenario>-<expected-result>=
-
-Examples:
-- =test-music-config--valid-file-p-normal-mp3-extension-returns-true=
-- =test-music-config--m3u-file-tracks-boundary-empty-lines-ignored=
-- =test-music-config--safe-filename-error-nil-input-signals-error=
-
-Benefits:
-- Self-documenting
-- Easy to understand what failed
-- Searchable/grepable
-- Clear category organization
-
-** 4. Zero Mocking for Pure Functions
-
-From =quality-engineer.org=:
-#+begin_quote
-DON'T MOCK WHAT YOU'RE TESTING
-- Only mock external side-effects and dependencies, not the domain logic itself
-- If mocking removes the actual work the function performs, you're testing the mock
-- Use real data structures that the function is designed to operate on
-- Rule of thumb: If the function body could be =(error "not implemented")= and tests still pass, you've over-mocked
-#+end_quote
-
-Our tests used:
-- Real file I/O
-- Real strings
-- Real data structures
-- Actual function behavior
-
-Result: Tests caught real bugs, not mock configuration issues.
-
-** 5. Test vs Production Code Bug Decision Framework
-
-When a test fails, ask:
-
-1. *What is the test expecting?*
- - Read the test name and assertions
- - Understand the intended behavior
-
-2. *What is the production code doing?*
- - Read the implementation
- - Trace through the logic
-
-3. *Which is correct?*
- - Is the test expectation reasonable?
- - Is the production behavior defensive/robust?
- - What is the usage context?
-
-4. *Consider the impact:*
- - Defensive code: Fix production to handle edge cases
- - Wrong expectation: Fix test
- - Unclear spec: Ask user for clarification
-
-Examples from our session:
-- *Nil input crashes* → Fix production (defensive coding)
-- *Empty string treated as valid* → Fix production (defensive coding)
-- *Wrong count in test* → Fix test (test bug)
-- *Regex escaping wrong* → Fix production (real bug!)
-
-** 6. Fast Feedback Loop
-
-Pattern: "Write tests, run them all, report errors, and see where we are!"
-
-This became a mantra during the session:
-1. Write comprehensive tests for one function
-2. Run immediately
-3. Analyze failures
-4. Fix bugs (test or production)
-5. Verify all tests pass
-6. Move to next function
-
-Benefits:
-- Caught bugs immediately
-- Small iteration cycles
-- Clear progress
-- High confidence in changes
-
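-The loop is fastest from the command line. A typical batch invocation looks like this (file names here are assumptions matching the session's naming convention):
-
-#+begin_src sh
-emacs -Q --batch \
-  -l testutil-general.el \
-  -l music-config.el \
-  -l test-music-config--valid-file-p.el \
-  -f ert-run-tests-batch-and-exit
-#+end_src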
-* Final Results
-
-** Test Coverage
-
-*9 functions tested, 104 tests total:*
-1. =cj/music--append-track-to-m3u-file= - 11 tests
-2. =cj/music--valid-file-p= - 15 tests
-3. =cj/music--valid-directory-p= - 13 tests
-4. =cj/music--safe-filename= - 13 tests
-5. =cj/music--m3u-file-tracks= - 15 tests
-6. =cj/music--get-m3u-files= - 7 tests
-7. =cj/music--get-m3u-basenames= - 6 tests
-8. =cj/music--collect-entries-recursive= - 12 tests
-9. =cj/music--completion-table= - 12 tests
-
-** Bugs Discovered and Fixed
-
-1. *=cj/music--valid-file-p=*
- - Issue: Crashed on nil input
- - Fix: Added nil/string guard
- - Impact: Prevents crashes in validation code
-
-2. *=cj/music--valid-directory-p=*
- - Issue: Crashed on nil, treated empty string as valid
- - Fix: Added guards for nil and empty string
- - Impact: More robust directory validation
-
-3. *=cj/music--m3u-file-tracks=* ⚠️ *SIGNIFICANT BUG*
- - Issue: URL regex escaping wrong - HTTP/HTTPS/MMS URLs mangled as relative paths
- - Fix: Corrected regex escaping: ="\`"= → ="\\`"=
- - Impact: Radio stations and streaming URLs now work correctly
- - *This bug would have corrupted user data and broken streaming playlists*
-
-** Code Quality Improvements
-
-- All testable helper functions now have comprehensive test coverage
-- More defensive error handling (nil guards)
-- Clear separation of concerns (pure helpers vs interactive wrappers)
-- Systematic boundary condition testing
-- Unicode and special character handling verified
-
-* Lessons Learned
-
-** 1. Tests as Bug Discovery Tools
-
-Tests aren't just for preventing regressions - they actively *discover existing bugs*:
-- The URL regex bug existed in production
-- Nil handling bugs would have manifested in edge cases
-- Tests made these issues visible immediately
-
-** 2. Refactoring Enables Testing
-
-The decision to split functions into pure helpers + interactive wrappers:
-- Made testing dramatically simpler
-- Enabled 100+ tests with zero mocking
-- Improved code reusability
-- Clarified function responsibilities
-
-** 3. Systematic Process Matters
-
-Following the same pattern for each function:
-- Reduced cognitive load
-- Made it easy to maintain consistency
-- Enabled quick iteration
-- Built confidence in coverage
-
-** 4. File Organization Aids Debugging
-
-One test file per function:
-- Fast discovery when tests fail
-- Clear ownership
-- Easy to maintain
-- Follows user's mental model
-
-** 5. Test Quality Equals Production Quality
-
-Our tests:
-- Used real I/O (not mocks)
-- Tested actual behavior
-- Covered edge cases systematically
-- Found real bugs
-
-This is only possible with well-factored, testable code.
-
-* Applying These Principles
-
-When adding tests to other modules:
-
-1. *Identify testable functions* - Look for pure helpers, file I/O, string manipulation
-2. *Refactor if needed* - Split interactive functions into pure helpers
-3. *Write systematically* - Normal, Boundary, Error categories
-4. *Run frequently* - Fast feedback loop
-5. *Analyze failures carefully* - Test bug vs production bug
-6. *Fix immediately* - Don't accumulate technical debt
-7. *Maintain organization* - One file per function, clear naming
-
-* Reference
-
-See =ai-prompts/quality-engineer.org= for comprehensive quality engineering guidelines, including:
-- Test organization and structure
-- Test naming conventions
-- Mocking and stubbing best practices
-- Interactive vs non-interactive function patterns
-- Integration testing guidelines
-- Test maintenance strategies
-
-Note: =quality-engineer.org= evolves as we learn more quality best practices. This document captures principles applied during this specific session.
-
-* Conclusion
-
-This session demonstrated how systematic testing combined with refactoring for testability can:
-- Discover real bugs before they reach users
-- Improve code quality and robustness
-- Build confidence in changes
-- Create maintainable test suites
-- Follow industry best practices
-
-The 104 tests and 3 bug fixes represent a significant quality improvement to =music-config.el=. The URL regex bug alone justified the entire testing effort: that bug could have corrupted user data and broken a major feature (streaming radio).
-
-*Testing is not just about preventing future bugs - it's about finding bugs that already exist.*