Diffstat (limited to 'docs/workflows')
-rw-r--r-- docs/workflows/create-v2mom.org     | 598
-rw-r--r-- docs/workflows/create-workflow.org  | 352
-rw-r--r-- docs/workflows/emacs-inbox-zero.org | 338
-rw-r--r-- docs/workflows/refactor.org         | 617
4 files changed, 0 insertions, 1905 deletions
diff --git a/docs/workflows/create-v2mom.org b/docs/workflows/create-v2mom.org
deleted file mode 100644
index d6a82c0e..00000000
--- a/docs/workflows/create-v2mom.org
+++ /dev/null
@@ -1,598 +0,0 @@
-#+TITLE: Creating a V2MOM Strategic Framework
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-05
-
-* Overview
-
-This session creates a V2MOM (Vision, Values, Methods, Obstacles, Metrics) strategic framework for any project or goal. V2MOM provides clarity for decision-making, ruthless prioritization, and measuring progress. It transforms vague intentions into concrete action plans.
-
-The framework originated at Salesforce and works for any domain: personal projects, business strategy, health goals, financial planning, software development, or life planning.
-
-* Problem We're Solving
-
-Without a strategic framework, projects suffer from:
-
-** Unclear Direction
-- "Get healthier" or "improve my finances" is too vague to act on
-- Every idea feels equally important
-- No principled way to say "no" to distractions
-- Difficult to know what to work on next
-
-** Priority Inflation
-- Everything feels urgent or important
-- Research and planning without execution
-- Hard to distinguish signal from noise
-- Active todo list grows beyond manageability
-
-** No Decision Framework
-- When faced with a choice between A and B, no principled way to decide
-- Debates about approach waste time
-- Second-guessing decisions after making them
-- Perfectionism masquerading as thoroughness
-
-** Unmeasurable Progress
-- Can't tell if work is actually making things better
-- No objective way to know when you're "done"
-- Metrics are either absent or vanity metrics
-- Difficult to celebrate wins or identify blockers
-
-*Impact:* Unfocused work, slow progress, frustration, and the nagging feeling that you're always working on the wrong thing.
-
-* Exit Criteria
-
-The V2MOM is complete when:
-
-1. **All 5 sections are filled with concrete content:**
- - Vision: Clear, aspirational picture of success
- - Values: 2-4 principles that guide decisions
- - Methods: 4-7 concrete approaches with specific actions
- - Obstacles: Honest personal/technical challenges
- - Metrics: Measurable outcomes (not vanity metrics)
-
-2. **You can use it for decision-making:**
- - Can answer "does X fit this V2MOM?" quickly
- - Provides clarity on priorities (Method 1 > Method 2 > etc.)
- - Identifies what NOT to do
-
-3. **Both parties agree it's ready:**
- - Feels complete, not rushed
- - Actionable enough to start execution
- - Honest about obstacles (not sugar-coated)
-
-*Measurable validation:*
-- Can you articulate the vision in one sentence?
-- Do the values help you say "no" to things?
-- Are methods ordered by priority?
-- Can you immediately identify 3-5 tasks from Method 1?
-- Do metrics tell you if you're succeeding?
-
-* When to Use This Session
-
-Trigger this V2MOM creation workflow when:
-
-- Starting a significant project (new business, new habit, new system)
-- Existing project has accumulated many competing priorities without clear focus
-- You find yourself constantly context-switching between ideas
-- Someone asks "what are you trying to accomplish?" and the answer is vague
-- You want to apply ruthless prioritization but lack framework
-- Annual/quarterly planning for ongoing projects or life goals
-
-*V2MOM is particularly valuable for:*
-- Personal infrastructure projects (tooling, systems, workflows)
-- Health and fitness goals
-- Financial planning and wealth building
-- Software package development
-- Business strategy
-- Career development
-- Any long-running project where you're making the decisions
-
-* Approach: How We Work Together
-
-** Phase 1: Understand the V2MOM Framework
-
-Before starting, ensure both parties understand what each section means:
-
-- *Vision:* What you want to achieve (aspirational, clear picture of success)
-- *Values:* Principles that guide decisions (2-4 values, defined concretely)
-- *Methods:* How you'll achieve the vision (4-7 approaches, ordered by priority)
-- *Obstacles:* What's in your way (honest, personal, specific)
-- *Metrics:* How you'll measure success (objective, not vanity metrics)
-
-*Important:* V2MOM sections are completed IN ORDER. Vision informs Values. Values inform Methods. Methods reveal Obstacles. Everything together defines Metrics.
-
-** Phase 2: Create the Document Structure
-
-1. Create file: =docs/[project-name]-v2mom.org= or appropriate location
-2. Add metadata: #+TITLE, #+AUTHOR, #+DATE, #+FILETAGS
-3. Create section headings for all 5 components
-4. Add "What is V2MOM?" overview section at top
-
-*Save incrementally:* V2MOM discussions can be lengthy. Save after completing each section to prevent data loss.
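A minimal skeleton for this phase might look like the following (the title, author, date, and tags are placeholders, not prescribed values):

#+begin_example
#+TITLE: My Project V2MOM
#+AUTHOR: Your Name
#+DATE: 2025-01-01
#+FILETAGS: :v2mom:

* What is V2MOM?
* Vision
* Values
* Methods
* Obstacles
* Metrics
#+end_example

Filling the headings in order, and saving after each, keeps the draft recoverable if the session is interrupted.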
-
-** Phase 3: Define the Vision
-
-*Ask:* "What do you want to achieve? What does success look like?"
-
-*Goal:* Get a clear, aspirational picture. Should be 1-3 paragraphs describing the end state.
-
-*Claude's role:*
-- Help articulate the vision as it's being described
-- Push for specificity ("works great" → what specifically works?)
-- Identify scope (what's included, what's explicitly out of scope)
-- Capture concrete examples mentioned
-
-*Good vision characteristics:*
-- Paints a picture you can visualize
-- Describes outcomes, not implementation
-- Aspirational but grounded in reality
-- Specific enough to know what's included
-
-*Examples across domains:*
-- Health: "Wake up with energy, complete a 5K without stopping, feel strong in daily activities, and have stable mood throughout the day"
-- Finance: "Six months emergency fund, debt-free except mortgage, automatic retirement savings, and financial decisions that don't cause anxiety"
-- Software: "A package that integrates seamlessly, has comprehensive documentation, handles edge cases gracefully, and that maintainers of other packages want to depend on"
-
-*Time estimate:* 15-30 minutes if vision is mostly clear; 45-60 minutes if needs exploration
-
-** Phase 4: Define the Values
-
-*Ask:* "What principles guide your decisions? When faced with choice A vs B, what values help you decide?"
-
-*Goal:* Identify 2-4 values with concrete definitions and examples.
-
-*Claude's role:*
-- Suggest values based on vision discussion
-- Push for concrete definitions (not just the word, but what it MEANS)
-- Help distinguish between overlapping values
-- Identify when examples contradict stated values
-
-*Common pitfall:* Listing generic words without defining them.
-- Bad: "Quality, Speed, Innovation"
-- Good: "Sustainable means can maintain this for 10+ years without burning out. No crash diets, no 80-hour weeks, no technical debt I can't service."
-
-*For each value, capture:*
-1. **The value name** (1-2 words)
-2. **Definition** (what this means in context of this project)
-3. **Concrete examples** (how this manifests)
-4. **What breaks this value** (anti-patterns)
-
-*Method:*
-- Start with 3-5 candidate values
-- For each one, ask: "What does [value] mean to you in this context?"
-- Discuss until definition is concrete
-- Write definition with examples
-- Refine/merge/remove until 2-4 remain
-
-*Examples across domains:*
-- Health V2MOM: "Sustainable: Can do this at 80 years old. No extreme diets. Focus on habits that compound over decades."
-- Finance V2MOM: "Automatic: Set up once, runs forever. Don't rely on willpower for recurring decisions. Automate savings and investments."
-- Software V2MOM: "Boring: Use proven patterns. No clever code. Maintainable by intermediate developers. Boring is reliable."
-
-*Time estimate:* 30-45 minutes
-
-** Phase 5: Define the Methods
-
-*Ask:* "How will you achieve the vision? What approaches will you take?"
-
-*Goal:* Identify 4-7 methods (concrete approaches) ordered by priority.
-
-*Claude's role:*
-- Extract methods from vision and values discussion
-- Help order by priority (what must happen first?)
-- Ensure methods are actionable (not just categories)
-- Push for concrete actions under each method
-- Watch for method ordering that creates dependencies
-
-*Structure for each method:*
-1. **Method name** (verb phrase: "Build X", "Eliminate Y", "Establish Z")
-2. **Aspirational description** (1-2 sentences: why this matters)
-3. **Concrete actions** (bulleted list: specific things to do)
-
-*Method ordering matters:*
-- Method 1 should be highest priority (blocking everything else)
-- Lower-numbered methods should enable higher-numbered ones
-- Common patterns:
- - Fix → Stabilize → Build → Enhance → Sustain
- - Eliminate → Replace → Optimize → Automate → Maintain
- - Learn → Practice → Apply → Teach → Systematize
-
-*Examples across domains:*
-
-Health V2MOM:
-- Method 1: Eliminate Daily Energy Drains (fix sleep, reduce inflammatory foods, address vitamin deficiencies)
-- Method 2: Build Baseline Strength (3x/week resistance training, progressive overload, focus on compound movements)
-- Method 3: Establish Sustainable Nutrition (meal prep system, protein targets, vegetable servings)
-
-Finance V2MOM:
-- Method 1: Stop the Bleeding (identify and eliminate wasteful subscriptions, high-interest debt, impulse purchases)
-- Method 2: Build the Safety Net (automate savings, reach $1000 emergency fund, then 3 months expenses)
-- Method 3: Invest for the Future (max employer 401k match, open IRA, set automatic contributions)
-
-Software Package V2MOM:
-- Method 1: Nail the Core Use Case (solve one problem extremely well, clear documentation, handles errors gracefully)
-- Method 2: Ensure Quality and Stability (comprehensive test suite, CI/CD, semantic versioning)
-- Method 3: Build Community and Documentation (contribution guide, examples, responsive to issues)
-
-*Important:* Each method should have 3-8 concrete actions listed. If you can't list concrete actions, the method is too vague.
-
-*Time estimate:* 45-90 minutes (longest section)
-
-** Phase 6: Identify the Obstacles
-
-*Ask:* "What's in your way? What makes this hard?"
-
-*Goal:* Honest, specific obstacles (both personal and technical/external).
-
-*Claude's role:*
-- Encourage honesty (obstacles are not failures, they're reality)
-- Help distinguish between symptoms and root causes
-- Identify patterns in behavior that create obstacles
-- Acknowledge challenges without judgment
-
-*Good obstacle characteristics:*
-- Honest about personal patterns
-- Specific, not generic
-- Acknowledges both internal and external obstacles
-- States real stakes (not just "might happen")
-
-*Common obstacle categories:*
-- Personal: perfectionism, hard to say no, gets bored, procrastinates
-- Knowledge: missing skills, unclear how to proceed, need to learn
-- External: limited time, limited budget, competing priorities
-- Systemic: environmental constraints, lack of tools, dependencies on others
-
-*For each obstacle:*
-- Name it clearly
-- Describe how it manifests in this project
-- Acknowledge the stakes (what happens because of this obstacle)
-
-*Examples across domains:*
-
-Health V2MOM obstacles:
-- "I get excited about new workout programs and switch before seeing results (pattern: 6 weeks into a program)"
-- "Social events involve food and alcohol - saying no feels awkward and isolating"
-- "When stressed at work, I skip workouts and eat convenient junk food"
-
-Finance V2MOM obstacles:
-- "Viewing budget as restriction rather than freedom - triggers rebellion and impulse spending"
-- "Fear of missing out on lifestyle experiences my peers have"
-- "Limited financial literacy - don't understand investing beyond 'put money in account'"
-
-Software Package V2MOM obstacles:
-- "Perfectionism delays releases - always 'one more feature' before v1.0"
-- "Maintaining documentation feels boring compared to writing features"
-- "Limited time (2-4 hours/week) and competing projects"
-
-*Time estimate:* 15-30 minutes
-
-** Phase 7: Define the Metrics
-
-*Ask:* "How will you measure success? What numbers tell you if this is working?"
-
-*Goal:* 5-10 metrics that are objective, measurable, and aligned with vision/values.
-
-*Claude's role:*
-- Suggest metrics based on vision, values, and methods
-- Push for measurable numbers (not "better", but concrete targets)
-- Identify vanity metrics (look good but don't measure real progress)
-- Ensure metrics align with values and methods
-
-*Metric categories:*
-- **Performance metrics:** Measurable outcomes of the work
-- **Discipline metrics:** Process adherence, consistency, focus
-- **Quality metrics:** Standards maintained, sustainability indicators
-
-*Good metric characteristics:*
-- Objective (not subjective opinion)
-- Measurable (can actually collect the data)
-- Actionable (can change behavior to improve it)
-- Aligned with values and methods
-
-*For each metric:*
-- Name it clearly
-- Specify current state (if known)
-- Specify target state
-- Describe how to measure it
-- Specify measurement frequency
-
-*Examples across domains:*
-
-Health V2MOM metrics:
-- Resting heart rate: 70 bpm → 60 bpm (measure: daily via fitness tracker)
-- Workout consistency: 3x/week strength training for 12 consecutive weeks
-- Sleep quality: 7+ hours per night 6+ nights per week (measure: sleep tracker)
-- Energy rating: subjective 1-10 scale, target 7+ average over week
-
-Finance V2MOM metrics:
-- Emergency fund: $0 → $6000 (measure: monthly)
-- High-interest debt: $8000 → $0 (measure: monthly)
-- Savings rate: 5% → 20% of gross income (measure: monthly)
-- Financial anxiety: weekly check-in, target "comfortable with financial decisions"
-
-Software Package V2MOM metrics:
-- Test coverage: 0% → 80% (measure: coverage tool)
-- Issue response time: median < 48 hours (measure: GitHub stats)
-- Documentation completeness: all public APIs documented with examples
-- Adoption: 10+ GitHub stars, 3+ projects depending on it
-
-*Time estimate:* 20-30 minutes
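The per-metric fields listed above (current state, target, measurement method, frequency) lend themselves to a simple tracking structure. The sketch below is illustrative only; the record fields and the =progress= helper are hypothetical conveniences, not part of the V2MOM framework:

```python
# Illustrative metric record and progress check (hypothetical helper,
# not part of the V2MOM framework itself).

def progress(metric):
    """Fraction of the way from the starting state to the target, clamped to [0, 1]."""
    span = metric["target"] - metric["start"]
    if span == 0:
        return 1.0  # already at target
    done = (metric["current"] - metric["start"]) / span
    return max(0.0, min(1.0, done))

# Example records mirroring the finance and health metrics above.
emergency_fund = {"name": "Emergency fund ($)", "start": 0,
                  "current": 1500, "target": 6000,
                  "measure": "monthly bank statement"}
resting_hr = {"name": "Resting heart rate (bpm)", "start": 70,
              "current": 65, "target": 60,
              "measure": "daily via fitness tracker"}

for m in (emergency_fund, resting_hr):
    print(f"{m['name']}: {progress(m):.0%} of the way to target")
```

Decreasing targets (such as resting heart rate) work unchanged because the span is signed.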
-
-** Phase 8: Review and Refine
-
-Once all sections are complete, review the whole V2MOM together:
-
-*Ask together:*
-1. **Does the vision excite you?** (If not, why not? What's missing?)
-2. **Do the values guide decisions?** (Can you use them to say no to things?)
-3. **Are the methods ordered by priority?** (Is Method 1 truly most important?)
-4. **Are the obstacles honest?** (Or are you sugar-coating?)
-5. **Will the metrics tell you if you're succeeding?** (Or are they vanity metrics?)
-6. **Does this V2MOM make you want to DO THE WORK?** (If not, something is wrong)
-
-*Refinement:*
-- Merge overlapping methods
-- Reorder methods if priorities are wrong
-- Add missing concrete actions
-- Strengthen weak definitions
-- Remove fluff
-
-*Red flags:*
-- Vision doesn't excite you → Need to dig deeper into what you really want
-- Values are generic → Need concrete definitions and examples
-- Methods have no concrete actions → Too vague, need specifics
-- Obstacles are all external → Need honesty about personal patterns
-- Metrics are subjective → Need objective measurements
-
-** Phase 9: Commit and Use
-
-Once the V2MOM feels complete:
-
-1. **Save the document** in appropriate location
-2. **Share with stakeholders** (if applicable)
-3. **Use it immediately** (start Method 1 execution or first triage)
-4. **Schedule first review** (1 week out: is this working?)
-
-*Why use immediately:* Validates the V2MOM is practical, not theoretical. Execution reveals gaps that discussion misses.
-
-* Principles to Follow
-
-** Honesty Over Aspiration
-
-V2MOM requires brutal honesty, especially in Obstacles section.
-
-*Examples:*
-- "I get bored after 6 weeks" (honest) vs "Maintaining focus is challenging" (bland)
-- "I have 3 hours per week max" (honest) vs "Time is limited" (vague)
-- "I impulse-spend when stressed" (honest) vs "Budget adherence needs work" (passive)
-
-**Honesty enables solutions.** If you can't name the obstacle, you can't overcome it.
-
-** Concrete Over Abstract
-
-Every section should have concrete examples and definitions.
-
-*Bad:*
-- Vision: "Be successful"
-- Values: "Quality, Speed, Innovation"
-- Methods: "Improve things"
-- Metrics: "Do better"
-
-*Good:*
-- Vision: "Complete a 5K in under 30 minutes, have energy to play with kids after work, sleep 7+ hours consistently"
-- Values: "Sustainable: Can maintain for 10+ years. No crash diets, no injury-risking overtraining."
-- Methods: "Method 1: Fix sleep quality (blackout curtains, consistent bedtime, no screens 1hr before bed)"
-- Metrics: "5K time: current 38min → target 29min (measure: monthly timed run)"
-
-** Priority Ordering is Strategic
-
-Method ordering determines what happens first. Get it wrong and you'll waste effort.
-
-*Common patterns:*
-- **Fix → Build → Enhance → Sustain** (eliminate problems before building)
-- **Eliminate → Replace → Optimize** (stop damage before improving)
-- **Learn → Practice → Apply → Teach** (build skill progressively)
-
-*Why Method 1 must address the blocker:*
-- If foundation is broken, can't build on it
-- High-impact quick wins build momentum
-- Must stop the bleeding before starting rehab
-
-** Methods Need Concrete Actions
-
-If you can't list 3-8 concrete actions for a method, it's too vague.
-
-*Test:* Can you start working on Method 1 immediately after completing the V2MOM?
-
-If the answer is "I need to think about what to do first", the method needs more concrete actions.
-
-*Example:*
-- Too vague: "Method 1: Improve health"
-- Concrete: "Method 1: Fix sleep quality → blackout curtains, consistent 10pm bedtime, no screens after 9pm, magnesium supplement, sleep tracking"
-
-** Metrics Must Be Measurable
-
-"Better" is not a metric. "Bench press 135 lbs" is a metric.
-
-*For each metric, you must be able to answer:*
-1. How do I measure this? (exact method or tool)
-2. What's the current state?
-3. What's the target state?
-4. How often do I measure it?
-5. What does this metric actually tell me?
-
-If you can't answer these, it's not a metric yet.
-
-** V2MOM Is a Living Document
-
-V2MOM is not set in stone. As you execute:
-
-- Methods may need reordering (new information reveals priorities)
-- Metrics may need adjustment (too aggressive or too conservative)
-- New obstacles emerge (capture them)
-- Values get refined (concrete examples clarify definitions)
-
-*Update the V2MOM when:*
-- Major priority shift occurs
-- New obstacle emerges that changes approach
-- Metric targets prove unrealistic or too easy
-- Method completion opens new possibilities
-- Quarterly review reveals misalignment
-
-*But don't update frivolously:* Changing the V2MOM every week defeats the purpose. Update when major shifts occur, not when minor tactics change.
-
-** Use It or Lose It
-
-V2MOM only works if you use it for decisions.
-
-*Use it for:*
-- Weekly reviews (am I working on right things?)
-- Priority decisions (which method does this serve?)
-- Saying no to distractions (not in the methods)
-- Celebrating wins (shipped Method 1 items!)
-- Identifying blockers (obstacles getting worse?)
-
-*If 2 weeks pass without referencing the V2MOM, something is wrong.* Either the V2MOM isn't serving you, or you're not using it.
-
-* Living Document
-
-This is a living document. After creating V2MOMs for different projects, consider:
-- Did the process work well?
-- Were any sections harder than expected?
-- Did we discover better questions to ask?
-- Should sections be created in different order?
-- What patterns emerge across different domains?
-
-Update this session document with learnings to make future V2MOM creation smoother.
-
-* Examples: V2MOMs Across Different Domains
-
-** Example 1: Health and Fitness V2MOM (Brief)
-
-*Vision:* Wake up with energy, complete 5K comfortably, feel strong in daily activities, stable mood, no afternoon crashes.
-
-*Values:*
-- Sustainable: Can do this at 80 years old
-- Compound: Small daily habits over quick fixes
-
-*Methods:*
-1. Fix Sleep Quality (blackout curtains, consistent bedtime, track metrics)
-2. Build Baseline Strength (3x/week, compound movements, progressive overload)
-3. Establish Nutrition System (meal prep, protein targets, hydration)
-
-*Obstacles:*
-- Get excited about new programs, switch before results (6-week pattern)
-- Social events involve alcohol and junk food
-- Skip workouts when stressed at work
-
-*Metrics:*
-- Resting heart rate: 70 → 60 bpm
-- Workout consistency: 3x/week for 12 consecutive weeks
-- 5K time: 38min → 29min
-
-** Example 2: Financial Independence V2MOM (Brief)
-
-*Vision:* Six months emergency fund, debt-free except mortgage, automatic investing, financial decisions without anxiety.
-
-*Values:*
-- Automatic: Set up once, runs forever (don't rely on willpower)
-- Freedom: Budget enables choices, not restricts them
-
-*Methods:*
-1. Stop the Bleeding (eliminate subscriptions, high-interest debt, impulse purchases)
-2. Build Safety Net ($1000 emergency fund → 3 months → 6 months)
-3. Automate Investing (max 401k match, IRA, automatic contributions)
-
-*Obstacles:*
-- View budget as restriction → triggers rebellion spending
-- FOMO on experiences peers have
-- Limited financial literacy
-
-*Metrics:*
-- Emergency fund: $0 → $6000
-- Savings rate: 5% → 20%
-- High-interest debt: $8000 → $0
-
-** Example 3: Emacs Configuration V2MOM (Detailed)
-
-This V2MOM was created over 2 sessions in late 2025 and led to significant improvements in config quality and maintainability.
-
-*** The Context
-
-Craig's Emacs configuration had grown to 50+ todo items, unclear priorities, and performance issues. The config was his most-used software (email, calendar, tasks, programming, reading, music), so breakage blocked all work.
-
-*** The Process (2 Sessions, ~2.5 Hours Total)
-
-*Session 1 (2025-10-30, ~1 hour):*
-- Vision: Already clear from existing draft, kept as-is
-- Values: Deep Q&A to define "Intuitive", "Fast", "Simple"
- - Each value got concrete definition + examples + anti-patterns
- - Intuitive: muscle memory + mnemonics + which-key timing
- - Fast: < 3s startup, org-agenda is THE BOTTLENECK
- - Simple: production practices, simplicity produces reliability
-
-*Session 2 (2025-10-31, ~1.5 hours):*
-- Methods: Identified 6 methods through Q&A
- - Method 1: Make Using Emacs Frictionless (fix daily pain)
- - Method 2: Stop Problems Before They Appear (stability)
- - Method 3: Make Fixing Emacs Frictionless (tooling)
- - Method 4: Contribute to Ecosystem (package maintenance)
- - Method 5: Be Kind To Future Self (new features)
- - Method 6: Develop Disciplined Practices (meta-method)
-- Obstacles: Honest personal patterns
- - "Getting irritated at mistakes and pushing on"
- - "Hard to say no to fun ideas"
- - "Perfectionism delays shipping"
-- Metrics: Measurable outcomes
- - Startup time: 6.2s → < 3s
- - Org-agenda rebuild: 30s → < 5s
- - Active todos: 50+ → < 20
- - Weekly triage consistency
- - Research:shipped ratio > 1:1
-
-*** Immediate Impact
-
-After completing V2MOM:
-- Ruthlessly triaged 50+ todos → 23 (approaching the < 20 target)
-- Archived items not serving vision to someday-maybe.org
-- Immediate execution: removed network check (2s improvement!)
-- Clear decision framework for weekly inbox triage
-- Startup improved: 6.19s → 4.16s → 3.8s (approaching target)
-
-*** Key Learnings
-
-1. **Vision was easy:** Already had clear picture of success
-2. **Values took work:** Required concrete definitions, not just words
-3. **Methods needed ordering:** Priority emerged from dependency discussion
-4. **Obstacles required honesty:** Hardest to name personal patterns
-5. **Metrics aligned with values:** "Fast" value → fast metrics (startup, org-agenda)
-
-*** Why It Worked
-
-- V2MOM provided framework to say "no" ruthlessly
-- Method ordering prevented building on broken foundation
-- Metrics were objective (seconds, counts) not subjective
-- Obstacles acknowledged personal patterns, enabling better strategies
-- Used immediately for inbox triage (validated practicality)
-
-* Conclusion
-
-Creating a V2MOM transforms vague intentions into concrete strategy. It provides:
-
-- **Clarity** on what you're actually trying to achieve
-- **Decision framework** for ruthless prioritization
-- **Measurable progress** through objective metrics
-- **Honest obstacles** that can be addressed
-- **Ordered methods** that build on each other
-
-**The framework takes 2-3 hours to create. It saves weeks of unfocused work.**
-
-The V2MOM works across domains because the structure is universal:
-- Vision: Where am I going?
-- Values: What principles guide me?
-- Methods: How do I get there?
-- Obstacles: What's in my way?
-- Metrics: How do I know it's working?
-
-*Remember:* V2MOM is a tool, not a trophy. Create it, use it, update it, and let it guide your work. If you're not using it weekly, either fix the V2MOM or admit you don't need one.
-
-*Final test:* Can you say "no" to something you would have said "yes" to before? If so, the V2MOM is working.
diff --git a/docs/workflows/create-workflow.org b/docs/workflows/create-workflow.org
deleted file mode 100644
index b6896fd8..00000000
--- a/docs/workflows/create-workflow.org
+++ /dev/null
@@ -1,352 +0,0 @@
-#+TITLE: Creating New Session Workflows
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes the meta-workflow for creating new workflows. When we identify a repetitive workflow or collaborative pattern, we use this process to formalize it into a documented session that we can reference and reuse.
-
-Session workflows are living documents that capture how we work together on specific types of tasks. They build our shared vocabulary and enable efficient collaboration across multiple work sessions.
-
-* Problem We're Solving
-
-Without a formal session creation process, we encounter several issues:
-
-** Inefficient Use of Intelligence
-- Craig leads the process based solely on his knowledge
-- We don't leverage Claude's expertise to improve or validate the approach
-- Miss opportunities to apply software engineering and process best practices
-
-** Time Waste and Repetition
-- Craig must re-explain the workflow each time we work together
-- No persistent memory of how we've agreed to work
-- Each session starts from scratch instead of building on previous work
-
-** Error-Prone Execution
-- Important steps may be forgotten or omitted
-- No checklist to verify completeness
-- Mistakes lead to incomplete work or failed goals
-
-** Missed Learning Opportunities
-- Don't capture lessons learned from our collaboration
-- Can't improve processes based on what works/doesn't work
-- Lose insights that emerge during execution
-
-** Limited Shared Vocabulary
-- No deep, documented understanding of what terms mean
-- "Let's do a refactor session" has no precise definition
-- Can't efficiently communicate about workflows
-
-*Impact:* Inefficiency, errors, and lost opportunity to continuously improve our collaborative workflows.
-
-* Exit Criteria
-
-We know a session definition is complete when:
-
-1. **Information is logically arranged** - The structure makes sense and flows naturally
-2. **Both parties understand how to work together** - We can articulate the workflow
-3. **Agreement on effectiveness** - We both agree that following this session will lead to exit criteria and resolve the stated problem
-4. **Tasks are clearly defined** - Steps are actionable, not vague
-5. **Problem resolution path** - Completing the tasks either:
- - Fixes the problem permanently, OR
- - Provides a process for keeping the problem at bay
-
-*Measurable validation:*
-- Can we both articulate the workflow without referring to the document?
-- Do we agree it will solve the problem?
-- Are the tasks actionable enough to start immediately?
-- Does the session get used soon after creation (validation by execution)?
-
-* When to Use This Session
-
-Trigger this session creation workflow when:
-
-- You notice a repetitive workflow that keeps coming up
-- A collaborative pattern emerges that would benefit from documentation
-- Craig says "let's create/define/design a session for [activity]"
-- You identify a new type of work that doesn't fit existing workflows
-- An existing workflow needs significant restructuring (treat as creating a new one)
-
-Examples:
-- "Let's create a session where we inbox zero"
-- "We should define a code review session"
-- "Let's design a session for weekly planning"
-
-* Approach: How We Work Together
-
-** Phase 1: Question and Answer Discovery
-
-Walk through these four core questions collaboratively. Take notes on the answers.
-
-*IMPORTANT: Save answers as you go!*
-
-The Q&A phase can take time—Craig may need to think through answers, and discussions can be lengthy. To prevent data loss from terminal crashes or process quits:
-
-1. Create a draft file at =docs/workflows/[name]-draft.org= after deciding on the name
-2. After each question is answered, save the Q&A content to the draft file
-3. If session is interrupted, you can resume from the saved answers
-4. Once complete, the draft becomes the final session document
-
-This protects against losing substantial thinking work if the session is interrupted.
-
-*** Question 1: What problem are we solving in this type of session?
-
-Ask Craig: "What problem are we solving in this type of session?"
-
-The answer reveals:
-- Overview and goal of the session
-- Why this work matters (motivation)
-- Impact/priority compared to other work
-- What happens if we don't do this work
-
-Example from refactor session:
-#+begin_quote
-"My Emacs configuration isn't resilient enough. There's lots of custom code, and I'm even developing some as Emacs packages. Yet Emacs is my most-used software, so when Emacs breaks, I become unproductive. I need to make Emacs more resilient through good unit tests and refactoring."
-#+end_quote
-
-*** Question 2: How do we know when we're done?
-
-Ask Craig: "How do we know when we're done?"
-
-The answer reveals:
-- Exit criteria
-- Results/completion criteria
-- Measurable outcomes
-
-*Your role:*
-- Push back if the answer is vague or unmeasurable
-- Propose specific measurements based on context
-- Iterate together until criteria are clear
-- Fallback (hopefully rare): "when Craig says we're done"
-
-Example from refactor session:
-#+begin_quote
-"When we've reviewed all methods, decided which to test and refactor, run all tests, and fixed all failures including bugs we find."
-#+end_quote
-
-Claude might add: "How about a code coverage goal of 70%+?"
-
-*** Question 3: How do you see us working together in this kind of session?
-
-Ask Craig: "How do you see us working together in this kind of session?"
-
-The answer reveals:
-- Steps or phases we'll go through
-- The general approach to the work
-- How tasks flow from one to another
-
-*Your role:*
-- As steps emerge, ask yourself:
- - "Do these steps lead to solving the real problem?"
- - "What is missing from these steps?"
-- If the answers aren't "yes" and "nothing", raise concerns
-- Propose additions based on your knowledge
-- Suggest concrete improvements
-
-Example from refactor session:
-#+begin_quote
-"We'll analyze test coverage, categorize functions by testability, write tests systematically using Normal/Boundary/Error categories, run tests, analyze failures, fix bugs, and repeat."
-#+end_quote
-
-Claude might suggest: "Should we install a code coverage tool as part of this process?"
-
-*** Question 4: Are there any principles we should be following while doing this?
-
-Ask Craig: "Are there any principles we should be following while doing this kind of session?"
-
-The answer reveals:
-- Principles to follow
-- Decision frameworks
-- Quality standards
-- When to choose option A vs option B
-
-*Your role:*
-- Think through all elements of the session
-- Consider situations that may arise
-- Identify what principles would guide decisions
-- Suggest decision frameworks from your knowledge
-
-Example from refactor session:
-#+begin_quote
-Craig: "Treat all test code as production code - same engineering practices apply."
-
-Claude suggests: "Since we'll refactor methods mixing UI and logic, should we add a principle to separate them for testability?"
-#+end_quote
-
-** Phase 2: Assess Completeness
-
-After the Q&A, ask together:
-
-1. **Do we have enough information to formulate steps/process?**
- - If yes, proceed to Phase 3
- - If no, identify what's missing and discuss further
-
-2. **Do we agree following this approach will resolve/mitigate the problem?**
- - Both parties must agree
- - If not, identify concerns and iterate
-
-** Phase 3: Name the Session
-
-Decide on a name for this workflow.
-
-*Naming convention:* Action-oriented (verb form)
-- Examples: "refactor", "inbox-zero", "create-workflow", "review-code"
-- Why: Shorter, natural when saying "let's do a [name] session"
-- Filename: =docs/workflows/[name].org=
-
-** Phase 4: Document the Session
-
-Write the session file at =docs/workflows/[name].org= using this structure:
-
-*** Recommended Structure
-1. *Title and metadata* (=#+TITLE=, =#+AUTHOR=, =#+DATE=)
-2. *Overview* - Brief description of the session
-3. *Problem We're Solving* - From Q&A, with context and impact
-4. *Exit Criteria* - Measurable outcomes, how we know we're done
-5. *When to Use This Session* - Triggers, circumstances, examples
-6. *Approach: How We Work Together*
- - Phases/steps derived from Q&A
- - Decision frameworks
- - Concrete examples woven throughout
-7. *Principles to Follow* - Guidelines from Q&A
-8. *Living Document Notice* - Reminder to update with learnings
-
-*** Important Notes
-- Weave concrete examples into sections (don't separate them)
-- Use examples from actual sessions when available
-- Make tasks actionable, not vague
-- Include decision frameworks for common situations
-- Note that this is a living document
-
-** Phase 5: Update Project State
-
-Update =NOTES.org=:
-1. Add new workflow to "Available Workflows" section
-2. Include brief description and reference to file
-3. Note creation date
-
-Example entry:
-#+begin_src org
-,** inbox-zero
-File: =docs/workflows/inbox-zero.org=
-
-Workflow for processing inbox to zero:
-1. [Brief workflow summary]
-2. [Key steps]
-
-Created: 2025-11-01
-#+end_src
-
-** Phase 6: Validate by Execution
-
-*Critical step:* Use the session soon after creating it.
-
-- Schedule the workflow for immediate use
-- Follow the documented workflow
-- Note what works well
-- Identify gaps or unclear areas
-- Update the session document with learnings
-
-*This validates the session definition and ensures it's practical, not theoretical.*
-
-* Principles to Follow
-
-These principles guide us while creating new sessions:
-
-** Collaboration Through Discussion
-- Be proactive about collaboration
-- Suggest everything on your mind
-- Ask all relevant questions
-- Push back when something seems wrong, inconsistent, or unclear
-- Misunderstandings are learning opportunities
-
-** Reviewing the Whole as Well as the Pieces
-- It's easy to get into the weeds while identifying each step
-- Stop to look at the whole thing at the end
-- Ask the big questions: Does this actually solve the problem?
-- Verify all pieces connect logically
-
-** Concrete Over Abstract
-- Use examples liberally within explanations
-- Weave concrete examples into Q&A answers
-- Don't just describe abstractly
-- "When nil input crashes, ask..." is better than "handle edge cases"
-
-** Actionable Tasks Over Vague Direction
-- Steps should be clear enough to know what to do next
-- "Ask: how do you see us working together?" is actionable
-- "Figure out the approach" is too vague
-- Test: Could someone execute this without further explanation?
-
-** Validate Early
-- "Use it soon afterward" catches problems early
-- Don't let session definitions sit unused and untested
-- Real execution reveals gaps that theory misses
-- Update immediately based on first use
-
-** Decision Frameworks Over Rigid Steps
-- Sessions are frameworks (principles + flexibility), not recipes
-- Include principles that help case-by-case decisions
-- "When X happens, ask Y" is a decision framework
-- "Always do X" is too rigid for most sessions
-
-** Question Assumptions
-- If something doesn't make sense, speak up
-- If a step seems to skip something, point it out
-- Better to question during creation than discover gaps during execution
-- No assumption is too basic to verify
-
-* Living Document
-
-This is a living document. As we create new sessions and learn what works (and what doesn't), we update this file with:
-
-- New insights about session creation
-- Improvements to the Q&A process
-- Better examples
-- Additional principles discovered
-- Refinements to the structure
-
-Every time we create a session, we have an opportunity to improve this meta-process.
-
-** Updates and Learnings
-
-*** 2025-11-01: Save Q&A answers incrementally
-*Learning:* During emacs-inbox-zero session creation, we discovered that Q&A discussions can be lengthy and make Craig think deeply. Terminal crashes or process quits can lose substantial work.
-
-*Improvement:* Added guidance in Phase 1 to create a draft file and save Q&A answers after each question. This protects against data loss and allows resuming interrupted sessions.
-
-*Impact:* Reduces risk of losing 10-15 minutes of thinking work if session is interrupted.
-
-*** 2025-11-01: Validation by execution works!
-*Learning:* Immediately after creating the emacs-inbox-zero session, we validated it by actually running the workflow. This caught unclear areas and validated that the 10-minute target was realistic.
-
-*Key insight from validation:* When Craig provides useful context during workflows (impact estimates, theories, examples), that context should be captured in task descriptions. This wasn't obvious during session creation but became clear during execution.
-
-*Impact:* Validation catches what theory misses. Always use Phase 6 (validate by execution) soon after creating a session.
-
-* Example: Creating the "Create-Session" Session
-
-This very document was created using the process it describes (recursive!).
-
-** The Q&A
-- *Problem:* Time waste, errors, missed learning from informal processes
-- *Exit criteria:* Logical arrangement, mutual understanding, agreement on effectiveness, actionable tasks
-- *Approach:* Four-question Q&A, assess completeness, name it, document it, update NOTES.org, validate by use
-- *Principles:* Collaboration through discussion, review the whole, concrete over abstract, actionable tasks, validate early, decision frameworks, question assumptions
-
-** The Result
-We identified what was needed, collaborated on answers, and captured it in this document. Then we immediately used it to create the next session (validation).
-
-* Conclusion
-
-Creating workflows is a meta-skill that improves all our collaboration. By formalizing how we work together, we:
-
-- Build shared vocabulary
-- Eliminate repeated explanations
-- Capture lessons learned
-- Enable continuous improvement
-- Make our partnership more efficient
-
-Each new workflow we create adds to our collaborative toolkit and deepens our ability to work together effectively.
-
-*Remember:* Sessions are frameworks, not rigid recipes. They provide structure while allowing flexibility for case-by-case decisions. The goal is effectiveness, not perfection.
diff --git a/docs/workflows/emacs-inbox-zero.org b/docs/workflows/emacs-inbox-zero.org
deleted file mode 100644
index 7040ddb7..00000000
--- a/docs/workflows/emacs-inbox-zero.org
+++ /dev/null
@@ -1,338 +0,0 @@
-#+TITLE: Emacs Inbox Zero Session
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This workflow processes the Emacs Config Inbox to zero by filtering tasks through the V2MOM framework. Items either move to active V2MOM methods, get moved to someday-maybe, or get deleted. This weekly discipline prevents backlog buildup and ensures only strategic work gets done.
-
-* Problem We're Solving
-
-Emacs is Craig's most-used software by a significant margin. It's the platform for email, calendar, task management, note-taking, programming, reading, music, podcasts, and more. When Emacs breaks, everything stops—including critical life tasks like family emails, doctor appointments, and bills.
-
-The V2MOM (Vision, Values, Methods, Obstacles, Metrics) framework provides strategic balance between fixing/improving Emacs versus using it for real work. But without weekly maintenance, the system collapses under backlog.
-
-** The Specific Problem
-
-Features and bugs get logged in the "Emacs Config Inbox" heading of =~/.emacs.d/todo.org=. If not sorted weekly:
-- Items pile up and become unmanageable
-- Unclear what's actually important
-- Method 1 ("Make Using Emacs Frictionless") doesn't progress
-- Two key metrics break:
- 1. *Active todo count:* Should be < 20 items
- 2. *Weekly triage consistency:* Must happen at least once per week by Sunday, no longer than 7 days between sessions
-
-** What Happens Without This Session
-
-Without weekly inbox zero:
-- Backlog grows until overwhelming
-- Can't distinguish signal from noise
-- V2MOM becomes theoretical instead of practical
-- Config maintenance competes with real work instead of enabling it
-- Discipline muscle (Method 6: ruthless prioritization) atrophies
-
-*Impact:* The entire V2MOM system fails. Config stays broken longer. Real work gets blocked more often.
-
-* Exit Criteria
-
-The workflow is complete when:
-- Zero todo items remain under the "* Emacs Config Inbox" heading in =~/.emacs.d/todo.org=
-- All items have been routed to: V2MOM methods, someday-maybe, or deleted
-- Can verify by checking the org heading (should be empty or show "0/0" in agenda)
-
-*IMPORTANT:* We are ONLY processing items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 headings have already been triaged and should NOT be touched during this workflow.
-
-*Measurable validation:*
-- Open =todo.org= and navigate to "* Emacs Config Inbox" heading
-- Confirm no child tasks exist under this heading only
-- Bonus: Check that active todo count is < 20 items across entire V2MOM
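-
-The heading check can be scripted. A rough sketch (Python for illustration; the parsing is deliberately simplified and assumes plain org headings without tags):
-
-#+begin_src python
-import re
-
-def inbox_item_count(org_text):
-    """Count child headings under the top-level '* Emacs Config Inbox' heading."""
-    in_inbox = False
-    count = 0
-    for line in org_text.splitlines():
-        if re.match(r"\* ", line):                    # a top-level heading
-            in_inbox = line.rstrip() == "* Emacs Config Inbox"
-        elif in_inbox and re.match(r"\*\*+ ", line):  # any child heading
-            count += 1
-    return count
-
-sample = "* Emacs Config Inbox\n** TODO fix thing\n* Method 1\n** TODO keep"
-assert inbox_item_count(sample) == 1  # only the inbox child is counted
-#+end_src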
-
-* When to Use This Workflow
-
-Trigger this workflow when:
-- It's Sunday and you haven't triaged this week
-- 7 days have passed since last triage (hard deadline)
-- "Emacs Config Inbox" has accumulated items
-- You notice yourself avoiding looking at the inbox (a sign it's becoming overwhelming)
-- Before starting any new Emacs config work (ensures highest-priority work happens first)
-
-*Recommended cadence:* Every Sunday, 10 minutes, no exceptions.
-
-* Approach: How We Work Together
-
-** Phase 1: Sort by Priority
-
-First, ensure todo items are sorted by priority in =todo.org=:
-- A (highest priority)
-- B
-- C
-- No priority
-- D (lowest priority)
-
-This ensures we always look at the most important items first. If time runs short, at least the high-priority items got processed.
-
-** Phase 2: Claude Rereads V2MOM
-
-Before processing any items, Claude rereads [[file:../EMACS-CONFIG-V2MOM.org][EMACS-CONFIG-V2MOM.org]] to have it fresh in mind. This ensures filtering decisions are grounded in the strategic framework.
-
-*What Claude should pay attention to:*
-- The 6 Methods and their concrete actions
-- The Values (Intuitive, Fast, Simple) and what they mean
-- The Metrics (especially active todo count < 20)
-- Method 6 discipline practices (ruthless prioritization, weekly triage, ship-over-research)
-
-** Phase 3: Process Each Item (in Priority Order)
-
-*IMPORTANT:* Process ONLY items under the "* Emacs Config Inbox" heading. Items already organized under Method 1-6 have been triaged and should remain where they are.
-
-For each item under "* Emacs Config Inbox", work through these questions:
-
-*** Question 1: Does this task need to be done at all?
-
-*Consider:*
-- Has something changed?
-- Was this a mistake?
-- Do I disagree with this idea now?
-- Is this actually important?
-
-*If NO:* **DELETE** the item immediately. Don't move it anywhere. Kill it.
-
-*Examples of deletions:*
-- "Add Signal client to Emacs" - Cool idea, not important
-- "Try minimap mode" - Interesting, doesn't serve vision
-- "Research 5 different completion frameworks" - Already have Vertico/Corfu, stop researching
-
-*** Question 2: Is this task related to the Emacs Config V2MOM?
-
-*If NO:* **Move to** =docs/someday-maybe.org=
-
-These are tasks that might be good ideas but don't serve the current strategic focus. They're not deleted (might revisit later) but they're out of active consideration.
-
-*Examples:*
-- LaTeX improvements (no concrete need yet)
-- Elfeed dashboard redesign (unclear if actually used)
-- New theme experiments (side project competing with maintenance)
-
-*** Question 3: Which V2MOM method does this relate to?
-
-*If YES (related to V2MOM):*
-
-Claude suggests which method(s) this might relate to:
-- Method 1: Make Using Emacs Frictionless (performance, bug fixes, missing features)
-- Method 2: Stop Problems Before They Appear (package upgrades, deprecation removal)
-- Method 3: Make Fixing Emacs Frictionless (tooling, testing, profiling)
-- Method 4: Contribute to the Emacs Ecosystem (package maintenance)
-- Method 5: Be Kind To Your Future Self (new capabilities)
-- Method 6: Develop Disciplined Engineering Practices (meta-practices)
-
-*This is a conversation.* If the relationship is only tangential:
-- **Claude should push back** - "This seems tangential. Adding it would dilute focus and delay V2MOM completion. Are you sure this serves the vision?"
-- Help Craig realize it doesn't fit through questions
-- The more we add, the longer V2MOM takes, the harder it is to complete
-
-*If item relates to multiple methods:*
-Pick the **highest priority method** (Method 1 > Method 2 > Method 3 > etc.)
-
-*IMPORTANT: Capture useful context!*
-During discussion, Craig may provide:
-- Impact estimates ("15-20 seconds × 12 times/day")
-- Theories about root causes
-- Context about why this matters
-- Examples of when the problem occurs
-
-**When moving items to methods, add this context to the task description.** This preserves valuable information for later execution and helps prioritize work accurately.
-
-*Then:* Move the item to the appropriate method section in the V2MOM or active todo list with enriched context.
-
-** Phase 4: Verify and Celebrate
-
-Once all items are processed:
-1. Verify "Emacs Config Inbox" heading is empty
-2. Check that active todo count is < 20 items
-3. Note the date of this triage session
-4. Acknowledge: You've practiced ruthless prioritization (Method 6 skill development)
-
-** Decision Framework: When Uncertain
-
-If you're uncertain whether an item fits V2MOM:
-
-1. **Ask: Does this directly serve the Vision?** (Work at speed of thought, stable config, comprehensive workflows)
-2. **Ask: Does this align with Values?** (Intuitive, Fast, Simple)
-3. **Ask: Is this in the Methods already?** (If not explicitly listed, probably shouldn't add)
-4. **Ask: What's the opportunity cost?** (Every new item delays everything else)
-
-*When in doubt:* Move to someday-maybe. You can always pull it back later if it proves critical. Better to be conservative than to dilute focus.
-
-* Principles to Follow
-
-** Claude's Role: "You're here to help keep me honest"
-
-Craig is developing discipline (Method 6: ruthless prioritization). Not making progress = not getting better.
-
-*Claude's responsibilities:*
-- If task clearly fits V2MOM → Confirm and move forward quickly
-- If task is unclear/tangential → **Ask questions** to help Craig realize it doesn't fit or won't lead to V2MOM success
-- Enable ruthless prioritization by helping Craig say "no"
-- Don't let good ideas distract from great goals
-
-*Example questions Claude might ask:*
-- "This is interesting, but which specific metric does it improve?"
-- "We already have 3 items in Method 1 addressing performance. Does this add something different?"
-- "This would be fun to build, but does it make using Emacs more frictionless?"
-- "If you had to choose between this and fixing org-agenda (30s → 5s), which serves the vision better?"
-
-** Time Efficiency: 10 Minutes Active Work
-
-Don't take too long on any single item. Splitting philosophical hairs = procrastination.
-
-*Target:* **10 minutes active work time** (not clock time - interruptions expected)
-
-*If spending > 1 minute on a single item:*
-- Decision is unclear → Move to someday-maybe (safe default)
-- Come back to it later if it proves critical
-- Keep moving
-
-*Why this matters:*
-- Weekly consistency requires low friction
-- Perfect categorization doesn't matter as much as consistent practice
-- Getting through all items > perfectly routing each item
-
-** Ruthless Prioritization Over Completeness
-
-The goal is not to do everything in the inbox. The goal is to identify and focus on what matters most.
-
-*Better to:*
-- Delete 50% of items and ship the other 50%
-- Than keep 100% and ship 0%
-
-*Remember:*
-- Every item kept is opportunity cost
-- V2MOM already has plenty of work
-- "There will always be cool ideas out there to implement and they will always be a web search away" (Craig's words)
-
-** Bias Toward Action
-
-When processing items that ARE aligned with V2MOM:
-- Move them to the appropriate method quickly
-- Don't overthink the categorization
-- Getting it 80% right is better than spending 5 minutes getting it 100% right
-- You can always recategorize later during regular triage
-
-* Example Session Walkthrough
-
-** Setup
-- Open =~/.emacs.d/todo.org=
-- Navigate to "Emacs Config Inbox" heading
-- Verify items are sorted by priority (A → B → C → none → D)
-- Claude rereads =EMACS-CONFIG-V2MOM.org=
-
-** Processing Example Items
-
-*** Example 1: [#A] Fix org-agenda slowness (30+ seconds)
-
-*Q1: Does this need to be done?* YES - Daily pain point blocking productivity
-
-*Q2: Related to V2MOM?* YES - Method 1 explicitly lists this
-
-*Q3: Which method?* Method 1: Make Using Emacs Frictionless
-
-*Action:* Move to Method 1 active tasks (or confirm already there)
-
-*Time:* 15 seconds
-
-*** Example 2: [#B] Add Signal client to Emacs
-
-*Q1: Does this need to be done?* Let's think...
-
-Claude: "What problem does this solve? Is messaging in Emacs part of the Vision?"
-
-Craig: "Not really, I already use Signal on my phone fine."
-
-*Action:* **DELETE** - Doesn't serve vision, already have working solution
-
-*Time:* 30 seconds
-
-*** Example 3: [#C] Try out minimap mode for code navigation
-
-*Q1: Does this need to be done?* Interesting idea, but not important
-
-*Action:* **DELETE** or move to someday-maybe - Interesting, not important
-
-*Time:* 10 seconds
-
-*** Example 4: [#B] Implement transcription workflow
-
-*Q1: Does this need to be done?* YES - Want to transcribe recordings for notes
-
-*Q2: Related to V2MOM?* Maybe... seems like new feature?
-
-Claude: "This seems like Method 5: Be Kind To Your Future Self - new capability you'll use repeatedly. Complete code already exists in old todo.org. But we're still working through Method 1 (frictionless) and Method 2 (stability). Should this wait, or is transcription critical?"
-
-Craig: "Actually yes, I record meetings and need transcripts. This is important."
-
-*Q3: Which method?* Method 5: Be Kind To Your Future Self
-
-*Action:* Move to Method 5 (but note: prioritize after Methods 1-3)
-
-*Time:* 45 seconds (good conversation, worth the time)
-
-** Result
-- 4 items processed in ~2 minutes
-- 1 moved to Method 1 (already there)
-- 1 deleted
-- 1 deleted or moved to someday-maybe
-- 1 moved to Method 5
-- Inbox is clearer, focus is sharper
-
-* Conclusion
-
-Emacs inbox zero is not about getting through email or org-capture. It's about **strategic filtering of config maintenance work**. By processing the inbox weekly, you:
-
-- Keep maintenance load manageable (< 20 active items)
-- Ensure only V2MOM-aligned work happens
-- Practice ruthless prioritization (Method 6 skill)
-- Prevent backlog from crushing future productivity
-- Build the discipline that makes all other methods sustainable
-
-**The session takes 10 minutes. Not doing it costs days of distracted, unfocused work on things that don't matter.**
-
-*Remember:* Inbox zero is not about having zero things to do. It's about knowing exactly what you're NOT doing, so you can focus completely on what matters most.
-
-* Living Document
-
-This is a living document. After each emacs-inbox-zero workflow, consider:
-- Did the workflow make sense?
-- Were any steps unclear or unnecessary?
-- Did any new situations arise that need decision frameworks?
-- Did the 10-minute target work, or should it adjust?
-
-Update this document with learnings to make future workflows smoother.
-
-** Updates and Learnings
-
-*** 2025-11-01: First validation session - Process works!
-
-*Session results:*
-- 5 items processed in ~10 minutes (target met)
-- 1 deleted (duplicate), 2 moved to Method 1, 2 moved to someday-maybe
-- Inbox cleared to zero
-- Priority sorting worked well
-- Three-question filter was effective
-- Caught duplicate task and perfectionism pattern in real-time
-
-*Key learning: Capture useful context during triage*
-When Craig provides impact estimates ("15-20 seconds × 12 times/day"), theories, or context during discussion, **Claude should add this information to the task description** when moving items to methods. This preserves valuable context for execution and helps with accurate prioritization.
-
-Example: "Optimize org-capture target building" was enriched with "15-20 seconds every time capturing a task (12+ times/day). Major daily bottleneck - minutes lost waiting, plus context switching cost."
-
-*Impact:* Better task descriptions → better prioritization → better execution.
diff --git a/docs/workflows/refactor.org b/docs/workflows/refactor.org
deleted file mode 100644
index 2467ab99..00000000
--- a/docs/workflows/refactor.org
+++ /dev/null
@@ -1,617 +0,0 @@
-#+TITLE: Test-Driven Quality Engineering Workflow
-#+AUTHOR: Craig Jennings & Claude
-#+DATE: 2025-11-01
-
-* Overview
-
-This document describes a comprehensive test-driven quality engineering workflow applicable to any source code module. The workflow demonstrates systematic testing practices, refactoring for testability, bug discovery through tests, and decision-making processes when tests fail.
-
-* Workflow Goals
-
-1. Add comprehensive unit test coverage for testable functions in your module
-2. Discover and fix bugs through systematic testing
-3. Follow quality engineering principles from =ai-prompts/quality-engineer.org=
-4. Demonstrate refactoring patterns for testability
-5. Document the decision-making process for test vs production code issues
-
-* Phase 1: Feature Addition with Testability in Mind
-
-** The Feature Request
-
-Add new functionality that requires user interaction combined with business logic.
-
-Example requirements:
-- Present user with options (e.g., interactive selection)
-- Allow cancellation
-- Perform an operation with the selected input
-- Provide clear success/failure feedback
-
-** Refactoring for Testability
-
-Following the "Interactive vs Non-Interactive Function Pattern" from =quality-engineer.org=:
-
-*Problem:* Directly implementing as an interactive function would require:
-- Mocking user interface components
-- Mocking framework-specific APIs
-- Testing UI functionality, not core business logic
-
-*Solution:* Split into two functions:
-
-1. *Helper Function* (internal implementation):
- - Pure, deterministic
- - Takes explicit parameters
- - No user interaction
- - Returns values or signals errors naturally
- - 100% testable, no mocking needed
-
-2. *Interactive Wrapper* (public interface):
- - Thin layer handling only user interaction
- - Gets input from user/context
- - Presents UI (prompts, selections, etc.)
- - Catches errors and displays messages
- - Delegates all business logic to helper
- - No tests needed (just testing framework UI)
-
-** Benefits of This Pattern
-
-From =quality-engineer.org=:
-#+begin_quote
-When writing functions that combine business logic with user interaction:
-- Split into internal implementation and interactive wrapper
-- Internal function: Pure logic, takes all parameters explicitly
-- Dramatically simpler testing (no interactive mocking)
-- Code reusable programmatically without prompts
-- Clear separation of concerns (logic vs UI)
-#+end_quote
-
-This pattern enables:
-- Zero mocking in tests
-- Fast, deterministic tests
-- Easy reasoning about correctness
-- Reusable helper function
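-
-As a concrete sketch of the split (Python for illustration; the function names and the item-removal scenario are hypothetical):
-
-#+begin_src python
-# Pure helper: all parameters explicit, no UI, fully testable without mocks.
-def remove_item(items, selection):
-    if selection not in items:
-        raise ValueError(f"no such item: {selection}")
-    return [item for item in items if item != selection]
-
-# Thin interactive wrapper: gathers input, delegates, reports outcomes.
-def remove_item_interactive(items, prompt=input):
-    selection = prompt("Item to remove (blank to cancel): ")
-    if not selection:
-        print("Cancelled")
-        return items
-    try:
-        result = remove_item(items, selection)
-        print(f"Removed {selection}")
-        return result
-    except ValueError as err:
-        print(err)
-        return items
-#+end_src
-
-The tests exercise only =remove_item=; the wrapper stays a thin shell around the UI.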
-
-* Phase 2: Writing the First Test
-
-** Test File Naming
-
-Following the naming convention from =quality-engineer.org=:
-- Pattern: =test-<module>-<function>.<ext>=
-- One test file per function for easy discovery when tests fail
-- Developer sees failure → immediately knows which file to open
-
-** Test Organization
-
-Following the three-category structure:
-
-*** Normal Cases
-- Standard expected inputs
-- Common use case scenarios
-- Happy path operations
-- Multiple operations in sequence
-
-*** Boundary Cases
-- Very long inputs
-- Unicode characters (中文, emoji)
-- Special characters and edge cases
-- Empty or minimal data
-- Maximum values
-
-*** Error Cases
-- Invalid inputs
-- Nonexistent resources
-- Permission denied scenarios
-- Wrong type of input
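-
-A compact illustration of the three categories (Python; =parse_positive_int= is a hypothetical function invented for this example):
-
-#+begin_src python
-# Hypothetical function under test
-def parse_positive_int(text):
-    if not isinstance(text, str) or not text.strip().isdigit():
-        return None
-    value = int(text.strip())
-    return value if value > 0 else None
-
-# Normal cases
-assert parse_positive_int("42") == 42
-
-# Boundary cases
-assert parse_positive_int(" 1 ") == 1                 # whitespace trimmed
-assert parse_positive_int("0") is None                # minimum edge
-assert parse_positive_int("9" * 30) == int("9" * 30)  # very long input
-
-# Error cases
-assert parse_positive_int(None) is None               # nil input
-assert parse_positive_int("abc") is None              # malformed input
-#+end_src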
-
-** Writing Tests with Zero Mocking
-
-Key principle: "Don't mock what you're testing" (from =quality-engineer.org=)
-
-Example test structure:
-#+begin_src python
-def test_function_normal_case_expected_result():
-    setup()
-    try:
-        # Arrange
-        input_data = create_test_data()
-        expected_output = define_expected_result()
-
-        # Act
-        actual_output = function_under_test(input_data)
-
-        # Assert
-        assert actual_output == expected_output
-    finally:
-        teardown()
-#+end_src
-
-Key characteristics:
-- No mocks for the function being tested
-- Real resources (files, data structures) using test utilities
-- Tests actual function behavior
-- Clean setup/teardown
-- Clear arrange-act-assert structure
-
-** Result
-
-When helper functions are well-factored and deterministic, tests often pass on first run.
-
-* Phase 3: Systematic Test Coverage Analysis
-
-** Identifying Testable Functions
-
-Review all functions in your module and categorize by testability:
-
-*** Easy to Test (Pure/Deterministic)
-- Input validation functions
-- String manipulation/formatting
-- Data structure transformations
-- File parsing (read-only operations)
-- Configuration/option processing
-
-*** Medium Complexity (Need External Resources)
-- File I/O operations
-- Recursive algorithms
-- Data structure generation
-- Cache or state management
-
-*** Hard to Test (Framework/Context Dependencies)
-- Functions requiring specific runtime environment
-- UI/buffer/window management
-- Functions tightly coupled to framework internals
-- Functions requiring complex mocking setup
-
-*Decision:* Test easy and medium complexity functions. Skip framework-dependent functions that would require extensive mocking/setup (diminishing returns).
-
-** File Organization Principle
-
-From =quality-engineer.org=:
-#+begin_quote
-*Unit Tests*: One file per method
-- Naming: =test-<filename>-<methodname>.<ext>=
-- Example: =test-module--function.ext=
-#+end_quote
-
-*Rationale:* When a test fails in CI:
-1. Developer sees: =test-module--function-normal-case-returns-result FAILED=
-2. Immediately knows: Look for =test-module--function.<ext>=
-3. Opens file and fixes issue - *fast cognitive path*
-
-If combined files:
-1. Test fails: =test-module--function-normal-case-returns-result FAILED=
-2. Which file? =test-module--helpers.<ext>=? =test-module--combined.<ext>=?
-3. Developer wastes time searching - *slower, frustrating*
-
-*The 1:1 mapping is a usability feature for developers under pressure.*
-
-* Phase 4: Testing Function by Function
-
-** Example 1: Input Validation Function
-
-*** Test Categories
-
-*Normal Cases:*
-- Valid inputs
-- Case variations
-- Common use cases
-
-*Boundary Cases:*
-- Edge cases in input format
-- Multiple delimiters or separators
-- Empty or minimal input
-- Very long input
-
-*Error Cases:*
-- Nil/null input
-- Wrong type
-- Malformed input
-
-*** First Run: Most Passed, Some FAILED
-
-*Example Failure:*
-#+begin_src
-test-module--validate-input-error-nil-input-returns-nil
-Expected: Returns nil gracefully
-Actual: (TypeError/NullPointerException) - CRASHED
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Read the test expectation: "nil input returns nil/false gracefully"
-2. Read the production code:
- #+begin_src python
- def validate_input(input):
-     extension = get_extension(input)  # ← Crashes here on nil/null
-     return extension in valid_extensions
- #+end_src
-3. Identify issue: Function expects string, crashes on nil/null
-4. Consider context: This is defensive validation code, called in various contexts
-
-*Decision: Fix production code*
-
-*Rationale:*
-- Function should be defensive (validation code)
-- Returning false/nil for invalid input is more robust than crashing
-- Common pattern in validation functions
-- Better user experience
-
-*Fix:*
-#+begin_src python
-def validate_input(input):
-    if input is None or not isinstance(input, str):  # ← Guard added
-        return False
-    extension = get_extension(input)
-    return extension in valid_extensions
-#+end_src
-
-Result: All tests pass after adding defensive checks.
-
-** Example 2: Another Validation Function
-
-*** First Run: Most Passed, Multiple FAILED
-
-*Failures:*
-1. Nil input crashed (same pattern as previous function)
-2. Empty string returned unexpected value (edge case not handled)
-
-*Fix:*
-#+begin_src python
-def validate_resource(resource):
-    # Guards added for nil/null and empty string
-    if not resource or not isinstance(resource, str) or resource.strip() == "":
-        return False
-
-    # Original validation logic
-    return is_valid_resource(resource) and meets_criteria(resource)
-#+end_src
-
-Result: All tests pass after adding comprehensive guards.
-
-** Example 3: String Sanitization Function
-
-*** First Run: Most Passed, 1 FAILED
-
-*Failure:*
-#+begin_src
-test-module--sanitize-boundary-special-chars-replaced
-Expected: "output__________" (10 underscores)
-Actual: "output_________" (9 underscores)
-#+end_src
-
-*** Bug Analysis: Test or Production Code?
-
-*Process:*
-1. Count special chars in test input: 9 characters
-2. Test expected 10 replacements, but input only has 9
-3. Production code is working correctly
-
-*Decision: Fix test code*
-
-*The bug was in the test expectation, not the implementation.*
-
-Result: All tests pass after correcting test expectations.
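-
-This class of mistake is easy to guard against by deriving the expectation from the input rather than counting by hand. A sketch (Python; the sanitizer shown is a hypothetical stand-in for the real function):
-
-#+begin_src python
-import re
-
-# Hypothetical sanitizer: replace each special character with "_"
-def sanitize(name):
-    return re.sub(r"[^A-Za-z0-9]", "_", name)
-
-raw = "output<>:\"/\\|?*"              # "output" plus 9 special characters
-special = len(raw) - len("output")     # derive the count instead of guessing
-assert special == 9
-assert sanitize(raw) == "output" + "_" * special  # 9 underscores, not 10
-#+end_src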
-
-** Example 4: File/Data Parser Function
-
-This is where a **significant bug** was discovered through testing!
-
-*** Test Categories
-
-*Normal Cases:*
-- Absolute paths/references
-- Relative paths (expanded to base directory)
-- URLs/URIs preserved as-is
-- Mixed types of references
-
-*Boundary Cases:*
-- Empty lines ignored
-- Whitespace-only lines ignored
-- Comments ignored (format-specific)
-- Leading/trailing whitespace trimmed
-- Order preserved
-
-*Error Cases:*
-- Nonexistent file
-- Nil/null input
-
-*** First Run: Majority Passed, Multiple FAILED
-
-All failures related to URL/URI handling:
-
-*Failure Pattern:*
-#+begin_src
-Expected: "http://example.com/resource"
-Actual: "/base/path/http:/example.com/resource"
-#+end_src
-
-URLs were being treated as relative paths and corrupted!
-
-*** Root Cause Analysis
-
-*Production code:*
-#+begin_src
-if line.matches("^\(https?|mms\)://"): # Pattern detection
-    # Handle as URL
-#+end_src
-
-*Problem:* Pattern matching is incorrect!
-
-The pattern/regex has an error:
-- Incorrect escaping or syntax
-- Pattern fails to match valid URLs
-- All URLs fall through to the "relative path" handler
-
-The pattern never matched, so URLs were incorrectly processed as relative paths.
-
-*Correct version:*
-#+begin_src
-if line.matches("^(https?|mms)://"): # Fixed pattern
-    # Handle as URL
-#+end_src
-
-Common causes of this type of bug:
-- String escaping issues in the language
-- Incorrect regex syntax
-- Copy-paste errors in patterns
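-
-The failure mode is easy to demonstrate in Python, where =\(= matches a literal parenthesis (note that the correct escaping is dialect-specific: in Emacs Lisp regexps, for instance, =\\(= *is* the group syntax, which is exactly how this class of copy-paste bug arises). The base path and helper calls below are illustrative:
-

```python
import posixpath
import re

BUGGY = r"^\(https?|mms\)://"   # \( is a literal paren here, so no URL ever matches
FIXED = r"^(https?|mms)://"     # ( opens a group: URLs match as intended

url = "http://example.com/resource"
print(bool(re.match(BUGGY, url)))   # → False: falls through to the path handler
print(bool(re.match(FIXED, url)))   # → True

# What the fallthrough did to the URL: joined onto the base directory,
# then "//" collapsed by path normalization.
corrupted = posixpath.normpath(posixpath.join("/base/path", url))
print(corrupted)                    # → "/base/path/http:/example.com/resource"
```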
-
-*** Impact Assessment
-
-*This is a significant bug:*
-- Remote resources (URLs) would be broken
-- Data corruption: URLs transformed into invalid paths
-- Function worked for local/simple cases, so bug went unnoticed
-- Users would see mysterious errors when using remote resources
-- Potential data loss or corruption in production
-
-*Tests caught a real production bug that could have caused user data corruption!*
-
-Result: All tests pass after fixing the pattern matching logic.
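-
-Putting the pieces together, the corrected parser's behavior can be sketched end to end (a hypothetical =parse_references= under the assumptions in the test categories above; =posixpath= is used so the path handling is deterministic):
-

```python
import posixpath
import re

URL_PATTERN = re.compile(r"^(https?|mms)://")   # the corrected pattern

def parse_references(text, base_dir):
    """Hypothetical parser: one reference per line, order preserved."""
    results = []
    for raw in text.splitlines():
        line = raw.strip()                      # trim leading/trailing whitespace
        if not line or line.startswith("#"):    # skip blank lines and comments
            continue
        if URL_PATTERN.match(line):             # URLs/URIs preserved as-is
            results.append(line)
        elif posixpath.isabs(line):             # absolute paths kept as-is
            results.append(line)
        else:                                   # relative paths expand to base_dir
            results.append(posixpath.join(base_dir, line))
    return results

text = """
# a comment
notes.txt
/abs/data.txt
http://example.com/resource
"""
print(parse_references(text, "/base"))
# → ['/base/notes.txt', '/abs/data.txt', 'http://example.com/resource']
```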
-
-* Phase 5: Continuing Through the Test Suite
-
-** Additional Functions Tested Successfully
-
-As testing continues through the module, patterns emerge:
-
-*Function: Directory/File Listing*
- - Learning: Directory listing order may be filesystem-dependent
- - Solution: Sort results before comparing in tests
-
-*Function: Data Extraction*
- - Keep as separate test file (don't combine with related functions)
- - Reason: Usability when tests fail
-
-*Function: Recursive Operations*
- - Medium complexity: Required creating test data structures/trees
- - Use test utilities for setup/teardown
- - Well-factored functions often pass all tests initially
-
-*Function: Higher-Order Functions*
- - Test functions that return functions/callbacks
- - Early tests may encode misunderstandings of framework/protocol behavior
- - Fix test expectations to match actual framework behavior
-
-* Key Principles Applied
-
-** 1. Refactor for Testability BEFORE Writing Tests
-
-The Interactive vs Non-Interactive pattern from =quality-engineer.org= made testing trivial:
-- No mocking required
-- Fast, deterministic tests
-- Clear separation of concerns
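-
-The split looks like this in miniature (a Python sketch with hypothetical names; the session's actual code followed the same shape in its own language):
-

```python
def normalize_title(title):
    # Pure helper: string in, string out -- trivially testable, no mocking.
    return " ".join(title.strip().split()).title()

def normalize_title_interactive():
    # Thin interactive wrapper: all I/O lives here, covered (if at all)
    # by separate integration tests rather than unit tests.
    print(normalize_title(input("Title: ")))

print(normalize_title("  hello   world  "))   # → "Hello World"
```

-Unit tests target only the pure helper; the wrapper stays too thin to hide bugs.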
-
-** 2. Systematic Test Organization
-
-Every test file followed the same structure:
-- Normal Cases
-- Boundary Cases
-- Error Cases
-
-This makes it easy to:
-- Identify coverage gaps
-- Add new tests
-- Understand what's being tested
-
-** 3. Test Naming Convention
-
-Pattern: =test-<module>--<function>-<category>-<scenario>-<expected-result>=
-
-Examples:
-- =test-module--validate-input-normal-valid-extension-returns-true=
-- =test-module--parse-data-boundary-empty-lines-ignored=
-- =test-module--sanitize-error-nil-input-signals-error=
-
-Benefits:
-- Self-documenting
-- Easy to understand what failed
-- Searchable/grepable
-- Clear category organization
-
-** 4. Zero Mocking for Pure Functions
-
-From =quality-engineer.org=:
-#+begin_quote
-DON'T MOCK WHAT YOU'RE TESTING
-- Only mock external side-effects and dependencies, not the domain logic itself
-- If mocking removes the actual work the function performs, you're testing the mock
-- Use real data structures that the function is designed to operate on
-- Rule of thumb: If the function body could be =(error "not implemented")= and tests still pass, you've over-mocked
-#+end_quote
-
-Our tests used:
-- Real file I/O
-- Real strings
-- Real data structures
-- Actual function behavior
-
-Result: Tests caught real bugs, not mock configuration issues.
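-
-The contrast can be shown with a toy file-reading function (hypothetical names; the point is that the test exercises a real temp file rather than a stub):
-

```python
import os
import tempfile

def count_nonblank_lines(path):
    # Function under test: real file I/O, nothing to mock away.
    with open(path) as f:
        return sum(1 for line in f if line.strip())

# Real-data test: write an actual temp file and exercise real behavior.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("one\n\n  \ntwo\n")
    path = f.name
try:
    assert count_nonblank_lines(path) == 2   # real strings, real file, real result
finally:
    os.unlink(path)

# Over-mocked anti-pattern (don't do this): stubbing count_nonblank_lines
# itself would pass even if its body were `raise NotImplementedError`.
```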
-
-** 5. Test vs Production Code Bug Decision Framework
-
-When a test fails, ask:
-
-1. *What is the test expecting?*
- - Read the test name and assertions
- - Understand the intended behavior
-
-2. *What is the production code doing?*
- - Read the implementation
- - Trace through the logic
-
-3. *Which is correct?*
- - Is the test expectation reasonable?
- - Is the production behavior defensive/robust?
- - What is the usage context?
-
-4. *Consider the impact:*
- - Defensive code: Fix production to handle edge cases
- - Wrong expectation: Fix test
- - Unclear spec: Ask user for clarification
-
-Examples from our session:
-- *Nil input crashes* → Fix production (defensive coding)
-- *Empty string treated as valid* → Fix production (defensive coding)
-- *Wrong count in test* → Fix test (test bug)
-- *Regex escaping wrong* → Fix production (real bug!)
-
-** 6. Fast Feedback Loop
-
-Pattern: "Write tests, run them all, report errors, and see where we are!"
-
-This became a mantra during the session:
-1. Write comprehensive tests for one function
-2. Run immediately
-3. Analyze failures
-4. Fix bugs (test or production)
-5. Verify all tests pass
-6. Move to next function
-
-Benefits:
-- Caught bugs immediately
-- Small iteration cycles
-- Clear progress
-- High confidence in changes
-
-* Final Results
-
-** Test Coverage Example
-
-*Multiple functions tested with comprehensive coverage:*
-1. File operation helper - ~10-15 tests
-2. Input validation function - ~15 tests
-3. Resource validation function - ~13 tests
-4. String sanitization function - ~13 tests
-5. File/data parser function - ~15 tests
-6. Directory listing function - ~7 tests
-7. Data extraction function - ~6 tests
-8. Recursive operation function - ~12 tests
-9. Higher-order function - ~12 tests
-
-Total: 100+ tests covering all testable functions in the module
-
-** Bugs Discovered and Fixed
-
-1. *Input Validation Function*
- - Issue: Crashed on nil/null input
- - Fix: Added nil/type guards
- - Impact: Prevents crashes in validation code
-
-2. *Resource Validation Function*
- - Issue: Crashed on nil, treated empty string as valid
- - Fix: Added guards for nil and empty string
- - Impact: More robust validation
-
-3. *File/Data Parser Function* ⚠️ *SIGNIFICANT BUG*
- - Issue: Pattern matching wrong - URLs/URIs corrupted as relative paths
- - Fix: Corrected pattern matching logic
- - Impact: Remote resources now work correctly
- - *This bug would have corrupted user data in production*
-
-** Code Quality Improvements
-
-- All testable helper functions now have comprehensive test coverage
-- More defensive error handling (nil guards)
-- Clear separation of concerns (pure helpers vs interactive wrappers)
-- Systematic boundary condition testing
-- Unicode and special character handling verified
-
-* Lessons Learned
-
-** 1. Tests as Bug Discovery Tools
-
-Tests aren't just for preventing regressions - they actively *discover existing bugs*:
-- Pattern matching bugs may exist in production
-- Nil/null handling bugs manifest in edge cases
-- Tests make these issues visible immediately
-- Bugs caught before users encounter them
-
-** 2. Refactoring Enables Testing
-
-The decision to split functions into pure helpers + interactive wrappers:
-- Made testing dramatically simpler
-- Enabled 100+ tests with zero mocking
-- Improved code reusability
-- Clarified function responsibilities
-
-** 3. Systematic Process Matters
-
-Following the same pattern for each function:
-- Reduced cognitive load
-- Made it easy to maintain consistency
-- Enabled quick iteration
-- Built confidence in coverage
-
-** 4. File Organization Aids Debugging
-
-One test file per function:
-- Fast discovery when tests fail
-- Clear ownership
-- Easy to maintain
-- Follows user's mental model
-
-** 5. Test Quality Equals Production Quality
-
-Quality tests:
-- Use real resources (not mocks)
-- Test actual behavior
-- Cover edge cases systematically
-- Find real bugs
-
-This is only possible with well-factored, testable code.
-
-* Applying These Principles
-
-When adding tests to other modules:
-
-1. *Identify testable functions* - Look for pure helpers, file I/O, string manipulation
-2. *Refactor if needed* - Split interactive functions into pure helpers
-3. *Write systematically* - Normal, Boundary, Error categories
-4. *Run frequently* - Fast feedback loop
-5. *Analyze failures carefully* - Test bug vs production bug
-6. *Fix immediately* - Don't accumulate technical debt
-7. *Maintain organization* - One file per function, clear naming
-
-* Reference
-
-See =ai-prompts/quality-engineer.org= for comprehensive quality engineering guidelines, including:
-- Test organization and structure
-- Test naming conventions
-- Mocking and stubbing best practices
-- Interactive vs non-interactive function patterns
-- Integration testing guidelines
-- Test maintenance strategies
-
-Note: =quality-engineer.org= evolves as we learn more quality best practices. This document captures principles applied during this specific session.
-
-* Conclusion
-
-This workflow process demonstrates how systematic testing combined with refactoring for testability can:
-- Discover real bugs before they reach users
-- Improve code quality and robustness
-- Build confidence in changes
-- Create maintainable test suites
-- Follow industry best practices
-
-A comprehensive test suite with multiple bug fixes represents a significant quality improvement to any module. A critical bug like the pattern-matching issue above can justify the entire testing effort on its own - such bugs cause data corruption and break major features.
-
-*Testing is not just about preventing future bugs - it's about finding bugs that already exist.*