From 3a2445080c880544985f50fb0d916534698cc073 Mon Sep 17 00:00:00 2001 From: Craig Jennings Date: Sun, 22 Feb 2026 23:20:56 -0600 Subject: chore: add docs/ to .gitignore and untrack personal files docs/ contains session history, personal workflows, and private protocols that shouldn't be in a public repository. --- .gitignore | 3 + docs/2026-01-22-mkinitcpio-config-boot-failure.org | 161 ----- ...01-22-ratio-amd-gpu-freeze-fix-instructions.org | 224 ------- docs/2026-01-22-ratio-boot-fix-session.org | 241 ------- ...2026-01-27-ratio-amd-gpu-suspend-workaround.org | 217 ------- docs/PLAN-archangel-btrfs.org | 227 ------- docs/PRINCIPLES.org | 49 -- docs/TESTING-STRATEGY.org | 144 ----- docs/announcements/inbox-gitkeep.txt | 10 - .../the-purpose-of-this-directory.org | 18 - docs/notes.org | 586 ----------------- docs/previous-session-history.org | 157 ----- docs/project-workflows/code-review.org | 275 -------- docs/protocols.org | 495 --------------- docs/research-btrfs-expansion.org | 451 ------------- docs/research-sandreas-zarch.org | 365 ----------- docs/retrospectives/2026-01-22-ratio-boot-fix.org | 45 -- .../eml-view-and-extract-attachments-readme.org | 47 -- docs/scripts/eml-view-and-extract-attachments.py | 398 ------------ docs/scripts/maildir-flag-manager.py | 345 ---------- docs/scripts/tests/conftest.py | 77 --- docs/scripts/tests/fixtures/empty-body.eml | 16 - docs/scripts/tests/fixtures/html-only.eml | 20 - .../tests/fixtures/multiple-received-headers.eml | 12 - .../scripts/tests/fixtures/no-received-headers.eml | 9 - docs/scripts/tests/fixtures/plain-text.eml | 15 - docs/scripts/tests/fixtures/with-attachment.eml | 27 - docs/scripts/tests/test_extract_body.py | 96 --- docs/scripts/tests/test_extract_metadata.py | 65 -- docs/scripts/tests/test_generate_filenames.py | 157 ----- docs/scripts/tests/test_integration_stdout.py | 68 -- docs/scripts/tests/test_parse_received_headers.py | 105 ---- docs/scripts/tests/test_process_eml.py | 129 ---- 
docs/scripts/tests/test_save_attachments.py | 97 --- docs/someday-maybe.org | 0 docs/workflows/add-calendar-event.org | 208 ------ docs/workflows/assemble-email.org | 181 ------ docs/workflows/create-v2mom.org | 699 --------------------- docs/workflows/create-workflow.org | 360 ----------- docs/workflows/delete-calendar-event.org | 217 ------- docs/workflows/edit-calendar-event.org | 255 -------- docs/workflows/email-assembly.org | 183 ------ docs/workflows/email.org | 198 ------ docs/workflows/extract-email.org | 116 ---- docs/workflows/find-email.org | 122 ---- docs/workflows/journal-entry.org | 214 ------- docs/workflows/open-tasks.org | 151 ----- docs/workflows/process-meeting-transcript.org | 301 --------- docs/workflows/read-calendar-events.org | 214 ------- docs/workflows/refactor.org | 621 ------------------ docs/workflows/retrospective.org | 94 --- docs/workflows/send-email.org | 198 ------ docs/workflows/session-start.org | 3 - docs/workflows/set-alarm.org | 165 ----- docs/workflows/startup.org | 103 --- docs/workflows/status-check.org | 178 ------ docs/workflows/summarize-emails.org | 237 ------- docs/workflows/sync-email.org | 108 ---- docs/workflows/v2mom.org | 1 - docs/workflows/whats-next.org | 146 ----- docs/workflows/wrap-it-up.org | 527 ---------------- 61 files changed, 3 insertions(+), 11148 deletions(-) delete mode 100644 docs/2026-01-22-mkinitcpio-config-boot-failure.org delete mode 100644 docs/2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org delete mode 100644 docs/2026-01-22-ratio-boot-fix-session.org delete mode 100644 docs/2026-01-27-ratio-amd-gpu-suspend-workaround.org delete mode 100644 docs/PLAN-archangel-btrfs.org delete mode 100644 docs/PRINCIPLES.org delete mode 100644 docs/TESTING-STRATEGY.org delete mode 100644 docs/announcements/inbox-gitkeep.txt delete mode 100644 docs/announcements/the-purpose-of-this-directory.org delete mode 100644 docs/notes.org delete mode 100644 docs/previous-session-history.org delete mode 100644 
docs/project-workflows/code-review.org delete mode 100644 docs/protocols.org delete mode 100644 docs/research-btrfs-expansion.org delete mode 100644 docs/research-sandreas-zarch.org delete mode 100644 docs/retrospectives/2026-01-22-ratio-boot-fix.org delete mode 100644 docs/scripts/eml-view-and-extract-attachments-readme.org delete mode 100644 docs/scripts/eml-view-and-extract-attachments.py delete mode 100755 docs/scripts/maildir-flag-manager.py delete mode 100644 docs/scripts/tests/conftest.py delete mode 100644 docs/scripts/tests/fixtures/empty-body.eml delete mode 100644 docs/scripts/tests/fixtures/html-only.eml delete mode 100644 docs/scripts/tests/fixtures/multiple-received-headers.eml delete mode 100644 docs/scripts/tests/fixtures/no-received-headers.eml delete mode 100644 docs/scripts/tests/fixtures/plain-text.eml delete mode 100644 docs/scripts/tests/fixtures/with-attachment.eml delete mode 100644 docs/scripts/tests/test_extract_body.py delete mode 100644 docs/scripts/tests/test_extract_metadata.py delete mode 100644 docs/scripts/tests/test_generate_filenames.py delete mode 100644 docs/scripts/tests/test_integration_stdout.py delete mode 100644 docs/scripts/tests/test_parse_received_headers.py delete mode 100644 docs/scripts/tests/test_process_eml.py delete mode 100644 docs/scripts/tests/test_save_attachments.py delete mode 100644 docs/someday-maybe.org delete mode 100644 docs/workflows/add-calendar-event.org delete mode 100644 docs/workflows/assemble-email.org delete mode 100644 docs/workflows/create-v2mom.org delete mode 100644 docs/workflows/create-workflow.org delete mode 100644 docs/workflows/delete-calendar-event.org delete mode 100644 docs/workflows/edit-calendar-event.org delete mode 100644 docs/workflows/email-assembly.org delete mode 100644 docs/workflows/email.org delete mode 100644 docs/workflows/extract-email.org delete mode 100644 docs/workflows/find-email.org delete mode 100644 docs/workflows/journal-entry.org delete mode 100644 
docs/workflows/open-tasks.org delete mode 100644 docs/workflows/process-meeting-transcript.org delete mode 100644 docs/workflows/read-calendar-events.org delete mode 100644 docs/workflows/refactor.org delete mode 100644 docs/workflows/retrospective.org delete mode 100644 docs/workflows/send-email.org delete mode 100644 docs/workflows/session-start.org delete mode 100644 docs/workflows/set-alarm.org delete mode 100644 docs/workflows/startup.org delete mode 100644 docs/workflows/status-check.org delete mode 100644 docs/workflows/summarize-emails.org delete mode 100644 docs/workflows/sync-email.org delete mode 100644 docs/workflows/v2mom.org delete mode 100644 docs/workflows/whats-next.org delete mode 100644 docs/workflows/wrap-it-up.org diff --git a/.gitignore b/.gitignore index 57e156e..df13c81 100644 --- a/.gitignore +++ b/.gitignore @@ -8,3 +8,6 @@ zfs-packages/ vm/ test-logs/ reference-repos/ + +# Personal session/workflow docs (not project documentation) +docs/ diff --git a/docs/2026-01-22-mkinitcpio-config-boot-failure.org b/docs/2026-01-22-mkinitcpio-config-boot-failure.org deleted file mode 100644 index 3785bd7..0000000 --- a/docs/2026-01-22-mkinitcpio-config-boot-failure.org +++ /dev/null @@ -1,161 +0,0 @@ -#+TITLE: install-archzfs leaves broken mkinitcpio configuration -#+DATE: 2026-01-22 - -* Problem Summary - -After installing Arch Linux with ZFS via install-archzfs, the system has incorrect mkinitcpio configuration that can cause boot failures. The configuration issues are latent - the system may boot initially but will fail after any mkinitcpio regeneration (kernel updates, manual rebuilds, etc.). - -* Root Cause - -The install-archzfs script does not properly configure mkinitcpio for a ZFS boot environment. 
Three issues were identified: - -** Issue 1: Wrong HOOKS in mkinitcpio.conf - -The installed system had: -#+begin_example -HOOKS=(base systemd autodetect microcode modconf kms keyboard keymap sd-vconsole block filesystems fsck) -#+end_example - -This is wrong for ZFS because: -- Uses =systemd= init hook, but ZFS hook is busybox-based and incompatible with systemd init -- Missing =zfs= hook entirely -- Has =fsck= hook which is unnecessary/wrong for ZFS - -Correct HOOKS for ZFS: -#+begin_example -HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems) -#+end_example - -Note: =autodetect= is deliberately omitted. During installation from a live ISO, autodetect would detect the live ISO's hardware, not the target machine's hardware. This could result in missing NVMe, AHCI, or other storage drivers on the installed system. - -** Issue 2: Leftover archiso.conf drop-in - -The file =/etc/mkinitcpio.conf.d/archiso.conf= was left over from the live ISO: -#+begin_example -HOOKS=(base udev microcode modconf kms memdisk archiso archiso_loop_mnt archiso_pxe_common archiso_pxe_nbd archiso_pxe_http archiso_pxe_nfs block filesystems keyboard) -COMPRESSION="xz" -COMPRESSION_OPTIONS=(-9e) -#+end_example - -This drop-in OVERRIDES the HOOKS setting in mkinitcpio.conf, so even if mkinitcpio.conf were correct, this file would break it. 
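The override behavior can be demonstrated without touching a real system. A minimal sketch, using a throwaway directory to stand in for =/etc= (the paths and file contents below are illustrative, reproducing the broken state described above, not copied from ratio):

```shell
# Sketch: audit which files define HOOKS. mkinitcpio reads
# /etc/mkinitcpio.conf first, then every *.conf drop-in in
# /etc/mkinitcpio.conf.d/, so a later drop-in wins. A throwaway
# root is used here; on a real system set root=/ (or the mounted
# target from a live ISO).
root="$(mktemp -d)"
mkdir -p "$root/etc/mkinitcpio.conf.d"

# Reproduce the broken state: a corrected main config plus the
# leftover archiso drop-in that silently overrides it.
echo 'HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems)' \
  > "$root/etc/mkinitcpio.conf"
echo 'HOOKS=(base udev microcode modconf kms memdisk archiso block filesystems keyboard)' \
  > "$root/etc/mkinitcpio.conf.d/archiso.conf"

# More than one hit means a drop-in is overriding the main config.
grep -l '^HOOKS=' "$root/etc/mkinitcpio.conf" "$root/etc/mkinitcpio.conf.d"/*.conf
```

Running this prints both paths, which is exactly the signal to look for: the main config alone should be the only HOOKS definition on an installed system.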
- -** Issue 3: Wrong preset file - -The file =/etc/mkinitcpio.d/linux-lts.preset= contained archiso-specific configuration: -#+begin_example -# mkinitcpio preset file for the 'linux-lts' package on archiso - -PRESETS=('archiso') - -ALL_kver='/boot/vmlinuz-linux-lts' -archiso_config='/etc/mkinitcpio.conf.d/archiso.conf' - -archiso_image="/boot/initramfs-linux-lts.img" -#+end_example - -Should be: -#+begin_example -# mkinitcpio preset file for linux-lts - -PRESETS=(default fallback) - -ALL_kver="/boot/vmlinuz-linux-lts" - -default_image="/boot/initramfs-linux-lts.img" - -fallback_image="/boot/initramfs-linux-lts-fallback.img" -fallback_options="-S autodetect" -#+end_example - -* How This Manifests - -1. Fresh install appears to work (initramfs built during install has ZFS support somehow) -2. System boots fine initially -3. Kernel update or manual =mkinitcpio -P= rebuilds initramfs -4. New initramfs lacks ZFS support due to wrong config -5. Next reboot fails with "cannot import pool" or "failed to mount /sysroot" - -* Fix Required in install-archzfs - -The script needs to, after arch-chroot setup: - -1. *Set correct mkinitcpio.conf HOOKS* (no autodetect - see note above): - #+begin_src bash - sed -i 's/^HOOKS=.*/HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems)/' /mnt/etc/mkinitcpio.conf - #+end_src - -2. *Remove archiso drop-in*: - #+begin_src bash - rm -f /mnt/etc/mkinitcpio.conf.d/archiso.conf - #+end_src - -3. *Create proper preset file*: - #+begin_src bash - cat > /mnt/etc/mkinitcpio.d/linux-lts.preset << 'EOF' - # mkinitcpio preset file for linux-lts - - PRESETS=(default fallback) - - ALL_kver="/boot/vmlinuz-linux-lts" - - default_image="/boot/initramfs-linux-lts.img" - - fallback_image="/boot/initramfs-linux-lts-fallback.img" - fallback_options="-S autodetect" - EOF - #+end_src - -4. 
*Rebuild initramfs after fixing config*: - #+begin_src bash - arch-chroot /mnt mkinitcpio -P - #+end_src - -* Recovery Procedure (for affected systems) - -Boot from archzfs live ISO, then: - -#+begin_src bash -# Import and mount ZFS -zpool import -f zroot -zfs mount zroot/ROOT/default -mount /dev/nvme0n1p1 /boot # adjust device as needed - -# Fix mkinitcpio.conf (no autodetect - detects live ISO hardware, not target) -sed -i 's/^HOOKS=.*/HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems)/' /etc/mkinitcpio.conf - -# Remove archiso drop-in -rm -f /etc/mkinitcpio.conf.d/archiso.conf - -# Fix preset (adjust for your kernel: linux, linux-lts, linux-zen, etc.) -cat > /etc/mkinitcpio.d/linux-lts.preset << 'EOF' -PRESETS=(default fallback) -ALL_kver="/boot/vmlinuz-linux-lts" -default_image="/boot/initramfs-linux-lts.img" -fallback_image="/boot/initramfs-linux-lts-fallback.img" -fallback_options="-S autodetect" -EOF - -# Mount system directories for chroot -mount --rbind /dev /dev -mount --rbind /sys /sys -mount --rbind /proc /proc -mount --rbind /run /run - -# Rebuild initramfs -chroot / mkinitcpio -P - -# Reboot -reboot -#+end_src - -* Machine Details (ratio) - -- Two NVMe drives in ZFS mirror (nvme0n1, nvme1n1) -- Pool: zroot -- Root dataset: zroot/ROOT/default -- Kernel: linux-lts 6.12.66-1 -- Boot partition: /dev/nvme0n1p1 (FAT32, mounted at /boot) - -* Related Information - -The immediate trigger for discovering this was a system freeze during mkinitcpio regeneration. That freeze was caused by the AMD GPU VPE power gating bug (separate issue - see archsetup NOTES.org for details). However, the system's inability to boot afterward exposed these latent mkinitcpio configuration problems. 
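Because a bad HOOKS line is what caused the unbootable state in the first place, the sed from the recovery procedure can be rehearsed on a scratch file first. A minimal sketch (the starting HOOKS line mirrors the broken configuration documented above):

```shell
# Sketch: rehearse the HOOKS replacement on a scratch copy before
# applying it to /etc/mkinitcpio.conf on the target system.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
MODULES=()
HOOKS=(base systemd autodetect microcode modconf kms keyboard keymap sd-vconsole block filesystems fsck)
EOF

# Same substitution as the recovery procedure: no autodetect, no
# systemd init hook, zfs placed before filesystems.
sed -i 's/^HOOKS=.*/HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems)/' "$conf"

grep '^HOOKS=' "$conf"
```

If the grep shows the expected udev/zfs hook list (and no =systemd=, =autodetect=, or =fsck=), the same sed is safe to point at the real file.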
diff --git a/docs/2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org b/docs/2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org deleted file mode 100644 index d6b8461..0000000 --- a/docs/2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org +++ /dev/null @@ -1,224 +0,0 @@ -AMD Strix Halo VPE/CWSR Freeze Fix Instructions -=============================================== -Created: 2026-01-22 -Machine: ratio (Framework Desktop, AMD Ryzen AI Max 300) - -PROBLEM SUMMARY ---------------- -Two AMD GPU bugs cause random freezes on Strix Halo: - -1. VPE Power Gating Bug - - VPE (Video Processing Engine) tries to power gate after 1 second idle - - SMU hangs, system freezes - - Fix: amdgpu.pg_mask=0 (disables power gating) - -2. CWSR Bug (Compute Wavefront Save/Restore) - - MES firmware hang under compute loads - - Causes GPU reset loops and crashes - - Fix: amdgpu.cwsr_enable=0 - -Current state on ratio: -- pg_mask = 4294967295 (power gating ENABLED - bad) -- cwsr_enable = 1 (CWSR ENABLED - bad) -- Neither workaround is applied - - -PART 1: GRUB CMDLINE FIX (Quick, can do now) -============================================ -This adds the parameters to the kernel command line via GRUB. -Can be done on the running system, takes effect on next boot. - -Step 1: Edit GRUB defaults --------------------------- -sudo nano /etc/default/grub - -Find the line: -GRUB_CMDLINE_LINUX_DEFAULT="..." - -Add these parameters (keep existing ones): -GRUB_CMDLINE_LINUX_DEFAULT="... 
amdgpu.pg_mask=0 amdgpu.cwsr_enable=0" - -Example - if current line is: -GRUB_CMDLINE_LINUX_DEFAULT="loglevel=2 rd.systemd.show_status=auto rd.udev.log_level=2 nvme.noacpi=1 mem_sleep_default=deep nowatchdog random.trust_cpu=off quiet splash" - -Change to: -GRUB_CMDLINE_LINUX_DEFAULT="loglevel=2 rd.systemd.show_status=auto rd.udev.log_level=2 nvme.noacpi=1 mem_sleep_default=deep nowatchdog random.trust_cpu=off quiet splash amdgpu.pg_mask=0 amdgpu.cwsr_enable=0" - -Step 2: Regenerate GRUB config ------------------------------- -sudo grub-mkconfig -o /boot/grub/grub.cfg - -Step 3: Reboot and verify -------------------------- -sudo reboot - -After reboot, verify: -cat /sys/module/amdgpu/parameters/pg_mask -# Should show: 0 - -cat /sys/module/amdgpu/parameters/cwsr_enable -# Should show: 0 - -cat /proc/cmdline | grep -oE "(pg_mask|cwsr_enable)=[^ ]+" -# Should show: -# pg_mask=0 -# cwsr_enable=0 - - -PART 2: MODPROBE.D FIX (Permanent, requires live ISO) -===================================================== -This embeds the parameters in the initramfs so they're always applied. -MUST be done from live ISO because mkinitcpio triggers the freeze. - -Step 1: Boot archzfs live ISO ------------------------------ -- Boot from USB with archzfs ISO -- Get to root shell - -Step 2: Import and mount ZFS ----------------------------- -zpool import -f zroot -zfs mount zroot/ROOT/default -mount /dev/nvme1n1p1 /mnt/boot # Note: nvme1n1p1, not nvme0n1p1! 
- -Verify: -ls /mnt/boot/vmlinuz* -# Should show kernel images - -Step 3: Create modprobe config ------------------------------- -cat > /mnt/etc/modprobe.d/amdgpu.conf << 'EOF' -# Workarounds for AMD Strix Halo GPU bugs -# Created: 2026-01-22 -# Remove when kernel has proper fixes (check linux-lts >= 6.18 with fixes) - -# Disable power gating to prevent VPE freeze -# VPE tries to power gate after 1s idle, causing SMU hang -options amdgpu pg_mask=0 - -# Disable Compute Wavefront Save/Restore to prevent MES hang -# CWSR causes MES firmware 0x80 hang under compute loads -options amdgpu cwsr_enable=0 -EOF - -Step 4: Chroot and rebuild initramfs ------------------------------------- -# Mount system directories -mount --rbind /dev /mnt/dev -mount --rbind /sys /mnt/sys -mount --rbind /proc /mnt/proc -mount --rbind /run /mnt/run - -# Chroot -arch-chroot /mnt - -# Rebuild initramfs (this is safe from live ISO) -mkinitcpio -P - -# Verify amdgpu.conf is in initramfs -lsinitcpio /boot/initramfs-linux.img | grep amdgpu -# Should show: etc/modprobe.d/amdgpu.conf - -# Exit chroot -exit - -Step 5: Clean up and reboot ---------------------------- -# Unmount everything -umount -R /mnt/dev /mnt/sys /mnt/proc /mnt/run -zfs unmount -a -zpool export zroot - -# Reboot -reboot - -Step 6: Verify after reboot ---------------------------- -cat /sys/module/amdgpu/parameters/pg_mask -# Should show: 0 - -cat /sys/module/amdgpu/parameters/cwsr_enable -# Should show: 0 - -lsinitcpio /boot/initramfs-linux.img | grep amdgpu.conf -# Should show: etc/modprobe.d/amdgpu.conf - - -VERIFICATION CHECKLIST -====================== -After applying fixes, verify: - -[ ] pg_mask shows 0 (not 4294967295) -[ ] cwsr_enable shows 0 (not 1) -[ ] Parameters visible in /proc/cmdline (if using GRUB method) -[ ] amdgpu.conf in initramfs (if using modprobe.d method) -[ ] System stable - no freezes during idle -[ ] mkinitcpio -P completes without freeze (test after fix applied) - - -IMPORTANT NOTES -=============== - -1. 
Boot partition UUID - ratio has mirrored NVMe drives. The boot partition is on nvme1n1p1: - - nvme0n1p1: 6A4B-47A4 (NOT the boot partition) - - nvme1n1p1: 6A4A-93B1 (THIS is /boot) - -2. Kernel is pinned - /etc/pacman.conf has: IgnorePkg = linux - This prevents upgrading from 6.15.2 until manually unpinned. - DO NOT upgrade to 6.18.x - it has worse bugs for Strix Halo. - -3. When to remove workarounds - Monitor Framework Community and AMD-gfx mailing list for proper fixes. - When linux-lts has confirmed VPE and CWSR fixes, can try removing. - Test by commenting out lines in amdgpu.conf, rebuild initramfs, test. - -4. If system freezes during mkinitcpio - This means the fix isn't active yet. Must do from live ISO. - The modconf hook reads /etc/modprobe.d/ at build time, but the - running kernel still has the old parameters until reboot. - - -TROUBLESHOOTING -=============== - -System still freezes after GRUB fix: -- Check /proc/cmdline - are parameters there? -- Check /sys/module/amdgpu/parameters/* - are values correct? -- If cmdline has them but sysfs doesn't, driver may have loaded before - parsing. Need modprobe.d method instead. - -Can't import zpool from live ISO: -- Try: zpool import -f zroot -- If "pool was previously in use": zpool import -f zroot -- Check hostid: cat /etc/hostid on installed system - -mkinitcpio says "Preset not found": -- Check /etc/mkinitcpio.d/*.preset files exist -- For linux kernel: linux.preset -- For linux-lts: linux-lts.preset - -After chroot, wrong mountpoints: -- Reset mountpoints after any chroot work: - zfs set mountpoint=/ zroot/ROOT/default - zfs set mountpoint=/home zroot/home - (etc. 
for all datasets) - - -REFERENCES -========== - -VPE timeout patch (not merged): -https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg127724.html - -Framework Community - critical 6.18 bugs: -https://community.frame.work/t/attn-critical-bugs-in-amdgpu-driver-included-with-kernel-6-18-x-6-19-x/79221 - -CWSR workaround: -https://github.com/ROCm/ROCm/issues/5590 - -Session documentation: -- docs/2026-01-22-ratio-boot-fix-session.org -- docs/2026-01-22-mkinitcpio-config-boot-failure.org -- assets/2026-01-22-mkinitcpio-freeze-during-rebuild.org diff --git a/docs/2026-01-22-ratio-boot-fix-session.org b/docs/2026-01-22-ratio-boot-fix-session.org deleted file mode 100644 index 56563d9..0000000 --- a/docs/2026-01-22-ratio-boot-fix-session.org +++ /dev/null @@ -1,241 +0,0 @@ -#+TITLE: Ratio Boot Fix Session - 2026-01-22 -#+DATE: 2026-01-22 -#+AUTHOR: Craig Jennings + Claude - -* Summary - -Successfully diagnosed and fixed boot failures on ratio (Framework Desktop with AMD Ryzen AI Max 300 / Strix Halo GPU). The primary issue was outdated/missing linux-firmware causing the amdgpu driver to hang during boot. - -* Hardware - -- Machine: Framework Desktop -- CPU/GPU: AMD Ryzen AI Max 300 (Strix Halo APU, codenamed GFX 1151) -- Storage: 2x NVMe in ZFS mirror (zroot) -- Installed via: install-archzfs script from this project - -* Initial Symptoms - -1. System froze at "triggering udev events..." during boot -2. Only visible message before freeze: "RDSEED32 is broken" (red herring - just a warning) -3. Freeze occurred with both linux-lts (6.12.66) and linux (6.15.2) kernels -4. Blacklisting amdgpu allowed boot to proceed but caused kernel panic (no display = init killed) - -* Root Cause - -The linux-firmware package was either missing or outdated. 
Specifically: -- linux-firmware 20251125 is known to break AMD Strix Halo (GFX 1151) -- linux-firmware 20260110 contains fixes for Strix Halo stability - -Source: Donato Capitella video "ROCm+Linux Support on Strix Halo: It's finally stable in 2026!" -- Firmware 20251125 completely broke ROCm/GPU on Strix Halo -- Firmware 20260110+ restores functionality - -* Troubleshooting Timeline - -** Phase 1: Initial Diagnosis - -- SSH'd to ratio via archzfs live ISO -- Found mkinitcpio configuration issues (separate bug) -- Fixed mkinitcpio.conf HOOKS and removed archiso.conf drop-in -- System still froze after fixes - -** Phase 2: Kernel Investigation - -- Researched AMD Strix Halo issues on Framework community forums -- Found reports of VPE (Video Processing Engine) idle timeout bug -- Attempted kernel parameter workarounds: - - amdgpu.pg_mask=0 (disable power gating) - didn't help - - amdgpu.cwsr_enable=0 (disable compute wavefront save/restore) - not tested -- Installed kernel 6.15.2 from Arch Linux Archive (has VPE fix) -- Installed matching zfs-linux package for 6.15.2 -- System still froze - -** Phase 3: ZFS Rollback Complications - -- Rolled back to pre-kernel-switch ZFS snapshot -- Discovered /boot is NOT on ZFS (EFI partition) -- Rollback caused mismatch: root filesystem rolled back, but /boot kept newer kernels -- Kernel 6.15.2 panicked because its modules didn't exist on rolled-back root -- Documented this as a fundamental ZFS-on-root issue (see todo.org) - -** Phase 4: Firmware Discovery - -- Found video transcript explaining Strix Halo firmware requirements -- Discovered linux-firmware package was not installed (orphaned files from rollback) -- Repo had linux-firmware 20260110-1 (the fixed version) -- Installed linux-firmware 20260110-1 - -** Phase 5: Boot Success with Secondary Issues - -After firmware install, encountered additional issues: - -1. 
*Hostid mismatch*: Pool showed "previously in use from another system" - - Fix: Clean export from live ISO (zpool export zroot) - -2. *ZFS mountpoint=legacy*: Root dataset had legacy mountpoint from chroot work - - Fix: zfs set mountpoint=/ zroot/ROOT/default - -3. *ZFS mountpoints with /mnt prefix*: All child datasets had /mnt prefix - - Cause: zpool import -R /mnt persisted mountpoint changes - - Fix: Reset all mountpoints (zfs set mountpoint=/home zroot/home, etc.) - -* Final Working Configuration - -#+BEGIN_SRC -Kernel: linux-lts 6.12.66-1-lts -Firmware: linux-firmware 20260110-1 -ZFS: zfs-linux-lts (DKMS built for 6.12.66) -Boot: GRUB with spl.spl_hostid=0x564478f3 -#+END_SRC - -* Key Learnings - -** 1. Firmware Matters for AMD APUs - -The linux-firmware package is critical for AMD integrated GPUs. Strix Halo specifically requires firmware 20260110 or newer. The kernel version (6.12 vs 6.15) was less important than having correct firmware. - -** 2. ZFS Rollback + Separate /boot = Danger - -When /boot is on a separate EFI partition (not ZFS): -- ZFS rollback doesn't affect /boot -- Kernel images remain at newer version -- Modules on root get rolled back -- Result: Boot failure or kernel panic - -Solutions: -- Use ZFSBootMenu (stores kernel/initramfs on ZFS) -- Put /boot on ZFS (GRUB can read it) -- Always rebuild initramfs after rollback -- Sync /boot backups with ZFS snapshots - -** 3. zpool import -R Persists Mountpoints - -Using =zpool import -R /mnt= for chroot work can permanently change dataset mountpoints. The -R flag sets altroot, but if you then modify datasets, those changes persist with the /mnt prefix. - -Fix after chroot work: -#+BEGIN_SRC bash -zfs set mountpoint=/ zroot/ROOT/default -zfs set mountpoint=/home zroot/home -# ... etc for all datasets -#+END_SRC - -** 4. Hostid Consistency Required - -ZFS pools track which system last accessed them. 
If hostid changes (e.g., between live ISO and installed system), import fails with "pool was previously in use from another system." - -Solutions: -- Clean export before switching systems (zpool export) -- Force import (zpool import -f) -- Ensure consistent hostid via /etc/hostid and spl.spl_hostid kernel parameter - -* Files Modified on Ratio - -- /etc/mkinitcpio.conf - Fixed HOOKS -- /etc/mkinitcpio.conf.d/archiso.conf - Removed (was overriding HOOKS) -- /etc/default/grub - GRUB_TIMEOUT=5 (was 0) -- /boot/grub/grub.cfg - Regenerated, added TEST label to mainline kernel -- /etc/hostid - Regenerated to match GRUB hostid parameter -- ZFS dataset mountpoints - Reset from /mnt/* to /* - -* Packages Installed - -- linux-firmware 20260110-1 (critical fix) -- linux 6.15.2 + zfs-linux (available as TEST kernel, not needed for boot) -- Various system packages updated during troubleshooting - -* Resources Referenced - -** Framework Community Posts -- https://community.frame.work/t/attn-critical-bugs-in-amdgpu-driver-included-with-kernel-6-18-x-6-19-x/79221 -- https://community.frame.work/t/fyi-linux-firmware-amdgpu-20251125-breaks-rocm-on-ai-max-395-8060s/78554 -- https://github.com/FrameworkComputer/SoftwareFirmwareIssueTracker/issues/162 - -** Donato Capitella Video -- Title: "ROCm+Linux Support on Strix Halo: It's finally stable in 2026!" -- Key info: Firmware 20260110+ required, kernel 6.18.4+ for ROCm stability -- Transcript saved: assets/Donato Capitella-ROCm+Linux Support on Strix Halo...txt - -** Other -- Arch Linux Archive (for kernel 6.15.2 package) -- Jeff Geerling blog (VRAM allocation on AMD APUs) - -* TODO Items Created - -Added to todo.org: -- [#A] Fix ZFS rollback breaking boot (/boot not on ZFS) -- Links to existing [#A] Integrate ZFSBootMenu task - -* Test Kernel Available - -The TEST kernel (linux 6.15.2) is installed and available in GRUB Advanced menu. It has matching zfs-linux modules and should work if needed. 
The mainline kernel may be useful for: -- ROCm/AI workloads (combined with ROCm 7.2+ when released) -- Future GPU stability improvements -- Testing newer kernel features - -Current recommendation: Use linux-lts for stability, TEST kernel for experimentation. - -* Post-Fix Configuration (Phase 6) - -After boot was working, made kernel 6.15.2 the default with a clean GRUB menu. - -** Made 6.15.2 Default - -1. Created custom GRUB script /etc/grub.d/09_custom_kernels -2. Added clean menu entries: - - "Linux 6.15.2 (default)" - - "Linux LTS 6.12.66 (fallback)" -3. Set GRUB_DEFAULT="linux-6.15.2" - -** Pinned Kernel - -Added to /etc/pacman.conf: -#+BEGIN_SRC -IgnorePkg = linux -#+END_SRC - -This prevents pacman from upgrading linux package until manually unpinned. - -** GRUB UUID Issue - -Initial custom script used wrong boot partition UUID: -- nvme0n1p1: 6A4B-47A4 (wrong - got this from lsblk on first NVMe) -- nvme1n1p1: 6A4A-93B1 (correct - actually mounted at /boot) - -Fix: Updated /etc/grub.d/09_custom_kernels to use 6A4A-93B1 - -** Final GRUB Menu - -#+BEGIN_SRC -1. Linux 6.15.2 (default) <- Boots automatically -2. Linux LTS 6.12.66 (fallback) -3. Arch Linux (ZFS) Linux <- Auto-generated (ignored) -4. Advanced options... -5. UEFI Firmware Settings -6. ZFS Snapshots -#+END_SRC - -** SSH Keys - -Configured SSH key authentication for cjennings@ratio.local to simplify remote access. -Password auth (sshpass) was unreliable from Claude's session. 
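The kernel pin added to /etc/pacman.conf can be toggled safely. A sketch that rehearses the unpin sed on a scratch copy rather than the live file (the scratch contents are a minimal stand-in, not ratio's full pacman.conf):

```shell
# Sketch: rehearse the kernel unpin on a scratch pacman.conf.
pconf="$(mktemp)"
printf '[options]\nIgnorePkg = linux\n' > "$pconf"

# Comment out the pin; run the same sed against /etc/pacman.conf
# (with sudo) when actually unpinning.
sed -i 's/^IgnorePkg = linux/#IgnorePkg = linux/' "$pconf"

grep 'IgnorePkg' "$pconf"
```

The grep should show the line commented out; an uncommented =IgnorePkg = linux= surviving means the sed pattern did not match (e.g. different whitespace in the real file).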
- -* Final System State - -#+BEGIN_SRC -Hostname: ratio.local -Kernel: 6.15.2-arch1-1 (default) -Fallback: 6.12.66-1-lts -Firmware: linux-firmware 20260110-1 -ZFS: All pools healthy, 11 datasets mounted -Boot: Custom GRUB menu with clean entries -Kernel pinned: Yes (IgnorePkg = linux) -#+END_SRC - -* When to Unpin Kernel - -Unpin linux package when linux-lts version >= 6.15: -#+BEGIN_SRC bash -sudo sed -i 's/^IgnorePkg = linux/#IgnorePkg = linux/' /etc/pacman.conf -#+END_SRC - -Then optionally switch back to LTS as default for stability. diff --git a/docs/2026-01-27-ratio-amd-gpu-suspend-workaround.org b/docs/2026-01-27-ratio-amd-gpu-suspend-workaround.org deleted file mode 100644 index 46e403d..0000000 --- a/docs/2026-01-27-ratio-amd-gpu-suspend-workaround.org +++ /dev/null @@ -1,217 +0,0 @@ -#+TITLE: Ratio AMD GPU Suspend Freeze - Workaround & Fix Tracking -#+DATE: 2026-01-27 - -* Summary - -Ratio (Framework Desktop, AMD Ryzen AI Max / Strix Halo) freezes hard on -resume from suspend due to a VPE power gating race condition in the amdgpu -driver. The freeze requires a hard power cycle, which causes journal -corruption and can leave the btrfs filesystem read-only. - -As of 2026-01-27, the proper kernel fix exists (merged in 6.18) but is -unusable due to separate CWSR bugs in 6.18+. Ratio runs kernel 6.12 LTS, -which does not have the fix and will not receive a backport. - -A systemd suspend mask is applied as a workaround to prevent the system from -ever entering the suspend/resume path. - -* The Bug - -** What Happens - -~8% of suspend/resume cycles on Strix Halo result in a hard system freeze -approximately 1 second after the screen turns on during resume. - -** Root Cause: VPE Power Gating Race Condition - -The freeze is caused by a race condition in the amdgpu driver's VPE (Video -Processing Engine) power management during resume: - -1. System resumes from suspend. -2. 
amdgpu schedules =amdgpu_device_delayed_init_work_handler= (2s delay) to - run self-tests, including =vpe_ring_test_ib= which briefly powers on VPE. -3. The ring buffer test is very short. VPE goes idle. -4. After 1 second of idle, =vpe_idle_work_handler= fires and tells the SMU - (System Management Unit) to power gate (shut down) VPE. -5. *But VPE is still at a high DPM level.* Newer VPE firmware only drops DPM - back to the lowest level (DPM0) after a workload has run for 2+ seconds. - The ring buffer test was too short to trigger that drop. -6. The SMU tries to power gate VPE while it's at a high DPM level. On Strix - Halo, this hangs the SMU. -7. The SMU hang cascades -- VCN, JPEG, and other GPU IPs can't be managed. - Half the GPU is frozen. -8. The thread that issued the SMU command is stuck. System is locked up. - No further logging is possible. - -It only triggers on resume because that's when the driver runs the ring -buffer self-test. During normal operation, VPE either isn't used or has had -enough time to settle its DPM level before power gating. - -** Error Messages (if visible before freeze) - -#+begin_example -SMU: I'm not done with your previous command -Failed to power gate VPE! 
-Dpm disable vpe failed, ret = -62 -Failed to power gate JPEG -Failed to power gate VCN instance 0 -Dpm disable uvd failed -#+end_example - -** References - -- [[https://lkml.org/lkml/2025/8/24/139][Original VPE_IDLE_TIMEOUT patch (LKML, Aug 2025)]] -- [[https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg130657.html][VPE DPM0 fix v5 (amd-gfx, Oct 2025)]] -- [[https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg130804.html][Follow-up: missing return statement fix]] -- [[https://gitlab.freedesktop.org/drm/amd/-/issues/4615][Freedesktop bug #4615]] -- [[https://community.frame.work/t/attn-critical-bugs-in-amdgpu-driver-included-with-kernel-6-18-x-6-19-x/79221][Framework Community: Critical 6.18/6.19 CWSR bugs]] - -* Kernel Fix Status - -** The Proper Fix - -Mario Limonciello (AMD) wrote =drm/amd: Check that VPE has reached DPM0 in -idle handler= -- makes the idle handler check that VPE has actually reached -DPM0 before attempting the power gate. Targets VPE 6.1.1 (Strix Halo) with -firmware versions below =0x0a640500=. - -Merged into Linux 6.18 during the RC phase (drm-fixes-6.18, Oct 29, 2025). -Closes freedesktop bug #4615. - -** Why We Can't Use 6.18 - -Kernel 6.18.x and 6.19.x have critical CWSR (Compute Wavefront Save/Restore) -bugs that cause hard GPU hangs on RDNA3/RDNA4 during compute workloads. The -Framework Community recommends staying on 6.15-6.17 for Strix Halo until -AMD resolves both VPE and CWSR issues in the same kernel. - -** Backport Status - -The fix was tagged =Cc: stable@vger.kernel.org= for backport but has NOT -appeared in any 6.12 LTS release as of 6.12.67. It likely won't be -backported to 6.12 due to infrastructure differences. 
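The "has linux-lts caught up yet" check scripts easily with a =sort -V= comparison. A sketch, with the current version hard-coded for illustration (6.18 is the first mainline release carrying the VPE DPM0 fix; whether any given LTS release carries a backport still has to be confirmed against the changelog):

```shell
# Sketch: compare the running kernel against the first release
# known to carry the VPE DPM0 fix. current is hard-coded for
# illustration; on a live system derive it with:
#   uname -r | cut -d- -f1
current="6.12.67"
wanted="6.18"

# sort -V orders version strings numerically, so if the running
# kernel sorts last it is at or past the floor version.
newest="$(printf '%s\n%s\n' "$current" "$wanted" | sort -V | tail -n1)"
if [ "$newest" = "$current" ]; then
  status="kernel at or past 6.18 (still verify CWSR status)"
else
  status="still waiting"
fi
echo "$status"
```

With the values above this prints "still waiting". Even once the version check passes, the CWSR situation has to be verified separately, per the Framework Community thread.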
- -** When to Check Again - -Monitor these for resolution: -- Arch =linux-lts= package updates (=pacman -Si linux-lts=) -- [[https://cdn.kernel.org/pub/linux/kernel/v6.x/][Kernel.org changelogs]] for 6.12.x stable releases -- [[https://community.frame.work/t/attn-critical-bugs-in-amdgpu-driver-included-with-kernel-6-18-x-6-19-x/79221][Framework Community thread]] for CWSR resolution status -- [[https://gitlab.freedesktop.org/drm/amd/-/issues/4615][Freedesktop #4615]] for any further developments - -* What We Applied (2026-01-27) - -** Workaround: Disable Suspend via systemd - -Prevents the system from entering the suspend/resume path entirely. -The GPU bug is still present but never triggered. - -#+begin_src bash -# Applied 2026-01-27: -sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target -#+end_src - -Effects: -- hypridle can no longer suspend the system -- Screen stays on at idle (active power draw) -- No more freeze → hard reboot → filesystem corruption cycle - -** Kernel Parameters NOT Applied - -The following parameters were identified as fixes but caused boot failures -on ratio when previously attempted (twice): - -#+begin_example -amdgpu.pg_mask=0 # Disables all GPU power gating -amdgpu.cwsr_enable=0 # Disables Compute Wavefront Save/Restore -#+end_example - -It is unclear whether the boot failures were caused by the parameters -themselves or by a corrupted initramfs from running mkinitcpio while the -GPU was in a bad state. Testing via the GRUB =e= key (temporary, no -permanent change) is planned but deferred. 
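When the deferred GRUB =e= test does happen, it would look roughly like this (illustrative procedure — parameters added this way apply to a single boot only and are discarded on the next reboot, so a failed boot leaves the permanent config untouched):

```text
1. At the GRUB menu, highlight the normal boot entry and press `e`.
2. Find the line beginning with `linux` and append the parameters:

   linux /@/boot/vmlinuz-linux-lts root=UUID=... rw ... amdgpu.pg_mask=0 amdgpu.cwsr_enable=0

3. Press Ctrl-x (or F10) to boot with the temporary command line.
4. Verify with: cat /proc/cmdline
5. If the boot fails, power-cycle; nothing persistent was changed.
```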
- -** Current Kernel Command Line (for reference) - -#+begin_example -BOOT_IMAGE=/@/boot/vmlinuz-linux-lts root=UUID=5b9f7f7f-2477-488f-8fb1-52b5c7d90e98 -rw rootflags=subvol=@ console=tty0 console=ttyS0,115200 rw loglevel=2 -rd.systemd.show_status=auto rd.udev.log_level=2 nvme.noacpi=1 -mem_sleep_default=deep nowatchdog random.trust_cpu=off quiet splash -#+end_example - -* How to Undo When a Fixed Kernel Arrives - -** Step 1: Verify the Fix is in the New Kernel - -Check that the VPE DPM0 fix is present: - -#+begin_src bash -# Check kernel version -uname -r - -# Search for the fix in the changelog -# Look for "VPE" or "DPM0" or "vpe_idle" in the relevant changelog: -# https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog- - -# Or check the source directly: -grep -r "vpe_need_dpm0_at_power_down\|vpe_get_dpm_level" /usr/src/linux/drivers/gpu/drm/amd/ 2>/dev/null -#+end_src - -Also verify that CWSR bugs are resolved (check Framework Community thread). - -** Step 2: Unmask Suspend Targets - -#+begin_src bash -sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target -#+end_src - -** Step 3: Test Suspend/Resume - -#+begin_src bash -# Test a single suspend/resume cycle -sudo systemctl suspend - -# If system resumes cleanly, test a few more times -# The original bug had ~8% failure rate, so test at least 20 cycles -#+end_src - -** Step 4: If Kernel Parameters Were Applied - -If =amdgpu.pg_mask=0= and =amdgpu.cwsr_enable=0= were added to GRUB, remove -them once the kernel fix is confirmed working: - -#+begin_src bash -# Edit GRUB config -sudo vim /etc/default/grub -# Remove amdgpu.pg_mask=0 and amdgpu.cwsr_enable=0 from GRUB_CMDLINE_LINUX_DEFAULT - -# Rebuild GRUB config -sudo grub-mkconfig -o /boot/grub/grub.cfg - -# Reboot and test suspend -#+end_src - -* Log Evidence (2026-01-27 Investigation) - -** System Info - -- Machine: Framework Desktop (AMD Ryzen AI Max 300 Series) -- Hostname: ratio -- Kernel: 6.12.67-1-lts -- Filesystem: btrfs 
RAID1 on 2x NVMe (nvme0n1p2 + nvme1n1p2) -- GPU: AMD Strix Halo (RDNA 3.5) - -** Findings - -- 13 boots between Jan 25-27, most ending in suspend then hard freeze -- Journal corruption on boots -5, -3, and -7 (unclean shutdown) -- =mc= (Midnight Commander) stuck in D state (uninterruptible I/O) during - failed freeze attempts, in =io_schedule → folio_wait_bit_common → - filemap_read= path -- Suspend freeze pattern: =PM: suspend entry (deep)= → =PM: suspend exit= → - =PM: suspend entry (s2idle)= → no more logs → hard reboot required -- =mu= database corruption (error 121) from repeated unclean shutdowns -- btrfs device stats: zero errors on both NVMe drives -- No explicit BTRFS read-only event logged (freeze kills logging before it - can be recorded) diff --git a/docs/PLAN-archangel-btrfs.org b/docs/PLAN-archangel-btrfs.org deleted file mode 100644 index 20f1984..0000000 --- a/docs/PLAN-archangel-btrfs.org +++ /dev/null @@ -1,227 +0,0 @@ -#+TITLE: Implementation Plan: Archangel Btrfs Support -#+DATE: 2026-01-23 - -* Overview - -Add btrfs filesystem support to archangel (formerly archzfs) installer. -Users can choose ZFS or Btrfs during installation. - -See [[file:research-btrfs-expansion.org][research-btrfs-expansion.org]] for background research. - -* Key Decisions (Already Made) - -- Project rename: archzfs → archangel -- Snapshot tool: snapper + snap-pac + grub-btrfs -- Bootloader: ZFS uses ZFSBootMenu, Btrfs uses GRUB -- Encryption: ZFS native, Btrfs uses LUKS -- RAID: Btrfs raid1 only (raid5/6 unstable) -- Layout: Btrfs subvols mirror ZFS datasets - -* Phase 1: Refactor Current Installer - -Goal: Modularize install-archzfs before adding btrfs. 
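The refactor target can be sketched as a directory layout (assembled from the checklist items that follow; illustrative, not final):

```text
custom/
├── archangel          # main installer (renamed from install-archzfs)
└── lib/
    ├── common.sh      # color/output helpers, fzf prompt functions
    ├── config.sh      # config file handling
    ├── disk.sh        # partitioning, disk selection, RAID detection
    ├── zfs.sh         # pools, datasets, mounts, ZFSBootMenu install
    └── btrfs.sh       # volumes, subvolumes, mounts, fstab (Phase 2)
```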
- -** 1.1 Create lib/ directory structure -- [ ] Create custom/lib/ directory -- [ ] Move color/output functions → lib/common.sh -- [ ] Move fzf prompt functions → lib/common.sh -- [ ] Move config file handling → lib/config.sh - -** 1.2 Extract ZFS-specific code -- [ ] Create lib/zfs.sh -- [ ] Move pool creation → lib/zfs.sh -- [ ] Move dataset creation → lib/zfs.sh -- [ ] Move ZFS mount logic → lib/zfs.sh -- [ ] Move ZFSBootMenu install → lib/zfs.sh - -** 1.3 Extract shared disk operations -- [ ] Create lib/disk.sh -- [ ] Move partitioning logic (EFI + root) -- [ ] Move disk selection/validation -- [ ] Move RAID detection logic - -** 1.4 Add filesystem selection prompt -- [ ] Add fzf prompt: "Filesystem: ZFS / Btrfs" -- [ ] Store choice in config -- [ ] Gate ZFS vs Btrfs code paths - -** 1.5 Rename project -- [ ] Rename install-archzfs → archangel -- [ ] Update build.sh references -- [ ] Update README.org -- [ ] Update all internal references - -* Phase 2: Implement Btrfs Support - -Goal: Full btrfs installation path. - -** 2.1 Create lib/btrfs.sh -- [ ] Create btrfs volume function -- [ ] Create subvolume creation function -- [ ] Create mount function (with correct options) -- [ ] Create fstab generation (NO subvolid!) 
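The per-subvolume option overrides (compression off for =@media=, CoW off for =@vms=, the shared defaults everywhere else) can be sketched as a small helper. Function name is hypothetical, not the installer's actual code; the base string follows the plan's =BTRFS_OPTS=:

```shell
#!/usr/bin/env bash
# Sketch: compute mount options per subvolume. @media omits compress=
# (compression off for pre-compressed media); @vms adds nodatacow for
# VM disk images. Illustrative only.
btrfs_mount_opts() {
  local subvol="$1"
  local base="noatime,space_cache=v2,discard=async"
  case "$subvol" in
    @media) echo "${base}" ;;            # no compress= -> compression off
    @vms)   echo "${base},nodatacow" ;;  # nodatacow also disables compression
    *)      echo "${base},compress=zstd" ;;
  esac
}

# Hypothetical usage (device path illustrative):
#   mount -o "subvol=@home,$(btrfs_mount_opts @home)" /dev/nvme0n1p2 /mnt/home
btrfs_mount_opts @home   # -> noatime,space_cache=v2,discard=async,compress=zstd
```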
- -** 2.2 Subvolume layout -Create these subvolumes (matching ZFS datasets): -- [ ] @ → / -- [ ] @home → /home -- [ ] @snapshots → /.snapshots (snapper requirement) -- [ ] @var_log → /var/log -- [ ] @var_cache → /var/cache -- [ ] @tmp → /tmp -- [ ] @var_tmp → /var/tmp -- [ ] @media → /media (compress=off) -- [ ] @vms → /vms (nodatacow) -- [ ] @var_lib_docker → /var/lib/docker - -** 2.3 Mount options -#+BEGIN_SRC -BTRFS_OPTS="noatime,compress=zstd,space_cache=v2,discard=async" -#+END_SRC -- [ ] Apply to all subvols except @media (compress=off) and @vms (nodatacow) - -** 2.4 Snapper configuration -- [ ] Install snapper, snap-pac packages -- [ ] Create /etc/snapper/configs/root -- [ ] Set timeline policy (hourly=6, daily=7, weekly=2, monthly=1) -- [ ] Enable snapper-timeline.timer -- [ ] Enable snapper-cleanup.timer - -** 2.5 GRUB + grub-btrfs installation -- [ ] Install grub, grub-btrfs packages -- [ ] Configure GRUB for btrfs root -- [ ] Enable grub-btrfsd service (auto-update on snapshots) -- [ ] Test snapshot appears in GRUB menu - -** 2.6 Genesis snapshot -- [ ] Create initial snapshot: snapper create -d "genesis" -- [ ] Verify appears in snapper list -- [ ] Verify appears in GRUB menu - -** 2.7 Test basic btrfs (before encryption) -- [ ] VM test: single-disk btrfs install -- [ ] Verify subvolumes created correctly -- [ ] Verify GRUB boots and shows snapshots -- [ ] Verify snapper works -- [ ] Verify genesis snapshot exists - -** 2.8 LUKS encryption (after basic btrfs works) -- [ ] Add encryption prompt (yes/no) -- [ ] Create LUKS container on root partition -- [ ] Configure crypttab -- [ ] Add encrypt hook to mkinitcpio -- [ ] Test passphrase prompt at boot - -* Phase 3: Multi-disk Btrfs - -Goal: Full multi-disk support for btrfs (matching ZFS capabilities). 
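The disk-count gating for which RAID levels to offer can be sketched as follows (hypothetical function; mirrors the constraints in the checklist — raid5 needs 3+ disks, raid10/raid6 need 4+, and raid5/6 carry an unstable warning):

```shell
#!/usr/bin/env bash
# Sketch: list btrfs profiles worth offering for a given disk count.
# A real installer would feed this into the fzf prompt, then run e.g.:
#   mkfs.btrfs -d raid1 -m raid1 /dev/diskA /dev/diskB   (devices illustrative)
offer_raid_levels() {
  local disks="$1" out="single"
  [ "$disks" -ge 2 ] && out="$out raid0 raid1"
  [ "$disks" -ge 3 ] && out="$out raid5(unstable)"
  [ "$disks" -ge 4 ] && out="$out raid10 raid6(unstable)"
  echo "$out"
}

offer_raid_levels 2   # -> single raid0 raid1
```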
- -** 3.1 RAID level support -- [ ] Stripe (raid0): mkfs.btrfs -d raid0 -m raid0 -- [ ] Mirror (raid1): mkfs.btrfs -d raid1 -m raid1 -- [ ] raid10: mkfs.btrfs -d raid10 -m raid10 (4+ disks) -- [ ] raid5: mkfs.btrfs -d raid5 -m raid5 (3+ disks, warn: unstable) -- [ ] raid6: mkfs.btrfs -d raid6 -m raid6 (4+ disks, warn: unstable) -- [ ] Detect multi-disk selection and offer appropriate levels -- [ ] Handle mixed disk sizes gracefully - -** 3.2 Encryption + multi-disk -- [ ] LUKS on each disk before btrfs -- [ ] crypttab entries for all disks -- [ ] Test unlock sequence at boot -- [ ] Single passphrase unlocks all (keyfile approach) - -** 3.3 EFI redundancy -- [ ] Create EFI partition on all disks -- [ ] Install GRUB to all EFI partitions -- [ ] Create boot entries for each disk - -** 3.4 Degraded boot support -- [ ] Add degraded mount option for emergency -- [ ] Kernel param: rootflags=degraded -- [ ] Document recovery procedure - -* Phase 4: Testing Infrastructure - -Goal: Automated tests for all configurations. - -** 4.1 Test configs -- [ ] Create test-configs/zfs-single.conf -- [ ] Create test-configs/zfs-mirror.conf -- [ ] Create test-configs/btrfs-single.conf -- [ ] Create test-configs/btrfs-mirror.conf -- [ ] Create test-configs/btrfs-encrypted.conf - -** 4.2 Test scripts -- [ ] Create test-btrfs-single.sh -- [ ] Create test-btrfs-mirror.sh -- [ ] Update test-vm.sh for btrfs option - -** 4.3 Validation checks (per research doc) -- [ ] Fresh install checks -- [ ] Reboot survival checks -- [ ] Snapshot operation checks -- [ ] Rollback + reboot checks -- [ ] Failure recovery checks (multi-disk) -- [ ] Encryption checks - -* Phase 5: CLI Tools - -Goal: Unified snapshot management wrappers. 
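The filesystem dispatch shared by all three wrappers can be sketched as a mapping from filesystem type to backend tool (hypothetical helper; a real implementation would detect the type with something like =findmnt -no FSTYPE /= — here it is a parameter so the mapping is easy to test):

```shell
#!/usr/bin/env bash
# Sketch: map root filesystem type to the snapshot backend each
# archangel-* wrapper should invoke. Names are illustrative.
snapshot_tool_for() {
  case "$1" in
    zfs)   echo "zfs snapshot" ;;
    btrfs) echo "snapper create" ;;
    *)     echo "unsupported filesystem: $1" >&2; return 1 ;;
  esac
}

snapshot_tool_for btrfs   # -> snapper create
```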
- -** 5.1 archangel-snapshot -- [ ] Detect filesystem (ZFS vs Btrfs) -- [ ] ZFS: call zfs snapshot -- [ ] Btrfs: call snapper create -- [ ] Consistent interface for both - -** 5.2 archangel-rollback -- [ ] Detect filesystem -- [ ] ZFS: call zfsrollback script -- [ ] Btrfs: call snapper rollback -- [ ] Include reboot prompt (required for full rollback) - -** 5.3 archangel-list -- [ ] List snapshots for either filesystem -- [ ] Consistent output format - -* Phase 6: Documentation - -Goal: Update all docs for dual-filesystem support. - -** 6.1 README.org -- [ ] Update project name to archangel -- [ ] Document filesystem choice -- [ ] Update feature list -- [ ] Update usage examples - -** 6.2 RESCUE-GUIDE.txt -- [ ] Add btrfs recovery procedures -- [ ] Add snapper commands -- [ ] Add GRUB recovery for btrfs - -** 6.3 New docs -- [ ] Create BTRFS.org with btrfs-specific details -- [ ] Update notes.org project context - -* Schedule (Suggested Order) - -1. Phase 1 (Refactor) - do first, enables everything else -2. Phase 2 (Btrfs single-disk) - core functionality -3. Phase 4.1-4.2 (Test infra) - validate as we go -4. Phase 3 (Multi-disk) - after single-disk works -5. Phase 5 (CLI tools) - polish -6. Phase 6 (Docs) - ongoing, finalize at end - -* Dependencies - -- Phase 2 depends on Phase 1 (refactored code) -- Phase 3 depends on Phase 2 (btrfs basics) -- Phase 4 can run in parallel with 2-3 -- Phase 5 depends on Phase 2 -- Phase 6 is ongoing - -* Open Items - -- [ ] File ZFSBootMenu rollback bug -- [ ] Decide: offer archsetup --chroot during install? (TODO exists) diff --git a/docs/PRINCIPLES.org b/docs/PRINCIPLES.org deleted file mode 100644 index fc34b80..0000000 --- a/docs/PRINCIPLES.org +++ /dev/null @@ -1,49 +0,0 @@ -#+TITLE: Working Principles -#+DESCRIPTION: Behavioral lessons learned from retrospectives. Read at session start. 
- -* How We Work Together - -** Sync Before Action -- Confirm before destructive or irreversible actions -- State what I'm about to do and wait for go-ahead -- "Wait, wait, wait" is valid and important feedback -- Don't assume the next step - ask or confirm - -** Clean Up After Yourself -- After chroot work: reset mountpoints, verify settings before export -- After testing: remove debug flags, temp files, test labels -- Before reboot: verify the system is in expected state - -** Verify Assumptions -- When something "should work" but doesn't, question the assumption -- Test one variable at a time to isolate causes -- Don't stack fixes - apply one, test, then apply next - -** Research Before Guessing -- Check community forums, release notes, known issues -- External sources (videos, blog posts) often have answers -- The obvious fix isn't always the right fix - -** Patience Over Speed -- Taking time to sync improves effectiveness -- Rushing creates mistakes that cost more time -- Working together > working fast - -* Checklists - -** After Chroot Work -- [ ] Reset ZFS mountpoints (remove /mnt prefix) -- [ ] Verify /etc files weren't overwritten with .pacnew -- [ ] Check hostid consistency -- [ ] Clean export of ZFS pool - -** Before Reboot -- [ ] Confirm which kernel/entry will boot -- [ ] Verify GRUB timeout allows menu selection -- [ ] Know the fallback plan if boot fails - -* Revision History - -| Date | Change | -|------------+---------------------------------------------| -| 2026-01-22 | Initial version from ratio boot fix session | diff --git a/docs/TESTING-STRATEGY.org b/docs/TESTING-STRATEGY.org deleted file mode 100644 index db63fa8..0000000 --- a/docs/TESTING-STRATEGY.org +++ /dev/null @@ -1,144 +0,0 @@ -#+TITLE: Testing Strategy -#+AUTHOR: Craig Jennings -#+DATE: 2026-01-25 - -* Overview - -This document describes the testing strategy for the archzfs installer project, -including automated VM testing and the rationale for key technical decisions. 
- -* Test Infrastructure - -** Test Scripts - -- =scripts/test-install.sh= - Main test runner -- =scripts/test-configs/= - Configuration files for different test scenarios - -** Test Flow - -1. Build ISO with =./build.sh= -2. Boot QEMU VM from ISO -3. Run unattended installation via config file -4. Verify installation (packages, services, filesystem) -5. Reboot from installed disk (no ISO) -6. Verify system survives reboot -7. Test rollback functionality (btrfs only) - -* LUKS Encryption Testing - -** The Challenge - -LUKS-encrypted systems require TWO passphrase prompts at boot: - -1. *GRUB prompt* - GRUB must decrypt /boot to read kernel/initramfs -2. *Initramfs prompt* - encrypt hook must decrypt root to mount filesystem - -This blocks automated testing because: -- SSH is unavailable until after both decryptions complete -- Both prompts require interactive passphrase entry - -** Options Evaluated - -*** Option A: Put /boot on EFI partition for testing - -Move /boot to the unencrypted EFI partition when TESTING=yes, so GRUB -doesn't need to decrypt anything. - -*Rejected* - Tests different code path than production. Bugs in GRUB -cryptodisk setup would not be caught. "Testing something different than -what ships defeats the purpose." - -*** Option B: Accept limitation, enhance installation verification - -Skip reboot tests for LUKS. Instead, verify configs before cleanup: -- Check crypttab, grub.cfg, mkinitcpio.conf are correct -- If configs are right, boot should work - -*Rejected* - We already found bugs (empty grub.cfg from FAT32 sync) that -only manifested at boot time. Config inspection wouldn't catch everything. - -*** Option C: Hybrid approach (Chosen) - -Use TWO mechanisms to handle the two prompts: - -1. *GRUB prompt* - QEMU monitor sendkey (timing is predictable) -2. *Initramfs prompt* - Keyfile in initramfs (deterministic) - -The GRUB countdown provides clear timing signal: -#+begin_example -The highlighted entry will be executed automatically in 0s. 
-Booting 'Arch Linux' -Enter passphrase for hd0,gpt2: -#+end_example - -We know exactly when the GRUB prompt appears. After sendkey handles GRUB, -the keyfile handles initramfs automatically. - -** Why Option C - -- Tests actual production code path (critical requirement) -- GRUB timing is predictable (countdown visible in serial) -- Keyfile handles the harder timing problem (initramfs) -- Only one sendkey interaction needed (GRUB prompt) - -** Implementation - -*** GRUB Passphrase (sendkey) - -1. Change serial from file-based to real-time (socket or pty) -2. Monitor for "Enter passphrase for" text after GRUB countdown -3. Send passphrase via QEMU monitor: =sendkey= commands -4. Send Enter key to submit - -*** Initramfs Passphrase (keyfile) - -When =TESTING=yes= is set in config: - -1. Generate random 2KB keyfile at =/etc/cryptroot.key= -2. Add keyfile to LUKS slot 1 (passphrase remains in slot 0) -3. Set keyfile permissions to 000 -4. Add keyfile to mkinitcpio FILES= array -5. Configure crypttab to use keyfile instead of "none" -6. 
Initramfs unlocks automatically (no prompt) - -** Security Mitigations - -- Test-only flag: Only activates when TESTING=yes -- Separate key slot: Keyfile in slot 1, passphrase in slot 0 -- Random per-build: Fresh keyfile generated each installation -- Never shipped: Keyfile only in test VMs, not in ISO -- Restricted permissions: chmod 000 on keyfile - -** Files Modified - -- =custom/lib/btrfs.sh= - setup_luks_testing_keyfile(), configure_crypttab(), configure_luks_initramfs() -- =custom/archangel= - Calls keyfile setup in LUKS flow -- =scripts/test-install.sh= - sendkey for GRUB, real-time serial monitoring -- =scripts/test-configs/btrfs-luks.conf= - TESTING=yes -- =scripts/test-configs/btrfs-mirror-luks.conf= - TESTING=yes - -* Test Configurations - -** Btrfs Tests - -| Config | Disks | LUKS | Status | -|--------+-------+------+--------| -| btrfs-single | 1 | No | Pass | -| btrfs-luks | 1 | Yes | Pass (with TESTING=yes) | -| btrfs-mirror | 2 | No | Pass | -| btrfs-stripe | 2 | No | Pass | -| btrfs-mirror-luks | 2 | Yes | Pass (with TESTING=yes) | - -** ZFS Tests - -| Config | Disks | Encryption | Status | -|--------+-------+------------+--------| -| single-disk | 1 | No | Pass | -| mirror | 2 | No | Pass | -| raidz1 | 3 | No | Pass | - -* References - -- Arch Wiki: dm-crypt/System configuration -- HashiCorp Discuss: LUKS Encryption Key on Initial Reboot -- GitHub: tylert/packer-build Issue #31 (LUKS unattended builds) diff --git a/docs/announcements/inbox-gitkeep.txt b/docs/announcements/inbox-gitkeep.txt deleted file mode 100644 index f8946c2..0000000 --- a/docs/announcements/inbox-gitkeep.txt +++ /dev/null @@ -1,10 +0,0 @@ -The inbox/ directory was disappearing between sessions because git doesn't track -empty directories. A .gitkeep file has been added to fix this. 
- -Action: If your project has an inbox/ directory, ensure it contains a .gitkeep file: - - touch inbox/.gitkeep - -If your project doesn't have an inbox/ directory yet, create one with .gitkeep: - - mkdir -p inbox && touch inbox/.gitkeep diff --git a/docs/announcements/the-purpose-of-this-directory.org b/docs/announcements/the-purpose-of-this-directory.org deleted file mode 100644 index ae3f756..0000000 --- a/docs/announcements/the-purpose-of-this-directory.org +++ /dev/null @@ -1,18 +0,0 @@ -The purpose of this directory is to contain one-off instructions for every project using these templates. - -Example: -- restructuring of the claude-specific directories. -- announcements of new workflows and how to use them. -- one-off topics to be discussed with Craig when the templates are sync'd. - -How The Directory Is Used -- Craig will decide that all projects will need to perform an action (reorganizing files, adding info to NOTES or protocols, removing unused workflows) -- Craig will add the instructions as a file in this directory -- When the project syncs templates, they will inherit new files in the directory -- They will discuss the instructions with Craig to ensure he knows they have understood the instructions and have the opportunity to ask any clarifying questions before executing the instructions. -- They will then execute the instructions. -- They will inform Craig of the results of executing the instructions (success, failure, and any observations). -- Typically, they will delete the instructions when complete unless the instructions explicitly state to do otherwise. - - -NOTE: This file should always remain in the directory for future reference. This file may be overwritten by a template sync, but should never otherwise be deleted by claude. 
diff --git a/docs/notes.org b/docs/notes.org deleted file mode 100644 index 1f562cd..0000000 --- a/docs/notes.org +++ /dev/null @@ -1,586 +0,0 @@ -#+TITLE: Claude Code Notes - archangel -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-17 - -* About This File - -This file contains project-specific information for this project. - -**When to read this:** -- At the start of EVERY session (after reading protocols.org) -- When needing project context or history -- When checking reminders or pending decisions - -**What's in this file:** -- Project-specific context and goals -- Available workflows for this project -- Active reminders -- Pending decisions -- Session history - -**For protocols and conventions, see:** [[file:protocols.org][protocols.org]] - -* Project-Specific Context - -** Overview - -Build system for creating a custom Arch Linux installation ISO with ZFS support. The goal is to have a bootable ISO that can install Arch Linux on ZFS root without needing to manually compile ZFS or deal with kernel version mismatches. - -** Repository - -- Remote: =cjennings@cjennings.net:git/archangel.git= -- Branch: =main= -- docs/ is committed (not private) - -** Key Components - -- =build.sh= - Main build script (runs as root) - - Downloads ZFS packages from archzfs.com repository - - Creates custom archiso profile based on releng - - Adds custom packages (nodejs, npm, jq, zsh, htop, ripgrep, etc.) - - Copies custom installer scripts into ISO - - Builds ISO with mkarchiso - -- =custom/= - Custom scripts included in ISO - - =archangel= - Main installer script - - =install-claude= - Claude Code installer - - =archsetup-zfs= - ZFS-specific Arch setup - - =zfs-setup= - Installs ZFS packages and loads module (generated by build.sh) - -- =scripts/test-vm.sh= - QEMU VM for testing the ISO - -** Current State - -TESTING: archangel installer supports both ZFS and Btrfs. 
- -- ISO builds successfully with linux-lts + zfs-dkms -- ZFS installations use ZFSBootMenu -- Btrfs installations use GRUB + grub-btrfs for snapshot boot -- Both filesystems support multi-disk RAID configurations - -** Goals - -Create a bootable Arch Linux installation ISO that: -1. Installs Arch on ZFS root with native encryption -2. Uses sane defaults for dataset layout -3. Configures automatic snapshots (sanoid) -4. Sets up replication to TrueNAS for backups -5. Includes Claude Code on live ISO for emergency troubleshooting - -** Design Decisions - -*** Kernel Strategy -- Use =linux-lts= + =zfs-dkms= from archzfs.com repo -- DKMS builds ZFS from source, guaranteeing kernel compatibility -- Slower build time but eliminates version mismatch issues entirely -- LTS kernel provides stability, DKMS provides flexibility - -*** ZFS Pool Configuration -| Setting | Value | Rationale | -|---------+-------+-----------| -| Pool name | =zroot= | Standard convention | -| Encryption | AES-256-GCM, passphrase | Required at every boot | -| Compression | =zstd= (default) | Good balance of speed/ratio | -| Ashift | 12 (4K sectors) | Modern drives | -| Root reservation | 50GB | Prevents pool from filling | - -*** Dataset Layout -| Dataset | Mountpoint | Special Settings | Purpose | -|---------+------------+------------------+---------| -| zroot/ROOT/default | / | reservation=50G | Root filesystem | -| zroot/home | /home | | Home directories (archsetup creates user subdataset) | -| zroot/media | /media | compression=off | Pre-compressed media files | -| zroot/vms | /vms | recordsize=64K | VM disk images (qemu/libvirt + virtualbox) | -| zroot/var/log | /var/log | | System logs | -| zroot/var/cache | /var/cache | | Package cache | -| zroot/var/lib/pacman | /var/lib/pacman | | Package database | -| zroot/var/lib/docker | /var/lib/docker | | Docker storage | -| zroot/tmp | /tmp | auto-snapshot=false | Temp files | -| zroot/var/tmp | /var/tmp | auto-snapshot=false | Temp files | - -*** 
Snapshot Policy (Sanoid) -Less aggressive since TrueNAS handles long-term backups: - -| Template | Hourly | Daily | Weekly | Monthly | Used For | -|----------+--------+-------+--------+---------+----------| -| production | 6 | 7 | 2 | 1 | root, home, var/log, pacman | -| backup | 0 | 3 | 2 | 1 | media, vms | -| none | 0 | 0 | 0 | 0 | tmp, cache | - -Plus: Pacman hook creates snapshot before every transaction. - -*** TrueNAS Replication -- Primary: =truenas.local= (local network) -- Fallback: =truenas= (tailscale) -- Destination pool: =vault/[TBD]= -- Schedule: Nightly at 2:00 AM -- Datasets: ROOT/default, home, media, vms - -*** Included Packages -- Base system + development tools -- =nodejs=, =npm=, =jq= (for Claude Code) -- =zsh=, =htop=, =ripgrep=, =eza=, =fd=, =fzf= -- =sanoid= (snapshot management) -- =dialog= (installer UI) - -*** Installation UX -- All questions asked upfront, then unattended installation -- WiFi tested before installation begins (if provided) -- User can walk away during install and come back -- Summary + final confirmation before starting - -*** User Account Strategy -- install-archzfs creates root account only (asks for root password) -- No user account created during install -- Just create =zroot/home= dataset (no user-specific subdataset) -- archsetup creates user account + home dataset post-reboot - -*** GRUB HiDPI Support -- Generate 32px DejaVuSansMono font during install -- Set =GRUB_FONT= to use custom font -- Works well on HiDPI and regular displays - -*** WiFi Configuration -- Ask for SSID + password during install (optional) -- Test connection before installation starts -- Copy connection profile to installed system -- Auto-connects after reboot - -*** Post-Install Workflow -1. install-archzfs: Minimal ZFS system + root account -2. Reboot, login as root -3. 
Run archsetup manually for full workstation setup - -*** Testing/Debugging (VM) -- SSH access on live ISO: sshd enabled, known root password -- Serial console: =-serial mon:stdio= in QEMU for terminal copy/paste -- Port forwarding: 2222→22 (already configured) -- Allows easy copy/paste of error messages during testing - -** Open Questions - -- [ ] TrueNAS destination dataset path (vault/???) - -* AVAILABLE WORKFLOWS - -This section lists all documented workflows for this project. Update this section whenever a new workflow is created. - -** create-workflow -File: [[file:workflows/create-workflow.org][docs/workflows/create-workflow.org]] - -Meta-workflow for creating new workflows. Use this when identifying repetitive workflows that would benefit from documentation. - -Workflow: -1. Q&A discovery (4 core questions) -2. Assess completeness -3. Name the workflow -4. Document it -5. Update notes.org -6. Validate by execution - -Created: [Date when workflow was created] - -** create-v2mom -File: [[file:workflows/create-v2mom.org][docs/workflows/create-v2mom.org]] - -Workflow for creating a V2MOM (Vision, Values, Methods, Obstacles, Metrics) strategic framework for any project or goal. - -Workflow: -1. Understand V2MOM framework -2. Create document structure -3. Define Vision (aspirational picture of success) -4. Define Values (2-4 principles with concrete definitions) -5. Define Methods (4-7 approaches ordered by priority) -6. Identify Obstacles (honest personal/technical challenges) -7. Define Metrics (measurable outcomes) -8. Review and refine -9. Commit and use immediately - -Time: ~2-3 hours total -Applicable to: Any project (health, finance, software, personal infrastructure, etc.) - -Created: 2025-11-05 - -** startup -File: [[file:workflows/startup.org][docs/workflows/startup.org]] - -Workflow for beginning a Claude Code session with proper context and priorities. - -Triggered by: **Automatically at the start of EVERY session** - -Workflow: -1. 
Add session start timestamp (check for interrupted sessions) -2. Sync with templates (exclude notes.org and previous-session-history.org) -3. Scan workflows directory for available workflows -4. Read key notes.org sections (NOT entire file) -5. Process inbox (mandatory) -6. Ask about priorities (urgent work vs what's-next workflow) - -Ensures: Full context, current templates, processed inbox, clear session direction - -Created: 2025-11-14 - -** wrap-it-up -File: [[file:workflows/wrap-it-up.org][docs/workflows/wrap-it-up.org]] - -Workflow for ending a Claude Code session cleanly with proper documentation and version control. - -Triggered by: "wrap it up," "that's a wrap," "let's call it a wrap," or similar phrases - -Workflow: -1. Write session notes to notes.org Session History section -2. Archive sessions older than 5 sessions to previous-session-history.org -3. Git commit and push all changes (NO Claude attribution) -4. Provide brief valediction with accomplishments and next steps - -Ensures: Clean handoff between sessions, nothing lost, clear git history, proper documentation - -Created: 2025-11-14 - -** [Add more workflows as they are created] - -Format for new entries: -#+begin_example -** workflow-name -File: [[file:workflows/workflow-name.org][docs/workflows/workflow-name.org]] - -Brief description of what this workflow does. - -Workflow: -1. Step 1 -2. Step 2 -3. Step 3 - -Created: YYYY-MM-DD -#+end_example - -* PENDING DECISIONS - -This section tracks decisions that need Craig's input before work can proceed. - -**Instructions:** -- Add pending decisions as they arise during sessions -- Format: =** [Topic/Feature Name]= -- Include: What needs to be decided, options available, why it matters -- Remove decisions once resolved (document resolution in Session History) - -**Example format:** -#+begin_example -** Feature Name or Topic - -Craig needs to decide on [specific question]. - -Options: -1. Option A - [brief description, pros/cons] -2. 
Option B - [brief description, pros/cons] - -Why this matters: [impact on project] - -Implementation is ready - just need Craig's preference. -#+end_example - -** Current Pending Decisions - -(None currently - will be added as they arise) - -* Active Reminders - -** Current Reminders - -- [2026-02-12] Verify TrueNAS ISO hash matches local: =d17351445e4110ed2cf7190c25dc5fa91ec7325bb34644bbca1515fcd876d251=. TrueNAS was unreachable at end of session. Push hash file to truenas isos directory after verifying. - -** Instructions for This Section - -When Craig says "remind me" about something: -1. Add it here with timestamp and description -2. If it's a TODO, also add to =/home/cjennings/sync/org/roam/inbox.org= scheduled for today -3. Check this section at start of every session -4. Remove reminders once addressed - -Format: -- =[YYYY-MM-DD]= Description of what to remind Craig about - -* Session History - -This section contains notes from each session with Craig. Sessions are logged in reverse chronological order (most recent first). 
- -**Note:** Sessions older than 5 sessions are archived in [[file:previous-session-history.org][Previous Session History]] - -** Format for Session History Entries - -Each entry should use this format: - -- **Timestamp:** =*** YYYY-MM-DD Day @ HH:MM TZ= (get TZ with =date +%z=) -- **Time estimate:** How long the session took -- **Status:** COMPLETE / IN PROGRESS / PAUSED -- **What We Completed:** Bulleted list of accomplishments -- **Key Decisions:** Any important decisions made -- **Files Modified:** Links to changed files (use relative paths) -- **Next Steps:** What to do next session (if applicable) - -**Best practices:** -- Keep entries concise but informative -- Include enough context to resume work later -- Document important technical insights -- Note any new patterns or preferences discovered -- Link to files using org-mode =file:= links - -** Session Entries - -*** 2026-02-19 Thu @ 16:11-16:14 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Template sync from claude-templates (protocols, workflows, scripts, announcements) -- Processed 4 announcements: - 1. Calendar workflows updated with cross-calendar visibility - 2. gcalcli now available for Google Calendar CLI access - 3. New open-tasks workflow — updated todo.org headers to project-named convention (Archangel Open Work / Archangel Resolved) - 4. 
New summarize-emails workflow added -- New workflows synced: add-calendar-event, delete-calendar-event, edit-calendar-event, read-calendar-events, open-tasks, summarize-emails -- New script synced: maildir-flag-manager.py - -*Files Modified:* -- [[file:../todo.org][todo.org]] — renamed headers to project-named convention - -*Files Added (from template):* -- docs/workflows/{add,delete,edit,read}-calendar-event.org -- docs/workflows/open-tasks.org, summarize-emails.org -- docs/scripts/maildir-flag-manager.py -- docs/announcements/inbox-gitkeep.txt - -*Outstanding Reminder:* -- [2026-02-12] Verify TrueNAS ISO hash — still pending - -*** 2026-02-12 Thu @ 08:23-16:08 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Rebuilt archangel ISO for linux-lts 6.12.70-1 kernel -- ISO: archangel-vmlinuz-6.12.70-lts-2026-02-12-x86_64.iso (2.3G) -- All tests passed: sanity (26/26), single-disk, mirror, raidz1 -- Fixed archzfs GPG key prompt hanging unattended installs (SigLevel → Never) -- Fixed pgrep false positive in full-test.sh (avahi matched hostname pattern) -- Bumped INSTALL_TIMEOUT from 900s to 1800s for DKMS builds -- Added local distribution to build-release (~/downloads/isos + archsetup inbox notification) -- Distributed ISO to ~/downloads/isos and truenas.local:/mnt/vault/isos -- Audited codebase for open-source readiness, added todo.org task with full checklist -- Dropped SSH access info and test VM rebuild notice in archsetup inbox - -*Key Decisions:* -- archzfs SigLevel changed to Never (HTTPS provides transport security; GPG key management kept breaking unattended installs) -- USB drives removed as distribution target -- build-release now handles ~/downloads/isos and archsetup inbox automatically - -*Bugs Found and Fixed:* -1. archzfs GPG key prompt: pacstrap -K creates empty keyring, pacman-key -r silently fails, pacman prompts interactively → changed SigLevel to Never in custom/archangel (2 locations) -2. 
Test pgrep false positive: pgrep -f 'archangel' matched avahi-daemon's "running [archangel.local]" → changed to pgrep -f '/usr/local/bin/archangel' -3. Install timeout: 15 min too short for DKMS compile in VM → bumped to 30 min - -*Files Modified:* -- [[file:../custom/archangel][custom/archangel]] — SigLevel fix (install_base + configure_system) -- [[file:../scripts/full-test.sh][scripts/full-test.sh]] — pgrep fix, timeout bump -- [[file:../scripts/build-release][scripts/build-release]] — local distribution + archsetup inbox -- [[file:../todo.org][todo.org]] — open-sourcing prep task - -*Next Steps:* -- Verify TrueNAS ISO hash (was unreachable at session end) -- Fix TrueNAS connectivity issues -- Continue with open-sourcing prep or other todo.org items - -*** 2026-02-07 Sat @ 21:36 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Synced templates from claude-templates (protocols.org, workflows, scripts updated) -- Executed announcements: - 1. Deleted old renamed workflows (session-wrap-up.org, retrospective-workflow.org, session-start.org) - replaced by wrap-it-up.org, retrospective.org, startup.org - 2. Confirmed no docs/templates/ cache to remove - 3. 
Renamed NOTES.org to notes.org, updated all internal references -- Updated workflow catalog in notes.org to reflect renamed workflows - -*Files Modified:* -- [[file:notes.org][docs/notes.org]] - Renamed from NOTES.org, updated references and workflow catalog -- [[file:protocols.org][docs/protocols.org]] - Synced from template -- [[file:previous-session-history.org][docs/previous-session-history.org]] - Updated NOTES.org references -- [[file:PLAN-archangel-btrfs.org][docs/PLAN-archangel-btrfs.org]] - Updated NOTES.org reference - -*Files Deleted:* -- docs/workflows/session-wrap-up.org (replaced by wrap-it-up.org) -- docs/workflows/retrospective-workflow.org (replaced by retrospective.org) -- docs/workflows/session-start.org (replaced by startup.org) - -*Files Added (from template):* -- New workflows: wrap-it-up.org, retrospective.org, startup.org, set-alarm.org, status-check.org, send-email.org, find-email.org, extract-email.org, sync-email.org, email-assembly.org -- docs/scripts/ updates - -*** 2026-01-25 Sun @ 00:15-08:34 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Phase 4.3 validation testing for btrfs installations -- Non-LUKS btrfs tests all PASS: btrfs-single, btrfs-mirror, btrfs-stripe -- Attempted LUKS automated reboot testing (sendkey + keyfile hybrid approach) -- GRUB passphrase via sendkey WORKS (Slot 0 opened) -- Initramfs encrypt hook does NOT receive sendkey - accepted as limitation -- Fixed configure_btrfs_initramfs() - was overwriting HOOKS and removing encrypt hook -- Added setup_luks_testing_keyfile() function for keyfile-based testing -- Created TESTING-STRATEGY.org documenting the LUKS automation limitation -- Copied ISO to Ventoy flash drive -- Added manual LUKS verification task to todo.org (priority A) - -*Key Decisions:* -- LUKS reboot automation is a known limitation - installation tests pass, reboot verification requires manual testing -- Hybrid approach (sendkey for GRUB, keyfile for initramfs) was correct direction but 
initramfs encrypt hook reads input differently than GRUB -- Documented in TESTING-STRATEGY.org for future reference - -*Files Modified:* -- [[file:../custom/lib/btrfs.sh][custom/lib/btrfs.sh]] - LUKS keyfile support, encrypt hook fix -- [[file:../custom/archangel][custom/archangel]] - Integrated keyfile setup call -- [[file:../scripts/test-configs/btrfs-luks.conf][scripts/test-configs/btrfs-luks.conf]] - Added TESTING=yes -- [[file:../scripts/test-configs/btrfs-mirror-luks.conf][scripts/test-configs/btrfs-mirror-luks.conf]] - Added TESTING=yes -- [[file:TESTING-STRATEGY.org][docs/TESTING-STRATEGY.org]] - New file documenting approach -- [[file:../todo.org][todo.org]] - Added manual LUKS verification task - -*Test Results:* -| Config | Installation | Reboot | -|--------+--------------+--------| -| btrfs-single | PASS | PASS | -| btrfs-mirror | PASS | PASS | -| btrfs-stripe | PASS | PASS | -| btrfs-luks | PASS | MANUAL | -| btrfs-mirror-luks | PASS | MANUAL | - -*Next Steps:* -- Review Phase 5 (CLI tools: archangel-snapshot, archangel-rollback, archangel-list) -- Manual LUKS reboot verification when hardware available - -*** 2026-01-23 Fri @ 02:12 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Validated ZFSBootMenu installation on bare metal (ratio - Framework Desktop) -- Monitored full install-archzfs run via SSH to archzfs.local -- Installation completed successfully on 2-disk NVMe mirror (2x 7.3TB) -- Verified ZFSBootMenu installed on both EFI partitions (redundancy working) -- Confirmed both EFI boot entries created (disk1 and disk2) -- Genesis snapshot created, system ready to reboot - -*Validation Results:* -| Component | Status | -|-----------+--------| -| ZFS pool (mirror) | Exported correctly | -| EFI partition nvme0n1p1 | ZFSBootMenu installed | -| EFI partition nvme1n1p1 | ZFSBootMenu installed | -| Boot0001 ZFSBootMenu-disk1 | Created, first priority | -| Boot0006 ZFSBootMenu-disk2 | Created, fallback | -| Initramfs with ZFS hooks | Built 
successfully | -| Genesis snapshot | Created | - -*System Configuration:* -- Hostname: ratio -- Pool: zroot (2-disk mirror, unencrypted) -- Kernel: linux-lts 6.12.66-1 -- ZFSBootMenu timeout: 3 seconds - -*Next Steps:* -- Reboot ratio into ZFSBootMenu -- Run archsetup for full workstation configuration -- Merge zfsbootmenu branch to main after successful boot - -*** 2026-01-23 Fri @ 01:21 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Replaced GRUB with ZFSBootMenu bootloader in install-archzfs -- Researched 5 open-source ZFS installation projects for best practices -- Created implementation plan (PLAN-zfsbootmenu-implementation.org) -- Deleted GRUB snapshot tooling: grub-zfs-snap, 40_zfs_snapshots, zz-grub-zfs-snap.hook, zfs-snap-prune -- Updated build.sh to remove GRUB file copying -- Updated zfssnapshot and zfsrollback to remove grub-zfs-snap calls -- Built and tested ISO successfully: - - Single disk installation: PASSED - - 2-disk mirror (mirror-0): PASSED - - 3-disk RAIDZ1 (raidz1-0): PASSED -- Committed to zfsbootmenu branch, pushed to origin -- Copied ISO to Ventoy flash drive (/dev/sda1) - -*Key Technical Changes:* -- EFI partition reduced from 1GB to 512MB (only holds ZFSBootMenu) -- EFI mounts at /efi instead of /boot -- Kernel/initramfs live on ZFS root (enables snapshot boot with matching kernel) -- Downloads pre-built ZFSBootMenu EFI from get.zfsbootmenu.org -- Creates boot entries for all disks in multi-disk configs -- Syncs ZFSBootMenu to all EFI partitions for redundancy -- Sets org.zfsbootmenu:commandline on zroot/ROOT for kernel cmdline inheritance -- AMD GPU workarounds (pg_mask, cwsr_enable) auto-added when AMD detected - -*Key Decisions:* -- /boot must NOT be a separate ZFS dataset (per research from all 5 projects) -- Download pre-built ZFSBootMenu rather than building from AUR -- Use --unicode parameter for ZFSBootMenu cmdline in efibootmgr - -*Files Modified:* -- [[file:../build.sh][build.sh]] - Removed grub-zfs-snap copying -- 
[[file:../custom/archangel][custom/install-archzfs]] - Major ZFSBootMenu rewrite -- [[file:../custom/zfssnapshot][custom/zfssnapshot]] - Removed grub-zfs-snap call -- [[file:../custom/zfsrollback][custom/zfsrollback]] - Removed grub-zfs-snap call - -*Files Deleted:* -- custom/grub-zfs-snap -- custom/40_zfs_snapshots -- custom/zz-grub-zfs-snap.hook -- custom/zfs-snap-prune - -*Files Created:* -- [[file:../PLAN-zfsbootmenu-implementation.org][PLAN-zfsbootmenu-implementation.org]] - Implementation plan -- [[file:2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org][docs/2026-01-22-ratio-amd-gpu-freeze-fix-instructions.org]] - Filed from inbox -- [[file:research-sandreas-zarch.org][docs/research-sandreas-zarch.org]] - Research notes -- [[file:session-context.org][docs/session-context.org]] - Session tracking - -*Commits Made:* -- 2ad560b: Replace GRUB with ZFSBootMenu bootloader - -*Next Steps:* -- Test ZFSBootMenu on physical hardware (ratio or new install) -- Merge zfsbootmenu branch to main after hardware validation -- Consider encrypted pool test (requires interactive passphrase) - -*** 2026-01-22 Thu @ 15:44 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Diagnosed system freeze on ratio - same VPE power gating bug from earlier -- Researched VPE_IDLE_TIMEOUT patch status - NOT merged, AMD maintainer skeptical -- Discovered kernel 6.18.x has critical CWSR bugs for Strix Halo - do NOT upgrade -- Identified two workarounds: amdgpu.pg_mask=0 and amdgpu.cwsr_enable=0 -- Verified neither workaround is currently applied on ratio -- Created detailed fix instructions in inbox/instructions.txt for live ISO application - -*Key Decisions:* -- Stay on kernel 6.15.2 (pinned) - 6.18.x has worse bugs for Strix Halo -- Apply both pg_mask=0 and cwsr_enable=0 workarounds -- Will apply fixes from velox via live ISO (mkinitcpio triggers freeze on live system) - -*Key Technical Findings:* -- VPE_IDLE_TIMEOUT patch (1s→2s) submitted Aug 2025 but not merged -- Framework Community 
recommends 6.15.x-6.17.x for Strix Halo, avoid 6.18+ -- Current ratio state: pg_mask=4294967295 (bad), cwsr_enable=1 (bad) - -*Files Created:* -- [[file:../inbox/instructions.txt][inbox/instructions.txt]] - Detailed fix instructions for live ISO - -*Next Steps:* -- Boot ratio from archzfs live ISO (from velox) -- Apply both workarounds per inbox/instructions.txt -- Rebuild initramfs from live ISO -- Verify fixes active after reboot - - diff --git a/docs/previous-session-history.org b/docs/previous-session-history.org deleted file mode 100644 index 2c9d377..0000000 --- a/docs/previous-session-history.org +++ /dev/null @@ -1,157 +0,0 @@ -#+TITLE: Previous Session History -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-14 - -* About This File - -This file contains archived session history entries older than 2 weeks from the current date. Sessions are automatically moved here during the wrap-up workflow to keep notes.org at a manageable size. - -Sessions are listed in reverse chronological order (most recent first). 
- -* Archived Sessions - -*** 2026-01-22 Thu @ 15:02 -0600 - -*Status:* COMPLETE - -*What We Completed:* -- Diagnosed and fixed ratio (Framework Desktop, AMD Strix Halo) boot failures -- Root cause: missing linux-firmware 20260110 caused amdgpu to freeze at boot -- Installed linux-firmware 20260110-1, fixed ZFS mountpoints, fixed hostid mismatch -- Configured kernel 6.15.2 as default (pinned), created clean GRUB menu -- Created retrospective workflow for continuous improvement -- Added PRINCIPLES.org with behavioral lessons learned -- Documented full troubleshooting session - -*Key Decisions:* -- linux-firmware version is critical for AMD Strix Halo (20260110+ required) -- ZFS rollback with separate /boot partition is dangerous - recommend ZFSBootMenu -- Established retrospective workflow for major problem-solving sessions -- Behavioral lessons go in PRINCIPLES.org, technical facts in session docs - -*Files Modified:* -- [[file:../custom/archangel][custom/install-archzfs]] - Fixed mkinitcpio configuration -- [[file:../todo.org][todo.org]] - Added ZFS rollback + /boot issue, ZFSBootMenu task -- [[file:PRINCIPLES.org][docs/PRINCIPLES.org]] - New file with behavioral lessons -- [[file:protocols.org][docs/protocols.org]] - Added PRINCIPLES.org to session startup -- [[file:retrospectives/2026-01-22-ratio-boot-fix.org][docs/retrospectives/]] - Retrospective for this session -- [[file:2026-01-22-ratio-boot-fix-session.org][docs/2026-01-22-ratio-boot-fix-session.org]] - Full technical session doc - -*Commits Made:* -- c46191c: Fix ratio boot issues: firmware, mkinitcpio, and document ZFS rollback dangers -- 9100517: Update ratio session doc: kernel 6.15.2 now default with clean GRUB menu -- e5aedfa: Add retrospective workflow and PRINCIPLES.org for continuous improvement - -*Next Steps:* -- Implement ZFSBootMenu on ratio to solve /boot rollback issue -- Consider adding ZFSBootMenu to install-archzfs as alternative to GRUB - -*** 2026-01-18 Sat @ 16:30 -0600 - -*Status:* 
COMPLETE - -*What We Completed:* -- Completed RESCUE-GUIDE.txt with all 8 sections fully documented -- Added final round of utility packages to ISO: - - Disk tools: ncdu, tree - - Hardware diagnostics: iotop - - Network: speedtest-cli, mosh, aria2, tmate, sshuttle - - Security: pass (password manager) -- Removed AUR-only packages that broke build: safecopy, ms-sys, dislocker, nwipe -- Successfully rebuilt ISO (5.1GB) -- Copied ISO to truenas.local:/mnt/vault/isos and ~/downloads/isos -- Wrote ISO to USB drives (/dev/sda 1TB, /dev/sdb 240GB) -- Ran all tests: - - zfs-snap-prune unit tests: 22/22 PASSED - - VM install test (single-disk): PASSED - - VM install test (mirror): PASSED - - VM install test (raidz1): PASSED -- Marked "Add common recovery tools" TODO as DONE - -*Commits Made:* -- 36aa130: Add utility tools and rescue guide documentation -- 6f4fd68: Remove AUR-only packages from ISO build - -*Files Modified:* -- [[file:../build.sh][build.sh]] - Added utility packages, removed AUR-only packages -- [[file:../custom/RESCUE-GUIDE.txt][custom/RESCUE-GUIDE.txt]] - Completed all 8 sections -- [[file:../TODO.org][TODO.org]] - Marked recovery tools task as DONE -- [[file:session-context.org][docs/session-context.org]] - Updated session state - -*Key Technical Notes:* -- AUR packages cannot be included in mkarchiso builds without custom AUR handling -- Documented AUR tools (safecopy, ms-sys, dislocker, nwipe) in RESCUE-GUIDE with install instructions -- ISO now doubles as a comprehensive rescue/recovery disk - -*Next Steps:* -- Test booting from physical USB drive on real hardware -- Consider CI/CD pipeline for automated ISO builds -- Consider adding ISO to GRUB boot menu for on-disk recovery - -*** 2026-01-17 Sat @ 17:10 -0600 - -*Status:* IN PROGRESS - -*What We Completed:* -- Fixed ZFS kernel module mismatch by switching to linux-lts + zfs-dkms -- Fixed bootloader to use linux-lts kernel (was defaulting to regular linux) -- Fixed broadcom-wl dependency (switched to 
broadcom-wl-dkms) -- Updated mkinitcpio preset for linux-lts with archiso config -- Fixed install-archzfs bugs: - - =[[ ]] && error= pattern causing early exit with =set -e= (changed to if/then) - - 50G reservation on 50G disk (now dynamic: 20% of pool, capped 5-20G) - - sanoid not in official repos (moved to archsetup) - - grub-mkfont needs freetype2 package (added to pacstrap) -- Removed sanoid/syncoid from install-archzfs (archsetup will handle) -- Created inbox item for archsetup with full sanoid/syncoid config -- ISO now 4.8G (was 5.4G) - only linux-lts kernel - -*Key Technical Insights:* -- =broadcom-wl= depends on =linux= kernel specifically - use =broadcom-wl-dkms= instead -- archiso releng profile has linux.preset in airootfs that needs renaming to linux-lts.preset -- With =set -e=, =[[ test ]] && command= returns exit code 1 if test is false, causing script exit -- =grub-mkfont= requires =freetype2= package (not installed by default with grub) - -*Files Modified:* -- [[file:../build.sh][build.sh]] - major updates for linux-lts, bootloader configs, mkinitcpio preset -- [[file:../custom/archangel][custom/install-archzfs]] - multiple bug fixes, removed sanoid/syncoid -- [[file:~/code/archsetup/inbox/zfs-sanoid-detection.txt][archsetup inbox]] - sanoid/syncoid config for archsetup to implement - -*Current State:* -- ISO builds successfully with linux-lts + zfs-dkms -- ZFS module loads correctly in live environment -- install-archzfs runs through most steps -- Last error: grub-mkfont missing freetype2 (now fixed, needs rebuild/test) - -*Next Steps:* -- Rebuild ISO with freetype2 fix -- Complete full install-archzfs test in VM -- Test booting the installed system -- Git commit all changes - -*** 2026-01-17 Sat @ 13:16 -0600 - -*Status:* COMPLETE (continued above) - -*What We Completed:* -- Initialized git repository -- Created .gitignore (excludes work/, out/, profile/, zfs-packages/) -- Initial commit with all build scripts -- Added docs/ to git (decided to 
track publicly) -- Built fresh ISO (archzfs-claude-2026.01.17-x86_64.iso, 4.9G) -- Tested ISO in QEMU VM -- Documented project goals and design decisions in notes.org - -*Key Decisions Made:* -- Use linux-lts + zfs-dkms from archzfs.com (DKMS ensures kernel compatibility) -- Less aggressive snapshot policy (TrueNAS handles long-term backups) -- All install questions upfront, then unattended installation -- Root account only (archsetup creates user post-reboot) -- 32px GRUB font for HiDPI displays -- WiFi config tested before install starts - -*Files Modified:* -- [[file:../.gitignore][.gitignore]] - created -- [[file:../build.sh][build.sh]] - major rewrite -- [[file:../custom/install-archzfs][custom/install-archzfs]] - complete rewrite -- [[file:../scripts/test-vm.sh][scripts/test-vm.sh]] - added serial console diff --git a/docs/project-workflows/code-review.org b/docs/project-workflows/code-review.org deleted file mode 100644 index 79ef0ed..0000000 --- a/docs/project-workflows/code-review.org +++ /dev/null @@ -1,275 +0,0 @@ -#+TITLE: Code Review Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-23 - -* Overview - -This workflow guides code review with a focus on what matters most. It's based on the Code Review Pyramid (Gunnar Morling) which prioritizes review effort by cost of change - spend the most time on things that are hardest to fix later. - -#+begin_example - ┌─────────────────┐ - │ CODE STYLE │ ← Automate - ┌───┴─────────────────┴───┐ - │ TESTS │ ← Automate - ┌───┴─────────────────────────┴───┐ - │ DOCUMENTATION │ - ┌───┴─────────────────────────────────┴───┐ - │ IMPLEMENTATION SEMANTICS │ ← Focus - ┌───┴─────────────────────────────────────────┴───┐ - │ API SEMANTICS │ ← Most Focus - └─────────────────────────────────────────────────┘ - - ↑ HARDER TO CHANGE LATER = MORE HUMAN REVIEW TIME -#+end_example - -*Key Insight:* A formatting issue is a 2-minute fix. A breaking API change after release is a multi-sprint disaster. Prioritize accordingly. 
- -* When to Use This Workflow - -Use this workflow when: -- Reviewing pull requests / merge requests -- Doing self-review before submitting code -- Pair reviewing with another developer - -* Effort Scaling - -Not all changes need the same scrutiny: - -| Change Type | Review Depth | Time | -|-------------+--------------+------| -| Typo/docs fix | Quick scan for correctness | 2-5 min | -| Bug fix | Verify fix + no regressions | 10-15 min | -| New feature | Full checklist, deep dive on logic | 30-60 min | -| Architectural change | Bring in senior reviewers, verify design | 60+ min | - -* The Core Checklist - -Use this for every review. If you answer "no" or "maybe" to any item, dig deeper using the relevant Deep Dive section. - -** 1. API Semantics (Hardest to Change - Focus Most Here) - -These questions matter most because changes here break things for other people. - -- [ ] *Simple?* Is the API as small as possible but as large as needed? -- [ ] *Predictable?* Does it follow existing patterns and the principle of least surprise? -- [ ] *Clean boundaries?* Is there clear separation between public API and internal implementation? -- [ ] *Generally useful?* Is the API helpful for common cases, not just one specific scenario? -- [ ] *Backwards compatible?* Are there any breaking changes to things users already rely on? - -** 2. Implementation Semantics (Focus Here) - -Once the API looks good, check how the code actually works. - -- [ ] *Correct?* Does it actually do what the requirements/ticket says? -- [ ] *Logically sound?* Are there edge cases, off-by-one errors, or logic bugs? -- [ ] *Simple?* Could this be simpler? Is there unnecessary complexity or over-engineering? -- [ ] *Robust?* Does it handle errors gracefully? Concurrency issues? -- [ ] *Secure?* Is user input validated? No hardcoded secrets? (See Security Deep Dive) -- [ ] *Performant?* Any expensive operations, N+1 queries, or scaling concerns? 
(See Performance Deep Dive) -- [ ] *Observable?* Is there appropriate logging, metrics, or tracing? -- [ ] *Dependencies OK?* Do new libraries pull their weight? License acceptable? - -** 3. Documentation - -Good code still needs good explanations. - -- [ ] *New features documented?* README, API docs, or user guides updated? -- [ ] *Code comments explain "why"?* Complex logic or non-obvious decisions explained? -- [ ] *Readable?* Documentation is clear, accurate, no major errors? - -** 4. Tests (Automate the "Pass/Fail" Check) - -Don't just check if tests pass - check if they're good tests. - -- [ ] *Tests exist?* Are new features and bug fixes covered by tests? -- [ ] *Test the right things?* Do tests verify behavior, not implementation details? -- [ ] *Edge cases covered?* Are boundary conditions and error paths tested? -- [ ] *Right test type?* Unit tests where possible, integration tests where necessary? -- [ ] *Would catch regressions?* If someone breaks this later, will a test fail? - -** 5. Code Style (Automate This) - -Let linters and formatters handle these. Only check manually if tooling isn't set up. - -- [ ] *Linter passes?* No style violations flagged by automated tools? -- [ ] *Readable?* Methods reasonable length? Clear naming? -- [ ] *DRY?* No unnecessary duplication? - -** 6. Process & Collaboration - -Code review is a conversation, not an inspection. - -- [ ] *PR has context?* Does the description explain WHY, not just what? -- [ ] *Reasonable size?* Is this reviewable in one sitting? (<400 lines ideal) -- [ ] *Self-reviewed?* Has the author clearly reviewed their own code first? -- [ ] *No debug artifacts?* Console.logs, commented code, TODOs without tickets removed? - -* Deep Dives - -Use these when the Core Checklist flags a concern. Each section provides detailed checks for specific areas. - -** Security Deep Dive - -When "Is it secure?" 
raises concerns, check these: - -*** Input Validation -- [ ] All user input validated and sanitized -- [ ] SQL queries parameterized (no string concatenation) -- [ ] Output encoded to prevent XSS -- [ ] File uploads validated (type, size, content) -- [ ] Path traversal prevented - -*** Authentication & Authorization -- [ ] Auth checks at every access point -- [ ] Session management secure -- [ ] CSRF protections for state-changing operations -- [ ] Rate limiting where appropriate - -*** Secrets & Data -- [ ] No hardcoded credentials (use env vars/secrets manager) -- [ ] Sensitive data encrypted at rest and in transit -- [ ] Error messages don't leak system information -- [ ] Logs exclude sensitive data - -*** Common Vulnerabilities (OWASP) -- [ ] No SQL injection vectors -- [ ] No command injection vectors -- [ ] No unsafe deserialization -- [ ] Proper HTTP security headers - -** Performance Deep Dive - -When "Is it performant?" raises concerns, check these: - -*** Database -- [ ] No N+1 query problems -- [ ] Appropriate indexes exist -- [ ] No SELECT * - explicit columns only -- [ ] Pagination for large result sets -- [ ] Connection pooling used - -*** Code Efficiency -- [ ] No unnecessary loops or iterations -- [ ] Appropriate data structures (HashMap vs List lookups) -- [ ] Expensive operations cached where beneficial -- [ ] Large collections processed efficiently (streaming vs loading all) -- [ ] Resources properly closed (connections, file handles) - -*** Scaling -- [ ] Will this work with 10x/100x the data? 
-- [ ] Async operations used where beneficial -- [ ] No blocking operations in hot paths - -** Concurrency Deep Dive - -When reviewing multi-threaded or async code: - -- [ ] Shared mutable state properly synchronized -- [ ] No race conditions -- [ ] Lazy initialization is thread-safe -- [ ] Getters don't return mutable internal objects -- [ ] Thread-safe collections used where needed -- [ ] Consistent lock ordering (no deadlocks) -- [ ] Loop variables passed into closures/lambdas (not captured) -- [ ] Proper timeouts on blocking operations -- [ ] Tests run with race detector where available - -** Error Handling Deep Dive - -When reviewing error handling and logging: - -- [ ] Exceptions caught at appropriate levels -- [ ] System "fails safe" (deny on error, not grant) -- [ ] Transactions rolled back on failure -- [ ] Error messages help debugging but don't leak info -- [ ] Correlation IDs for tracing across services -- [ ] Log levels appropriate (DEBUG/INFO/WARN/ERROR) -- [ ] Critical failures trigger alerts - -** API Compatibility Deep Dive - -When reviewing API changes: - -*** Non-Breaking (Safe) -- Adding optional fields -- Adding new endpoints -- Making required fields optional -- Adding new enum values (if clients handle unknown) - -*** Breaking (Dangerous) -- Removing fields or endpoints -- Changing field types -- Making optional fields required -- Changing semantic meaning of fields -- Changing response status codes - -*** If Breaking Changes Required -- [ ] Version bumped appropriately -- [ ] Migration path documented -- [ ] Deprecation warnings added -- [ ] Transition period planned - -** Dependencies Deep Dive - -When new dependencies are added: - -- [ ] Actually necessary (not just convenience) -- [ ] From trusted, reputable source -- [ ] Actively maintained (recent commits, responsive maintainers) -- [ ] No known vulnerabilities (check Dependabot/Snyk) -- [ ] License compatible with project -- [ ] Doesn't bloat dependency tree excessively -- [ ] Version 
pinned in lock file - -* How to Give Feedback - -** Frame as Questions -Bad: "This is wrong." -Good: "I noticed we used X here - would Y be more efficient for this case?" - -** Distinguish Severity -- *Blocker:* Must fix before merge (security issue, breaking bug) -- *Should fix:* Important but could be follow-up PR -- *Suggestion:* Nice to have, author's discretion -- *Nit:* Minor style preference, totally optional - -** Call Out Good Stuff -Reviews are for learning too. If you see a clever solution or clean pattern, say so. - -** Be Timely -Don't let PRs sit. If you're stuck, ask someone to pair-review with you. - -* Anti-Patterns to Avoid - -** The Empty LGTM -Never rubber-stamp with "Looks Good To Me" without actually reviewing. Even noting one thing you liked shows you read it. - -** Nitpicking -Don't comment on spaces, tabs, or semicolons. Let automated tools handle formatting. Focus on logic and design. - -** The Stall -Don't let PRs languish for days. Review within 24 hours or communicate delays. - -** Bikeshedding -Don't spend 30 minutes debating variable names while ignoring architectural concerns. - -** Review by Line Count -400 lines of careful refactoring may need less scrutiny than 40 lines of new auth logic. Adjust effort to risk, not size. 
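
Before the final validation pass, one deep-dive check above is worth grounding in code. A hedged bash sketch of the Security Deep Dive's "no hardcoded credentials (use env vars/secrets manager)" item — =API_TOKEN= and =require_token= are hypothetical names, not part of any project here:

#+begin_src bash
# Hardcoded secret (flag this in review):  TOKEN="sk-live-abc123..."
# Preferred: require the secret from the environment and fail fast.
require_token() {
  # ${VAR:?msg} aborts with an error when VAR is unset or empty
  : "${API_TOKEN:?API_TOKEN must be set in the environment}"
}

if (unset API_TOKEN; require_token) 2>/dev/null; then
  echo "token present"
else
  echo "token missing"   # prints "token missing": the guard refused to run
fi
#+end_src

In review, grep for string literals that look like keys; the fail-fast expansion above is the pattern you want to see instead.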
- -* Validation Checklist - -Before approving, verify: - -- [ ] Core checklist complete (all 5 pyramid levels) -- [ ] Any "maybe" answers investigated with Deep Dives -- [ ] Feedback framed constructively -- [ ] Blockers clearly marked vs suggestions -- [ ] You could explain this code to someone else - -* Sources - -- [[https://www.morling.dev/blog/the-code-review-pyramid/][The Code Review Pyramid]] - Gunnar Morling -- [[https://google.github.io/eng-practices/review/][Google Engineering Practices]] - Code Review Guidelines -- [[https://owasp.org/www-project-code-review-guide/][OWASP Code Review Guide]] - Security Checklist -- [[https://github.com/code-review-checklists/java-concurrency][Java Concurrency Checklist]] - Thread Safety -- [[https://go.dev/wiki/CodeReviewConcurrency][Go Concurrency Wiki]] - Race Conditions diff --git a/docs/protocols.org b/docs/protocols.org deleted file mode 100644 index b3106b6..0000000 --- a/docs/protocols.org +++ /dev/null @@ -1,495 +0,0 @@ -#+TITLE: Claude Code Protocols -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-05 - -* About This File - -This file contains instructions and protocols for how Claude should behave when working with Craig. These protocols are consistent across all projects. 
- -**When to read this:** -- At the start of EVERY session (this is the single entry point) -- Before making any significant decisions -- When unclear about user preferences or conventions - -**What's in this file:** -- Directory architecture (docs/ file/directory map) -- Session management protocols (context files, compacting) -- Terminology and trigger phrases -- User information and preferences -- Git commit requirements -- File format and naming conventions -- Startup instructions (runs docs/workflows/startup.org) - -**What's NOT in this file:** -- Project-specific context (see notes.org) -- Session history (see notes.org) -- Active reminders (see notes.org) -- Pending decisions (see notes.org) - -* Directory Architecture - -The =docs/= directory has a specific structure. Every file and directory has a defined purpose: - -| Item | Purpose | -|------+---------| -| =protocols.org= | Single entry point — behavioral instructions + directory map | -| =notes.org= | Project data: context, reminders, decisions, session history | -| =session-context.org= | Live crash recovery (exists only during active sessions) | -| =previous-session-history.org= | Archived session history | -| =workflows/= | Template workflows (synced from claude-templates, never edit in project) | -| =project-workflows/= | Project-specific workflows (never touched by sync) | -| =scripts/= | Template scripts | -| =announcements/= | One-off cross-project instructions from Craig | -| =someday-maybe.org= | Project ideas backlog | - -* IMPORTANT - MUST DO - -** CRITICAL: Always Check the Time with =date= Command - -***NEVER GUESS THE TIME. ALWAYS RUN =date= TO CHECK.*** - -Claude's internal sense of time is unreliable. When mentioning what time it is - whether in conversation, session notes, timestamps, or scheduling - ALWAYS run: - -#+begin_src bash -date "+%A %Y-%m-%d %H:%M %Z" -#+end_src - -This applies to: -- "What time is it?" 
-- Session start/end timestamps -- Calculating time until appointments -- Setting alarms -- Any time-related statement - -Do NOT estimate, guess, or rely on memory. Just run the command. It takes one second and prevents errors. - -** !!!!! SESSION CONTEXT FILE - READ THIS EVERY TIME !!!!! - -#+begin_example -╔══════════════════════════════════════════════════════════════════════════════╗ -║ STOP. THIS IS THE MOST IMPORTANT PROTOCOL IN THIS ENTIRE FILE. ║ -║ ║ -║ UPDATE docs/session-context.org EVERY 3RD RESPONSE. ║ -║ ║ -║ NOT "when convenient." NOT "when there's a lot to write." EVERY 3RD ONE. ║ -║ Count: 1, 2, 3, UPDATE. 1, 2, 3, UPDATE. 1, 2, 3, UPDATE. ║ -║ ║ -║ FAILURE TO DO THIS HAS ALREADY CAUSED LOST WORK. DON'T LET IT HAPPEN AGAIN. ║ -╚══════════════════════════════════════════════════════════════════════════════╝ -#+end_example - -*** Why This Is Here In Giant Letters - -On 2026-01-22, a session crashed and 20+ minutes of detailed workflow planning was lost because the session context file wasn't updated. The information was gone forever. This protocol exists to prevent that from EVER happening again. - -*** Location -=docs/session-context.org= (always this exact path, every project) - -*** UPDATE FREQUENCY: EVERY. 3RD. RESPONSE. - -This means: -- Craig says something, you respond (1) -- Craig says something, you respond (2) -- Craig says something, you respond (3) → **UPDATE THE FILE NOW** -- Repeat: 1, 2, 3 → UPDATE - -"Response" = one Claude response. After THREE of these, update the file. No exceptions. - -*** What Counts as "Updating" - -Actually write to the file using the Edit or Write tool. Not "I should update it." Not "I'll update it soon." ACTUALLY WRITE TO THE FILE. - -Include: -- Current task/goal being worked on -- Key decisions made this session -- Workflows being designed (capture the FULL details) -- Data collected (vitals, measurements, paths, commands, etc.) 
-- Files modified or created -- Next steps planned -- ANY context needed to resume if session crashes RIGHT NOW - -*** Why This Is Non-Negotiable - -If the machine freezes, the network drops, or Claude crashes: -- Session history in notes.org? NOT WRITTEN YET (only at wrap-up) -- Your memory of the conversation? GONE -- The ONLY way to recover context? THIS FILE - -If you don't update it, the work is LOST. Period. There is no recovery. - -*** At Session Start - CHECK FOR THIS FILE -If =docs/session-context.org= exists when starting a new session, it means the previous session was interrupted. Read it IMMEDIATELY to recover context. - -** Before Compacting -If you know you're about to compact, update the session context file FIRST, with enough detail that you can pick up the discussion without having lost any essential information. - -** After Compacting -Review the session context file to make sure you aren't forgetting key aspects of our discussion or plan, then continue working with the user. - -** When Session Ends (Wrap-Up Workflow) -Write your session summary to notes.org, leveraging the session context file. Delete session-context.org ONLY AFTER you've written the session history entry. The file's existence indicates an interrupted session, so it must be deleted at the end of each successful wrap-up. - -** NEVER =cd= Into Directories You Will Delete - -If you =cd= into a directory and then delete that directory, the shell's working directory becomes invalid. All subsequent commands will fail silently (exit code 1) with no useful error message. The only fix is to restart the session. - -***Rule:*** Always use absolute paths for file operations in temporary directories. Never =cd= into extraction directories, build directories, or any directory that will be cleaned up. - -This caused a session break on 2026-02-06 when an extraction directory was =cd='d into and then deleted during cleanup. 
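The absolute-path rule above can be demonstrated in a few lines; a minimal sketch (the temp directory and filename are illustrative):

```shell
# Work with a temporary extraction directory using absolute paths only.
# The shell's working directory never enters it, so deleting it is safe.
workdir=$(mktemp -d)
printf 'payload\n' > "$workdir/file.txt"        # write via absolute path
grep -q payload "$workdir/file.txt" && echo "read ok"
rm -rf "$workdir"                               # cleanup cannot invalidate cwd
pwd > /dev/null && echo "cwd still valid"
```

Because the working directory was never inside =$workdir=, both =pwd= and every later command keep working after the cleanup.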
- -* Important Terminology - -** "Let's run the [X] workflow" vs "I want to create an [X] workflow" - -These phrases mean DIFFERENT things! - -*** "Let's run/do the [workflow name] workflow" -This means: **Execute the existing workflow** for that process. - -*Example:* -- "Let's run the refactor workflow" -> Read docs/workflows/refactor.org and guide through workflow -- "Let's do a refactor workflow" -> Same as above - -*** "I want to create an [X] workflow" -This means: **CREATE a new workflow definition** for doing X (meta-work). -This does **NOT** mean "let's DO X right now." - -*Example:* -- "I want to create a refactor workflow" -> Create docs/workflows/refactor.org using create-workflow process - -When Craig uses this phrasing, trigger the create-workflow process from docs/workflows/create-workflow.org. New workflows go to =docs/project-workflows/= by default. Only put a workflow in =docs/workflows/= (and =~/projects/claude-templates/docs/workflows/=) if Craig explicitly says it's for all projects. - -** "Wrap it up" / "That's a wrap" / "Let's call it a wrap" - -Execute the wrap-up workflow (details in Session Protocols section below): -1. Write session notes to notes.org -2. Git commit and push all changes -3. Valediction summary - -* User Information - -** Calendar Location -Craig's calendars are at: =~/.emacs.d/data/*cal.org= (gcal.org, dcal.org, pcal.org) - -These files are **READ-ONLY** — NEVER add anything to them. - -Use this to: -- Check meeting times and schedules -- Verify when events occurred -- See what's upcoming - -If there are tasks or events to schedule, put them in the todo.org file in the project root, not in Craig's calendar. - -** Task List Location -Craig's global task list is available at: =/home/cjennings/sync/org/roam/inbox.org= - -Use this to: -- See all the tasks that he's working on outside of projects like this one - -**Note:** Some projects may have a project-specific task file (e.g., =todo.org= at project root). 
Check notes.org for project-specific task locations. - -** Working Style - -*** General Preferences -- Prefers detailed preparation before high-stakes meetings -- Values practice/role-play for negotiations and general learning -- Makes decisions based on principles and timeline arguments -- Prefers written documentation over verbal agreements - -*** Emacs as a Primary Working Tool -- Craig uses Emacs as his primary tool (most everything Craig does is inside Emacs) -- Consider Emacs packages along with other software when recommending software solutions -- Look for ways to streamline routine work with Emacs custom code if no packages exist - -*** Wayland Environment (No XWayland) -Craig runs a pure Wayland setup (Hyprland) and avoids XWayland/Xorg apps. - -- Clipboard: Use =wl-copy= and =wl-paste= (NOT =xclip= or =xsel=) -- Window management: Use Hyprland commands (NOT =xkill=, =xdotool=, etc.) -- Prefer Wayland-native tools over X11 equivalents -- Open URLs in browser: Use =google-chrome-stable "URL" &>/dev/null &= - - The =&>/dev/null &= is required to detach the process and suppress output - - Without it, the command may appear to hang or produce no result - -** Miscellaneous Information -- Craig currently lives in New Orleans, LA -- Craig's phone number: 510-316-9357 -- Craig maintains a remote server at the cjennings.net domain -- This project is in a git repository which is associated with a remote repository on cjennings.net - -** Setting Alarms / Reminders - -Use Craig's =notify= script with the =at= daemon for persistent reminders. - -**IMPORTANT:** Always check the current date and time (=date=) before setting alarms to ensure accurate calculations. 
- -*** Setting an alarm -#+begin_src bash -echo 'notify alarm "Title" "Message"' | at 10:55am -#+end_src - -*** Examples -#+begin_example -echo 'notify alarm "Standup" "Daily standup in 5 minutes"' | at 10:55am -echo 'notify alarm "BP Reading" "Time to take BP"' | at 2:00pm -echo 'notify alert "Meeting" "Ryan call starting"' | at 11:25am -#+end_example - -*** Notify types available -- =alarm= - Alarm clock icon, alarm sound -- =alert= - Yellow exclamation, attention tone -- =info= - Blue info icon, confident tone -- =success= - Green checkmark, pleasant chime -- =fail= - Red X, warning tone - -Full usage: =notify --help= or see =~/.local/bin/notify= - -*** Managing alarms -- =atq= - list all scheduled alarms -- =atrm [number]= - remove an alarm by its queue number - -* Session Protocols - -** CRITICAL: Git Commit Requirements - -***IMPORTANT: ALL commits must be made as Craig, NOT as Claude.*** - -***CRITICAL: NO Claude Code or Anthropic attribution ANYWHERE in commits.*** - -When creating commits: - -1. **Author Identity**: NEVER commit as Claude. All commits must use Craig's identity. - - Git will use the configured user.name and user.email - - Do NOT modify git config - - **ABSOLUTELY NO** Co-Authored-By lines - - **ABSOLUTELY NO** "Generated with Claude Code" text - - **ABSOLUTELY NO** Anthropic attribution of any kind - - Write commits AS CRAIG, not as Claude Code - -2. **Commit Message Format**: - - Use project-specific commit format if defined - - Otherwise: concise subject line and terse description only - - **ONLY subject line and terse description - NO Claude Code attribution** - - Keep messages clear and informative - -3. 
**Validation**: - - Claude should validate commit message format before committing - - Ensure no AI attribution appears anywhere in commit - -** IMPORTANT: Reminders Protocol - -When starting a new session: -- Check "Active Reminders" section in notes.org -- Remind Craig of outstanding tasks he's asked to be reminded about -- This ensures important follow-up actions aren't forgotten between sessions - -When Craig says "remind me" about something: -1. Add it to Active Reminders section in notes.org -2. If it's something he needs to DO, also add to the todo.org file in the project root as an org-mode task (e.g., =* TODO [description]=). If this project does not have a todo.org at the project root, alert Craig and offer to create it. -3. If not already provided, ask for the priority and a date for scheduled or deadline. - -** Workflows: "Let's run/do the [workflow name] workflow" - -When Craig says this phrase: - -1. **Check =docs/workflows/= for match** - - If exact match found: Read and guide through process - - Example: "refactor workflow" -> read docs/workflows/refactor.org - -2. **Check =docs/project-workflows/= for match** - - If exact match found: Read and guide through process - -3. **Fuzzy match across both directories:** Ask for clarification - - Example: User says "empty inbox" but we have "inbox-zero.org" - - Ask: "Did you mean the 'inbox zero' workflow, or create new 'empty inbox'?" - -4. **No match at all:** Offer to create it - - Say: "I don't see '[workflow-name]' yet. Create it using create-workflow process?" - - If yes: Run create-workflow — new workflows go to =docs/project-workflows/= by default - -** Long-Running Process Status Updates - -When monitoring a long-running process (rsync, large downloads, builds, VM tests, etc.), follow this protocol: - -***At Start:*** -1. Run =date= to get accurate time -2. Announce the task/job beginning -3. Provide best-guess ETA for completion - -#+begin_example -**14:30** - Starting ISO build. ETA: ~10 minutes. 
-#+end_example - -***Every 5 Minutes:*** -- Check progress and display status in format: =HH:MM= - terse description - ETA - -#+begin_example -**14:35** - ISO build: packages installed, creating squashfs. ETA: ~5 min. -**14:40** - ISO build: squashfs 95% complete. ETA: ~1 min. -#+end_example - -***At Completion:*** -1. Send notification via notify script: - #+begin_src bash - notify success "Task Complete" "Description of what finished" - #+end_src - Use =fail= type instead of =success= if the task failed. -2. Provide summary of success or failure - -#+begin_example -**14:42** - ISO build complete. Size: 2.0G. Ready for testing. -#+end_example - -***Guidelines:*** -- Always run =date= for accurate timestamps -- Keep progress descriptions terse but informative -- Update ETA as job progresses -- If ETA cannot be determined, say "ETA unknown" rather than guessing wildly - -***Why This Matters:*** -- Craig may be working on other things while waiting -- Status updates provide confidence the process is still running -- ETAs help with planning (e.g., "I have time for coffee" vs "stay close") -- Sound notification alerts Craig when he's away from the screen -- If something stalls, the updates make it obvious - -** "Wrap it up" / "That's a wrap" / "Let's call it a wrap" - -When Craig says any of these phrases (or variations), execute wrap-up workflow: - -1. **Write session notes** to notes.org (Session History section) - - Key decisions made - - Work completed - - Context needed for next session - - Any pending issues or blockers - - New conventions or preferences learned - - Critical reminders for tomorrow go in Active Reminders section - -2. 
**Git commit and push** (if project is in git repository) - - Check git status and diff - - Create commit message following requirements (see "Git Commit Requirements" above) - - **ONLY subject line and terse description - NO Claude Code attribution** - - Commit as Craig (NOT as Claude) - - **ABSOLUTELY NO** Co-Authored-By, Claude Code, or Anthropic attribution - - Push to ALL remotes (check with =git remote -v=) - - Ensure working tree is clean - - Confirm push succeeded to all remotes - -3. **Valediction** - Brief, warm goodbye - - Acknowledge the day's work - - What was accomplished - - What's ready for next session - - Any important reminders - - Keep it warm but concise - -This ensures clean handoff between sessions and nothing gets lost. - -** The docs/ Directory - -Claude needs to add information to notes.org. For large amounts of information: - -- Create separate document in docs/ directory -- Link it in notes.org with explanation of document's purpose -- **Project-specific decision:** Should docs/ be committed to git or added to .gitignore? - - Ask Craig on first session if not specified - - Some projects keep docs/ private, others commit it -- Unless specified otherwise, all Claude-generated documents go in docs/ folder - -**When to break out documents:** -- If notes.org gets very large (> 1500 lines) -- If information isn't all relevant anymore -- Example: Keep only last 3-4 months of session history here, move rest to separate file - -* File Format Preferences - -** ALWAYS Use Org-Mode Format - -Craig uses Emacs as primary tool. **ALWAYS** create new documentation files in =.org= format, not =.md= (markdown). - -*Rationale:* -- Org-mode files are well-supported in Emacs -- Can be easily exported to any other format (HTML, PDF, Markdown, etc.) -- Better integration with user's workflow - -*Exception:* Only use .md if specifically requested or if file is intended for GitHub/web display where markdown is expected. 
- -** NEVER Use Spaces in Filenames - -**ALWAYS** use hyphens (=-=) to separate words in filenames. Underscores (=_=) are also acceptable. - -*Rationale:* -- Spaces cause problems with links across different operating systems -- User works with Mac, Windows, Linux, and potentially other systems -- Hyphens create more reliable, portable filenames -- Easier to work with in command-line tools - -*Examples:* -- Good: =project-meeting-notes.org= -- Good: =change-log-2025-11-04.md= -- Bad: =project meeting notes.org= -- Bad: =change log 2025.11.04.md= - -* File Naming Conventions - -** Files Too Large to Read - -PDFs or other files that are too large for Claude to read should be prefixed with =TOOLARGE-= to prevent read errors that halt the session. - -Example: -- Original: =assets/large-architectural-plans.pdf= -- Renamed: =assets/TOOLARGE-large-architectural-plans.pdf= - -** Unreadable Binary Files (.docx Format) - -Binary .docx files cannot be read directly by Claude. When encountering these: -- Convert to markdown format using pandoc: =pandoc file.docx -o file.md= -- Keep the original .docx file for reference -- Work with the converted .md file for analysis and editing - -** CRITICAL: Always Keep Links Current - -Many documents are linked in org files using org-mode =file:= links. Craig relies on these links being valid at all times. - -**MANDATORY WORKFLOW - When renaming or moving ANY file:** - -1. **BEFORE renaming:** Search ALL org files for references to that file - - Use grep or search tools to find both filename and partial matches - - Check in TODO items and event log sections - -2. **Rename or move the file** - -3. **IMMEDIATELY AFTER:** Update ALL =file:= links to new path/filename - - Update links in task files - - Update links in event logs - - Update links in reference sections - -4. 
**Verify:** Test a few updated links to ensure they point to valid files - -Example workflow: -#+begin_example -# Step 1: Search before renaming -grep -rn "2025-10-15-invoice.pdf" *.org - -# Step 2: Rename the file -mv documents/2025-10-15-invoice.pdf documents/2025-10-15-vendor-invoice.pdf - -# Step 3: Update all references in affected .org files -# Edit to change: -# file:documents/2025-10-15-invoice.pdf -# to: -# file:documents/2025-10-15-vendor-invoice.pdf - -# Step 4: Verify links work -#+end_example - -*Why This is Critical:* -- Org files are primary task tracking and reference system -- Event logs document complete history with file references -- Craig depends on clicking links to access documents quickly -- Broken links disrupt workflow and make documentation unreliable - -**NEVER rename or move files without updating links in the same session.** - -* Session Start - AUTOMATIC - -At the start of EVERY session, run [[file:workflows/startup.org][docs/workflows/startup.org]]. Do NOT ask — just do it automatically. diff --git a/docs/research-btrfs-expansion.org b/docs/research-btrfs-expansion.org deleted file mode 100644 index 8a6c1a6..0000000 --- a/docs/research-btrfs-expansion.org +++ /dev/null @@ -1,451 +0,0 @@ -#+TITLE: Research: Expanding Project to Support Btrfs -#+DATE: 2026-01-23 -#+AUTHOR: Craig Jennings & Claude - -* Executive Summary - -This document covers research and architecture planning for expanding the archzfs -project to support both ZFS and Btrfs filesystems, providing equivalent functionality -for both. - -* Project Renaming - -Since "archzfs" implies ZFS-only, the project needs a new name. 
- -** Naming Options - -| Name | Pros | Cons | -|------+------+------| -| archsnap | Short, implies snapshots | Doesn't mention filesystems | -| archfs | Generic filesystem installer | Might be too generic | -| archroot | Implies root filesystem setup | Already used by other tools | -| arch-snap-install | Descriptive | Long | -| snaparch | Short, memorable | Sounds like "snap" package manager | -| archraid | Implies multi-disk | Doesn't mention snapshots | -| arch-resilient | Implies reliability | Long, vague | - -** Recommendation -*archsnap* - Short, memorable, emphasizes the snapshot capability that's the core -value proposition of both ZFS and Btrfs configurations. - -* Btrfs Equivalent Features - -** Feature Parity Matrix - -| Feature (ZFS) | Btrfs Equivalent | Tool/Method | -|---------------+------------------+-------------| -| ZFS native encryption | LUKS2 + dm-crypt | cryptsetup | -| zpool mirror | btrfs raid1 | mkfs.btrfs -d raid1 -m raid1 | -| zpool stripe | btrfs raid0 | mkfs.btrfs -d raid0 | -| zpool raidz1 | btrfs raid5 | NOT RECOMMENDED (unstable) | -| zpool raidz2 | btrfs raid6 | NOT RECOMMENDED (unstable) | -| ZFS datasets | Btrfs subvolumes | btrfs subvolume create | -| zfs snapshot | btrfs snapshot | btrfs subvolume snapshot | -| ZFSBootMenu | GRUB + grub-btrfs | grub-btrfs package | -| sanoid/syncoid | btrbk or snapper | snapper + snap-pac | -| zfs rollback | snapper rollback | snapper undochange | -| Pre-pacman snapshots | snap-pac | snap-pac + snapper | - -** Important Btrfs Limitations - -*** RAID 5/6 is Unstable -#+BEGIN_QUOTE -"Parity RAID (RAID 5/6) code has multiple serious data-loss bugs in it." --- Btrfs Wiki -#+END_QUOTE - -For multi-disk Btrfs, only offer: -- *raid1* (mirror) - Stable, recommended -- *raid10* (striped mirrors) - Requires 4+ disks -- *raid0* (stripe) - No redundancy, not recommended for root - -*** No Native Encryption -Btrfs doesn't have native encryption. 
Use LUKS2 as layer beneath: -#+BEGIN_SRC bash -cryptsetup luksFormat /dev/sdX -cryptsetup open /dev/sdX cryptroot -mkfs.btrfs /dev/mapper/cryptroot -#+END_SRC - -*** Bootloader Considerations -- *GRUB* is required for booting into snapshots (via grub-btrfs) -- *systemd-boot* can't boot from btrfs snapshots (kernel on FAT32 ESP) -- ZFSBootMenu equivalent: GRUB + grub-btrfs + grub-btrfsd daemon - -* Recommended Btrfs Subvolume Layout - -Based on research from multiple sources, this layout provides optimal snapshot -management and rollback capability: - -#+BEGIN_SRC -btrfs volume (LABEL=archsnap) -│ -├── @ (subvol) → mounted at / -├── @home (subvol) → mounted at /home -├── @snapshots (subvol) → mounted at /.snapshots -├── @var_log (subvol) → mounted at /var/log -├── @var_cache (subvol) → mounted at /var/cache -└── @var_tmp (subvol) → mounted at /var/tmp -#+END_SRC - -** Why This Layout? - -1. *@ for root* - Main system, snapshotted for rollbacks -2. *@home separate* - User data not affected by system rollbacks -3. *@snapshots separate* - Snapper requirement for snapshot storage -4. *@var_log separate* - Logs persist across rollbacks, required for read-only snapshots -5. *@var_cache separate* - Package cache excluded from snapshots (saves space) -6. *@var_tmp separate* - Temp files excluded from snapshots - -** Mount Options - -#+BEGIN_SRC bash -# Recommended mount options -BTRFS_OPTS="noatime,compress=zstd,space_cache=v2,discard=async" - -# Example fstab entries (NO subvolid - allows rollback) -UUID=xxx / btrfs $BTRFS_OPTS,subvol=@ 0 0 -UUID=xxx /home btrfs $BTRFS_OPTS,subvol=@home 0 0 -UUID=xxx /.snapshots btrfs $BTRFS_OPTS,subvol=@snapshots 0 0 -UUID=xxx /var/log btrfs $BTRFS_OPTS,subvol=@var_log 0 0 -UUID=xxx /var/cache btrfs $BTRFS_OPTS,subvol=@var_cache 0 0 -UUID=xxx /var/tmp btrfs $BTRFS_OPTS,subvol=@var_tmp 0 0 -#+END_SRC - -*CRITICAL*: Do NOT use subvolid= in fstab - it breaks rollbacks! 
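The fstab entries above are repetitive enough to generate; a sketch (the =fstab_line= helper and placeholder UUID are illustrative, not part of the installer):

```shell
# Emit one fstab line per subvolume, sharing the recommended mount options.
BTRFS_OPTS="noatime,compress=zstd,space_cache=v2,discard=async"
UUID="xxxxxxxx-xxxx"   # placeholder; get the real value from blkid
fstab_line() {  # $1 = subvolume name, $2 = mount point
  printf 'UUID=%s %s btrfs %s,subvol=%s 0 0\n' "$UUID" "$2" "$BTRFS_OPTS" "$1"
}
for pair in "@:/" "@home:/home" "@snapshots:/.snapshots" \
            "@var_log:/var/log" "@var_cache:/var/cache" "@var_tmp:/var/tmp"; do
  fstab_line "${pair%%:*}" "${pair#*:}"
done
```

Note the helper mounts by =subvol= name only, never =subvolid=, in keeping with the rollback warning above.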
- -* Snapshot Management Stack - -** Recommended: Snapper + snap-pac + grub-btrfs + btrfs-assistant - -| Component | Purpose | Package | -|-----------+---------+---------| -| snapper | Snapshot creation, retention, rollback | snapper | -| snap-pac | Automatic pre/post pacman snapshots | snap-pac | -| grub-btrfs | Bootable snapshot menu in GRUB | grub-btrfs | -| grub-btrfsd | Auto-update GRUB on snapshot changes | grub-btrfs (service) | -| btrfs-assistant | GUI for snapshot management | btrfs-assistant (AUR) | -| snapper-gui | Alternative lightweight GUI | snapper-gui (AUR) | - -** Alternative: btrbk - -btrbk is simpler than snapper and handles both snapshots and remote backups: -#+BEGIN_SRC conf -# /etc/btrbk.conf -snapshot_preserve_min 2d -snapshot_preserve 24h 7d 4w 12m - -volume / - snapshot_dir /.snapshots - subvolume @ -#+END_SRC - -** Retention Policy (Snapper) - -Default snapper timeline for root: -#+BEGIN_SRC conf -# /etc/snapper/configs/root -TIMELINE_CREATE="yes" -TIMELINE_CLEANUP="yes" -TIMELINE_MIN_AGE="1800" -TIMELINE_LIMIT_HOURLY="5" -TIMELINE_LIMIT_DAILY="7" -TIMELINE_LIMIT_WEEKLY="0" -TIMELINE_LIMIT_MONTHLY="0" -TIMELINE_LIMIT_YEARLY="0" -#+END_SRC - -Keeps: 5 hourly + 7 daily = ~12 snapshots at any time. - -* Multi-Disk Btrfs Setup - -** Mirror (2+ disks) - RECOMMENDED - -#+BEGIN_SRC bash -# Create mirrored filesystem -mkfs.btrfs -L archsnap -d raid1 -m raid1 /dev/sda2 /dev/sdb2 - -# Mount using ANY device (btrfs auto-discovers others) -mount /dev/sda2 /mnt - -# Create subvolumes -btrfs subvolume create /mnt/@ -btrfs subvolume create /mnt/@home -# ... etc -#+END_SRC - -** Stripe (2+ disks) - NOT RECOMMENDED for root - -#+BEGIN_SRC bash -# Striped data, mirrored metadata (default) -mkfs.btrfs -L archsnap -d raid0 -m raid1 /dev/sda2 /dev/sdb2 -#+END_SRC - -Metadata is mirrored by default even in raid0 mode for safety. 
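The data/metadata split can be confirmed after creation with =btrfs filesystem df=; a sketch that parses its output (hardcoded here as sample text, since no live filesystem is assumed):

```shell
# Sample `btrfs filesystem df` output for a -d raid0 -m raid1 filesystem.
sample='Data, RAID0: total=2.00GiB, used=512.00MiB
Metadata, RAID1: total=1.00GiB, used=112.00MiB
System, RAID1: total=32.00MiB, used=16.00KiB'
# Extract the allocation profile for the data and metadata block groups.
echo "$sample" | sed -nE 's/^(Data|Metadata), ([A-Za-z0-9]+):.*/\1 profile: \2/p'
```

Data showing RAID0 while metadata shows RAID1 confirms the default mirrored-metadata behavior described above.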
- -** Converting Single to Mirror - -#+BEGIN_SRC bash -# Add second device to existing filesystem -mount /dev/sda2 /mnt -btrfs device add /dev/sdb2 /mnt -btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt -#+END_SRC - -* Reference Repositories Cloned - -Located in =reference-repos/=: - -| Repository | Purpose | -|------------+---------| -| grub-btrfs | GRUB snapshot menu integration | -| easy-arch | Automated Arch+Btrfs installer | -| arch-btrfs-installation | Manual install guide | -| btrfs-assistant | GUI snapshot manager | -| snap-pac | Pacman hook for snapper | -| btrbk | Alternative snapshot/backup tool | -| buttermanager | GUI btrfs management | -| alis | Arch Linux Install Script (multi-fs) | - -* Installer Architecture Changes - -** Current Structure (ZFS-only) - -#+BEGIN_SRC -install-archzfs -├── Phase 1: Configuration gathering (fzf prompts) -├── Phase 2: Disk partitioning (EFI + ZFS) -├── Phase 3: ZFS pool/dataset creation -├── Phase 4: Pacstrap base system -├── Phase 5: System configuration -├── Phase 6: ZFSBootMenu installation -├── Phase 7: Genesis snapshot -└── Phase 8: Cleanup -#+END_SRC - -** Proposed Structure (Multi-filesystem) - -#+BEGIN_SRC -install-archsnap -├── Phase 1: Configuration gathering -│ ├── Hostname, timezone, locale, keymap -│ ├── Disk selection (multi-select) -│ ├── RAID level (if multi-disk) -│ ├── NEW: Filesystem selection (ZFS / Btrfs) -│ ├── Encryption passphrase -│ ├── Root password -│ ├── NEW: Initial user password -│ └── SSH configuration -│ -├── Phase 2: Disk partitioning -│ ├── EFI partition (same for both) -│ └── Root partition (type depends on FS choice) -│ -├── Phase 3: Filesystem creation -│ ├── IF ZFS: create pool + datasets -│ └── IF Btrfs: create volume + subvolumes -│ -├── Phase 4: Pacstrap base system -│ ├── Common packages -│ ├── IF ZFS: zfs-dkms, zfs-utils -│ └── IF Btrfs: btrfs-progs, snapper, snap-pac, grub-btrfs -│ -├── Phase 5: System configuration -│ ├── Common config (locale, timezone, hostname) -│ 
├── IF ZFS: ZFS-specific configs -│ └── IF Btrfs: snapper configs, grub-btrfs configs -│ -├── Phase 6: Bootloader installation -│ ├── IF ZFS: ZFSBootMenu -│ └── IF Btrfs: GRUB + grub-btrfs -│ -├── Phase 7: Genesis snapshot -│ ├── IF ZFS: zfs snapshot -r zroot@genesis -│ └── IF Btrfs: snapper create -c root -d "genesis" -│ -└── Phase 8: Cleanup -#+END_SRC - -** Code Organization - -#+BEGIN_SRC bash -install-archsnap -├── lib/ -│ ├── common.sh # Shared functions (colors, prompts, fzf) -│ ├── disk.sh # Partitioning (shared) -│ ├── zfs.sh # ZFS-specific functions -│ ├── btrfs.sh # Btrfs-specific functions -│ └── config.sh # Config file handling -├── configs/ -│ ├── snapper-root.conf -│ └── btrbk.conf.example -└── install-archsnap # Main script -#+END_SRC - -* Testing Strategy - -** VM Test Matrix - -- Single ZFS (1 disk): Install, boot, reboot, rollback+reboot -- Single Btrfs (1 disk): Install, boot, reboot, rollback+reboot -- Mirror ZFS (2 disk): Install, fail disk, recover -- Mirror Btrfs (2 disk, raid1): Install, fail disk, recover -- RAIDZ1 ZFS (3 disk): Install, fail disk, recover - -** Validation Checks - -*** Fresh Install (all configs) -- Partitions created correctly (EFI + root) -- Filesystem created (pool/subvols) -- All mount points accessible -- Packages installed (pacman -Q zfs-dkms or btrfs-progs) -- Services enabled (zfs.target or snapper-timeline.timer) -- Bootloader installed (ZFSBootMenu or GRUB + grub-btrfs) -- fstab correct (no subvolid for btrfs) -- Can boot without ISO - -*** Reboot Survival -Critical: Verify system survives reboot cleanly. 
Catches: -- Misconfigured services slamming CPU/network -- Services that fail to start properly -- ZFS import issues -- fstab mount failures - -Checks after reboot: -- System reaches login prompt -- All filesystems mounted (findmnt) -- No failed services (systemctl --failed) -- CPU/memory/network normal (no runaway processes) -- Can SSH in (if network configured) - -*** Snapshot Operations -- Manual snapshot creates successfully -- Snapshot appears in list (zfs list -t snap / snapper list) -- Pre-pacman snapshot created automatically (install package, verify) -- Snapshot visible in boot menu (ZFSBootMenu / GRUB) -- Can boot into snapshot - -*** Rollback + Reboot -Important: Rollback MUST include reboot to validate fully. -Do NOT use ZFSBootMenu's built-in rollback (known bug, needs filing). -Use zfsrollback script or snapper rollback instead. - -Checks: -- Rollback command completes without error -- Reboot completes successfully -- System state matches snapshot after reboot -- Previously installed test package is gone -- No orphaned/broken state - -*** Failure Recovery (multi-disk only) -- Pool/volume shows correct redundancy (zpool status / btrfs fi show) -- Survives single disk removal (zpool offline / btrfs device delete) -- Boots with degraded array -- Warnings displayed about degraded state -- Can resilver/rebuild after disk replaced (zpool replace / btrfs replace) -- Pool returns to healthy state - -*** Encryption -- Passphrase prompt appears at boot -- Correct unlock with right passphrase -- Fails gracefully with wrong passphrase (retry prompt, not panic) -- Data unreadable when mounted elsewhere without key -- Pool/volume not auto-imported without passphrase - -** Known Issues - -*** ZFSBootMenu Rollback Bug -ZFSBootMenu's built-in rollback feature has issues (to be filed). -Workaround: Use zfsrollback script from installed system or live ISO. 
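The reboot-survival checks above lend themselves to a small script; a sketch (the =check= helper is illustrative, and it reports rather than aborts so a degraded system still produces a full report):

```shell
# Minimal post-reboot health report: each probe prints OK or FAIL and the
# script always runs to completion, even on a degraded system.
check() { "$@" >/dev/null 2>&1 && echo "OK: $*" || echo "FAIL: $*"; }
check findmnt /                                          # root filesystem mounted
check sh -c '[ -z "$(systemctl --failed --no-legend 2>/dev/null)" ]'  # no failed units
```

Filesystem-specific probes (=zpool status -x= or =btrfs filesystem show=) would be added as further =check= lines in the real test harness.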
- -** Test Scripts - -#+BEGIN_SRC -scripts/ -├── test-zfs-single.sh -├── test-zfs-mirror.sh -├── test-zfs-raidz.sh -├── test-btrfs-single.sh -├── test-btrfs-mirror.sh -└── test-configs/ - ├── zfs-single.conf - ├── zfs-mirror.conf - ├── btrfs-single.conf - └── btrfs-mirror.conf -#+END_SRC - -* CLI and GUI Tools - -** CLI Tools to Install (Both Filesystems) - -| Tool | Purpose | -|------+---------| -| archsnap-snapshot | Create manual snapshot (wrapper) | -| archsnap-rollback | Interactive rollback (fzf-based) | -| archsnap-list | List snapshots | -| archsnap-prune | Manual pruning | - -These would be thin wrappers that detect the filesystem and call the appropriate -underlying tool (zfs/snapper/btrbk). - -** GUI Options - -*** For Btrfs -- *btrfs-assistant* (recommended) - Full-featured, actively maintained -- *snapper-gui* - Lighter weight alternative - -*** For ZFS -- No equivalent mature GUI exists -- Could build a simple zenity/yad wrapper - -* Implementation Phases - -** Phase 1: Refactor Current Installer -1. Extract common functions to lib/common.sh -2. Extract ZFS-specific code to lib/zfs.sh -3. Add filesystem selection prompt -4. Rename project to archsnap - -** Phase 2: Implement Btrfs Support -1. Create lib/btrfs.sh with btrfs-specific functions -2. Implement subvolume creation -3. Implement snapper configuration -4. Implement GRUB + grub-btrfs installation -5. Create btrfs genesis snapshot - -** Phase 3: Testing Infrastructure -1. Create test configs for all scenarios -2. Update test-vm.sh for btrfs testing -3. Create automated test suite -4. Test multi-disk configurations - -** Phase 4: CLI Tools -1. Create archsnap-snapshot wrapper -2. Create archsnap-rollback wrapper -3. Update documentation - -** Phase 5: Documentation & Polish -1. Update README for dual-filesystem support -2. Create BTRFS-specific documentation -3. 
Update troubleshooting guides - -* Sources - -** Btrfs Installation -- [[https://wiki.archlinux.org/title/Btrfs][Arch Wiki - Btrfs]] -- [[https://github.com/egara/arch-btrfs-installation][arch-btrfs-installation]] -- [[https://gist.github.com/mjkstra/96ce7a5689d753e7a6bdd92cdc169bae][Modern Arch+Btrfs Guide]] - -** Snapshot Management -- [[https://github.com/Antynea/grub-btrfs][grub-btrfs]] -- [[https://wiki.archlinux.org/title/Snapper][Arch Wiki - Snapper]] -- [[https://github.com/wesbarnett/snap-pac][snap-pac]] -- [[https://github.com/digint/btrbk][btrbk]] - -** GUI Tools -- [[https://gitlab.com/btrfs-assistant/btrfs-assistant][btrfs-assistant]] -- [[https://github.com/egara/buttermanager][buttermanager]] - -** Multi-Disk -- [[https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices.html][Btrfs Wiki - Multiple Devices]] -- [[https://thelinuxcode.com/set-up-btrfs-raid/][How to Set Up Btrfs RAID]] diff --git a/docs/research-sandreas-zarch.org b/docs/research-sandreas-zarch.org deleted file mode 100644 index 55bc77b..0000000 --- a/docs/research-sandreas-zarch.org +++ /dev/null @@ -1,365 +0,0 @@ -#+TITLE: Research: sandreas/zarch ZFSBootMenu Installation -#+DATE: 2026-01-22 -#+AUTHOR: Research Notes - -* Overview - -This document summarizes research on the [[https://github.com/sandreas/zarch][sandreas/zarch]] GitHub repository for -Arch Linux ZFS installation. The project uses ZFSBootMenu, native encryption, -and automatic snapshots via zrepl. - -* Project Philosophy - -sandreas/zarch is described as a "single, non-modular file with some minor -config profiles" - the author explicitly avoids a "modular multi-script beast." -This contrasts with our more modular approach but offers useful patterns. 
- -** Key Features -- ZFSBootMenu as bootloader (not GRUB) -- Native ZFS encryption (AES-256-GCM) -- Automatic snapshots via zrepl -- EFI-only (no BIOS support) -- Profile-based configuration - -* ZFSBootMenu Installation - -** Download and Install -#+begin_src bash -# Create EFI directory -mkdir -p /efi/EFI/ZBM - -# Download latest ZFSBootMenu EFI binary -wget -c https://get.zfsbootmenu.org/latest.EFI -O /efi/EFI/ZBM/ZFSBOOTMENU.EFI - -# Or use curl variant -curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi -#+end_src - -** EFI Boot Entry Registration -#+begin_src bash -efibootmgr --disk $DISK --part 1 \ - --create \ - --label "ZFSBootMenu" \ - --loader '\EFI\ZBM\ZFSBOOTMENU.EFI' \ - --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" \ - --verbose -#+end_src - -** Key ZFSBootMenu Parameters -| Parameter | Purpose | -|------------------------+------------------------------------------------| -| zbm.timeout=N | Seconds to wait before auto-booting default | -| zbm.prefer=POOL | Preferred pool for default boot environment | -| zbm.import_policy | Pool import strategy (hostid recommended) | -| zbm.skip | Skip menu and boot default immediately | -| zbm.show | Force menu display | -| spl_hostid=0xXXXXXXXX | Host ID for pool import validation | - -** Kernel Command Line for Boot Environments -#+begin_src bash -# Set inherited command line on ROOT dataset -zfs set org.zfsbootmenu:commandline="quiet loglevel=0" zroot/ROOT - -# Set pool bootfs property -zpool set bootfs=zroot/ROOT/arch zroot -#+end_src - -* Dataset Layout - -** zarch Dataset Structure -#+begin_example -$POOL mountpoint=none -$POOL/ROOT mountpoint=none (container for boot environments) -$POOL/ROOT/arch mountpoint=/, canmount=noauto (active root) -$POOL/home mountpoint=/home (shared across boot environments) -#+end_example - -** Comparison: Our archzfs Dataset Structure -#+begin_example -zroot mountpoint=none, canmount=off -zroot/ROOT 
mountpoint=none, canmount=off -zroot/ROOT/default mountpoint=/, canmount=noauto, reservation=5-20G -zroot/home mountpoint=/home -zroot/home/root mountpoint=/root -zroot/media mountpoint=/media, compression=off -zroot/vms mountpoint=/vms, recordsize=64K -zroot/var mountpoint=/var, canmount=off -zroot/var/log mountpoint=/var/log -zroot/var/cache mountpoint=/var/cache -zroot/var/lib mountpoint=/var/lib, canmount=off -zroot/var/lib/pacman mountpoint=/var/lib/pacman -zroot/var/lib/docker mountpoint=/var/lib/docker -zroot/var/tmp mountpoint=/var/tmp, auto-snapshot=false -zroot/tmp mountpoint=/tmp, auto-snapshot=false -#+end_example - -** Key Differences -- zarch: Minimal dataset layout (ROOT, home) -- archzfs: Fine-grained datasets with workload-specific tuning -- archzfs: Separate /var/log, /var/cache, /var/lib/docker -- archzfs: recordsize=64K for VM storage -- archzfs: compression=off for media (already compressed) - -* ZFS Pool Creation - -** zarch Pool Creation (with encryption) -#+begin_src bash -zpool create -f \ - -o ashift=12 \ - -O compression=lz4 \ - -O acltype=posixacl \ - -O xattr=sa \ - -O relatime=off \ - -O atime=off \ - -O encryption=aes-256-gcm \ - -O keylocation=prompt \ - -O keyformat=passphrase \ - -o autotrim=on \ - -m none \ - $POOL ${DISK}-part2 -#+end_src - -** Our archzfs Pool Creation (with encryption) -#+begin_src bash -zpool create -f \ - -o ashift="$ASHIFT" \ - -o autotrim=on \ - -O acltype=posixacl \ - -O atime=off \ - -O canmount=off \ - -O compression="$COMPRESSION" \ - -O dnodesize=auto \ - -O normalization=formD \ - -O relatime=on \ - -O xattr=sa \ - -O encryption=aes-256-gcm \ - -O keyformat=passphrase \ - -O keylocation=prompt \ - -O mountpoint=none \ - -R /mnt \ - "$POOL_NAME" $pool_config -#+end_src - -** Key Differences -| Option | zarch | archzfs | Notes | -|-----------------+-------------------+-----------------------+---------------------------------| -| compression | lz4 | zstd (configurable) | zstd better ratio, more CPU | -| 
atime | off | off | Same | -| relatime | off | on | archzfs uses relatime instead | -| dnodesize | (default) | auto | Better extended attribute perf | -| normalization | (default) | formD | Unicode consistency | - -* Snapshot Automation - -** zarch: zrepl Configuration - -zarch uses zrepl for automated snapshots with this retention grid: - -#+begin_example -1x1h(keep=4) | 24x1h(keep=1) | 7x1d(keep=1) | 4x1w(keep=1) | 12x4w(keep=1) | 1x53w(keep=1) -#+end_example - -This means: -- Keep 4 snapshots within the last hour -- Keep 1 snapshot per hour for 24 hours -- Keep 1 snapshot per day for 7 days -- Keep 1 snapshot per week for 4 weeks -- Keep 1 snapshot per 4 weeks for 12 periods (48 weeks) -- Keep 1 snapshot per year - -#+begin_src yaml -# Example zrepl.yml structure -jobs: - - name: snapjob - type: snap - filesystems: - "zroot<": true - snapshotting: - type: periodic - interval: 15m - prefix: zrepl_ - pruning: - keep: - - type: grid - grid: 1x1h(keep=all) | 24x1h | 14x1d - regex: "^zrepl_.*" - - type: regex - negate: true - regex: "^zrepl_.*" -#+end_src - -** archzfs: Pacman Hook Approach - -Our approach uses pre-transaction snapshots: -#+begin_src bash -# /etc/pacman.d/hooks/zfs-snapshot.hook -[Trigger] -Operation = Upgrade -Operation = Install -Operation = Remove -Type = Package -Target = * - -[Action] -Description = Creating ZFS snapshot before pacman transaction... 
-When = PreTransaction -Exec = /usr/local/bin/zfs-pre-snapshot -#+end_src - -** Comparison: Snapshot Approaches -| Feature | zrepl (zarch) | Pacman Hook (archzfs) | -|-------------------+--------------------------+------------------------------| -| Trigger | Time-based (15 min) | Event-based (pacman) | -| Retention | Complex grid policy | Manual or sanoid | -| Granularity | High (frequent) | Package transaction focused | -| Recovery Point | ~15 minutes | Last package operation | -| Storage overhead | Higher (more snapshots) | Lower (fewer snapshots) | - -** Alternative: sanoid (mentioned in archzfs) -Sanoid provides similar functionality to zrepl with simpler configuration: -#+begin_src ini -# /etc/sanoid/sanoid.conf -[zroot/ROOT/default] -use_template = production -recursive = yes - -[template_production] -frequently = 0 -hourly = 24 -daily = 7 -weekly = 4 -monthly = 12 -yearly = 1 -autosnap = yes -autoprune = yes -#+end_src - -* EFI and Boot Partition Strategy - -** zarch: 512MB EFI, ZFSBootMenu -- Single 512MB EFI partition (type EF00) -- ZFSBootMenu EFI binary downloaded from upstream -- No GRUB, no separate boot partition on ZFS -- Kernel/initramfs stored on ZFS root (ZFSBootMenu reads them) - -** archzfs: 1GB EFI, GRUB with ZFS Support -- 1GB EFI partition per disk -- GRUB with ZFS module for pool access -- Redundant EFI partitions synced via rsync -- Boot files in EFI partition (not ZFS) - -** Trade-offs - -| Aspect | ZFSBootMenu | GRUB + ZFS | -|---------------------+--------------------------------+------------------------------| -| Boot environment | Native (designed for ZFS) | Requires ZFS module | -| Snapshot booting | Built-in, interactive | Custom GRUB menu entries | -| Encryption | Prompts for key automatically | More complex setup | -| EFI space needed | Minimal (~512MB) | Larger (kernel/initramfs) | -| Complexity | Simpler (single binary) | More moving parts | -| Recovery | Can browse/rollback at boot | Requires grub.cfg regen | - -* Pacman Hooks 
and Systemd Services - -** zarch Services -#+begin_example -zfs-import-cache -zfs-import.target -zfs-mount -zfs-zed -zfs.target -set-locale-once.service (custom first-boot locale config) -#+end_example - -** archzfs Services -#+begin_example -zfs.target -zfs-import-scan.service (instead of cache-based) -zfs-mount.service -zfs-import.target -NetworkManager -avahi-daemon -sshd -#+end_example - -** Key Difference: Import Method -- zarch: Uses zfs-import-cache (requires cachefile) -- archzfs: Uses zfs-import-scan (scans with blkid, no cachefile needed) - -The scan method is simpler and more portable (works if moving disks between -systems). - -* mkinitcpio Configuration - -** zarch Approach -#+begin_src bash -sed -i '/^HOOKS=/s/block filesystems/block zfs filesystems/g' /etc/mkinitcpio.conf -#+end_src - -** archzfs Approach -#+begin_src bash -HOOKS=(base udev microcode modconf kms keyboard keymap consolefont block zfs filesystems) -#+end_src - -** Important Notes -- Both use busybox-based udev (not systemd hook) -- archzfs explicitly removes autodetect to ensure all storage drivers included -- archzfs removes fsck (ZFS doesn't use it) -- archzfs includes microcode early loading - -* Useful Patterns to Consider - -** 1. Profile-Based Configuration -zarch uses a profile directory system: -#+begin_example -default/ - archpkg.txt # Official packages - aurpkg.txt # AUR packages - services.txt # Services to enable - zarch.conf # Core configuration - custom-chroot.sh # Custom post-install -#+end_example - -This allows maintaining multiple configurations (desktop, server, VM) cleanly. - -** 2. ZFSBootMenu for Simpler Boot -For future consideration: -- Native ZFS boot environment support -- Interactive snapshot selection at boot -- Simpler encryption key handling -- Smaller EFI partition needs - -** 3. 
zrepl for Time-Based Snapshots -For systems needing frequent snapshots beyond pacman transactions: -- 15-minute intervals for development machines -- Complex retention policies -- Replication to remote systems - -** 4. AUR Helper Installation Pattern -#+begin_src bash -# Build yay as regular user, install as root -su -c "git clone https://aur.archlinux.org/yay-bin.git" "$USER_NAME" -arch-chroot -u "$USER_NAME" /mnt makepkg -D /home/$USER_NAME/yay-bin -s -pacman -U --noconfirm yay-bin-*.pkg.tar.* -#+end_src - -* References - -- [[https://github.com/sandreas/zarch][sandreas/zarch GitHub Repository]] -- [[https://zfsbootmenu.org/][ZFSBootMenu Official Site]] -- [[https://docs.zfsbootmenu.org/en/latest/][ZFSBootMenu Documentation]] -- [[https://zrepl.github.io/][zrepl Documentation]] -- [[https://wiki.archlinux.org/title/ZFS][Arch Wiki: ZFS]] -- [[https://github.com/acrion/zfs-autosnap][zfs-autosnap - Pre-upgrade Snapshots]] -- [[https://aur.archlinux.org/packages/pacman-zfs-hook][pacman-zfs-hook AUR Package]] -- [[https://florianesser.ch/posts/20220714-arch-install-zbm/][Guide: Install Arch Linux on encrypted zpool with ZFSBootMenu]] - -* Action Items for archzfs - -Based on this research, potential improvements: - -1. [ ] Consider adding ZFSBootMenu as alternative bootloader option -2. [ ] Evaluate zrepl for systems needing frequent time-based snapshots -3. [ ] Document the grub-zfs-snap vs ZFSBootMenu trade-offs -4. [ ] Consider profile-based configuration for different use cases -5. [ ] Add sanoid configuration to archsetup for automated snapshot retention diff --git a/docs/retrospectives/2026-01-22-ratio-boot-fix.org b/docs/retrospectives/2026-01-22-ratio-boot-fix.org deleted file mode 100644 index 430a03d..0000000 --- a/docs/retrospectives/2026-01-22-ratio-boot-fix.org +++ /dev/null @@ -1,45 +0,0 @@ -#+TITLE: Retrospective: Ratio Boot Fix -#+DATE: 2026-01-22 - -* Summary - -Diagnosed and fixed boot failures on ratio (Framework Desktop, AMD Strix Halo). 
-Root cause: missing linux-firmware 20260110. Secondary issues from chroot cleanup mistakes. - -See [[file:../2026-01-22-ratio-boot-fix-session.org][full session doc]] for technical details. - -* What Went Well - -- Systematic diagnosis - isolated variables (firmware vs kernel vs ZFS) -- External research - video transcript and community posts gave us the firmware fix -- Documentation - captured everything in session doc for future reference -- Collaboration sync - after feedback, stayed in step and confirmed each action -- ZFS snapshots - rollback capability enabled safe experimentation - -* What Didn't Go Well - -- Jumped ahead repeatedly - rebooting without confirming, running commands without checking in -- Chroot cleanup mistakes - left mountpoint=legacy and /mnt prefixes causing boot failures -- Wrong assumptions - initially assumed kernel 6.15 was the fix; firmware was the real issue -- UUID mistake - used wrong boot partition UUID (didn't account for mirrored NVMe) -- SSH debugging waste - spent time on sshpass issues when keys would have been simpler - -* Behavioral Lessons (Added to PRINCIPLES.org) - -1. *Sync Before Action* - Confirm before destructive actions, wait for go-ahead -2. *Clean Up After Yourself* - Reset mountpoints after chroot, verify before export -3. *Verify Assumptions* - When "should work" doesn't, question the assumption -4. 
*Patience Over Speed* - "Wait, wait, wait" improved our effectiveness - -* What Would We Do Differently - -- Set up SSH keys at start of remote troubleshooting session -- Create a chroot cleanup checklist and follow it every time -- State the plan and wait for confirmation before each reboot -- Test one fix at a time instead of stacking changes - -* Action Items - -- [X] Create PRINCIPLES.org with behavioral lessons -- [X] Add retrospective workflow to protocols -- [X] Document session for future reference diff --git a/docs/scripts/eml-view-and-extract-attachments-readme.org b/docs/scripts/eml-view-and-extract-attachments-readme.org deleted file mode 100644 index c132df8..0000000 --- a/docs/scripts/eml-view-and-extract-attachments-readme.org +++ /dev/null @@ -1,47 +0,0 @@ -#+TITLE: eml-view-and-extract-attachments.py - -Extract email content and attachments from EML files with auto-renaming. - -* Usage - -#+begin_src bash -# View mode — print metadata and body to stdout, extract attachments alongside EML -python3 docs/scripts/eml-view-and-extract-attachments.py inbox/message.eml - -# Pipeline mode — extract, auto-rename, refile to output dir, clean up -python3 docs/scripts/eml-view-and-extract-attachments.py inbox/message.eml --output-dir assets/ -#+end_src - -* Naming Convention - -Files are auto-renamed as =YYYY-MM-DD-HHMM-Sender-TYPE-Description.ext=: - -- =2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street.eml= -- =2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street.txt= -- =2026-02-05-1136-Jonathan-ATTACH-Ltr-Carrollton.pdf= - -Date and sender are parsed from email headers. Falls back to "unknown" for missing values. - -* Dependencies - -- Python 3 (stdlib only for core functionality) -- =html2text= (optional — used for HTML-only emails, falls back to tag stripping) - -* Pipeline Mode Behavior - -1. Creates a temp directory alongside the source EML -2. Copies and renames the EML, writes a =.txt= of the body, extracts attachments -3. 
Checks for filename collisions in the output directory -4. Moves all files to the output directory -5. Cleans up the temp directory -6. Prints a summary of created files - -Source EML is never modified or moved. - -* Tests - -#+begin_src bash -python3 -m pytest docs/scripts/tests/ -v -#+end_src - -48 tests: unit tests for parsing, filename generation, and attachment saving; integration tests for both pipeline and stdout modes. Requires =pytest=. diff --git a/docs/scripts/eml-view-and-extract-attachments.py b/docs/scripts/eml-view-and-extract-attachments.py deleted file mode 100644 index 3201c99..0000000 --- a/docs/scripts/eml-view-and-extract-attachments.py +++ /dev/null @@ -1,398 +0,0 @@ -#!/usr/bin/env python3 -"""Extract email content and attachments from EML files. - -Without --output-dir: parse and print to stdout (backwards compatible). -With --output-dir: full pipeline — extract, auto-rename, refile, clean up. -""" - -import argparse -import email -import email.utils -import os -import re -import shutil -import sys -import tempfile - - -# --------------------------------------------------------------------------- -# Parsing functions (no I/O beyond reading the input file) -# --------------------------------------------------------------------------- - -def parse_received_headers(msg): - """Parse Received headers to extract sent/received times and servers.""" - received_headers = msg.get_all('Received', []) - - sent_server = None - sent_time = None - received_server = None - received_time = None - - for header in received_headers: - header = ' '.join(header.split()) - - time_match = re.search(r';\s*(.+)$', header) - timestamp = time_match.group(1).strip() if time_match else None - - from_match = re.search(r'from\s+([\w.-]+)', header) - by_match = re.search(r'by\s+([\w.-]+)', header) - - if from_match and by_match and received_server is None: - received_time = timestamp - received_server = by_match.group(1) - sent_server = from_match.group(1) - sent_time = 
timestamp - - if received_server is None and received_headers: - header = ' '.join(received_headers[0].split()) - time_match = re.search(r';\s*(.+)$', header) - received_time = time_match.group(1).strip() if time_match else None - by_match = re.search(r'by\s+([\w.-]+)', header) - received_server = by_match.group(1) if by_match else "unknown" - - return { - 'sent_time': sent_time, - 'sent_server': sent_server, - 'received_time': received_time, - 'received_server': received_server - } - - -def extract_body(msg): - """Walk MIME parts, prefer text/plain, fall back to html2text on text/html. - - Returns body text string. - """ - plain_text = None - html_text = None - - for part in msg.walk(): - content_type = part.get_content_type() - if content_type == "text/plain" and plain_text is None: - payload = part.get_payload(decode=True) - if payload is not None: - plain_text = payload.decode('utf-8', errors='ignore') - elif content_type == "text/html" and html_text is None: - payload = part.get_payload(decode=True) - if payload is not None: - html_text = payload.decode('utf-8', errors='ignore') - - if plain_text is not None: - return plain_text - - if html_text is not None: - try: - import html2text - h = html2text.HTML2Text() - h.body_width = 0 - return h.handle(html_text) - except ImportError: - # Strip HTML tags as fallback if html2text not installed - return re.sub(r'<[^>]+>', '', html_text) - - return "" - - -def extract_metadata(msg): - """Extract email metadata from headers. - - Returns dict with from, to, subject, date, and timing info. - """ - return { - 'from': msg.get('From'), - 'to': msg.get('To'), - 'subject': msg.get('Subject'), - 'date': msg.get('Date'), - 'timing': parse_received_headers(msg), - } - - -def generate_basename(metadata): - """Generate date-sender prefix from metadata. - - Returns e.g. "2026-02-05-1136-Jonathan". - Falls back to "unknown" for missing/malformed Date or From. 
- """ - # Parse date - date_str = metadata.get('date') - date_prefix = "unknown" - if date_str: - try: - parsed = email.utils.parsedate_to_datetime(date_str) - date_prefix = parsed.strftime('%Y-%m-%d-%H%M') - except (ValueError, TypeError): - pass - - # Parse sender first name - from_str = metadata.get('from') - sender = "unknown" - if from_str: - # Extract display name or email local part - display_name, addr = email.utils.parseaddr(from_str) - if display_name: - sender = display_name.split()[0] - elif addr: - sender = addr.split('@')[0] - - return f"{date_prefix}-{sender}" - - -def _clean_for_filename(text, max_length=80): - """Clean text for use in a filename. - - Replace spaces with hyphens, strip chars unsafe for filenames, - collapse multiple hyphens. - """ - text = text.strip() - text = text.replace(' ', '-') - # Keep alphanumeric, hyphens, dots, underscores - text = re.sub(r'[^\w\-.]', '', text) - # Collapse multiple hyphens - text = re.sub(r'-{2,}', '-', text) - # Strip leading/trailing hyphens - text = text.strip('-') - if len(text) > max_length: - text = text[:max_length].rstrip('-') - return text - - -def generate_email_filename(basename, subject): - """Generate email filename from basename and subject. - - Returns e.g. "2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street" - (without extension — caller adds .eml or .txt). - """ - if subject: - clean_subject = _clean_for_filename(subject) - else: - clean_subject = "no-subject" - return f"{basename}-EMAIL-{clean_subject}" - - -def generate_attachment_filename(basename, original_filename): - """Generate attachment filename from basename and original filename. - - Returns e.g. "2026-02-05-1136-Jonathan-ATTACH-Ltr-Carrollton.pdf". - Preserves original extension. 
- """ - if not original_filename: - return f"{basename}-ATTACH-unnamed" - - name, ext = os.path.splitext(original_filename) - clean_name = _clean_for_filename(name) - return f"{basename}-ATTACH-{clean_name}{ext}" - - -# --------------------------------------------------------------------------- -# I/O functions (file operations) -# --------------------------------------------------------------------------- - -def save_attachments(msg, output_dir, basename): - """Write attachment files to output_dir with auto-renamed filenames. - - Returns list of dicts: {original_name, renamed_name, path}. - """ - results = [] - for part in msg.walk(): - if part.get_content_maintype() == 'multipart': - continue - if part.get('Content-Disposition') is None: - continue - - filename = part.get_filename() - if filename: - renamed = generate_attachment_filename(basename, filename) - filepath = os.path.join(output_dir, renamed) - with open(filepath, 'wb') as f: - f.write(part.get_payload(decode=True)) - results.append({ - 'original_name': filename, - 'renamed_name': renamed, - 'path': filepath, - }) - - return results - - -def save_text(text, filepath): - """Write body text to a .txt file.""" - with open(filepath, 'w', encoding='utf-8') as f: - f.write(text) - - -# --------------------------------------------------------------------------- -# Pipeline function -# --------------------------------------------------------------------------- - -def process_eml(eml_path, output_dir): - """Full extraction pipeline. - - 1. Create temp extraction dir - 2. Copy EML into temp dir - 3. Parse email (metadata, body, attachments) - 4. Generate filenames from headers - 5. Save renamed .eml, .txt, and attachments to temp dir - 6. Check for collisions in output_dir - 7. Move all files to output_dir - 8. Clean up temp dir - 9. 
Return results dict - """ - eml_path = os.path.abspath(eml_path) - output_dir = os.path.abspath(output_dir) - os.makedirs(output_dir, exist_ok=True) - - # Create temp dir as sibling of the EML file - eml_dir = os.path.dirname(eml_path) - temp_dir = tempfile.mkdtemp(prefix='extract-', dir=eml_dir) - - try: - # Copy EML to temp dir - temp_eml = os.path.join(temp_dir, os.path.basename(eml_path)) - shutil.copy2(eml_path, temp_eml) - - # Parse - with open(eml_path, 'rb') as f: - msg = email.message_from_binary_file(f) - - metadata = extract_metadata(msg) - body = extract_body(msg) - basename = generate_basename(metadata) - email_stem = generate_email_filename(basename, metadata['subject']) - - # Save renamed EML - renamed_eml = f"{email_stem}.eml" - renamed_eml_path = os.path.join(temp_dir, renamed_eml) - os.rename(temp_eml, renamed_eml_path) - - # Save .txt - renamed_txt = f"{email_stem}.txt" - renamed_txt_path = os.path.join(temp_dir, renamed_txt) - save_text(body, renamed_txt_path) - - # Save attachments - attachment_results = save_attachments(msg, temp_dir, basename) - - # Build file list - files = [ - {'type': 'eml', 'name': renamed_eml, 'path': None}, - {'type': 'txt', 'name': renamed_txt, 'path': None}, - ] - for att in attachment_results: - files.append({ - 'type': 'attach', - 'name': att['renamed_name'], - 'path': None, - }) - - # Check for collisions in output_dir - for file_info in files: - dest = os.path.join(output_dir, file_info['name']) - if os.path.exists(dest): - raise FileExistsError( - f"Collision: '{file_info['name']}' already exists in {output_dir}" - ) - - # Move all files to output_dir - for file_info in files: - src = os.path.join(temp_dir, file_info['name']) - dest = os.path.join(output_dir, file_info['name']) - shutil.move(src, dest) - file_info['path'] = dest - - return { - 'metadata': metadata, - 'body': body, - 'files': files, - } - - finally: - # Clean up temp dir - if os.path.exists(temp_dir): - shutil.rmtree(temp_dir) - - -# 
--------------------------------------------------------------------------- -# Stdout display (backwards-compatible mode) -# --------------------------------------------------------------------------- - -def print_email(eml_path): - """Parse and print email to stdout. Extract attachments alongside EML. - - This preserves the original script behavior when --output-dir is not given. - """ - with open(eml_path, 'rb') as f: - msg = email.message_from_binary_file(f) - - metadata = extract_metadata(msg) - body = extract_body(msg) - timing = metadata['timing'] - - print(f"From: {metadata['from']}") - print(f"To: {metadata['to']}") - print(f"Subject: {metadata['subject']}") - print(f"Date: {metadata['date']}") - print(f"Sent: {timing['sent_time']} (via {timing['sent_server']})") - print(f"Received: {timing['received_time']} (at {timing['received_server']})") - print() - print(body) - print() - - # Extract attachments alongside the EML file - for part in msg.walk(): - if part.get_content_maintype() == 'multipart': - continue - if part.get('Content-Disposition') is None: - continue - - filename = part.get_filename() - if filename: - filepath = os.path.join(os.path.dirname(eml_path), filename) - with open(filepath, 'wb') as f: - f.write(part.get_payload(decode=True)) - print(f"Extracted attachment: {filename}") - - -def print_pipeline_summary(result): - """Print summary after pipeline extraction.""" - metadata = result['metadata'] - timing = metadata['timing'] - - print(f"From: {metadata['from']}") - print(f"To: {metadata['to']}") - print(f"Subject: {metadata['subject']}") - print(f"Date: {metadata['date']}") - print(f"Sent: {timing['sent_time']} (via {timing['sent_server']})") - print(f"Received: {timing['received_time']} (at {timing['received_server']})") - print() - print("Files created:") - for f in result['files']: - print(f" [{f['type']:>6}] {f['name']}") - print(f"\nOutput directory: {os.path.dirname(result['files'][0]['path'])}") - - -# 
--------------------------------------------------------------------------- -# CLI -# --------------------------------------------------------------------------- - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Extract email content and attachments from EML files." - ) - parser.add_argument('eml_path', help="Path to source EML file") - parser.add_argument( - '--output-dir', - help="Destination directory for extracted files. " - "Without this flag, prints to stdout only (backwards compatible)." - ) - - args = parser.parse_args() - - if not os.path.isfile(args.eml_path): - print(f"Error: '{args.eml_path}' not found or is not a file.", file=sys.stderr) - sys.exit(1) - - if args.output_dir: - result = process_eml(args.eml_path, args.output_dir) - print_pipeline_summary(result) - else: - print_email(args.eml_path) diff --git a/docs/scripts/maildir-flag-manager.py b/docs/scripts/maildir-flag-manager.py deleted file mode 100755 index 9c4a59c..0000000 --- a/docs/scripts/maildir-flag-manager.py +++ /dev/null @@ -1,345 +0,0 @@ -#!/usr/bin/env python3 -"""Manage maildir flags (read, starred) across email accounts. - -Uses atomic os.rename() for flag operations directly on maildir files. -Safer and more reliable than shell-based approaches (zsh loses PATH in -while-read loops, piped mu move silently fails). - -Supports the same flag semantics as mu4e: maildir files in new/ are moved -to cur/ when the Seen flag is added, and flag changes are persisted to the -filesystem so mbsync picks them up on the next sync. 
- -Usage: - # Mark all unread INBOX emails as read - maildir-flag-manager.py mark-read - - # Mark specific emails as read (by path) - maildir-flag-manager.py mark-read /path/to/message1 /path/to/message2 - - # Mark all unread INBOX emails as read, then reindex mu - maildir-flag-manager.py mark-read --reindex - - # Star specific emails (by path) - maildir-flag-manager.py star /path/to/message1 /path/to/message2 - - # Star and mark read - maildir-flag-manager.py star --mark-read /path/to/message1 - - # Dry run — show what would change without modifying anything - maildir-flag-manager.py mark-read --dry-run -""" - -import argparse -import os -import shutil -import subprocess -import sys - - -# --------------------------------------------------------------------------- -# Configuration -# --------------------------------------------------------------------------- - -MAILDIR_ACCOUNTS = { - 'gmail': os.path.expanduser('~/.mail/gmail/INBOX'), - 'cmail': os.path.expanduser('~/.mail/cmail/Inbox'), -} - - -# --------------------------------------------------------------------------- -# Core flag operations -# --------------------------------------------------------------------------- - -def parse_maildir_flags(filename): - """Extract flags from a maildir filename. - - Maildir filenames follow the pattern: unique:2,FLAGS - where FLAGS is a sorted string of flag characters (e.g., "FS" for - Flagged+Seen). - - Returns (base, flags_string). If no flags section, returns (filename, ''). - """ - if ':2,' in filename: - base, flags = filename.rsplit(':2,', 1) - return base, flags - return filename, '' - - -def build_flagged_filename(filename, new_flags): - """Build a maildir filename with the given flags. - - Flags are always sorted alphabetically per maildir spec. 
- """ - base, _ = parse_maildir_flags(filename) - sorted_flags = ''.join(sorted(set(new_flags))) - return f"{base}:2,{sorted_flags}" - - -def rename_with_flag(file_path, flag, dry_run=False): - """Add a flag to a single maildir message file via atomic rename. - - Handles moving from new/ to cur/ when adding the Seen flag. - Returns True if the flag was added, False if already present. - """ - dirname = os.path.dirname(file_path) - filename = os.path.basename(file_path) - maildir_root = os.path.dirname(dirname) - subdir = os.path.basename(dirname) - - _, current_flags = parse_maildir_flags(filename) - - if flag in current_flags: - return False - - new_flags = current_flags + flag - new_filename = build_flagged_filename(filename, new_flags) - - # Messages with the Seen flag belong in cur/, not new/ - if 'S' in new_flags and subdir == 'new': - target_dir = os.path.join(maildir_root, 'cur') - else: - target_dir = dirname - - new_path = os.path.join(target_dir, new_filename) - - if dry_run: - return True - - os.rename(file_path, new_path) - return True - - -def process_maildir(maildir_path, flag, dry_run=False): - """Add a flag to all messages in a maildir that don't have it. - - Scans both new/ and cur/ subdirectories. - Returns (changed_count, skipped_count, error_count). 
- """ - if not os.path.isdir(maildir_path): - print(f" Skipping {maildir_path} (not found)", file=sys.stderr) - return 0, 0, 0 - - changed = 0 - skipped = 0 - errors = 0 - - for subdir in ('new', 'cur'): - subdir_path = os.path.join(maildir_path, subdir) - if not os.path.isdir(subdir_path): - continue - - for filename in os.listdir(subdir_path): - file_path = os.path.join(subdir_path, filename) - if not os.path.isfile(file_path): - continue - - try: - if rename_with_flag(file_path, flag, dry_run): - changed += 1 - else: - skipped += 1 - except Exception as e: - print(f" Error on {file_path}: {e}", file=sys.stderr) - errors += 1 - - return changed, skipped, errors - - -def process_specific_files(paths, flag, dry_run=False): - """Add a flag to specific message files by path. - - Returns (changed_count, skipped_count, error_count). - """ - changed = 0 - skipped = 0 - errors = 0 - - for path in paths: - path = os.path.abspath(path) - if not os.path.isfile(path): - print(f" File not found: {path}", file=sys.stderr) - errors += 1 - continue - - # Verify file is inside a maildir (parent should be cur/ or new/) - parent_dir = os.path.basename(os.path.dirname(path)) - if parent_dir not in ('cur', 'new'): - print(f" Not in a maildir cur/ or new/ dir: {path}", - file=sys.stderr) - errors += 1 - continue - - try: - if rename_with_flag(path, flag, dry_run): - changed += 1 - else: - skipped += 1 - except Exception as e: - print(f" Error on {path}: {e}", file=sys.stderr) - errors += 1 - - return changed, skipped, errors - - -def reindex_mu(): - """Run mu index to update the database after flag changes.""" - mu_path = shutil.which('mu') - if not mu_path: - print("Warning: mu not found in PATH, skipping reindex", - file=sys.stderr) - return False - - try: - result = subprocess.run( - [mu_path, 'index'], - capture_output=True, text=True, timeout=120 - ) - if result.returncode == 0: - print("mu index: database updated") - return True - else: - print(f"mu index failed: 
{result.stderr}", file=sys.stderr) - return False - except subprocess.TimeoutExpired: - print("mu index timed out after 120s", file=sys.stderr) - return False - - -# --------------------------------------------------------------------------- -# Commands -# --------------------------------------------------------------------------- - -def cmd_mark_read(args): - """Mark emails as read (add Seen flag).""" - flag = 'S' - action = "Marking as read" - if args.dry_run: - action = "Would mark as read" - - total_changed = 0 - total_skipped = 0 - total_errors = 0 - - if args.paths: - print(f"{action}: {len(args.paths)} specific message(s)") - c, s, e = process_specific_files(args.paths, flag, args.dry_run) - total_changed += c - total_skipped += s - total_errors += e - else: - for name, maildir_path in MAILDIR_ACCOUNTS.items(): - print(f"{action} in {name} ({maildir_path})") - c, s, e = process_maildir(maildir_path, flag, args.dry_run) - total_changed += c - total_skipped += s - total_errors += e - if c > 0: - print(f" {c} message(s) marked as read") - if s > 0: - print(f" {s} already read") - - print(f"\nTotal: {total_changed} changed, {total_skipped} already set, " - f"{total_errors} errors") - - if args.reindex and not args.dry_run and total_changed > 0: - reindex_mu() - - return 0 if total_errors == 0 else 1 - - -def cmd_star(args): - """Star/flag emails (add Flagged flag).""" - flag = 'F' - action = "Starring" - if args.dry_run: - action = "Would star" - - if not args.paths: - print("Error: star requires specific message paths", file=sys.stderr) - return 1 - - print(f"{action}: {len(args.paths)} message(s)") - total_changed = 0 - total_skipped = 0 - total_errors = 0 - - c, s, e = process_specific_files(args.paths, flag, args.dry_run) - total_changed += c - total_skipped += s - total_errors += e - - # Also mark as read if requested - if args.mark_read: - print("Also marking as read...") - c2, _, e2 = process_specific_files(args.paths, 'S', args.dry_run) - total_changed 
+= c2 - total_errors += e2 - - print(f"\nTotal: {total_changed} flag(s) changed, {total_skipped} already set, " - f"{total_errors} errors") - - if args.reindex and not args.dry_run and total_changed > 0: - reindex_mu() - - return 0 if total_errors == 0 else 1 - - -# --------------------------------------------------------------------------- -# CLI -# --------------------------------------------------------------------------- - -def main(): - parser = argparse.ArgumentParser( - description="Manage maildir flags (read, starred) across email accounts." - ) - subparsers = parser.add_subparsers(dest='command', required=True) - - # mark-read - p_read = subparsers.add_parser( - 'mark-read', - help="Mark emails as read (add Seen flag)" - ) - p_read.add_argument( - 'paths', nargs='*', - help="Specific message file paths. If omitted, marks all unread " - "messages in configured INBOX maildirs." - ) - p_read.add_argument( - '--reindex', action='store_true', - help="Run mu index after changing flags" - ) - p_read.add_argument( - '--dry-run', action='store_true', - help="Show what would change without modifying anything" - ) - p_read.set_defaults(func=cmd_mark_read) - - # star - p_star = subparsers.add_parser( - 'star', - help="Star/flag emails (add Flagged flag)" - ) - p_star.add_argument( - 'paths', nargs='+', - help="Message file paths to star" - ) - p_star.add_argument( - '--mark-read', action='store_true', - help="Also mark starred messages as read" - ) - p_star.add_argument( - '--reindex', action='store_true', - help="Run mu index after changing flags" - ) - p_star.add_argument( - '--dry-run', action='store_true', - help="Show what would change without modifying anything" - ) - p_star.set_defaults(func=cmd_star) - - args = parser.parse_args() - sys.exit(args.func(args)) - - -if __name__ == '__main__': - main() diff --git a/docs/scripts/tests/conftest.py b/docs/scripts/tests/conftest.py deleted file mode 100644 index 8d965ab..0000000 --- a/docs/scripts/tests/conftest.py 
+++ /dev/null @@ -1,77 +0,0 @@ -"""Shared fixtures for EML extraction tests.""" - -import os -from email.message import EmailMessage -from email.mime.application import MIMEApplication -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText - -import pytest - - -@pytest.fixture -def fixtures_dir(): - """Return path to the fixtures/ directory.""" - return os.path.join(os.path.dirname(__file__), 'fixtures') - - -def make_plain_message(body="Test body", from_="Jonathan Smith ", - to="Craig ", - subject="Test Subject", - date="Wed, 05 Feb 2026 11:36:00 -0600"): - """Create an EmailMessage with text/plain body.""" - msg = EmailMessage() - msg['From'] = from_ - msg['To'] = to - msg['Subject'] = subject - msg['Date'] = date - msg.set_content(body) - return msg - - -def make_html_message(html_body="
<html><body><p>Test body</p></body></html>
", - from_="Jonathan Smith ", - to="Craig ", - subject="Test Subject", - date="Wed, 05 Feb 2026 11:36:00 -0600"): - """Create an EmailMessage with text/html body only.""" - msg = EmailMessage() - msg['From'] = from_ - msg['To'] = to - msg['Subject'] = subject - msg['Date'] = date - msg.set_content(html_body, subtype='html') - return msg - - -def make_message_with_attachment(body="Test body", - from_="Jonathan Smith ", - to="Craig ", - subject="Test Subject", - date="Wed, 05 Feb 2026 11:36:00 -0600", - attachment_filename="document.pdf", - attachment_content=b"fake pdf content"): - """Create a multipart message with a text body and one attachment.""" - msg = MIMEMultipart() - msg['From'] = from_ - msg['To'] = to - msg['Subject'] = subject - msg['Date'] = date - - msg.attach(MIMEText(body, 'plain')) - - att = MIMEApplication(attachment_content, Name=attachment_filename) - att['Content-Disposition'] = f'attachment; filename="{attachment_filename}"' - msg.attach(att) - - return msg - - -def add_received_headers(msg, headers): - """Add Received headers to an existing message. - - headers: list of header strings, added in order (first = most recent). 
- """ - for header in headers: - msg['Received'] = header - return msg diff --git a/docs/scripts/tests/fixtures/empty-body.eml b/docs/scripts/tests/fixtures/empty-body.eml deleted file mode 100644 index cf008df..0000000 --- a/docs/scripts/tests/fixtures/empty-body.eml +++ /dev/null @@ -1,16 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: Empty Body Test -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: multipart/mixed; boundary="boundary456" -Received: from mail-sender.example.com by mx.receiver.example.com with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600 - ---boundary456 -Content-Type: application/octet-stream; name="data.bin" -Content-Disposition: attachment; filename="data.bin" -Content-Transfer-Encoding: base64 - -AQIDBA== - ---boundary456-- diff --git a/docs/scripts/tests/fixtures/html-only.eml b/docs/scripts/tests/fixtures/html-only.eml deleted file mode 100644 index 4db7645..0000000 --- a/docs/scripts/tests/fixtures/html-only.eml +++ /dev/null @@ -1,20 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: HTML Update -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: text/html; charset="utf-8" -Content-Transfer-Encoding: 7bit -Received: from mail-sender.example.com by mx.receiver.example.com with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600 - - - -

-<html>
-<body>
-<p>Hi Craig,</p>
-<p>Here is the HTML update.</p>
-<ul>
-  <li>Item one</li>
-  <li>Item two</li>
-</ul>
-<p>Best,<br>Jonathan</p>
-</body>
-</html>

- - diff --git a/docs/scripts/tests/fixtures/multiple-received-headers.eml b/docs/scripts/tests/fixtures/multiple-received-headers.eml deleted file mode 100644 index 1b8d6a7..0000000 --- a/docs/scripts/tests/fixtures/multiple-received-headers.eml +++ /dev/null @@ -1,12 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: Multiple Received Headers Test -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: text/plain; charset="utf-8" -Content-Transfer-Encoding: 7bit -Received: by internal.example.com with SMTP; Thu, 05 Feb 2026 11:36:10 -0600 -Received: from mail-sender.example.com by mx.receiver.example.com with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600 -Received: from originator.example.com by relay.example.com with SMTP; Thu, 05 Feb 2026 11:35:58 -0600 - -Test body with multiple received headers. diff --git a/docs/scripts/tests/fixtures/no-received-headers.eml b/docs/scripts/tests/fixtures/no-received-headers.eml deleted file mode 100644 index 8a05dc7..0000000 --- a/docs/scripts/tests/fixtures/no-received-headers.eml +++ /dev/null @@ -1,9 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: No Received Headers -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: text/plain; charset="utf-8" -Content-Transfer-Encoding: 7bit - -Test body with no received headers at all. diff --git a/docs/scripts/tests/fixtures/plain-text.eml b/docs/scripts/tests/fixtures/plain-text.eml deleted file mode 100644 index 8cc9d9c..0000000 --- a/docs/scripts/tests/fixtures/plain-text.eml +++ /dev/null @@ -1,15 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: Re: Fw: 4319 Danneel Street -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: text/plain; charset="utf-8" -Content-Transfer-Encoding: 7bit -Received: from mail-sender.example.com by mx.receiver.example.com with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600 - -Hi Craig, - -Here is the update on 4319 Danneel Street. 
- -Best, -Jonathan diff --git a/docs/scripts/tests/fixtures/with-attachment.eml b/docs/scripts/tests/fixtures/with-attachment.eml deleted file mode 100644 index ac49c5d..0000000 --- a/docs/scripts/tests/fixtures/with-attachment.eml +++ /dev/null @@ -1,27 +0,0 @@ -From: Jonathan Smith -To: Craig Jennings -Subject: Ltr from Carrollton -Date: Thu, 05 Feb 2026 11:36:00 -0600 -MIME-Version: 1.0 -Content-Type: multipart/mixed; boundary="boundary123" -Received: from mail-sender.example.com by mx.receiver.example.com with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600 - ---boundary123 -Content-Type: text/plain; charset="utf-8" -Content-Transfer-Encoding: 7bit - -Hi Craig, - -Please find the letter attached. - -Best, -Jonathan - ---boundary123 -Content-Type: application/octet-stream; name="Ltr Carrollton.pdf" -Content-Disposition: attachment; filename="Ltr Carrollton.pdf" -Content-Transfer-Encoding: base64 - -ZmFrZSBwZGYgY29udGVudA== - ---boundary123-- diff --git a/docs/scripts/tests/test_extract_body.py b/docs/scripts/tests/test_extract_body.py deleted file mode 100644 index 7b53cda..0000000 --- a/docs/scripts/tests/test_extract_body.py +++ /dev/null @@ -1,96 +0,0 @@ -"""Tests for extract_body().""" - -import sys -import os - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -from conftest import make_plain_message, make_html_message, make_message_with_attachment -from email.message import EmailMessage -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -from email.mime.application import MIMEApplication - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -extract_body = eml_script.extract_body - - -class TestPlainText: - def test_returns_plain_text(self): - msg = make_plain_message(body="Hello, this is plain 
text.") - result = extract_body(msg) - assert "Hello, this is plain text." in result - - -class TestHtmlOnly: - def test_returns_converted_html(self): - msg = make_html_message(html_body="
<html><body><p>Hello world</p></body></html>
") - result = extract_body(msg) - assert "Hello" in result - assert "world" in result - # Should not contain raw HTML tags - assert "
<p>" not in result - assert "</p>" not in result - - -class TestBothPlainAndHtml: - def test_prefers_plain_text(self): - msg = MIMEMultipart('alternative') - msg['From'] = 'test@example.com' - msg['To'] = 'dest@example.com' - msg['Subject'] = 'Test' - msg['Date'] = 'Thu, 05 Feb 2026 11:36:00 -0600' - msg.attach(MIMEText("Plain text version", 'plain')) - msg.attach(MIMEText("
<p>HTML version</p>
", 'html')) - result = extract_body(msg) - assert "Plain text version" in result - assert "HTML version" not in result - - -class TestEmptyBody: - def test_returns_empty_string(self): - # Multipart with only attachments, no text parts - msg = MIMEMultipart() - msg['From'] = 'test@example.com' - att = MIMEApplication(b"binary data", Name="file.bin") - att['Content-Disposition'] = 'attachment; filename="file.bin"' - msg.attach(att) - result = extract_body(msg) - assert result == "" - - -class TestNonUtf8Encoding: - def test_decodes_with_errors_ignore(self): - msg = EmailMessage() - msg['From'] = 'test@example.com' - # Set raw bytes that include invalid UTF-8 - msg.set_content("Valid text with special: café") - result = extract_body(msg) - assert "Valid text" in result - - -class TestHtmlWithStructure: - def test_preserves_list_structure(self): - html = "
<ul><li>Item one</li><li>Item two</li></ul>
" - msg = make_html_message(html_body=html) - result = extract_body(msg) - assert "Item one" in result - assert "Item two" in result - - -class TestNoTextParts: - def test_returns_empty_string(self): - msg = MIMEMultipart() - msg['From'] = 'test@example.com' - att = MIMEApplication(b"data", Name="image.png") - att['Content-Disposition'] = 'attachment; filename="image.png"' - msg.attach(att) - result = extract_body(msg) - assert result == "" diff --git a/docs/scripts/tests/test_extract_metadata.py b/docs/scripts/tests/test_extract_metadata.py deleted file mode 100644 index d5ee52e..0000000 --- a/docs/scripts/tests/test_extract_metadata.py +++ /dev/null @@ -1,65 +0,0 @@ -"""Tests for extract_metadata().""" - -import sys -import os - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -from conftest import make_plain_message, add_received_headers -from email.message import EmailMessage - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -extract_metadata = eml_script.extract_metadata - - -class TestAllHeadersPresent: - def test_complete_dict(self): - msg = make_plain_message( - from_="Jonathan Smith ", - to="Craig ", - subject="Test Subject", - date="Thu, 05 Feb 2026 11:36:00 -0600" - ) - result = extract_metadata(msg) - assert result['from'] == "Jonathan Smith " - assert result['to'] == "Craig " - assert result['subject'] == "Test Subject" - assert result['date'] == "Thu, 05 Feb 2026 11:36:00 -0600" - assert 'timing' in result - - -class TestMissingFrom: - def test_from_is_none(self): - msg = EmailMessage() - msg['To'] = 'craig@example.com' - msg['Subject'] = 'Test' - msg['Date'] = 'Thu, 05 Feb 2026 11:36:00 -0600' - msg.set_content("body") - result = extract_metadata(msg) - assert result['from'] is None - - -class TestMissingDate: - 
def test_date_is_none(self): - msg = EmailMessage() - msg['From'] = 'test@example.com' - msg['To'] = 'craig@example.com' - msg['Subject'] = 'Test' - msg.set_content("body") - result = extract_metadata(msg) - assert result['date'] is None - - -class TestLongSubject: - def test_full_subject_returned(self): - long_subject = "Re: Fw: This is a very long subject line that spans many words and might be folded" - msg = make_plain_message(subject=long_subject) - result = extract_metadata(msg) - assert result['subject'] == long_subject diff --git a/docs/scripts/tests/test_generate_filenames.py b/docs/scripts/tests/test_generate_filenames.py deleted file mode 100644 index 07c8f84..0000000 --- a/docs/scripts/tests/test_generate_filenames.py +++ /dev/null @@ -1,157 +0,0 @@ -"""Tests for generate_basename(), generate_email_filename(), generate_attachment_filename().""" - -import sys -import os - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -generate_basename = eml_script.generate_basename -generate_email_filename = eml_script.generate_email_filename -generate_attachment_filename = eml_script.generate_attachment_filename - - -# --- generate_basename --- - -class TestGenerateBasename: - def test_standard_from_and_date(self): - metadata = { - 'from': 'Jonathan Smith ', - 'date': 'Wed, 05 Feb 2026 11:36:00 -0600', - } - assert generate_basename(metadata) == "2026-02-05-1136-Jonathan" - - def test_from_with_display_name_first_token(self): - metadata = { - 'from': 'C Ciarm ', - 'date': 'Wed, 05 Feb 2026 11:36:00 -0600', - } - result = generate_basename(metadata) - assert result == "2026-02-05-1136-C" - - def test_from_without_display_name(self): - metadata = { - 'from': 'jsmith@example.com', - 
'date': 'Wed, 05 Feb 2026 11:36:00 -0600', - } - result = generate_basename(metadata) - assert result == "2026-02-05-1136-jsmith" - - def test_missing_date(self): - metadata = { - 'from': 'Jonathan Smith ', - 'date': None, - } - result = generate_basename(metadata) - assert result == "unknown-Jonathan" - - def test_missing_from(self): - metadata = { - 'from': None, - 'date': 'Wed, 05 Feb 2026 11:36:00 -0600', - } - result = generate_basename(metadata) - assert result == "2026-02-05-1136-unknown" - - def test_both_missing(self): - metadata = {'from': None, 'date': None} - result = generate_basename(metadata) - assert result == "unknown-unknown" - - def test_unparseable_date(self): - metadata = { - 'from': 'Jonathan ', - 'date': 'not a real date', - } - result = generate_basename(metadata) - assert result == "unknown-Jonathan" - - def test_none_date_no_crash(self): - metadata = {'from': 'Test ', 'date': None} - # Should not raise - result = generate_basename(metadata) - assert "unknown" in result - - -# --- generate_email_filename --- - -class TestGenerateEmailFilename: - def test_standard_subject(self): - result = generate_email_filename( - "2026-02-05-1136-Jonathan", - "Re: Fw: 4319 Danneel Street" - ) - assert result == "2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street" - - def test_subject_with_special_chars(self): - result = generate_email_filename( - "2026-02-05-1136-Jonathan", - "Update: Meeting (draft) & notes!" - ) - # Colons, parens, ampersands, exclamation stripped - assert "EMAIL" in result - assert ":" not in result - assert "(" not in result - assert ")" not in result - assert "&" not in result - assert "!" 
not in result - - def test_none_subject(self): - result = generate_email_filename("2026-02-05-1136-Jonathan", None) - assert result == "2026-02-05-1136-Jonathan-EMAIL-no-subject" - - def test_empty_subject(self): - result = generate_email_filename("2026-02-05-1136-Jonathan", "") - assert result == "2026-02-05-1136-Jonathan-EMAIL-no-subject" - - def test_very_long_subject(self): - long_subject = "A" * 100 + " " + "B" * 100 - result = generate_email_filename("2026-02-05-1136-Jonathan", long_subject) - # The cleaned subject part should be truncated - # basename (27) + "-EMAIL-" (7) + subject - # Subject itself is limited to 80 chars by _clean_for_filename - subject_part = result.split("-EMAIL-")[1] - assert len(subject_part) <= 80 - - -# --- generate_attachment_filename --- - -class TestGenerateAttachmentFilename: - def test_standard_attachment(self): - result = generate_attachment_filename( - "2026-02-05-1136-Jonathan", - "Ltr Carrollton.pdf" - ) - assert result == "2026-02-05-1136-Jonathan-ATTACH-Ltr-Carrollton.pdf" - - def test_filename_with_spaces_and_parens(self): - result = generate_attachment_filename( - "2026-02-05-1136-Jonathan", - "Document (final copy).pdf" - ) - assert " " not in result - assert "(" not in result - assert ")" not in result - assert result.endswith(".pdf") - - def test_preserves_extension(self): - result = generate_attachment_filename( - "2026-02-05-1136-Jonathan", - "photo.jpg" - ) - assert result.endswith(".jpg") - - def test_none_filename(self): - result = generate_attachment_filename("2026-02-05-1136-Jonathan", None) - assert result == "2026-02-05-1136-Jonathan-ATTACH-unnamed" - - def test_empty_filename(self): - result = generate_attachment_filename("2026-02-05-1136-Jonathan", "") - assert result == "2026-02-05-1136-Jonathan-ATTACH-unnamed" diff --git a/docs/scripts/tests/test_integration_stdout.py b/docs/scripts/tests/test_integration_stdout.py deleted file mode 100644 index d87478e..0000000 --- 
a/docs/scripts/tests/test_integration_stdout.py +++ /dev/null @@ -1,68 +0,0 @@ -"""Integration tests for backwards-compatible stdout mode (no --output-dir).""" - -import os -import shutil -import sys - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -print_email = eml_script.print_email - -FIXTURES = os.path.join(os.path.dirname(__file__), 'fixtures') - - -class TestPlainTextStdout: - def test_metadata_and_body_printed(self, tmp_path, capsys): - eml_src = os.path.join(FIXTURES, 'plain-text.eml') - working_eml = tmp_path / "message.eml" - shutil.copy2(eml_src, working_eml) - - print_email(str(working_eml)) - captured = capsys.readouterr() - - assert "From: Jonathan Smith " in captured.out - assert "To: Craig Jennings " in captured.out - assert "Subject: Re: Fw: 4319 Danneel Street" in captured.out - assert "Date:" in captured.out - assert "Sent:" in captured.out - assert "Received:" in captured.out - assert "4319 Danneel Street" in captured.out - - -class TestHtmlFallbackStdout: - def test_html_converted_on_stdout(self, tmp_path, capsys): - eml_src = os.path.join(FIXTURES, 'html-only.eml') - working_eml = tmp_path / "message.eml" - shutil.copy2(eml_src, working_eml) - - print_email(str(working_eml)) - captured = capsys.readouterr() - - # Should see converted text, not raw HTML - assert "HTML" in captured.out - assert "
<html>
" not in captured.out - - -class TestAttachmentsStdout: - def test_attachment_extracted_alongside_eml(self, tmp_path, capsys): - eml_src = os.path.join(FIXTURES, 'with-attachment.eml') - working_eml = tmp_path / "message.eml" - shutil.copy2(eml_src, working_eml) - - print_email(str(working_eml)) - captured = capsys.readouterr() - - assert "Extracted attachment:" in captured.out - assert "Ltr Carrollton.pdf" in captured.out - - # File should exist alongside the EML - extracted = tmp_path / "Ltr Carrollton.pdf" - assert extracted.exists() diff --git a/docs/scripts/tests/test_parse_received_headers.py b/docs/scripts/tests/test_parse_received_headers.py deleted file mode 100644 index e12e1fb..0000000 --- a/docs/scripts/tests/test_parse_received_headers.py +++ /dev/null @@ -1,105 +0,0 @@ -"""Tests for parse_received_headers().""" - -import email -import sys -import os - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -from conftest import make_plain_message, add_received_headers -from email.message import EmailMessage - -# Import the function under test -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -parse_received_headers = eml_script.parse_received_headers - - -class TestSingleHeader: - def test_header_with_from_and_by(self): - msg = EmailMessage() - msg['Received'] = ( - 'from mail-sender.example.com by mx.receiver.example.com ' - 'with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600' - ) - result = parse_received_headers(msg) - assert result['sent_server'] == 'mail-sender.example.com' - assert result['received_server'] == 'mx.receiver.example.com' - assert result['sent_time'] == 'Thu, 05 Feb 2026 11:36:05 -0600' - assert result['received_time'] == 'Thu, 05 Feb 2026 11:36:05 -0600' - - -class TestMultipleHeaders: - def 
test_uses_first_with_both_from_and_by(self): - msg = EmailMessage() - # Most recent first (by only) - msg['Received'] = 'by internal.example.com with SMTP; Thu, 05 Feb 2026 11:36:10 -0600' - # Next: has both from and by — this should be selected - msg['Received'] = ( - 'from mail-sender.example.com by mx.receiver.example.com ' - 'with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600' - ) - # Oldest - msg['Received'] = ( - 'from originator.example.com by relay.example.com ' - 'with SMTP; Thu, 05 Feb 2026 11:35:58 -0600' - ) - result = parse_received_headers(msg) - assert result['sent_server'] == 'mail-sender.example.com' - assert result['received_server'] == 'mx.receiver.example.com' - - -class TestNoReceivedHeaders: - def test_all_values_none(self): - msg = EmailMessage() - result = parse_received_headers(msg) - assert result['sent_time'] is None - assert result['sent_server'] is None - assert result['received_time'] is None - assert result['received_server'] is None - - -class TestByButNoFrom: - def test_falls_back_to_first_header(self): - msg = EmailMessage() - msg['Received'] = 'by internal.example.com with SMTP; Thu, 05 Feb 2026 11:36:10 -0600' - result = parse_received_headers(msg) - assert result['received_server'] == 'internal.example.com' - assert result['received_time'] == 'Thu, 05 Feb 2026 11:36:10 -0600' - # No from in any header, so sent_server stays None - assert result['sent_server'] is None - - -class TestMultilineFoldedHeader: - def test_normalizes_whitespace(self): - # Use email.message_from_string to parse raw folded headers - # (EmailMessage policy rejects embedded CRLF in set values) - raw = ( - "From: test@example.com\r\n" - "Received: from mail-sender.example.com\r\n" - " by mx.receiver.example.com\r\n" - " with ESMTP; Thu, 05 Feb 2026 11:36:05 -0600\r\n" - "\r\n" - "body\r\n" - ) - msg = email.message_from_string(raw) - result = parse_received_headers(msg) - assert result['sent_server'] == 'mail-sender.example.com' - assert result['received_server'] == 
'mx.receiver.example.com' - - -class TestMalformedTimestamp: - def test_no_semicolon(self): - msg = EmailMessage() - msg['Received'] = 'from sender.example.com by receiver.example.com with SMTP' - result = parse_received_headers(msg) - assert result['sent_server'] == 'sender.example.com' - assert result['received_server'] == 'receiver.example.com' - assert result['sent_time'] is None - assert result['received_time'] is None diff --git a/docs/scripts/tests/test_process_eml.py b/docs/scripts/tests/test_process_eml.py deleted file mode 100644 index 26c5ad5..0000000 --- a/docs/scripts/tests/test_process_eml.py +++ /dev/null @@ -1,129 +0,0 @@ -"""Integration tests for process_eml() — full pipeline with --output-dir.""" - -import os -import shutil -import sys - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -process_eml = eml_script.process_eml - -import pytest - - -FIXTURES = os.path.join(os.path.dirname(__file__), 'fixtures') - - -class TestPlainTextPipeline: - def test_creates_eml_and_txt(self, tmp_path): - eml_src = os.path.join(FIXTURES, 'plain-text.eml') - # Copy fixture to tmp_path so temp dir can be created as sibling - working_eml = tmp_path / "inbox" / "message.eml" - working_eml.parent.mkdir() - shutil.copy2(eml_src, working_eml) - - output_dir = tmp_path / "output" - result = process_eml(str(working_eml), str(output_dir)) - - # Should have exactly 2 files: .eml and .txt - assert len(result['files']) == 2 - eml_file = result['files'][0] - txt_file = result['files'][1] - - assert eml_file['type'] == 'eml' - assert txt_file['type'] == 'txt' - assert eml_file['name'].endswith('.eml') - assert txt_file['name'].endswith('.txt') - - # Files exist in output dir - assert 
os.path.isfile(eml_file['path']) - assert os.path.isfile(txt_file['path']) - - # Filenames contain expected components - assert 'Jonathan' in eml_file['name'] - assert 'EMAIL' in eml_file['name'] - assert '2026-02-05' in eml_file['name'] - - # Temp dir cleaned up (no extract-* dirs in inbox) - inbox_contents = os.listdir(str(tmp_path / "inbox")) - assert not any(d.startswith('extract-') for d in inbox_contents) - - -class TestHtmlFallbackPipeline: - def test_txt_contains_converted_html(self, tmp_path): - eml_src = os.path.join(FIXTURES, 'html-only.eml') - working_eml = tmp_path / "inbox" / "message.eml" - working_eml.parent.mkdir() - shutil.copy2(eml_src, working_eml) - - output_dir = tmp_path / "output" - result = process_eml(str(working_eml), str(output_dir)) - - txt_file = result['files'][1] - with open(txt_file['path'], 'r') as f: - content = f.read() - - # Should be converted, not raw HTML - assert '
<html>
' not in content - assert '</html>' not in content - assert 'HTML' in content - - -class TestAttachmentPipeline: - def test_eml_txt_and_attachment_created(self, tmp_path): - eml_src = os.path.join(FIXTURES, 'with-attachment.eml') - working_eml = tmp_path / "inbox" / "message.eml" - working_eml.parent.mkdir() - shutil.copy2(eml_src, working_eml) - - output_dir = tmp_path / "output" - result = process_eml(str(working_eml), str(output_dir)) - - assert len(result['files']) == 3 - types = [f['type'] for f in result['files']] - assert types == ['eml', 'txt', 'attach'] - - # Attachment is auto-renamed - attach_file = result['files'][2] - assert 'ATTACH' in attach_file['name'] - assert attach_file['name'].endswith('.pdf') - assert os.path.isfile(attach_file['path']) - - -class TestCollisionDetection: - def test_raises_on_existing_file(self, tmp_path): - eml_src = os.path.join(FIXTURES, 'plain-text.eml') - working_eml = tmp_path / "inbox" / "message.eml" - working_eml.parent.mkdir() - shutil.copy2(eml_src, working_eml) - - output_dir = tmp_path / "output" - # Run once to create files - result = process_eml(str(working_eml), str(output_dir)) - - # Run again — should raise FileExistsError - with pytest.raises(FileExistsError, match="Collision"): - process_eml(str(working_eml), str(output_dir)) - - -class TestMissingOutputDir: - def test_creates_directory(self, tmp_path): - eml_src = os.path.join(FIXTURES, 'plain-text.eml') - working_eml = tmp_path / "inbox" / "message.eml" - working_eml.parent.mkdir() - shutil.copy2(eml_src, working_eml) - - output_dir = tmp_path / "new" / "nested" / "output" - assert not output_dir.exists() - - result = process_eml(str(working_eml), str(output_dir)) - assert output_dir.exists() - assert len(result['files']) == 2 diff --git a/docs/scripts/tests/test_save_attachments.py b/docs/scripts/tests/test_save_attachments.py deleted file mode 100644 index 32f02a6..0000000 --- a/docs/scripts/tests/test_save_attachments.py +++ /dev/null @@ -1,97 +0,0 @@
-"""Tests for save_attachments().""" - -import sys -import os - -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) - -from conftest import make_plain_message, make_message_with_attachment -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -from email.mime.application import MIMEApplication - -import importlib.util -spec = importlib.util.spec_from_file_location( - "eml_script", - os.path.join(os.path.dirname(__file__), '..', 'eml-view-and-extract-attachments.py') -) -eml_script = importlib.util.module_from_spec(spec) -spec.loader.exec_module(eml_script) - -save_attachments = eml_script.save_attachments - - -class TestSingleAttachment: - def test_file_written_and_returned(self, tmp_path): - msg = make_message_with_attachment( - attachment_filename="report.pdf", - attachment_content=b"pdf bytes here" - ) - result = save_attachments(msg, str(tmp_path), "2026-02-05-1136-Jonathan") - - assert len(result) == 1 - assert result[0]['original_name'] == "report.pdf" - assert "ATTACH" in result[0]['renamed_name'] - assert result[0]['renamed_name'].endswith(".pdf") - - # File actually exists and has correct content - written_path = result[0]['path'] - assert os.path.isfile(written_path) - with open(written_path, 'rb') as f: - assert f.read() == b"pdf bytes here" - - -class TestMultipleAttachments: - def test_all_written_and_returned(self, tmp_path): - msg = MIMEMultipart() - msg['From'] = 'test@example.com' - msg['Date'] = 'Thu, 05 Feb 2026 11:36:00 -0600' - msg.attach(MIMEText("body", 'plain')) - - for name, content in [("doc1.pdf", b"pdf1"), ("image.png", b"png1")]: - att = MIMEApplication(content, Name=name) - att['Content-Disposition'] = f'attachment; filename="{name}"' - msg.attach(att) - - result = save_attachments(msg, str(tmp_path), "2026-02-05-1136-Jonathan") - - assert len(result) == 2 - for r in result: - assert os.path.isfile(r['path']) - - -class TestNoAttachments: - def test_empty_list(self, tmp_path): - msg = 
make_plain_message() - result = save_attachments(msg, str(tmp_path), "2026-02-05-1136-Jonathan") - assert result == [] - - -class TestFilenameWithSpaces: - def test_cleaned_filename(self, tmp_path): - msg = make_message_with_attachment( - attachment_filename="My Document (1).pdf", - attachment_content=b"data" - ) - result = save_attachments(msg, str(tmp_path), "2026-02-05-1136-Jonathan") - - assert len(result) == 1 - assert " " not in result[0]['renamed_name'] - assert os.path.isfile(result[0]['path']) - - -class TestNoContentDisposition: - def test_skipped(self, tmp_path): - msg = MIMEMultipart() - msg['From'] = 'test@example.com' - msg.attach(MIMEText("body", 'plain')) - - # Add a part without Content-Disposition - part = MIMEApplication(b"data", Name="file.bin") - # Explicitly remove Content-Disposition if present - if 'Content-Disposition' in part: - del part['Content-Disposition'] - msg.attach(part) - - result = save_attachments(msg, str(tmp_path), "2026-02-05-1136-Jonathan") - assert result == [] diff --git a/docs/someday-maybe.org b/docs/someday-maybe.org deleted file mode 100644 index e69de29..0000000 diff --git a/docs/workflows/add-calendar-event.org b/docs/workflows/add-calendar-event.org deleted file mode 100644 index 713a54d..0000000 --- a/docs/workflows/add-calendar-event.org +++ /dev/null @@ -1,208 +0,0 @@ -#+TITLE: Add Calendar Event Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -Workflow for creating calendar events via gcalcli with natural language input. - -* Triggers - -- "create an event" -- "add appointment" -- "schedule a meeting" -- "add to my calendar" -- "calendar event for..." 
- -* Prerequisites - -- gcalcli installed and authenticated -- Google Calendar API credentials configured -- Test with =gcalcli list= to verify authentication - -* CRITICAL: Check All Calendars Before Scheduling - -Before creating any event, ALWAYS check for conflicts across ALL calendars by querying the emacs org calendar files: - -#+begin_src bash -grep "2026-02-18" ~/.emacs.d/data/gcal.org # Google calendar -grep "2026-02-18" ~/.emacs.d/data/dcal.org # DeepSat work calendar -grep "2026-02-18" ~/.emacs.d/data/pcal.org # Proton calendar -#+end_src - -| File | Calendar | -|----------+---------------------------| -| gcal.org | Craig (Google) | -| dcal.org | Craig DeepSat (work) | -| pcal.org | Craig Proton | - -gcalcli only sees Google calendars — it will miss work and Proton events. Always verify the time slot is free across all three before creating. - -To *create* events, use gcalcli with =--calendar "Craig"= (Google). - -* Workflow Steps - -** 1. Parse Natural Language Input - -Interpret the user's request to extract: -- Event title -- Date/time (natural language like "tomorrow 3pm", "next Tuesday at 2") -- Any mentioned location -- Any mentioned description - -Examples: -- "Create an event tomorrow at 5pm called Grocery Shopping" -- "Add a meeting with Bob on Friday at 10am" -- "Schedule dentist appointment next Wednesday at 2pm at Downtown Dental" - -** 2. Apply Defaults - -| Field | Default Value | -|------------+----------------------------------| -| Calendar | Craig (default Google Calendar) | -| Reminders | 5 minutes before, at event time | -| Duration | NONE - always ask user | -| Location | None (optional) | - -** 3. Gather Missing Information - -*Always ask for:* -- Duration (required, no default) - -*Ask if relevant:* -- Location (if not provided and seems like an in-person event) - -*Never assume:* -- Duration - this must always be explicitly confirmed - -** 4. 
Show Event Summary - -Present the event in plain English (NOT the gcalcli command): - -#+begin_example -Event: Grocery Shopping -When: Tomorrow (Feb 2) at 5:00 PM -Duration: 1 hour -Location: (none) -Reminders: 5 min before, at event time -Calendar: Craig -#+end_example - -** 5. Explicit Confirmation - -Ask: "Create this event? (yes/no)" - -*Do NOT create the event until user confirms.* - -** 6. Execute - -Once confirmed, run: - -#+begin_src bash -gcalcli --calendar "Calendar Name" add \ - --title "Event Title" \ - --when "date and time" \ - --duration MINUTES \ - --where "Location" \ - --description "Description" \ - --reminder 5 \ - --reminder 0 \ - --noprompt -#+end_src - -** 7. Verify - -Confirm the event was created by searching: - -#+begin_src bash -gcalcli --calendar "Calendar Name" search "Event Title" -#+end_src - -Report success or failure to user. - -* Calendars - -| Calendar | Access | Notes | |---------------------------+--------+--------------------------------| | Craig | owner | Default — use for most events | | Christine | owner | Christine's calendar | | Todoist | owner | Todoist integration | | Craig Jennings (TripIt) | reader | View only, no create | | Holidays in United States | reader | View only | | Craig Proton | reader | View only (no API access) | - -Use =--calendar "Craig"= to specify (default for adding events).
- -* gcalcli Command Reference - -** Add Event - -#+begin_src bash -gcalcli add \ - --calendar "My Calendar" \ - --title "Event Title" \ - --when "tomorrow 3pm" \ - --duration 60 \ - --where "123 Main St" \ - --description "Notes" \ - --reminder 5 \ - --reminder 0 \ - --noprompt -#+end_src - -** Quick Add (Natural Language) - -#+begin_src bash -gcalcli --calendar "My Calendar" quick "Dinner with Eric 7pm tomorrow" -#+end_src - -** Key Options - -| Option | Description | -|---------------+-------------------------------------| -| --calendar | Calendar name or ID | -| --title | Event title | -| --when | Date/time (natural language OK) | -| --duration | Length in minutes | -| --where | Location | -| --description | Event notes | -| --reminder | Minutes before (can use multiple) | -| --allday | Create all-day event | -| --noprompt | Skip interactive confirmation | - -* Time Formats - -gcalcli accepts natural language times: -- "tomorrow 3pm" -- "next Tuesday at 2" -- "2026-02-15 14:00" -- "Feb 15 2pm" -- "today 5pm" - -* Duration Shortcuts - -| Input | Minutes | -|--------+---------| -| 30m | 30 | -| 1h | 60 | -| 1.5h | 90 | -| 2h | 120 | -| 90 | 90 | - -* Error Handling - -** Authentication Error -Run =gcalcli init= to re-authenticate. - -** Calendar Not Found -Check available calendars with =gcalcli list=. 
- -** Invalid Time Format -Use explicit date format: =YYYY-MM-DD HH:MM= - -* Related - -- [[file:read-calendar-events.org][Read Calendar Events]] - view events -- [[file:edit-calendar-event.org][Edit Calendar Event]] - modify events -- [[file:delete-calendar-event.org][Delete Calendar Event]] - remove events -- [[file:../calendar-api-research.org][Calendar API Research]] - gcalcli reference diff --git a/docs/workflows/assemble-email.org b/docs/workflows/assemble-email.org deleted file mode 100644 index bae647f..0000000 --- a/docs/workflows/assemble-email.org +++ /dev/null @@ -1,181 +0,0 @@ -#+TITLE: Email Assembly Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-29 - -* Overview - -This workflow assembles documents for an email that will be sent via Craig's email client (Proton Mail). It creates a temporary workspace, gathers relevant documents, drafts the email, and cleans up after sending. - -Use this workflow when Craig needs to send an email with multiple attachments that require gathering from various locations in the project. - -* When to Use This Workflow - -When Craig says: -- "assemble an email" or "email assembly workflow" -- "gather documents for an email" -- "I need to send [person] some documents" - -* The Workflow - -** Step 1: Create Temporary Workspace - -Create a temporary folder at the project root: - -#+begin_src bash -mkdir -p ./tmp -#+end_src - -This folder will hold: -- Copies of all attachments -- The draft email text - -** Step 2: Identify Required Documents - -Discuss with Craig what documents are needed. Common categories: -- Legal documents (deeds, certificates, agreements) -- Financial documents (statements, invoices) -- Correspondence (prior emails, letters) -- Identity documents (death certificates, ID copies) - -For each document: -1. Locate it in the project -2. Confirm with Craig it's the right one -3. 
Open it in zathura for Craig to verify if needed - -** Step 3: Copy Documents to Workspace - -**IMPORTANT: Always COPY, never MOVE documents.** - -#+begin_src bash -cp /path/to/original/document.pdf ./tmp/ -#+end_src - -After copying, list the workspace contents to confirm: - -#+begin_src bash -ls -lh ./tmp/ -#+end_src - -** Step 4: Draft the Email - -Create a draft email file in the workspace: - -#+begin_src bash -touch ./tmp/email-draft.txt -#+end_src - -Include: -- To: (recipient email) -- Subject: (clear, descriptive subject line) -- Body: (context, list of attachments, contact info) - -The body should: -- Provide context for why documents are being sent -- List all attachments with brief descriptions -- Include Craig's contact information - -** Step 5: Open Draft in Emacs - -Open the draft for Craig to review and edit: - -#+begin_src bash -emacsclient -n ./tmp/email-draft.txt -#+end_src - -Wait for Craig to finish editing before proceeding. - -** Step 6: Craig Sends Email - -Craig will: -1. Open his email client (Proton Mail) -2. Create a new email using the draft text -3. Attach documents from the tmp folder -4. Send the email - -** Step 7: Process Sent Email - -Once Craig confirms the email was sent: - -1. Craig saves the sent email to the inbox -2. Use the extraction script to process it: - -#+begin_src bash -python3 docs/scripts/eml-view-and-extract-attachments.py "./inbox/[email-file].eml" -#+end_src - -3. Read the extracted content to verify -4. Rename and refile the email appropriately: - -#+begin_src bash -mv "./inbox/[email-file].eml" ./[appropriate-folder]/YYYY-MM-DD-email-to-[recipient]-[topic].eml -#+end_src - -5. 
Delete any duplicate extracted attachments from inbox - -** Step 8: Clean Up Workspace - -Delete the temporary folder: - -#+begin_src bash -rm -rf ./tmp/ -#+end_src - -* Best Practices - -** Document Verification - -Before copying documents: -- Open each one in zathura for Craig to verify -- Confirm it's the correct version -- Check that sensitive information is appropriate to send - -** Email Draft Structure - -A good email draft includes: - -#+begin_example -To: recipient@example.com -Subject: [Clear Topic] - [Property/Case Reference] - -Hi [Name], - -[Opening - context for why you're sending this] - -[Middle - explanation of what's attached and why] - -Attached are the following documents: - -1. [Document name] - [brief description] -2. [Document name] - [brief description] -3. [Document name] - [brief description] - -[Closing - next steps, request for confirmation, offer to provide more] - -Thank you, - -Craig Jennings -510-316-9357 -c@cjennings.net -#+end_example - -** Filing Conventions - -When refiling sent emails: -- Use format: YYYY-MM-DD-email-to-[recipient]-[topic].eml -- File in the most relevant project folder. -- Remove duplicate attachments extracted to inbox - -* Example Usage - -Craig: "I need to send Seabreeze the documents for the HOA refund" - -Claude: -1. Creates ./tmp/ folder -2. Discusses needed documents (death certificate, closing docs, purchase agreement) -3. Locates and opens each document for verification -4. Copies verified documents to ./tmp/ -5. Drafts email and opens in emacsclient -6. Craig edits, then sends via Proton Mail -7. Craig saves sent email to inbox -8. Claude extracts, reads, renames, and refiles email -9. 
Claude deletes ./tmp/ folder diff --git a/docs/workflows/create-v2mom.org b/docs/workflows/create-v2mom.org deleted file mode 100644 index d2c30e5..0000000 --- a/docs/workflows/create-v2mom.org +++ /dev/null @@ -1,699 +0,0 @@ -#+TITLE: Creating a V2MOM Strategic Framework -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-05 - -* Overview - -This session creates a V2MOM (Vision, Values, Methods, Obstacles, Metrics) strategic framework for any project or goal. V2MOM provides clarity for decision-making, ruthless prioritization, and measuring progress. It transforms vague intentions into concrete action plans. - -The framework originated at Salesforce and works for any domain: personal projects, business strategy, health goals, financial planning, software development, or life planning. - -* Problem We're Solving - -Without a strategic framework, projects suffer from: - -** Unclear Direction -- "Get healthier" or "improve my finances" is too vague to act on -- Every idea feels equally important -- No principled way to say "no" to distractions -- Difficult to know what to work on next - -** Priority Inflation -- Everything feels urgent or important -- Research and planning without execution -- Hard to distinguish signal from noise -- Active todo list grows beyond manageability - -** No Decision Framework -- When faced with choice between A and B, no principled way to decide -- Debates about approach waste time -- Second-guessing decisions after making them -- Perfectionism masquerading as thoroughness - -** Unmeasurable Progress -- Can't tell if work is actually making things better -- No objective way to know when you're "done" -- Metrics are either absent or vanity metrics -- Difficult to celebrate wins or identify blockers - -*Impact:* Unfocused work, slow progress, frustration, and the nagging feeling that you're always working on the wrong thing. - -* Exit Criteria - -The V2MOM is complete when: - -1. 
**All 5 sections are filled with concrete content:** - - Vision: Clear, aspirational picture of success - - Values: 2-4 principles that guide decisions - - Methods: 4-7 concrete approaches with specific actions - - Obstacles: Honest personal/technical challenges - - Metrics: Measurable outcomes (not vanity metrics) - -2. **You can use it for decision-making:** - - Can answer "does X fit this V2MOM?" quickly - - Provides clarity on priorities (Method 1 > Method 2 > etc.) - - Identifies what NOT to do - -3. **Both parties agree it's ready:** - - Feels complete, not rushed - - Actionable enough to start execution - - Honest about obstacles (not sugar-coated) - -*Measurable validation:* -- Can you articulate the vision in one sentence? -- Do the values help you say "no" to things? -- Are methods ordered by priority? -- Can you immediately identify 3-5 tasks from Method 1? -- Do metrics tell you if you're succeeding? - -* When to Use This Session - -Trigger this V2MOM creation workflow when: - -- Starting a significant project (new business, new habit, new system) -- Existing project has accumulated many competing priorities without clear focus -- You find yourself constantly context-switching between ideas -- Someone asks "what are you trying to accomplish?" and answer is vague -- You want to apply ruthless prioritization but lack framework -- Annual/quarterly planning for ongoing projects or life goals - -*V2MOM is particularly valuable for:* -- Personal infrastructure projects (tooling, systems, workflows) -- Health and fitness goals -- Financial planning and wealth building -- Software package development -- Business strategy -- Career development -- Any long-running project where you're making the decisions - -* Approach: How We Work Together - -** Phase 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. This might be a long process. 
If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** Phase 1: Understand the V2MOM Framework - -Ensure both parties understand what each section means: - -- *Vision:* What you want to achieve (aspirational, clear picture of success) -- *Values:* Principles that guide decisions (2-4 values, defined concretely) -- *Methods:* How you'll achieve the vision (4-7 approaches, ordered by priority) -- *Obstacles:* What's in your way (honest, personal, specific) -- *Metrics:* How you'll measure success (objective, not vanity metrics) - -*Important:* V2MOM sections are completed IN ORDER. Vision informs Values. Values inform Methods. Methods reveal Obstacles. Everything together defines Metrics. - -** Phase 2: Create the Document Structure - -1. Create file: =docs/[project-name]-v2mom.org= or appropriate location -2. Add metadata: #+TITLE, #+AUTHOR, #+DATE, #+FILETAGS -3. Create section headings for all 5 components -4. Add "What is V2MOM?" overview section at top - -*Save incrementally:* V2MOM discussions can be lengthy. Save after completing each section to prevent data loss. - -** Phase 3: Define the Vision - -*Ask:* "What do you want to achieve? What does success look like?" - -*Goal:* Get a clear, aspirational picture. Should be 1-3 paragraphs describing the end state. - -*Claude's role:* -- Help articulate what's described -- Push for specificity ("works great" → what specifically works?) 
-- Identify scope (what's included, what's explicitly out of scope) -- Capture concrete examples mentioned - -*Good vision characteristics:* -- Paints a picture you can visualize -- Describes outcomes, not implementation -- Aspirational but grounded in reality -- Specific enough to know what's included - -*Examples across domains:* -- Health: "Wake up with energy, complete a 5K without stopping, feel strong in daily activities, and have stable mood throughout the day" -- Finance: "Six months emergency fund, debt-free except mortgage, automatic retirement savings, and financial decisions that don't cause anxiety" -- Software: "A package that integrates seamlessly, has comprehensive documentation, handles edge cases gracefully, and maintainers of other packages want to depend on" - -*Time estimate:* 15-30 minutes if vision is mostly clear; 45-60 minutes if needs exploration - -** Phase 4: Define the Values - -*Ask:* "What principles guide your decisions? When faced with choice A vs B, what values help you decide?" - -*Goal:* Identify 2-4 values with concrete definitions and examples. - -*Claude's role:* -- Suggest values based on vision discussion -- Push for concrete definitions (not just the word, but what it MEANS) -- Help distinguish between overlapping values -- Identify when examples contradict stated values - -*Common pitfall:* Listing generic words without defining them. -- Bad: "Quality, Speed, Innovation" -- Good: "Sustainable means can maintain this for 10+ years without burning out. No crash diets, no 80-hour weeks, no technical debt I can't service." - -*For each value, capture:* -1. **The value name** (1-2 words) -2. **Definition** (what this means in context of this project) -3. **Concrete examples** (how this manifests) -4. **What breaks this value** (anti-patterns) - -*Method:* -- Start with 3-5 candidate values -- For each one, ask: "What does [value] mean to you in this context?" 
-- Discuss until definition is concrete -- Write definition with examples -- Refine/merge/remove until 2-4 remain - -*Examples across domains:* -- Health V2MOM: "Sustainable: Can do this at 80 years old. No extreme diets. Focus on habits that compound over decades." -- Finance V2MOM: "Automatic: Set up once, runs forever. Don't rely on willpower for recurring decisions. Automate savings and investments." -- Software V2MOM: "Boring: Use proven patterns. No clever code. Maintainable by intermediate developers. Boring is reliable." - -*Time estimate:* 30-45 minutes - -** Phase 5: Define the Methods - -*Ask:* "How will you achieve the vision? What approaches will you take?" - -*Goal:* Identify 4-7 methods (concrete approaches) ordered by priority. - -*Claude's role:* -- Extract methods from vision and values discussion -- Help order by priority (what must happen first?) -- Ensure methods are actionable (not just categories) -- Push for concrete actions under each method -- Watch for method ordering that creates dependencies - -*Structure for each method:* -1. **Method name** (verb phrase: "Build X", "Eliminate Y", "Establish Z") -2. 
**Aspirational description** (1-2 sentences: why this matters) - -*Method ordering matters:* -- Method 1 should be highest priority (blocking everything else) -- Lower-numbered methods should enable higher-numbered ones -- Common patterns: - - Fix → Stabilize → Build → Enhance → Sustain - - Eliminate → Replace → Optimize → Automate → Maintain - - Learn → Practice → Apply → Teach → Systematize - -*Examples across domains:* - -Health V2MOM: -- Method 1: Eliminate Daily Energy Drains (fix sleep, reduce inflammatory foods, address vitamin deficiencies) -- Method 2: Build Baseline Strength (3x/week resistance training, progressive overload, focus on compound movements) -- Method 3: Establish Sustainable Nutrition (meal prep system, protein targets, vegetable servings) - -Finance V2MOM: -- Method 1: Stop the Bleeding (identify and eliminate wasteful subscriptions, high-interest debt, impulse purchases) -- Method 2: Build the Safety Net (automate savings, reach $1000 emergency fund, then 3 months expenses) -- Method 3: Invest for the Future (max employer 401k match, open IRA, set automatic contributions) - -Software Package V2MOM: -- Method 1: Nail the Core Use Case (solve one problem extremely well, clear documentation, handles errors gracefully) -- Method 2: Ensure Quality and Stability (comprehensive test suite, CI/CD, semantic versioning) -- Method 3: Build Community and Documentation (contribution guide, examples, responsive to issues) - -*Method Ordering is Flexible:* After defining all methods, you may realize the ordering is wrong. For example, security (Method 5) might be more important than tool modernization (Method 4). Methods can be swapped - the order represents priority, so getting it right matters more than preserving the initial order. - -*Time estimate:* 45-90 minutes (longest section) - -Note: It would be good to write out the session context file after each method. This helps prevent loss of data if the context auto-compacts.
- -** Phase 5.6: Brainstorm Additional Tasks for Each Method - -Brainstorm what's missing to achieve the method. - -*Ask:* "What else would help achieve this method's goal?" - -*Claude's role:* -- Suggest additional tasks based on method's aspirational description -- Consider edge cases and error scenarios -- Identify automation opportunities -- Propose monitoring/visibility improvements -- Challenge the list if it seems incomplete (the tasks aren't yet sufficient to reach the goal) -- Challenge the list if it seems bloated (some tasks aren't necessary to reach the goal) -- Create sub-tasks using *** for items with multiple steps -- Ensure priorities reflect contribution to method goal - -*For each brainstormed task:* -- Describe what it does and why it matters -- Assign priority based on contribution to method -- Add technical details if known -- Get user agreement before adding - - -*Priority System:* -- [#A]: Critical blockers - must be done first, blocks everything else -- [#B]: High-impact reliability - directly enables method goal -- [#C]: Quality improvements - valuable but not blocking -- [#D]: Nice-to-have - low priority, can defer - - -*Time estimate:* 10-15 minutes per method (50-75 minutes for 5 methods) - -** Phase 6: Identify the Obstacles - -*Ask:* "What's in your way? What makes this hard?" - -*Goal:* Honest, specific obstacles (both personal and technical/external).
- -*Claude's role:* -- Encourage honesty (obstacles are not failures, they're reality) -- Help distinguish between symptoms and root causes -- Identify patterns in behavior that create obstacles -- Acknowledge challenges without judgment - -*Good obstacle characteristics:* -- Honest about personal patterns -- Specific, not generic -- Acknowledges both internal and external obstacles -- States real stakes (not just "might happen") - -*Common obstacle categories:* -- Personal: perfectionism, hard to say no, gets bored, procrastinates -- Knowledge: missing skills, unclear how to proceed, need to learn -- External: limited time, limited budget, competing priorities -- Systemic: environmental constraints, lack of tools, dependencies on others - -*For each obstacle:* -- Name it clearly -- Describe how it manifests in this project -- Acknowledge the stakes (what happens because of this obstacle) - -*Examples across domains:* - -Health V2MOM obstacles: -- "I get excited about new workout programs and switch before seeing results (pattern: 6 weeks into a program)" -- "Social events involve food and alcohol - saying no feels awkward and isolating" -- "When stressed at work, I skip workouts and eat convenient junk food" - -Finance V2MOM obstacles: -- "Viewing budget as restriction rather than freedom - triggers rebellion and impulse spending" -- "Fear of missing out on lifestyle experiences my peers have" -- "Limited financial literacy - don't understand investing beyond 'put money in account'" - -Software Package V2MOM obstacles: -- "Perfectionism delays releases - always 'one more feature' before v1.0" -- "Maintaining documentation feels boring compared to writing features" -- "Limited time (2-4 hours/week) and competing projects" - -*Time estimate:* 15-30 minutes - -** Phase 7: Define the Metrics - -*Ask:* "How will you measure success? What numbers tell you if this is working?" - -*Goal:* 5-10 metrics that are objective, measurable, and aligned with vision/values. 
- -*Claude's role:* -- Suggest metrics based on vision, values, and methods -- Push for measurable numbers (not "better", but concrete targets) -- Identify vanity metrics (look good but don't measure real progress) -- Ensure metrics align with values and methods - -*Metric categories:* -- **Performance metrics:** Measurable outcomes of the work -- **Discipline metrics:** Process adherence, consistency, focus -- **Quality metrics:** Standards maintained, sustainability indicators - -*Good metric characteristics:* -- Objective (not subjective opinion) -- Measurable (can actually collect the data) -- Actionable (can change behavior to improve it) -- Aligned with values and methods - -*For each metric:* -- Name it clearly -- Specify current state (if known) -- Specify target state -- Describe how to measure it -- Specify measurement frequency - -*Examples across domains:* - -Health V2MOM metrics: -- Resting heart rate: 70 bpm → 60 bpm (measure: daily via fitness tracker) -- Workout consistency: 3x/week strength training for 12 consecutive weeks -- Sleep quality: 7+ hours per night 6+ nights per week (measure: sleep tracker) -- Energy rating: subjective 1-10 scale, target 7+ average over week - -Finance V2MOM metrics: -- Emergency fund: $0 → $6000 (measure: monthly) -- High-interest debt: $8000 → $0 (measure: monthly) -- Savings rate: 5% → 20% of gross income (measure: monthly) -- Financial anxiety: weekly check-in, target "comfortable with financial decisions" - -Software Package V2MOM metrics: -- Test coverage: 0% → 80% (measure: coverage tool) -- Issue response time: median < 48 hours (measure: GitHub stats) -- Documentation completeness: all public APIs documented with examples -- Adoption: 10+ GitHub stars, 3+ projects depending on it - -*Time estimate:* 20-30 minutes - -** Phase 8: Migrate Existing Tasks (If Applicable) - -If you have an existing TODO.org file with tasks, migrate them into the V2MOM methods. 
- -*Goal:* Consolidate all project tasks under V2MOM methods, eliminate duplicates, move non-fitting items to someday-maybe. - -*Process:* - -1. **Identify Duplicates:** - - Read existing TODO.org - - Find tasks already in V2MOM methods - - Check if V2MOM task has all technical details from TODO version - - Enhance V2MOM task with any missing details - - Mark TODO task for deletion - -2. **Map Tasks to Methods:** - - For each remaining TODO task, ask: "Which method does this serve?" - - Add task under appropriate method with priority - - Capture all technical details from original task - - If task has state (DOING, VERIFY), preserve that state - -3. **Review Someday-Maybe Candidates One-by-One:** - Present each task that doesn't fit methods and ask: - - Keep in V2MOM (which method)? - - Move to someday-maybe? - - Delete entirely? - -*Decision Criteria for Someday-Maybe:* -- Doesn't directly serve any method's goal -- Nice-to-have enhancement without clear benefit -- Research task without actionable outcome -- Architectural change decided not to pursue -- Personal task not related to project - -*Keep in V2MOM (don't move to someday-maybe):* -- DOING tasks - work in progress must continue -- VERIFY tasks - need testing/verification -- Tasks that enable method goals - -*Delete entirely:* -- Obsolete tasks (feature removed, problem solved elsewhere) -- Duplicate of something already done -- Task that no longer makes sense - -*For each task decision:* -- Present task with full context -- Wait for user decision -- Don't batch - review one by one -- Capture reasoning for future reference - -4. 
**Final Steps:** - - Append someday-maybe items to docs/someday-maybe.org - - Copy completed V2MOM to TODO.org (overwrite) - - V2MOM is now the single source of truth - -*Time estimate:* Highly variable -- Small TODO.org (20 tasks): 30-45 minutes -- Medium TODO.org (50 tasks): 60-90 minutes -- Large TODO.org (100+ tasks): 2-3 hours - -*Note:* This phase is optional - only needed if existing TODO.org has substantial content to migrate. - -** Phase 9: Review and Refine - -Once all sections are complete (including task migration if applicable), review the whole V2MOM together: - -*Ask together:* -1. **Does the vision excite you?** (If not, why not? What's missing?) -2. **Do the values guide decisions?** (Can you use them to say no to things?) -3. **Are the methods ordered by priority?** (Is Method 1 truly most important?) -4. **Are the obstacles honest?** (Or are you sugar-coating?) -5. **Will the metrics tell you if you're succeeding?** (Or are they vanity metrics?) -6. **Does this V2MOM make you want to DO THE WORK?** (If not, something is wrong) - -*Refinement:* -- Merge overlapping methods -- Reorder methods if priorities are wrong -- Add missing concrete actions -- Strengthen weak definitions -- Remove fluff - -*Red flags:* -- Vision doesn't excite you → Need to dig deeper into what you really want -- Values are generic → Need concrete definitions and examples -- Methods have no concrete actions → Too vague, need specifics -- Obstacles are all external → Need honesty about personal patterns -- Metrics are subjective → Need objective measurements - -** Phase 10: Commit and Use - -Once the V2MOM feels complete: - -1. **Save the document** in appropriate location -2. **Share with stakeholders** (if applicable) -3. **Use it immediately** (start Method 1 execution or first triage) -4. **Schedule first review** (1 week out: is this working?) - -*Why use immediately:* Validates the V2MOM is practical, not theoretical. Execution reveals gaps that discussion misses. 
- -Note: As before, write out the session context file to prevent any details from getting lost. - -* Principles to Follow - -** Honesty Over Aspiration - -V2MOM requires brutal honesty, especially in Obstacles section. - -*Examples:* -- "I get bored after 6 weeks" (honest) vs "Maintaining focus is challenging" (bland) -- "I have 3 hours per week max" (honest) vs "Time is limited" (vague) -- "I impulse-spend when stressed" (honest) vs "Budget adherence needs work" (passive) - -**Honesty enables solutions.** If you can't name the obstacle, you can't overcome it. - -** Concrete Over Abstract - -Every section should have concrete examples and definitions. - -*Bad:* -- Vision: "Be successful" -- Values: "Quality, Speed, Innovation" -- Methods: "Improve things" -- Metrics: "Do better" - -*Good:* -- Vision: "Complete a 5K in under 30 minutes, have energy to play with kids after work, sleep 7+ hours consistently" -- Values: "Sustainable: Can maintain for 10+ years. No crash diets, no injury-risking overtraining." -- Methods: "Method 1: Fix sleep quality (blackout curtains, consistent bedtime, no screens 1hr before bed)" -- Metrics: "5K time: current 38min → target 29min (measure: monthly timed run)" - -** Priority Ordering is Strategic - -Method ordering determines what happens first. Get it wrong and you'll waste effort. - -*Common patterns:* -- **Fix → Build → Enhance → Sustain** (eliminate problems before building) -- **Eliminate → Replace → Optimize** (stop damage before improving) -- **Learn → Practice → Apply → Teach** (build skill progressively) - -*Why Method 1 must address the blocker:* -- If foundation is broken, can't build on it -- High-impact quick wins build momentum -- Must stop the bleeding before starting rehab - -** Methods Need Concrete Actions - -If you can't list 3-8 concrete actions for a method, it's too vague. - -*Test:* Can you start working on Method 1 immediately after completing the V2MOM? 
- -If answer is "I need to think about what to do first", the method needs more concrete actions. - -*Example:* -- Too vague: "Method 1: Improve health" -- Concrete: "Method 1: Fix sleep quality → blackout curtains, consistent 10pm bedtime, no screens after 9pm, magnesium supplement, sleep tracking" - -** Metrics Must Be Measurable - -"Better" is not a metric. "Bench press 135 lbs" is a metric. - -*For each metric, you must be able to answer:* -1. How do I measure this? (exact method or tool) -2. What's the current state? -3. What's the target state? -4. How often do I measure it? -5. What does this metric actually tell me? - -If you can't answer these, it's not a metric yet. - -** V2MOM is Living Document - -V2MOM is not set in stone. As you execute: - -- Methods may need reordering (new information reveals priorities) -- Metrics may need adjustment (too aggressive or too conservative) -- New obstacles emerge (capture them) -- Values get refined (concrete examples clarify definitions) - -*Update the V2MOM when:* -- Major priority shift occurs -- New obstacle emerges that changes approach -- Metric targets prove unrealistic or too easy -- Method completion opens new possibilities -- Quarterly review reveals misalignment - -*But don't update frivolously:* Changing the V2MOM every week defeats the purpose. Update when major shifts occur, not when minor tactics change. - -** Use It or Lose It - -V2MOM only works if you use it for decisions. - -*Use it for:* -- Weekly reviews (am I working on right things?) -- Priority decisions (which method does this serve?) -- Saying no to distractions (not in the methods) -- Celebrating wins (shipped Method 1 items!) -- Identifying blockers (obstacles getting worse?) - -*If 2 weeks pass without referencing the V2MOM, something is wrong.* Either the V2MOM isn't serving you, or you're not using it. - -* Living Document - -This is a living document. After creating V2MOMs for different projects, consider: -- Did the process work well? 
-- Were any sections harder than expected? -- Did we discover better questions to ask? -- Should sections be created in different order? -- What patterns emerge across different domains? - -Update this session document with learnings to make future V2MOM creation smoother. - -* Examples: V2MOMs Across Different Domains - -** Example 1: Health and Fitness V2MOM (Brief) - -*Vision:* Wake up with energy, complete 5K comfortably, feel strong in daily activities, stable mood, no afternoon crashes. - -*Values:* -- Sustainable: Can do this at 80 years old -- Compound: Small daily habits over quick fixes - -*Methods:* -1. Fix Sleep Quality (blackout curtains, consistent bedtime, track metrics) -2. Build Baseline Strength (3x/week, compound movements, progressive overload) -3. Establish Nutrition System (meal prep, protein targets, hydration) - -*Obstacles:* -- Get excited about new programs, switch before results (6-week pattern) -- Social events involve alcohol and junk food -- Skip workouts when stressed at work - -*Metrics:* -- Resting heart rate: 70 → 60 bpm -- Workout consistency: 3x/week for 12 consecutive weeks -- 5K time: 38min → 29min - -** Example 2: Financial Independence V2MOM (Brief) - -*Vision:* Six months emergency fund, debt-free except mortgage, automatic investing, financial decisions without anxiety. - -*Values:* -- Automatic: Set up once, runs forever (don't rely on willpower) -- Freedom: Budget enables choices, not restricts them - -*Methods:* -1. Stop the Bleeding (eliminate subscriptions, high-interest debt, impulse purchases) -2. Build Safety Net ($1000 emergency fund → 3 months → 6 months) -3. 
Automate Investing (max 401k match, IRA, automatic contributions) - -*Obstacles:* -- View budget as restriction → triggers rebellion spending -- FOMO on experiences peers have -- Limited financial literacy - -*Metrics:* -- Emergency fund: $0 → $6000 -- Savings rate: 5% → 20% -- High-interest debt: $8000 → $0 - -** Example 3: Emacs Configuration V2MOM (Detailed) - -This V2MOM was created over 2 sessions in late 2025 and led to significant improvements in config quality and maintainability. - -*** The Context - -Craig's Emacs configuration had grown to ~50+ todo items, unclear priorities, and performance issues. Config was his most-used software (email, calendar, tasks, programming, reading, music) so breakage blocked all work. - -*** The Process (2 Sessions, ~2.5 Hours Total) - -*Session 1 (2025-10-30, ~1 hour):* -- Vision: Already clear from existing draft, kept as-is -- Values: Deep Q&A to define "Intuitive", "Fast", "Simple" - - Each value got concrete definition + examples + anti-patterns - - Intuitive: muscle memory + mnemonics + which-key timing - - Fast: < 3s startup, org-agenda is THE BOTTLENECK - - Simple: production practices, simplicity produces reliability - -*Session 2 (2025-10-31, ~1.5 hours):* -- Methods: Identified 6 methods through Q&A - - Method 1: Make Using Emacs Frictionless (fix daily pain) - - Method 2: Stop Problems Before They Appear (stability) - - Method 3: Make Fixing Emacs Frictionless (tooling) - - Method 4: Contribute to Ecosystem (package maintenance) - - Method 5: Be Kind To Future Self (new features) - - Method 6: Develop Disciplined Practices (meta-method) -- Obstacles: Honest personal patterns - - "Getting irritated at mistakes and pushing on" - - "Hard to say no to fun ideas" - - "Perfectionism delays shipping" -- Metrics: Measurable outcomes - - Startup time: 6.2s → < 3s - - Org-agenda rebuild: 30s → < 5s - - Active todos: 50+ → < 20 - - Weekly triage consistency - - Research:shipped ratio > 1:1 - -*** Immediate Impact - -After 
completing V2MOM: -- Ruthlessly triaged 50 todos → 23 (under < 20 target) -- Archived items not serving vision to someday-maybe.org -- Immediate execution: removed network check (2s improvement!) -- Clear decision framework for weekly inbox triage -- Startup improved: 6.19s → 4.16s → 3.8s (approaching target) - -*** Key Learnings - -1. **Vision was easy:** Already had clear picture of success -2. **Values took work:** Required concrete definitions, not just words -3. **Methods needed ordering:** Priority emerged from dependency discussion -4. **Obstacles required honesty:** Hardest to name personal patterns -5. **Metrics aligned with values:** "Fast" value → fast metrics (startup, org-agenda) - -*** Why It Worked - -- V2MOM provided framework to say "no" ruthlessly -- Method ordering prevented building on broken foundation -- Metrics were objective (seconds, counts) not subjective -- Obstacles acknowledged personal patterns enabling better strategies -- Used immediately for inbox triage (validated practicality) - -* Conclusion - -Creating a V2MOM transforms vague intentions into concrete strategy. It provides: - -- **Clarity** on what you're actually trying to achieve -- **Decision framework** for ruthless prioritization -- **Measurable progress** through objective metrics -- **Honest obstacles** that can be addressed -- **Ordered methods** that build on each other - -**The framework takes 2-3 hours to create. It saves weeks of unfocused work.** - -The V2MOM works across domains because the structure is universal: -- Vision: Where am I going? -- Values: What principles guide me? -- Methods: How do I get there? -- Obstacles: What's in my way? -- Metrics: How do I know it's working? - -*Remember:* V2MOM is a tool, not a trophy. Create it, use it, update it, and let it guide your work. If you're not using it weekly, either fix the V2MOM or admit you don't need one. - -*Final test:* Can you say "no" to something you would have said "yes" to before? 
If so, the V2MOM is working. diff --git a/docs/workflows/create-workflow.org b/docs/workflows/create-workflow.org deleted file mode 100644 index 85cd202..0000000 --- a/docs/workflows/create-workflow.org +++ /dev/null @@ -1,360 +0,0 @@ -#+TITLE: Creating New Workflows -#+AUTHOR: Craig Jennings -#+DATE: 2025-11-01 - -* Overview - -This document describes the meta-workflow for creating new workflows. When we identify a repetitive task or a collaborative pattern worth keeping, we use this process to formalize it into a documented workflow that we can reference and reuse. - -Note the definitions: a "session" is the time Claude spends with the user; the user starts a session with Claude, does some work, then ends the session. A "workflow" is a routine or pattern for doing tasks within a session to accomplish a goal. - -Workflows are living documents that capture how we work together on specific types of tasks. They build our shared vocabulary and enable efficient collaboration across multiple work sessions. 
- -* Problem We're Solving - -Without a formal workflow creation process, we encounter several issues: - -** Inefficient Use of Intelligence -- Craig leads the process based solely on his knowledge -- We don't leverage Claude's expertise to improve or validate the approach -- Miss opportunities to apply software engineering and process best practices - -** Time Waste and Repetition -- Craig must re-explain the workflow each time we work together -- No persistent memory of how we've agreed to work -- Each session starts from scratch instead of building on previous work - -** Error-Prone Execution -- Important steps may be forgotten or omitted -- No checklist to verify completeness -- Mistakes lead to incomplete work or failed goals - -** Missed Learning Opportunities -- Don't capture lessons learned from our collaboration -- Can't improve processes based on what works/doesn't work -- Lose insights that emerge during execution - -** Limited Shared Vocabulary -- No deep, documented understanding of what terms mean -- "Let's do a refactor workflow" has no precise definition -- Can't efficiently communicate about workflows - -*Impact:* Inefficiency, errors, and lost opportunity to continuously improve our collaborative workflows. - -* Exit Criteria - -We know a workflow definition is complete when: - -1. **Information is logically arranged** - The structure makes sense and flows naturally -2. **Both parties understand how to work together** - We can articulate the workflow -3. **Agreement on effectiveness** - We both agree that following this workflow will lead to exit criteria and resolve the stated problem -4. **Tasks are clearly defined** - Steps are actionable, not vague -5. **Problem resolution path** - Completing the tasks either: - - Fixes the problem permanently, OR - - Provides a process for keeping the problem at bay - -*Measurable validation:* -- Can we both articulate the workflow without referring to the document? -- Do we agree it will solve the problem? 
-- Are the tasks actionable enough to start immediately? -- Does the workflow get used soon after creation (validation by execution)? - -* When to Use This Workflow - -Trigger this workflow creation workflow when: - -- You notice a repetitive workflow that keeps coming up -- A collaborative pattern emerges that would benefit from documentation -- Craig says "let's create/define/design a workflow for [activity]" -- You identify a new type of work that doesn't fit existing workflows -- An existing workflow needs significant restructuring (treat as creating a new one) - -Examples: -- "Let's create a workflow where we inbox zero" -- "We should define a code review workflow" -- "Let's design a workflow for weekly planning" - -* Approach: How We Work Together -** Phase 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. This might be a long process. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** Phase 1: Question and Answer Discovery - -Walk through these four core questions collaboratively. Take notes on the answers. - -*IMPORTANT: Save answers as you go!* - -The Q&A phase can take time—Craig may need to think through answers, and discussions can be lengthy. To prevent data loss from terminal crashes or process quits: - -1. Create a draft file at =docs/workflows/[name]-draft.org= after deciding on the name -2. After each question is answered, save the Q&A content to the draft file -3. If workflow is interrupted, you can resume from the saved answers -4. Once complete, the draft becomes the final workflow document - -This protects against losing substantial thinking work if the workflow is interrupted. - -*** Question 1: What problem are we solving in this type of workflow? - -Ask Craig: "What problem are we solving in this type of workflow? What would happen without this workflow?" 
- -The answer reveals: -- Overview and goal of the workflow -- Why this work matters (motivation) -- Impact/priority compared to other work -- What happens if we don't do this work - -Example from refactor workflow: -#+begin_quote -"My Emacs configuration isn't resilient enough. There's lots of custom code, and I'm even developing some as Emacs packages. Yet Emacs is my most-used software, so when Emacs breaks, I become unproductive. I need to make Emacs more resilient through good unit tests and refactoring." -#+end_quote - -*** Question 2: How do we know when we're done? - -Ask Craig: "How do we know when we're done?" - -The answer reveals: -- Exit criteria -- Results/completion criteria -- Measurable outcomes - -*Your role:* -- Push back if the answer is vague or unmeasurable -- Propose specific measurements based on context -- Iterate together until criteria are clear -- Fallback (hopefully rare): "when Craig says we're done" - -Example from refactor workflow: -#+begin_quote -"When we've reviewed all methods, decided which to test and refactor, run all tests, and fixed all failures including bugs we find." -#+end_quote - -Claude might add: "How about a code coverage goal of 70%+?" - -*** Question 3: How do you see us working together in this kind of workflow? - -Ask Craig: "How do you see us working together in this kind of workflow?" - -The answer reveals: -- Steps or phases we'll go through -- The general approach to the work -- How tasks flow from one to another - -*Your role:* -- As steps emerge, ask yourself: - - "Do these steps lead to solving the real problem?" - - "What is missing from these steps?" 
-- If the answers aren't "yes" and "nothing", raise concerns -- Propose additions based on your knowledge -- Suggest concrete improvements - -Example from refactor workflow: -#+begin_quote -"We'll analyze test coverage, categorize functions by testability, write tests systematically using Normal/Boundary/Error categories, run tests, analyze failures, fix bugs, and repeat." -#+end_quote - -Claude might suggest: "Should we install a code coverage tool as part of this process?" - -*** Question 4: Are there any principles we should be following while doing this? - -Ask Craig: "Are there any principles we should be following while doing this kind of workflow?" - -The answer reveals: -- Principles to follow -- Decision frameworks -- Quality standards -- When to choose option A vs option B - -*Your role:* -- Think through all elements of the workflow -- Consider situations that may arise -- Identify what principles would guide decisions -- Suggest decision frameworks from your knowledge - -Example from refactor workflow: -#+begin_quote -Craig: "Treat all test code as production code - same engineering practices apply." - -Claude suggests: "Since we'll refactor methods mixing UI and logic, should we add a principle to separate them for testability?" -#+end_quote - -** Phase 2: Assess Completeness - -After the Q&A, ask together: - -1. **Do we have enough information to formulate steps/process?** - - If yes, proceed to Phase 3 - - If no, identify what's missing and discuss further - -2. **Do we agree following this approach will resolve/mitigate the problem?** - - Both parties must agree - - If not, identify concerns and iterate - -** Phase 3: Name the Workflow - -Decide on a name for this workflow. 
- -*Naming convention:* Action-oriented (verb form) -- Examples: "refactor", "inbox-zero", "create-workflow", "review-code" -- Why: Shorter, natural when saying "let's do a [name] workflow" -- Filename: =docs/workflows/[name].org= - -** Phase 4: Document the Workflow - -Write the workflow file at =docs/workflows/[name].org= using this structure: - -*** Recommended Structure -1. *Title and metadata* (=#+TITLE=, =#+AUTHOR=, =#+DATE=) -2. *Overview* - Brief description of the workflow -3. *Problem We're Solving* - From Q&A, with context and impact -4. *Exit Criteria* - Measurable outcomes, how we know we're done -5. *When to Use This Workflow* - Triggers, circumstances, examples -6. *Approach: How We Work Together* - - Phases/steps derived from Q&A - - Decision frameworks - - Concrete examples woven throughout -7. *Principles to Follow* - Guidelines from Q&A -8. *Living Document Notice* - Reminder to update with learnings - -*** Important Notes -- Weave concrete examples into sections (don't separate them) -- Use examples from actual workflows when available -- Make tasks actionable, not vague -- Include decision frameworks for common situations -- Note that this is a living document - -** Phase 5: Update Project State - -Update =notes.org=: -1. Add new workflow to "Available Workflows" section -2. Include brief description and reference to file -3. Note creation date - -Example entry: -#+begin_src org -,** inbox-zero -File: =docs/workflows/inbox-zero.org= - -Workflow for processing inbox to zero: -1. [Brief workflow summary] -2. [Key steps] - -Created: 2025-11-01 -#+end_src - -** Phase 6: Cleanup -Write out the session context file before proceeding any further - -** Phase 7: Validate by Execution - -*Critical step:* Use the workflow soon after creating it. 
- -- Schedule the workflow for immediate use -- Follow the documented workflow -- Note what works well -- Identify gaps or unclear areas -- Update the workflow document with learnings - -*This validates the workflow definition and ensures it's practical, not theoretical.* - -* Principles to Follow - -These principles guide us while creating new workflows: - -** Collaboration Through Discussion -- Be proactive about collaboration -- Suggest everything on your mind -- Ask all relevant questions -- Push back when something seems wrong, inconsistent, or unclear -- Misunderstandings are learning opportunities - -** Reviewing the Whole as Well as the Pieces -- May get into weeds while identifying each step -- Stop to look at the whole thing at the end -- Ask the big questions: Does this actually solve the problem? -- Verify all pieces connect logically - -** Concrete Over Abstract -- Use examples liberally within explanations -- Weave concrete examples into Q&A answers -- Don't just describe abstractly -- "When nil input crashes, ask..." is better than "handle edge cases" - -** Actionable Tasks Over Vague Direction -- Steps should be clear enough to know what to do next -- "Ask: how do you see us working together?" is actionable -- "Figure out the approach" is too vague -- Test: Could someone execute this without further explanation? 
- -** Validate Early -- "Use it soon afterward" catches problems early -- Don't let workflow definitions sit unused and untested -- Real execution reveals gaps that theory misses -- Update immediately based on first use - -** Decision Frameworks Over Rigid Steps -- Workflows are frameworks (principles + flexibility), not recipes -- Include principles that help case-by-case decisions -- "When X happens, ask Y" is a decision framework -- "Always do X" is too rigid for most workflows - -** Question Assumptions -- If something doesn't make sense, speak up -- If a step seems to skip something, point it out -- Better to question during creation than discover gaps during execution -- No assumption is too basic to verify - -* Living Document - -This is a living document. As we create new workflows and learn what works (and what doesn't), we update this file with: - -- New insights about workflow creation -- Improvements to the Q&A process -- Better examples -- Additional principles discovered -- Refinements to the structure - -Every time we create a workflow, we have an opportunity to improve this meta-process. - -** Updates and Learnings - -*** 2025-11-01: Save Q&A answers incrementally -*Learning:* During emacs-inbox-zero workflow creation, we discovered that Q&A discussions can be lengthy and make Craig think deeply. Terminal crashes or process quits can lose substantial work. - -*Improvement:* Added guidance in Phase 1 to create a draft file and save Q&A answers after each question. This protects against data loss and allows resuming interrupted workflows. - -*Impact:* Reduces risk of losing 10-15 minutes of thinking work if workflow is interrupted. - -*** 2025-11-01: Validation by execution works! -*Learning:* Immediately after creating the emacs-inbox-zero workflow, we validated it by actually running the workflow. This caught unclear areas and validated that the 10-minute target was realistic. 
- -*Key insight from validation:* When Craig provides useful context during workflows (impact estimates, theories, examples), that context should be captured in task descriptions. This wasn't obvious during workflow creation but became clear during execution. - -*Impact:* Validation catches what theory misses. Always use Phase 7 (Validate by Execution) soon after creating a workflow. - -* Example: Creating the "Create-Workflow" Workflow - -This very document was created using the process it describes (recursive!). - -** The Q&A -- *Problem:* Time waste, errors, missed learning from informal processes -- *Exit criteria:* Logical arrangement, mutual understanding, agreement on effectiveness, actionable tasks -- *Approach:* Four-question Q&A, assess completeness, name it, document it, update notes.org, validate by use -- *Principles:* Collaboration through discussion, review the whole, concrete over abstract, actionable tasks, validate early, decision frameworks, question assumptions - -** The Result -We identified what was needed, collaborated on answers, and captured it in this document. Then we immediately used it to create the next workflow (validation). - -* Conclusion - -Creating workflows is a meta-skill that improves all our collaboration. By formalizing how we work together, we: - -- Build shared vocabulary -- Eliminate repeated explanations -- Capture lessons learned -- Enable continuous improvement -- Make our partnership more efficient - -Each new workflow we create adds to our collaborative toolkit and deepens our ability to work together effectively. - -*Remember:* Workflows are frameworks, not rigid recipes. They provide structure while allowing flexibility for case-by-case decisions. The goal is effectiveness, not perfection. 
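The incremental Q&A save described in Phase 1 (and reinforced by the 2025-11-01 learning) can be sketched as a small shell helper. This is a hypothetical sketch, not part of the documented tooling; =save_answer= and its arguments are names introduced here purely for illustration.

```bash
# Hypothetical sketch of the Phase 1 incremental save: append each
# question/answer pair to the draft file the moment it is answered,
# so a crashed session loses at most the current question.
save_answer() {
    draft="$1" question="$2" answer="$3"
    # First call creates the draft with minimal org metadata.
    if [ ! -f "$draft" ]; then
        printf '#+TITLE: %s (draft)\n' "$(basename "$draft" .org)" > "$draft"
    fi
    # Each answer becomes its own org heading, appended immediately.
    printf '\n* %s\n\n%s\n' "$question" "$answer" >> "$draft"
}
```

Calling it after each answered question, e.g. =save_answer docs/workflows/inbox-zero-draft.org "What problem are we solving?" "..."=, keeps the draft current without any extra bookkeeping.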
diff --git a/docs/workflows/delete-calendar-event.org b/docs/workflows/delete-calendar-event.org deleted file mode 100644 index 46c5cad..0000000 --- a/docs/workflows/delete-calendar-event.org +++ /dev/null @@ -1,217 +0,0 @@ -#+TITLE: Delete Calendar Event Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -Workflow for deleting calendar events via gcalcli with explicit confirmation. - -* Triggers - -- "delete the meeting" -- "cancel my appointment" -- "remove the event" -- "clear my calendar for..." - -* Prerequisites - -- gcalcli installed and authenticated -- Event must exist on calendar - -* Note: Calendar Visibility - -Events can only be deleted from Google calendars via gcalcli. DeepSat (work) and Proton calendar events are visible in =~/.emacs.d/data/{dcal,pcal}.org= but cannot be modified from here. - -* Workflow Steps - -** 1. Parse User Request - -Extract: -- Which event (title, partial match, or date hint) -- Date context (if provided) - -Examples: -- "Delete the dentist appointment" → search for "dentist" -- "Cancel tomorrow's meeting" → search tomorrow's events -- "Remove the 3pm call" → search by time - -** 2. Search for Event - -#+begin_src bash -# Search by title -gcalcli --calendar "Calendar Name" search "event title" - -# Or list events for a date -gcalcli --calendar "Calendar Name" agenda "date" "date 11:59pm" -#+end_src - -** 3. Handle Multiple Matches - -If search returns multiple events: - -#+begin_example -Found 3 events matching "meeting": - -1. Team Meeting - Feb 3, 2026 at 9:00 AM -2. Project Meeting - Feb 4, 2026 at 2:00 PM -3. Client Meeting - Feb 5, 2026 at 10:00 AM - -Which event do you want to delete? (1-3) -#+end_example - -** 4. 
Display Full Event Details - -Show the event that will be deleted: - -#+begin_example -Event to Delete: -================ -Event: Team Meeting -When: Monday, Feb 3, 2026 at 9:00 AM -Duration: 1 hour -Location: Conference Room A -Description: Weekly sync -Calendar: Work -#+end_example - -** 5. Explicit Confirmation - -Ask clearly: - -#+begin_example -Delete this event? (yes/no) -#+end_example - -*Do NOT delete until user explicitly confirms with "yes".* - -** 6. Execute Delete - -gcalcli delete requires interactive confirmation. Since Claude can't interact with the prompt, pipe "y" to confirm: - -#+begin_src bash -echo "y" | gcalcli --calendar "Calendar Name" delete "Event Title" -#+end_src - -Use a date range to narrow matches and avoid deleting the wrong event: - -#+begin_src bash -echo "y" | gcalcli --calendar "Calendar Name" delete "Event Title" 2026-02-14 2026-02-15 -#+end_src - -** 7. Verify - -Confirm the event is gone: - -#+begin_src bash -gcalcli --calendar "Calendar Name" search "Event Title" -#+end_src - -Report success: - -#+begin_example -Event "Team Meeting" has been deleted from your calendar. -#+end_example - -* gcalcli Delete Command - -** Basic Delete (pipe confirmation) - -gcalcli delete prompts interactively, which fails in non-interactive shells. Always pipe "y" to confirm: - -#+begin_src bash -echo "y" | gcalcli delete "Event Title" -#+end_src - -** With Date Range (preferred — avoids accidental matches) - -#+begin_src bash -echo "y" | gcalcli delete "Event Title" 2026-02-14 2026-02-15 -#+end_src - -** Calendar-Specific Delete - -#+begin_src bash -echo "y" | gcalcli --calendar "Craig" delete "Meeting" 2026-02-14 2026-02-15 -#+end_src - -** Skip All Prompts (dangerous) - -#+begin_src bash -gcalcli delete "Event Title" --iamaexpert -#+end_src - -*Warning:* =--iamaexpert= skips all prompts and deletes every match. Avoid unless the search is guaranteed to match exactly one event. 
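The safe-delete sequence above (search first, delete with a piped confirmation and a narrowing date range, then verify) can be sketched as a shell helper. This is a hypothetical wrapper, not part of the workflow's tooling; the =GCALCLI= variable and the =safe_delete= name are assumptions added so the gcalcli binary can be substituted.

```bash
#!/bin/sh
# Hypothetical helper wrapping the safe-delete sequence described above.
# GCALCLI is an assumption (defaults to the real binary) so it can be stubbed.
GCALCLI="${GCALCLI:-gcalcli}"

safe_delete() {
    calendar="$1" title="$2" start="$3" end="$4"
    # 1. Search first so the matching events are visible before anything is touched.
    "$GCALCLI" --calendar "$calendar" search "$title" || return 1
    # 2. Pipe "y" (the delete prompt is interactive) and narrow with a date
    #    range so a loose title cannot delete the wrong event.
    echo "y" | "$GCALCLI" --calendar "$calendar" delete "$title" "$start" "$end" || return 1
    # 3. Verify: a search for the title should now come back empty.
    ! "$GCALCLI" --calendar "$calendar" search "$title" | grep -q "$title"
}
```

Invoked as =safe_delete "Craig" "Team Meeting" 2026-02-14 2026-02-15=, it mirrors steps 2, 6, and 7 of the workflow while keeping =--iamaexpert= out of the picture.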
- -* Handling Multiple Events - -If the search pattern matches multiple events, gcalcli may: -- Delete all matching events (dangerous!) -- Prompt for each one (interactive mode) - -*Best practice:* Use specific titles or search first, then delete by exact match. - -* Recurring Events - -*Warning:* Deleting a recurring event deletes ALL instances. - -For recurring events: -1. Warn the user that all instances will be deleted -2. Ask for confirmation specifically mentioning "all occurrences" -3. Consider if they only want to delete one instance (not supported by simple delete) - -#+begin_example -This is a recurring event. Deleting it will remove ALL occurrences. - -Delete all instances of "Weekly Standup"? (yes/no) -#+end_example - -* Error Handling - -** Event Not Found -- Verify spelling -- Try partial match -- Check date range -- May have already been deleted - -** Delete Failed -- Check calendar permissions -- Verify event exists -- Try with --calendar flag - -** Wrong Event Deleted -- Cannot undo gcalcli delete -- Would need to recreate the event manually - -* Safety Considerations - -1. *Always show full event details* before asking for confirmation -2. *Never delete without explicit "yes"* from user -3. *Warn about recurring events* before deletion -4. *Verify deletion* by searching after -5. *Read-only calendars* (like Christine's) cannot have events deleted - -* Read-Only Calendars - -Some calendars are read-only: - -| Calendar | Can Delete? | -|---------------------------+-------------| -| Craig | Yes | -| Christine | Yes | -| Todoist | Yes | -| Craig Jennings (TripIt) | No | -| Holidays in United States | No | -| Craig Proton | No | - -If user tries to delete from read-only calendar: - -#+begin_example -Cannot delete from "Craig Proton" - this is a read-only calendar. 
-#+end_example - -* Related - -- [[file:add-calendar-event.org][Add Calendar Event]] - create events -- [[file:read-calendar-events.org][Read Calendar Events]] - view events -- [[file:edit-calendar-event.org][Edit Calendar Event]] - modify events -- [[file:../calendar-api-research.org][Calendar API Research]] - gcalcli reference diff --git a/docs/workflows/edit-calendar-event.org b/docs/workflows/edit-calendar-event.org deleted file mode 100644 index 13a80a9..0000000 --- a/docs/workflows/edit-calendar-event.org +++ /dev/null @@ -1,255 +0,0 @@ -#+TITLE: Edit Calendar Event Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -Workflow for editing existing calendar events via gcalcli. - -*Note:* gcalcli's edit command is interactive. This workflow uses a delete-and-recreate approach for non-interactive editing. - -* Triggers - -- "edit the meeting" -- "change my appointment" -- "reschedule" -- "update the event" -- "move my appointment" - -* Prerequisites - -- gcalcli installed and authenticated -- Event must exist on calendar - -* CRITICAL: Check All Calendars Before Rescheduling - -When rescheduling an event, ALWAYS check for conflicts at the new time across ALL calendars: - -#+begin_src bash -grep "TARGET_DATE" ~/.emacs.d/data/gcal.org # Google calendar -grep "TARGET_DATE" ~/.emacs.d/data/dcal.org # DeepSat work calendar -grep "TARGET_DATE" ~/.emacs.d/data/pcal.org # Proton calendar -#+end_src - -gcalcli only sees Google calendars — verify the new time is free across all three files before rescheduling. - -* Workflow Steps - -** 1. Parse User Request - -Extract: -- Which event (title, partial match, or date hint) -- What to change (if mentioned) - -Examples: -- "Edit the dentist appointment" → search for "dentist" -- "Reschedule tomorrow's meeting" → search tomorrow's events -- "Change the 3pm call to 4pm" → search by time - -** 2. 
Search for Event - -#+begin_src bash -# Search by title -gcalcli --calendar "Calendar Name" search "event title" - -# Or list events for a date -gcalcli --calendar "Calendar Name" agenda "date" "date 11:59pm" -#+end_src - -** 3. Handle Multiple Matches - -If search returns multiple events: - -#+begin_example -Found 3 events matching "meeting": - -1. Team Meeting - Feb 3, 2026 at 9:00 AM -2. Project Meeting - Feb 4, 2026 at 2:00 PM -3. Client Meeting - Feb 5, 2026 at 10:00 AM - -Which event do you want to edit? (1-3) -#+end_example - -** 4. Display Full Event Details - -Show the current event state: - -#+begin_example -Event: Team Meeting -When: Monday, Feb 3, 2026 at 9:00 AM -Duration: 1 hour -Location: Conference Room A -Description: Weekly sync -Reminders: 5 min, 0 min -Calendar: Craig -#+end_example - -** 5. Ask What to Change - -Options: -- Title -- Date/Time -- Duration -- Location -- Description -- Reminders - -Can change one or multiple fields. - -** 6. Show Updated Summary - -Before applying changes: - -#+begin_example -Updated Event: -Event: Team Standup (was: Team Meeting) -When: Monday, Feb 3, 2026 at 9:30 AM (was: 9:00 AM) -Duration: 30 minutes (was: 1 hour) -Location: Conference Room A -Description: Weekly sync -Reminders: 5 min, 0 min -Calendar: Craig - -Apply these changes? (yes/no) -#+end_example - -** 7. Explicit Confirmation - -*Do NOT apply changes until user confirms.* - -** 8. Execute Edit (Delete + Recreate) - -Since gcalcli edit is interactive, use delete + add: - -#+begin_src bash -# Delete original -gcalcli --calendar "Calendar Name" delete "Event Title" --iamaexpert - -# Recreate with updated fields -gcalcli --calendar "Calendar Name" add \ - --title "Updated Title" \ - --when "new date/time" \ - --duration NEW_MINUTES \ - --where "Location" \ - --description "Description" \ - --reminder 5 \ - --reminder 0 \ - --noprompt -#+end_src - -** 9. 
Verify - -Confirm the updated event exists: - -#+begin_src bash -gcalcli --calendar "Calendar Name" search "Updated Title" -#+end_src - -Report success or failure. - -* Common Edit Scenarios - -** Reschedule (Change Time) - -#+begin_example -User: "Move my dentist appointment to 3pm" - -1. Search for "dentist" -2. Show current time -3. Confirm new time: 3pm -4. Delete + recreate at new time -#+end_example - -** Change Duration - -#+begin_example -User: "Make the meeting 2 hours instead of 1" - -1. Find the meeting -2. Show current duration -3. Confirm new duration: 2 hours -4. Delete + recreate with new duration -#+end_example - -** Update Location - -#+begin_example -User: "Change the meeting location to Room B" - -1. Find the meeting -2. Show current location -3. Confirm new location -4. Delete + recreate with new location -#+end_example - -** Move to Different Day - -#+begin_example -User: "Move Friday's review to Monday" - -1. Find event on Friday -2. Show full details -3. Confirm new date (Monday) and time -4. Delete + recreate on new day -#+end_example - -* gcalcli Command Reference - -** Search - -#+begin_src bash -gcalcli search "event title" -gcalcli --calendar "Craig" search "meeting" -#+end_src - -** Delete (for edit workflow) - -#+begin_src bash -gcalcli --calendar "Calendar" delete "Event Title" --iamaexpert -#+end_src - -** Add (recreate with edits) - -#+begin_src bash -gcalcli --calendar "Calendar" add \ - --title "Title" \ - --when "date time" \ - --duration MINUTES \ - --where "Location" \ - --description "Notes" \ - --reminder 5 \ - --reminder 0 \ - --noprompt -#+end_src - -* Handling Recurring Events - -*Warning:* The delete+recreate approach deletes ALL instances of a recurring event. - -For recurring events: -1. Warn the user this will affect all instances -2. Consider using gcalcli's interactive edit mode -3. 
Or create a new single event and delete the series - -* Error Handling - -** Event Not Found -- Verify spelling -- Try partial match -- Check date range - -** Multiple Matches -- Show all matches -- Ask user to select one -- Use more specific search terms - -** Delete Failed -- Event may already be deleted -- Check calendar permissions - -* Related - -- [[file:add-calendar-event.org][Add Calendar Event]] - create events -- [[file:read-calendar-events.org][Read Calendar Events]] - view events -- [[file:delete-calendar-event.org][Delete Calendar Event]] - remove events -- [[file:../calendar-api-research.org][Calendar API Research]] - gcalcli reference diff --git a/docs/workflows/email-assembly.org b/docs/workflows/email-assembly.org deleted file mode 100644 index 003459c..0000000 --- a/docs/workflows/email-assembly.org +++ /dev/null @@ -1,183 +0,0 @@ -#+TITLE: Email Assembly Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-29 - -* Overview - -This workflow assembles documents for an email that will be sent via Craig's email client (Proton Mail). It creates a temporary workspace, gathers relevant documents, drafts the email, and cleans up after sending. - -Use this workflow when Craig needs to send an email with multiple attachments that require gathering from various locations in the project. - -* When to Use This Workflow - -When Craig says: -- "assemble an email" or "email assembly workflow" -- "gather documents for an email" -- "I need to send [person] some documents" - -* The Workflow - -** Step 0: Context Window Hygiene -- Write out the session context file. -- Inform the user that you've written out the session context file and ask if they want to compact the context now before beginning. 
- -** Step 1: Create Temporary Workspace - -Create a temporary folder at the project root: - -#+begin_src bash -mkdir -p ./tmp -#+end_src - -This folder will hold: -- Copies of all attachments -- The draft email text - -** Step 2: Identify Required Documents - -Discuss with Craig what documents are needed. Common categories: -- Legal documents (deeds, certificates, agreements) -- Financial documents (statements, invoices) -- Correspondence (prior emails, letters) -- Identity documents (death certificates, ID copies) - -For each document: -1. Locate it in the project -2. Confirm with Craig it's the right one -3. Open it in zathura for Craig to verify if needed - -** Step 3: Copy Documents to Workspace - -**IMPORTANT: Always COPY, never MOVE documents.** - -#+begin_src bash -cp /path/to/original/document.pdf ./tmp/ -#+end_src - -After copying, list the workspace contents to confirm: - -#+begin_src bash -ls -lh ./tmp/ -#+end_src - -** Step 4: Draft the Email - -Create a draft email file in the workspace: - -#+begin_src bash -touch ./tmp/email-draft.txt -#+end_src - -Include: -- To: (recipient email) -- Subject: (clear, descriptive subject line) -- Body: (context, list of attachments, contact info) - -The body should: -- Provide context for why documents are being sent -- List all attachments with brief descriptions -- Include Craig's contact information - -** Step 5: Open Draft in Emacs - -Open the draft for Craig to review and edit: - -#+begin_src bash -emacsclient -n ./tmp/email-draft.txt -#+end_src - -Wait for Craig to finish editing before proceeding. - -** Step 6: Craig Sends Email - -Craig will: -1. Open his email client (Proton Mail) -2. Create a new email using the draft text -3. Attach documents from the tmp folder -4. Send the email - -** Step 7: Process Sent Email - -Once Craig confirms the email was sent: - -1. Craig saves the sent email to the project inbox -2. 
Use the **extract-email workflow** to process it: - - Create extraction directory - - Copy email to extraction directory - - Run extraction script - - Rename with server timestamp: =YYYY-MM-DD_HHMMSS_description.ext= - - Refile to appropriate location - - Clean up extraction directory - -See [[file:extract-email.org][extract-email workflow]] for full details. - -** Step 8: Clean Up Workspace - -Delete the temporary folder: - -#+begin_src bash -rm -rf ./tmp/ -#+end_src - -** Step 9: Update Context Window -Update the session context file before exiting this workflow. - -* Best Practices - -** Document Verification - -Before copying documents: -- Open each one in zathura for Craig to verify -- Confirm it's the correct version -- Check that sensitive information is appropriate to send - -** Email Draft Structure - -A good email draft includes: - -#+begin_example -To: recipient@example.com -Subject: [Clear Topic] - [Property/Case Reference] - -Hi [Name], - -[Opening - context for why you're sending this] - -[Middle - explanation of what's attached and why] - -Attached are the following documents: - -1. [Document name] - [brief description] -2. [Document name] - [brief description] -3. [Document name] - [brief description] - -[Closing - next steps, request for confirmation, offer to provide more] - -Thank you, - -Craig Jennings -510-316-9357 -c@cjennings.net -#+end_example - -** Filing Conventions - -When refiling sent emails: -- Use format: =YYYY-MM-DD_HHMMSS_description.ext= (server timestamp) -- File in the most relevant project folder (check project's notes.org for conventions) -- Clean up extraction directory after refiling - -* Example Usage - -Craig: "I need to send Seabreeze the documents for the HOA refund" - -Claude: -1. Creates ./tmp/ folder -2. Discusses needed documents (death certificate, closing docs, purchase agreement) -3. Locates and opens each document for verification -4. Copies verified documents to ./tmp/ -5. 
Drafts email and opens in emacsclient -6. Craig edits, then sends via Proton Mail -7. Craig saves sent email to inbox -8. Claude extracts, reads, renames, and refiles email -9. Claude deletes ./tmp/ folder diff --git a/docs/workflows/email.org b/docs/workflows/email.org deleted file mode 100644 index cfd7adf..0000000 --- a/docs/workflows/email.org +++ /dev/null @@ -1,198 +0,0 @@ -#+TITLE: Email Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-26 - -* Overview - -This workflow sends emails with optional attachments via msmtp using the cmail account (c@cjennings.net via Proton Bridge). - -* When to Use This Workflow - -When Craig says: -- "email workflow" or "send an email" -- "email [person] about [topic]" -- "send [file] to [person]" - -* Required Information - -Before sending, gather and confirm: - -1. **To:** (required) - recipient email address(es) -2. **CC:** (optional) - carbon copy recipients -3. **BCC:** (optional) - blind carbon copy recipients -4. **Subject:** (required) - email subject line -5. **Body:** (required) - email body text -6. **Attachments:** (optional) - file path(s) to attach - -* The Workflow - -** Step 1: Gather Missing Information - -If any required fields are missing, prompt Craig: - -#+begin_example -To send this email, I need: -- To: [who should receive this?] -- Subject: [what's the subject line?] -- Body: [what should the email say?] -- Attachments: [any files to attach?] -- CC/BCC: [anyone to copy?] -#+end_example - -** Step 2: Validate Email Addresses - -Look up all recipient names/emails in the contacts file: - -#+begin_src bash -grep -i "[name or email]" ~/sync/org/contacts.org -#+end_src - -**Note:** If contacts.org is empty, check for sync-conflict files: -#+begin_src bash -ls ~/sync/org/contacts*.org -#+end_src - -For each recipient: -1. Search contacts by name or email -2. Confirm the email address matches -3. If name not found, ask Craig to confirm the email is correct -4. 
If multiple emails for a contact, ask which one to use - -** Step 3: Confirm Before Sending - -Display the complete email for review: - -#+begin_example -Ready to send: - -From: c@cjennings.net -To: [validated email(s)] -CC: [if any] -BCC: [if any] -Subject: [subject] - -[body text] - -Attachments: [list files if any] - -Send this email? [Y/n] -#+end_example - -** Step 4: Send the Email - -Use Python to construct MIME message and pipe to msmtp: - -#+begin_src python -python3 << 'EOF' | msmtp -a cmail [recipient] -import sys -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -from email.mime.application import MIMEApplication -from email.utils import formatdate -import os - -msg = MIMEMultipart() -msg['From'] = 'c@cjennings.net' -msg['To'] = '[to_address]' -# msg['Cc'] = '[cc_address]' # if applicable -# msg['Bcc'] = '[bcc_address]' # if applicable -msg['Subject'] = '[subject]' -msg['Date'] = formatdate(localtime=True) - -body = """[body text]""" -msg.attach(MIMEText(body, 'plain')) - -# For each attachment: -# pdf_path = '/path/to/file.pdf' -# with open(pdf_path, 'rb') as f: -# attachment = MIMEApplication(f.read(), _subtype='pdf') -# attachment.add_header('Content-Disposition', 'attachment', filename='filename.pdf') -# msg.attach(attachment) - -print(msg.as_string()) -EOF -#+end_src - -**Important:** When there are CC or BCC recipients, pass ALL recipients to msmtp: -#+begin_src bash -python3 << 'EOF' | msmtp -a cmail to@example.com cc@example.com bcc@example.com -#+end_src - -** Step 5: Verify Delivery - -Check the msmtp log for confirmation: - -#+begin_src bash -tail -3 ~/.msmtp.cmail.log -#+end_src - -Look for: ~smtpstatus=250~ and ~exitcode=EX_OK~ - -** Step 6: Sync to Sent Folder (Optional) - -If Craig wants the email in his Sent folder: - -#+begin_src bash -mbsync cmail -#+end_src - -* msmtp Configuration - -The cmail account should be configured in ~/.msmtprc: - -#+begin_example -account cmail -tls_certcheck off -auth on 
-host 127.0.0.1 -port 1025 -protocol smtp -from c@cjennings.net -user c@cjennings.net -passwordeval "cat ~/.config/.cmailpass" -tls on -tls_starttls on -logfile ~/.msmtp.cmail.log -#+end_example - -**Note:** ~tls_certcheck off~ is used because Proton Bridge uses self-signed certificates on localhost. - -* Attachment Handling - -** Supported Types - -Common MIME subtypes: -- PDF: ~_subtype='pdf'~ -- Images: ~_subtype='png'~, ~_subtype='jpeg'~ -- Text: ~_subtype='plain'~ -- Generic: ~_subtype='octet-stream'~ - -** Multiple Attachments - -Add multiple attachment blocks before ~print(msg.as_string())~ - -* Troubleshooting - -** Password File Missing -Ensure ~/.config/.cmailpass exists with the Proton Bridge SMTP password. - -** TLS Certificate Errors -Use ~tls_certcheck off~ in msmtprc for Proton Bridge (localhost only). - -** Proton Bridge Not Running -Start Proton Bridge before sending. Check if port 1025 is listening: -#+begin_src bash -ss -tlnp | grep 1025 -#+end_src - -* Example Usage - -Craig: "email workflow - send the November 3rd SOV to Christine" - -Claude: -1. Searches contacts for "Christine" -> finds cciarmello@gmail.com -2. Asks for subject and body if not provided -3. Locates the SOV file in assets/ -4. Shows confirmation -5. Sends via msmtp -6. Verifies delivery in log diff --git a/docs/workflows/extract-email.org b/docs/workflows/extract-email.org deleted file mode 100644 index 08464af..0000000 --- a/docs/workflows/extract-email.org +++ /dev/null @@ -1,116 +0,0 @@ -#+TITLE: Extract Email Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-06 - -* Overview - -Extract email content and attachments from an EML file, rename with a consistent naming convention, and refile to =assets/=. 
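
The heavy lifting is done by the extraction script, but the core parse it performs can be sketched with Python's standard =email= module. This is illustrative only — the real logic lives in =docs/scripts/eml-view-and-extract-attachments.py= and may differ:

#+begin_src python
from email import policy
from email.parser import BytesParser
from email.utils import parsedate_to_datetime

def summarize_eml(raw_bytes):
    """Return (sender, sent-at datetime, attachment filenames) for an EML payload."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    # The Date header (server time) drives the naming convention, not extraction time
    sent_at = parsedate_to_datetime(msg["Date"])
    attachments = [p.get_filename() for p in msg.iter_attachments()]
    return str(msg["From"]), sent_at, attachments
#+end_src
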
- -* When to Use This Workflow - -When Craig says: -- "extract the email" -- "get the attachment from [email]" -- "pull the info from [email]" -- "process the email in inbox" - -* Sources - -The EML file may come from two places: - -** Already in =inbox/= - -Emails dropped into the project's =inbox/= directory via Syncthing, manual copy, or other means. These are ready for extraction immediately. - -** From =~/.mail/= - -Emails in the local maildir managed by mbsync/mu. Use the [[file:find-email.org][find-email workflow]] to locate the message, then copy (don't move) it into =inbox/= before proceeding. Never modify =~/.mail/= directly. - -* The Workflow - -** Step 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. If there are a lot of emails, this will be a long process. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** Step 1: Run Extraction Script - -Run the extraction script with =--output-dir= to perform the full pipeline (create temp dir, parse, auto-rename, extract attachments, refile, clean up): - -#+begin_src bash -python3 docs/scripts/eml-view-and-extract-attachments.py inbox/message.eml --output-dir assets/ -#+end_src - -The script automatically: -- Parses email headers, body, and attachments -- Generates filenames using the naming convention (see below) -- Creates =.eml= (renamed copy), =.txt= (body text), and attachment files -- Checks for filename collisions in the output directory -- Moves all files to =assets/= -- Cleans up its temp directory -- Prints a summary of created files - -** Step 2: Review Summary Output - -Review the script's summary output and verify: -- Filenames look correct (rename manually if needed) -- Delete junk attachments (e.g., signature logos, tracking pixels) -- Delete source EML from inbox after confirming results - -** Step 3: Report Results - -Report to Craig: -- 
Summary of email content -- What files were extracted and their final names -- Where files were saved - -* Naming Convention - -Pattern: =YYYY-MM-DD-HHMM-Sender-TYPE-Description.ext= - -| Component | Source | -|-------------+---------------------------------------------------------------------------| -| YYYY-MM-DD | From the email's Date header (server time) | -| HHMM | Hours and minutes from the Date header | -| Sender | First name of the sender | -| TYPE | =EMAIL= for the email body (.eml and .txt), =ATTACH= for attachments | -| Description | Shortened subject line for EMAIL files; original filename for ATTACH files | - -** Example - -For an email from Jonathan Smith, subject "Re: Fw: 4319 Danneel Street", sent 2026-02-05 at 11:36, with a PDF attachment "Ltr Carrollton.pdf": - -#+begin_src -2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street.eml -2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street.txt -2026-02-05-1136-Jonathan-ATTACH-Ltr-Carrollton.pdf -#+end_src - -* Backwards-Compatible Mode - -Without =--output-dir=, the script behaves as before: prints metadata and body to stdout, extracts attachments alongside the EML file. This is useful for quick inspection without filing. - -#+begin_src bash -python3 docs/scripts/eml-view-and-extract-attachments.py inbox/message.eml -#+end_src - -* Batch Processing - -When processing multiple emails, complete all steps for one email before starting the next. Do not parallelize across emails. 
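
The naming convention above can be sketched as a small helper. This is hypothetical — the extraction script's actual implementation may slugify descriptions differently:

#+begin_src python
import re
from datetime import datetime  # callers pass the email's Date header as a datetime

def convention_name(sent_at, sender_first, kind, description, ext):
    """Build YYYY-MM-DD-HHMM-Sender-TYPE-Description.ext from the parts in the table."""
    stamp = sent_at.strftime("%Y-%m-%d-%H%M")
    # Collapse runs of non-alphanumerics in the description to single hyphens
    desc = re.sub(r"[^A-Za-z0-9]+", "-", description).strip("-")
    return f"{stamp}-{sender_first}-{kind}-{desc}.{ext}"
#+end_src

Applied to the Jonathan Smith example, this reproduces =2026-02-05-1136-Jonathan-EMAIL-Re-Fw-4319-Danneel-Street.eml=.
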
- -* Principles - -- *Never modify =~/.mail/=* — always copy first, work on the copy -- *EML is authoritative* — always keep it alongside extracted files -- *Use email Date header for timestamps* — not extraction time -- *Refer to find-email for maildir searches* — don't duplicate those instructions -- *Script checks for collisions* — won't overwrite existing files in output dir -- *One email at a time* — complete the full cycle before starting the next -- *Source EML stays untouched* — the script copies, never moves the source; Claude deletes after verifying results - -* Tools Reference - -| Tool | Purpose | -|-------------------------------------+---------------------------------| -| eml-view-and-extract-attachments.py | Extract content and attachments | - -Script location: =docs/scripts/eml-view-and-extract-attachments.py= diff --git a/docs/workflows/find-email.org b/docs/workflows/find-email.org deleted file mode 100644 index 0ef9615..0000000 --- a/docs/workflows/find-email.org +++ /dev/null @@ -1,122 +0,0 @@ -#+TITLE: Find Email Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -This workflow searches local maildir to find and identify emails matching specific criteria. Uses mu (maildir indexer) for fast searching. - -* Problem We're Solving - -Craig needs to find specific emails - shipping confirmations, receipts, correspondence with specific people, or messages about specific topics. Manually browsing mail folders is slow and error-prone. mu provides powerful search capabilities over the local maildir. - -* Exit Criteria - -Search is complete when: -1. Matching emails are identified (or confirmed none exist) -2. Relevant information is reported (subject, date, from, message path) -3. Craig has what they need to proceed (info extracted, or path for further action) - -* When to Use This Workflow - -When Craig says: -- "find email about [topic]" -- "search for emails from [person]" -- "do I have an email about [subject]?" 
-- "look for [shipping/receipt/confirmation] email" -- Before extract-email workflow (to locate the target email) - -* The Workflow -** Step 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. This might be a long process. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** Step 1: Ensure Mail is Current (Optional) - -If searching for recent emails, run sync-email workflow first: - -#+begin_src bash -mbsync -a && mu index -#+end_src - -Skip if Craig confirms mail is already synced. - -** Step 2: Construct Search Query - -mu supports powerful search syntax: - -#+begin_src bash -# By sender -mu find from:jdslabs.com - -# By subject -mu find subject:shipped - -# By date range -mu find date:2w..now # last 2 weeks -mu find date:2026-01-01.. # since Jan 1 - -# Combined queries -mu find from:fedex subject:tracking date:1w..now - -# In specific folder -mu find maildir:/gmail/INBOX from:amazon - -# Full text search -mu find "order confirmation" -#+end_src - -** Step 3: Run Search - -#+begin_src bash -mu find [query] -#+end_src - -Default output shows: date, from, subject, path - -For more detail: -#+begin_src bash -mu find --fields="d f s l" [query] # date, from, subject, path -mu find --sortfield=date --reverse [query] # newest first -#+end_src - -** Step 4: Report Results - -Report to Craig: -- Number of matches found -- Key details (date, from, subject) for relevant matches -- Message path if Craig needs to extract or read it - -If no matches: -- Confirm the search was correct -- Suggest alternative search terms -- Consider if mail needs syncing first - -* Search Query Reference - -| Field | Example | Notes | -|----------+------------------------------+--------------------------| -| from: | from:amazon.com | Sender address/domain | -| to: | to:c@cjennings.net | Recipient | -| subject: | subject:"order shipped" | 
Subject line | -| body: | body:tracking | Message body | -| date: | date:1w..now | Relative or absolute | -| flag: | flag:unread | unread, flagged, etc. | -| maildir: | maildir:/gmail/INBOX | Specific folder | -| mime: | mime:application/pdf | Has attachment type | - -Combine with AND (space), OR (or), NOT (not): -#+begin_src bash -mu find from:amazon subject:shipped not subject:delayed -#+end_src - -* Principles - -- **Sync first if needed** - Searching stale mail misses recent messages -- **Start broad, narrow down** - Better to find too many than miss the target -- **Use date ranges** - Dramatically speeds up searches for recent mail -- **Report paths** - Message paths enable extract-email workflow - -* Living Document - -Update this workflow as we discover useful search patterns. diff --git a/docs/workflows/journal-entry.org b/docs/workflows/journal-entry.org deleted file mode 100644 index c7057de..0000000 --- a/docs/workflows/journal-entry.org +++ /dev/null @@ -1,214 +0,0 @@ -#+TITLE: Journal Entry Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-07 - -* Overview - -This workflow captures the day's work in Craig's personal journal. Journal entries serve as a searchable record for retrospectives, timelines, and trend analysis, while also providing context to Claude about relationships, priorities, mood, and goals that improve our collaboration. 
- -* Problem We're Solving - -Without regular journal entries, several problems emerge: - -** Limited Memory and Searchability -- Craig's memory is limited, but what's recorded is always available -- Finding when specific events occurred becomes difficult -- Creating project timelines and retrospectives requires manual reconstruction -- Identifying work patterns (weekday vs weekend, morning vs evening) is impossible - -** Missing Context for Collaboration -- Claude lacks understanding of relationships (Julie is Craig's aunt, Laura is his sister) -- Important contextual details that seem minor become critical unexpectedly -- Craig's mood, frustrations, and satisfaction levels remain hidden -- What Craig finds important vs unimportant isn't explicitly communicated -- Claude can't identify where to help Craig focus attention to avoid mistakes - -** Lost Insights -- Decisions made and reasoning behind them aren't captured -- Big picture goals and upcoming plans remain undocumented -- Patterns in what Craig is good at vs struggles with aren't tracked - -*Impact:* Without journal entries, Craig loses valuable personal records and Claude operates with incomplete context, reducing collaboration effectiveness. - -* Exit Criteria - -We know a journal entry is complete when: - -1. **Draft has been created** - Claude writes initial first-person draft based on notes.org session data -2. **Revisions are complete** - Craig provides corrections and context until satisfied -3. **Entry is added to journal file** - Text is added to the org-roam daily journal at ~/sync/org/roam/journal/YYYY-MM-DD.org -4. 
**Craig approves** - Craig explicitly approves or indicates no more revisions needed - -*Measurable validation:* -- Journal entry exists in the daily journal file -- Craig has approved the final text -- Entry captures big decisions, accomplishments, and unusual details -- Tone feels personal, vulnerable, and story-like - -* When to Use This Session - -Trigger this workflow when: - -- Craig says "let's do a journal entry" or "create a journal entry" -- At the end of a work session, particularly in the evening -- Craig asks to wrap up the day -- After completing significant work on a project - -This is typically done at the end of the day to capture that day's activities. - -* Approach: How We Work Together -** Step 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** Step 1: Review the Day's Work - -Check the project's notes.org file for today's session entries. This is your primary source for: -- Accomplishments achieved -- Decisions made -- Meetings or calls attended -- Files created or organized -- Actions planned for the future -- Outstanding items - -** Step 2: Draft the Journal Entry - -Write a first-person journal entry as Craig. 
The entry should: -- Be 2-3 paragraphs (unless it's an unusually eventful day) -- Focus on big ideas and decisions -- Include unusual or notable details -- Read like a personal journal - it's a little story about how things went -- Use a tone that's personal, genuine, and vulnerably open (never emotional) - -Structure suggestions: -- Start with the big event or decision of the day -- Explain what led to that decision or what work was accomplished -- Include any context about people, mood, or upcoming plans -- End with what's next or how you're feeling about progress - -** Step 3: Display and Request Revisions - -Display the draft to Craig and ask: "Does this capture the day? What would you like me to adjust?" - -This is where important context emerges: -- Corrections about relationships and people -- Clarification of goals and motivations -- Craig's mood and feelings about events -- Plans for the future -- What's important vs not important - -** Step 4: Incorporate Feedback and Iterate - -Make the requested changes and display the revised text. Ask again for revisions. Repeat this process until Craig approves or indicates no more changes are needed. - -During revisions: -- Ask questions if unsure about tone or word choice -- Ask about people mentioned for the first time -- If someone behaves strangely, ask Craig's thoughts to find the right tone -- Record any new context in your notes for future reference - -** Step 5: Add Entry to Journal File - -Once approved: - -1. Find the org-roam daily journal file at ~/sync/org/roam/journal/YYYY-MM-DD.org -2. If it doesn't exist, create it with this header: - ``` - :PROPERTIES: - :ID: [generate UUID using uuidgen] - :END: - #+FILETAGS: Journal - #+TITLE: YYYY-MM-DD - ``` -3. Create a top-level org header with timestamp: - ``` - * YYYY-MM-DD Day @ HH:MM:SS TZ ProjectName - What Kind of Day Has It Been? - ``` - (Get timezone with: date +%z) -4. 
Add the approved journal text below the header - -** Step 6: Wrap Up - -Update the session context file. - -After updating the session context file, ask Craig: "Are we done for the evening, or is there anything else that needs to be done?" - -Since journal entries typically happen at end of day, this provides a natural session close. - -* Principles to Follow - -** Personal and Vulnerable -- Write in a genuinely open, vulnerable tone -- Never emotional, but honest about challenges and feelings -- Make it feel like Craig's personal journal, not a work report - -** Brief but Complete -- Default to 2-3 paragraphs -- Capture big ideas and unusual details -- Don't document every minor task -- Longer entries are fine for unusually eventful days - -** Story-Like Quality -- Read like someone telling a story about their day -- Have a narrative flow, not just a bullet list -- Connect events and decisions with context - -** Clarifying Questions Welcome -- Ask about tone, word choice, or what to include when unsure -- Ask about people mentioned for the first time -- Probe for Craig's thoughts when events seem unusual -- Use questions to gather context that improves collaboration - -** Context Capture -- Record new information about relationships, goals, and preferences -- Note what Craig finds important vs unimportant -- Track mood indicators and patterns -- Save insights for future reference - -** Use Session Data -- Start from notes.org session entries for the day -- Don't rely on memory - check the documented record -- Include key decisions, accomplishments, and next steps - -* Living Document - -This is a living document. As we create journal entries and learn what works well, we update this file with: - -- Improvements to the drafting approach -- Better examples of tone and style -- Additional principles discovered -- Refinements based on Craig's feedback - -Every journal entry is an opportunity to improve this workflow. 
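
Step 5's create-or-append logic can be sketched as a small Python helper. This is a hypothetical sketch: the directory layout and header format come from the workflow, but the function name is illustrative and =uuid.uuid4()= stands in for the =uuidgen= call the workflow specifies:

#+begin_src python
from datetime import datetime
from pathlib import Path
from uuid import uuid4

def append_journal_entry(journal_dir, project, approved_text, now=None):
    """Create today's org-roam daily file if missing, then append the entry."""
    now = now or datetime.now().astimezone()
    day = now.strftime("%Y-%m-%d")
    path = Path(journal_dir) / f"{day}.org"
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        # Header per Step 5; uuid4() stands in for uuidgen
        header = (f":PROPERTIES:\n:ID:       {uuid4()}\n:END:\n"
                  f"#+FILETAGS: Journal\n#+TITLE: {day}\n")
        path.write_text(header)
    heading = (f"\n* {now.strftime('%Y-%m-%d %a')} @ {now.strftime('%H:%M:%S %z')} "
               f"{project} - What Kind of Day Has It Been?\n")
    with path.open("a") as f:
        f.write(heading + approved_text + "\n")
#+end_src

For example, =append_journal_entry("~/sync/org/roam/journal", "ProjectName", text)= with the approved draft as =text= would file the entry under today's date.
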
- -* Example Journal Entry - -Here's an example of the tone, narrative flow, and level of detail to aim for: - -#+begin_quote -Big day. We sold Gogo's condo. - -This morning I woke up to two counter offers - one from Cortney Arambula at $1,377,000 and another from Rolando Tong Jr. at $1,405,000. Deadline was 3 PM today. - -I had two phone calls with Craig Ratowsky. The first at 11:59 AM, we talked through both offers. Rolando's was clearly better - $28,000 more, already pre-approved, and the buyer is his sister. Craig walked me through the numbers and timeline. - -On the second call at 12:25 PM, I made the decision: accept Rolando's offer at $1,405,000. After all these months of work - dealing with mold, replacing the kitchen, new flooring, staging - we have a buyer. - -Escrow opens Monday (11/10/2025), 30-day close from there. By mid-December, this will be done. - -Net proceeds to the trust will be around $1,099,385 after the mortgage payoff, closing costs, and agent commissions. - -I spent the early evening getting all the files organized so I can figure out exactly how much Christine and I put in for the renovation and get reimbursed. This will also help when I report expenses to Mom and Laura about the estate. - -Now I need to plan the trip to Huntington Beach to handle Gogo's financial affairs - consolidate her accounts into the estate account, pay her bills, distribute her funds, and mail some items from the garage back home. Plus empty the garage for the seller before closing. - -Escrow Monday. Still need to: -- Decide on compensating Craig and Justin for their extra work -- Get the Tax ID number for the estate -- Work on Gogo's final taxes with a CPA -- File the Inventory & Appraisal with probate court - -It's been almost nine months since Gogo passed. Getting this condo sold feels like a huge milestone. 
-#+end_quote - -Note the personal tone, narrative flow, big decision (accepting the offer), context about people (Gogo, Craig Ratowsky, Christine), mood (milestone feeling), and what's next. diff --git a/docs/workflows/open-tasks.org b/docs/workflows/open-tasks.org deleted file mode 100644 index d93e743..0000000 --- a/docs/workflows/open-tasks.org +++ /dev/null @@ -1,151 +0,0 @@ -#+TITLE: List Open Tasks Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-12 - -* Overview - -This workflow gathers, reconciles, and displays all open tasks across project sources. It ensures nothing falls through the cracks by syncing loose reminders into todo.org, flags tasks that may already be done, and presents a clean priority-grouped list for review. - -* When to Use This Workflow - -- Craig says "list open tasks" or "show me all tasks" -- At the start of a planning or prioritization session -- When deciding what to work on next (complements the what's-next workflow) -- Periodically to audit and clean up stale tasks - -* The Workflow - -** Step 1: Write Session Context File - -Before anything else, update =docs/session-context.org= with current session state. Task review can surface decisions and status changes — capture context in case of crash. - -** Step 2: Gather Tasks from notes.org - -Read the following sections from =notes.org=: -- *Active Reminders* — time-sensitive items, follow-ups -- *Pending Decisions* — decisions that block work -- *Last 2-3 Session History entries* — recent next-steps and in-progress items - -Extract anything that represents an open task or action item. - -** Step 3: Gather Tasks from todo.org - -Read the project's =todo.org= file (typically at project root). 
-- Collect all entries under the open work header (named =* $Project Open Work=, e.g., =* Homelab Open Work=) -- Note each task's priority ([#A], [#B], [#C]), deadline, scheduled date, and status -- Skip anything under the resolved header (named =* $Project Resolved=) - -** Step 4: Reconcile — Sync Unique Tasks to todo.org - -Compare tasks found in notes.org (reminders, session history, pending decisions) against todo.org entries. - -For each task in notes.org that does NOT have a corresponding todo.org entry: -1. Create a new =** TODO= entry in todo.org under the =* $Project Open Work= header -2. Assign a priority based on context ([#A] if time-sensitive or blocking, [#B] if important, [#C] if low urgency) -3. Include: - - =:CREATED:= property with today's date - - Brief description of what needs to be done - - Why it matters (context from the reminder or session notes) - - Recommended approach or next steps -4. If a deadline exists (e.g., RMA expected date), add a =DEADLINE:= line - -*Do NOT remove the item from notes.org Active Reminders* — reminders serve a different purpose (surfaced at session start). The todo.org entry is for tracking and prioritization. - -*Judgment call:* Not every reminder needs a todo.org entry. Skip items that are: -- Pure informational notes (e.g., "rsyncshot running with 600s timeout") -- Waiting-for items with no action Craig can take (e.g., "package arriving Feb 25") -- Already completed (handle in Step 5) - -** Step 5: Review for Completed Tasks - -Quickly review all open tasks and check if any appear to already be done, based on: -- Recent session history mentioning completion -- Context clues (e.g., "arriving Feb 7" and it's now Feb 12) -- Work completed in previous sessions that wasn't marked done - -Build a list of *suspected completions* — do NOT mark them done yet. These will be confirmed with Craig in Step 7. - -** Step 6: Display All Open Tasks - -Present all open tasks grouped by priority. 
Format rules: - -- *Group by priority:* A (High), B (Medium), C (Low/Someday) -- *Default priority:* Tasks without an explicit priority are treated as C -- *No table structure* — use a flat bulleted list within each group -- *Include deadlines:* If a task has a =DEADLINE:=, show it inline as =DEADLINE: = -- *Include scheduled dates:* If a task has a =SCHEDULED:=, show it inline -- *Keep descriptions concise* — task name + one-line summary, not full details -- *Note source* if task came from reminders only (not yet in todo.org) vs todo.org - -Example format: -#+begin_example -**Priority A (High)** - -- Complete Sara Essex email setup — add Google Workspace MX records, verify delivery -- Set up Comet KVMs — remote console for TrueNAS and ratio -- Complete UPS/TrueNAS integration — USB cable, configure shutdown threshold. DEADLINE: <2026-01-21> - -**Priority B (Medium)** - -- Design Zettelkasten architecture — resume at Question 4 (Staleness) -- Compare Ubiquiti UTR vs open source mesh router - -**Priority C (Low / Someday)** - -- Explore Whisper-to-Claude-Code voice integration -- Get Keychron Q6 Pro carrying case. SCHEDULED: <2026-02-07> -#+end_example - -** Step 7: Confirm Suspected Completions - -After displaying the list, present suspected completions: - -#+begin_example -These tasks may already be completed — can you confirm? -- "OBSBOT Tiny 3 webcam arriving" — it's past the expected delivery date -- "Sweetwater order arriving" — expected Feb 7, now Feb 12 -#+end_example - -For each task Craig confirms as done: -1. Add =CLOSED: [YYYY-MM-DD Day]= timestamp (use =date= command for accuracy) -2. Change status from =TODO= to =DONE= -3. Add a brief completion note (when/how it was resolved) -4. Move the entry from =* $Project Open Work= to =* $Project Resolved= in todo.org -5. If the task also exists in Active Reminders in notes.org, remove it from there - -For tasks Craig says are NOT done, leave them as-is. 
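
The past-date check behind "suspected completions" (Steps 5 and 7) can be sketched in Python. This is a hypothetical helper; the org timestamps like =<2026-02-07 Sat>= come from the task entries, and "today" should always come from the =date= command per the Common Mistakes list:

#+begin_src python
from datetime import date

def is_past(org_timestamp, today):
    """True when an org DEADLINE/SCHEDULED stamp like <2026-02-07 Sat> predates today."""
    ymd = org_timestamp.strip("<>[]").split()[0]   # -> '2026-02-07'
    y, m, d = (int(part) for part in ymd.split("-"))
    return date(y, m, d) < today
#+end_src

A task whose deadline tests past is only a *suspected* completion — it still goes to Craig for confirmation before being marked DONE.
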
- -* Resolving a Task — Format - -When moving a task to Resolved, it should look like this: - -#+begin_example -** DONE [#A] Set up Comet KVMs -CLOSED: [2026-02-12 Thu] -:PROPERTIES: -:CREATED: [2026-01-19 Mon] -:END: - -Comet KVMs set up for TrueNAS and ratio. Remote BIOS/console access working. - -*Resolution:* Completed during Feb 12 session. Both KVMs connected and tested. -#+end_example - -Key elements: -- =DONE= replaces =TODO= -- =CLOSED:= line with completion date -- Original =:PROPERTIES:= block preserved -- Brief resolution note explaining when/how - -* Common Mistakes - -1. *Marking tasks done without confirmation* — always ask Craig first -2. *Removing reminders from notes.org when adding to todo.org* — they serve different purposes -3. *Creating todo.org entries for pure informational reminders* — use judgment -4. *Forgetting to update session context file* — do it in Step 1, before the review starts -5. *Using a table for the task list* — Craig prefers flat bulleted lists for this -6. *Not running =date=* — always check current date before evaluating deadlines or completion dates - -* Living Document - -Update this workflow as task management patterns evolve. If new task sources are added (e.g., external issue trackers, shared task lists), add them to Steps 2-3. diff --git a/docs/workflows/process-meeting-transcript.org b/docs/workflows/process-meeting-transcript.org deleted file mode 100644 index 647e55f..0000000 --- a/docs/workflows/process-meeting-transcript.org +++ /dev/null @@ -1,301 +0,0 @@ -#+TITLE: Process Meeting Transcript Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-03 - -* Overview - -This workflow describes how to process meeting recordings from start to finish: finding recordings, extracting audio, transcribing via AssemblyAI, identifying speakers, correcting errors, and archiving files.
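Several steps in this workflow key off the timestamp embedded in each recording's filename (format: YYYY-MM-DD_HH-MM-SS.mkv). A minimal sketch of extracting it with plain bash parameter expansion, no external tools needed; the filename below is a hypothetical example:

```bash
# Split a recording filename into its date and time components using
# only bash parameter expansion. The filename is a hypothetical example.
name=2026-02-03_10-00-00.mkv
stamp=${name%.mkv}        # drop extension: 2026-02-03_10-00-00
day=${stamp%%_*}          # date part:      2026-02-03
tod=${stamp#*_}           # time part:      10-00-00
echo "$day ${tod//-/:}"   # prints: 2026-02-03 10:00:00
```

The resulting date/time pair is what gets matched against calendar entries when pairing recordings with meetings.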
- -* When to Use This Workflow - -Trigger this workflow when: -- Craig says "process the transcript" or "process the recording" or similar -- New recording files (.mkv) appear in ~/sync/recordings/ after meetings -- Craig wants to process meeting recordings into labeled transcripts - -* Prerequisites - -- Recording file(s) exist in ~/sync/recordings/ (*.mkv) -- Calendar files available at ~/.emacs.d/data/*cal.org for meeting titles -- AssemblyAI transcription script at ~/.emacs.d/scripts/assemblyai-transcribe -- AssemblyAI API key stored in ~/.authinfo.gpg (machine api.assemblyai.com) -- ffmpeg available for audio extraction - -* The Workflow - -** Step 1: Identify Engagement and Write Session Context - -Before starting transcript processing: - -1. *Identify which engagement this meeting belongs to:* - - DeepSat (default for current work) - - Vineti (historical) - - Salesforce (historical) - - If unclear, ask Craig - -2. *Set destination paths based on engagement:* - - Assets: ~{engagement}/assets/~ (e.g., ~deepsat/assets/~) - - Meetings: ~{engagement}/meetings/~ (e.g., ~deepsat/meetings/~) - - Knowledge: ~{engagement}/knowledge.org~ for reference - -3. Update docs/session-context.org with current status: - - Note that we're about to process a meeting transcript - - Get meeting name by checking ~/.emacs.d/data/*cal.org (match date/time to transcript timestamp) - - If meeting not found in calendar, ask Craig for the meeting title - -4. Ask Craig if he wants to compact the conversation context: - - Transcript processing can use significant context - - Compacting now preserves the session context file for recovery - -** Step 2: Find Recording Files - -Find and match recording files with calendar events: - -1. **List recordings:** Find all .mkv files in ~/sync/recordings/ - #+begin_src bash - ls -la ~/sync/recordings/*.mkv - #+end_src - -2. **Extract timestamps:** Parse date/time from each filename (format: YYYY-MM-DD_HH-MM-SS.mkv) - -3. 
**Match with calendar:** Check ~/.emacs.d/data/*cal.org for meetings at those times - #+begin_src bash - cat ~/.emacs.d/data/dcal.org | grep -A2 "YYYY-MM-DD" - #+end_src - -4. **Present selection table to Craig:** - | Filename | Meeting / Date-Time | - |-----------------------------+--------------------------------| - | 2026-02-03_10-00-00.mkv | DeepSat Standup (from calendar)| - | 2026-02-03_14-30-00.mkv | 2026-02-03 14:30 (no match) | - -5. **Craig selects files:** One, several, or all files to process - -6. **Queue for processing:** Selected files ordered oldest → newest for serial processing - -** Step 3: Extract Audio - -For each selected recording file, extract audio for transcription: - -#+begin_src bash -ffmpeg -i ~/sync/recordings/FILENAME.mkv -vn -ac 1 -c:a aac -b:a 96k /tmp/FILENAME.m4a -#+end_src - -Settings: -- =-vn= : no video (audio only) -- =-ac 1= : mono channel (sufficient for speech, smaller file) -- =-c:a aac= : AAC codec -- =-b:a 96k= : 96kbps bitrate (sufficient for speech transcription) - -Output: /tmp/FILENAME.m4a (temporary, deleted after transcription) - -** Step 4: Transcribe with AssemblyAI - -1. **Run transcription:** - #+begin_src bash - ~/.emacs.d/scripts/assemblyai-transcribe /tmp/FILENAME.m4a > ~/sync/recordings/FILENAME.txt - #+end_src - -2. **Clean up:** Delete intermediate .m4a file after successful transcription - #+begin_src bash - rm /tmp/FILENAME.m4a - #+end_src - -3. **Output format:** The script produces speaker-diarized output: - #+begin_example - Speaker A: First speaker's text here. - Speaker B: Second speaker's response. - Speaker A: First speaker continues. - #+end_example - -4. Continue to speaker identification workflow below. - -** Step 5: Locate Files - -Confirm the transcript and recording files are ready: - -1. **Verify transcript exists:** - #+begin_src bash - ls -la ~/sync/recordings/FILENAME.txt - #+end_src - -2. 
**Verify recording exists:** - #+begin_src bash - ls -la ~/sync/recordings/FILENAME.mkv - #+end_src - -3. **Get meeting title:** If not already known from Step 2, check calendar - - Calendar location: ~/.emacs.d/data/*cal.org - - Match the meeting time to the transcript timestamp - -** Step 6: Read and Analyze Transcript - -1. Read the full transcript file - -2. Identify speakers by analyzing context clues: - - Names mentioned in conversation ("Thanks, Ryan") - - Role references ("as the developer", "on the IT side") - - Project-specific knowledge (who works on what) - - Previous meeting context (known attendees) - - Speaking order patterns - -3. Build a speaker identification table: - | Speaker | Person | Evidence | - |---------|--------|----------| - | A | Name | Clues... | - -** Step 7: Confirm Speaker Identifications - -Present the speaker identification table to Craig for confirmation: -- List each speaker label and proposed name -- Include the evidence/reasoning -- Ask about any uncertain identifications -- Note any new people to add to notes.org contacts - -** Step 8: Create Labeled Transcript - -1. Replace all speaker labels with actual names - -2. Correct transcription errors: - - Common mishearings (names, technical terms, company names) - - Known substitutions from this project: - - "Vanetti" → "Vineti" - - "Fresh" → "Vrezh" - - "Clean4" / "clone" → "CLIN 4" - - "Vascan" → "Vazgan" - - "Hike" / "Ike" → "Hayk" - - "High Tech" → "HyeTech" - - "Java software" → "JAMA software" - - "JSON" (person) → "Jason" - - "their S" / "ress" → "Nerses" - - Technical terms specific to DeepSat (GovCloud, AFRL, SOUTHCOM, etc.) - -3. Save to engagement assets folder: - - Location: ~{engagement}/assets/~ (e.g., ~deepsat/assets/~) - - Filename: YYYY-MM-DD-meeting-name.txt - - Example: deepsat/assets/2026-02-03-standup-ipm-grooming.txt - -** Step 9: Copy Recording to Meetings Folder - -1. 
Ensure engagement meetings folder exists and pattern is in .gitignore (~*/meetings/*.mkv~) - -2. Copy the .mkv file with descriptive name: - #+begin_src bash - cp ~/sync/recordings/YYYY-MM-DD_HH-MM-SS.mkv {engagement}/meetings/YYYY-MM-DD_HH-MM-meeting-name.mkv - #+end_src - Example: ~deepsat/meetings/2026-02-03_11-02-standup-ipm-grooming.mkv~ - -3. Verify the copy succeeded - -** Step 10: Update Session Context with Meeting Summary - -Add a meeting summary section to docs/session-context.org including: - -1. **Attendees** - List all participants - -2. **Key Decisions** - Important choices made - -3. **Action Items** - Tasks assigned, especially for Craig - -4. **New Information** - Things learned that should be noted - -5. **New Contacts** - People to add to notes.org - -** Step 11: Write Session Context File - -Update docs/session-context.org with: -- Files created this session (transcript, recording) -- Summary of what was processed -- Next steps (file to assets, update notes.org, etc.) - -*** Context Management (for multiple files) - -When processing multiple recordings in a queue: - -1. **After completing each file's workflow**, update docs/session-context.org with: - - Files processed so far - - Current position in queue - - Summary of meeting just processed - -2. **Ask Craig if compact is needed** before starting next file: - - Transcript processing uses significant context - - Compacting preserves session context for recovery - -3. **If autocompact occurs**, reread session-context.org to: - - Resume at correct position in queue - - Avoid reprocessing already-completed files - -** Step 12: Clean Up Source Files - -After successful completion of all previous steps, delete the source files from ~/sync/recordings/: - -1. **Delete the original recording:** - #+begin_src bash - rm ~/sync/recordings/FILENAME.mkv - #+end_src - -2. 
**Delete the raw transcript** (if generated): - #+begin_src bash - rm ~/sync/recordings/FILENAME.txt - #+end_src - -This step happens last to ensure all files are safely copied/processed before deletion. If anything goes wrong earlier in the workflow, the source files remain intact for retry. - -* Output Files - -| File | Location | Purpose | -|--------------------+-------------------------------------------------------+------------------------------------| -| Labeled transcript | {engagement}/assets/YYYY-MM-DD-meeting-name.txt | Corrected transcript for reference | -| Meeting recording | {engagement}/meetings/YYYY-MM-DD_HH-MM-meeting-name.mkv | Video for review (gitignored) | -| Session context | docs/session-context.org | Crash recovery, meeting summary | -| Knowledge base | {engagement}/knowledge.org | Team, infrastructure, corrections | - -* Common Transcription Errors - -Keep this list updated as new patterns emerge: - -| Heard As | Correct | Context | -|---------------+---------------+------------------------------------------------| -| Vanetti | Vineti | Company where Craig, Nerses, Eric, Ryan worked | -| Fresh | Vrezh | Developer name | -| Clean4, clone | CLIN 4 | Contract milestone | -| Vascan | Vazgan | MagicalLabs AI team member | -| Hike, Ike | Hayk | CTO name | -| High Tech | HyeTech | Armenian tech community org | -| Java software | JAMA software | Requirements traceability tool | -| JSON (person) | Jason | DevSecOps or advisor | -| their S, ress | Nerses | CEO name | -| sir Keith | Sarkis | BD/investor relations | -| Fastgas | MagicalLabs | Armenian AI contractor | -| Sitelix | Cytellix | CMMC security/compliance partner | - -* Tips - -1. **Read the whole transcript first** - Context from later in the meeting often helps identify speakers from earlier - -2. **Use the calendar** - Meeting names help set expectations for who attended - -3. **Check engagement knowledge.org** - Team roster and transcription corrections specific to this engagement - -4. 
**Ask about unknowns** - If a new person appears, ask Craig for context - -5. **Note new learnings** - Update engagement knowledge.org with new contacts, corrections, or context after processing - -* Validation Checklist - -- [ ] Engagement identified and destination paths set -- [ ] Session context written before starting -- [ ] Recording files listed and matched with calendar -- [ ] Craig selected files to process -- [ ] Audio extracted to .m4a (mono, 96k AAC) -- [ ] AssemblyAI transcription completed -- [ ] Intermediate .m4a file deleted -- [ ] Transcript file verified -- [ ] All speakers identified -- [ ] Speaker identifications confirmed with Craig -- [ ] Transcript corrected and saved to {engagement}/assets/ -- [ ] Recording copied to {engagement}/meetings/ with proper name -- [ ] Session context updated with meeting summary -- [ ] New contacts/info flagged for {engagement}/knowledge.org update -- [ ] (If multiple files) Queue position tracked in session context -- [ ] Source files deleted from ~/sync/recordings/ diff --git a/docs/workflows/read-calendar-events.org b/docs/workflows/read-calendar-events.org deleted file mode 100644 index b1b85d6..0000000 --- a/docs/workflows/read-calendar-events.org +++ /dev/null @@ -1,214 +0,0 @@ -#+TITLE: Read Calendar Events Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -Workflow for viewing and querying calendar events via gcalcli. - -* Triggers - -- "what's on my calendar" -- "show me appointments" -- "summarize my schedule" -- "what do I have today" -- "calendar for this week" -- "any meetings tomorrow" - -* Prerequisites - -- gcalcli installed and authenticated -- Test with =gcalcli list= to verify authentication - -* CRITICAL: Cross-Calendar Visibility - -gcalcli only sees Google calendars. 
To see ALL of Craig's calendars (Google, DeepSat work, Proton), you MUST query the emacs org calendar files: - -#+begin_src bash -grep "2026-02-18" ~/.emacs.d/data/gcal.org # Google calendar -grep "2026-02-18" ~/.emacs.d/data/dcal.org # DeepSat work calendar -grep "2026-02-18" ~/.emacs.d/data/pcal.org # Proton calendar -#+end_src - -| File | Calendar | -|----------+---------------------------| -| gcal.org | Craig (Google) | -| dcal.org | Craig DeepSat (work) | -| pcal.org | Craig Proton | - -*ALWAYS check all three files* when checking availability or showing the schedule. gcalcli alone will miss work and Proton events, causing an incomplete picture. - -To *create* events, use gcalcli with =--calendar "Craig"= (Google). The org files are read-only views. - -* Workflow Steps - -** 1. Parse Time Range - -Interpret the user's request to determine date range: - -| Request | Interpretation | -|--------------------+-------------------------------| -| "today" | Today only | -| "tomorrow" | Tomorrow only | -| "this week" | Next 7 days | -| "next week" | 7-14 days from now | -| "this month" | Rest of current month | -| "April 2026" | That entire month | -| "next Tuesday" | That specific day | -| "the 15th" | The 15th of current month | - -*No fixed default* - interpret from context. If unclear, ask. - -** 2. Determine Calendar Scope - -Options: -- All calendars (default) -- Specific calendar: use =--calendar "Name"= - -** 3. Query Calendar - -#+begin_src bash -# Agenda view (list format) -gcalcli agenda "start_date" "end_date" - -# Weekly calendar view -gcalcli calw - -# Monthly calendar view -gcalcli calm -#+end_src - -** 4. 
Format Results - -Present events in a readable format: - -#+begin_example -=== Tuesday, February 4, 2026 === - -9:00 AM - 10:00 AM Team Standup - Location: Conference Room A - -2:00 PM - 3:00 PM Dentist Appointment - Location: Downtown Dental - -=== Wednesday, February 5, 2026 === - -(No events) - -=== Thursday, February 6, 2026 === - -10:00 AM - 11:30 AM Project Review - Location: Zoom -#+end_example - -** 5. Summarize - -Provide a brief summary: -- Total number of events -- Busy days vs free days -- Any all-day events -- Conflicts (if any) - -* gcalcli Command Reference - -** Agenda View - -#+begin_src bash -# Default agenda (next few days) -gcalcli agenda - -# Today only -gcalcli agenda "today" "today 11:59pm" - -# This week -gcalcli agenda "today" "+7 days" - -# Specific date range -gcalcli agenda "2026-03-01" "2026-03-31" - -# Specific calendar -gcalcli --calendar "Work" agenda "today" "+7 days" -#+end_src - -** Calendar Views - -#+begin_src bash -# Weekly calendar (visual) -gcalcli calw - -# Monthly calendar (visual) -gcalcli calm - -# Multiple weeks -gcalcli calw 2 # Next 2 weeks -#+end_src - -** Search - -#+begin_src bash -# Search by title -gcalcli search "meeting" - -# Search specific calendar -gcalcli --calendar "Work" search "standup" -#+end_src - -* Output Formats - -gcalcli supports different output formats: - -| Option | Description | -|------------------+--------------------------------| -| (default) | Colored terminal output | -| --nocolor | Plain text | -| --tsv | Tab-separated values | - -* Time Range Examples - -| User Says | gcalcli Command | -|------------------------+----------------------------------------------| -| "today" | agenda "today" "today 11:59pm" | -| "tomorrow" | agenda "tomorrow" "tomorrow 11:59pm" | -| "this week" | agenda "today" "+7 days" | -| "next week" | agenda "+7 days" "+14 days" | -| "February" | agenda "2026-02-01" "2026-02-28" | -| "next 3 days" | agenda "today" "+3 days" | -| "rest of the month" | agenda "today" 
"2026-02-28" | - -* Calendars - -| Calendar | Access | Notes | -|---------------------------+--------+--------------------------------| -| Craig | owner | Default personal calendar | -| Christine | owner | Christine's calendar | -| Todoist | owner | Todoist integration | -| Craig Jennings (TripIt) | reader | View only | -| Holidays in United States | reader | View only | -| Craig Proton | reader | View only (no API access) | - -* Handling No Events - -If the date range has no events: -- Confirm the range was correct -- Mention the calendar is free -- Offer to check a different range - -Example: "No events found for tomorrow (Feb 3). Your calendar is free that day." - -* Error Handling - -** No Events Found -Not an error - calendar may simply be free. - -** Authentication Error -Run =gcalcli init= to re-authenticate. - -** Invalid Date Range -Use explicit dates: =YYYY-MM-DD= - -* Related - -- [[file:add-calendar-event.org][Add Calendar Event]] - create events -- [[file:edit-calendar-event.org][Edit Calendar Event]] - modify events -- [[file:delete-calendar-event.org][Delete Calendar Event]] - remove events -- [[file:../calendar-api-research.org][Calendar API Research]] - gcalcli reference diff --git a/docs/workflows/refactor.org b/docs/workflows/refactor.org deleted file mode 100644 index 9e967b8..0000000 --- a/docs/workflows/refactor.org +++ /dev/null @@ -1,621 +0,0 @@ -#+TITLE: Test-Driven Quality Engineering Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-01 - -* Overview - -This document describes a comprehensive test-driven quality engineering workflow applicable to any source code module. The workflow demonstrates systematic testing practices, refactoring for testability, bug discovery through tests, and decision-making processes when tests fail. - -* Workflow Goals - -1. Add comprehensive unit test coverage for testable functions in your module -2. Discover and fix bugs through systematic testing -3. 
Follow quality engineering principles from =ai-prompts/quality-engineer.org= (analyze this now) -4. Demonstrate refactoring patterns for testability -5. Document the decision-making process for test vs production code issues - -* Phase 0: Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -* Phase 1: Feature Addition with Testability in Mind - -** The Feature Request - -Add new functionality that requires user interaction combined with business logic. - -Example requirements: -- Present user with options (e.g., interactive selection) -- Allow cancellation -- Perform an operation with the selected input -- Provide clear success/failure feedback - -** Refactoring for Testability - -Following the "Interactive vs Non-Interactive Function Pattern" from =quality-engineer.org=: - -*Problem:* Directly implementing as an interactive function would require: -- Mocking user interface components -- Mocking framework-specific APIs -- Testing UI functionality, not core business logic - -*Solution:* Split into two functions: - -1. *Helper Function* (internal implementation): - - Pure, deterministic - - Takes explicit parameters - - No user interaction - - Returns values or signals errors naturally - - 100% testable, no mocking needed - -2. *Interactive Wrapper* (public interface): - - Thin layer handling only user interaction - - Gets input from user/context - - Presents UI (prompts, selections, etc.) 
- - Catches errors and displays messages - - Delegates all business logic to helper - - No tests needed (just testing framework UI) - -** Benefits of This Pattern - -From =quality-engineer.org=: -#+begin_quote -When writing functions that combine business logic with user interaction: -- Split into internal implementation and interactive wrapper -- Internal function: Pure logic, takes all parameters explicitly -- Dramatically simpler testing (no interactive mocking) -- Code reusable programmatically without prompts -- Clear separation of concerns (logic vs UI) -#+end_quote - -This pattern enables: -- Zero mocking in tests -- Fast, deterministic tests -- Easy reasoning about correctness -- Reusable helper function - -* Phase 2: Writing the First Test - -** Test File Naming - -Following the naming convention from =quality-engineer.org=: -- Pattern: =test--.= -- One test file per function for easy discovery when tests fail -- Developer sees failure → immediately knows which file to open - -** Test Organization - -Following the three-category structure: - -*** Normal Cases -- Standard expected inputs -- Common use case scenarios -- Happy path operations -- Multiple operations in sequence - -*** Boundary Cases -- Very long inputs -- Unicode characters (中文, emoji) -- Special characters and edge cases -- Empty or minimal data -- Maximum values - -*** Error Cases -- Invalid inputs -- Nonexistent resources -- Permission denied scenarios -- Wrong type of input - -** Writing Tests with Zero Mocking - -Key principle: "Don't mock what you're testing" (from =quality-engineer.org=) - -Example test structure: -#+begin_src -test_function_normal_case_expected_result() - setup() - try: - # Arrange - input_data = create_test_data() - expected_output = define_expected_result() - - # Act - actual_output = function_under_test(input_data) - - # Assert - assert actual_output == expected_output - finally: - teardown() -#+end_src - -Key characteristics: -- No mocks for the function being 
tested -- Real resources (files, data structures) using test utilities -- Tests actual function behavior -- Clean setup/teardown -- Clear arrange-act-assert structure - -** Result - -When helper functions are well-factored and deterministic, tests often pass on first run. - -* Phase 3: Systematic Test Coverage Analysis - -** Identifying Testable Functions - -Review all functions in your module and categorize by testability: - -*** Easy to Test (Pure/Deterministic) -- Input validation functions -- String manipulation/formatting -- Data structure transformations -- File parsing (read-only operations) -- Configuration/option processing - -*** Medium Complexity (Need External Resources) -- File I/O operations -- Recursive algorithms -- Data structure generation -- Cache or state management - -*** Hard to Test (Framework/Context Dependencies) -- Functions requiring specific runtime environment -- UI/buffer/window management -- Functions tightly coupled to framework internals -- Functions requiring complex mocking setup - -*Decision:* Test easy and medium complexity functions. Skip framework-dependent functions that would require extensive mocking/setup (diminishing returns). - -** File Organization Principle - -From =quality-engineer.org=: -#+begin_quote -*Unit Tests*: One file per method -- Naming: =test--.= -- Example: =test-module--function.ext= -#+end_quote - -*Rationale:* When a test fails in CI: -1. Developer sees: =test-module--function-normal-case-returns-result FAILED= -2. Immediately knows: Look for =test-module--function.= -3. Opens file and fixes issue - *fast cognitive path* - -If combined files: -1. Test fails: =test-module--function-normal-case-returns-result FAILED= -2. Which file? =test-module--helpers.=? =test-module--combined.=? -3. 
Developer wastes time searching - *slower, frustrating* - -*The 1:1 mapping is a usability feature for developers under pressure.* - -* Phase 4: Testing Function by Function - -** Example 1: Input Validation Function - -*** Test Categories - -*Normal Cases:* -- Valid inputs -- Case variations -- Common use cases - -*Boundary Cases:* -- Edge cases in input format -- Multiple delimiters or separators -- Empty or minimal input -- Very long input - -*Error Cases:* -- Nil/null input -- Wrong type -- Malformed input - -*** First Run: Most Passed, Some FAILED - -*Example Failure:* -#+begin_src -test-module--validate-input-error-nil-input-returns-nil -Expected: Returns nil gracefully -Actual: (TypeError/NullPointerException) - CRASHED -#+end_src - -*** Bug Analysis: Test or Production Code? - -*Process:* -1. Read the test expectation: "nil input returns nil/false gracefully" -2. Read the production code: - #+begin_src - function validate_input(input): - extension = get_extension(input) # ← Crashes here on nil/null - return extension in valid_extensions - #+end_src -3. Identify issue: Function expects string, crashes on nil/null -4. Consider context: This is defensive validation code, called in various contexts - -*Decision: Fix production code* - -*Rationale:* -- Function should be defensive (validation code) -- Returning false/nil for invalid input is more robust than crashing -- Common pattern in validation functions -- Better user experience - -*Fix:* -#+begin_src -function validate_input(input): - if input is None or not isinstance(input, str): # ← Guard added - return False - extension = get_extension(input) - return extension in valid_extensions -#+end_src - -Result: All tests pass after adding defensive checks. - -** Example 2: Another Validation Function - -*** First Run: Most Passed, Multiple FAILED - -*Failures:* -1. Nil input crashed (same pattern as previous function) -2. 
Empty string returned unexpected value (edge case not handled) - -*Fix:* -#+begin_src -function validate_resource(resource): - # Guards added for nil/null and empty string - if not resource or not isinstance(resource, str) or resource.strip() == "": - return False - - # Original validation logic - return is_valid_resource(resource) and meets_criteria(resource) -#+end_src - -Result: All tests pass after adding comprehensive guards. - -** Example 3: String Sanitization Function - -*** First Run: Most Passed, 1 FAILED - -*Failure:* -#+begin_src -test-module--sanitize-boundary-special-chars-replaced -Expected: "output__________" (10 underscores) -Actual: "output_________" (9 underscores) -#+end_src - -*** Bug Analysis: Test or Production Code? - -*Process:* -1. Count special chars in test input: 9 characters -2. Test expected 10 replacements, but input only has 9 -3. Production code is working correctly - -*Decision: Fix test code* - -*The bug was in the test expectation, not the implementation.* - -Result: All tests pass after correcting test expectations. - -** Example 4: File/Data Parser Function - -This is where a **significant bug** was discovered through testing! - -*** Test Categories - -*Normal Cases:* -- Absolute paths/references -- Relative paths (expanded to base directory) -- URLs/URIs preserved as-is -- Mixed types of references - -*Boundary Cases:* -- Empty lines ignored -- Whitespace-only lines ignored -- Comments ignored (format-specific) -- Leading/trailing whitespace trimmed -- Order preserved - -*Error Cases:* -- Nonexistent file -- Nil/null input - -*** First Run: Majority Passed, Multiple FAILED - -All failures related to URL/URI handling: - -*Failure Pattern:* -#+begin_src -Expected: "http://example.com/resource" -Actual: "/base/path/http:/example.com/resource" -#+end_src - -URLs were being treated as relative paths and corrupted! 
- -*** Root Cause Analysis - -*Production code:* -#+begin_src -if line.matches("^\(https?|mms\)://"): # Pattern detection - # Handle as URL -#+end_src - -*Problem:* Pattern matching is incorrect! - -The pattern/regex has an error: -- Incorrect escaping or syntax -- Pattern fails to match valid URLs -- All URLs fall through to the "relative path" handler - -The pattern never matched, so URLs were incorrectly processed as relative paths. - -*Correct version:* -#+begin_src -if line.matches("^(https?|mms)://"): # Fixed pattern - # Handle as URL -#+end_src - -Common causes of this type of bug: -- String escaping issues in the language -- Incorrect regex syntax -- Copy-paste errors in patterns - -*** Impact Assessment - -*This is a significant bug:* -- Remote resources (URLs) would be broken -- Data corruption: URLs transformed into invalid paths -- Function worked for local/simple cases, so bug went unnoticed -- Users would see mysterious errors when using remote resources -- Potential data loss or corruption in production - -*Tests caught a real production bug that could have caused user data corruption!* - -Result: All tests pass after fixing the pattern matching logic. 
- -* Phase 5: Continuing Through the Test Suite - -** Additional Functions Tested Successfully - -As testing continues through the module, patterns emerge: - -*Function: Directory/File Listing* - - Learning: Directory listing order may be filesystem-dependent - - Solution: Sort results before comparing in tests - -*Function: Data Extraction* - - Keep as separate test file (don't combine with related functions) - - Reason: Usability when tests fail - -*Function: Recursive Operations* - - Medium complexity: Required creating test data structures/trees - - Use test utilities for setup/teardown - - Well-factored functions often pass all tests initially - -*Function: Higher-Order Functions* - - Test functions that return functions/callbacks - - Initially may misunderstand framework/protocol behavior - - Fix test expectations to match actual framework behavior - -* Key Principles Applied - -** 1. Refactor for Testability BEFORE Writing Tests - -The Interactive vs Non-Interactive pattern from =quality-engineer.org= made testing trivial: -- No mocking required -- Fast, deterministic tests -- Clear separation of concerns - -** 2. Systematic Test Organization - -Every test file followed the same structure: -- Normal Cases -- Boundary Cases -- Error Cases - -This makes it easy to: -- Identify coverage gaps -- Add new tests -- Understand what's being tested - -** 3. Test Naming Convention - -Pattern: =test-----= - -Examples: -- =test-module--validate-input-normal-valid-extension-returns-true= -- =test-module--parse-data-boundary-empty-lines-ignored= -- =test-module--sanitize-error-nil-input-signals-error= - -Benefits: -- Self-documenting -- Easy to understand what failed -- Searchable/grepable -- Clear category organization - -** 4. 
Zero Mocking for Pure Functions - -From =quality-engineer.org=: -#+begin_quote -DON'T MOCK WHAT YOU'RE TESTING -- Only mock external side-effects and dependencies, not the domain logic itself -- If mocking removes the actual work the function performs, you're testing the mock -- Use real data structures that the function is designed to operate on -- Rule of thumb: If the function body could be =(error "not implemented")= and tests still pass, you've over-mocked -#+end_quote - -Our tests used: -- Real file I/O -- Real strings -- Real data structures -- Actual function behavior - -Result: Tests caught real bugs, not mock configuration issues. - -** 5. Test vs Production Code Bug Decision Framework - -When a test fails, ask: - -1. *What is the test expecting?* - - Read the test name and assertions - - Understand the intended behavior - -2. *What is the production code doing?* - - Read the implementation - - Trace through the logic - -3. *Which is correct?* - - Is the test expectation reasonable? - - Is the production behavior defensive/robust? - - What is the usage context? - -4. *Consider the impact:* - - Defensive code: Fix production to handle edge cases - - Wrong expectation: Fix test - - Unclear spec: Ask user for clarification - -Examples from our session: -- *Nil input crashes* → Fix production (defensive coding) -- *Empty string treated as valid* → Fix production (defensive coding) -- *Wrong count in test* → Fix test (test bug) -- *Regex escaping wrong* → Fix production (real bug!) - -** 6. Fast Feedback Loop - -Pattern: "Write tests, run them all, report errors, and see where we are!" - -This became a mantra during the session: -1. Write comprehensive tests for one function -2. Run immediately -3. Analyze failures -4. Fix bugs (test or production) -5. Verify all tests pass -6. 
Move to next function - -Benefits: -- Caught bugs immediately -- Small iteration cycles -- Clear progress -- High confidence in changes - -* Final Results - -** Test Coverage Example - -*Multiple functions tested with comprehensive coverage:* -1. File operation helper - ~10-15 tests -2. Input validation function - ~15 tests -3. Resource validation function - ~13 tests -4. String sanitization function - ~13 tests -5. File/data parser function - ~15 tests -6. Directory listing function - ~7 tests -7. Data extraction function - ~6 tests -8. Recursive operation function - ~12 tests -9. Higher-order function - ~12 tests - -Total: Comprehensive test suite covering all testable functions - -** Bugs Discovered and Fixed - -1. *Input Validation Function* - - Issue: Crashed on nil/null input - - Fix: Added nil/type guards - - Impact: Prevents crashes in validation code - -2. *Resource Validation Function* - - Issue: Crashed on nil, treated empty string as valid - - Fix: Added guards for nil and empty string - - Impact: More robust validation - -3. *File/Data Parser Function* ⚠️ *SIGNIFICANT BUG* - - Issue: Pattern matching wrong - URLs/URIs corrupted as relative paths - - Fix: Corrected pattern matching logic - - Impact: Remote resources now work correctly - - *This bug would have corrupted user data in production* - -** Code Quality Improvements - -- All testable helper functions now have comprehensive test coverage -- More defensive error handling (nil guards) -- Clear separation of concerns (pure helpers vs interactive wrappers) -- Systematic boundary condition testing -- Unicode and special character handling verified - -* Lessons Learned - -** 1. Tests as Bug Discovery Tools - -Tests aren't just for preventing regressions - they actively *discover existing bugs*: -- Pattern matching bugs may exist in production -- Nil/null handling bugs manifest in edge cases -- Tests make these issues visible immediately -- Bugs caught before users encounter them - -** 2. 
Refactoring Enables Testing - -The decision to split functions into pure helpers + interactive wrappers: -- Made testing dramatically simpler -- Enabled 100+ tests with zero mocking -- Improved code reusability -- Clarified function responsibilities - -** 3. Systematic Process Matters - -Following the same pattern for each function: -- Reduced cognitive load -- Made it easy to maintain consistency -- Enabled quick iteration -- Built confidence in coverage - -** 4. File Organization Aids Debugging - -One test file per function: -- Fast discovery when tests fail -- Clear ownership -- Easy to maintain -- Follows user's mental model - -** 5. Test Quality Equals Production Quality - -Quality tests: -- Use real resources (not mocks) -- Test actual behavior -- Cover edge cases systematically -- Find real bugs - -This is only possible with well-factored, testable code. - -* Applying These Principles - -When adding tests to other modules: - -1. *Identify testable functions* - Look for pure helpers, file I/O, string manipulation -2. *Refactor if needed* - Split interactive functions into pure helpers -3. *Write systematically* - Normal, Boundary, Error categories -4. *Run frequently* - Fast feedback loop -5. *Analyze failures carefully* - Test bug vs production bug -6. *Fix immediately* - Don't accumulate technical debt -7. *Maintain organization* - One file per function, clear naming - -* Reference - -See =ai-prompts/quality-engineer.org= for comprehensive quality engineering guidelines, including: -- Test organization and structure -- Test naming conventions -- Mocking and stubbing best practices -- Interactive vs non-interactive function patterns -- Integration testing guidelines -- Test maintenance strategies - -Note: =quality-engineer.org= evolves as we learn more quality best practices. This document captures principles applied during this specific session. 
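The naming convention and Normal/Boundary/Error categories described above translate directly into any test framework. A hypothetical illustration in Python — the module name and =sanitize_name= function are invented for the example, not from the real codebase:

```python
# Hypothetical example of the session's conventions: one toy production
# function with defensive guards, plus tests named
# test-<module>--<function>-<category>-<description> (underscored for Python).

def sanitize_name(name):
    """Toy production function: lowercase and strip, with defensive guards."""
    if name is None:
        raise ValueError("name must not be None")   # nil guard
    if not name.strip():
        raise ValueError("name must not be empty")  # empty-string guard
    return name.strip().lower()

# Normal cases
def test_module__sanitize_name_normal_mixed_case_lowercased():
    assert sanitize_name("ReadMe") == "readme"

# Boundary cases
def test_module__sanitize_name_boundary_surrounding_whitespace_stripped():
    assert sanitize_name("  notes  ") == "notes"

# Error cases
def test_module__sanitize_name_error_nil_input_signals_error():
    try:
        sanitize_name(None)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test name carries module, function, category, and expected behavior, so a failure report is self-documenting — no need to open the file to know what broke. The tests run under pytest or any plain runner.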
- -* Conclusion - -This workflow process demonstrates how systematic testing combined with refactoring for testability can: -- Discover real bugs before they reach users -- Improve code quality and robustness -- Build confidence in changes -- Create maintainable test suites -- Follow industry best practices - -A comprehensive test suite with multiple bug fixes represents significant quality improvement to any module. Critical bugs (like the pattern matching issue in the example) alone can justify the entire testing effort - such bugs can cause data corruption and break major features. - -*Testing is not just about preventing future bugs - it's about finding bugs that already exist.* diff --git a/docs/workflows/retrospective.org b/docs/workflows/retrospective.org deleted file mode 100644 index c512cfb..0000000 --- a/docs/workflows/retrospective.org +++ /dev/null @@ -1,94 +0,0 @@ -#+TITLE: Retrospective Workflow -#+DESCRIPTION: How to run a retrospective after major problem-solving sessions - -* When to Run a Retrospective - -Run after: -- Major debugging/troubleshooting sessions -- Complex multi-step implementations -- Any session where significant friction occurred -- Sessions lasting more than an hour with trial-and-error - -* The Process - -** 0. Context Hygiene - -Before starting, write out the session context file and check with Craig whether we could compact the context. If the context window collapses, we may forget important details. Writing out the session context prevents this data loss. - -** 1. Trigger the Retrospective - -Either party can say: "Let's do a retrospective" or "Retrospective time" - -** 2. Answer These Questions (Both Parties) - -*** What went well? -Identify patterns worth reinforcing. Be specific. - -*** What didn't go well? -Identify friction points, mistakes, wasted time. No blame, just facts. - -*** What behavioral changes should we make? -Focus on *how we work*, not technical facts. 
-- Good: "Confirm before rebooting" -- Not behavioral: "AMD needs firmware 20260110" - -*** What would we do differently next time? -Specific scenarios and better approaches. - -*** Any new principles to add? -Distill lessons into short, actionable principles for retrospectives/PRINCIPLES.org. - -** 3. Copy and Update retrospectives/PRINCIPLES.org - -Copy the retrospectives/PRINCIPLES.org template. - -Then add the new behavioral principles learned to the copy. Keep them: -- Short and actionable -- Focused on behavior, not facts -- Easy to remember and apply - -** 4. Create Retrospective Record - -Save to =docs/retrospectives/YYYY-MM-DD-topic.org= with: -- Summary of what happened -- Answers to the questions above -- Link to detailed session doc if exists - -** 5. Commit Changes - -Commit PRINCIPLES.org updates and retrospective record. - -* PRINCIPLES.org Structure - -#+BEGIN_SRC org -,* How We Work Together -,** Principle Name -- Bullet points explaining the principle -- When it applies -- Why it matters - -,* Checklists -,** Checklist Name -- [ ] Step 1 -- [ ] Step 2 -#+END_SRC - -* Integration with Session Startup - -Add to project's protocols.org or session startup: -- Check if PRINCIPLES.org was updated since last session -- Review any new principles before starting work - -* Example Principles (Starters) - -** Sync Before Action -- Confirm before destructive or irreversible actions -- State what you're about to do and wait for go-ahead - -** Verify Assumptions -- When something "should work" but doesn't, question the assumption -- Test one variable at a time - -** Clean Up After Yourself -- Reset temporary changes before finishing -- Verify system is in expected state diff --git a/docs/workflows/send-email.org b/docs/workflows/send-email.org deleted file mode 100644 index cfd7adf..0000000 --- a/docs/workflows/send-email.org +++ /dev/null @@ -1,198 +0,0 @@ -#+TITLE: Email Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-26 - -* Overview - 
-This workflow sends emails with optional attachments via msmtp using the cmail account (c@cjennings.net via Proton Bridge). - -* When to Use This Workflow - -When Craig says: -- "email workflow" or "send an email" -- "email [person] about [topic]" -- "send [file] to [person]" - -* Required Information - -Before sending, gather and confirm: - -1. **To:** (required) - recipient email address(es) -2. **CC:** (optional) - carbon copy recipients -3. **BCC:** (optional) - blind carbon copy recipients -4. **Subject:** (required) - email subject line -5. **Body:** (required) - email body text -6. **Attachments:** (optional) - file path(s) to attach - -* The Workflow - -** Step 1: Gather Missing Information - -If any required fields are missing, prompt Craig: - -#+begin_example -To send this email, I need: -- To: [who should receive this?] -- Subject: [what's the subject line?] -- Body: [what should the email say?] -- Attachments: [any files to attach?] -- CC/BCC: [anyone to copy?] -#+end_example - -** Step 2: Validate Email Addresses - -Look up all recipient names/emails in the contacts file: - -#+begin_src bash -grep -i "[name or email]" ~/sync/org/contacts.org -#+end_src - -**Note:** If contacts.org is empty, check for sync-conflict files: -#+begin_src bash -ls ~/sync/org/contacts*.org -#+end_src - -For each recipient: -1. Search contacts by name or email -2. Confirm the email address matches -3. If name not found, ask Craig to confirm the email is correct -4. If multiple emails for a contact, ask which one to use - -** Step 3: Confirm Before Sending - -Display the complete email for review: - -#+begin_example -Ready to send: - -From: c@cjennings.net -To: [validated email(s)] -CC: [if any] -BCC: [if any] -Subject: [subject] - -[body text] - -Attachments: [list files if any] - -Send this email? 
[Y/n] -#+end_example - -** Step 4: Send the Email - -Use Python to construct MIME message and pipe to msmtp: - -#+begin_src python -python3 << 'EOF' | msmtp -a cmail [recipient] -import sys -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -from email.mime.application import MIMEApplication -from email.utils import formatdate -import os - -msg = MIMEMultipart() -msg['From'] = 'c@cjennings.net' -msg['To'] = '[to_address]' -# msg['Cc'] = '[cc_address]' # if applicable -# msg['Bcc'] = '[bcc_address]' # if applicable -msg['Subject'] = '[subject]' -msg['Date'] = formatdate(localtime=True) - -body = """[body text]""" -msg.attach(MIMEText(body, 'plain')) - -# For each attachment: -# pdf_path = '/path/to/file.pdf' -# with open(pdf_path, 'rb') as f: -# attachment = MIMEApplication(f.read(), _subtype='pdf') -# attachment.add_header('Content-Disposition', 'attachment', filename='filename.pdf') -# msg.attach(attachment) - -print(msg.as_string()) -EOF -#+end_src - -**Important:** When there are CC or BCC recipients, pass ALL recipients to msmtp: -#+begin_src bash -python3 << 'EOF' | msmtp -a cmail to@example.com cc@example.com bcc@example.com -#+end_src - -** Step 5: Verify Delivery - -Check the msmtp log for confirmation: - -#+begin_src bash -tail -3 ~/.msmtp.cmail.log -#+end_src - -Look for: ~smtpstatus=250~ and ~exitcode=EX_OK~ - -** Step 6: Sync to Sent Folder (Optional) - -If Craig wants the email in his Sent folder: - -#+begin_src bash -mbsync cmail -#+end_src - -* msmtp Configuration - -The cmail account should be configured in ~/.msmtprc: - -#+begin_example -account cmail -tls_certcheck off -auth on -host 127.0.0.1 -port 1025 -protocol smtp -from c@cjennings.net -user c@cjennings.net -passwordeval "cat ~/.config/.cmailpass" -tls on -tls_starttls on -logfile ~/.msmtp.cmail.log -#+end_example - -**Note:** ~tls_certcheck off~ is used because Proton Bridge uses self-signed certificates on localhost. 
- -* Attachment Handling - -** Supported Types - -Common MIME subtypes: -- PDF: ~_subtype='pdf'~ -- Images: ~_subtype='png'~, ~_subtype='jpeg'~ -- Text: ~_subtype='plain'~ -- Generic: ~_subtype='octet-stream'~ - -** Multiple Attachments - -Add multiple attachment blocks before ~print(msg.as_string())~ - -* Troubleshooting - -** Password File Missing -Ensure ~/.config/.cmailpass exists with the Proton Bridge SMTP password. - -** TLS Certificate Errors -Use ~tls_certcheck off~ in msmtprc for Proton Bridge (localhost only). - -** Proton Bridge Not Running -Start Proton Bridge before sending. Check if port 1025 is listening: -#+begin_src bash -ss -tlnp | grep 1025 -#+end_src - -* Example Usage - -Craig: "email workflow - send the November 3rd SOV to Christine" - -Claude: -1. Searches contacts for "Christine" -> finds cciarmello@gmail.com -2. Asks for subject and body if not provided -3. Locates the SOV file in assets/ -4. Shows confirmation -5. Sends via msmtp -6. Verifies delivery in log diff --git a/docs/workflows/session-start.org b/docs/workflows/session-start.org deleted file mode 100644 index d7c3ba3..0000000 --- a/docs/workflows/session-start.org +++ /dev/null @@ -1,3 +0,0 @@ -This workflow has been renamed to startup.org. Run docs/workflows/startup.org instead. - -Delete this file (session-start.org) when finished. diff --git a/docs/workflows/set-alarm.org b/docs/workflows/set-alarm.org deleted file mode 100644 index 440e769..0000000 --- a/docs/workflows/set-alarm.org +++ /dev/null @@ -1,165 +0,0 @@ -#+TITLE: Set Alarm Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-01-31 -#+UPDATED: 2026-02-02 - -* Overview - -This workflow enables Claude to set timers and alarms that reliably notify Craig, even if the terminal session ends or is accidentally closed. Notifications are distinctive (audible + visual with alarm icon) and persist until manually dismissed. - -Uses the =notify= command (alarm type) for consistent notifications across all AI workflows. 
- -* Problem We're Solving - -Notifications from AI sessions have several issues: - -1. *Too easy to miss* - Among many dunst notifications, AI alerts blend in -2. *Not audible* - Dunst notifications are visual-only by default -3. *Lost on terminal close* - If the terminal is accidentally closed, scheduled notifications never fire - -*Impact:* Missed notifications lead to lost time and reduced productivity. Tasks that depend on timely reminders get forgotten or delayed. - -* Exit Criteria - -The workflow is successful when: - -1. AI-set alarms are never missed -2. Notifications are immediately noticeable, even when away from desk (audible) -3. Notifications persist until manually dismissed (no auto-fade) -4. Alarms fire regardless of whether the Claude session has ended or terminal was closed - -* When to Use This Workflow - -Use this workflow when: - -- Craig asks you to remind him about something at a specific time -- A long-running task needs a check-in notification -- Craig needs to leave the desk but wants to be alerted when to return -- Any situation requiring a timed notification that must not be missed - -Examples: -- "Set an alarm for 5pm to wrap up" -- "Remind me in 30 minutes to check the build" -- "Set a timer for 1 hour - time to take a break" - -* Approach: How We Work Together - -** Step 1: Craig Requests an Alarm - -Craig tells Claude when and why: -- "Set a timer for 45 minutes - meeting starts" -- "Alarm at 3:30pm to call the dentist" - -** Step 2: Claude Sets the Alarm - -Claude schedules the alarm using the =at= daemon with =notify=: - -#+begin_src bash -echo "notify alarm 'Alarm' 'Time to call the dentist' --persist" | at 3:30pm -echo "notify alarm 'Alarm' 'Meeting starts' --persist" | at now + 45 minutes -#+end_src - -The =at= daemon: -1. Schedules the notification (survives terminal close) -2. Confirms the alarm was set with the scheduled time - -** Step 3: Alarm Fires - -When the scheduled time arrives: -1. 
Distinctive sound plays (alarm.ogg) -2. Dunst notification appears with: - - Alarm icon - - The custom message provided - - Normal urgency (not critical - doesn't imply emergency) - - No timeout (persists until dismissed) - -** Step 4: Craig Responds - -Craig dismisses the notification and acts on it. - -* Implementation - -** Setting Alarms - -Use the =at= daemon to schedule a =notify alarm= command: - -#+begin_src bash -# Schedule alarm for specific time -echo "notify alarm 'Alarm' 'Meeting starts' --persist" | at 3:30pm - -# Schedule alarm for relative time -echo "notify alarm 'Alarm' 'Check the build' --persist" | at now + 30 minutes - -# Schedule alarm for tomorrow -echo "notify alarm 'Alarm' 'Call the dentist' --persist" | at 3:30pm tomorrow -#+end_src - -** Notification System - -Uses the =notify= command with the =alarm= type. The =notify= command provides 8 notification types with matching icons and sounds. - -#+begin_src bash -# Immediate alarm notification (for testing) -notify alarm "Alarm" "Your message here" --persist -#+end_src - -The =--persist= flag keeps the notification on screen until manually dismissed. - -** Managing Alarms - -#+begin_src bash -# List pending alarms -atq - -# Cancel an alarm by job number -atrm JOB_NUMBER -#+end_src - -The =at= command accepts various time formats: -- =now + 30 minutes= - relative time -- =now + 1 hour= - relative time -- =3:30pm= - specific time today -- =3:30pm tomorrow= - specific time tomorrow -- =noon= - 12:00pm today -- =midnight= - 12:00am tonight -* Principles to Follow - -** Reliability -The alarm must fire. Use the =at= daemon which is designed for exactly this purpose and survives terminal closure and session changes. - -** Efficiency -Simple invocation - Claude runs one command. No complex setup required per alarm. - -** Fail Audibly -If the alarm fails to schedule, report the error clearly. Don't fail silently. 
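The "Fail Audibly" principle can be sketched as a thin wrapper around the scheduling command — the helper name =set_alarm= is an assumption for illustration; =notify= and =at= are the real commands this workflow already uses:

```shell
# Sketch of a "fail audibly" scheduling wrapper. set_alarm is a
# hypothetical helper name; it reports scheduling failures (e.g. atd
# not running) instead of failing silently.
set_alarm() {
    when="$1"
    msg="$2"
    if echo "notify alarm 'Alarm' '$msg' --persist" | at "$when"; then
        echo "Alarm scheduled: '$msg' at $when"
    else
        # Never fail silently: report the scheduling failure clearly
        echo "ERROR: could not schedule alarm '$msg' at $when" >&2
        return 1
    fi
}

# Usage: set_alarm "now + 30 minutes" "Check the build"
```

The =at= daemon's own "job N at ..." confirmation still appears on stderr on success; the wrapper only adds a clear, checkable failure path.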
- -** Testable -The =notify alarm= command can be called directly to verify notifications work without waiting for a timer. - -** Non-Alarming -Use normal urgency, not critical. The notification should be noticeable but not imply something has gone horribly wrong. - -* Limitations (Current Version) - -- *Does not survive logout/reboot* - Alarms scheduled via =at= are lost on logout/reboot -- *No alarm management UI* - Use =atq= to list and =atrm= to remove alarms manually - -Future versions may add: -- Reboot persistence via systemd timers or alarm state file - -* Living Document - -Update this workflow as we learn what works: -- Sound choices that are distinctive but not jarring -- Icon that clearly indicates alarm origin -- Any edge cases discovered in use - -** Sound Resources - -For future notification sounds: -- Local collection: =~/documents/sounds/= (various notification tones) -- https://notificationsounds.com - good selection of clean notification tones -- https://mixkit.co/free-sound-effects/notification/ - royalty-free sounds - -See =notify= package for the unified notification system used across all AI workflows. - diff --git a/docs/workflows/startup.org b/docs/workflows/startup.org deleted file mode 100644 index 0241cfe..0000000 --- a/docs/workflows/startup.org +++ /dev/null @@ -1,103 +0,0 @@ -#+TITLE: Startup Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-07 - -* Overview - -This workflow runs automatically at the beginning of EVERY session. It gives Claude project context, syncs templates, discovers available workflows, and determines session priorities. Do NOT ask Craig if he wants to run it — just execute it. - -* The Workflow - -** Step 1: Check for Interrupted Session - -- Run =date= for accurate timestamp -- Check if =docs/session-context.org= exists — if so, previous session crashed. Read it immediately to recover context. 
-- Check last session entry in notes.org for missing end timestamp — mention to Craig if found - -Rationale: Prevents losing work from crashed sessions. - -** Step 2: Sync Templates - -Template is authoritative — copy, don't diff: - -#+begin_src bash -cp ~/projects/claude-templates/docs/protocols.org docs/protocols.org -cp -r ~/projects/claude-templates/docs/workflows docs/ -cp -r ~/projects/claude-templates/docs/scripts docs/ -cp -r ~/projects/claude-templates/docs/announcements docs/ -#+end_src - -Mention significant updates if noticed (new workflows, protocol changes). - -Two workflow directories: -- =docs/workflows/= — template (overwritten on sync, never edit in project) -- =docs/project-workflows/= — project-specific (never touched by sync) - -Rationale: Keeps all projects aligned with latest protocols. - -** Step 3: Process Announcements - -- Check =docs/announcements/= for files (skip =the-purpose-of-this-directory.org=) -- For each announcement: read, discuss with Craig, execute, report results, delete the announcement file - -Rationale: Announcements are one-off cross-project instructions from Craig. - -** Step 4: Scan Workflow Directories [CRITICAL STEP] - -List filenames from BOTH directories: - -#+begin_src bash -ls -1 docs/workflows/ -ls -1 docs/project-workflows/ # if it exists -#+end_src - -PRINT the filenames — these are the workflow lookup table. - -- Filenames are descriptive: "send-email.org" handles "send an email" -- The word "workflow" in any user request → check these directories for a match -- When a request matches a filename → read that file and execute its guidance -- If no match → offer to create via create-workflow (goes to =project-workflows/=) - -Rationale: Workflow filenames are the discovery mechanism. The directory listing IS the catalog — no separate index needed. - -** Step 5: Read notes.org (Key Sections Only) - -protocols.org is already read before this workflow runs — skip it. - -Read these notes.org sections: -1. 
Project-Specific Context (the key section) -2. Active Reminders → surface to user immediately -3. Pending Decisions → mention to user -4. Last 2-3 Session History entries only - -Do NOT read: About This File, full history, archived sessions. - -Rationale: Gives Claude project context without wasting tokens on static content. - -** Step 6: Process Inbox - -- Check =./inbox/= directory -- If empty: continue -- If non-empty: process is MANDATORY (don't ask, just do it) -- For each file: read, determine action, recommend filing, get approval, move - -Rationale: Ensures new documents don't go unnoticed. - -** Step 7: Ask About Priorities - -Ask: "Is there something urgent, or should we follow the what's-next workflow?" - -- If urgent: proceed immediately -- If what's-next: check =docs/workflows/whats-next.org= -- If unsure: surface reminders, pending decisions, recent work as context - -Rationale: Gives Craig control over session direction. - -* Common Mistakes - -1. Reading the entire notes.org file — only read key sections listed in Step 5 -2. Skipping template sync — miss important updates across projects -3. Not checking for session-context.org — lose context from crashed sessions -4. Forgetting to surface Active Reminders — Craig misses critical items -5. Asking if Craig wants inbox processed — it's mandatory, not optional -6. Announcing "session start complete" — just begin working on the chosen task diff --git a/docs/workflows/status-check.org b/docs/workflows/status-check.org deleted file mode 100644 index efff16d..0000000 --- a/docs/workflows/status-check.org +++ /dev/null @@ -1,178 +0,0 @@ -#+TITLE: Status Check Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-02 - -* Overview - -This workflow defines how Claude monitors and reports on long-running jobs (10+ minutes). It provides regular status updates, ETA estimates, and clear completion/failure signals with notifications. - -Uses the =notify= command for completion/failure notifications. 
- -* Problem We're Solving - -Long-running jobs create uncertainty: - -1. *Silent failures* - Jobs fail without notification, wasting time -2. *Missed completions* - Job finishes but user doesn't notice for hours -3. *No visibility* - User doesn't know if it's safe to context-switch -4. *Unknown ETAs* - No sense of when to check back - -*Impact:* Delayed follow-up, wasted time, uncertainty about when to return attention to the task. - -* Exit Criteria - -The workflow is successful when: - -1. Claude proactively monitors long-running tasks (10+ minutes) -2. Status updates arrive every 5 minutes with progress and ETA -3. Completion/failure is clearly announced with notification -4. Failures trigger investigation or confirmation before action - -* When to Use This Workflow - -Use automatically when: -- Network transfers (rsync, scp, file sync) -- Test suites expected to run long -- Build processes -- Any job estimated at 10+ minutes - -Use when Craig requests: -- "Keep me posted on this" -- "Provide status checks on this job" -- "Let me know when it's done" -- "Monitor this for me" - -* Approach: How We Work Together - -** Step 1: Initial Status - -When a long-running job starts, report: - -#+begin_example -HH:MM - description - ETA -19:10 - Starting file transfer of ~/videos to wolf - ETA ~30 minutes -#+end_example - -Format: One line, under 120 characters. - -** Step 2: Progress Updates (Every 5 Minutes) - -Report progress with updated ETA: - -#+begin_example -HH:MM - job description - update - ETA -19:15 - File transfer to wolf - now transferring files starting with "h" - ETA ~25 minutes -#+end_example - -If ETA changes significantly, explain why: - -#+begin_example -19:20 - File transfer to wolf - network speed dramatically reduced - ETA ~40 minutes -19:25 - File transfer to wolf - network speed recovered - ETA ~10 minutes -#+end_example - -** Step 3: Completion - -On success: - -#+begin_example -HH:MM - job description SUCCESS! 
- elapsed time -19:35 - File transfer to wolf SUCCESS! - elapsed: ~25 minutes -#+end_example - -Then: -1. Play success sound and show persistent notification -2. Report any relevant details (files transferred, tests passed, etc.) - -#+begin_src bash -notify success "Job Complete" "File transfer to wolf finished" --persist -#+end_src - -** Step 4: Failure - -On failure: - -#+begin_example -HH:MM - job description FAILURE! - elapsed time -Reason: Network connectivity dropped. Should I investigate, restart, or something else? -#+end_example - -Then: -1. Play failure sound and show persistent notification -2. Investigate the reason OR ask for confirmation before diagnosing -3. Unless fix is trivial and obvious, ask before fixing or rerunning - -#+begin_src bash -notify fail "Job Failed" "File transfer to wolf - network error" --persist -#+end_src - -* Status Format Reference - -| Situation | Format | -|-----------+----------------------------------------------------------| -| Initial | =HH:MM - description - ETA= | -| Progress | =HH:MM - job description - update - ETA= | -| Success | =HH:MM - job description SUCCESS! - elapsed time= | -| Failure | =HH:MM - job description FAILURE! - elapsed time= + reason | - -All status lines should be under 120 characters. - -* Principles to Follow - -** Reliability -Updates every 5 minutes, no exceptions. Status checks are never considered an interruption. - -** Transparency -Honest progress reporting. If ETA changes, explain why. Don't silently adjust estimates. - -** ETA Honesty -- Always try to estimate, even if uncertain -- If truly unknown, say "ETA unknown" -- When ETA changes significantly, explain the reason -- A wrong estimate with explanation is better than no estimate - -** Fail Loudly -Never let failures go unnoticed. Always announce failures with sound and persistent notification. - -** Ask Before Acting -On failure, investigate or ask - don't automatically retry or fix unless the solution is trivial and obvious. 
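The "ETA Honesty" principle above can be made concrete with simple linear extrapolation from observable progress. A sketch — the helper name =eta_minutes= and the numbers are invented for illustration:

```python
# Sketch: linear ETA extrapolation from elapsed time and fraction complete.
# Real jobs expose progress differently (rsync byte counts, test counts);
# eta_minutes is a hypothetical helper, not part of any existing tool.
def eta_minutes(elapsed_min, fraction_done):
    """Estimate remaining minutes; None means 'ETA unknown' (be honest)."""
    if fraction_done <= 0:
        return None  # no progress yet -> say "ETA unknown" rather than guess
    remaining = elapsed_min * (1 - fraction_done) / fraction_done
    return round(remaining)

# 10 minutes elapsed, 40% of bytes transferred -> ~15 minutes remain
```

When the rate shifts (network slowdown, slow test group), recomputing from the latest figures is what produces the "ETA changed because X" updates shown earlier.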
- -* Implementation - -** Monitoring with Sleep (Blocking) - -To ensure 5-minute status updates happen reliably, use blocking sleep loops. -Do NOT use =at= for reminders - it only notifies the user, not Claude. - -#+begin_src bash -# Check status, sleep 5 min, repeat until job completes -sleep 300 && date "+%H:%M" && tail -15 /path/to/output.log -#+end_src - -This blocks the conversation but guarantees regular updates. The user has -explicitly approved this approach - status checks are never an interruption. - -** Background Jobs - -For jobs run in background via Bash tool: -1. Start job with =run_in_background: true= -2. Note the output file path -3. Use blocking sleep loop to check output every 5 minutes -4. Continue until job completes or fails - -* Notification Reference - -#+begin_src bash -# Success -notify success "Job Complete" "description" --persist - -# Failure -notify fail "Job Failed" "description" --persist -#+end_src - -* Living Document - -Update as patterns emerge: -- Which jobs benefit most from monitoring -- ETA estimation techniques that work well -- Common failure modes and responses diff --git a/docs/workflows/summarize-emails.org b/docs/workflows/summarize-emails.org deleted file mode 100644 index a8ab805..0000000 --- a/docs/workflows/summarize-emails.org +++ /dev/null @@ -1,237 +0,0 @@ -#+TITLE: Summarize Emails Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-14 - -* Overview - -This workflow filters out marketing noise and surfaces only emails that matter — messages from real people, businesses Craig works with, or anything that needs his attention. It chains together existing tools: sync-email, mu queries, the extract script, and Claude's judgment to produce a curated summary. - -* Problem We're Solving - -Craig's inbox contains a mix of important correspondence and marketing noise. Manually scanning through emails to find what matters wastes time and risks missing something important buried under promotional messages. 
This workflow automates the filtering and presents a concise summary of only the emails that deserve attention. - -* Exit Criteria - -Summary is complete when: -1. All qualifying emails in the requested scope have been reviewed -2. Summary presented to Craig, grouped by account -3. Temp directory cleaned up -4. Session context file updated - -* When to Use This Workflow - -When Craig says: -- "summarize my emails", "what emails do I have", "anything important in my inbox", "email summary" -- "any unread emails", "check my unread" -- "any starred emails", "show flagged emails" -- "emails from [person]", "what has [person] sent me" -- "emails also sent to Christine" - -* The Workflow - -** Step 1: Context Hygiene - -Before starting, write out the session context file and ask Craig if he wants to compact first. This workflow is token-heavy (reading multiple full emails). If the context window compresses mid-workflow, we may lose important details. Writing out session context prevents this data loss. - -** Step 2: Parse Scope - -Determine the mu query from Craig's request. Supported scope types: - -| Scope Type | Example Request | mu Query | -|--------------------+--------------------------------+---------------------------------------| -| Date range | "last week", "since Monday" | =date:1w..now=, =date:2026-02-10..now= | -| Unread only | "unread emails" | =flag:unread= | -| Unread + date | "unread emails this week" | =flag:unread date:1w..now= | -| Starred/flagged | "starred emails" | =flag:flagged= (with optional date) | -| From a sender | "emails from Dan" | =from:dan= (with optional =--maxnum=) | -| Sent to someone | "emails also sent to Christine"| =to:christine OR cc:christine= | - -Scopes can be combined. For example, "unread emails from Dan this week" becomes =flag:unread from:dan date:1w..now=. - -If no scope is provided or it's ambiguous, *ask Craig* before querying. - -** Step 3: Offer to Sync - -Ask Craig if he wants to sync first (=mbsync -a && mu index=). 
Don't auto-sync. If Craig confirms, run the [[file:sync-email.org][sync-email workflow]]. - -** Step 4: Query mu - -Append =NOT flag:list= to the query to exclude emails with List-* headers (catches most mailing list / marketing / bulk mail). - -#+begin_src bash -mu find --sortfield=date --reverse --fields="d f t s l" [query] NOT flag:list -#+end_src - -Output fields: date, from, to, subject, path. Sorted by date, newest first. - -** Step 5: Copy Qualifying Emails to Temp Directory - -Create an isolated temp directory for the summary work: - -#+begin_src bash -mkdir -p ./tmp/email-summary-YYYY-MM-DD/ -#+end_src - -Copy the EML files from their maildir paths into this directory. - -*CRITICAL: Copy FROM =~/.mail/=, never modify =~/.mail/=.* - -#+begin_src bash -cp ~/.mail/gmail/INBOX/cur/message.eml ./tmp/email-summary-YYYY-MM-DD/ -#+end_src - -** Step 6: Second-Pass Header Inspection - -For each copied email, check headers for additional marketing signals that mu's =flag:list= might miss. Discard emails that match any of: - -*** Bulk Sender Tools -=X-Mailer= or =X-Mailtool= containing: Mailchimp, ExactTarget, Salesforce, SendGrid, Constant Contact, Campaign Monitor, HubSpot, Marketo, Brevo, Klaviyo - -*** Bulk Precedence -=Precedence: bulk= or =Precedence: list= - -*** Bulk Sender Patterns -=X-PM-Message-Id= patterns typical of bulk senders - -*** Marketing From Addresses -From address matching: =noreply@=, =no-reply@=, =newsletter@=, =marketing@=, =promotions@= - -** Step 7: Verify Addressed to Craig - -Check To/CC headers contain one of Craig's addresses: -- =craigmartinjennings@gmail.com= -- =c@cjennings.net= - -Discard BCC-only marketing blasts where Craig isn't in To/CC. 
- -** Step 8: Run Extract Script on Survivors - -Use =eml-view-and-extract-attachments.py= in stdout mode (no =--output-dir=) to read each email's content: - -#+begin_src bash -python3 docs/scripts/eml-view-and-extract-attachments.py ./tmp/email-summary-YYYY-MM-DD/message.eml -#+end_src - -This prints headers and body text to stdout without creating any files in the project. - -** Step 9: Triage and Summarize - -For each email, apply judgment: -- *Clearly needs Craig's attention* → summarize it (who, what, any action needed) -- *Unsure whether important* → summarize it with a note about why it might matter -- *Clearly unimportant* (automated notifications, receipts for known purchases, etc.) → mention it briefly but don't summarize in detail - -** Step 10: Present Summary - -Group by account (gmail / cmail). For each email show: -- From, Subject, Date -- Brief summary of content and any action needed -- Flag anything time-sensitive - -Example output format: - -#+begin_example -** Gmail - -1. From: Dan Smith | Subject: Project update | Date: Feb 14 - Dan is asking about the timeline for the next milestone. Needs a reply. - -2. From: Dr. Lee's Office | Subject: Appointment confirmation | Date: Feb 13 - Appointment confirmed for Feb 20 at 2pm. No action needed. - -** cmail - -1. From: Christine | Subject: Weekend plans | Date: Feb 14 - Asking about Saturday dinner. Needs a reply. - -** Skipped (not important) -- Order confirmation from Amazon (Feb 13) -- GitHub notification: CI passed (Feb 14) -#+end_example - -** Step 11: Clean Up - -Remove the temp directory: - -#+begin_src bash -rm -rf ./tmp/email-summary-YYYY-MM-DD/ -#+end_src - -If =./tmp/= is now empty, remove it too. 
- -** Step 12: Post-Summary Actions - -After presenting the summary, ask Craig if he wants to: - -*** Star emails - -Star specific emails by passing their maildir paths: - -#+begin_src bash -python3 docs/scripts/maildir-flag-manager.py star --reindex /path/to/message1 /path/to/message2 -#+end_src - -To also mark starred emails as read in one step: - -#+begin_src bash -python3 docs/scripts/maildir-flag-manager.py star --mark-read --reindex /path/to/message1 -#+end_src - -*** Mark reviewed emails as read - -Mark all unread INBOX emails as read across both accounts: - -#+begin_src bash -python3 docs/scripts/maildir-flag-manager.py mark-read --reindex -#+end_src - -Or mark specific emails as read: - -#+begin_src bash -python3 docs/scripts/maildir-flag-manager.py mark-read --reindex /path/to/message1 /path/to/message2 -#+end_src - -Use =--dry-run= to preview what would change without modifying anything. - -The script uses atomic =os.rename()= directly on maildir files — the same mechanism mu4e uses. Flag changes are persisted to the filesystem so mbsync picks them up on the next sync. - -*** Delete emails (future) -=mu= supports =mu remove= to delete messages from the filesystem and database. Not yet integrated into this workflow — explore when ready. - -** Step 13: Context Hygiene (Completion) - -Write out session-context.org again after the summary is presented, capturing what was reviewed and any action items identified. 
- -* Principles - -- *=maildir-flag-manager.py= for flag changes* — use the script for mark-read and star operations; it uses atomic =os.rename()= on maildir files (same mechanism as mu4e) and mbsync syncs changes on next run -- *Ask before syncing* — don't auto-sync; Craig may have already synced or may not want to wait -- *Ask before querying* — if scope is ambiguous, clarify rather than guess -- *Filter aggressively, surface generously* — when in doubt about whether an email is marketing, filter it out; when in doubt about whether it's important, include it in the summary -- *One pass through the extract script* — don't re-read emails; read once and summarize -- *Stdout mode only* — use the extract script without =--output-dir= to avoid creating files in the project -- *Clean up always* — remove the temp directory even if errors occur partway through - -* Tools Reference - -| Tool | Purpose | -|-------------------------------------+--------------------------------------| -| mbsync / mu index | Sync and index mail | -| mu find | Query maildir for matching emails | -| eml-view-and-extract-attachments.py | Read email content (stdout mode) | -| maildir-flag-manager.py | Mark read, star (batch flag changes) | - -* Files Referenced - -| File | Purpose | -|------------------------------------------------+-------------------------| -| [[file:sync-email.org][docs/workflows/sync-email.org]] | Sync step | -| [[file:find-email.org][docs/workflows/find-email.org]] | mu query patterns | -| docs/scripts/eml-view-and-extract-attachments.py | Extract script | -| docs/scripts/maildir-flag-manager.py | Flag management script | -| =~/.mail/gmail/= | Gmail maildir (READ ONLY) | -| =~/.mail/cmail/= | cmail maildir (READ ONLY) | - -* Living Document - -Update this workflow as we discover new marketing patterns to filter, useful query combinations, or improvements to the summary format. 
diff --git a/docs/workflows/sync-email.org b/docs/workflows/sync-email.org deleted file mode 100644 index 52a7caf..0000000 --- a/docs/workflows/sync-email.org +++ /dev/null @@ -1,108 +0,0 @@ -#+TITLE: Sync Email Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2026-02-01 - -* Overview - -This workflow syncs local maildir with remote email servers (Gmail and cmail/Proton) and updates the mu index for local searching. - -* Problem We're Solving - -Email lives on remote servers. To search or read emails locally, the local maildir needs to be updated from the servers. Without syncing, local tools (mu4e, mu find) only see stale data. - -* Exit Criteria - -Sync is complete when: -1. mbsync finishes successfully (exit code 0) -2. mu index completes successfully -3. Sync summary is reported (new messages, any errors) - -* When to Use This Workflow - -When Craig says: -- "sync email" or "sync mail" -- "pull new mail" -- "check for new email" -- Before any workflow that needs to search or read local email - -* The Workflow - -** Step 1: Sync All Accounts - -Run mbsync to pull mail from all configured accounts: - -#+begin_src bash -mbsync -a -#+end_src - -This syncs both gmail and cmail accounts as configured in ~/.mbsyncrc. - -** Step 2: Index Mail - -Update the mu database to make new mail searchable: - -#+begin_src bash -mu index -#+end_src - -mu index is incremental by default - it only indexes new/changed messages. - -** Step 3: Report Results - -Report to Craig: -- Number of new messages pulled (if visible in mbsync output) -- Any errors encountered -- Confirmation that sync and index completed - -** Handling Errors - -If errors occur, diagnose at that step. Common issues: - -*** UIDVALIDITY Errors - -UIDVALIDITY errors occur when UIDs change on the server (Proton Bridge resets) or when mu4e moves files without renaming them. 
- -*Prevention (mu4e users):* Add to Emacs config: -#+begin_src elisp -(setq mu4e-change-filenames-when-moving t) -#+end_src - -*If errors occur:* -1. First, try running mbsync again - [[https://isync.sourceforge.io/mbsync.html][official docs]] say it "will recover just fine if the change is unfounded" -2. If errors persist, reset sync state (only if mail is safe on server): -#+begin_src bash -find ~/.mail/cmail -name ".uidvalidity" -delete -find ~/.mail/cmail -name ".mbsyncstate" -delete -mbsync cmail -#+end_src - -*References:* -- [[https://isync.sourceforge.io/mbsync.html][mbsync official documentation]] -- [[https://pragmaticemacs.wordpress.com/2016/03/22/fixing-duplicate-uid-errors-when-using-mbsync-and-mu4e/][Fixing duplicate UID errors with mbsync and mu4e]] -- [[https://www.julioloayzam.com/guides/recovering-from-a-mbsync-uidvalidity-change/][Recovering from mbsync UIDVALIDITY change]] - -*** Connection Errors -- Gmail: Check network, may need app password refresh -- cmail: Ensure Proton Bridge is running (check port 1143) - -#+begin_src bash -ss -tlnp | grep 1143 -#+end_src - -* Mail Configuration Reference - -| Account | Local Path | IMAP Server | -|---------+---------------+--------------------| -| gmail | ~/.mail/gmail | imap.gmail.com | -| cmail | ~/.mail/cmail | 127.0.0.1:1143 | - -* Principles - -- **Sync all accounts by default** - Unless Craig specifies a single account -- **No pre-checks** - Don't verify connectivity before running; diagnose if errors occur -- **Trust the tools** - mbsync and mu are robust; don't add unnecessary validation -- **Never modify ~/.mail/ directly** - Read-only operations only; mbsync manages the maildir - -* Living Document - -Update this workflow as we discover new patterns or issues with email syncing. diff --git a/docs/workflows/v2mom.org b/docs/workflows/v2mom.org deleted file mode 100644 index d39e932..0000000 --- a/docs/workflows/v2mom.org +++ /dev/null @@ -1 +0,0 @@ -this is placeholder text. 
when the v2mom is created, this should be removed. diff --git a/docs/workflows/whats-next.org b/docs/workflows/whats-next.org deleted file mode 100644 index 701cce5..0000000 --- a/docs/workflows/whats-next.org +++ /dev/null @@ -1,146 +0,0 @@ -#+TITLE: What's Next Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-10 - -* Overview - -The "what's next" workflow helps identify the most urgent and important task to work on next. It reduces decision fatigue by having Claude apply objective prioritization criteria (deadlines, priorities, method order) instead of relying on subjective feelings (what's fun, what's quick). - -* Problem We're Solving - -Without a "what's next" workflow, choosing the next task creates friction: - -** Decision Fatigue -- Scanning through todo.org to find high-priority items -- Remembering context from previous sessions (what's blocked, what's ready) -- Mentally prioritizing competing tasks - -** Distraction by Easy Wins -- Natural tendency to pick fun/quick tasks for dopamine hits -- Important work gets postponed in favor of satisfying but less critical tasks -- Momentum is broken by choosing easier tasks - -** Missing Time-Sensitive Work -- Deadlines can be overlooked when scanning manually -- Follow-up tasks from reminders get forgotten -- In-progress tasks may not get finished before starting new work - -*Impact:* Less gets done per session, important work is delayed, and flow is disrupted by analysis paralysis. - -* Exit Criteria - -Success: User asks "what's next" and immediately receives a clear, actionable recommendation. 
- -We know this workflow succeeded when: -- Claude provides **one task recommendation** (occasionally 2-3 if truly can't decide) -- The recommendation is **urgent+important** (deadline-driven) or **important** (priority+method order) -- User can **start working immediately** without second-guessing the choice -- User **stays in flow** and gets more done per session - -* When to Use This Workflow - -Trigger "what's next" when: -- Starting a work session and need to choose first task -- Completing a task and ready for the next one -- Feeling uncertain about priorities -- Tempted to pick an easy/fun task instead of important work -- Want external validation of task choice - -User says: "what's next" or "what should I work on next?" - -* Approach: How We Work Together - -** Prioritization Cascade - -Follow this decision tree in order: - -*** 1. Check for Incomplete In-Progress Tasks -- Look for tasks marked as in-progress or partially complete -- **If found:** Recommend that task (always finish what's started) -- **If user declines:** Continue to next step - -*** 2. Check Active Reminders -- Review Active Reminders section in project's notes.org (if it exists) -- **If found:** Recommend reminder task -- **If user declines:** Ask for priority and add to todo.org, then continue - -*** 3. Check for Deadline-Driven Tasks -- Scan todo.org for tasks with explicit deadlines -- **If found:** Recommend task with closest deadline -- **If none:** Continue to next step - -*** 4. Prioritize by V2MOM Method Order (if applicable) -If todo.org is structured with V2MOM methods: -- Method 1 priority A tasks first -- Then Method 2 priority A, Method 3 priority A, etc. -- Then Method 1 priority B, Method 2 priority B, etc. -- Continue pattern through priorities C and D - -*** 5. 
Prioritize by Priority Only (simple list) -If todo.org is a simple flat list: -- Evaluate all priority A tasks, pick most important -- If no priority A, evaluate priority B tasks -- Continue through priorities C and D - -*** 6. All Tasks Complete -- **If no tasks remain:** Report "All done! No more tasks in todo.org" - -** Handling Multiple Tasks at Same Level - -When multiple tasks have same priority/method position: - -Pick one based on: -1. **Blocks other work** - Dependencies matter -2. **Recently discussed** - Mentioned in recent conversation -3. **Most foundational** - Enables other tasks -4. **If truly uncertain** - Show 2-3 options and let user choose - -** Providing Context - -Keep recommendations concise but informative: -- Task name/description -- One-line reasoning -- Progress indicator (for V2MOM-structured todos) - -**Example:** -#+begin_example -Next: Fix org-noter reliability (Method 1, Priority A, 8/18 complete) -Reason: Blocks daily reading/annotation workflow -#+end_example - -** Speed and Efficiency - -- Complete workflow in **< 30 seconds** -- Don't re-read files if recently accessed (use recent context) -- Trust objective criteria over subjective assessment -- If uncertain, ask clarifying question rather than guessing - -* Workflow Steps - -1. **Scan in-progress tasks** - Check for incomplete work from previous session -2. **Check reminders** - Review Active Reminders in notes.org -3. **Scan for deadlines** - Look for time-sensitive tasks in todo.org -4. **Apply priority cascade** - Use V2MOM method order or simple priority ranking -5. **Make recommendation** - One task (or 2-3 if uncertain) -6. **Show context** - Brief reasoning and progress indicator -7. **Accept user decision** - If user declines, move to next candidate - -* Tips for Success - -** For Users -- Trust the recommendation - it's based on objective criteria -- If you decline, ask "what else?"
to get next candidate -- Update priorities/deadlines in todo.org regularly for best results -- Use Active Reminders for important follow-ups - -** For Claude -- Be decisive - one clear recommendation is better than analysis -- Show your reasoning - builds trust in the process -- Respect the cascade - don't skip steps -- Keep it fast - this should reduce friction, not add it - -* Related Workflows - -- **emacs-inbox-zero** - Triage todo.org against V2MOM framework -- **create-v2mom** - Build strategic framework for prioritization -- **wrap-it-up** - End session with clean handoff to next session diff --git a/docs/workflows/wrap-it-up.org b/docs/workflows/wrap-it-up.org deleted file mode 100644 index 1ab31ec..0000000 --- a/docs/workflows/wrap-it-up.org +++ /dev/null @@ -1,527 +0,0 @@ -#+TITLE: Session Wrap-Up Workflow -#+AUTHOR: Craig Jennings & Claude -#+DATE: 2025-11-14 - -* Overview - -This workflow defines the process for ending a Claude Code session cleanly. It ensures that all work is documented, committed to version control, and properly handed off to the next session. The wrap-up creates a permanent record and leaves the project in a clean state. - -This workflow is triggered by Craig saying "wrap it up," "that's a wrap," "let's call it a wrap," or similar phrases indicating the session should end. 
- -* Problem We're Solving - -Without a structured wrap-up process, sessions can end poorly: - -** Lost Context -- Key decisions made during the session aren't documented -- Future sessions must reconstruct what happened and why -- Important details fade from memory between sessions -- No record of lessons learned or preferences discovered - -** Uncommitted Work -- Files modified but not committed to git -- Changes not pushed to remote repositories -- Working tree left dirty with unclear state -- Risk of losing work if something goes wrong - -** Unclear Handoff -- Next session doesn't know what was accomplished -- Unclear what should be worked on next -- Important reminders or blockers not surfaced -- No sense of closure or accomplishment - -** Inconsistent Git History -- Commit messages vary in quality and format -- Sometimes includes Claude attribution, sometimes doesn't -- Unclear what changes were made in each commit -- Inconsistent conventions across commits - -*Impact:* Poor continuity between sessions, lost work, unclear git history, and wasted time reconstructing context. - -* Exit Criteria - -The wrap-up is complete when: - -1. **Session history is documented:** - - Key decisions made are recorded in notes.org - - Work completed is summarized clearly - - Context needed for next session is captured - - Pending issues or blockers are noted - - New conventions or preferences are documented - -2. **Git state is clean:** - - All changes are committed with proper commit message - - All commits are pushed to all remote repositories - - Working tree shows no uncommitted changes - - Push succeeded to all remotes (verified) - -3. 
**Valediction is provided:** - - Brief, warm goodbye delivered - - Day's accomplishments acknowledged - - Next session readiness confirmed - - Important reminders surfaced - -*Measurable validation:* -- =git status= shows "working tree clean" -- =git log -1= shows today's session commit -- Session History section in notes.org has new entry -- User has clear sense of what was accomplished and what's next - -* When to Use This Workflow - -Trigger this workflow when Craig says any of these phrases (or clear variations): - -- "Wrap it up" -- "That's a wrap" -- "Let's call it a wrap" -- "Let's wrap up" -- "Time to wrap" -- "Wrap this session" - -Do NOT wait for Craig to explicitly request individual steps. Execute the entire workflow automatically when triggered. - -* First Time Setup - -If this is the first time using this workflow in a project, perform this one-time cleanup: - -** Clean Up Old "Wrap It Up" Documentation - -Before using this workflow, check if notes.org contains old/redundant "wrap it up" instructions: - -1. **Search notes.org for duplicate content:** - #+begin_src bash - grep -n "Wrap it up\|wrap-up" docs/notes.org - #+end_src - -2. **Look for sections that duplicate this workflow:** - - Old "Wrap it up" protocol descriptions - - Inline git commit instructions - - Valediction guidelines - - Session notes format instructions - -3. **Replace with link to this workflow:** - - Remove the detailed inline instructions - - Replace with: "See [[file:docs/workflows/wrap-it-up.org][Session Wrap-Up Workflow]]" - - Keep brief summary if helpful for quick reference - -4. **Why this matters:** - - Prevents conflicting instructions (e.g., different git commit formats) - - Single source of truth for wrap-up process - - Easier to maintain and update in one place - - Reduces notes.org file size - -**Example cleanup:** - -Before: -#+begin_example -*** "Wrap it up" -When Craig says this: -1. Write session notes with these details... -2. Git commit with this format... -3. 
Say goodbye like this... -[20+ lines of detailed instructions] -#+end_example - -After: -#+begin_example -*** "Wrap it up" -Execute the wrap-up workflow defined in [[file:docs/workflows/wrap-it-up.org][wrap-it-up.org]]. - -Brief summary: Write session notes, git commit/push (no Claude attribution), valediction. -#+end_example - -This cleanup should only be done ONCE when first adopting this workflow. - -* The Workflow - -** Step 1: Write Session Notes to notes.org - -Add new entry to Session History section in =docs/notes.org= - -*** Format for Session Entry - -#+begin_example -** Session: [Month Day, Year] ([Time of Day if multiple sessions]) - -**Work Completed**: -- List of major tasks accomplished -- Key decisions made -- Documents created or modified -- Problems solved - -**Context for Next Session**: -- What should be worked on next -- Pending decisions or questions -- Deadlines or time-sensitive items -- Files or directories to review - -**Key Decisions Made** (if applicable): -- Important choices and rationale -- Strategic pivots -- New approaches or conventions established - -**Status**: -Brief summary of where things stand after this session. -#+end_example - -*** What to Include in Session Notes - -1. **Work Completed:** - - Major tasks accomplished during session - - Files created, modified, or deleted - - Research conducted - - Problems solved - - Documents drafted or reviewed - -2. **Key Decisions:** - - Important choices made and why - - Strategic direction changes - - Approach or methodology decisions - - What was decided NOT to do - -3. **Context for Next Session:** - - What should be worked on next - - Pending issues that need attention - - Blockers or dependencies - - Questions that need answering - - Deadlines approaching - -4. **New Conventions or Preferences:** - - Working style preferences discovered - - New file naming conventions - - Process improvements identified - - Tools or approaches that worked well - -5. 
**Critical Reminders:** - - Time-sensitive items that need attention soon - - If truly critical, ALSO add to Active Reminders section - - Follow-up actions for next session - -*** Where to Add Session Notes - -- Open =docs/notes.org= -- Navigate to "Session History" section (near end of file) -- Add new =** Session:= entry at the TOP of the session list (most recent first) -- Use today's date in format: "Month Day, Year" -- If multiple sessions same day, add time: "Month Day, Year (Morning)" or "(Afternoon)" - -*** Archive Old Sessions (Keep Last 5) - -After adding the current session notes, check if there are more than 5 sessions in the Session History section and move older ones to the archive: - -1. **Count sessions in Session History:** - - Count the number of =** Session:= entries in notes.org Session History section - - If 5 or fewer sessions exist, skip to next step (nothing to archive) - -2. **Identify sessions to archive:** - - Keep the 5 most recent sessions in notes.org - - Mark all older sessions (6th and beyond) for archiving - -3. **Move old sessions to archive:** - - Check if =docs/previous-session-history.org= exists: - - If it doesn't exist, create it with this template: - #+begin_example - #+TITLE: Previous Session History Archive - #+AUTHOR: Craig Jennings - - * Archived Sessions - - This file contains archived session history from notes.org. - Sessions are in reverse chronological order (most recent first). - #+end_example - - Open =docs/previous-session-history.org= - - Cut old session entries from notes.org (6th session and beyond) - - Paste them at the TOP of "Archived Sessions" section in previous-session-history.org - - Keep reverse chronological order (most recent archived session first) - - Save both files - -4. **When to skip this step:** - - If there are 5 or fewer sessions in Session History - - Just continue to next step - -This keeps notes.org focused on recent work while preserving full history. 
- -** Step 2: Git Commit and Push - -*** 2.1 Check Git Status - -Run these commands to understand what's changed: - -#+begin_src bash -git status -#+end_src - -#+begin_src bash -git diff -#+end_src - -Review the output to understand what files were modified during the session. - -*** 2.2 Stage All Changes - -#+begin_src bash -git add . -#+end_src - -This stages all modified, new, and deleted files. - -*** 2.3 Create Commit Message - -**IMPORTANT: Commit message requirements:** - -1. **Subject line** (first line): - - Start with lowercase word describing work: "session:", "fix:", "add:", "update:", "refactor:", etc. - - Brief description of session work (50-72 chars) - - Example: =session: Strategic pivot to attorney-led demand letter approach= - -2. **Body** (after blank line): - - 1-3 terse sentences describing what was accomplished - - NO conversational language - - NO Claude Code attribution - - NO "Co-Authored-By" line - - NO emoji or casual language - - Keep it professional and concise - -3. **Author:** - - Commit as Craig Jennings (NOT as Claude) - - Use git config user.name and user.email (already configured) - -**Example of CORRECT commit message:** - -#+begin_example -session: Documentation package prepared for attorney review - -Created comprehensive email and supporting documents for Jonathan Schultis. -Includes historical SOVs, contracts, key emails, and meeting notes showing -timeline of dispute and evidence of unapproved charges. -#+end_example - -**Example of INCORRECT commit message (DO NOT DO THIS):** - -#+begin_example -Session wrap-up! 🤖 - -Today we had a great session where we prepared documentation for the attorney. -Craig and I worked together to organize all the files. 
- -🤖 Generated with Claude Code -Co-Authored-By: Claude -#+end_example - -*** 2.4 Execute Commit - -Use heredoc format for proper multi-line commit message: - -#+begin_src bash -git commit -m "$(cat <<'EOF' -session: [Brief description] - -[1-3 terse sentences about work completed] -EOF -)" -#+end_src - -Replace bracketed sections with actual content. - -*** 2.5 Check Remote Repositories - -#+begin_src bash -git remote -v -#+end_src - -This shows all configured remote repositories. Note which ones exist (usually =origin=, sometimes others). - -*** 2.6 Push to All Remotes - -For each remote repository found in step 2.5: - -#+begin_src bash -git push [remote-name] -#+end_src - -Example if only =origin= exists: -#+begin_src bash -git push origin -#+end_src - -If multiple remotes exist, push to each: -#+begin_src bash -git push origin && git push backup -#+end_src - -*** 2.7 Verify Clean State - -After pushing, verify the working tree is clean: - -#+begin_src bash -git status -#+end_src - -Should show: "working tree clean" and indicate branch is up to date with remote. - -** Step 3: Delete Session Context File - -The session context file must be deleted at the end of every successful wrap-up. - -*** Why This Is Critical - -The existence of =docs/session-context.org= indicates an interrupted session. If you don't delete it: -- Next session will think the previous session crashed -- Context recovery will be triggered unnecessarily -- Creates confusion about session state - -*** How to Delete - -#+begin_src bash -rm docs/session-context.org -#+end_src - -*** When to Skip - -NEVER skip this step. If the file doesn't exist, that's fine (the rm command will just report no file). But if it exists, it MUST be deleted. 
- -** Step 4: Verify Clean Git State - -*** CRITICAL: Git Status Must Be Completely Clean - -Before proceeding to valediction, =git status= MUST show: -- "nothing to commit, working tree clean" -- Branch is up to date with remote(s) - -*** If Git Status Is NOT Clean - -1. **Untracked files exist:** Ask Craig - "There are untracked files. Should I add and commit them, or leave them untracked?" -2. **Uncommitted changes exist:** Ask Craig - "There are uncommitted changes. Should I commit them now?" -3. **Unpushed commits exist:** Push them (this shouldn't require asking) -4. **Any other unclear state:** Ask Craig before proceeding - -*** The Rule - -Unless Craig explicitly says "leave git dirty" or "don't worry about that file," the working tree must be clean at session end. When in doubt, ASK. - -** Step 5: Valediction - -Provide a brief, warm goodbye message to Craig. - -*** Format for Valediction - -Include these elements (but keep it concise - 3-4 sentences total): - -1. **Acknowledge the session:** - - What was accomplished - - Key milestone or achievement - -2. **Confirm readiness for next session:** - - What's ready to go - - What should be worked on next - -3. **Surface important reminders:** - - Critical deadlines approaching - - Time-sensitive actions needed - -4. **Warm closing:** - - Brief personal touch - - Looking forward to next session - -*** Example Valediction - -#+begin_example -That's a wrap! We've prepared a comprehensive documentation package for -Jonathan with all the key evidence showing the $38,060 in unapproved charges. -The package is ready in jonathan-email-and-docs/ for you to review and send -this weekend. - -Remember: Axelrad inspection must happen before drywall closes (urgent), and -the $5,000 retainer check should go out when the representation agreement -arrives. - -Everything's committed and pushed. Looking forward to seeing how Jonathan's -demand letter strategy works out. Have a good evening! 
-#+end_example - -*** Tone Guidelines - -- **Warm but professional** - friendly without being overly casual -- **Concise** - 3-4 sentences, not a long summary -- **Forward-looking** - what's ready for next time -- **Acknowledging** - recognize the work accomplished -- **Helpful** - surface 1-2 critical reminders if they exist - -Do NOT: -- Write a long detailed summary (that's in Session History) -- Use emoji or overly casual language -- Be generic ("Good session, see you next time") -- Miss surfacing critical deadlines or reminders - -* Tips for Effective Wrap-Ups - -** Session Notes - -1. **Write notes in Craig's voice** - He's the one who will read them later -2. **Be specific** - "Created docs/MARK-MEETING-REFERENCE-GUIDE.org" not "created some docs" -3. **Include file paths** - Makes it easy to find referenced files -4. **Note decisions and rationale** - Not just what was done, but why -5. **Flag time-sensitive items clearly** - Use "URGENT" or "CRITICAL" when appropriate - -** Git Commits - -1. **Review the diff before committing** - Make sure you understand what changed -2. **One commit per session** - Don't create multiple commits during wrap-up -3. **Keep subject line under 72 characters** - Follows git best practices -4. **Be descriptive but concise** - Balance brevity with clarity -5. **NEVER include Claude attribution** - Commits are from Craig only - -** Valediction - -1. **Read the room** - If it was a tough/frustrating session, acknowledge that -2. **Surface critical reminders** - Don't let important deadlines slip -3. **Be genuine** - Sincerity matters more than polish -4. **Keep it brief** - Respect Craig's time at end of session -5. **End on positive note** - Even if session was challenging - -* Common Mistakes to Avoid - -1. **Forgetting to push to all remotes** - Check =git remote -v= and push to each -2. **Including Claude attribution in commits** - Commits should be from Craig only -3. 
**Writing session notes in Claude's voice** - Write as if Craig is documenting -4. **Generic valediction** - Be specific about what was accomplished -5. **Skipping context for next session** - Future you/Claude needs to know what's next -6. **Not surfacing critical reminders** - Time-sensitive items must be highlighted -7. **Writing commit message like a conversation** - Keep it professional and terse -8. **Forgetting to verify clean working tree** - Always confirm =git status= shows clean -9. **Making session notes too brief** - Better too much detail than too little -10. **Not checking if files were actually modified** - Run =git status= first -11. **Leaving session-context.org behind** - MUST be deleted at end of every wrap-up -12. **Leaving dirty git state without explicit approval** - Ask Craig if anything is unclear - -* Validation Checklist - -Before considering wrap-up complete, verify: - -- [ ] Session History entry added to notes.org with today's date -- [ ] Work completed is clearly documented -- [ ] Context for next session is captured -- [ ] Key decisions and rationale are recorded -- [ ] Sessions beyond the 5 most recent moved to previous-session-history.org (if any) -- [ ] =git status= showed files to commit (if nothing changed, skip git steps) -- [ ] =git add .= executed to stage all changes -- [ ] Commit message follows required format (no Claude attribution) -- [ ] Commit executed successfully -- [ ] Pushed to ALL remote repositories (checked with =git remote -v=) -- [ ] =git status= shows "nothing to commit, working tree clean" (CRITICAL - ask if unclear) -- [ ] Push confirmed successful to all remotes -- [ ] =docs/session-context.org= DELETED (CRITICAL - file must not exist after wrap-up) -- [ ] Valediction provided with accomplishments, readiness, reminders -- [ ] Critical reminders surfaced if they exist -- [ ] Warm, professional closing provided - -* Meta Notes - -This workflow should evolve based on experience: - -- If wrap-ups consistently 
miss something important, update the checklist -- If commit message format causes problems, revise the requirements -- If session notes format isn't working, adjust the template -- Craig can modify this workflow at any time to better fit needs - -The goal is clean handoffs between sessions, not rigid adherence to format.