<feed xmlns='http://www.w3.org/2005/Atom'>
<title>rulesets/Makefile, branch main</title>
<subtitle>Claude Code skills, rules, and language bundles
</subtitle>
<id>https://git.cjennings.net/rulesets/atom?h=main</id>
<link rel='self' href='https://git.cjennings.net/rulesets/atom?h=main'/>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/'/>
<updated>2026-04-19T23:25:11+00:00</updated>
<entry>
<title>skills: add create-v2mom; extend add-tests with refactor-for-testability</title>
<updated>2026-04-19T23:25:11+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T23:25:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=c90683ed477c891e54034de595c97f149c420c17'/>
<id>urn:sha1:c90683ed477c891e54034de595c97f149c420c17</id>
<content type='text'>
New standalone create-v2mom skill, converted from the homelab workflow
template: markdown + YAML frontmatter, with context-hygiene references
removed in favor of the global session-context protocol.

add-tests/SKILL.md gains a 'Core Principle — Refactor for Testability
First' section and three inserts into the phase instructions:
- Phase 1 flags testability-blocked functions during inventory
- Phase 2 surfaces refactor-first candidates per function
- Phase 3 adds a test-failure-vs-production-bug triage step

Sourced from the retired refactor.org homelab workflow (which was a
TDD-for-testability guide, not a general refactoring guide — general
refactoring is already covered by the /refactor slash command).
</content>
</entry>
<entry>
<title>feat(hooks): add global hooks — PreCompact priorities + git/gh confirm modals</title>
<updated>2026-04-19T22:06:10+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T22:06:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=4957c60c9ee985628ad59344e593d20a18ca8fdb'/>
<id>urn:sha1:4957c60c9ee985628ad59344e593d20a18ca8fdb</id>
<content type='text'>
Three new machine-wide hooks installed via `make install-hooks`:

- `precompact-priorities.sh` (PreCompact) — injects a priority block into
  the compaction prompt so the generated summary retains information most
  expensive to reconstruct: unanswered questions, root causes with
  file:line, subagent findings as primary evidence, exact numbers/IDs,
  A-vs-B decisions, open TODOs, classified-data handling.

- `git-commit-confirm.py` (PreToolUse/Bash) — gates `git commit` behind a
  confirmation modal showing parsed message, staged files, diff stats,
  author. Parses both HEREDOC and `-m`/`--message` forms.

- `gh-pr-create-confirm.py` (PreToolUse/Bash) — gates `gh pr create`
  behind a modal showing title, base ← head, reviewers, labels,
  assignees, milestone, draft flag, body (HEREDOC or quoted).
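The quoted-message parsing the commit-confirm hook performs can be
sketched like this (a simplified illustration, not the hook's actual
code: the HEREDOC form the hook also handles is omitted, and the
function name is hypothetical):

```python
import shlex

def extract_commit_message(command):
    """Return the message from a `git commit` shell command, or None.

    Handles `-m msg`, `--message msg`, and `--message=msg`. The real
    git-commit-confirm.py also parses the HEREDOC form; this sketch
    deliberately does not.
    """
    tokens = shlex.split(command)
    for i, tok in enumerate(tokens):
        if tok in ("-m", "--message") and len(tokens) > i + 1:
            return tokens[i + 1]
        if tok.startswith("--message="):
            return tok.split("=", 1)[1]
    return None
```

A PreToolUse hook would typically read the Bash tool's command from its
stdin JSON, run a parser like this, and surface the result in the
confirmation modal.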

Makefile: adds `install-hooks` / `uninstall-hooks` targets and extends
`list` with a Hooks section. Install prints the settings.json snippet
(in `hooks/settings-snippet.json`) to merge into `~/.claude/settings.json`.

Also: `languages/elisp/claude/hooks/validate-el.sh` now emits JSON with
`hookSpecificOutput.additionalContext` on failure (via new `fail_json()`
helper) so Claude sees a structured error in context, in addition to
the existing stderr output and exit 2.
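The JSON shape involved can be rendered in Python roughly as follows.
Only the `hookSpecificOutput.additionalContext` field path comes from
this commit; the surrounding structure is an assumption to make the
sketch concrete, and the shell helper itself is not reproduced here.

```python
import json
import sys

def build_failure_payload(message):
    """Structured failure context for Claude; field path per the commit text."""
    return {"hookSpecificOutput": {"additionalContext": message}}

def fail_json(message):
    """Mirror of the shell fail_json(): JSON on stdout, message on stderr, exit 2."""
    print(json.dumps(build_failure_payload(message)))
    print(message, file=sys.stderr)
    sys.exit(2)
```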

Patterns synthesized clean-room from fcakyon/claude-codex-settings
(Apache-2.0). Each hook is original content.
</content>
</entry>
<entry>
<title>feat: add finish-branch skill (clean-room synthesis from obra/superpowers pattern)</title>
<updated>2026-04-19T21:41:58+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T21:41:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=8127307278160a2a7a744169180d2eea7c3bf731'/>
<id>urn:sha1:8127307278160a2a7a744169180d2eea7c3bf731</id>
<content type='text'>
Clean-room synthesis of the 'finishing a development branch' pattern from
obra/superpowers (MIT) — adopted for the forced-choice workflow scaffold,
not the substantive rules, which defer to existing ones:
- Verification → verification.md
- Commit conventions + no AI attribution → commits.md
- Review discipline → review-code

Core patterns implemented:
- Phase 1 verify-before-options (hard-stop on failing tests)
- Phase 2 base branch + commit range + worktree detection
- Phase 3 forced-choice menu (exactly 4 options, no editorializing, stop+wait)
- Phase 4 execution per-option with:
  - Option 1 (merge locally): re-verify after merge, delete branch, prompt
    about remote-branch cleanup separately
  - Option 2 (push + PR): gh pr create with inline template (no AI
    attribution in the body); do NOT remove worktree
  - Option 3 (keep): no git state changes; preserve worktree
  - Option 4 (discard): typed-word "discard" confirmation gate required;
    lists what will be permanently lost; force-delete + remote cleanup
- Phase 5 worktree cleanup matrix (cleanup for 1 and 4; preserve for 2 and 3)

Notable improvements over the upstream superpowers skill:
- Explicit delegation to verification.md / commits.md / review-code rather
  than re-teaching those standards inline
- Cross-references to /review-code (pre) and /arch-evaluate (if architectural)
- Handles remote-branch cleanup question separately from local branch
  (upstream conflates them)
- "Common Mistakes" section names the specific failure modes this skill
  prevents (open-ended "what now", accidental deletes, merge-then-oops,
  worktree amnesia, trailing AI attribution in PRs)

Rubric coverage vs upstream: M (verify → options → execute → cleanup);
M (forced-choice menu pattern); M (typed-discard confirmation gate);
M (worktree cleanup matrix); M (hard-stop on failing tests);
+ (explicit deferral to existing rules vs upstream's inline rules);
+ (remote-branch cleanup as separate prompt); + (skill integration notes
for /review-code and /arch-evaluate); no dropped capabilities.

Makefile SKILLS extended; make install symlinks globally at
~/.claude/skills/finish-branch.
</content>
</entry>
<entry>
<title>refactor: review-pr → review-code with superpowers + plugin-lifted improvements</title>
<updated>2026-04-19T21:28:03+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T21:28:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=e35fe600ef9ec3bf2facae67608b0a8bf0298ed9'/>
<id>urn:sha1:e35fe600ef9ec3bf2facae67608b0a8bf0298ed9</id>
<content type='text'>
Renamed review-pr → review-code (the skill accepts PR, SHA range,
current branch, staged changes — "pr" was understating scope).
Rewrote SKILL.md with YAML frontmatter (previously header-style) and
merged useful patterns from two sources:

From obra/superpowers skills/requesting-code-review:
  - Intent-vs-delivery grading (given plan/ADR/ticket)
  - Mandatory Strengths section (three minimum)
  - Per-issue Critical/Important/Minor severity (per-criterion
    PASS/WARN/FAIL retained; complementary axes)
  - Required verdict + 1-2 sentence reasoning
  - Multi-input support (PR / SHA range / current branch / --staged)
  - Sub-agent dispatch recommendation for heavy reviews
  - Concrete filled-in example output

From the claude-plugins-official code-review plugin:
  - Phase 0 eligibility gate (skip closed/draft/auto/trivial/already-reviewed)
  - CLAUDE.md traversal + adherence criterion (reads root + per-directory
    CLAUDE.md files; audits the diff against stated rules)
  - Multi-perspective Phase 2: five passes (CLAUDE.md adherence, shallow
    bug scan, git history context, prior PR comments, in-scope code
    comments). For large reviews, dispatch as parallel sub-agents.
  - Confidence filter (High/Medium/Low; drop Low before reporting)
  - False-positive categories explicitly enumerated (pre-existing issues
    on unmodified lines, lint/typecheck issues CI handles,
    senior-wouldn't-call-out nitpicks, silenced issues with valid reason,
    intentional scope changes, unmodified-line issues, framework-behavior
    tests)
  - Trust-CI discipline (don't run builds yourself)

Substance from the original review-pr kept verbatim:
  - DeepSat-specific criteria (security, TDD evidence, conventions,
    no-AI-attribution, API contracts, architecture layering, root-cause
    discipline)

Size: 60 lines → 347 lines. The growth is structural (added phases,
example, perspectives, filters), not padding — each section earns its
lines.

NOT adopted from the plugin:
  - GitHub comment output format (plugin posts PR comments; review-code
    outputs a markdown report the user can paste if they want)
  - "Generated with Claude Code" footer (violates no-AI-attribution rule)
  - Specific 0/25/50/75/100 confidence scale (Critical/Important/Minor
    covers the same signal with less ceremony)

Makefile SKILLS updated: review-pr → review-code. Old
~/.claude/skills/review-pr symlink removed; make install creates the
new one at ~/.claude/skills/review-code.
</content>
</entry>
<entry>
<title>feat: adopt pairwise-tests (PICT combinatorial) + cross-reference from existing testing skills</title>
<updated>2026-04-19T21:12:02+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T21:12:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=b11cfd66b185a253fecf10ad06080ae165f32a74'/>
<id>urn:sha1:b11cfd66b185a253fecf10ad06080ae165f32a74</id>
<content type='text'>
Forked verbatim from omkamal/pypict-claude-skill (MIT). LICENSE preserved.
Renamed from `pict-test-designer` to `pairwise-tests` — technique-first
naming so users invoking "pairwise" or "combinatorial" find it; PICT
remains the tool under the hood.

Bundle (skill-runtime only):
  pairwise-tests/SKILL.md                    (renamed, description rewritten)
  pairwise-tests/LICENSE                     (MIT, preserved)
  pairwise-tests/references/pict_syntax.md
  pairwise-tests/references/examples.md
  pairwise-tests/scripts/pict_helper.py      (Python CLI for model gen / output fmt)
  pairwise-tests/scripts/README.md

Upstream's repo-level docs (README, QUICKSTART, CONTRIBUTING, etc.) and
`examples/` dir (ATM + gearbox walkthroughs — useful as reading, not as
skill-runtime) omitted from the fork. Attribution footer added.

Cross-references so /add-tests naturally routes to /pairwise-tests when
warranted:

- add-tests/SKILL.md Phase 2 step 8: if a function in scope has 3+ parameters
  each taking multiple values, surface `/pairwise-tests` to the user before
  proposing normal category coverage. Default continues with /add-tests;
  user picks pairwise explicitly.
- claude-rules/testing.md: new "Combinatorial Coverage" section after the
  Normal/Boundary/Error categories. Explains when pairwise wins, when to
  skip (regulated / provably exhaustive contexts, ≤2 parameters, non-
  parametric testing), and points at /pairwise-tests.
- languages/python/claude/rules/python-testing.md: new "Pairwise /
  Combinatorial for Parameter-Heavy Functions" subsection under the
  parametrize guidance. Explains the pytest workflow: /pairwise-tests
  generates the matrix, paste into pytest parametrize block, or use
  pypict helper directly.
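To make the payoff concrete, here is a small self-check over a
hypothetical parameter set (the domains and case table below are
invented for illustration; in practice PICT generates the covering
rows):

```python
from itertools import combinations, product

# Hypothetical domains: three parameters, two values each (8-row full product).
PARAMS = [("chrome", "firefox"), ("linux", "mac"), ("tls1.2", "tls1.3")]

# Four rows suffice to cover every value pair -- half the full product.
PAIRWISE_CASES = [
    ("chrome",  "linux", "tls1.2"),
    ("chrome",  "mac",   "tls1.3"),
    ("firefox", "linux", "tls1.3"),
    ("firefox", "mac",   "tls1.2"),
]

def uncovered_pairs(cases, params):
    """Value pairs (per column pair) that the case table fails to exercise."""
    missing = set()
    for i, j in combinations(range(len(params)), 2):
        wanted = set(product(params[i], params[j]))
        seen = {(row[i], row[j]) for row in cases}
        missing |= {(i, j) + pair for pair in wanted - seen}
    return missing
```

A verified table pastes straight into a
`@pytest.mark.parametrize("browser,os,tls", PAIRWISE_CASES)` block.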

Mechanism note: the cross-references are judgment-based — Claude reads the
nudges in add-tests/testing/python-testing and acts on them when
appropriate; there is no automatic dispatch. Craig can still invoke
/pairwise-tests directly when he already knows he wants combinatorial
coverage.

Makefile SKILLS extended; make install symlinks /pairwise-tests globally.
</content>
</entry>
<entry>
<title>feat: adopt frontend-design (Apache 2.0 fork) + progressive-disclosure extensions</title>
<updated>2026-04-19T20:57:50+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T20:57:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=a8deb6af6a14bc5e56e86289a2858a0258558388'/>
<id>urn:sha1:a8deb6af6a14bc5e56e86289a2858a0258558388</id>
<content type='text'>
Forked verbatim from anthropics/skills/skills/frontend-design (Apache 2.0).
LICENSE.txt preserved. Upstream SKILL.md prose (aesthetic guidance,
archetype list, anti-pattern callouts) kept intact.

Extensions added (clearly marked, load progressively — base SKILL.md
stays lean for simple cases):

SKILL.md:
  - Description extended with explicit negative triggers: narrow
    maintenance (single CSS bug, dependency upgrade, a11y-only retrofit),
    operational contexts where stakeholder has specified "minimal,
    functional, no creative direction," backend / API work, non-web UIs
    (mobile native, desktop, terminal), and refactoring without visible
    design component.
  - New "Workflow" section at the end of SKILL.md: four phases (intake,
    commitment, build, review) with pointers to reference files. Simple
    component tweaks skip the workflow; non-trivial redesigns walk it.
  - New "References" section: table mapping file → load-when condition.
  - Attribution footer marking upstream source + what's locally added.

references/workflow.md (~150 lines)
  Intake questions (purpose, audience, operational context, functional
  priority, technical constraints, brand references, success criteria).
  Commitment step (archetype pick, trade-offs, font pairing, palette,
  motion, layout as one-line decisions). Build reminders. Review
  pointer. Guidance on when to skip phases.

references/accessibility.md (~200 lines)
  WCAG AA contrast thresholds + practical check guidance. Keyboard
  navigation + focus management. Semantic HTML + ARIA rules. Reduced-
  motion CSS snippet. Smoke checklist. Operational-context note for
  defense / ISR work.

references/responsive.md (~160 lines)
  Mobile-first vs desktop-first decision. Named breakpoints (Tailwind-
  style) vs magic pixels. Container queries. Aesthetic translation
  table — how each archetype handles small-screen scaling. Responsive
  typography with clamp(). Operational-dashboard note: desktop-primary
  is a legitimate product decision.

references/design-review.md (~170 lines)
  Archetype check (does the build read as what was committed to?).
  Anti-pattern grep for fonts, palette, layout, motion, backgrounds,
  components. Code-quality-match check (ornate design + lazy code =
  failure). Performance sanity. Convergence check (if last 3 builds
  all used the same archetype, break the pattern). The one-sentence
  test for memorability.

references/rationale-template.md (~160 lines)
  Template for design-rationale.md alongside the build. Nine sections
  (purpose, archetype, locked decisions, deliberately absent,
  accessibility, responsive, implementation, open questions,
  references). Filled example using a DeepSat SOCOM demo landing page
  to show density and specificity.

Structure matches Anthropic's own pdf / docx / webapp-testing pattern
(SKILL.md entry + references/ for progressive disclosure). Makefile
SKILLS extended; make install symlinks globally.

Adoption caveat resolved: name kept as `frontend-design` (not renamed
to ui-design) — "frontend" signals scope (web code, not mobile /
desktop / terminal UIs), and upstream parity is preserved for attribution.
</content>
</entry>
<entry>
<title>fix(deps): use uv tool install for playwright-py; gitignore node_modules</title>
<updated>2026-04-19T20:30:20+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T20:30:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=3e4cea6709edd16a51d513dd96da91e5aad0be66'/>
<id>urn:sha1:3e4cea6709edd16a51d513dd96da91e5aad0be66</id>
<content type='text'>
Two fixes rolled up:

1. Add .gitignore with **/node_modules/, package-lock.json, Python venv /
   cache artifacts, and OS metadata. A prior `make deps` run produced a
   603-file playwright-js/node_modules tree that should never be tracked.

2. Makefile deps target: install playwright-py via `uv tool install
   playwright` instead of `pip install --system`. Earlier attempts with
   pip --user, pip --system, and uv pip --system all failed on externally-
   managed Python (PEP 668 on Arch). `uv tool install` creates an isolated
   venv for the CLI, avoiding the conflict. Chromium browsers are shared
   with the JS side via ~/.cache/ms-playwright — no re-download.

   Also added uv itself to the deps target (was missing).

   Library import (`import playwright`) still requires a per-project venv,
   which is the right pattern on externally-managed systems. The deps
   output mentions this explicitly.
</content>
</entry>
<entry>
<title>refactor(playwright): split into playwright-js + playwright-py variants</title>
<updated>2026-04-19T20:24:51+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T20:24:51+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=4ffa7417a359ef4eae09f61d7da4de06539462ca'/>
<id>urn:sha1:4ffa7417a359ef4eae09f61d7da4de06539462ca</id>
<content type='text'>
Rename `playwright-skill/` → `playwright-js/` and add `playwright-py/`
as a verbatim fork of Anthropic's official `webapp-testing` skill
(Apache-2.0). Cross-pollinate: each skill gains patterns and helpers
inspired by the other's strengths, with upstream semantics preserved.

## playwright-js (JS/TS stack)

Renamed from playwright-skill; upstream lackeyjb MIT content untouched.
New sections added (clearly marked, preserving upstream semantics):

- Static HTML vs Dynamic Webapp decision tree (core Anthropic methodology)
- Reconnaissance-Then-Action pattern (navigate → networkidle → inspect → act)
- Console Log Capture snippet (page.on console/pageerror/requestfailed)

Description updated to clarify JS/TS stack fit (React/Next/Vue/Svelte/Node)
and reference `/playwright-py` as the Python sibling.

## playwright-py (Python stack)

Verbatim fork of anthropics/skills/skills/webapp-testing; upstream SKILL.md
and bundled `scripts/with_server.py` + examples kept intact. New scripts
and examples added (all lackeyjb-style conveniences in Python):

Scripts:
  scripts/detect_dev_servers.py   Probe common localhost ports for HTTP
                                  servers; outputs JSON of found services.
  scripts/safe_actions.py         safe_click, safe_type (retry-wrapped),
                                  handle_cookie_banner (common selectors),
                                  build_context_with_headers (env-var-
                                  driven: PW_HEADER_NAME / PW_HEADER_VALUE /
                                  PW_EXTRA_HEADERS='{…json…}').

Examples:
  examples/login_flow.py          Login form + wait_for_url.
  examples/broken_links.py        Scan visible external hrefs via HEAD.
  examples/responsive_sweep.py    Multi-viewport screenshots to /tmp.
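The port-probing idea behind detect_dev_servers.py can be sketched as
follows (the port list and function names here are illustrative, not
the script's actual contents; the real script additionally checks for
HTTP and reports JSON):

```python
import socket

COMMON_DEV_PORTS = (3000, 5173, 8000, 8080)  # assumed typical dev-server ports

def port_open(host, port, timeout=0.25):
    """True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def detect_dev_servers(host="127.0.0.1", ports=COMMON_DEV_PORTS):
    """Return the subset of ports with a listener (Playwright target candidates)."""
    return [p for p in ports if port_open(host, p)]
```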

SKILL.md gains 5 "Added:" sections documenting the new scripts, retry
helpers, env-header injection, and /tmp script discipline. Attribution
notes explicitly mark upstream vs local additions.

## Makefile

SKILLS: playwright-skill → playwright-js + playwright-py
deps target: extended Playwright step to install Python package +
  Chromium via `python3 -m pip install --user playwright &amp;&amp; python3 -m
  playwright install chromium` when playwright-py/ is present. Idempotent
  (detected via `python3 -c "import playwright"`).

## Usage

Both skills symlinked globally via `make install`. Invoke whichever
matches the project stack — cross-references in descriptions route you
to the right one. Run `make deps` once to install both runtimes.
</content>
</entry>
<entry>
<title>feat: adopt lackeyjb/playwright-skill (MIT verbatim fork) + deps target</title>
<updated>2026-04-19T20:16:46+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T20:16:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=11f5f003eef12bff9633ca8190e3c43c7dab6708'/>
<id>urn:sha1:11f5f003eef12bff9633ca8190e3c43c7dab6708</id>
<content type='text'>
Browser automation + UI testing skill forked verbatim from
github.com/lackeyjb/playwright-skill (MIT, 2458 stars, active through
Dec 2025). LICENSE preserved in the skill dir; an attribution footer was
added to SKILL.md.

Bundle contents (from upstream):
  playwright-skill/SKILL.md
  playwright-skill/API_REFERENCE.md
  playwright-skill/run.js       (universal executor with module resolution)
  playwright-skill/package.json
  playwright-skill/lib/helpers.js (detectDevServers, safeClick, safeType,
                                   takeScreenshot, handleCookieBanner,
                                   extractTableData, createContext with
                                   env-driven header injection)
  playwright-skill/LICENSE      (MIT, lackeyjb)

Makefile updates:
  - SKILLS extended with playwright-skill; make install symlinks it
    globally into ~/.claude/skills/
  - deps target extended to check node + npm, and to run the skill's
    own `npm run setup` (installs Playwright + Chromium ~300 MB on
    first run). Idempotent: skipped if node_modules/playwright
    already exists.

Stack fit: JavaScript Playwright aligns with Craig's TypeScript/React
frontend work. Python-side (Django) browser tests would be better served
by Anthropic's official webapp-testing skill (Python Playwright bindings),
noted in the evaluation memory but not adopted here — minimal overlap,
easy to add later if the need arises.
</content>
</entry>
<entry>
<title>feat: clean-room synthesis — prompt-engineering skill</title>
<updated>2026-04-19T20:03:27+00:00</updated>
<author>
<name>Craig Jennings</name>
<email>c@cjennings.net</email>
</author>
<published>2026-04-19T20:03:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.cjennings.net/rulesets/commit/?id=b3247d0b1aaf73cae6068e42e3df26b256d9008e'/>
<id>urn:sha1:b3247d0b1aaf73cae6068e42e3df26b256d9008e</id>
<content type='text'>
Distilled from the NeoLab customaize-agent:prompt-engineering rubric
(GPL-3.0 source; clean-room, no prose reused). The ~17 KB NeoLab version
was trimmed to a tighter ~430 lines focused on what's genuinely non-obvious:

- Four prompt-type classification (discipline-enforcing / guidance /
  collaborative / reference), with explanations for each so the user
  knows what they're picking. Used in both design and critique modes.
- Seven persuasion principles (Meincke et al. 2025, N≈28,000), with
  by-type matrix. Notably flags Liking as actively harmful for
  collaborative prompts (breeds sycophancy in reviews/critiques).
- Degrees-of-freedom axis (high/medium/low) matched to task fragility.
- Context-window-as-shared-resource framing.
- Brief reference only for classical techniques (few-shot, CoT, system
  prompts, templates) — widely documented elsewhere, not re-taught.
- Explicit ethics test for persuasion use.
- Design-mode vs critique-mode workflows.
- Anti-patterns list covering sycophancy-by-default, hedging-on-
  discipline-prompts, authority-stack-on-guidance, high-freedom-on-
  fragile-tasks.

Landscape: no prompt-engineering skill exists in Anthropic's official
repo, wshobson/agents, or the major community skill collections. Real gap.

Makefile SKILLS extended; global symlink installed.
</content>
</entry>
</feed>
