#+TITLE: Rulesets — Open Work
#+AUTHOR: Craig Jennings
#+DATE: 2026-04-19
Tracking TODOs for the rulesets repo that span more than one commit.
Project-scoped (not the global =~/sync/org/roam/inbox.org= list).
* TODO [#A] Build =/update-skills= skill for keeping forks in sync with upstream
The rulesets repo has a growing set of forks (=arch-decide= from
wshobson/agents, =playwright-js= from lackeyjb/playwright-skill, =playwright-py=
from anthropics/skills/webapp-testing). Over time, upstream releases fixes,
new templates, or scope expansions that we'd want to pull in without losing
our local modifications. A skill should handle this deliberately rather than
by manual re-cloning.
** Design decisions (agreed)
- *Upstream tracking:* per-fork manifest =.skill-upstream= (YAML or JSON):
- =url= (GitHub URL)
- =ref= (branch or tag)
- =subpath= (path inside the upstream repo when it's a monorepo)
- =last_synced_commit= (updated on successful sync)
- *Local modifications:* 3-way merge. Requires a pristine baseline snapshot of
the upstream-at-time-of-fork. Store under =.skill-upstream/baseline/= or
similar; committed to the rulesets repo so the merge base is reproducible.
- *Apply changes:* skill edits files directly with per-file confirmation.
- *Conflict policy:* per-hunk prompt inside the skill. When a 3-way merge
produces a conflict, the skill walks each conflicting hunk and asks Craig:
keep-local / take-upstream / both / skip. Editor-independent; works on
machines where Emacs isn't available. Fallback when baseline is missing
or corrupt (can't run 3-way merge): write =.local=, =.upstream=,
=.baseline= files side-by-side and surface as manual review.
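Concretely, the per-fork manifest sketched above might look like this
(YAML assumed; values are illustrative and the commit is a placeholder):
#+begin_src yaml
# .skill-upstream for the playwright-py fork (illustrative values)
url: https://github.com/anthropics/skills      # upstream GitHub URL
ref: main                                      # branch or tag to track
subpath: skills/webapp-testing                 # path inside the monorepo
last_synced_commit: <commit-sha-at-last-sync>  # updated on successful sync
#+end_src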
** V1 Scope
- [ ] Skill at =~/code/rulesets/update-skills/=
- [ ] Discovery: scan sibling skill dirs for =.skill-upstream= manifests
- [ ] Helper script (bash or python) to:
- Clone each upstream at =ref= shallowly into =/tmp/=
- Compare current skill state vs latest upstream vs stored baseline
- Classify each file: =unchanged= / =upstream-only= / =local-only= / =both-changed=
- For =both-changed=: run =git merge-file --stdout <local> <baseline> <upstream>=;
if clean, write result directly; if conflicts, parse the conflict-marker
output and feed each hunk into the per-hunk prompt loop
- [ ] Per-hunk prompt loop:
- Show base / local / upstream side-by-side for each conflicting hunk
- Ask: keep-local / take-upstream / both (concatenate) / skip (leave marker)
- Assemble resolved hunks into the final file content
- [ ] Per-fork summary output with file-level classification table
- [ ] Per-file confirmation flow (yes / no / show-diff) BEFORE per-hunk loop
- [ ] On successful sync: update =last_synced_commit= in the manifest
- [ ] =--dry-run= to preview without writing
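The classify-then-merge flow above can be sketched end to end. This is a
minimal sketch: the file contents and the =classify= helper are invented
for illustration; the =git merge-file= call uses the local/baseline/upstream
argument order given above.
#+begin_src bash
#!/usr/bin/env bash
set -euo pipefail

work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

# Three versions of one skill file: merge base, our fork, latest upstream.
printf 'step one\nstep two\n' > "$work/baseline"
printf 'step one (ours)\nstep two\n' > "$work/local"      # local edit, line 1
printf 'step one\nstep two (theirs)\n' > "$work/upstream" # upstream edit, line 2

classify() {  # classify <local> <baseline> <upstream>
  local l=$1 b=$2 u=$3
  if cmp -s "$l" "$b" && cmp -s "$u" "$b"; then echo unchanged
  elif cmp -s "$l" "$b"; then echo upstream-only
  elif cmp -s "$u" "$b"; then echo local-only
  else echo both-changed
  fi
}

state=$(classify "$work/local" "$work/baseline" "$work/upstream")
echo "state: $state"

if [ "$state" = both-changed ]; then
  # Exit status 0 means a clean merge; nonzero means conflict markers.
  if git merge-file --stdout "$work/local" "$work/baseline" "$work/upstream" \
       > "$work/merged"; then
    echo "merged cleanly:"
    cat "$work/merged"
  else
    echo "conflicts; would enter the per-hunk prompt loop"
  fi
fi
#+end_src
Because the two edits touch different lines, this example merges cleanly
and prints the combined file; overlapping edits would take the conflict
branch instead.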
** V2+ (deferred)
- [ ] Track upstream *releases* (tags) not just branches, so skill can propose
"upgrade from v1.2 to v1.3" with release notes pulled in
- [ ] Generate patch files as an alternative apply method (for users who prefer
=git apply= / =patch= over in-place edits)
- [ ] Non-interactive mode (=--non-interactive= / CI): skip conflict resolution,
emit side-by-side files for later manual review
- [ ] Auto-run on a schedule via Claude Code background agent
- [ ] Summary of aggregate upstream activity across all forks (which forks have
upstream changes waiting, which don't)
- [ ] Optional editor integration: on machines with Emacs, offer
=M-x smerge-ediff= as an alternate path for users who prefer ediff over
per-hunk prompts
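The manual-review fallback (used both when the baseline is missing and in
non-interactive mode) could write its side-by-side files like this. A
sketch only: the suffix names and the helper are assumptions, not a final
interface.
#+begin_src bash
#!/usr/bin/env bash
set -euo pipefail

# No resolution attempted: just three suffixed copies next to the skill
# file, left for a human to reconcile later.
emit_review_files() {  # emit_review_files <local-file> <baseline> <upstream>
  local file=$1 baseline=$2 upstream=$3
  cp "$file"     "$file.local"
  cp "$baseline" "$file.baseline"
  cp "$upstream" "$file.upstream"
  echo "manual review: $file"
}

d=$(mktemp -d); trap 'rm -rf "$d"' EXIT
printf 'ours\n'   > "$d/SKILL.md"
printf 'base\n'   > "$d/base-snapshot.md"
printf 'theirs\n' > "$d/upstream-snapshot.md"
emit_review_files "$d/SKILL.md" "$d/base-snapshot.md" "$d/upstream-snapshot.md"
#+end_src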
** Initial forks to enumerate (for manifest bootstrap)
- [ ] =arch-decide= → =wshobson/agents= :: =plugins/documentation-generation/skills/architecture-decision-records= :: MIT
- [ ] =playwright-js= → =lackeyjb/playwright-skill= :: =skills/playwright-skill= :: MIT
- [ ] =playwright-py= → =anthropics/skills= :: =skills/webapp-testing= :: Apache-2.0
** Open questions
- [ ] What happens when upstream *renames* a file we fork? Without rename
  detection, the skill sees the old path as "gone from upstream, still
  present locally" and the new path as =upstream-only=. Drop, keep, or prompt?
- [ ] What happens when upstream restructures wholesale (e.g., a plugin
  reshuffles its layout, splitting one skill across several paths)?
  Probably out of scope for v1; manual migration.
- [ ] Rate-limit / offline mode: if GitHub is unreachable, should skill fail
or degrade gracefully? Likely degrade; print warning per fork.
* TODO [#C] Try Skill Seekers on a real DeepSat docs-briefing need
SCHEDULED: <2026-05-15 Fri>
=Skill Seekers= ([[https://github.com/yusufkaraaslan/Skill_Seekers]]) is a Python
CLI + MCP server that ingests 18 source types (docs sites, PDFs, GitHub
repos, YouTube videos, Confluence, Notion, OpenAPI specs, etc.) and
exports to 20+ AI targets including Claude skills. MIT licensed, 12.9k
stars, active as of 2026-04-12.
*Evaluated: 2026-04-19 — not adopted for rulesets.* Generates
*reference-style* skills (encyclopedic dumps of scraped source material),
not *operational* skills (opinionated how-we-do-things content). Doesn't
fit the rulesets curation pattern.
*Next-trigger experiment (this TODO):* the next time a DeepSat task needs
Claude briefed deeply on a specific library, API, or docs site — try:
#+begin_src bash
pip install skill-seekers
skill-seekers create <url> --target claude
#+end_src
Measure output quality vs hand-curated briefing. If usable, consider
installing as a persistent tool. If output is bloated / under-structured,
discard and stick with hand briefing.
*Candidate first experiments (pick one from an actual need, don't invent):*
- A Django ORM reference skill scoped to the version DeepSat pins
- An OpenAPI-to-skill conversion for a partner-vendor API
- A React hooks reference skill for the frontend team's current patterns
- A specific AWS service's docs (e.g. GovCloud-flavored)
*Patterns worth borrowing into rulesets even without adopting the tool:*
- Enhancement-via-agent pipeline (scrape raw → LLM pass → structured
SKILL.md). Applicable if we ever build internal-docs-to-skill tooling.
- Multi-target export abstraction (one knowledge extraction → many output
formats). Clean design for any future multi-AI-tool workflow.
*Concerns to verify on actual use:*
- =LICENSE= has an unfilled =[Your Name/Username]= placeholder (MIT is
unambiguous, but sloppy for a 12k-star project)
- Default branch is =development=, not =main= — pin with care
- Heavy commercialization signals (website at skillseekersweb.com,
Trendshift promo, branded badges) — license might shift later; watch
- Companion =skill-seekers-configs= community repo has only 8 stars
despite main's 12.9k — ecosystem thinner than headline adoption