author    Craig Jennings <c@cjennings.net>  2026-04-19 12:36:04 -0500
committer Craig Jennings <c@cjennings.net>  2026-04-19 12:36:04 -0500
commit    019db5f9677902ba02d703a8554667d1b6e88f6b (patch)
tree      37c7991c7e29b443348996f1c2407030b3571026 /languages
parent    18fcaf9f27d03849487078b30f667c3b574e6554 (diff)
refactor: generalize testing.md, split Python specifics, DRY install
- claude-rules/testing.md is now language-agnostic (TDD principles, test categories, coverage targets, anti-patterns); its scope header is widened to `**/*`.
- Python-specific content (pytest, fixtures, parametrize, anyio, Django DB testing) moved to languages/python/claude/rules/python-testing.md.
- Added a languages/python/ bundle (rules only so far; no CLAUDE.md template or hooks yet — Python validation tooling differs from Elisp).
- Added an install-python shortcut to the Makefile.
- Updated scripts/install-lang.sh to copy claude-rules/*.md into each target project's .claude/rules/. Bundles no longer need to carry their own verification.md copy, so languages/elisp/claude/rules/verification.md is deleted: single source of truth in claude-rules/, fanned out via install.
- elisp-testing.md now references testing.md as its base (matching the python-testing.md pattern).
Diffstat (limited to 'languages')
-rw-r--r--  languages/elisp/claude/rules/elisp-testing.md    |   3
-rw-r--r--  languages/elisp/claude/rules/verification.md     |  42
-rw-r--r--  languages/python/claude/rules/python-testing.md  | 101
3 files changed, 104 insertions, 42 deletions
diff --git a/languages/elisp/claude/rules/elisp-testing.md b/languages/elisp/claude/rules/elisp-testing.md
index fcad9de..6cb59b1 100644
--- a/languages/elisp/claude/rules/elisp-testing.md
+++ b/languages/elisp/claude/rules/elisp-testing.md
@@ -2,6 +2,9 @@
Applies to: `**/tests/*.el`
+Implements the core principles from `testing.md`. All rules there apply here —
+this file covers Elisp-specific patterns.
+
## Framework: ERT
Use `ert-deftest` for all tests. One test = one scenario.
diff --git a/languages/elisp/claude/rules/verification.md b/languages/elisp/claude/rules/verification.md
deleted file mode 100644
index 8993736..0000000
--- a/languages/elisp/claude/rules/verification.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# Verification Before Completion
-
-Applies to: `**/*`
-
-## The Rule
-
-Do not claim work is done without fresh verification evidence. Run the command, read the output, confirm it matches the claim, then — and only then — declare success.
-
-This applies to every completion claim:
-- "Tests pass" → Run the test suite. Read the output. Confirm all green.
-- "Linter is clean" → Run the linter. Read the output. Confirm no warnings.
-- "Build succeeds" → Run the build. Read the output. Confirm no errors.
-- "Bug is fixed" → Run the reproduction steps. Confirm the bug is gone.
-- "No regressions" → Run the full test suite, not just the tests you added.
-
-## What Fresh Means
-
-- Run the verification command **now**, in the current session
-- Do not rely on a previous run from before your changes
-- Do not assume your changes didn't break something unrelated
-- Do not extrapolate from partial output — read the whole result
-
-## Red Flags
-
-If you find yourself using these words, you haven't verified:
-
-- "should" ("tests should pass")
-- "probably" ("this probably works")
-- "I believe" ("I believe the build is clean")
-- "based on the changes" ("based on the changes, nothing should break")
-
-Replace beliefs with evidence. Run the command.
-
-## Before Committing
-
-Before any commit:
-1. Run the test suite — confirm all tests pass
-2. Run the linter — confirm no new warnings
-3. Run the type checker — confirm no new errors
-4. Review the diff — confirm only intended changes are staged
-
-Do not commit based on the assumption that nothing broke. Verify.
diff --git a/languages/python/claude/rules/python-testing.md b/languages/python/claude/rules/python-testing.md
new file mode 100644
index 0000000..6f04b7f
--- /dev/null
+++ b/languages/python/claude/rules/python-testing.md
@@ -0,0 +1,101 @@
+# Python Testing Rules
+
+Applies to: `**/*.py`
+
+Implements the core principles from `testing.md`. All rules there apply here —
+this file covers Python-specific patterns.
+
+## Framework: pytest (NEVER unittest)
+
+Use `pytest` for all Python tests. Do not use `unittest.TestCase` unless
+integrating with legacy code that requires it.
+
+## Test Structure
+
+Group tests in classes that mirror the source module:
+
+```python
+class TestCartService:
+ """Tests for CartService."""
+
+ @pytest.fixture
+ def cart(self):
+ return Cart(user_id=42)
+
+ def test_add_item_normal(self, cart):
+ """Normal: adding an in-stock item increases quantity."""
+ cart.add("SKU-1", quantity=2)
+ assert cart.item_count("SKU-1") == 2
+
+ def test_add_item_boundary_zero_quantity(self, cart):
+ """Boundary: quantity 0 is a no-op, not an error."""
+ cart.add("SKU-1", quantity=0)
+ assert cart.item_count("SKU-1") == 0
+
+ def test_add_item_error_negative(self, cart):
+ """Error: negative quantity raises ValueError."""
+ with pytest.raises(ValueError, match="quantity must be non-negative"):
+ cart.add("SKU-1", quantity=-1)
+```
+
+## Fixtures Over Factories
+
+- Use `pytest` fixtures for test data setup
+- Use `@pytest.fixture(autouse=True)` sparingly — prefer explicit injection
+- Avoid `factory_boy` unless object graphs are genuinely complex
+- Django: prefer pytest fixtures over `setUpTestData` unless you have a
+ performance reason
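As a sketch of the "prefer explicit injection" guideline above: the test names the fixture it depends on in its signature, so the setup is visible at the call site. The `Inventory` class and fixture names here are hypothetical illustrations, not part of this ruleset.

```python
import pytest

# Hypothetical domain object, for illustration only.
class Inventory:
    def __init__(self):
        self._stock = {}

    def restock(self, sku, count):
        self._stock[sku] = self._stock.get(sku, 0) + count

    def available(self, sku):
        return self._stock.get(sku, 0)

# Explicit injection: the fixture appears in the test signature,
# so the dependency is obvious -- no autouse magic.
@pytest.fixture
def stocked_inventory():
    inv = Inventory()
    inv.restock("SKU-1", 5)
    return inv

def test_available_reflects_restock(stocked_inventory):
    assert stocked_inventory.available("SKU-1") == 5
```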
+
+## Parametrize for Category Coverage
+
+Use `@pytest.mark.parametrize` to cover normal, boundary, and error cases
+concisely instead of hand-writing near-duplicate tests:
+
+```python
+@pytest.mark.parametrize("quantity,valid", [
+ (1, True), # Normal
+ (100, True), # Normal: bulk
+ (0, True), # Boundary: zero is a no-op
+ (-1, False), # Error: negative
+])
+def test_add_item_quantity_validation(cart, quantity, valid):
+ if valid:
+ cart.add("SKU-1", quantity=quantity)
+ else:
+ with pytest.raises(ValueError):
+ cart.add("SKU-1", quantity=quantity)
+```
+
+## Mocking Guidelines
+
+### Mock these (external boundaries):
+- External APIs (`requests`, `httpx`, `boto3` clients)
+- Time (`freezegun` or `time-machine`)
+- File uploads (Django: `SimpleUploadedFile`)
+- Celery tasks (`@override_settings(CELERY_ALWAYS_EAGER=True)`)
+- Email sending (Django: `django.core.mail.outbox`)
+
+### Never mock these (internal domain):
+- ORM queries (SQLAlchemy, Django ORM)
+- Model methods and properties
+- Form and serializer validation
+- Middleware
+- Your own service functions
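One way to sketch the boundary rule above with only the standard library: the external gateway is replaced by a `unittest.mock.Mock`, while the internal domain logic (the total calculation) runs for real. `charge_order` and the gateway interface are hypothetical names for illustration, not from this ruleset.

```python
from unittest.mock import Mock

# Hypothetical domain function: the gateway is the external boundary,
# the total calculation is internal logic that should never be mocked.
def charge_order(gateway, items):
    total = sum(price * qty for price, qty in items)  # real domain code
    response = gateway.charge(amount=total)           # external call -- mock this
    return response["status"], total

def test_charge_order_computes_real_total():
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    status, total = charge_order(gateway, [(10, 2), (5, 1)])

    assert status == "ok"
    assert total == 25  # domain logic exercised for real
    gateway.charge.assert_called_once_with(amount=25)
```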
+
+## Async Testing
+
+Use `anyio` for async tests (not raw `asyncio`):
+
+```python
+@pytest.mark.anyio
+async def test_process_order_async():
+ result = await process_order_async(sample_order)
+ assert result.status == "processed"
+```
+
+## Database Testing (Django)
+
+- Mark database tests with `@pytest.mark.django_db`
+- Use transactions for isolation (pytest-django default)
+- Prefer in-memory SQLite for speed in unit tests
+- Use `select_related` / `prefetch_related` assertions to catch N+1 regressions
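The in-memory SQLite recommendation above could be wired up with a dedicated test-settings module, roughly like this (a sketch only — `settings_test.py` and `myproject` are hypothetical names; adapt to your project layout):

```python
# settings_test.py -- hypothetical test-settings module.
# Points the test run at an in-memory SQLite database for speed.
from myproject.settings import *  # noqa: F401,F403 -- base settings assumed

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}
```

With pytest-django, the N+1 bullet can be enforced via its `django_assert_num_queries` fixture, which fails the test if the wrapped block issues more queries than expected.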