 Makefile                                        |   6
 claude-rules/testing.md                         |  21
 languages/elisp/claude/rules/elisp-testing.md   |   3
 languages/elisp/claude/rules/verification.md    |  42
 languages/python/claude/rules/python-testing.md | 101
 scripts/install-lang.sh                         |  12
 6 files changed, 136 insertions(+), 49 deletions(-)
diff --git a/Makefile b/Makefile
index e42662e..cf1fd5d 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ RULES := $(wildcard claude-rules/*.md)
LANGUAGES := $(notdir $(wildcard languages/*))
.PHONY: help install uninstall list \
- install-lang list-languages install-elisp
+ install-lang list-languages install-elisp install-python
help:
@echo "rulesets — Claude Code skills, rules, and language bundles"
@@ -18,6 +18,7 @@ help:
@echo " Per-project language rulesets:"
@echo " make install-lang LANG=<lang> PROJECT=<path> [FORCE=1]"
@echo " make install-elisp PROJECT=<path> [FORCE=1] (shortcut)"
+ @echo " make install-python PROJECT=<path> [FORCE=1] (shortcut)"
@echo " make list-languages - Show available language bundles"
@echo ""
@echo " FORCE=1 overwrites an existing CLAUDE.md (other files always overwrite)."
@@ -110,3 +111,6 @@ install-lang:
install-elisp:
@$(MAKE) install-lang LANG=elisp PROJECT="$(PROJECT)" FORCE="$(FORCE)"
+
+install-python:
+ @$(MAKE) install-lang LANG=python PROJECT="$(PROJECT)" FORCE="$(FORCE)"
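The new target is a pure alias; the two invocations below should behave identically (the project path is illustrative):

```
make install-python PROJECT=~/myproj FORCE=1
make install-lang LANG=python PROJECT=~/myproj FORCE=1
```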
diff --git a/claude-rules/testing.md b/claude-rules/testing.md
index 37ba412..42cc528 100644
--- a/claude-rules/testing.md
+++ b/claude-rules/testing.md
@@ -1,6 +1,10 @@
# Testing Standards
-Applies to: `**/*.py`, `**/*.ts`, `**/*.tsx`, `**/*.js`, `**/*.jsx`
+Applies to: `**/*`
+
+Core TDD discipline and test quality rules. Language-specific patterns
+(frameworks, fixture idioms, mocking tools) live in per-language testing files
+under `languages/<lang>/claude/rules/`.
## Test-Driven Development (Default)
@@ -56,6 +60,8 @@ Every unit under test requires coverage across three categories:
## Test Organization
+Typical layout:
+
```
tests/
unit/ # One test file per source file
@@ -63,14 +69,21 @@ tests/
e2e/ # Full system tests
```
+Per-language files may adjust this (e.g. Elisp collates ERT tests into
+`tests/test-<module>*.el` without subdirectories).
+
## Naming Convention
- Unit: `test_<module>_<function>_<scenario>_<expected>`
- Integration: `test_integration_<workflow>_<scenario>_<outcome>`
Examples:
-- `test_satellite_calculate_position_null_input_raises_error`
-- `test_integration_telemetry_sync_network_timeout_retries_three_times`
+- `test_cart_apply_discount_expired_coupon_raises_error`
+- `test_integration_order_sync_network_timeout_retries_three_times`
+
+Languages that prefer camelCase, kebab-case, or other conventions keep the
+structure but use their idiom. Consistency within a project matters more than
+the specific case choice.
## Test Quality
@@ -100,7 +113,7 @@ Mock external dependencies at the system boundary:
Never mock:
- The code under test
- Internal domain logic
-- Framework behavior (ORM queries, middleware, hooks)
+- Framework behavior (ORM queries, middleware, hooks, buffer primitives)
## Coverage Targets
diff --git a/languages/elisp/claude/rules/elisp-testing.md b/languages/elisp/claude/rules/elisp-testing.md
index fcad9de..6cb59b1 100644
--- a/languages/elisp/claude/rules/elisp-testing.md
+++ b/languages/elisp/claude/rules/elisp-testing.md
@@ -2,6 +2,9 @@
Applies to: `**/tests/*.el`
+Implements the core principles from `testing.md`. All rules there apply here —
+this file covers Elisp-specific patterns.
+
## Framework: ERT
Use `ert-deftest` for all tests. One test = one scenario.
diff --git a/languages/elisp/claude/rules/verification.md b/languages/elisp/claude/rules/verification.md
deleted file mode 100644
index 8993736..0000000
--- a/languages/elisp/claude/rules/verification.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# Verification Before Completion
-
-Applies to: `**/*`
-
-## The Rule
-
-Do not claim work is done without fresh verification evidence. Run the command, read the output, confirm it matches the claim, then — and only then — declare success.
-
-This applies to every completion claim:
-- "Tests pass" → Run the test suite. Read the output. Confirm all green.
-- "Linter is clean" → Run the linter. Read the output. Confirm no warnings.
-- "Build succeeds" → Run the build. Read the output. Confirm no errors.
-- "Bug is fixed" → Run the reproduction steps. Confirm the bug is gone.
-- "No regressions" → Run the full test suite, not just the tests you added.
-
-## What Fresh Means
-
-- Run the verification command **now**, in the current session
-- Do not rely on a previous run from before your changes
-- Do not assume your changes didn't break something unrelated
-- Do not extrapolate from partial output — read the whole result
-
-## Red Flags
-
-If you find yourself using these words, you haven't verified:
-
-- "should" ("tests should pass")
-- "probably" ("this probably works")
-- "I believe" ("I believe the build is clean")
-- "based on the changes" ("based on the changes, nothing should break")
-
-Replace beliefs with evidence. Run the command.
-
-## Before Committing
-
-Before any commit:
-1. Run the test suite — confirm all tests pass
-2. Run the linter — confirm no new warnings
-3. Run the type checker — confirm no new errors
-4. Review the diff — confirm only intended changes are staged
-
-Do not commit based on the assumption that nothing broke. Verify.
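The deleted file's pre-commit checklist can be mechanized. A minimal sketch, where `true` stands in for real commands such as `pytest` or `make lint` (substitute your project's actual invocations):

```shell
# Sketch of the deleted "verify before commit" checklist as a script.
# `true` is a placeholder for real verification commands.
run_check() {
  desc=$1
  shift
  if "$@" >/dev/null 2>&1; then
    echo "ok: $desc"
  else
    echo "FAIL: $desc" >&2
    return 1
  fi
}

run_check "tests pass"   true
run_check "linter clean" true
```

Each claim gets a fresh command run and a read of its result, which is exactly the discipline the rule described.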
diff --git a/languages/python/claude/rules/python-testing.md b/languages/python/claude/rules/python-testing.md
new file mode 100644
index 0000000..6f04b7f
--- /dev/null
+++ b/languages/python/claude/rules/python-testing.md
@@ -0,0 +1,101 @@
+# Python Testing Rules
+
+Applies to: `**/*.py`
+
+Implements the core principles from `testing.md`. All rules there apply here —
+this file covers Python-specific patterns.
+
+## Framework: pytest (NEVER unittest)
+
+Use `pytest` for all Python tests. Do not use `unittest.TestCase` unless
+integrating with legacy code that requires it.
+
+## Test Structure
+
+Group tests in classes that mirror the source module:
+
+```python
+class TestCartService:
+ """Tests for CartService."""
+
+ @pytest.fixture
+ def cart(self):
+ return Cart(user_id=42)
+
+ def test_add_item_normal(self, cart):
+ """Normal: adding an in-stock item increases quantity."""
+ cart.add("SKU-1", quantity=2)
+ assert cart.item_count("SKU-1") == 2
+
+ def test_add_item_boundary_zero_quantity(self, cart):
+ """Boundary: quantity 0 is a no-op, not an error."""
+ cart.add("SKU-1", quantity=0)
+ assert cart.item_count("SKU-1") == 0
+
+ def test_add_item_error_negative(self, cart):
+ """Error: negative quantity raises ValueError."""
+ with pytest.raises(ValueError, match="quantity must be non-negative"):
+ cart.add("SKU-1", quantity=-1)
+```
+
+## Fixtures Over Factories
+
+- Use `pytest` fixtures for test data setup
+- Use `@pytest.fixture(autouse=True)` sparingly — prefer explicit injection
+- Avoid `factory_boy` unless object graphs are genuinely complex
+- Django: prefer pytest fixtures over `setUpTestData` unless you have a
+ performance reason
+
+## Parametrize for Category Coverage
+
+Use `@pytest.mark.parametrize` to cover normal, boundary, and error cases
+concisely instead of hand-writing near-duplicate tests:
+
+```python
+@pytest.mark.parametrize("quantity,valid", [
+ (1, True), # Normal
+ (100, True), # Normal: bulk
+ (0, True), # Boundary: zero is a no-op
+ (-1, False), # Error: negative
+])
+def test_add_item_quantity_validation(cart, quantity, valid):
+ if valid:
+ cart.add("SKU-1", quantity=quantity)
+ else:
+ with pytest.raises(ValueError):
+ cart.add("SKU-1", quantity=quantity)
+```
+
+## Mocking Guidelines
+
+### Mock these (external boundaries):
+- External APIs (`requests`, `httpx`, `boto3` clients)
+- Time (`freezegun` or `time-machine`)
+- File uploads (Django: `SimpleUploadedFile`)
+- Celery tasks (run eagerly via `@override_settings(CELERY_TASK_ALWAYS_EAGER=True)`)
+- Email sending (Django: `django.core.mail.outbox`)
+
+### Never mock these (internal domain):
+- ORM queries (SQLAlchemy, Django ORM)
+- Model methods and properties
+- Form and serializer validation
+- Middleware
+- Your own service functions
+
+## Async Testing
+
+Use `anyio` for async tests (not raw `asyncio`):
+
+```python
+@pytest.mark.anyio
+async def test_process_order_async():
+ result = await process_order_async(sample_order)
+ assert result.status == "processed"
+```
+
+## Database Testing (Django)
+
+- Mark database tests with `@pytest.mark.django_db`
+- Use transactions for isolation (pytest-django default)
+- Prefer in-memory SQLite for speed in unit tests
+- Use `select_related` / `prefetch_related` assertions to catch N+1 regressions
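The `Cart` used in the new file's examples is hypothetical. A minimal sketch of a class that would satisfy those normal/boundary/error tests, with plain asserts standing in for pytest (names and behavior are assumptions inferred from the examples, not a real API):

```python
# Hypothetical Cart matching the behavior the example tests assume.
class Cart:
    def __init__(self, user_id):
        self.user_id = user_id
        self._items = {}

    def add(self, sku, quantity):
        # Error case: negative quantities are rejected.
        if quantity < 0:
            raise ValueError("quantity must be non-negative")
        # Boundary case: zero is a no-op, not an error.
        if quantity == 0:
            return
        self._items[sku] = self._items.get(sku, 0) + quantity

    def item_count(self, sku):
        return self._items.get(sku, 0)

cart = Cart(user_id=42)
cart.add("SKU-1", quantity=2)   # normal
cart.add("SKU-1", quantity=0)   # boundary: no-op
assert cart.item_count("SKU-1") == 2
try:
    cart.add("SKU-1", quantity=-1)  # error
except ValueError:
    pass
else:
    raise AssertionError("negative quantity should raise")
```

The three cases above mirror the normal/boundary/error categories the core `testing.md` requires for every unit under test.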
diff --git a/scripts/install-lang.sh b/scripts/install-lang.sh
index f9b7a31..2ee4aa7 100755
--- a/scripts/install-lang.sh
+++ b/scripts/install-lang.sh
@@ -35,14 +35,22 @@ PROJECT="$(cd "$PROJECT" && pwd)"
echo "Installing '$LANG' ruleset into $PROJECT"
-# 1. .claude/ — rules, hooks, settings (authoritative, always overwrite)
+# 1. Generic rules from claude-rules/ (shared across all languages)
+if [ -d "$REPO_ROOT/claude-rules" ]; then
+ mkdir -p "$PROJECT/.claude/rules"
+ cp "$REPO_ROOT/claude-rules"/*.md "$PROJECT/.claude/rules/" 2>/dev/null || true
+ count=$(ls -1 "$REPO_ROOT/claude-rules"/*.md 2>/dev/null | wc -l)
+ echo " [ok] .claude/rules/ — $count generic rule(s) from claude-rules/"
+fi
+
+# 2. .claude/ — language-specific rules, hooks, settings (authoritative, always overwrite)
if [ -d "$SRC/claude" ]; then
mkdir -p "$PROJECT/.claude"
cp -rT "$SRC/claude" "$PROJECT/.claude"
if [ -d "$PROJECT/.claude/hooks" ]; then
find "$PROJECT/.claude/hooks" -type f -name '*.sh' -exec chmod +x {} \;
fi
- echo " [ok] .claude/ installed"
+ echo " [ok] .claude/ — language-specific content"
fi
-# 2. githooks/ — pre-commit etc.
+# 3. githooks/ — pre-commit etc.
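Step 1's copy-and-count logic can be exercised in isolation. A sandboxed sketch using a throwaway temp tree (directory names mirror the script, but nothing here touches the real repo or project):

```shell
# Sandboxed sketch of the generic-rules copy: build a fake repo and project,
# copy the rules over, then count what landed.
tmp=$(mktemp -d)
mkdir -p "$tmp/repo/claude-rules" "$tmp/project/.claude/rules"
printf '# Testing\n'  > "$tmp/repo/claude-rules/testing.md"
printf '# Security\n' > "$tmp/repo/claude-rules/security.md"

cp "$tmp/repo/claude-rules"/*.md "$tmp/project/.claude/rules/" 2>/dev/null || true
count=$(ls -1 "$tmp/project/.claude/rules"/*.md 2>/dev/null | wc -l)
echo " [ok] .claude/rules/ — $count generic rule(s)"
```

Note that the script counts files in the *source* directory while this sketch counts the destination; counting the destination reports what was actually installed, which is arguably the more honest number.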