|
Created complete test coverage for the LanguageTool integration:
Unit Tests (test-flycheck-languagetool-setup.el):
- 6 tests covering installation and configuration
- Verifies LanguageTool binary availability
- Checks wrapper script exists and is executable
- Validates wrapper script structure (shebang, imports)
- Tests error handling for missing arguments
- All tests pass ✓
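For reference, a minimal ERT sketch of the kind of checks these unit tests make; the wrapper path and the `my/languagetool-wrapper` constant are hypothetical placeholders, not the repo's actual names:
```elisp
(require 'ert)

;; Hypothetical location; the real config defines where the wrapper lives.
(defconst my/languagetool-wrapper
  (expand-file-name "scripts/languagetool-wrapper.py" user-emacs-directory))

(ert-deftest my/languagetool-wrapper-exists-and-executable ()
  "Wrapper script exists on disk and carries the executable bit."
  (should (file-exists-p my/languagetool-wrapper))
  (should (file-executable-p my/languagetool-wrapper)))

(ert-deftest my/languagetool-wrapper-has-shebang ()
  "First line of the wrapper script is a shebang."
  (with-temp-buffer
    (insert-file-contents my/languagetool-wrapper)
    (goto-char (point-min))
    (should (looking-at-p "#!"))))
```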
Integration Tests (test-integration-grammar-checking.el):
- 9 tests covering end-to-end grammar checking workflow
- Tests wrapper script with real LanguageTool execution
- Validates output format (filename:line:column: message)
- Tests normal cases (error detection, formatting)
- Tests boundary cases (empty files, single word, multiple paragraphs)
- Tests error cases (nonexistent files, missing arguments)
- Uses real test fixtures with known grammar errors
- All tests pass ✓ (the run takes ~35 seconds due to real LanguageTool execution)
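A hedged sketch of the output-format check; the wrapper and fixture paths are assumptions:
```elisp
(require 'ert)

(ert-deftest my/grammar-wrapper-output-format ()
  "Each wrapper output line follows the filename:line:column: message format."
  (let ((wrapper (expand-file-name "scripts/languagetool-wrapper.py"
                                   user-emacs-directory))   ; assumed path
        (fixture (expand-file-name "tests/fixtures/grammar-errors-basic.txt"
                                   user-emacs-directory)))  ; assumed path
    (with-temp-buffer
      (call-process wrapper nil t nil fixture)
      (dolist (line (split-string (buffer-string) "\n" t))
        (should (string-match-p "\\`[^:]+:[0-9]+:[0-9]+: " line))))))
```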
Test Fixtures (tests/fixtures/grammar-*.txt):
- grammar-errors-basic.txt: Common errors (subject-verb agreement, "could of", etc.)
- grammar-errors-punctuation.txt: Punctuation and spacing errors
- grammar-correct.txt: Clean text with no errors
Testing Philosophy Applied:
- Focus on OUR code (wrapper script), not flycheck internals
- Trust external frameworks work correctly
- Test real integration (wrapper → LanguageTool → output)
- No mocking of domain logic, only external side-effects
- Clear test categories: Normal, Boundary, Error cases
- Comprehensive docstrings listing integrated components
- Deterministic tests using real fixtures
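As a generic illustration of the mocking rule (not code from this repo), only the external lookup is stubbed while our own logic runs unmocked:
```elisp
(require 'ert)
(require 'cl-lib)

(ert-deftest my/example-stub-external-side-effect-only ()
  "Simulate a missing LanguageTool binary by stubbing `executable-find'."
  (cl-letf (((symbol-function 'executable-find)
             (lambda (_name) nil)))   ; external side-effect stubbed out
    ;; Our own error-handling code would be exercised here, unmocked.
    (should-not (executable-find "languagetool"))))
```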
Usage:
make test-file FILE=test-flycheck-languagetool-setup.el
make test-file FILE=test-integration-grammar-checking.el
make test-integration # Includes grammar integration test
Tests are discovered automatically by the Makefile's wildcard rules.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
|
|
Added 83 test cases across 9 test files with 100% pass rate, covering
device detection, parsing, grouping, and complete workflow integration.
## What Was Done
### Refactoring for Testability
- Extracted `cj/recording--parse-pactl-output` from `cj/recording-parse-sources`
- Separated parsing logic from shell command execution
- Enables testing with fixture data instead of live system calls
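Roughly the shape of that split; the pactl invocation and the parsed-entry format shown here are assumptions, not the actual implementation:
```elisp
;; Pure parser: no shell calls, so tests can feed it fixture text directly.
(defun cj/recording--parse-pactl-output (output)
  "Parse raw pactl OUTPUT into a list of (ID NAME STATE) entries."
  (let (sources)
    (dolist (line (split-string output "\n" t))
      (let ((fields (split-string line "\t")))
        (when (>= (length fields) 3)
          (push (list (nth 0 fields) (nth 1 fields) (car (last fields)))
                sources))))
    (nreverse sources)))

;; Thin wrapper: the only piece that touches the shell.
(defun cj/recording-parse-sources ()
  "Run pactl and delegate parsing to `cj/recording--parse-pactl-output'."
  (cj/recording--parse-pactl-output
   (shell-command-to-string "pactl list sources short")))  ; exact command assumed
```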
### Test Fixtures Created
- `pactl-output-normal.txt` - All device types (built-in, USB, Bluetooth)
- `pactl-output-empty.txt` - Empty output
- `pactl-output-single.txt` - Single device
- `pactl-output-monitors-only.txt` - Only monitor devices
- `pactl-output-inputs-only.txt` - Only input devices
- `pactl-output-malformed.txt` - Invalid/malformed output
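A sketch of how a test consumes one of these fixtures instead of calling pactl; the fixture directory is assumed:
```elisp
(require 'ert)

(ert-deftest my/recording-parse-pactl-output-normal-fixture ()
  "Parser returns a non-empty device list for the normal fixture."
  (with-temp-buffer
    (insert-file-contents
     (expand-file-name "tests/fixtures/pactl-output-normal.txt"
                       user-emacs-directory))   ; assumed fixture location
    (should (consp (cj/recording--parse-pactl-output (buffer-string))))))
```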
### Unit Tests (8 files, 78 test cases)
1. **test-video-audio-recording-friendly-state.el** (10 tests)
- State name conversion: SUSPENDED→Ready, RUNNING→Active
2. **test-video-audio-recording-parse-pactl-output.el** (14 tests)
- Parse raw pactl output into structured data
- Handle empty, malformed, and mixed valid/invalid input
3. **test-video-audio-recording-parse-sources.el** (6 tests)
- Shell command wrapper testing with mocked output (see the sketch after this list)
4. **test-video-audio-recording-detect-mic-device.el** (13 tests)
- Documents bugs: Returns ID numbers instead of device names
- Doesn't filter monitors (legacy function, not actively used)
5. **test-video-audio-recording-detect-system-device.el** (13 tests)
- Works correctly: Returns full device names
- Tests monitor detection with various device types
6. **test-video-audio-recording-group-devices-by-hardware.el** (12 tests)
- CRITICAL: Bluetooth MAC address normalization (colons vs underscores)
- Device pairing logic (mic + monitor from same hardware)
- Friendly name assignment
- Filters incomplete devices
7. **test-video-audio-recording-check-ffmpeg.el** (3 tests)
- ffmpeg availability detection
8. **test-video-audio-recording-get-devices.el** (7 tests)
- Auto-detection fallback logic
- Error handling for incomplete detection
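The mocked shell-command test mentioned in item 3 looks roughly like this; the canned pactl line and the assumption that the wrapper shells out via `shell-command-to-string` are illustrative only:
```elisp
(require 'ert)
(require 'cl-lib)

(ert-deftest my/recording-parse-sources-with-mocked-shell ()
  "Wrapper hands canned shell output to the pure parser."
  (cl-letf (((symbol-function 'shell-command-to-string)
             (lambda (_cmd)   ; stub the only external side-effect
               "42\talsa_input.usb-Mic\tmodule-alsa-card.c\ts16le 2ch 44100Hz\tRUNNING\n")))
    (should (= 1 (length (cj/recording-parse-sources))))))
```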
### Integration Tests (1 file, 5 test cases)
9. **test-integration-recording-device-workflow.el** (5 tests)
- Complete workflow: parse → group → friendly names
- Bluetooth MAC normalization end-to-end
- Incomplete device filtering across components
- Malformed data graceful handling
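In outline, the end-to-end test drives the same fixture through both steps; the grouping function's exact argument and return shapes are assumptions:
```elisp
(require 'ert)

(ert-deftest my/recording-device-workflow-end-to-end ()
  "Fixture text flows through parse -> group and yields grouped devices."
  (with-temp-buffer
    (insert-file-contents
     (expand-file-name "tests/fixtures/pactl-output-normal.txt"
                       user-emacs-directory))   ; assumed fixture location
    (let* ((sources (cj/recording--parse-pactl-output (buffer-string)))
           (devices (cj/recording-group-devices-by-hardware sources)))
      (should (consp devices)))))
```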
## Key Testing Insights
### Bugs Documented
- `cj/recording-detect-mic-device` has bugs (returns IDs, doesn't filter monitors)
- These detection functions appear to be legacy code not used by the main workflow
- Tests document the current behavior so any change is caught if the functions are later fixed
### Critical Features Validated
- **Bluetooth MAC normalization**: Input uses colons (00:1B:66:C0:91:6D),
output uses underscores (00_1B_66_C0_91_6D); grouping normalizes them
correctly (see the sketch after this list)
- **Device pairing**: Only devices with BOTH mic and monitor are included
- **Friendly names**: USB/PCI/Bluetooth patterns correctly identified
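One plausible way the normalization works; the helper name and the blanket colon replacement are assumptions, not the repo's exact code:
```elisp
(defun cj/recording--normalize-mac (name)
  "Return NAME with colon-separated MAC octets rewritten to underscores,
so mic and monitor names from the same Bluetooth device compare equal."
  (replace-regexp-in-string ":" "_" name))

;; (cj/recording--normalize-mac "bluez_input.00:1B:66:C0:91:6D")
;;   => "bluez_input.00_1B_66_C0_91_6D"
```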
### Test Coverage
- Normal cases: Valid inputs, typical workflows
- Boundary cases: Empty, single device, incomplete pairs
- Error cases: Malformed input, missing devices, partial detection
## Test Execution
All tests pass: 9/9 files, 83/83 test cases (100% pass rate)
```bash
make test-file FILE=test-video-audio-recording-*.el
# All pass individually
# Integration test also passes
make test-file FILE=test-integration-recording-device-workflow.el
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
|