author    Craig Jennings <c@cjennings.net>    2025-11-13 23:26:21 -0600
committer Craig Jennings <c@cjennings.net>    2025-11-13 23:26:21 -0600
commit    2e10a8856d0bdd4c8f77c53320221ad1b8deaa13 (patch)
tree      95832c3b74fc523fe9d8319e25c5ea5bf1d40433
parent    fd9cce59993556400b635256d712a65d87f5d72d (diff)
fix(archsetup): implement critical bug fixes and test improvements
This commit addresses several high-priority bugs and enhances the testing infrastructure:

**Bug Fixes:**
1. Add root permission check at script start to fail fast with clear error message
2. Disable debug package installation by adding --nodebug flag to all yay calls
3. Replace unsafe `git pull --force` with safe rm + fresh clone to prevent data loss
4. Add geoclue package with correct systemd service configuration for geolocation
5. Add completion marker for reliable automated test detection

**Testing Infrastructure:**
- Add comprehensive VM-based testing framework in scripts/testing/
- Fix test script pgrep infinite loop using grep bracket self-exclusion pattern
- Add network diagnostics and pre-flight checks
- Support snapshot-based testing for reproducible test runs

**Package Management:**
- Remove anki (build hangs 98+ minutes)
- Remove adwaita-color-schemes (CMake build issues)

Test Results: 0 errors, 1,363 packages installed in 40 minutes

🀖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
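For reference, the grep bracket self-exclusion pattern mentioned above typically looks like the sketch below (a minimal illustration; the exact command and process name used in the test script may differ):

```bash
# Naive polling: "grep archsetup" also matches grep's own process entry,
# so a wait loop like this never sees an empty result and spins forever:
#   while ps aux | grep "archsetup" >/dev/null; do sleep 5; done
#
# Bracket trick: the regex [a]rchsetup still matches "archsetup", but not
# the literal string "[a]rchsetup" in grep's own command line.
while ps aux | grep "[a]rchsetup" >/dev/null; do
    sleep 5
done
```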
-rw-r--r--   TODO.org                                     101
-rwxr-xr-x   archsetup                                     41
-rw-r--r--   scripts/testing/README.org                   508
-rw-r--r--   scripts/testing/archinstall-config.json      117
-rwxr-xr-x   scripts/testing/cleanup-tests.sh             171
-rwxr-xr-x   scripts/testing/create-base-vm.sh            173
-rwxr-xr-x   scripts/testing/debug-vm.sh                  133
-rwxr-xr-x   scripts/testing/finalize-base-vm.sh           31
-rwxr-xr-x   scripts/testing/lib/finalize-base-vm.sh       21
-rwxr-xr-x   scripts/testing/lib/logging.sh               151
-rw-r--r--   scripts/testing/lib/network-diagnostics.sh    60
-rwxr-xr-x   scripts/testing/lib/vm-utils.sh              321
-rwxr-xr-x   scripts/testing/run-test.sh                  382
-rwxr-xr-x   scripts/testing/setup-testing-env.sh         191
14 files changed, 2373 insertions, 28 deletions
diff --git a/TODO.org b/TODO.org
index 205785d..9fa4f62 100644
--- a/TODO.org
+++ b/TODO.org
@@ -3,6 +3,26 @@
#+DATE: 2025-11-06
#+FILETAGS: :v2mom:strategy:archsetup:
+* URGENT Package Installation Fixes
+** TODO [#A] Replace nitrogen with feh for wallpaper management
+Nitrogen is no longer in the official Arch repos. Need to:
+- Replace nitrogen with feh in archsetup script
+- Update ranger configuration to use feh instead of nitrogen
+- Update emacs configuration to use feh instead of nitrogen
+- Update any scripts that change wallpaper to use feh instead of nitrogen
+- Test that feh provides equivalent functionality
+
+TEMPORARILY DISABLED in archsetup:668
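+
+A likely drop-in replacement call, shown as a sketch (feh's =--bg-scale= option is standard; the wallpaper path here is only a placeholder):
+#+begin_src bash
+# hypothetical wallpaper path; adjust to whatever nitrogen currently points at
+feh --bg-scale "$HOME/.wallpaper/current.jpg"
+#+end_src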
+
+** TODO [#A] Disable or fix adwaita-color-schemes AUR package
+adwaita-color-schemes is failing to build due to CMake version incompatibility
+in qgnomeplatform dependency. Either:
+- Disable for now and wait for AUR maintainer to fix
+- Find alternative color scheme package
+- Investigate if we actually need this package
+
+TEMPORARILY DISABLED in archsetup:688
+
* What is V2MOM?
V2MOM is a strategic framework that provides clarity for decision-making, ruthless prioritization, and measuring progress. It transforms vague intentions into concrete action plans.
@@ -119,8 +139,9 @@ The script handles edge cases gracefully, provides detailed error messages with
*Why this is Method 1:* Can't build testing infrastructure or maintain packages if the script doesn't work. This is the foundationβ€”everything else depends on reliable execution.
-*** TODO [#A] Fix: no dotfiles were set up on last run
-CRITICAL - system is unusable without dotfiles; all 50+ scripts and configs missing
+*** DONE [#A] Fix: no dotfiles were set up on last run
+CLOSED: [2025-11-13 Wed]
+RESOLVED - VM test confirms dotfiles are properly stowed as symlinks; all configs and scripts in place
*** TODO [#A] Add root check at script start
Prevents running with wrong permissions that cause silent failures
@@ -149,21 +170,40 @@ Currently archsetup downloads a -debug package for every package installed, doub
Add ~!debug~ to OPTIONS in /etc/makepkg.conf or create ~/.config/pacman/makepkg.conf with the setting
Critical performance issue - cuts install time in half
+*** TODO [#B] Review slow and failed packages from 8GB RAM test
+See [[file:docs/slow-failed-packages.org][Slow and Failed Packages Analysis]]
+
+Test run from 2025-11-09 with 8GB RAM, 50GB disk identified:
+- 2 packages that hang indefinitely (anki, tageditor)
+- 4 packages that fail to install (nitrogen, gtk-engine-murrine, adwaita-color-schemes, vagrant)
+- Several slow but successful packages (multimarkdown, ptyxis, thunderbird, etc.)
+
+High priority actions:
+- Remove or make optional: anki (hangs 98 min), tageditor (hangs on qt5-webengine)
+- Investigate repository/build issues for failing packages
+
*** TODO [#B] Resolve all 8 failed packages from last run
-**** TODO [#B] adwaita-color-schemes (CMake compatibility issue)
-Error: "Compatibility with CMake < 3.5 has been removed" - qgnomeplatform build fails
-**** TODO [#B] geoclue (service doesn't exist)
-Failed to enable unit: Unit geoclue-agent@cjennings.service does not exist - check if service name is correct
-**** TODO [#B] tor-browser (PGP key import failure)
-PGP key EF6E286DDA85EA2A4BA7DE684E2C6E8793298290 required but keyserver receive failed: No data
-**** TODO [#B] multimarkdown (CMake compatibility issue)
-Same CMake < 3.5 compatibility error as adwaita-color-schemes
-**** TODO [#B] vagrant (deprecated/not found in repos)
-Error: target not found: vagrant - package no longer in official repos, may need AUR alternative
-**** TODO [#B] anki (missing .gitconfig during build)
-Cargo build failed: "failed to stat '/home/cjennings/.gitconfig'" - .gitconfig doesn't exist yet during build
-**** TODO [#B] figlet-fonts (FTP download with curl error)
-curl: option --ftp-pasv: is unknown - FTP download from ftp://ftp.figlet.org fails
+**** DONE [#B] adwaita-color-schemes (CMake compatibility issue)
+CLOSED: [2025-11-13 Wed]
+REMOVED from archsetup - package removed due to CMake build issues
+**** DONE [#B] geoclue (service doesn't exist)
+CLOSED: [2025-11-13 Wed]
+FIXED - Added geoclue package to archsetup and enabled correct service name: geoclue.service (was incorrectly trying geoclue-agent@cjennings.service)
+**** DONE [#B] tor-browser (PGP key import failure)
+CLOSED: [2025-11-13 Wed]
+PGP key issue resolved - tor-browser-bin 15.0-1 successfully installs in VM test
+**** DONE [#B] multimarkdown (CMake compatibility issue)
+CLOSED: [2025-11-13 Wed]
+CMake issue resolved - multimarkdown 6.7.0-2 successfully installs in VM test
+**** DONE [#B] vagrant (deprecated/not found in repos)
+CLOSED: [2025-11-13 Wed]
+vagrant 2.4.9-1 now available and successfully installs in VM test
+**** DONE [#B] anki (missing .gitconfig during build)
+CLOSED: [2025-11-13 Wed]
+REMOVED from archsetup - package removed due to build issues (hangs 98+ minutes, missing .gitconfig during cargo build)
+**** DONE [#B] figlet-fonts (FTP download with curl error)
+CLOSED: [2025-11-13 Wed]
+FTP download issue resolved - figlet-fonts 1.1-1 successfully installs in VM test
*** TODO [#B] Improve error handling: UFW firewall, rmmod pcspkr, mkdir missing quotes
**** TODO [#B] Fix UFW firewall error handling (archsetup:395,410)
@@ -285,6 +325,27 @@ Core automation infrastructure - enables continuous validation
*** TODO [#A] Generate recovery scripts from test failures
Auto-create post-install fix scripts for failed packages - makes failures actionable
+*** TODO [#B] Implement Testinfra test suite for archsetup
+Create comprehensive integration tests using Testinfra (Python + pytest) to validate archsetup installations
+
+See complete documentation: [[file:docs/testing-strategy.org::*Test Automation Framework][Testing Strategy - Test Automation Framework]]
+
+Tests should cover:
+- Smoke tests: user created, key packages installed, dotfiles present
+- Integration tests: services running, configs valid, X11 starts, apps launch
+- End-to-end tests: login as user, startx, open terminal, run emacs, verify workflows
+
+Framework: Testinfra with pytest (SSH-native, built-in modules for files/packages/services/commands)
+Location: scripts/testing/tests/ directory
+Integration: Run via pytest against test VMs after archsetup completes
+Benefits: Expressive Python tests, excellent reporting, can test interactive scenarios
+
+The testing-strategy.org document includes:
+- Complete example test suite (test_integration.py)
+- Tiered testing strategy (smoke/integration/end-to-end)
+- How to run tests and integrate with run-test.sh
+- Comparison with alternatives (Goss)
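+
+A minimal sketch of how such a suite might be invoked once scripts/testing/tests/ exists (VM_IP is a placeholder; the real integration lives in run-test.sh and testing-strategy.org):
+#+begin_src bash
+# run the Testinfra/pytest suite over SSH against a booted test VM
+pytest --hosts="ssh://root@$VM_IP" scripts/testing/tests/ -v
+#+end_src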
+
*** TODO [#B] Set up automated test schedule
Weekly full run to catch deprecated packages even without commits
@@ -389,6 +450,14 @@ If laptop is stolen, data remains protected
**** TODO [#A] Document which ports need to be open (SSH, Proton Bridge, etc.)
**** TODO [#A] Test that all needed services work with firewall enabled
+*** TODO [#C] Fix VM cloning machine-ID conflicts for parallel testing
+Currently using snapshot-based testing which works but limits to sequential test runs
+Cloned VMs fail to get DHCP/network even with machine-ID manipulation (truncate/remove)
+Root cause: Truncating /etc/machine-id breaks systemd/NetworkManager startup
+Need to investigate proper machine-ID regeneration that doesn't break networking
+Would enable parallel test execution in CI/CD (Method 2)
+Priority C because snapshot-based testing meets current needs
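+
+One avenue to investigate (unverified against the NetworkManager issue above): generate a fresh ID inside each clone before its first boot instead of leaving the file empty, e.g.:
+#+begin_src bash
+# run inside the cloned image (via arch-chroot or guestfish) before first boot
+rm -f /etc/machine-id /var/lib/dbus/machine-id
+systemd-machine-id-setup                         # writes a new /etc/machine-id
+ln -sf /etc/machine-id /var/lib/dbus/machine-id  # keep dbus in sync
+#+end_src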
+
*** TODO [#A] Complete security education within 3 months
Read recommended resources to make informed security decisions (see metrics for Claude suggestions)
diff --git a/archsetup b/archsetup
index 65c1f75..5b43988 100755
--- a/archsetup
+++ b/archsetup
@@ -18,6 +18,14 @@
# uncomment to stop on any error
# set -e
+### Root Check
+
+if [ "$EUID" -ne 0 ]; then
+ echo "ERROR: This script must be run as root"
+ echo "Usage: sudo $0"
+ exit 1
+fi
+
### Parse Arguments
skip_slow_packages=false
@@ -144,9 +152,11 @@ git_install() {
display "task" "$action"
if ! (sudo -u "$username" git clone --depth 1 "$1" "$build_dir" >> "$logfile" 2>&1); then
- error "error" "cloning source code for $prog_name" "$?"
- (cd "$build_dir" && sudo -u "$username" git pull --force origin master >> "$logfile" 2>&1) || \
- error "error" "pulling source code for $prog_name" "$?"
+ error "error" "cloning source code for $prog_name - directory may exist, removing and retrying" "$?"
+ (rm -rf "$build_dir" >> "$logfile" 2>&1) || \
+ error "error" "removing existing directory for $prog_name" "$?"
+ (sudo -u "$username" git clone --depth 1 "$1" "$build_dir" >> "$logfile" 2>&1) || \
+ error "crash" "re-cloning source code for $prog_name after cleanup" "$?"
fi
(cd "$build_dir" && make install >> "$logfile" 2>&1) || \
@@ -156,11 +166,11 @@ git_install() {
# AUR Install
aur_install() {
action="installing $1 via the AUR" && display "task" "$action"
- if ! (sudo -u "$username" yay -S --noconfirm "$1" >> "$logfile" 2>&1); then
+ if ! (sudo -u "$username" yay -S --noconfirm --nodebug "$1" >> "$logfile" 2>&1); then
action="retrying $1" && display "task" "$action"
- if ! (sudo -u "$username" yay -S --noconfirm "$1" >> "$logfile" 2>&1); then
+ if ! (sudo -u "$username" yay -S --noconfirm --nodebug "$1" >> "$logfile" 2>&1); then
action="retrying $1 once more" && display "task" "$action"
- (sudo -u "$username" yay -S --noconfirm "$1" >> "$logfile" 2>&1) ||
+ (sudo -u "$username" yay -S --noconfirm --nodebug "$1" >> "$logfile" 2>&1) ||
error "error" "$action" "$?"
fi
fi
@@ -354,9 +364,11 @@ aur_installer () {
display "task" "fetching source code for yay"
if ! (sudo -u "$username" git clone --depth 1 "$yay_repo" "$build_dir" >> "$logfile" 2>&1); then
- error "error" "cloning source code for yay"
- (cd "$build_dir" && sudo -u "$username" git pull --force origin master >> "$logfile" 2>&1) || \
- error "crash" "changing directories to $build_dir and pulling source code" "$?"
+ error "error" "cloning source code for yay - directory may exist, removing and retrying"
+ (rm -rf "$build_dir" >> "$logfile" 2>&1) || \
+ error "crash" "removing existing directory for yay" "$?"
+ (sudo -u "$username" git clone --depth 1 "$yay_repo" "$build_dir" >> "$logfile" 2>&1) || \
+ error "crash" "re-cloning source code for yay after cleanup" "$?"
fi
action="packaging and installing yay"; display "task" "$action"
@@ -453,6 +465,10 @@ essential_services() {
systemctl disable systemd-resolved.service >> "$logfile" 2>&1 || error "error" "$action" "$?"
systemctl enable avahi-daemon.service >> "$logfile" 2>&1 || error "error" "$action" "$?"
+ pacman_install geoclue # geolocation service for location-aware apps
+ action="enabling geoclue geolocation service" && display "task" "$action"
+ systemctl enable geoclue.service >> "$logfile" 2>&1 || error "error" "$action" "$?"
+
# Job Scheduling
display "subtitle" "Job Scheduling"
@@ -685,7 +701,6 @@ desktop_environment() {
papirus-icon-theme qt6ct qt5ct; do
aur_install $software
done;
- # adwaita-color-schemes disabled - TODO: fix CMake build issue
pacman_install libappindicator-gtk3 # required by some applets
@@ -921,7 +936,6 @@ supplemental_software() {
pacman_install zlib # compression library
# aur installs
- # aur_install anki # flashcards / drill tool (DISABLED: hangs for 98+ minutes, missing .gitconfig during cargo build)
aur_install dtrx # extraction tool
aur_install figlet-fonts # fonts for figlet
aur_install foliate # pretty epub reader
@@ -942,7 +956,7 @@ supplemental_software() {
# working around a temporary integrity-check issue with the python-lyricsgenius expiration date
action="prep to workaround tidal-dl issue" && display "task" "$action"
- yay -S --noconfirm --mflags --skipinteg python-lyricsgenius >> "$logfile" 2>&1 || error "error" "$action" "$?"
+ yay -S --noconfirm --nodebug --mflags --skipinteg python-lyricsgenius >> "$logfile" 2>&1 || error "error" "$action" "$?"
aur_install tidal-dl # tidal-dl:tidal as yt-dlp:youtube
}
@@ -1025,6 +1039,9 @@ outro() {
printf "Packages installed : %s\n" "$new_packages"
printf "\n"
printf "Please reboot before working with your new workstation.\n\n"
+
+ # Completion marker for automated testing
+ printf "=== ARCHSETUP_EXECUTION_COMPLETE ===\n" | tee -a "$logfile"
}
### Installation Steps
diff --git a/scripts/testing/README.org b/scripts/testing/README.org
new file mode 100644
index 0000000..a52dbea
--- /dev/null
+++ b/scripts/testing/README.org
@@ -0,0 +1,508 @@
+#+TITLE: ArchSetup Testing Infrastructure
+#+AUTHOR: Craig Jennings
+#+DATE: 2025-11-08
+
+* Overview
+
+This directory contains the complete testing infrastructure for archsetup, built on QEMU/KVM virtualization. It provides automated, reproducible testing of the archsetup installation script in isolated virtual machines.
+
+** Philosophy
+
+*Realism over speed.* We use full VMs (not containers) to test everything archsetup does: user creation, systemd services, X11/DWM, hardware drivers, and boot process.
+
+* Quick Start
+
+** One-Time Setup
+
+#+begin_src bash
+# Install required packages and configure environment
+./scripts/testing/setup-testing-env.sh
+
+# Log out and back in (for libvirt group membership)
+
+# Create the base VM (minimal Arch installation)
+./scripts/testing/create-base-vm.sh
+# Follow on-screen instructions to complete installation
+# Run finalize-base-vm.sh when done
+#+end_src
+
+** Run a Test
+
+#+begin_src bash
+# Test the current archsetup script
+./scripts/testing/run-test.sh
+
+# Test a specific version
+./scripts/testing/run-test.sh --script /path/to/archsetup
+
+# Keep VM running for debugging
+./scripts/testing/run-test.sh --keep
+#+end_src
+
+** Debug Interactively
+
+#+begin_src bash
+# Clone base VM for debugging
+./scripts/testing/debug-vm.sh
+
+# Use existing test disk
+./scripts/testing/debug-vm.sh test-results/20251108-143000/test.qcow2
+
+# Use base VM (read-only)
+./scripts/testing/debug-vm.sh --base
+#+end_src
+
+** Clean Up
+
+#+begin_src bash
+# Interactive cleanup (prompts for confirmation)
+./scripts/testing/cleanup-tests.sh
+
+# Keep last 10 test results
+./scripts/testing/cleanup-tests.sh --keep 10
+
+# Force cleanup without prompts
+./scripts/testing/cleanup-tests.sh --force
+#+end_src
+
+* Scripts
+
+** setup-testing-env.sh
+
+*Purpose:* One-time setup of testing infrastructure
+
+*What it does:*
+- Installs QEMU/KVM, libvirt, and related tools
+- Configures libvirt networking
+- Adds user to libvirt group
+- Verifies KVM support
+- Creates directories for artifacts
+
+*When to run:* Once per development machine
+
+*Usage:*
+#+begin_src bash
+./scripts/testing/setup-testing-env.sh
+#+end_src
+
+** create-base-vm.sh
+
+*Purpose:* Create the "golden image" minimal Arch VM
+
+*What it does:*
+- Downloads latest Arch ISO
+- Creates VM and boots from ISO
+- Opens virt-viewer for you to complete installation manually
+
+*When to run:* Once (or when you want to refresh base image)
+
+*Usage:*
+#+begin_src bash
+./scripts/testing/create-base-vm.sh
+#+end_src
+
+*Process:*
+1. Script creates VM and boots from Arch ISO
+2. virt-viewer opens automatically showing VM display
+3. You complete installation manually using archinstall:
+ - Login as root (no password)
+ - Run =archinstall=
+ - Configure: hostname=archsetup-test, root password=archsetup
+ - Install packages: openssh git vim sudo
+ - Enable services: sshd, dhcpcd
+ - Configure SSH to allow root login
+ - Poweroff when done
+4. Run =./scripts/testing/finalize-base-vm.sh= to complete
+
+*See also:* [[file:../../docs/base-vm-installation-checklist.org][Base VM Installation Checklist]]
+
+*Result:* Base VM image at =vm-images/archsetup-base.qcow2=
+
+** run-test.sh
+
+*Purpose:* Execute a full test run of archsetup
+
+*What it does:*
+- Reverts the base VM to a clean snapshot
+- Starts the test VM
+- Transfers archsetup script and dotfiles
+- Executes archsetup inside VM
+- Captures logs and results
+- Runs validation checks
+- Generates test report
+- Cleans up (unless =--keep=)
+
+*When to run:* Every time you want to test archsetup
+
+*Usage:*
+#+begin_src bash
+# Test current archsetup
+./scripts/testing/run-test.sh
+
+# Test specific version
+./scripts/testing/run-test.sh --script /path/to/archsetup
+
+# Keep VM for debugging
+./scripts/testing/run-test.sh --keep
+#+end_src
+
+*Time:* 30-60 minutes (mostly downloading packages)
+
+*Results:* Saved to =test-results/TIMESTAMP/=
+- =test.log= - Complete log output
+- =test-report.txt= - Summary of results
+- =archsetup-*.log= - Log from archsetup script
+- =*.txt= - Package lists from VM
+
+** debug-vm.sh
+
+*Purpose:* Launch VM for interactive debugging
+
+*What it does:*
+- Creates VM from base image or existing test disk
+- Configures console and SSH access
+- Provides connection instructions
+
+*When to run:* When you need to manually test or debug
+
+*Usage:*
+#+begin_src bash
+# Clone base VM for debugging
+./scripts/testing/debug-vm.sh
+
+# Use existing test disk
+./scripts/testing/debug-vm.sh vm-images/archsetup-test-20251108-143000.qcow2
+
+# Use base VM (read-only)
+./scripts/testing/debug-vm.sh --base
+#+end_src
+
+*Connect via:*
+- Console: =virsh console archsetup-debug-TIMESTAMP=
+- SSH: =ssh root@IP_ADDRESS= (password: archsetup)
+- VNC: =virt-viewer archsetup-debug-TIMESTAMP=
+
+** cleanup-tests.sh
+
+*Purpose:* Clean up old test VMs and artifacts
+
+*What it does:*
+- Lists all test VMs and destroys them
+- Removes test disk images
+- Keeps last N test results, deletes rest
+
+*When to run:* Periodically to free disk space
+
+*Usage:*
+#+begin_src bash
+# Interactive cleanup
+./scripts/testing/cleanup-tests.sh
+
+# Keep last 3 test results
+./scripts/testing/cleanup-tests.sh --keep 3
+
+# Force without prompts
+./scripts/testing/cleanup-tests.sh --force
+#+end_src
+
+* Directory Structure
+
+#+begin_example
+archsetup/
+├── scripts/
+│   └── testing/
+│       ├── README.org               # This file
+│       ├── setup-testing-env.sh     # Setup infrastructure
+│       ├── create-base-vm.sh        # Create base VM
+│       ├── run-test.sh              # Run tests
+│       ├── debug-vm.sh              # Interactive debugging
+│       ├── cleanup-tests.sh         # Clean up
+│       ├── finalize-base-vm.sh      # Finalize base (generated)
+│       ├── archinstall-config.json  # Archinstall config
+│       └── lib/
+│           ├── logging.sh           # Logging utilities
+│           └── vm-utils.sh          # VM management
+├── vm-images/                       # VM disk images (gitignored)
+│   ├── archsetup-base.qcow2         # Golden image
+│   ├── arch-latest.iso              # Arch ISO
+│   └── archsetup-test-*.qcow2       # Test VMs
+├── test-results/                    # Test results (gitignored)
+│   ├── TIMESTAMP/
+│   │   ├── test.log
+│   │   ├── test-report.txt
+│   │   └── archsetup-*.log
+│   └── latest -> TIMESTAMP/         # Symlink to latest
+└── docs/
+    └── testing-strategy.org         # Complete strategy doc
+#+end_example
+
+* Configuration
+
+** VM Specifications
+
+All test VMs use:
+- *CPUs:* 2 vCPUs
+- *RAM:* 4GB (matches archsetup tmpfs build directory)
+- *Disk:* 50GB (thin provisioned qcow2)
+- *Network:* NAT via libvirt default network
+- *Boot:* UEFI (systemd-boot bootloader)
+- *Display:* Serial console + VNC
+
+Set environment variables to customize:
+#+begin_src bash
+VM_CPUS=4 VM_RAM=8192 ./scripts/testing/run-test.sh
+#+end_src
+
+** Base VM Specifications
+
+The base VM contains a minimal Arch installation:
+- Base system packages
+- Linux kernel and firmware
+- OpenSSH server (for automation)
+- dhcpcd (for networking)
+- git, vim, sudo (essentials)
+- Root password: "archsetup"
+
+This matches the documented prerequisites for archsetup.
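+
+A quick way to sanity-check a freshly built base VM against this list (assuming the root password above and the IP reported by finalize-base-vm.sh):
+#+begin_src bash
+# VM_IP is a placeholder for the address printed by finalize-base-vm.sh
+ssh root@VM_IP "pacman -Q openssh dhcpcd git vim sudo && systemctl is-enabled sshd dhcpcd"
+#+end_src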
+
+* Validation Checks
+
+Each test run performs these validation checks:
+
+** Critical (Must Pass)
+1. archsetup exits with code 0
+2. User 'cjennings' was created
+3. Dotfiles are stowed (symlinks exist)
+4. yay (AUR helper) is installed
+5. DWM is built and installed
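+
+For reference, the critical checks above correspond roughly to commands like these run over SSH inside the test VM (a sketch only; the actual checks live in run-test.sh):
+#+begin_src bash
+id cjennings                                     # user 'cjennings' exists
+find /home/cjennings -maxdepth 1 -type l | head  # dotfiles stowed as symlinks
+command -v yay                                   # AUR helper on PATH
+command -v dwm                                   # DWM built and installed
+# clean exit: the completion marker appears at the end of the archsetup log
+# (log path below is a placeholder)
+grep -q 'ARCHSETUP_EXECUTION_COMPLETE' /root/archsetup-*.log
+#+end_src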
+
+** Additional (Future)
+- All expected packages installed
+- X11 can start
+- systemd services enabled
+- Firewall configured
+
+* Troubleshooting
+
+** VM fails to start
+
+Check if libvirtd is running:
+#+begin_src bash
+sudo systemctl status libvirtd
+sudo systemctl start libvirtd
+#+end_src
+
+** Cannot get VM IP address
+
+The VM may not have booted completely or networking failed:
+#+begin_src bash
+# Check VM status
+virsh domstate VM_NAME
+
+# Connect to console to debug
+virsh console VM_NAME
+
+# Check if dhcpcd is running in VM
+systemctl status dhcpcd
+#+end_src
+
+** SSH connection refused
+
+Wait longer for VM to boot, or check if sshd is enabled:
+#+begin_src bash
+virsh console VM_NAME
+# Inside VM:
+systemctl status sshd
+systemctl start sshd
+#+end_src
+
+** KVM not available
+
+Check if virtualization is enabled in BIOS and KVM modules loaded:
+#+begin_src bash
+# Check for /dev/kvm
+ls -l /dev/kvm
+
+# Load KVM module (Intel)
+sudo modprobe kvm-intel
+
+# Or for AMD
+sudo modprobe kvm-amd
+
+# Verify
+lsmod | grep kvm
+#+end_src
+
+** Disk space issues
+
+Clean up old tests:
+#+begin_src bash
+./scripts/testing/cleanup-tests.sh --force
+#+end_src
+
+Check disk usage:
+#+begin_src bash
+du -sh vm-images/ test-results/
+#+end_src
+
+** Base VM Installation Issues
+
+*** Firewall blocking VM network
+
+If the VM cannot reach the internet (100% packet loss when pinging), check the host firewall:
+
+#+begin_src bash
+# Check UFW status on host
+sudo ufw status
+
+# Check libvirt NAT rules on host
+sudo iptables -t nat -L -n -v | grep -i libvirt
+#+end_src
+
+*Solution:* Temporarily disable UFW during base VM creation:
+#+begin_src bash
+sudo ufw disable
+# Create base VM
+sudo ufw enable
+#+end_src
+
+Or add libvirt rules to UFW:
+#+begin_src bash
+sudo ufw allow in on virbr0
+sudo ufw allow out on virbr0
+#+end_src
+
+*** VM network not working after boot
+
+If =dhcpcd= isn't running or network isn't configured:
+
+#+begin_src bash
+# In the VM - restart network from scratch
+killall dhcpcd
+
+# Bring interface down and up (replace enp1s0 with your interface)
+ip link set enp1s0 down
+ip link set enp1s0 up
+
+# Start dhcpcd
+dhcpcd enp1s0
+
+# Wait and verify
+sleep 3
+ip addr show enp1s0
+ip route
+
+# Test connectivity
+ping -c 3 192.168.122.1 # Gateway
+ping -c 3 8.8.8.8 # Google DNS
+#+end_src
+
+*** DNS not working (127.0.0.53 in resolv.conf)
+
+The Live ISO uses the systemd-resolved stub resolver, which may not work:
+
+#+begin_src bash
+# In the VM - set real DNS servers
+echo "nameserver 8.8.8.8" > /etc/resolv.conf
+echo "nameserver 1.1.1.1" >> /etc/resolv.conf
+
+# Test
+ping -c 3 archlinux.org
+#+end_src
+
+*** Cannot paste into virt-viewer terminal
+
+Clipboard integration doesn't work well with virt-viewer. Use an HTTP server on the host instead:
+
+#+begin_src bash
+# On host - serve the installation script
+cd vm-images
+python -m http.server 8000
+
+# In VM - download and run
+curl http://192.168.122.1:8000/auto-install.sh | bash
+
+# Or download first to review
+curl http://192.168.122.1:8000/auto-install.sh -o install.sh
+cat install.sh
+bash install.sh
+#+end_src
+
+*** Partitions "in use" error
+
+If re-running installation after a failed attempt:
+
+#+begin_src bash
+# In the VM - unmount and wipe partitions
+mount | grep vda
+umount /mnt/boot 2>/dev/null
+umount /mnt 2>/dev/null
+umount -l /mnt # Lazy unmount if still busy
+
+# Wipe partition table completely
+wipefs -a /dev/vda
+
+# Run install script again
+bash install.sh
+#+end_src
+
+*** Alternative: Use archinstall
+
+Instead of the auto-install.sh script, you can use Arch's built-in installer:
+
+#+begin_src bash
+# In the VM
+archinstall
+#+end_src
+
+*Recommended settings:*
+- Disk: =/dev/vda=
+- Filesystem: =ext4=
+- Bootloader: =systemd-boot=
+- Hostname: =archsetup-test=
+- Root password: =archsetup=
+- Profile: =minimal=
+- Additional packages: =openssh dhcpcd git vim sudo=
+- Network: =NetworkManager= or =systemd-networkd=
+
+*After installation, before rebooting:*
+#+begin_src bash
+# Chroot into new system
+arch-chroot /mnt
+
+# Enable services
+systemctl enable sshd
+systemctl enable dhcpcd # or NetworkManager
+
+# Allow root SSH login
+sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
+sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
+
+# Exit and poweroff
+exit
+poweroff
+#+end_src
+
+* Future Enhancements
+
+** Planned Improvements
+- [ ] Fully automated base VM creation (expect-based console automation)
+- [ ] Parallel test execution (multiple VMs)
+- [ ] Screenshot capture of X11 desktop
+- [ ] CI/CD integration (GitHub Actions / GitLab CI)
+- [ ] Performance benchmarking over time
+- [ ] Cloud-init based base image (faster provisioning)
+
+** Test Scenarios
+- [ ] Idempotency test (run archsetup twice)
+- [ ] Network failure recovery
+- [ ] Offline installation (local package cache)
+- [ ] Different hardware profiles (CPU/RAM variations)
+
+* References
+
+- [[file:../../docs/testing-strategy.org][Testing Strategy Document]]
+- [[https://wiki.archlinux.org/title/Libvirt][Arch Wiki: libvirt]]
+- [[https://wiki.archlinux.org/title/QEMU][Arch Wiki: QEMU]]
+- [[file:../../archsetup][Main archsetup script]]
+- [[file:../../TODO.org][Project TODO]]
diff --git a/scripts/testing/archinstall-config.json b/scripts/testing/archinstall-config.json
new file mode 100644
index 0000000..a55e2a1
--- /dev/null
+++ b/scripts/testing/archinstall-config.json
@@ -0,0 +1,117 @@
+{
+ "!users": {
+ "0": {
+ "!password": "archsetup",
+ "username": "root",
+ "sudo": false
+ }
+ },
+ "archinstall-language": "English",
+ "audio_config": null,
+ "bootloader": "systemd-bootctl",
+ "config_version": "2.8.0",
+ "debug": false,
+ "disk_config": {
+ "config_type": "default_layout",
+ "device_modifications": [
+ {
+ "device": "/dev/vda",
+ "partitions": [
+ {
+ "btrfs": [],
+ "flags": [
+ "Boot"
+ ],
+ "fs_type": "fat32",
+ "length": {
+ "sector_size": null,
+ "total_size": null,
+ "unit": "MiB",
+ "value": 512
+ },
+ "mount_options": [],
+ "mountpoint": "/boot",
+ "obj_id": "boot_partition",
+ "start": {
+ "sector_size": null,
+ "total_size": null,
+ "unit": "MiB",
+ "value": 1
+ },
+ "status": "create",
+ "type": "primary"
+ },
+ {
+ "btrfs": [],
+ "flags": [],
+ "fs_type": "ext4",
+ "length": {
+ "sector_size": null,
+ "total_size": null,
+ "unit": "MiB",
+ "value": 100
+ },
+ "mount_options": [],
+ "mountpoint": "/",
+ "obj_id": "root_partition",
+ "start": {
+ "sector_size": null,
+ "total_size": null,
+ "unit": "MiB",
+ "value": 513
+ },
+ "status": "create",
+ "type": "primary"
+ }
+ ],
+ "wipe": true
+ }
+ ]
+ },
+ "disk_encryption": null,
+ "hostname": "archsetup-test",
+ "kernels": [
+ "linux"
+ ],
+ "locale_config": {
+ "kb_layout": "us",
+ "sys_enc": "UTF-8",
+ "sys_lang": "en_US"
+ },
+ "mirror_config": {
+ "custom_mirrors": [],
+ "mirror_regions": {
+ "United States": [
+ "https://mirror.rackspace.com/archlinux/$repo/os/$arch",
+ "https://mirror.leaseweb.com/archlinux/$repo/os/$arch"
+ ]
+ }
+ },
+ "network_config": {
+ "type": "nm"
+ },
+ "no_pkg_lookups": false,
+ "ntp": true,
+ "offline": false,
+ "packages": [
+ "openssh",
+ "dhcpcd",
+ "git",
+ "vim"
+ ],
+ "parallel downloads": 5,
+ "profile_config": {
+ "gfx_driver": "All open-source",
+ "greeter": null,
+ "profile": {
+ "custom_settings": {},
+ "details": [],
+ "main": "Minimal"
+ }
+ },
+ "script": "guided",
+ "silent": false,
+ "swap": false,
+ "timezone": "America/Chicago",
+ "version": "2.8.0"
+}
diff --git a/scripts/testing/cleanup-tests.sh b/scripts/testing/cleanup-tests.sh
new file mode 100755
index 0000000..e4289a7
--- /dev/null
+++ b/scripts/testing/cleanup-tests.sh
@@ -0,0 +1,171 @@
+#!/bin/bash
+# Clean up old test VMs and artifacts
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+
+set -e
+
+# Get script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+# Source utilities
+source "$SCRIPT_DIR/lib/logging.sh"
+source "$SCRIPT_DIR/lib/vm-utils.sh"
+
+# Parse arguments
+KEEP_LAST=5
+FORCE=false
+
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --keep)
+ KEEP_LAST="$2"
+ shift 2
+ ;;
+ --force)
+ FORCE=true
+ shift
+ ;;
+ *)
+ echo "Usage: $0 [--keep N] [--force]"
+ echo " --keep N Keep last N test results (default: 5)"
+ echo " --force Skip confirmation prompts"
+ exit 1
+ ;;
+ esac
+done
+
+# Initialize logging
+LOGFILE="/tmp/cleanup-tests-$(date +'%Y%m%d-%H%M%S').log"
+init_logging "$LOGFILE"
+
+section "Cleaning Up Test Artifacts"
+
+# Find all test VMs
+step "Finding test VMs"
+TEST_VMS=$(virsh --connect qemu:///system list --all | grep "archsetup-test-" | awk '{print $2}' || true)
+
+if [ -z "$TEST_VMS" ]; then
+ info "No test VMs found"
+else
+ VM_COUNT=$(echo "$TEST_VMS" | wc -l)
+ info "Found $VM_COUNT test VM(s)"
+
+ if ! $FORCE; then
+ echo ""
+ echo "$TEST_VMS"
+ echo ""
+ read -p "Destroy these VMs? [y/N] " -n 1 -r
+ echo ""
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+ info "Skipping VM cleanup"
+ else
+ for vm in $TEST_VMS; do
+ step "Destroying VM: $vm"
+ if vm_is_running "$vm"; then
+ virsh --connect qemu:///system destroy "$vm" >> "$LOGFILE" 2>&1
+ fi
+ virsh --connect qemu:///system undefine "$vm" --nvram >> "$LOGFILE" 2>&1 || true
+ success "VM destroyed: $vm"
+ done
+ fi
+ else
+ for vm in $TEST_VMS; do
+ step "Destroying VM: $vm"
+ if vm_is_running "$vm"; then
+ virsh --connect qemu:///system destroy "$vm" >> "$LOGFILE" 2>&1
+ fi
+ virsh --connect qemu:///system undefine "$vm" --nvram >> "$LOGFILE" 2>&1 || true
+ success "VM destroyed: $vm"
+ done
+ fi
+fi
+
+# Clean up test disk images
+section "Cleaning Up Disk Images"
+
+step "Finding test disk images"
+if [ -d "$PROJECT_ROOT/vm-images" ]; then
+ TEST_DISKS=$(find "$PROJECT_ROOT/vm-images" -name "archsetup-test-*.qcow2" 2>/dev/null || true)
+
+ if [ -z "$TEST_DISKS" ]; then
+ info "No test disk images found"
+ else
+ DISK_COUNT=$(echo "$TEST_DISKS" | wc -l)
+ DISK_SIZE=$(du -ch $TEST_DISKS | tail -1 | awk '{print $1}')
+ info "Found $DISK_COUNT test disk image(s) totaling $DISK_SIZE"
+
+ if ! $FORCE; then
+ echo ""
+ echo "$TEST_DISKS"
+ echo ""
+ read -p "Delete these disk images? [y/N] " -n 1 -r
+ echo ""
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+ info "Skipping disk cleanup"
+ else
+ echo "$TEST_DISKS" | while read disk; do
+ rm -f "$disk"
+ done
+ success "Test disk images deleted"
+ fi
+ else
+ echo "$TEST_DISKS" | while read disk; do
+ rm -f "$disk"
+ done
+ success "Test disk images deleted"
+ fi
+ fi
+fi
+
+# Clean up old test results
+section "Cleaning Up Test Results"
+
+if [ ! -d "$PROJECT_ROOT/test-results" ]; then
+ info "No test results directory"
+else
+ step "Finding test result directories"
+ TEST_RESULTS=$(find "$PROJECT_ROOT/test-results" -maxdepth 1 -type d -name "20*" 2>/dev/null | sort -r || true)
+
+ if [ -z "$TEST_RESULTS" ]; then
+ info "No test results found"
+ else
+ RESULT_COUNT=$(echo "$TEST_RESULTS" | wc -l)
+ info "Found $RESULT_COUNT test result directory(ies)"
+
+ if [ $RESULT_COUNT -le $KEEP_LAST ]; then
+ info "Keeping all results (count <= $KEEP_LAST)"
+ else
+ TO_DELETE=$(echo "$TEST_RESULTS" | tail -n +$((KEEP_LAST + 1)))
+ DELETE_COUNT=$(echo "$TO_DELETE" | wc -l)
+ info "Keeping last $KEEP_LAST, deleting $DELETE_COUNT old result(s)"
+
+ if ! $FORCE; then
+ echo ""
+ echo "Will delete:"
+ echo "$TO_DELETE"
+ echo ""
+ read -p "Delete these test results? [y/N] " -n 1 -r
+ echo ""
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+ info "Skipping results cleanup"
+ else
+ echo "$TO_DELETE" | while read dir; do
+ rm -rf "$dir"
+ done
+ success "Old test results deleted"
+ fi
+ else
+ echo "$TO_DELETE" | while read dir; do
+ rm -rf "$dir"
+ done
+ success "Old test results deleted"
+ fi
+ fi
+ fi
+fi
+
+section "Cleanup Complete"
+
+info "Log file: $LOGFILE"
diff --git a/scripts/testing/create-base-vm.sh b/scripts/testing/create-base-vm.sh
new file mode 100755
index 0000000..03409fe
--- /dev/null
+++ b/scripts/testing/create-base-vm.sh
@@ -0,0 +1,173 @@
+#!/bin/bash
+# Create base VM for archsetup testing - Manual Installation
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+#
+# This script creates a VM booted from Arch ISO, then waits for you to
+# manually install Arch using archinstall.
+
+set -e
+
+# Get script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+# Source utilities
+source "$SCRIPT_DIR/lib/logging.sh"
+source "$SCRIPT_DIR/lib/vm-utils.sh"
+
+# Configuration
+VM_NAME="archsetup-base"
+VM_CPUS="${VM_CPUS:-4}"
+VM_RAM="${VM_RAM:-8192}" # MB
+VM_DISK="${VM_DISK:-50}" # GB
+VM_IMAGES_DIR="$PROJECT_ROOT/vm-images"
+ISO_URL="https://mirrors.kernel.org/archlinux/iso/latest/archlinux-x86_64.iso"
+ISO_PATH="$VM_IMAGES_DIR/arch-latest.iso"
+DISK_PATH="$VM_IMAGES_DIR/archsetup-base.qcow2"
+
+# Initialize logging
+LOGFILE="$PROJECT_ROOT/test-results/create-base-vm-$(date +'%Y%m%d-%H%M%S').log"
+init_logging "$LOGFILE"
+
+section "Creating Base VM for ArchSetup Testing"
+
+# Verify prerequisites
+step "Checking prerequisites"
+check_libvirt || fatal "libvirt not running"
+check_libvirt_group || fatal "User not in libvirt group"
+check_kvm || fatal "KVM not available"
+success "Prerequisites satisfied"
+
+# Create vm-images directory
+mkdir -p "$VM_IMAGES_DIR"
+
+# Download Arch ISO if needed
+section "Preparing Arch Linux ISO"
+
+if [ -f "$ISO_PATH" ]; then
+ info "Arch ISO exists: $ISO_PATH"
+
+ # Check if ISO is older than 30 days
+ if [ $(find "$ISO_PATH" -mtime +30 | wc -l) -gt 0 ]; then
+ warn "ISO is older than 30 days"
+ info "Downloading latest version..."
+ rm -f "$ISO_PATH"
+ else
+ success "Using existing ISO"
+ fi
+fi
+
+if [ ! -f "$ISO_PATH" ]; then
+ step "Downloading latest Arch ISO"
+ info "URL: $ISO_URL"
+ info "This may take several minutes..."
+
+ if wget --progress=dot:giga -O "$ISO_PATH" "$ISO_URL" 2>&1 | tee -a "$LOGFILE"; then
+ success "ISO downloaded"
+ else
+ fatal "ISO download failed"
+ fi
+fi
+
+# Remove existing VM and disk
+if vm_exists "$VM_NAME"; then
+ warn "VM $VM_NAME already exists - destroying it"
+ if vm_is_running "$VM_NAME"; then
+        virsh --connect "$LIBVIRT_URI" destroy "$VM_NAME" >> "$LOGFILE" 2>&1
+    fi
+    virsh --connect "$LIBVIRT_URI" undefine "$VM_NAME" --nvram >> "$LOGFILE" 2>&1 || true
+fi
+
+[ -f "$DISK_PATH" ] && rm -f "$DISK_PATH"
+
+# Create and start VM
+section "Creating and Starting VM"
+
+info "Creating VM: $VM_NAME"
+info " CPUs: $VM_CPUS | RAM: ${VM_RAM}MB | Disk: ${VM_DISK}GB"
+
+virt-install \
+ --connect qemu:///system \
+ --name "$VM_NAME" \
+ --memory "$VM_RAM" \
+ --vcpus "$VM_CPUS" \
+ --disk path="$DISK_PATH",size="$VM_DISK",format=qcow2,bus=virtio \
+ --cdrom "$ISO_PATH" \
+ --os-variant archlinux \
+ --network network=default,model=virtio \
+ --graphics vnc,listen=127.0.0.1 \
+ --console pty,target.type=serial \
+ --boot uefi \
+ --noreboot \
+ --check path_in_use=off \
+ --filesystem type=mount,mode=mapped,source="$PROJECT_ROOT/scripts",target=host-scripts \
+ >> "$LOGFILE" 2>&1 &
+
+VIRT_INSTALL_PID=$!
+
+progress "Waiting for VM to boot from ISO"
+sleep 30
+
+# Check if VM started
+if ! vm_is_running "$VM_NAME"; then
+    EXIT_CODE=0
+    wait $VIRT_INSTALL_PID || EXIT_CODE=$?
+ fatal "VM failed to start (exit code: $EXIT_CODE)"
+fi
+
+success "VM started successfully"
+
+# Display manual installation instructions
+section "Manual Installation Required"
+
+cat << EOF  # unquoted delimiter so \$LOGFILE below expands when the banner prints
+
+[i]
+[i] Base VM is running from Arch ISO
+[i]
+[i] NEXT STEPS - Complete installation manually:
+[i]
+[i] 1. Open virt-viewer (should already be open):
+[i] virt-viewer --connect qemu:///system archsetup-base
+[i]
+[i] 2. Login as 'root' (no password)
+[i]
+[i] 3. Run: archinstall
+[i]
+[i] 4. Configure with these settings:
+[i] - Hostname: archsetup-test
+[i] - Root password: archsetup
+[i] - Profile: minimal
+[i] - Network: dhcpcd (or NetworkManager)
+[i] - Additional packages: openssh git vim sudo iperf3 mtr traceroute bind net-tools sshfs
+[i] - Enable: sshd, dhcpcd (or NetworkManager)
+[i]
+[i] 5. After archinstall completes:
+[i] - Chroot into /mnt: arch-chroot /mnt
+[i] - Edit /etc/ssh/sshd_config:
+[i] sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
+[i] sed -i 's/#PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
+[i] - Set up shared folder mount (9p filesystem):
+[i] mkdir -p /mnt/host-scripts
+[i] echo 'host-scripts /mnt/host-scripts 9p trans=virtio,version=9p2000.L,rw 0 0' >> /etc/fstab
+[i] - Exit chroot: exit
+[i] - Poweroff: poweroff
+[i]
+[i] 6. After VM powers off, run:
+[i] ./scripts/testing/finalize-base-vm.sh
+[i]
+[i] Log file: $LOGFILE
+[i]
+
+EOF
+
+info "Waiting for VM to power off..."
+info "(This script will exit when you manually power off the VM)"
+
+# Wait for virt-install to finish (VM powers off)
+wait $VIRT_INSTALL_PID || true
+
+success "VM has powered off"
+info ""
+info "Next step: Run ./scripts/testing/finalize-base-vm.sh"
diff --git a/scripts/testing/debug-vm.sh b/scripts/testing/debug-vm.sh
new file mode 100755
index 0000000..a442850
--- /dev/null
+++ b/scripts/testing/debug-vm.sh
@@ -0,0 +1,133 @@
+#!/bin/bash
+# Launch VM for interactive debugging
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+
+set -e
+
+# Get script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+# Source utilities
+source "$SCRIPT_DIR/lib/logging.sh"
+source "$SCRIPT_DIR/lib/vm-utils.sh"
+
+# Parse arguments
+VM_DISK=""
+USE_BASE=false
+
+if [ $# -eq 0 ]; then
+ USE_BASE=true
+elif [ "$1" = "--base" ]; then
+ USE_BASE=true
+elif [ -f "$1" ]; then
+ VM_DISK="$1"
+else
+ echo "Usage: $0 [disk-image.qcow2 | --base]"
+ echo ""
+ echo "Options:"
+ echo " --base Use base VM (read-only, safe for testing)"
+ echo " disk-image.qcow2 Use existing test disk image"
+ echo " (no args) Clone base VM for debugging"
+ exit 1
+fi
+
+# Configuration
+TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
+DEBUG_VM_NAME="archsetup-debug-$TIMESTAMP"
+BASE_DISK="$PROJECT_ROOT/vm-images/archsetup-base.qcow2"
+ROOT_PASSWORD="archsetup"
+
+# Initialize logging
+LOGFILE="/tmp/debug-vm-$TIMESTAMP.log"
+init_logging "$LOGFILE"
+
+section "Launching Debug VM"
+
+# Determine which disk to use
+if $USE_BASE; then
+ if [ ! -f "$BASE_DISK" ]; then
+ fatal "Base disk not found: $BASE_DISK"
+ fi
+ VM_DISK="$BASE_DISK"
+ info "Using base VM (read-only snapshot mode)"
+else
+ if [ -z "$VM_DISK" ]; then
+ # Clone base VM
+ VM_DISK="$PROJECT_ROOT/vm-images/$DEBUG_VM_NAME.qcow2"
+ step "Cloning base VM for debugging"
+ clone_disk "$BASE_DISK" "$VM_DISK" || fatal "Failed to clone base VM"
+ success "Debug disk created: $VM_DISK"
+ else
+ info "Using existing disk: $VM_DISK"
+ fi
+fi
+
+# Create debug VM
+step "Creating debug VM: $DEBUG_VM_NAME"
+virt-install \
+ --connect qemu:///system \
+ --name "$DEBUG_VM_NAME" \
+ --memory 4096 \
+ --vcpus 2 \
+ --disk path="$VM_DISK",format=qcow2,bus=virtio \
+ --os-variant archlinux \
+ --network network=default,model=virtio \
+ --graphics vnc,listen=127.0.0.1 \
+ --console pty,target.type=serial \
+ --boot uefi \
+ --import \
+ --noautoconsole \
+ >> "$LOGFILE" 2>&1
+
+success "Debug VM created"
+
+# Wait for boot
+step "Waiting for VM to boot..."
+sleep 20
+
+# Get VM IP
+VM_IP=$(get_vm_ip "$DEBUG_VM_NAME" 2>/dev/null || true)
+
+# Display connection information
+section "Debug VM Ready"
+
+info ""
+info "VM Name: $DEBUG_VM_NAME"
+if [ -n "$VM_IP" ]; then
+ info "IP Address: $VM_IP"
+fi
+info "Disk: $VM_DISK"
+info ""
+info "Connect via:"
+info " Console: virsh console $DEBUG_VM_NAME"
+if [ -n "$VM_IP" ]; then
+ info " SSH: ssh root@$VM_IP"
+fi
+info " VNC: virt-viewer $DEBUG_VM_NAME"
+info ""
+info "Root password: $ROOT_PASSWORD"
+info ""
+info "When done debugging:"
+info " virsh destroy $DEBUG_VM_NAME"
+info " virsh undefine $DEBUG_VM_NAME"
+if [ ! "$VM_DISK" = "$BASE_DISK" ] && [ -z "$1" ]; then
+ info " rm $VM_DISK"
+fi
+info ""
+info "Log file: $LOGFILE"
+info ""
+
+# Offer to connect to console
+read -p "Connect to console now? [Y/n] " -n 1 -r
+echo ""
+if [[ $REPLY =~ ^[Nn]$ ]]; then
+ info "VM is running in background"
+ info "Connect later with: virsh console $DEBUG_VM_NAME"
+else
+ info "Connecting to console..."
+ info "Press Ctrl+] to disconnect from console"
+ sleep 2
+    virsh --connect "$LIBVIRT_URI" console "$DEBUG_VM_NAME"
+fi
diff --git a/scripts/testing/finalize-base-vm.sh b/scripts/testing/finalize-base-vm.sh
new file mode 100755
index 0000000..225ffae
--- /dev/null
+++ b/scripts/testing/finalize-base-vm.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+# Finalize base VM after installation
+VM_NAME="archsetup-base"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+BASE_DISK="$PROJECT_ROOT/vm-images/archsetup-base.qcow2"
+
+echo "[i] Removing ISO from VM..."
+virsh --connect qemu:///system change-media $VM_NAME sda --eject 2>/dev/null || true
+virsh --connect qemu:///system change-media $VM_NAME hda --eject 2>/dev/null || true
+echo "[✓] ISO removed"
+
+echo "[i] Fixing base disk permissions..."
+sudo chown $USER:$USER "$BASE_DISK"
+sudo chmod 644 "$BASE_DISK"
+echo "[✓] Permissions fixed"
+
+echo "[i] Starting VM from installed system..."
+virsh --connect qemu:///system start $VM_NAME
+echo "[i] Waiting for boot..."
+sleep 30
+IP=$(virsh --connect qemu:///system domifaddr $VM_NAME 2>/dev/null | grep -oP '(\d+\.){3}\d+' | head -1)
+echo "[✓] Base VM is ready!"
+echo ""
+echo "Connect via:"
+echo " Console: virsh console $VM_NAME"
+echo " SSH: ssh root@$IP"
+echo " Password: archsetup"
+echo ""
+echo "To create a test clone:"
+echo " ./scripts/testing/run-test.sh"
diff --git a/scripts/testing/lib/finalize-base-vm.sh b/scripts/testing/lib/finalize-base-vm.sh
new file mode 100755
index 0000000..e3913ea
--- /dev/null
+++ b/scripts/testing/lib/finalize-base-vm.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+# Finalize base VM after installation
+VM_NAME="archsetup-base"
+echo "[i] Removing ISO from VM..."
+virsh change-media $VM_NAME sda --eject 2>/dev/null || true
+virsh change-media $VM_NAME hda --eject 2>/dev/null || true
+echo "[✓] ISO removed"
+echo "[i] Starting VM from installed system..."
+virsh start $VM_NAME
+echo "[i] Waiting for boot..."
+sleep 30
+IP=$(virsh domifaddr $VM_NAME 2>/dev/null | grep -oP '(\d+\.){3}\d+' | head -1)
+echo "[✓] Base VM is ready!"
+echo ""
+echo "Connect via:"
+echo " Console: virsh console $VM_NAME"
+echo " SSH: ssh root@$IP"
+echo " Password: archsetup"
+echo ""
+echo "To create a test clone:"
+echo " ./scripts/testing/run-test.sh"
diff --git a/scripts/testing/lib/logging.sh b/scripts/testing/lib/logging.sh
new file mode 100755
index 0000000..eda9eb1
--- /dev/null
+++ b/scripts/testing/lib/logging.sh
@@ -0,0 +1,151 @@
+#!/bin/bash
+# Logging utilities for archsetup testing
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+
+# Global log file (set by calling script)
+LOGFILE="${LOGFILE:-/tmp/archsetup-test.log}"
+
+# Initialize logging
+init_logging() {
+ local logfile="$1"
+ LOGFILE="$logfile"
+
+ # Create log directory if it doesn't exist
+ mkdir -p "$(dirname "$LOGFILE")"
+
+ # Initialize log file
+ echo "=== Test Log Started: $(date +'%Y-%m-%d %H:%M:%S') ===" > "$LOGFILE"
+ echo "" >> "$LOGFILE"
+}
+
+# Log message (to file and optionally stdout)
+log() {
+ local message="$1"
+ local timestamp
+ timestamp=$(date +'%Y-%m-%d %H:%M:%S')
+ echo "[$timestamp] $message" >> "$LOGFILE"
+}
+
+# Info message
+info() {
+ local message="$1"
+ echo "[i] $message"
+ log "INFO: $message"
+}
+
+# Success message
+success() {
+ local message="$1"
+    echo "[✓] $message"
+ log "SUCCESS: $message"
+}
+
+# Warning message
+warn() {
+ local message="$1"
+ echo "[!] $message"
+ log "WARNING: $message"
+}
+
+# Error message
+error() {
+ local message="$1"
+    echo "[✗] $message" >&2
+ log "ERROR: $message"
+}
+
+# Fatal error (exits script)
+fatal() {
+ local message="$1"
+ local exit_code="${2:-1}"
+    echo "[✗] FATAL: $message" >&2
+ log "FATAL: $message (exit code: $exit_code)"
+ exit "$exit_code"
+}
+
+# Section header
+section() {
+ local title="$1"
+ echo ""
+ echo "=== $title ==="
+ log "=== $title ==="
+}
+
+# Step message
+step() {
+ local message="$1"
+ echo " -> $message"
+ log " STEP: $message"
+}
+
+# Progress indicator (for long-running operations)
+progress() {
+ local message="$1"
+ echo " ... $message"
+ log " PROGRESS: $message"
+}
+
+# Clear progress line and show completion
+complete() {
+ local message="$1"
+    echo " [✓] $message"
+ log " COMPLETE: $message"
+}
+
+# Show command being executed (useful for debugging)
+show_cmd() {
+ local cmd="$1"
+ echo "$ $cmd"
+ log "CMD: $cmd"
+}
+
+# Separator line
+separator() {
+ echo "----------------------------------------"
+}
+
+# Summary statistics
+summary() {
+ local passed="$1"
+ local failed="$2"
+ local total=$((passed + failed))
+
+ echo ""
+ separator
+ section "Test Summary"
+ echo " Total: $total"
+ echo " Passed: $passed"
+ echo " Failed: $failed"
+ separator
+ echo ""
+
+ log "=== Test Summary ==="
+ log "Total: $total, Passed: $passed, Failed: $failed"
+}
+
+# Timer utilities
+declare -A TIMERS
+
+start_timer() {
+ local name="${1:-default}"
+ TIMERS[$name]=$(date +%s)
+ log "TIMER START: $name"
+}
+
+stop_timer() {
+ local name="${1:-default}"
+ local start=${TIMERS[$name]}
+ local end=$(date +%s)
+ local duration=$((end - start))
+ local mins=$((duration / 60))
+ local secs=$((duration % 60))
+
+ if [ $mins -gt 0 ]; then
+ echo " Time: ${mins}m ${secs}s"
+ log "TIMER STOP: $name (${mins}m ${secs}s)"
+ else
+ echo " Time: ${secs}s"
+ log "TIMER STOP: $name (${secs}s)"
+ fi
+}
diff --git a/scripts/testing/lib/network-diagnostics.sh b/scripts/testing/lib/network-diagnostics.sh
new file mode 100644
index 0000000..3f9735b
--- /dev/null
+++ b/scripts/testing/lib/network-diagnostics.sh
@@ -0,0 +1,60 @@
+#!/bin/bash
+# Network diagnostics for VM testing
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+
+# Note: logging.sh should already be sourced by the calling script
+
+# Run quick network diagnostics
+# Args: $1 = VM IP address or hostname
+run_network_diagnostics() {
+ local vm_host="$1"
+
+ section "Pre-flight Network Diagnostics"
+
+ # Test 1: Basic connectivity
+ step "Testing internet connectivity"
+ if sshpass -p 'archsetup' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$vm_host "ping -c 3 8.8.8.8 >/dev/null 2>&1"; then
+ success "Internet connectivity OK"
+ else
+ error "No internet connectivity"
+ return 1
+ fi
+
+ # Test 2: DNS resolution
+ step "Testing DNS resolution"
+ if sshpass -p 'archsetup' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$vm_host "nslookup archlinux.org >/dev/null 2>&1"; then
+ success "DNS resolution OK"
+ else
+ error "DNS resolution failed"
+ return 1
+ fi
+
+ # Test 3: Arch mirror accessibility
+ step "Testing Arch mirror access"
+ if sshpass -p 'archsetup' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$vm_host "curl -s -I https://mirrors.kernel.org/archlinux/ | head -1 | grep -qE '(200|301)'"; then
+ success "Arch mirrors accessible"
+ else
+ error "Cannot reach Arch mirrors"
+ return 1
+ fi
+
+ # Test 4: AUR accessibility
+ step "Testing AUR access"
+ if sshpass -p 'archsetup' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$vm_host "curl -s -I https://aur.archlinux.org/ | head -1 | grep -qE '(200|405)'"; then
+ success "AUR accessible"
+ else
+ error "Cannot reach AUR"
+ return 1
+ fi
+
+ # Show network info
+ info "Network configuration:"
+ sshpass -p 'archsetup' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$vm_host \
+ "ip addr show | grep 'inet ' | grep -v '127.0.0.1'" 2>/dev/null | while read line; do
+ info " $line"
+ done
+
+ success "Network diagnostics complete"
+ return 0
+}
diff --git a/scripts/testing/lib/vm-utils.sh b/scripts/testing/lib/vm-utils.sh
new file mode 100755
index 0000000..81aec33
--- /dev/null
+++ b/scripts/testing/lib/vm-utils.sh
@@ -0,0 +1,321 @@
+#!/bin/bash
+# VM management utilities for archsetup testing
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+
+# Note: logging.sh should already be sourced by the calling script
+
+# VM configuration defaults
+VM_CPUS="${VM_CPUS:-4}"
+VM_RAM="${VM_RAM:-8192}" # MB
+VM_DISK="${VM_DISK:-50}" # GB
+VM_NETWORK="${VM_NETWORK:-default}"
+LIBVIRT_URI="qemu:///system" # Use system session, not user session
+
+# Check if libvirt is running
+check_libvirt() {
+ if ! systemctl is-active --quiet libvirtd; then
+ error "libvirtd service is not running"
+ info "Start it with: sudo systemctl start libvirtd"
+ return 1
+ fi
+ return 0
+}
+
+# Check if user is in libvirt group
+check_libvirt_group() {
+ if ! groups | grep -q libvirt; then
+ warn "Current user is not in libvirt group"
+ info "Add yourself with: sudo usermod -a -G libvirt $USER"
+ info "Then log out and back in for changes to take effect"
+ return 1
+ fi
+ return 0
+}
+
+# Check if KVM is available
+check_kvm() {
+ if [ ! -e /dev/kvm ]; then
+ error "KVM is not available"
+ info "Check if virtualization is enabled in BIOS"
+ info "Load kvm module: sudo modprobe kvm-intel (or kvm-amd)"
+ return 1
+ fi
+ return 0
+}
+
+# Wait for VM to boot (check for SSH or serial console)
+wait_for_vm() {
+ local vm_name="$1"
+ local timeout="${2:-300}" # 5 minutes default
+ local elapsed=0
+
+ progress "Waiting for VM $vm_name to boot..."
+
+ while [ $elapsed -lt $timeout ]; do
+ if virsh --connect "$LIBVIRT_URI" domstate "$vm_name" 2>/dev/null | grep -q "running"; then
+ sleep 5
+ complete "VM $vm_name is running"
+ return 0
+ fi
+ sleep 2
+ elapsed=$((elapsed + 2))
+ done
+
+ error "Timeout waiting for VM $vm_name to boot"
+ return 1
+}
+
+# Check if VM exists
+vm_exists() {
+ local vm_name="$1"
+ virsh --connect "$LIBVIRT_URI" dominfo "$vm_name" &>/dev/null
+ return $?
+}
+
+# Check if VM is running
+vm_is_running() {
+ local vm_name="$1"
+ [ "$(virsh --connect "$LIBVIRT_URI" domstate "$vm_name" 2>/dev/null)" = "running" ]
+ return $?
+}
+
+# Start VM
+start_vm() {
+ local vm_name="$1"
+
+ if vm_is_running "$vm_name"; then
+ warn "VM $vm_name is already running"
+ return 0
+ fi
+
+ step "Starting VM: $vm_name"
+ if virsh --connect "$LIBVIRT_URI" start "$vm_name" >> "$LOGFILE" 2>&1; then
+ success "VM $vm_name started"
+ return 0
+ else
+ error "Failed to start VM $vm_name"
+ return 1
+ fi
+}
+
+# Stop VM gracefully
+stop_vm() {
+ local vm_name="$1"
+ local timeout="${2:-60}"
+
+ if ! vm_is_running "$vm_name"; then
+ info "VM $vm_name is not running"
+ return 0
+ fi
+
+ step "Shutting down VM: $vm_name"
+ if virsh --connect "$LIBVIRT_URI" shutdown "$vm_name" >> "$LOGFILE" 2>&1; then
+ # Wait for graceful shutdown
+ local elapsed=0
+ while [ $elapsed -lt $timeout ]; do
+ if ! vm_is_running "$vm_name"; then
+ success "VM $vm_name stopped gracefully"
+ return 0
+ fi
+ sleep 2
+ elapsed=$((elapsed + 2))
+ done
+
+ warn "VM $vm_name did not stop gracefully, forcing..."
+ virsh --connect "$LIBVIRT_URI" destroy "$vm_name" >> "$LOGFILE" 2>&1
+ fi
+
+ success "VM $vm_name stopped"
+ return 0
+}
+
+# Destroy VM (force stop)
+destroy_vm() {
+ local vm_name="$1"
+
+ if ! vm_exists "$vm_name"; then
+ info "VM $vm_name does not exist"
+ return 0
+ fi
+
+ step "Destroying VM: $vm_name"
+ if vm_is_running "$vm_name"; then
+ virsh --connect "$LIBVIRT_URI" destroy "$vm_name" >> "$LOGFILE" 2>&1
+ fi
+
+ virsh --connect "$LIBVIRT_URI" undefine "$vm_name" --nvram >> "$LOGFILE" 2>&1
+ success "VM $vm_name destroyed"
+ return 0
+}
+
+# Create snapshot
+create_snapshot() {
+ local vm_name="$1"
+ local snapshot_name="$2"
+
+ step "Creating snapshot: $snapshot_name"
+ if virsh --connect "$LIBVIRT_URI" snapshot-create-as "$vm_name" "$snapshot_name" >> "$LOGFILE" 2>&1; then
+ success "Snapshot $snapshot_name created"
+ return 0
+ else
+ error "Failed to create snapshot $snapshot_name"
+ return 1
+ fi
+}
+
+# Restore snapshot
+restore_snapshot() {
+ local vm_name="$1"
+ local snapshot_name="$2"
+
+ step "Restoring snapshot: $snapshot_name"
+ if virsh --connect "$LIBVIRT_URI" snapshot-revert "$vm_name" "$snapshot_name" >> "$LOGFILE" 2>&1; then
+ success "Snapshot $snapshot_name restored"
+ return 0
+ else
+ error "Failed to restore snapshot $snapshot_name"
+ return 1
+ fi
+}
+
+# Delete snapshot
+delete_snapshot() {
+ local vm_name="$1"
+ local snapshot_name="$2"
+
+ step "Deleting snapshot: $snapshot_name"
+ if virsh --connect "$LIBVIRT_URI" snapshot-delete "$vm_name" "$snapshot_name" >> "$LOGFILE" 2>&1; then
+ success "Snapshot $snapshot_name deleted"
+ return 0
+ else
+ error "Failed to delete snapshot $snapshot_name"
+ return 1
+ fi
+}
+
+# Clone disk image (full copy of the base image)
+clone_disk() {
+ local base_image="$1"
+ local new_image="$2"
+
+ if [ ! -f "$base_image" ]; then
+ error "Base image not found: $base_image"
+ return 1
+ fi
+
+ step "Cloning disk image (full copy)"
+ if qemu-img convert -f qcow2 -O qcow2 "$base_image" "$new_image" >> "$LOGFILE" 2>&1; then
+ success "Disk cloned: $new_image"
+ else
+ error "Failed to clone disk"
+ return 1
+ fi
+
+ # Truncate machine-id so systemd generates a new one on boot (avoids DHCP conflicts)
+ step "Clearing machine-id for unique network identity"
+ if guestfish -a "$new_image" -i truncate /etc/machine-id >> "$LOGFILE" 2>&1; then
+ success "Machine-ID cleared (will regenerate on boot)"
+ return 0
+ else
+ warn "Failed to clear machine-ID (guestfish failed)"
+ info "Network may conflict with base VM if both run simultaneously"
+ return 0 # Don't fail the whole operation
+ fi
+}
+
+# Get VM IP address (requires guest agent or DHCP lease)
+get_vm_ip() {
+ local vm_name="$1"
+
+ # Try guest agent first
+ local ip
+ ip=$(virsh --connect "$LIBVIRT_URI" domifaddr "$vm_name" 2>/dev/null | grep -oP '(\d+\.){3}\d+' | head -1)
+
+ if [ -n "$ip" ]; then
+ echo "$ip"
+ return 0
+ fi
+
+ # Fall back to DHCP leases
+ local mac
+ mac=$(virsh --connect "$LIBVIRT_URI" domiflist "$vm_name" | grep -oP '([0-9a-f]{2}:){5}[0-9a-f]{2}' | head -1)
+
+ if [ -n "$mac" ]; then
+ ip=$(grep "$mac" /var/lib/libvirt/dnsmasq/default.leases 2>/dev/null | awk '{print $3}')
+ if [ -n "$ip" ]; then
+ echo "$ip"
+ return 0
+ fi
+ fi
+
+ return 1
+}
+
+# Execute command in VM via SSH
+vm_exec() {
+ local vm_name="$1"
+ shift
+ local cmd="$*"
+
+ local ip
+ ip=$(get_vm_ip "$vm_name")
+
+ if [ -z "$ip" ]; then
+ error "Could not get IP address for VM $vm_name"
+ return 1
+ fi
+
+ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$ip" "$cmd" 2>> "$LOGFILE"
+}
+
+# Copy file to VM
+copy_to_vm() {
+ local vm_name="$1"
+ local local_file="$2"
+ local remote_path="$3"
+
+ local ip
+ ip=$(get_vm_ip "$vm_name")
+
+ if [ -z "$ip" ]; then
+ error "Could not get IP address for VM $vm_name"
+ return 1
+ fi
+
+ step "Copying $local_file to VM"
+ if scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "$local_file" "root@$ip:$remote_path" >> "$LOGFILE" 2>&1; then
+ success "File copied to VM"
+ return 0
+ else
+ error "Failed to copy file to VM"
+ return 1
+ fi
+}
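+# Illustrative usage (paths hypothetical):
+#   copy_to_vm "archsetup-base" "./archsetup" "/root/archsetup"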
+
+# Copy file from VM
+copy_from_vm() {
+ local vm_name="$1"
+ local remote_file="$2"
+ local local_path="$3"
+
+ local ip
+ ip=$(get_vm_ip "$vm_name")
+
+ if [ -z "$ip" ]; then
+ error "Could not get IP address for VM $vm_name"
+ return 1
+ fi
+
+ step "Copying $remote_file from VM"
+ if scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$ip:$remote_file" "$local_path" >> "$LOGFILE" 2>&1; then
+ success "File copied from VM"
+ return 0
+ else
+ error "Failed to copy file from VM"
+ return 1
+ fi
+}
diff --git a/scripts/testing/run-test.sh b/scripts/testing/run-test.sh
new file mode 100755
index 0000000..4bcb55b
--- /dev/null
+++ b/scripts/testing/run-test.sh
@@ -0,0 +1,382 @@
+#!/bin/bash
+# Run archsetup test in a VM using snapshots
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+#
+# This script:
+# 1. Reverts base VM to clean snapshot
+# 2. Starts the base VM
+# 3. Transfers archsetup and dotfiles
+# 4. Executes archsetup in the VM
+# 5. Captures logs and validates results
+# 6. Generates test report
+# 7. Reverts to clean snapshot for next run
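+#
+# Example invocations (flags are documented in the usage text below):
+#   ./scripts/testing/run-test.sh --skip-slow-packages
+#   ./scripts/testing/run-test.sh --keep --snapshot clean-install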
+
+set -e
+
+# Get script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+# Source utilities
+source "$SCRIPT_DIR/lib/logging.sh"
+source "$SCRIPT_DIR/lib/vm-utils.sh"
+source "$SCRIPT_DIR/lib/network-diagnostics.sh"
+
+# Parse arguments
+KEEP_VM=false
+ARCHSETUP_SCRIPT="$PROJECT_ROOT/archsetup"
+SNAPSHOT_NAME="clean-install"
+SKIP_SLOW_PACKAGES=false
+
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --keep)
+ KEEP_VM=true
+ shift
+ ;;
+ --script)
+ ARCHSETUP_SCRIPT="$2"
+ shift 2
+ ;;
+ --snapshot)
+ SNAPSHOT_NAME="$2"
+ shift 2
+ ;;
+ --skip-slow-packages)
+ SKIP_SLOW_PACKAGES=true
+ shift
+ ;;
+ *)
+ echo "Usage: $0 [--keep] [--script /path/to/archsetup] [--snapshot name] [--skip-slow-packages]"
+ echo " --keep Keep VM in post-test state (for debugging)"
+ echo " --script Specify custom archsetup script to test"
+ echo " --snapshot Snapshot name to revert to (default: clean-install)"
+ echo " --skip-slow-packages Skip slow packages (texlive-meta, topgrade) for faster testing"
+ exit 1
+ ;;
+ esac
+done
+
+# Configuration
+TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
+VM_NAME="archsetup-base"
+TEST_RESULTS_DIR="$PROJECT_ROOT/test-results/$TIMESTAMP"
+ROOT_PASSWORD="archsetup"
+
+# Initialize logging
+mkdir -p "$TEST_RESULTS_DIR"
+LOGFILE="$TEST_RESULTS_DIR/test.log"
+init_logging "$LOGFILE"
+
+section "ArchSetup Test Run: $TIMESTAMP"
+
+# Verify archsetup script exists
+if [ ! -f "$ARCHSETUP_SCRIPT" ]; then
+ fatal "ArchSetup script not found: $ARCHSETUP_SCRIPT"
+fi
+
+# Check if VM exists
+if ! vm_exists "$VM_NAME"; then
+    error "Base VM not found: $VM_NAME"
+    info "Create it first: ./scripts/testing/create-base-vm.sh"
+    fatal "Cannot continue without the base VM"
+fi
+
+# Check if snapshot exists
+section "Preparing Test Environment"
+
+step "Checking for snapshot: $SNAPSHOT_NAME"
+if ! virsh --connect "$LIBVIRT_URI" snapshot-list "$VM_NAME" --name 2>/dev/null | grep -q "^$SNAPSHOT_NAME$"; then
+    error "Snapshot '$SNAPSHOT_NAME' not found on VM $VM_NAME"
+    info "Available snapshots:"
+    virsh --connect "$LIBVIRT_URI" snapshot-list "$VM_NAME" 2>/dev/null || info "  (none)"
+    info ""
+    info "Create snapshot with:"
+    info "  virsh snapshot-create-as $VM_NAME $SNAPSHOT_NAME --description 'Clean Arch install'"
+    fatal "Cannot continue without snapshot '$SNAPSHOT_NAME'"
+fi
+success "Snapshot $SNAPSHOT_NAME exists"
+
+# Shut down VM if running
+if vm_is_running "$VM_NAME"; then
+ warn "VM $VM_NAME is currently running - shutting down for snapshot revert"
+ stop_vm "$VM_NAME"
+fi
+
+# Revert to clean snapshot
+step "Reverting to snapshot: $SNAPSHOT_NAME"
+if restore_snapshot "$VM_NAME" "$SNAPSHOT_NAME"; then
+ success "Reverted to clean state"
+else
+ fatal "Failed to revert snapshot"
+fi
+
+# Start VM
+start_timer "boot"
+step "Starting VM and waiting for SSH..."
+if ! start_vm "$VM_NAME"; then
+ fatal "Failed to start VM"
+fi
+
+sleep 10 # Give VM time to boot
+
+# Get VM IP address
+VM_IP=""
+for i in {1..30}; do
+ VM_IP=$(get_vm_ip "$VM_NAME" 2>/dev/null || true)
+ if [ -n "$VM_IP" ]; then
+ break
+ fi
+ sleep 2
+done
+
+if [ -z "$VM_IP" ]; then
+ error "Could not get VM IP address"
+ info "VM may not have booted correctly"
+ fatal "VM boot failed"
+fi
+
+success "VM is running at $VM_IP"
+stop_timer "boot"
+
+# Wait for SSH
+step "Waiting for SSH to become available..."
+SSH_READY=false
+for i in {1..60}; do
+    if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+        -o ConnectTimeout=2 "root@$VM_IP" "echo connected" &>/dev/null; then
+        SSH_READY=true
+        break
+    fi
+    sleep 2
+done
+
+if ! $SSH_READY; then
+    fatal "SSH did not become available within 120 seconds"
+fi
+
+success "SSH is available"
+
+# Run network diagnostics
+if ! run_network_diagnostics "$VM_IP"; then
+ fatal "Network diagnostics failed - aborting test"
+fi
+
+# Transfer files to VM (simulating git clone)
+section "Simulating Git Clone"
+
+step "Creating shallow git clone on VM"
+info "This simulates: git clone --depth 1 <repo> /home/cjennings/code/archsetup"
+
+# Create a temporary git bundle from current repo
+BUNDLE_FILE=$(mktemp)
+git bundle create "$BUNDLE_FILE" HEAD >> "$LOGFILE" 2>&1
+
+# Transfer bundle and extract on VM
+sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "rm -rf /tmp/archsetup-test && mkdir -p /tmp/archsetup-test" >> "$LOGFILE" 2>&1
+
+sshpass -p "$ROOT_PASSWORD" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "$BUNDLE_FILE" "root@$VM_IP:/tmp/archsetup.bundle" >> "$LOGFILE" 2>&1
+
+# Clone from bundle on VM (simulates git clone)
+sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "cd /tmp && git clone --depth 1 /tmp/archsetup.bundle archsetup-test && rm /tmp/archsetup.bundle" >> "$LOGFILE" 2>&1
+
+rm -f "$BUNDLE_FILE"
+success "Repository cloned to VM (simulating git clone --depth 1)"
+
+# Execute archsetup
+section "Executing ArchSetup"
+
+start_timer "archsetup"
+step "Starting archsetup script in detached session on VM..."
+info "This will take 30-60 minutes depending on network speed"
+info "Log file: $LOGFILE"
+
+# Start archsetup in a detached session on the VM (resilient to SSH disconnections)
+REMOTE_LOG="/tmp/archsetup-test/archsetup-output.log"
+ARCHSETUP_ARGS=""
+if $SKIP_SLOW_PACKAGES; then
+ ARCHSETUP_ARGS="--skip-slow-packages"
+ info "Running archsetup with --skip-slow-packages flag"
+fi
+sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "cd /tmp/archsetup-test && nohup bash archsetup $ARCHSETUP_ARGS > $REMOTE_LOG 2>&1 & echo \$!" \
+ >> "$LOGFILE" 2>&1
+
+if [ $? -ne 0 ]; then
+ fatal "Failed to start archsetup on VM"
+fi
+
+success "ArchSetup started in background on VM"
+
+# Poll for completion
+step "Monitoring archsetup progress (polling every 30 seconds)..."
+POLL_COUNT=0
+MAX_POLLS=180 # 90 minutes max (180 * 30 seconds)
+
+while [ $POLL_COUNT -lt $MAX_POLLS ]; do
+    # Check whether the archsetup process is still running.
+    # The '[b]ash' bracket pattern keeps grep from matching its own command
+    # line (a plain 'pgrep -f archsetup' would match the remote shell running
+    # this check and loop forever).
+ if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "ps aux | grep '[b]ash archsetup' > /dev/null" 2>/dev/null; then
+ # Still running, wait and continue
+ sleep 30
+ POLL_COUNT=$((POLL_COUNT + 1))
+
+ # Show progress every 5 minutes
+ if [ $((POLL_COUNT % 10)) -eq 0 ]; then
+ ELAPSED_MINS=$((POLL_COUNT / 2))
+ info "Still running... ($ELAPSED_MINS minutes elapsed)"
+ fi
+ else
+ # Process finished
+ break
+ fi
+done
+
+if [ $POLL_COUNT -ge $MAX_POLLS ]; then
+ error "ArchSetup timed out after 90 minutes"
+ ARCHSETUP_EXIT_CODE=124
+else
+    # Determine success from the ARCHSETUP_EXECUTION_COMPLETE marker in the
+    # on-VM log; the detached process's real exit code is not recoverable here
+    step "Checking for archsetup completion marker..."
+ ARCHSETUP_EXIT_CODE=$(sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "grep -q 'ARCHSETUP_EXECUTION_COMPLETE' /var/log/archsetup-*.log 2>/dev/null && echo 0 || echo 1" 2>/dev/null)
+
+ if [ "$ARCHSETUP_EXIT_CODE" = "0" ]; then
+ success "ArchSetup completed successfully"
+ else
+ error "ArchSetup may have encountered errors (check logs)"
+ fi
+fi
+
+# Copy the remote output log
+step "Retrieving archsetup output from VM..."
+sshpass -p "$ROOT_PASSWORD" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP:$REMOTE_LOG" "$TEST_RESULTS_DIR/archsetup-output.log" 2>> "$LOGFILE" || \
+ warn "Could not copy remote output log"
+
+# Append remote output to main test log
+if [ -f "$TEST_RESULTS_DIR/archsetup-output.log" ]; then
+ cat "$TEST_RESULTS_DIR/archsetup-output.log" >> "$LOGFILE"
+fi
+
+stop_timer "archsetup"
+
+# Capture logs and artifacts from VM
+section "Capturing Test Artifacts"
+
+step "Copying archsetup log from VM"
+sshpass -p "$ROOT_PASSWORD" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP:/var/log/archsetup-*.log" "$TEST_RESULTS_DIR/" 2>> "$LOGFILE" || \
+ warn "Could not copy archsetup log"
+
+step "Copying package lists from VM"
+sshpass -p "$ROOT_PASSWORD" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP:/root/.local/src/archsetup-*.txt" "$TEST_RESULTS_DIR/" 2>> "$LOGFILE" || \
+ warn "Could not copy package lists"
+
+# Run validation
+section "Validating Installation"
+
+VALIDATION_PASSED=true
+
+# Check if user was created
+step "Checking if user 'cjennings' was created"
+if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "id cjennings" &>> "$LOGFILE"; then
+ success "User cjennings exists"
+else
+ error "User cjennings not found"
+ VALIDATION_PASSED=false
+fi
+
+# Check if dotfiles were stowed
+step "Checking if dotfiles are stowed"
+if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "test -L /home/cjennings/.profile" &>> "$LOGFILE"; then
+ success "Dotfiles appear to be stowed"
+else
+ warn "Dotfiles may not be properly stowed"
+ VALIDATION_PASSED=false
+fi
+
+# Check if yay is installed
+step "Checking if yay (AUR helper) is installed"
+if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "/usr/bin/which yay" &>> "$LOGFILE"; then
+ success "yay is installed"
+else
+ error "yay not found"
+ VALIDATION_PASSED=false
+fi
+
+# Check if DWM was built
+step "Checking if DWM is installed"
+if sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+ "root@$VM_IP" "test -f /usr/local/bin/dwm" &>> "$LOGFILE"; then
+ success "DWM is installed"
+else
+ error "DWM not found"
+ VALIDATION_PASSED=false
+fi
+
+# Generate test report
+section "Generating Test Report"
+
+REPORT_FILE="$TEST_RESULTS_DIR/test-report.txt"
+cat > "$REPORT_FILE" << EOFREPORT
+========================================
+ArchSetup Test Report
+========================================
+
+Test ID: $TIMESTAMP
+Date: $(date +'%Y-%m-%d %H:%M:%S')
+Test Method: Snapshot-based
+
+VM Configuration:
+ Name: $VM_NAME
+ IP: $VM_IP
+ Snapshot: $SNAPSHOT_NAME
+
+Results:
+ ArchSetup Exit Code: $ARCHSETUP_EXIT_CODE
+ Validation: $(if $VALIDATION_PASSED; then echo "PASSED"; else echo "FAILED"; fi)
+
+Artifacts:
+ Log file: $LOGFILE
+ Report: $REPORT_FILE
+ Results: $TEST_RESULTS_DIR/
+
+EOFREPORT
+
+info "Test report saved: $REPORT_FILE"
+
+# Cleanup or keep VM
+section "Cleanup"
+
+if $KEEP_VM; then
+ info "VM is still running in post-test state (--keep flag was used)"
+ info "Connect with:"
+ info " Console: virsh console $VM_NAME"
+ info " SSH: ssh root@$VM_IP"
+ info ""
+ info "To revert to clean state when done:"
+ info " virsh shutdown $VM_NAME"
+ info " virsh snapshot-revert $VM_NAME $SNAPSHOT_NAME"
+else
+ step "Shutting down VM and reverting to clean snapshot"
+ stop_vm "$VM_NAME"
+ if restore_snapshot "$VM_NAME" "$SNAPSHOT_NAME"; then
+ success "VM reverted to clean state"
+ else
+ warn "Failed to revert snapshot - VM may be in modified state"
+ fi
+fi
+
+# Final summary
+section "Test Complete"
+
+if [ "${ARCHSETUP_EXIT_CODE:-1}" -eq 0 ] && $VALIDATION_PASSED; then
+ success "TEST PASSED"
+ exit 0
+else
+ error "TEST FAILED"
+ info "Check logs in: $TEST_RESULTS_DIR"
+ exit 1
+fi
diff --git a/scripts/testing/setup-testing-env.sh b/scripts/testing/setup-testing-env.sh
new file mode 100755
index 0000000..e682553
--- /dev/null
+++ b/scripts/testing/setup-testing-env.sh
@@ -0,0 +1,191 @@
+#!/bin/bash
+# Setup testing environment for archsetup
+# Author: Craig Jennings <craigmartinjennings@gmail.com>
+# License: GNU GPLv3
+#
+# This script performs one-time setup of the testing infrastructure:
+# - Installs QEMU/KVM and libvirt
+# - Configures libvirt networking
+# - Adds user to libvirt group
+# - Verifies KVM support
+# - Creates directories for test artifacts
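+#
+# Run once per test host from the repository root, e.g.:
+#   ./scripts/testing/setup-testing-env.sh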
+
+set -e
+
+# Get script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+# Source utilities
+source "$SCRIPT_DIR/lib/logging.sh"
+
+# Initialize logging (ensure the test-results directory exists first)
+mkdir -p "$PROJECT_ROOT/test-results"
+LOGFILE="$PROJECT_ROOT/test-results/setup-$(date +'%Y%m%d-%H%M%S').log"
+init_logging "$LOGFILE"
+
+section "ArchSetup Testing Environment Setup"
+
+# Check if running on Arch Linux
+if [ ! -f /etc/arch-release ]; then
+ fatal "This script is designed for Arch Linux"
+fi
+
+# Check if user has sudo
+if ! sudo -n true 2>/dev/null; then
+ warn "This script requires sudo access"
+ info "You may be prompted for your password"
+fi
+
+# Install required packages
+section "Installing Required Packages"
+
+PACKAGES=(
+ qemu-full
+ libvirt
+ virt-manager
+ dnsmasq
+ bridge-utils
+ iptables
+ virt-install
+ libguestfs
+)
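+# qemu-full/libvirt/virt-install/virt-manager provide the hypervisor tooling;
+# dnsmasq, bridge-utils, and iptables back the default NAT network; libguestfs
+# supplies guestfish, which clone_disk uses to reset /etc/machine-id.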
+
+for pkg in "${PACKAGES[@]}"; do
+ if pacman -Qi "$pkg" &>/dev/null; then
+ info "$pkg is already installed"
+ else
+ step "Installing $pkg"
+ if sudo pacman -S --noconfirm "$pkg" >> "$LOGFILE" 2>&1; then
+ success "$pkg installed"
+ else
+ error "Failed to install $pkg"
+ fatal "Package installation failed"
+ fi
+ fi
+done
+
+# Enable and start libvirt service
+section "Configuring libvirt Service"
+
+step "Enabling libvirtd service"
+if sudo systemctl enable libvirtd.service >> "$LOGFILE" 2>&1; then
+ success "libvirtd service enabled"
+else
+ warn "Failed to enable libvirtd service (may already be enabled)"
+fi
+
+step "Starting libvirtd service"
+if sudo systemctl start libvirtd.service >> "$LOGFILE" 2>&1; then
+ success "libvirtd service started"
+else
+ if sudo systemctl is-active --quiet libvirtd.service; then
+ info "libvirtd service is already running"
+ else
+ error "Failed to start libvirtd service"
+ fatal "Service startup failed"
+ fi
+fi
+
+# Add user to libvirt group
+section "Configuring User Permissions"
+
+if groups | tr ' ' '\n' | grep -qx libvirt; then
+ success "User $USER is already in libvirt group"
+else
+ step "Adding user $USER to libvirt group"
+ if sudo usermod -a -G libvirt "$USER" >> "$LOGFILE" 2>&1; then
+ success "User added to libvirt group"
+ warn "You must log out and back in for group membership to take effect"
+ warn "After logging back in, re-run this script to continue"
+ exit 0
+ else
+ error "Failed to add user to libvirt group"
+ fatal "User configuration failed"
+ fi
+fi
+
+# Verify KVM support
+section "Verifying KVM Support"
+
+if [ -e /dev/kvm ]; then
+ success "KVM is available"
+else
+ error "KVM is not available"
+ info "Check if virtualization is enabled in BIOS"
+ info "Load kvm module: sudo modprobe kvm-intel (or kvm-amd)"
+ fatal "KVM not available"
+fi
+
+# Check which KVM module is loaded
+if lsmod | grep -q kvm_intel; then
+ info "Using Intel KVM"
+elif lsmod | grep -q kvm_amd; then
+ info "Using AMD KVM"
+else
+ warn "No KVM module detected"
+ info "Load with: sudo modprobe kvm-intel (or kvm-amd)"
+fi
+
+# Create directory structure
+section "Creating Directory Structure"
+
+DIRS=(
+ "$PROJECT_ROOT/vm-images"
+ "$PROJECT_ROOT/test-results"
+)
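+# vm-images/ holds base qcow2 images (created by create-base-vm.sh);
+# test-results/ collects per-run logs and reports (run-test.sh and this script).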
+
+for dir in "${DIRS[@]}"; do
+ if [ -d "$dir" ]; then
+ info "Directory exists: $dir"
+ else
+ step "Creating directory: $dir"
+ if mkdir -p "$dir" 2>> "$LOGFILE"; then
+ success "Directory created: $dir"
+ else
+ error "Failed to create directory: $dir"
+ fi
+ fi
+done
+
+# Configure default libvirt network
+section "Configuring libvirt Network"
+
+if virsh net-info default &>/dev/null; then
+ info "Default network exists"
+
+ if virsh net-info default | grep -q "Active:.*yes"; then
+ success "Default network is active"
+ else
+ step "Starting default network"
+ if virsh net-start default >> "$LOGFILE" 2>&1; then
+ success "Default network started"
+ else
+ error "Failed to start default network"
+ fi
+ fi
+
+ if virsh net-info default | grep -q "Autostart:.*yes"; then
+ info "Default network autostart is enabled"
+ else
+ step "Enabling default network autostart"
+ if virsh net-autostart default >> "$LOGFILE" 2>&1; then
+ success "Default network autostart enabled"
+ else
+ warn "Failed to enable default network autostart"
+ fi
+ fi
+else
+ error "Default network not found"
+ info "This is unusual - libvirt should create it automatically"
+fi
+
+# Summary
+section "Setup Complete"
+
+success "Testing environment is ready"
+info ""
+info "Next steps:"
+info " 1. Create base VM: ./scripts/testing/create-base-vm.sh"
+info " 2. Run a test: ./scripts/testing/run-test.sh"
+info ""
+info "Log file: $LOGFILE"