================================================================================ ARCHZFS RESCUE GUIDE ================================================================================ This guide covers common rescue and recovery scenarios. For quick command reference, use: tldr Table of Contents: 1. ZFS Recovery 2. Data Recovery 3. Boot Repair 4. Windows Recovery 5. Hardware Diagnostics 6. Disk Operations 7. Network Troubleshooting 8. Encryption & GPG 9. System Tracing (eBPF/bpftrace) 10. Terminal Web Browsing ================================================================================ 1. ZFS RECOVERY ================================================================================ QUICK REFERENCE --------------- tldr zfs # ZFS filesystem commands tldr zpool # ZFS pool commands man zfs # Full ZFS manual man zpool # Full zpool manual SCENARIO: Import a pool from another system ------------------------------------------- List pools available for import: zpool import Import a specific pool: zpool import poolname If the pool was not cleanly exported (e.g., system crash): zpool import -f poolname Import with a different name (to avoid conflicts): zpool import oldname newname SCENARIO: Pool won't import - "pool may be in use" -------------------------------------------------- Force import (use when you know it's safe): zpool import -f poolname If that fails, try recovery mode: zpool import -F poolname Last resort - import read-only to recover data: zpool import -o readonly=on poolname SCENARIO: Check pool health and repair -------------------------------------- Check pool status: zpool status poolname Start a scrub (checks all data, can take hours): zpool scrub poolname Check scrub progress: zpool status poolname Clear transient errors after fixing hardware: zpool clear poolname SCENARIO: Recover from snapshot / Rollback ------------------------------------------ List all snapshots: zfs list -t snapshot Rollback to a snapshot (destroys changes since snapshot): zfs 
rollback poolname/dataset@snapshot

If newer snapshots exist after the target, add -r - the rollback then
also destroys every snapshot more recent than the target:

zfs rollback -r poolname/dataset@snapshot

SCENARIO: Copy data from ZFS pool
---------------------------------
Mount datasets if not auto-mounted:

zfs mount -a

Or mount a specific dataset:

zfs set mountpoint=/mnt/recovery poolname/dataset
zfs mount poolname/dataset

Copy with rsync (preserves permissions, shows progress; -P implies
--progress):

rsync -avP /mnt/recovery/ /destination/

SCENARIO: Send/Receive snapshots (backup/migrate)
-------------------------------------------------
Create a snapshot first:

zfs snapshot poolname/dataset@backup

Send to a file (local backup):

zfs send poolname/dataset@backup > /path/to/backup.zfs

Send with progress indicator:

zfs send poolname/dataset@backup | pv > /path/to/backup.zfs

Send to another pool locally:

zfs send poolname/dataset@backup | zfs recv newpool/dataset

Send to a remote system over SSH:

zfs send poolname/dataset@backup | ssh user@remote zfs recv pool/dataset

With progress and buffering for network transfers:

zfs send poolname/dataset@backup | pv | mbuffer -s 128k -m 1G | \
    ssh user@remote "mbuffer -s 128k -m 1G | zfs recv pool/dataset"

SCENARIO: Encrypted pool - unlock and mount
-------------------------------------------
Load the encryption key (will prompt for passphrase):

zfs load-key poolname

Or for all encrypted datasets:

zfs load-key -a

Then mount:

zfs mount -a

SCENARIO: Replace failed drive in mirror/raidz
----------------------------------------------
Check which drive failed:

zpool status poolname

Replace the drive (assuming /dev/sdc is the new drive):

zpool replace poolname /dev/old-drive /dev/sdc

Monitor resilver progress:

zpool status poolname

SCENARIO: See what's using a dataset (before unmount)
-----------------------------------------------------
Check which processes have files open:

lsof /mountpoint

Or for all ZFS mounts:

lsof | grep poolname

USEFUL ZFS COMMANDS
-------------------
zpool status                # Pool health overview
zpool list                  # Pool capacity
zpool history poolname      # Command history
zfs list                    # All datasets
zfs list -t snapshot        # All snapshots
zfs get all poolname        # All properties
zdb -l /dev/sdX             # Low-level pool label info

================================================================================
2. DATA RECOVERY
================================================================================

QUICK REFERENCE
---------------
tldr ddrescue               # Clone failing drives
tldr testdisk               # Partition/file recovery
tldr photorec               # Recover deleted files by type
tldr smartctl               # Check drive health

FIRST: Assess drive health before recovery
------------------------------------------
Check if the drive is failing (SMART data):

smartctl -H /dev/sdX        # Quick health check
smartctl -a /dev/sdX        # Full SMART report

Key things to look for:
- "PASSED" vs "FAILED" health status
- Reallocated_Sector_Ct - bad sectors remapped (increasing = dying)
- Current_Pending_Sector - sectors waiting to be remapped
- Offline_Uncorrectable - sectors that couldn't be read

If SMART shows problems, STOP and use ddrescue immediately. Do not run
fsck or other tools that write to a failing drive.

SCENARIO: Clone a failing drive (CRITICAL - do this first!)
-----------------------------------------------------------
Golden rule: NEVER work directly on a failing drive. Clone it first,
then recover from the clone.

Clone to an image file (safest):

ddrescue -d -r3 /dev/sdX /path/to/image.img /path/to/logfile.log

-d      = direct I/O, bypass cache
-r3     = retry bad sectors 3 times
logfile = allows resuming if interrupted

Clone to another drive:

ddrescue -d -r3 /dev/sdX /dev/sdY /path/to/logfile.log

ddrescue prints its own progress (rescued size, rate, bad areas) while
it runs, so there is no need to pipe through pv. Note that the output
must be a seekable file or device - ddrescue cannot write to a pipe.

Resume an interrupted clone:

ddrescue -d -r3 /dev/sdX /path/to/image.img /path/to/logfile.log

The log file tracks what's been copied. Same command resumes.
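The log file (called a "mapfile" in newer ddrescue releases) is plain
text, so you can check how a long-running rescue is doing from a script.
A minimal sketch - summarize_mapfile is a hypothetical helper name, and
it assumes bash so that arithmetic accepts the 0x-prefixed hex sizes:

```shell
#!/usr/bin/env bash
# summarize_mapfile: total the bytes a ddrescue mapfile marks as
# finished ('+') vs everything still bad or untried ('-', '*', '/', '?').
# Data lines have the form "pos size status"; comment lines start with
# '#', and the single current-position line has only two fields.
summarize_mapfile() {
  local rescued=0 bad=0 pos size status
  while read -r pos size status; do
    case "$pos" in '#'*|'') continue ;; esac   # skip comments and blanks
    [ -z "$status" ] && continue               # skip the current-position line
    if [ "$status" = "+" ]; then
      rescued=$((rescued + size))              # bash handles 0x... hex here
    else
      bad=$((bad + size))
    fi
  done < "$1"
  echo "rescued=$rescued bad=$bad"
}
```

Run it against the log file of an interrupted clone to decide whether
another -r retry pass is worth the wear on the drive.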
If drive is very bad, do a quick pass first, then retry bad sectors: ddrescue -d -n /dev/sdX image.img logfile.log # Fast pass, skip errors ddrescue -d -r3 /dev/sdX image.img logfile.log # Retry bad sectors SCENARIO: Recover deleted files (PhotoRec) ------------------------------------------ PhotoRec recovers files by their content signatures, not filesystem. Works even if filesystem is damaged or reformatted. Run PhotoRec (included with testdisk): photorec /dev/sdX # From device photorec image.img # From disk image Interactive steps: 1. Select the disk/partition 2. Choose filesystem type (usually "Other" for FAT/NTFS/exFAT) 3. Choose "Free" (unallocated) or "Whole" (entire partition) 4. Select destination folder for recovered files 5. Wait (can take hours for large drives) Recovered files are named by type (e.g., f0001234.jpg) in recup_dir.*/ SCENARIO: Recover lost partition / Fix partition table ------------------------------------------------------ TestDisk can find and recover lost partitions. Run TestDisk: testdisk /dev/sdX # From device testdisk image.img # From disk image Interactive steps: 1. Select disk 2. Select partition table type (usually Intel/PC for MBR, EFI GPT) 3. Choose "Analyse" to scan for partitions 4. "Quick Search" finds most partitions 5. "Deeper Search" if quick search misses any 6. Review found partitions, select ones to recover 7. "Write" to save new partition table (or just note the info) TestDisk can also: - Recover deleted files from FAT/NTFS/ext filesystems - Repair FAT/NTFS boot sectors - Rebuild NTFS MFT SCENARIO: Recover specific file types (Foremost) ------------------------------------------------ Foremost carves files based on headers/footers. Useful when PhotoRec doesn't find what you need. 
Basic usage: foremost -t all -i /dev/sdX -o /output/dir foremost -t all -i image.img -o /output/dir Specific file types: foremost -t jpg,png,gif -i image.img -o /output/dir foremost -t pdf,doc,xls -i image.img -o /output/dir Supported types: jpg, gif, png, bmp, avi, exe, mpg, wav, riff, wmv, mov, pdf, ole (doc/xls/ppt), doc, zip, rar, htm, cpp, all SCENARIO: Can't mount filesystem - try repair ---------------------------------------------- WARNING: Only run fsck on a COPY, not the original failing drive! For ext2/ext3/ext4: fsck.ext4 -n /dev/sdX # Check only, no changes (safe) fsck.ext4 -p /dev/sdX # Auto-repair safe problems fsck.ext4 -y /dev/sdX # Say yes to all repairs (risky) For NTFS: ntfsfix /dev/sdX # Fix common NTFS issues For XFS: xfs_repair -n /dev/sdX # Check only xfs_repair /dev/sdX # Repair For FAT32: fsck.fat -n /dev/sdX # Check only fsck.fat -a /dev/sdX # Auto-repair SCENARIO: Mount a disk image for file access --------------------------------------------- Mount a full disk image (find partitions first): fdisk -l image.img # List partitions and offsets Note the "Start" sector of the partition you want, multiply by 512: mount -o loop,offset=$((START*512)) image.img /mnt/recovery Or use losetup to set up loop devices for all partitions: losetup -P /dev/loop0 image.img mount /dev/loop0p1 /mnt/recovery For NTFS images: mount -t ntfs-3g -o loop,offset=$((START*512)) image.img /mnt/recovery SCENARIO: Low-level recovery from very bad drives (safecopy) ------------------------------------------------------------ Safecopy is more aggressive than ddrescue for very damaged media. Use when ddrescue can't make progress. safecopy /dev/sdX image.img With multiple passes (increasingly aggressive): safecopy --stage1 /dev/sdX image.img # Quick pass safecopy --stage2 /dev/sdX image.img # Retry errors safecopy --stage3 /dev/sdX image.img # Maximum recovery DATA RECOVERY TIPS ------------------ 1. STOP using a failing drive immediately - every access risks more damage 2. 
Clone first, recover from clone - never work on original 3. Keep the log file from ddrescue - allows resuming 4. Recover to a DIFFERENT drive - never same drive 5. For deleted files on working drive, unmount immediately to prevent overwriting the deleted data 6. If drive makes clicking/grinding noises, consider professional recovery 7. For SSDs, TRIM may have already zeroed deleted blocks - recovery harder ================================================================================ 3. BOOT REPAIR ================================================================================ QUICK REFERENCE --------------- tldr grub-install # Install GRUB bootloader tldr efibootmgr # Manage UEFI boot entries tldr arch-chroot # Chroot into installed system man mkinitcpio # Rebuild initramfs FIRST: Identify your boot mode ------------------------------ Check if system is UEFI or Legacy BIOS: ls /sys/firmware/efi # If exists, you're in UEFI mode If booting from this rescue USB in UEFI mode, you need to fix UEFI. If booting in Legacy mode, you need to fix MBR/Legacy boot. SCENARIO: Chroot into broken system (preparation for most repairs) ------------------------------------------------------------------ This is the foundation for most boot repairs. 1. Find your partitions: lsblk -f # Shows filesystems and labels 2. Mount the root filesystem: mount /dev/sdX2 /mnt # Replace with your root partition For ZFS root: zpool import -R /mnt zroot zfs mount -a 3. Mount required system directories: mount /dev/sdX1 /mnt/boot # EFI partition (if separate) mount --bind /dev /mnt/dev mount --bind /proc /mnt/proc mount --bind /sys /mnt/sys mount --bind /sys/firmware/efi/efivars /mnt/sys/firmware/efi/efivars Or use arch-chroot (handles mounts automatically): arch-chroot /mnt 4. Now you can run commands as if booted into the system. 
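A missing /dev or /proc bind mount is the usual cause of cryptic
failures inside a manual chroot (grub-install and mkinitcpio both need
them). A small sketch that verifies the mounts before you enter the
chroot - check_chroot_mounts is a hypothetical helper built on
mountpoint(1) from util-linux:

```shell
#!/usr/bin/env bash
# check_chroot_mounts: verify the API filesystems a chroot needs are
# actually mounted under the target root before running chroot.
check_chroot_mounts() {
  local root="$1" missing=0 dir
  for dir in dev proc sys; do
    if ! mountpoint -q "$root/$dir"; then
      echo "missing: $root/$dir is not mounted"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "ok: $root looks ready for chroot"
  return "$missing"
}
```

arch-chroot performs equivalent checks and mounts for you; this is only
useful when you set the mounts up by hand.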
SCENARIO: Reinstall GRUB (UEFI) ------------------------------- After chrooting into the system: grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB If EFI partition is mounted elsewhere: grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB Regenerate GRUB config: grub-mkconfig -o /boot/grub/grub.cfg SCENARIO: Reinstall GRUB (Legacy BIOS/MBR) ------------------------------------------ After chrooting into the system: grub-install --target=i386-pc /dev/sdX # Note: device, not partition Regenerate GRUB config: grub-mkconfig -o /boot/grub/grub.cfg SCENARIO: Fix UEFI boot entries ------------------------------- List current boot entries: efibootmgr -v Delete a broken entry (replace XXXX with boot number): efibootmgr -b XXXX -B Create a new boot entry: efibootmgr --create --disk /dev/sdX --part 1 --label "Arch Linux" \ --loader /EFI/GRUB/grubx64.efi Change boot order (comma-separated boot numbers): efibootmgr -o 0001,0002,0003 Set next boot only: efibootmgr -n 0001 SCENARIO: Rebuild initramfs (kernel panic, missing modules) ----------------------------------------------------------- After chrooting into the system: List available presets: ls /etc/mkinitcpio.d/ Rebuild for specific kernel: mkinitcpio -p linux # Standard kernel mkinitcpio -p linux-lts # LTS kernel Rebuild all: mkinitcpio -P Check mkinitcpio.conf for ZFS: grep "^HOOKS" /etc/mkinitcpio.conf For ZFS, HOOKS should include 'zfs': HOOKS=(base udev autodetect modconf block zfs filesystems keyboard fsck) SCENARIO: GRUB not detecting Windows (dual-boot) ------------------------------------------------ After chrooting into the system: Enable os-prober in GRUB config: echo 'GRUB_DISABLE_OS_PROBER=false' >> /etc/default/grub Mount the Windows EFI partition if not already mounted. Regenerate GRUB config: grub-mkconfig -o /boot/grub/grub.cfg os-prober should find Windows and add it to the menu. 
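One pitfall with the append-to-/etc/default/grub approach: the file is
sourced as shell, so the last GRUB_DISABLE_OS_PROBER assignment wins,
and repeated appends leave duplicate lines. A sketch that prints the
value grub-mkconfig will actually see - effective_os_prober is a
hypothetical helper name:

```shell
#!/usr/bin/env bash
# effective_os_prober: /etc/default/grub is sourced as shell, so the
# last GRUB_DISABLE_OS_PROBER= assignment is the one that takes effect.
effective_os_prober() {
  grep '^GRUB_DISABLE_OS_PROBER=' "$1" | tail -n 1
}
```

If it prints GRUB_DISABLE_OS_PROBER=false, os-prober will run on the
next grub-mkconfig; consider deduplicating the file afterwards.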
SCENARIO: Restore Windows MBR (remove GRUB, restore Windows boot) ----------------------------------------------------------------- If you need to remove Linux and restore Windows-only MBR: ms-sys -w /dev/sdX # Write Windows 7+ MBR Other options: ms-sys -7 /dev/sdX # Windows 7 MBR specifically ms-sys -i /dev/sdX # Show current MBR type SCENARIO: Install syslinux (lightweight alternative to GRUB) ------------------------------------------------------------ For Legacy BIOS: syslinux-install_update -i -a -m For UEFI, copy the EFI binary: cp /usr/lib/syslinux/efi64/* /boot/EFI/syslinux/ Create /boot/syslinux/syslinux.cfg with boot entries. SCENARIO: Can't boot - kernel panic with ZFS -------------------------------------------- Common causes: 1. ZFS module not in initramfs - rebuild with mkinitcpio 2. Pool name changed - check zpool.cache 3. hostid mismatch - regenerate hostid After chrooting: Check if ZFS hook is present: grep zfs /etc/mkinitcpio.conf Regenerate hostid if needed: zgenhostid $(hostid) Rebuild initramfs: mkinitcpio -P SCENARIO: Emergency boot from GRUB command line ----------------------------------------------- If GRUB loads but config is broken, press 'c' for command line: For Linux (non-ZFS): set root=(hd0,gpt2) linux /boot/vmlinuz-linux root=/dev/sda2 initrd /boot/initramfs-linux.img boot For Linux with ZFS root: set root=(hd0,gpt1) linux /vmlinuz-linux-lts root=ZFS=zroot/ROOT/default initrd /initramfs-linux-lts.img boot Tab completion works in GRUB command line! BOOT REPAIR TIPS ---------------- 1. Always backup your current EFI partition before making changes 2. Use 'efibootmgr -v' to see full paths and verify entries 3. Some UEFI firmwares are picky about the bootloader path - try /EFI/BOOT/BOOTX64.EFI as a fallback 4. If all else fails, most UEFI has a boot menu (F12, F8, Esc at POST) 5. GRUB reinstall usually fixes most boot issues 6. 
For ZFS, the initramfs must include the zfs hook ================================================================================ 4. WINDOWS RECOVERY ================================================================================ QUICK REFERENCE --------------- tldr chntpw # Reset Windows passwords tldr ntfs-3g # Mount NTFS filesystems man dislocker # Access BitLocker drives man hivexregedit # Edit Windows registry FIRST: Identify and mount the Windows partition ----------------------------------------------- Find Windows partition: lsblk -f # Look for "ntfs" filesystem fdisk -l # Look for "Microsoft basic data" type Check if BitLocker encrypted: lsblk -f # Will show "BitLocker" instead of "ntfs" Mount NTFS partition (read-write): mkdir -p /mnt/windows mount -t ntfs-3g /dev/sdX1 /mnt/windows If Windows wasn't shut down cleanly (hibernation/fast startup): mount -t ntfs-3g -o remove_hiberfile /dev/sdX1 /mnt/windows Read-only mount (safer): mount -t ntfs-3g -o ro /dev/sdX1 /mnt/windows SCENARIO: Reset forgotten Windows password ------------------------------------------ Mount the Windows partition first (see above). Navigate to the SAM database: cd /mnt/windows/Windows/System32/config List all users: chntpw -l SAM Reset password for a specific user (interactive): chntpw -u "Username" SAM In the interactive menu: 1. Clear (blank) user password <-- Recommended 2. Unlock and enable user account 3. Promote user to administrator q. Quit After making changes, type 'q' to quit, then 'y' to save. 
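One practical wrinkle before the cd step above: the hive path casing
varies between Windows versions (WINDOWS/system32/config on XP-era
installs, Windows/System32/config on Vista and later). A
case-insensitive search finds it either way - find_config_dir is a
hypothetical helper; it assumes GNU find for -ipath:

```shell
#!/usr/bin/env bash
# find_config_dir: locate the registry hive directory regardless of
# path casing (GNU find's -ipath matches the whole path
# case-insensitively).
find_config_dir() {
  find "$1" -maxdepth 4 -type d -ipath '*/system32/config' | head -n 1
}
```

Usage: cd "$(find_config_dir /mnt/windows)" && chntpw -l SAM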
Alternative - blank ALL passwords: chntpw -i SAM # Interactive mode, select options SCENARIO: Unlock disabled/locked Windows account ------------------------------------------------ cd /mnt/windows/Windows/System32/config chntpw -u "Username" SAM Select option 2: "Unlock and enable user account" SCENARIO: Promote user to Administrator --------------------------------------- cd /mnt/windows/Windows/System32/config chntpw -u "Username" SAM Select option 3: "Promote user (make user an administrator)" SCENARIO: Access BitLocker encrypted drive ------------------------------------------ You MUST have either: - The BitLocker password, OR - The 48-digit recovery key Find your recovery key: - Microsoft account: account.microsoft.com/devices/recoverykey - Printed/saved during BitLocker setup - Active Directory (for domain-joined PCs) Decrypt with password: mkdir -p /mnt/bitlocker-decrypted /mnt/windows dislocker -V /dev/sdX1 -u -- /mnt/bitlocker-decrypted # Enter password when prompted Decrypt with recovery key: dislocker -V /dev/sdX1 -p123456-789012-345678-901234-567890-123456-789012-345678 -- /mnt/bitlocker-decrypted Now mount the decrypted volume: mount -t ntfs-3g /mnt/bitlocker-decrypted/dislocker-file /mnt/windows When done: umount /mnt/windows umount /mnt/bitlocker-decrypted SCENARIO: Copy files from Windows that won't boot ------------------------------------------------- Mount the Windows partition (see above), then: Copy specific files/folders: cp -r "/mnt/windows/Users/Username/Documents" /destination/ Copy with rsync (shows progress, preserves attributes): rsync -avP "/mnt/windows/Users/Username/" /destination/ Common locations for user data: /mnt/windows/Users/Username/Desktop/ /mnt/windows/Users/Username/Documents/ /mnt/windows/Users/Username/Downloads/ /mnt/windows/Users/Username/Pictures/ /mnt/windows/Users/Username/AppData/ (hidden app data) SCENARIO: Edit Windows Registry ------------------------------- The registry is stored in several hive files: SYSTEM - 
Hardware, services, boot config
SOFTWARE   - Installed programs, system settings
SAM        - User accounts (password hashes)
SECURITY   - Security policies
DEFAULT    - Default user profile
NTUSER.DAT - Per-user settings (in each user's profile)

View registry contents:

hivexregedit --export /mnt/windows/Windows/System32/config/SYSTEM '\' > system.reg

Merge changes from a .reg file (--prefix maps the hive root onto the
key names used inside the .reg file):

hivexregedit --merge --prefix 'HKEY_LOCAL_MACHINE\SOFTWARE' \
    /mnt/windows/Windows/System32/config/SOFTWARE changes.reg

Interactive registry shell:

hivexsh /mnt/windows/Windows/System32/config/SYSTEM
# Commands: cd, ls, lsval, cat, exit

SCENARIO: Fix Windows boot (from Linux)
---------------------------------------
Sometimes you can fix Windows boot issues from Linux:

Rebuild BCD (Windows Boot Configuration Data):
- This usually requires Windows Recovery Environment
- From Linux, you can back up/restore the BCD file:
  cp /mnt/windows/Boot/BCD /mnt/windows/Boot/BCD.backup

Restore the Windows bootloader to the MBR (if GRUB overwrote it):

ms-sys -w /dev/sdX          # Write Windows 7+ compatible MBR

For UEFI systems, Windows boot files are in:
/mnt/efi/EFI/Microsoft/Boot/

SCENARIO: Scan Windows for malware (offline scan)
-------------------------------------------------
Update ClamAV definitions first (requires internet):

freshclam

Scan the Windows partition:

clamscan -r /mnt/windows                     # Basic scan
clamscan -r -i /mnt/windows                  # Only show infected files
clamscan -r --move=/quarantine /mnt/windows  # Quarantine infected

Scan common malware locations:

clamscan -r "/mnt/windows/Users/*/AppData"
clamscan -r "/mnt/windows/Windows/Temp"
clamscan -r "/mnt/windows/ProgramData"

Note: ClamAV detection isn't as comprehensive as commercial AV. It is
best for known malware and may miss new/sophisticated threats.

SCENARIO: Disable Windows Fast Startup (to mount NTFS read-write)
-----------------------------------------------------------------
Windows 8+ uses "Fast Startup" (hybrid shutdown) by default. This
leaves NTFS in a "dirty" state, preventing safe writes from Linux.

Option 1: Force mount (may cause issues):

mount -t ntfs-3g -o remove_hiberfile /dev/sdX1 /mnt/windows

Option 2: Boot Windows and disable Fast Startup:
- Control Panel > Power Options > "Choose what the power buttons do"
- Click "Change settings that are currently unavailable"
- Uncheck "Turn on fast startup"
- Shut down (not restart) Windows

Option 3: Via the registry from Linux. Note that the offline hive has
no CurrentControlSet - that link only exists at runtime - so edit
ControlSet001 instead:

hivexregedit --merge --prefix 'HKEY_LOCAL_MACHINE\SYSTEM' \
    /mnt/windows/Windows/System32/config/SYSTEM << 'EOF'
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Session Manager\Power]
"HiberbootEnabled"=dword:00000000
EOF

WINDOWS RECOVERY TIPS
---------------------
1. Always try mounting read-only first to assess the situation
2. Windows Fast Startup/hibernation prevents safe NTFS writes
3. The BitLocker recovery key is essential - no key = no access
4. chntpw blanks passwords; it cannot recover/show old passwords
5. Back up registry hives before editing them
6. If Windows is bootable but locked out, just reset the password
7. For serious Windows issues, Windows Recovery Environment may be needed
8. Some antivirus/security software may re-lock accounts on next boot

================================================================================
5. 
HARDWARE DIAGNOSTICS ================================================================================ QUICK REFERENCE --------------- tldr smartctl # Check drive health tldr lshw # List hardware tldr hdparm # Disk info and benchmarks man memtester # Memory testing man stress-ng # Stress testing man iotop # Disk I/O monitor by process SCENARIO: Check if a drive is failing (SMART) --------------------------------------------- Quick health check: smartctl -H /dev/sdX Full SMART report: smartctl -a /dev/sdX For NVMe drives: smartctl -a /dev/nvme0n1 nvme smart-log /dev/nvme0n1 Key SMART attributes to watch: - Reallocated_Sector_Ct: Bad sectors remapped (increasing = dying) - Current_Pending_Sector: Sectors waiting to be remapped - Offline_Uncorrectable: Unreadable sectors - UDMA_CRC_Error_Count: Cable/connection issues - Wear_Leveling_Count: SSD wear (lower = more worn) Run a self-test: smartctl -t short /dev/sdX # Quick test (~2 min) smartctl -t long /dev/sdX # Thorough test (~hours) Check test results: smartctl -l selftest /dev/sdX SCENARIO: Test RAM for errors ----------------------------- Option 1: Memtest86+ (from boot menu) - Restart and select "Memtest86+" from the boot menu - Most thorough test, runs before OS loads - Let it run for at least 1-2 passes (can take hours) Option 2: memtester (from running system) - Tests available RAM while system is running - Can't test RAM used by kernel/programs Test 1GB of RAM (adjust based on free memory): free -h # Check available memory memtester 1G 1 # Test 1GB, 1 iteration memtester 2G 5 # Test 2GB, 5 iterations Note: memtester can only test free RAM. For thorough testing, use Memtest86+ from the boot menu. 
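To pick a memtester size without guessing, you can derive it from
MemAvailable and leave headroom for the kernel and the live
environment. A sketch - safe_test_mb is a hypothetical helper, and the
512 MiB headroom figure is an assumption to tune:

```shell
#!/usr/bin/env bash
# safe_test_mb: derive a memtester size in MiB from MemAvailable in
# /proc/meminfo, minus ~512 MiB headroom for the kernel and the rescue
# environment (the headroom value is an assumption).
safe_test_mb() {
  local meminfo="${1:-/proc/meminfo}" avail_kb
  avail_kb=$(awk '/^MemAvailable:/ {print $2}' "$meminfo")
  echo $(( avail_kb / 1024 - 512 ))
}
# Example: memtester "$(safe_test_mb)M" 1
```

If the result is small or negative, free up memory first or fall back
to Memtest86+ from the boot menu.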
SCENARIO: Monitor temperatures, fans, voltages ---------------------------------------------- First, detect and load sensor modules: sensors-detect --auto # Auto-detect sensors Then view readings: sensors # Show all sensor data Continuous monitoring: watch -n 1 sensors # Update every second If sensors shows nothing, modules may need loading: modprobe coretemp # Intel CPU temps modprobe k10temp # AMD CPU temps modprobe nct6775 # Common motherboard chip SCENARIO: Stress test hardware (verify stability) ------------------------------------------------- Useful for: - Testing used/refurbished hardware - Verifying overclocking stability - Burn-in testing before deployment - Reproducing intermittent issues CPU stress test: stress-ng --cpu $(nproc) --timeout 300s # All cores, 5 min Memory stress test: stress-ng --vm 2 --vm-bytes 1G --timeout 300s Combined CPU + memory: stress-ng --cpu $(nproc) --vm 2 --vm-bytes 1G --timeout 600s Disk I/O stress: stress-ng --hdd 2 --timeout 300s Monitor during stress test (in another terminal): watch -n 1 sensors # Watch temperatures htop # Watch CPU/memory usage SCENARIO: Get detailed hardware information ------------------------------------------- Full hardware report: lshw # All hardware (verbose) lshw -short # Summary view lshw -html > hardware.html # HTML report Specific components: lshw -class processor # CPU info lshw -class memory # RAM info lshw -class disk # Disk info lshw -class network # Network adapters BIOS/motherboard info: dmidecode # All DMI tables dmidecode -t bios # BIOS info dmidecode -t system # System/motherboard dmidecode -t memory # Memory slots and modules dmidecode -t processor # CPU socket info Quick system overview: inxi -Fxz # If inxi is installed cat /proc/cpuinfo # CPU details cat /proc/meminfo # Memory details SCENARIO: Test disk speed / benchmark ------------------------------------- Basic read speed test: hdparm -t /dev/sdX # Buffered read speed hdparm -T /dev/sdX # Cached read speed More accurate test (run 
3 times, average): hdparm -tT /dev/sdX hdparm -tT /dev/sdX hdparm -tT /dev/sdX Get drive information: hdparm -I /dev/sdX # Detailed drive info For NVMe drives: nvme list # List NVMe drives nvme id-ctrl /dev/nvme0n1 # Controller info nvme smart-log /dev/nvme0n1 # SMART/health data SCENARIO: Check for bad blocks (surface scan) --------------------------------------------- WARNING: This is read-only but takes a long time on large drives. badblocks -sv /dev/sdX For faster progress indication: badblocks -sv -b 4096 /dev/sdX Note: For modern drives, SMART is usually more informative. badblocks is useful for older drives without good SMART support. SCENARIO: Identify unknown hardware / find drivers -------------------------------------------------- List PCI devices: lspci # All PCI devices lspci -v # Verbose (with drivers) lspci -k # Show kernel drivers List USB devices: lsusb # All USB devices lsusb -v # Verbose Find what driver a device is using: lspci -k | grep -A3 "Network" # Network adapter driver lspci -k | grep -A3 "VGA" # Graphics driver SCENARIO: Find what's doing disk I/O (iotop) -------------------------------------------- iotop shows disk read/write by process - like top for disk I/O. Useful when disk is thrashing and you need to find the cause. Basic usage (requires root): iotop Only show processes doing I/O: iotop -o Batch mode (non-interactive, for logging): iotop -b -n 5 # 5 iterations then exit Show accumulated I/O instead of bandwidth: iotop -a Key columns: - DISK READ: current read bandwidth - DISK WRITE: current write bandwidth - IO>: percentage of time spent waiting on I/O Interactive commands: - o: toggle showing only active processes - a: toggle accumulated vs bandwidth - r: reverse sort - q: quit Common culprits for high I/O: - jbd2: journaling (normal on ext4) - kswapd: swapping (need more RAM) - Large file copies or database operations HARDWARE DIAGNOSTICS TIPS ------------------------- 1. 
Run SMART checks regularly - drives often show warning signs 2. Memtest86+ (from boot menu) is more thorough than memtester 3. Stress test new/used hardware before trusting it with data 4. High temperatures during stress test = cooling problem 5. Random crashes/errors often indicate RAM or power issues 6. SMART "Reallocated Sector Count" increasing = drive dying 7. Back up immediately if SMART shows any warnings 8. SSDs have limited write cycles - check Wear_Leveling_Count 9. iotop -o filters to only processes actively doing I/O ================================================================================ 6. DISK OPERATIONS ================================================================================ QUICK REFERENCE --------------- tldr partclone # Filesystem-aware partition cloning tldr fsarchiver # Backup/restore filesystems to archive man nwipe # Secure disk wiping (DBAN replacement) tldr parted # Partition management tldr mkfs # Create filesystems tldr ncdu # Interactive disk usage analyzer tldr tree # Directory tree viewer FIRST: Understand your options for disk copying ----------------------------------------------- Different tools for different situations: dd / ddrescue - Byte-for-byte copy (use for failing drives) partclone - Filesystem-aware, only copies used blocks (faster) fsarchiver - Creates compressed archive (smallest, most flexible) partimage - Legacy imaging (for restoring old partimage backups) Rule of thumb: - Failing drive? Use ddrescue (section 2) - Clone partition quickly? Use partclone - Backup for long-term storage? Use fsarchiver - Restore old .img.gz from partimage? Use partimage SCENARIO: Clone a partition (partclone - faster than dd) -------------------------------------------------------- Partclone only copies used blocks. A 500GB partition with 50GB used takes ~50GB to clone instead of 500GB. 
Clone ext4 partition to image:

partclone.ext4 -c -s /dev/sdX1 -o partition.img

Clone with compression (recommended):

partclone.ext4 -c -s /dev/sdX1 -o - | gzip -c > partition.img.gz

-c = clone mode
-s = source
-o = output ("-" means stdout, for piping)

Restore from image:

partclone.ext4 -r -s partition.img -o /dev/sdX1

Restore from compressed image:

gunzip -c partition.img.gz | partclone.ext4 -r -s - -o /dev/sdX1

Supported filesystems:
partclone.ext4  partclone.ext3   partclone.ext2
partclone.ntfs  partclone.fat32  partclone.fat16
partclone.xfs   partclone.btrfs  partclone.exfat
partclone.f2fs  partclone.dd (dd mode for any fs)

SCENARIO: Create a full system backup (fsarchiver)
--------------------------------------------------
Fsarchiver creates compressed, portable archives. Archives can be
restored to different-sized partitions.

Backup a filesystem:

fsarchiver savefs backup.fsa /dev/sdX1

Backup with compression level and progress:

fsarchiver savefs -v -z7 backup.fsa /dev/sdX1

-v  = verbose
-z7 = compression level (1-9, higher = smaller but slower)

Backup multiple filesystems to one archive:

fsarchiver savefs backup.fsa /dev/sdX1 /dev/sdX2 /dev/sdX3

List contents of archive:

fsarchiver archinfo backup.fsa

Restore to a partition:

fsarchiver restfs backup.fsa id=0,dest=/dev/sdX1

id=0 = first filesystem in archive (0, 1, 2...)

Restore to a different-sized partition (will resize):

fsarchiver restfs backup.fsa id=0,dest=/dev/sdY1

SCENARIO: Restore a legacy partimage backup
-------------------------------------------
Partimage is legacy software, but you may have old backups to restore.

Restore a partimage backup:

partimage restore /dev/sdX1 backup.img.gz

Interactive mode:

partimage

Note: partimage cannot create images of ext4, GPT, or modern
filesystems. Use fsarchiver for new backups.

SCENARIO: Securely wipe a drive (nwipe)
---------------------------------------
DANGER: This PERMANENTLY DESTROYS all data. Triple-check the device!

Interactive mode (recommended - shows all drives, select with space):

nwipe

Wipe a specific drive with a single zero pass (usually sufficient):

nwipe --method=zero /dev/sdX

Wipe with the DoD short 3-pass method:

nwipe --method=dodshort /dev/sdX

Wipe with verification:

nwipe --verify=last /dev/sdX

Available wipe methods:
zero       - Single pass of zeros (fastest, usually sufficient)
one        - Single pass of ones
random     - Random data
dod522022m - DoD 5220.22-M (7 passes)
dodshort   - DoD short (3 passes)
gutmann    - Gutmann 35-pass (overkill for modern drives)

For SSDs, use the drive's built-in secure erase instead:

# Set a temporary password
hdparm --user-master u --security-set-pass Erase /dev/sdX
# Trigger secure erase (password is cleared after)
hdparm --user-master u --security-erase Erase /dev/sdX

For NVMe SSDs:

nvme format /dev/nvme0n1 --ses=1   # Secure erase (user data erase)
nvme format /dev/nvme0n1 --ses=2   # Cryptographic erase (if supported)

SCENARIO: Work with XFS filesystems
-----------------------------------
Create an XFS filesystem:

mkfs.xfs /dev/sdX1
mkfs.xfs -L "mylabel" /dev/sdX1    # With label

Repair XFS (must be unmounted):

xfs_repair /dev/sdX1
xfs_repair -n /dev/sdX1            # Check only, no changes

Grow an XFS filesystem (while mounted):

xfs_growfs /mountpoint

Note: XFS cannot be shrunk, only grown.

Show XFS info:

xfs_info /mountpoint

SCENARIO: Work with Btrfs filesystems
-------------------------------------
Create a Btrfs filesystem:

mkfs.btrfs /dev/sdX1
mkfs.btrfs -L "mylabel" /dev/sdX1  # With label

Check Btrfs (must be unmounted):

btrfs check /dev/sdX1
btrfs check --repair /dev/sdX1     # Repair (use with caution!)
Scrub (online integrity check - safe): btrfs scrub start /mountpoint btrfs scrub status /mountpoint Show filesystem info: btrfs filesystem show btrfs filesystem df /mountpoint btrfs filesystem usage /mountpoint List/manage subvolumes: btrfs subvolume list /mountpoint btrfs subvolume create /mountpoint/newsubvol btrfs subvolume delete /mountpoint/subvol SCENARIO: Work with F2FS filesystems (Flash-Friendly) ----------------------------------------------------- F2FS is optimized for flash storage (SSDs, SD cards, USB drives). Common on Android devices. Create F2FS filesystem: mkfs.f2fs /dev/sdX1 mkfs.f2fs -l "mylabel" /dev/sdX1 # With label Check/repair F2FS: fsck.f2fs /dev/sdX1 fsck.f2fs -a /dev/sdX1 # Auto-repair SCENARIO: Work with exFAT filesystems ------------------------------------- exFAT is common on USB drives and SD cards (>32GB). Cross-platform compatible (Windows, Mac, Linux). Create exFAT filesystem: mkfs.exfat /dev/sdX1 mkfs.exfat -L "LABEL" /dev/sdX1 # With label (uppercase recommended) Check/repair exFAT: fsck.exfat /dev/sdX1 fsck.exfat -a /dev/sdX1 # Auto-repair SCENARIO: Partition a disk -------------------------- Interactive partition editors: parted /dev/sdX # Works with GPT and MBR gdisk /dev/sdX # GPT-specific (recommended for UEFI) fdisk /dev/sdX # Traditional (MBR or GPT) Create GPT partition table: parted /dev/sdX mklabel gpt Create partitions (example: 512MB EFI + rest for Linux): parted /dev/sdX mkpart primary fat32 1MiB 513MiB parted /dev/sdX set 1 esp on parted /dev/sdX mkpart primary ext4 513MiB 100% View partition layout: parted /dev/sdX print lsblk -f /dev/sdX fdisk -l /dev/sdX SCENARIO: Find what's using disk space (ncdu) --------------------------------------------- ncdu is an interactive disk usage analyzer - much faster than repeatedly running du. 
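If ncdu itself is missing from the rescue environment, plain du piped through sort gives a rough largest-first view with coreutils only. A minimal sketch, exercised on a throwaway tree the sketch creates itself (on a real system you would point it at / or /var):

```shell
# Approximate ncdu's largest-first view with coreutils: du + sort.
# Demonstrated on a scratch tree so it is safe to run anywhere.
demo=$(mktemp -d)
mkdir -p "$demo/big" "$demo/small"
head -c 1M /dev/zero > "$demo/big/blob"
head -c 1K /dev/zero > "$demo/small/note"

# -x stays on one filesystem (like ncdu -x); sort -rh orders
# human-readable sizes largest-first
du -xh --max-depth=1 "$demo" | sort -rh
```

The interactive ncdu commands that follow are still the comfortable option; this is only the coreutils fallback.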
Analyze current directory: ncdu Analyze specific path: ncdu /home ncdu /var Analyze root filesystem: ncdu / Exclude mounted filesystems (just local disk): ncdu -x / Navigation: - Arrow keys or j/k to move - Enter to drill into directory - d to delete file/folder (confirms first) - q to quit - g to show percentage/graph - n to sort by name - s to sort by size Export scan to file (for slow disks, scan once): ncdu -o scan.json / ncdu -f scan.json # Load later SCENARIO: Visualize directory structure (tree) ---------------------------------------------- tree shows directories as an indented tree. Show current directory: tree Show specific path: tree /etc/systemd Limit depth: tree -L 2 # Only 2 levels deep tree -L 3 /home # 3 levels under /home Show hidden files: tree -a Show only directories: tree -d With file sizes: tree -h # Human-readable sizes tree -sh # Include size for files Filter by pattern: tree -P "*.conf" # Only .conf files tree -I "node_modules|.git" # Exclude patterns DISK OPERATIONS TIPS -------------------- 1. partclone is 5-10x faster than dd for partially-filled partitions 2. fsarchiver archives can restore to different-sized partitions 3. For SSDs, nwipe is less effective than ATA/NVMe secure erase 4. Always verify backups can be restored before wiping originals 5. XFS cannot be shrunk, only grown - plan partition sizes carefully 6. Btrfs check --repair is risky; try without --repair first 7. Keep partition tables aligned to 1MiB boundaries for SSD performance 8. exFAT is best for cross-platform USB drives >32GB 9. F2FS is optimized for flash but less portable than ext4 10. ncdu -x avoids crossing filesystem boundaries (stays on one disk) 11. tree -L 2 gives quick overview without overwhelming detail ================================================================================ 7. 
NETWORK TROUBLESHOOTING ================================================================================ QUICK REFERENCE --------------- tldr ip # Network interface configuration tldr nmcli # NetworkManager CLI tldr ping # Test connectivity tldr ss # Socket statistics (netstat replacement) tldr curl # Transfer data from URLs tldr mtr # Combined ping + traceroute tldr iperf3 # Network bandwidth testing tldr tcpdump # Packet capture and analysis tldr nmap # Network scanner man iftop # Live bandwidth monitor man nethogs # Per-process bandwidth man tshark # Wireshark CLI (packet analysis) tldr speedtest-cli # Internet speed test tldr mosh # Mobile shell (survives disconnects) tldr aria2c # Multi-protocol downloader tldr tmate # Terminal sharing tldr sshuttle # VPN over SSH FIRST: Check basic network connectivity --------------------------------------- Is the interface up? ip link show ip a # Show all addresses Is there an IP address? ip addr show dev eth0 # Replace eth0 with your interface ip addr show dev wlan0 # For WiFi Can you reach the gateway? ip route # Show default gateway ping -c 3 $(ip route | grep default | awk '{print $3}') Can you reach the internet? 
ping -c 3 1.1.1.1 # Test IP connectivity ping -c 3 google.com # Test DNS resolution SCENARIO: Configure network with NetworkManager ----------------------------------------------- List connections: nmcli connection show Show WiFi networks: nmcli device wifi list Connect to WiFi: nmcli device wifi connect "SSID" password "password" Show current connection details: nmcli device show Restart networking: systemctl restart NetworkManager SCENARIO: Configure network manually (no NetworkManager) -------------------------------------------------------- Bring up interface: ip link set eth0 up Get IP via DHCP: dhclient eth0 # or dhcpcd eth0 Set static IP: ip addr add 192.168.1.100/24 dev eth0 ip route add default via 192.168.1.1 Set DNS: echo "nameserver 1.1.1.1" > /etc/resolv.conf SCENARIO: Mount remote filesystem over SSH (sshfs) -------------------------------------------------- Access files on a remote system as if they were local. Useful for copying data to/from a working machine during recovery. Mount remote directory: mkdir -p /mnt/remote sshfs user@hostname:/path/to/dir /mnt/remote Mount with password prompt (if no SSH keys): sshfs user@hostname:/home/user /mnt/remote -o password_stdin Mount remote root filesystem: sshfs root@192.168.1.100:/ /mnt/remote Common options: sshfs user@host:/path /mnt/remote -o reconnect # Auto-reconnect sshfs user@host:/path /mnt/remote -o port=2222 # Custom SSH port sshfs user@host:/path /mnt/remote -o IdentityFile=~/.ssh/key # SSH key Copy files to/from mounted remote: cp /mnt/remote/important-file.txt /local/backup/ rsync -avP /local/data/ /mnt/remote/backup/ Unmount when done: fusermount -u /mnt/remote # or umount /mnt/remote Why use sshfs instead of scp/rsync? - Browse remote files interactively before deciding what to copy - Run local tools on remote files (grep, diff, etc.) 
- Easier than remembering rsync syntax for quick operations SCENARIO: Transfer files over SSH --------------------------------- Copy file to remote: scp localfile.txt user@host:/path/to/destination/ Copy file from remote: scp user@host:/path/to/file.txt /local/destination/ Copy directory recursively: scp -r /local/dir user@host:/remote/path/ With progress and compression: rsync -avzP /local/path/ user@host:/remote/path/ SCENARIO: Test network path and latency (mtr) --------------------------------------------- mtr combines ping and traceroute into one tool. Shows packet loss and latency at each hop in real-time. Interactive mode (updates continuously): mtr google.com Report mode (runs 10 cycles and exits): mtr -r -c 10 google.com With IP addresses only (faster, no DNS lookups): mtr -n google.com Show both hostnames and IPs: mtr -b google.com Reading mtr output: - Loss% = packet loss at that hop (>0% = problem) - Snt = packets sent - Last/Avg/Best/Wrst = latency in ms - StDev = latency variation (high = inconsistent) Common patterns: - High loss at one hop, normal after = that router deprioritizes ICMP (OK) - Loss increasing at each hop = real network problem - Sudden latency jump = congested link or long physical distance SCENARIO: Test bandwidth between two machines (iperf3) ------------------------------------------------------ iperf3 measures actual throughput between two endpoints. Requires iperf3 running on both ends. 
On the server (machine to test TO): iperf3 -s # Listen on default port 5201 On the client (machine to test FROM): iperf3 -c server-ip # Basic test (10 seconds) iperf3 -c server-ip -t 30 # Test for 30 seconds iperf3 -c server-ip -R # Reverse (test download instead of upload) Test both directions: iperf3 -c server-ip # Upload speed iperf3 -c server-ip -R # Download speed With parallel streams (better for high-latency links): iperf3 -c server-ip -P 4 # 4 parallel streams Test UDP (for VoIP/streaming quality): iperf3 -c server-ip -u -b 100M # UDP at 100 Mbps Interpreting results: - Bitrate = actual throughput achieved - Retr = TCP retransmissions (high = packet loss) - Cwnd = TCP congestion window SCENARIO: Monitor live bandwidth usage (iftop) ---------------------------------------------- iftop shows bandwidth usage per connection in real-time. Like top, but for network traffic. Monitor all interfaces: iftop Monitor specific interface: iftop -i eth0 iftop -i wlan0 Without DNS lookups (faster): iftop -n Show port numbers: iftop -P Filter to specific host: iftop -f "host 192.168.1.100" Interactive commands while running: h = help n = toggle DNS resolution s = toggle source display d = toggle destination display p = toggle port display P = pause display q = quit SCENARIO: Find which process is using bandwidth (nethogs) --------------------------------------------------------- nethogs shows bandwidth usage per process, not per connection. Essential for finding what's eating your bandwidth. Monitor all interfaces: nethogs Monitor specific interface: nethogs eth0 Refresh faster (every 0.5 seconds): nethogs -d 0.5 Interactive commands: m = cycle through display modes (KB/s, KB, B, MB) r = sort by received s = sort by sent q = quit SCENARIO: Check network interface details (ethtool) --------------------------------------------------- ethtool shows and configures network interface settings. 
Show interface status: ethtool eth0 Key information: - Speed: 1000Mb/s (link speed) - Duplex: Full (full or half duplex) - Link detected: yes (cable connected) Show driver information: ethtool -i eth0 Show interface statistics: ethtool -S eth0 Check for errors (look for non-zero values): ethtool -S eth0 | grep -i error ethtool -S eth0 | grep -i drop Wake-on-LAN settings: ethtool eth0 | grep Wake-on Enable Wake-on-LAN: ethtool -s eth0 wol g SCENARIO: Capture and analyze packets (tcpdump) ----------------------------------------------- tcpdump captures network traffic for analysis. Essential for debugging network issues at the packet level. Capture all traffic on an interface: tcpdump -i eth0 Capture with more detail: tcpdump -i eth0 -v # Verbose tcpdump -i eth0 -vv # More verbose tcpdump -i eth0 -X # Show packet contents in hex + ASCII Capture to a file (for later analysis): tcpdump -i eth0 -w capture.pcap Read a capture file: tcpdump -r capture.pcap Common filters: tcpdump -i eth0 host 192.168.1.100 # Traffic to/from host tcpdump -i eth0 port 80 # HTTP traffic tcpdump -i eth0 port 443 # HTTPS traffic tcpdump -i eth0 tcp # TCP only tcpdump -i eth0 udp # UDP only tcpdump -i eth0 icmp # Ping traffic tcpdump -i eth0 'port 22 and host 10.0.0.1' # SSH to specific host Capture only N packets: tcpdump -i eth0 -c 100 # Stop after 100 packets Show only packet summaries (no payload): tcpdump -i eth0 -q Useful for debugging: # See DNS queries tcpdump -i eth0 port 53 # See all SYN packets (connection attempts) tcpdump -i eth0 'tcp[tcpflags] & tcp-syn != 0' # See HTTP requests tcpdump -i eth0 -A port 80 | grep -E '^(GET|POST|HEAD)' SCENARIO: Scan network and discover hosts (nmap) ------------------------------------------------ nmap is a powerful network scanner for discovery and security auditing. 
Discover hosts on local network: nmap -sn 192.168.1.0/24 # Ping scan (no port scan) Quick scan of common ports: nmap 192.168.1.100 # Top 1000 ports Scan specific ports: nmap -p 22,80,443 192.168.1.100 nmap -p 1-1000 192.168.1.100 # Port range nmap -p- 192.168.1.100 # All 65535 ports (slow) Service version detection: nmap -sV 192.168.1.100 # Detect service versions Operating system detection: nmap -O 192.168.1.100 # Requires root Comprehensive scan: nmap -A 192.168.1.100 # OS detection, version, scripts, traceroute Fast scan (fewer ports): nmap -F 192.168.1.100 # Top 100 ports only Scan multiple hosts: nmap 192.168.1.1-50 # Range nmap 192.168.1.1 192.168.1.2 # Specific hosts nmap -iL hosts.txt # From file Output formats: nmap -oN scan.txt 192.168.1.100 # Normal output nmap -oX scan.xml 192.168.1.100 # XML output nmap -oG scan.grep 192.168.1.100 # Greppable output Common use cases: # Find all web servers on network nmap -p 80,443 192.168.1.0/24 # Find SSH servers nmap -p 22 192.168.1.0/24 # Find all live hosts quickly nmap -sn -T4 192.168.1.0/24 SCENARIO: Deep packet analysis (tshark/Wireshark CLI) ----------------------------------------------------- tshark is the command-line version of Wireshark. More powerful than tcpdump for protocol analysis. 
Capture on interface: tshark -i eth0 Capture to file: tshark -i eth0 -w capture.pcap Read and analyze capture file: tshark -r capture.pcap Filter during capture: tshark -i eth0 -f "port 80" # Capture filter (BPF syntax) Filter during display: tshark -r capture.pcap -Y "http" # HTTP traffic tshark -r capture.pcap -Y "dns" # DNS traffic tshark -r capture.pcap -Y "tcp.port == 443" # HTTPS tshark -r capture.pcap -Y "ip.addr == 192.168.1.1" # Specific host Show specific fields: tshark -r capture.pcap -T fields -e ip.src -e ip.dst -e tcp.port Protocol statistics: tshark -r capture.pcap -q -z io,stat,1 # I/O statistics tshark -r capture.pcap -q -z conv,tcp # TCP conversations tshark -r capture.pcap -q -z http,tree # HTTP statistics Follow a TCP stream: tshark -r capture.pcap -q -z follow,tcp,ascii,0 # First TCP stream Extract HTTP objects: tshark -r capture.pcap --export-objects http,./extracted/ Useful filters: # Failed TCP connections tshark -r capture.pcap -Y "tcp.flags.reset == 1" # DNS queries only tshark -r capture.pcap -Y "dns.flags.response == 0" # HTTP requests tshark -r capture.pcap -Y "http.request" # TLS handshakes tshark -r capture.pcap -Y "tls.handshake" SCENARIO: Debug DNS issues -------------------------- Check current DNS servers: cat /etc/resolv.conf Test DNS resolution: host google.com dig google.com nslookup google.com Test specific DNS server: dig @1.1.1.1 google.com dig @8.8.8.8 google.com Temporarily use different DNS: echo "nameserver 1.1.1.1" > /etc/resolv.conf SCENARIO: Check what's listening on ports ----------------------------------------- Show all listening ports: ss -tlnp # TCP ss -ulnp # UDP ss -tulnp # Both Check if specific port is open: ss -tlnp | grep :22 # SSH ss -tlnp | grep :80 # HTTP Check what process is using a port: ss -tlnp | grep :8080 SCENARIO: Download files ------------------------ Download with curl: curl -O https://example.com/file.iso curl -L -O https://example.com/file # Follow redirects Download with wget: wget 
https://example.com/file.iso wget -c https://example.com/file.iso # Resume partial download Download and verify checksum: curl -O https://example.com/file.iso curl -O https://example.com/file.iso.sha256 sha256sum -c file.iso.sha256 SCENARIO: Test internet connection speed (speedtest-cli) -------------------------------------------------------- Tests download/upload speed using speedtest.net servers. Basic speed test: speedtest-cli Show simple output (just speeds): speedtest-cli --simple List nearby servers: speedtest-cli --list Test against specific server: speedtest-cli --server 1234 No download test (upload only): speedtest-cli --no-download No upload test (download only): speedtest-cli --no-upload Output as JSON (for scripting): speedtest-cli --json Note: Requires working internet and DNS. Test basic connectivity first with: ping 1.1.1.1 SCENARIO: SSH over unreliable connection (mosh) ----------------------------------------------- mosh is SSH that survives disconnects, IP changes, and high latency. Shows local echo immediately - feels responsive even on slow links. Connect to server: mosh user@hostname With specific SSH port: mosh --ssh="ssh -p 2222" user@hostname With SSH key: mosh --ssh="ssh -i ~/.ssh/key" user@hostname How it works: - Initial connection via SSH (for auth) - Then switches to UDP for the session - Reconnects automatically when network changes - Local echo - typing appears instantly Requirements: - mosh-server must be installed on the remote - UDP port 60001 (default) must be open When to use mosh vs SSH: - Flaky WiFi: mosh - Cellular/roaming: mosh - Stable network: SSH is fine - Need port forwarding: SSH (mosh doesn't support it) SCENARIO: Download files reliably (aria2) ----------------------------------------- aria2 is a multi-protocol downloader with resume, parallel connections, and BitTorrent support. 
Basic download: aria2c https://example.com/file.iso Resume interrupted download: aria2c -c https://example.com/file.iso Multiple connections (faster for large files): aria2c -x 8 https://example.com/file.iso # 8 connections Download multiple files: aria2c -i urls.txt # One URL per line Download with specific filename: aria2c -o myfile.iso https://example.com/file.iso BitTorrent: aria2c file.torrent aria2c "magnet:?xt=..." Metalink (auto-selects mirrors): aria2c file.metalink Limit download speed: aria2c --max-download-limit=1M https://example.com/file.iso Why aria2 over wget/curl: - Multi-connection downloads (significantly faster) - Automatic resume - BitTorrent built-in - Downloads from multiple sources simultaneously SCENARIO: Share terminal for remote assistance (tmate) ------------------------------------------------------ tmate lets you share your terminal session via a URL. Someone can view or control your terminal from anywhere. Start a shared session: tmate tmate shows connection strings: ssh session: ssh XYZ123@nyc1.tmate.io read-only: ssh ro-XYZ123@nyc1.tmate.io web (rw): https://tmate.io/t/XYZ123 web (ro): https://tmate.io/t/ro-XYZ123 Share the appropriate link: - Full access: give them the ssh or web (rw) link - View only: give them the ro- link Get the links programmatically: tmate show-messages End the session: exit # Or Ctrl+D Security notes: - Anyone with the link has access - Use read-only link unless they need to type - Session ends when you exit - New session = new random URL SCENARIO: VPN over SSH (sshuttle) --------------------------------- sshuttle tunnels all traffic through an SSH connection. No server-side setup needed - just SSH access. 
Tunnel all traffic through remote server: sshuttle -r user@server 0/0 Tunnel only specific subnet: sshuttle -r user@server 10.0.0.0/8 sshuttle -r user@server 192.168.1.0/24 Exclude local network: sshuttle -r user@server 0/0 -x 192.168.1.0/24 With specific SSH port: sshuttle -r user@server:2222 0/0 DNS through tunnel too: sshuttle --dns -r user@server 0/0 Use cases: - Access office network from rescue environment - Bypass network restrictions - Secure all traffic on untrusted network - Access remote resources without full VPN setup Requirements: - SSH access to a server on the target network - Python on remote server (most Linux servers have it) - Root locally (uses iptables) NETWORK TROUBLESHOOTING TIPS ---------------------------- 1. If no IP, check cable/wifi and try dhclient or dhcpcd 2. If IP but no internet, check gateway with ip route 3. If gateway reachable but no internet, check DNS 4. Use ping 1.1.1.1 to test IP connectivity without DNS 5. sshfs is great for browsing before deciding what to copy 6. rsync -avzP is better than scp for large transfers (resumable) 7. Check firewall if services aren't reachable: iptables -L 8. For WiFi issues, check rfkill: rfkill list 9. mtr is better than traceroute - shows packet loss at each hop 10. Use iperf3 to test actual throughput, not just connectivity 11. nethogs shows bandwidth by process; iftop shows by connection 12. tcpdump -w saves packets; analyze later with tshark 13. nmap -sn for quick host discovery without port scanning 14. ethtool shows link speed and cable status (Link detected: yes/no) 15. High latency + low packet loss = congestion; high loss = hardware issue 16. tcpdump and tshark capture files (.pcap) are interchangeable 17. mosh survives network changes; use for flaky connections 18. aria2c -x 8 uses 8 connections for faster downloads 19. tmate for instant terminal sharing - great for getting remote help 20. 
sshuttle -r user@server 0/0 tunnels ALL traffic through SSH ================================================================================ 8. ENCRYPTION & GPG ================================================================================ QUICK REFERENCE --------------- tldr gpg # GNU Privacy Guard tldr cryptsetup # LUKS disk encryption tldr pass # Password manager man gpg # Full GPG manual FIRST: Understand encryption types you may encounter ---------------------------------------------------- Common encryption scenarios in recovery: GPG symmetric - Password-protected files (gpg -c) GPG asymmetric - Public/private key encrypted files LUKS - Full disk/partition encryption (Linux standard) BitLocker - Windows disk encryption (see section 4) ZFS encryption - ZFS native encryption (see section 1) This section covers GPG and LUKS. For BitLocker, see section 4. For ZFS encryption, see section 1. SCENARIO: Decrypt a password-protected file (GPG symmetric) ----------------------------------------------------------- Files encrypted with `gpg -c` use a password only, no keys needed. Decrypt to original filename: gpg -d encrypted-file.gpg > decrypted-file Decrypt (GPG auto-detects output name if .gpg extension): gpg encrypted-file.gpg You'll be prompted for the password. Decrypt with password on command line (less secure, visible in history): gpg --batch --passphrase "password" -d file.gpg > file SCENARIO: Decrypt a file encrypted to your GPG key -------------------------------------------------- Files encrypted with `gpg -e -r yourname@email.com` require your private key. If your private key is on this system: gpg -d encrypted-file.gpg > decrypted-file If you need to import your private key first: gpg --import /path/to/private-key.asc gpg -d encrypted-file.gpg > decrypted-file You'll be prompted for your key's passphrase. 
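The symmetric case above can be rehearsed end-to-end in a scratch directory before trusting it with real data. In this sketch, --batch and --pinentry-mode loopback make gpg non-interactive (interactively you would just run gpg -c and type the passphrase), and GNUPGHOME points at a throwaway directory so your real keyring is untouched; all filenames are scratch files:

```shell
# Symmetric round trip in a throwaway GNUPGHOME: encrypt with a
# passphrase, decrypt, and compare against the original.
workdir=$(mktemp -d)
export GNUPGHOME="$workdir/gnupghome"
mkdir -p -m 700 "$GNUPGHOME"
cd "$workdir"
echo "secret notes" > notes.txt

# Encrypt (creates notes.txt.gpg); loopback pinentry allows --passphrase
gpg --batch --yes --pinentry-mode loopback --passphrase "test-pass" \
    -c notes.txt
# Decrypt and verify the round trip is lossless
gpg --batch --yes --pinentry-mode loopback --passphrase "test-pass" \
    -d notes.txt.gpg > notes.decrypted 2>/dev/null
cmp notes.txt notes.decrypted && echo "round trip OK"
```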
SCENARIO: Import GPG keys (public or private) --------------------------------------------- Import a public key (to verify signatures or encrypt to someone): gpg --import public-key.asc Import from a keyserver: gpg --keyserver keyserver.ubuntu.com --recv-keys KEYID Import your private key (for decryption): gpg --import private-key.asc List keys on the system: gpg --list-keys # Public keys gpg --list-secret-keys # Private keys SCENARIO: Verify a signed file or ISO ------------------------------------- Verify a detached signature (.sig or .asc file): gpg --verify file.iso.sig file.iso If you don't have the signer's public key: # Find the key ID in the error message, then: gpg --keyserver keyserver.ubuntu.com --recv-keys KEYID gpg --verify file.iso.sig file.iso Verify an inline-signed message: gpg --verify signed-message.asc SCENARIO: Encrypt a file for safe transfer ------------------------------------------ Symmetric encryption (password only - recipient needs password): gpg -c sensitive-file.txt # Creates sensitive-file.txt.gpg With specific cipher and compression: gpg -c --cipher-algo AES256 sensitive-file.txt Asymmetric encryption (to someone's public key): gpg -e -r recipient@email.com sensitive-file.txt Encrypt to multiple recipients: gpg -e -r alice@example.com -r bob@example.com file.txt SCENARIO: Unlock a LUKS-encrypted partition ------------------------------------------- LUKS is the standard Linux disk encryption. 
Check if a partition is LUKS-encrypted: cryptsetup isLuks /dev/sdX1 && echo "LUKS encrypted" lsblk -f # Shows "crypto_LUKS" for encrypted partitions Open (decrypt) a LUKS partition: cryptsetup open /dev/sdX1 decrypted # Enter passphrase when prompted # Creates /dev/mapper/decrypted Mount the decrypted partition: mount /dev/mapper/decrypted /mnt/recovery When done, unmount and close: umount /mnt/recovery cryptsetup close decrypted SCENARIO: Open LUKS with a key file ----------------------------------- If LUKS was set up with a key file instead of (or in addition to) password: cryptsetup open /dev/sdX1 decrypted --key-file /path/to/keyfile Key file might be on a USB drive: mount /dev/sdb1 /mnt/usb cryptsetup open /dev/sdX1 decrypted --key-file /mnt/usb/luks-key SCENARIO: Recover data from damaged LUKS header ----------------------------------------------- If LUKS header is damaged, you need a header backup (hopefully you made one). Restore LUKS header from backup: cryptsetup luksHeaderRestore /dev/sdX1 --header-backup-file header-backup.img If no backup exists and header is damaged, data is likely unrecoverable. This is why LUKS header backups are critical: # How to create a header backup (do this BEFORE disaster): cryptsetup luksHeaderBackup /dev/sdX1 --header-backup-file header-backup.img SCENARIO: Access eCryptfs encrypted home directory -------------------------------------------------- Ubuntu's legacy home encryption uses eCryptfs. Mount an eCryptfs-encrypted home: # You need the user's login password ecryptfs-recover-private Or manually: mount -t ecryptfs /home/.ecryptfs/username/.Private /mnt/recovery SCENARIO: Access stored passwords (pass) ---------------------------------------- pass is the standard Unix password manager. Passwords are GPG-encrypted files in ~/.password-store. 
If you use pass, your passwords may be recoverable if you have:
- Your GPG private key
- Your ~/.password-store directory

List all passwords: pass
Show a password:
pass Email/gmail
pass -c Email/gmail   # Copy to clipboard instead
Search passwords: pass grep searchterm
Initialize new password store (if setting up): pass init GPG-KEY-ID

Import existing password store:
1. Import your GPG private key: gpg --import key.asc
2. Copy ~/.password-store from backup
3. Use pass commands as normal

Generate new password: pass generate -n NewSite/login 20

Note: Requires your GPG private key to decrypt.
If you don't use pass, this tool isn't useful for you.

ENCRYPTION TIPS
---------------
1. GPG symmetric encryption (gpg -c) only needs the password to decrypt
2. GPG asymmetric encryption requires the private key - no key = no access
3. Always keep LUKS header backups separate from the encrypted drive
4. BitLocker recovery keys are often in Microsoft accounts
5. ZFS encryption takes a passphrase or a raw/hex key file (keyformat/keylocation properties)
6. eCryptfs wrapped passphrase is in ~/.ecryptfs/wrapped-passphrase
7. If you forget encryption passwords and have no backups, data is gone
8. Hardware security keys (YubiKey) may be required for some GPG keys
9. pass stores passwords as GPG-encrypted files - need your GPG key to access

================================================================================
9. SYSTEM TRACING (eBPF/bpftrace)
================================================================================

Linux equivalent of DTrace. Uses eBPF (extended Berkeley Packet Filter) for safe, dynamic kernel tracing. Essential for diagnosing performance issues, kernel problems, and understanding system behavior.
QUICK REFERENCE --------------- tldr bpftrace # Quick examples man bpftrace # Full manual bpftrace -l # List available probes bpftrace -e 'BEGIN { printf("hello\n"); }' # Test it works TOOLS AVAILABLE --------------- bpftrace - High-level tracing language (like DTrace) bcc-tools - 100+ pre-built diagnostic tools perf - Linux kernel profiler USEFUL BCC TOOLS (run as root) ------------------------------ execsnoop # Trace new process execution opensnoop # Trace file opens biolatency # Block I/O latency histogram tcpconnect # Trace TCP connections tcpaccept # Trace TCP accepts ext4slower # Trace slow ext4 operations zfsslower # Trace slow ZFS operations (if available) runqlat # CPU scheduler latency cpudist # CPU usage distribution cachestat # Page cache hit/miss stats memleak # Memory leak detector BPFTRACE ONE-LINERS ------------------- Count system calls by process: bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }' Trace disk I/O latency histogram: bpftrace -e 'kprobe:blk_account_io_start { @start[arg0] = nsecs; } kprobe:blk_account_io_done /@start[arg0]/ { @usecs = hist((nsecs - @start[arg0]) / 1000); delete(@start[arg0]); }' Trace file opens: bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }' Trace TCP connections: bpftrace -e 'kprobe:tcp_connect { printf("%s connecting\n", comm); }' Profile kernel stacks at 99Hz: bpftrace -e 'profile:hz:99 { @[kstack] = count(); }' ZFS-SPECIFIC TRACING -------------------- Trace ZFS reads: bpftrace -e 'kprobe:zfs_read { @[comm] = count(); }' Trace ZFS writes: bpftrace -e 'kprobe:zfs_write { @[comm] = count(); }' PERF BASICS ----------- Record CPU profile for 10 seconds: perf record -g sleep 10 View the report: perf report List available events: perf list Real-time top-like view: perf top LEARN MORE ---------- https://www.brendangregg.com/bpf-performance-tools-book.html https://github.com/iovisor/bcc https://github.com/iovisor/bpftrace 
https://www.brendangregg.com/ebpf.html ================================================================================ 10. TERMINAL WEB BROWSING ================================================================================ Two terminal web browsers available for documentation and troubleshooting. BROWSERS AVAILABLE ------------------ lynx - Classic text browser, most compatible, keyboard-driven w3m - Better table rendering, can display images in some terminals LYNX BASICS ----------- Start browsing: lynx https://wiki.archlinux.org lynx file.html Navigation: Arrow keys - Move around Enter - Follow link Backspace - Go back q - Quit / - Search in page g - Go to URL p - Print/save page W3M BASICS ---------- Start browsing: w3m https://wiki.archlinux.org w3m file.html Navigation: Arrow keys - Scroll Enter - Follow link B - Go back U - Enter URL q - Quit (Q to quit without confirm) / - Search forward Tab - Next link Shift+Tab - Previous link OFFLINE ARCH WIKI (NO NETWORK NEEDED) ------------------------------------- This ISO includes the full Arch Wiki for offline use - invaluable when networking is broken and you need documentation. 
arch-wiki-lite (CLI, smaller): wiki-search zfs # Search for articles wiki-search mkinitcpio # Find mkinitcpio docs wiki-search "grub rescue" # Search with spaces arch-wiki-docs (HTML, complete): Location: /usr/share/doc/arch-wiki/html/ Browse with w3m: w3m /usr/share/doc/arch-wiki/html/index.html Search for topic: find /usr/share/doc/arch-wiki/html -iname "*zfs*" w3m /usr/share/doc/arch-wiki/html/en/ZFS.html USEFUL URLS FOR RESCUE (WHEN ONLINE) ------------------------------------ https://wiki.archlinux.org https://wiki.archlinux.org/title/ZFS https://wiki.archlinux.org/title/GRUB https://wiki.archlinux.org/title/Mkinitcpio https://bbs.archlinux.org https://openzfs.github.io/openzfs-docs/ SAVE PAGE FOR OFFLINE --------------------- lynx -dump URL > page.txt # Save as text w3m -dump URL > page.txt # Save as text wget -p -k URL # Download with assets curl URL > page.html # Just the HTML ================================================================================ END OF GUIDE ================================================================================