Muvirt reinstall, preserving VMs?

Hello.

My Ten64 was happily running muvirt (muvirt 22.03.2+traverse, r19803-9a599fee93) until this evening, when I broke the installation by letting the root partition reach 100% capacity (yeah I know, rookie mistake).

The muvirt installation will no longer boot. With extended debugging (level 4) set at boot, I see this error repeated:

[   23.726007] procd: Finished hotplug exec instance, pid=2921
[   23.731806] procd: Launched hotplug exec instance, pid=2929
[   23.938027] procd: Finished hotplug exec instance, pid=2929
[   23.943831] procd: Launched hotplug exec instance, pid=2937
sh: write error: No such device
[   24.166106] procd: Finished hotplug exec instance, pid=2937
[   24.171959] procd: Launched hotplug exec instance, pid=2945
sh: write error:[   24.177643] procd: Coldplug complete
[   24.182565] procd: Change state 1 -> 2

[   24.187695] procd: - watchdog -
[   24.190914] procd: Ping
[   24.193430] procd: Opened watchdog with timeout 30s
[   24.198377] procd: Watchdog does not have CARDRESET support
[   24.204099] procd: - ubus -
[   24.206998] procd: Create service ubus
[   24.210937] procd: Start instance ubus::instance1
[   24.215874] procd: Started instance ubus::instance1[2950]
[   24.221354] procd: running /etc/init.d/ubus running
[   24.226553] procd: glob failed on /etc/init.d/ubus
[   24.256754] procd: Connection to ubus failed
[   24.360438] procd: Connection to ubus failed
[   24.433995] procd: Finished hotplug exec instance, pid=2945
[   24.439830] procd: Launched hotplug exec instance, pid=2954
sh: write error: No such device
[   24.564837] procd: Connection to ubus failed
[   24.701972] procd: Finished hotplug exec instance, pid=2954
[   24.707844] procd: Launched hotplug exec instance, pid=2962
sh: write error: No such device
[   24.905975] procd: Finished hotplug exec instance, pid=2962
[   24.911776] procd: Launched hotplug exec instance, pid=2970
sh: write error: No such device
[   24.968764] procd: Connection to ubus failed
[   25.145999] procd: Finished hotplug exec instance, pid=2970
[   25.151870] procd: Launched hotplug exec instance, pid=2978
sh: write error: No such device
[   25.393990] procd: Finished hotplug exec instance, pid=2978
[   25.399792] procd: Launched hotplug exec instance, pid=2986
[   25.633981] procd: Finished hotplug exec instance, pid=2986
[   25.639849] procd: Launched hotplug exec instance, pid=2994
sh: write error: No such device
[   25.676972] random: crng init done
[   25.680408] random: 25 urandom warning(s) missed due to ratelimiting
[   25.772843] procd: Connection to ubus failed
[   25.885404] procd: Finished hotplug exec instance, pid=2994
[   25.891193] procd: Launched hotplug exec instance, pid=3002
sh: write error: No such device
[   26.122038] procd: Finished hotplug exec instance, pid=3002
[   26.127891] procd: Launched hotplug exec instance, pid=3010
  Found volume group "vmdata" using metadata type lvm2
  6 logical volume(s) in volume group "vmdata" now active
[   26.366162] procd: Finished hotplug exec instance, pid=3010
[   26.372017] procd: Launched hotplug exec instance, pid=3018
sh: write error: No such device
[   26.582083] procd: Finished hotplug exec instance, pid=3018
[   26.587946] procd: Launched hotplug exec instance, pid=3026
[   26.777054] procd: Connection to ubus failed
[   26.781948] procd: Finished hotplug exec instance, pid=3026
[   26.787776] procd: Launched hotplug exec instance, pid=3034
sh: write error: No such device
[   26.998014] procd: Finished hotplug exec instance, pid=3034
[   27.003898] procd: Launched hotplug exec instance, pid=3042
sh: write error: No such device
[   27.193410] procd: Finished hotplug exec instance, pid=3042
[   27.199202] procd: Launched hotplug exec instance, pid=3050
sh: write error: No such device
[   27.389542] procd: Finished hotplug exec instance, pid=3050
[   27.395408] procd: Launched hotplug exec instance, pid=3058
sh: write error: No such device
[   27.609508] procd: Finished hotplug exec instance, pid=3058
[   27.615351] procd: Launched hotplug exec instance, pid=3066
sh: write error: No such device
[   27.781453] procd: Connection to ubus failed
[   27.818057] procd: Finished hotplug exec instance, pid=3066
[   27.823907] procd: Launched hotplug exec instance, pid=3074
sh: write error: No such device
[   28.045983] procd: Finished hotplug exec instance, pid=3074
[   28.786595] procd: Connection to ubus failed
[   29.194349] procd: Ping
[   29.791602] procd: Connection to ubus failed
[   30.797090] procd: Connection to ubus failed
[   31.802565] procd: Connection to ubus failed
[   32.807958] procd: Connection to ubus failed
[   33.813432] procd: Connection to ubus failed
[   34.198162] procd: Ping
[   34.818427] procd: Connection to ubus failed
[   35.823903] procd: Connection to ubus failed
[   36.829375] procd: Connection to ubus failed
[   37.834845] procd: Connection to ubus failed
[   38.839945] procd: Connection to ubus failed
[   39.201652] procd: Ping
[   39.844936] procd: Connection to ubus failed
[   40.850408] procd: Connection to ubus failed

The easiest way forward is likely to simply reinstall muvirt on the SSD. However, I’d like to preserve the VMs I’ve previously created:

root@(none):/# lvscan
File descriptor 3 (/dev/watchdog) leaked on lvscan invocation. Parent PID 1345: ash
  inactive          '/dev/vmdata/muvirtwork' [80.00 GiB] inherit
  inactive          '/dev/vmdata/rockstor' [50.00 GiB] inherit
  inactive          '/dev/vmdata/archie' [16.00 GiB] inherit
  inactive          '/dev/vmdata/k3controller' [10.00 GiB] inherit
  inactive          '/dev/vmdata/k3node1' [10.00 GiB] inherit
  inactive          '/dev/vmdata/k3node2' [10.00 GiB] inherit

Is there a way I can achieve this? I’m particularly keen on keeping my rockstor VM, as it has the custom kernel needed for DPAA2 support. I could start from scratch, but that would just cost me time.
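
For reference, what I had in mind for preserving them was something along these lines, run from the recovery image. Both the assumption that the recovery environment has the lvm2 tools and the /mnt/backup mount point for an external disk are mine, so adjust to taste:

# activate the vmdata volume group so the logical volumes appear under /dev/vmdata
vgchange -ay vmdata

# copy each logical volume to an image file on external storage
# (/mnt/backup is a placeholder for wherever a USB disk is mounted)
for lv in muvirtwork rockstor archie k3controller k3node1 k3node2; do
    dd if=/dev/vmdata/$lv of=/mnt/backup/$lv.img bs=4M
done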

Well, after staring Sod’s law in the face this evening, I was finally able to rectify the issue with my muvirt installation. Roughly, I did the following:

  1. Booted to the recovery image
  2. Ran fsck against all of the filesystems on the SSD. It corrected a number of errors on /dev/sda2, which in my case is where the “/” for muvirt is located (steps 2 and 3 are sketched after this list)
  3. Populated the empty /etc/group file (it was zero bytes). This is where the backup of my muvirt configuration that I had created on 11/24 came in very handy
  4. Booted muvirt from the SSD (hoorah)
  5. This left me with a partially working muvirt system. I saw a number of errors about missing kernel modules during bootup - BUT I could still access the UI
  6. Via the muvirt UI, I flashed the same version of muvirt (the CLI equivalent is sketched below)
  7. Rebooted again and everything is working now
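
For the record, steps 2 and 3 boiled down to roughly the following from the recovery shell. The /mnt/root mount point and the backup filename are placeholders for my own paths:

# step 2: check and repair the filesystems on the SSD (repeat for each partition;
# /dev/sda2 is where muvirt's "/" lives in my case)
fsck -y /dev/sda2

# step 3: mount the repaired root and restore /etc/group from the config backup
mkdir -p /mnt/root
mount /dev/sda2 /mnt/root
tar -xzf /tmp/muvirt-config-backup.tar.gz -C /mnt/root etc/group
umount /mnt/root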

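Step 6 I did through the UI, but the command-line equivalent on an OpenWrt-based system like muvirt would presumably be sysupgrade with the downloaded image (the filename here is a placeholder):

# settings are kept by default; -n would wipe them
sysupgrade -v /tmp/muvirt-sysupgrade.img
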
It would appear that setting up a new VM (CentOS 8 Stream) is what filled / during the image download phase. I’ll watch the disk space more carefully.
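
Something like this one-liner is what I have in mind for watching it (the 90% threshold is just my own choice):

# warn when the root filesystem goes above 90% used (works with the busybox df/awk)
df -P / | awk 'NR==2 { gsub("%", "", $5); if ($5+0 > 90) print "WARNING: / is " $5 "% full" }'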