Muvirt and disk type for Rockstor

Hello. I’ve successfully configured a Rockstor VM under Muvirt (w/DPAA2). This is thanks to all of the great help I received in this thread.

I’ve added two scratch disks to the configuration and built out a pool in Rockstor. The Rockstor guide here states: “virtio drives, although more efficient, are currently not supported.”

I’ve configured the disks for the VM as follows:

$ uci show virt.rockstor.disks
virt.rockstor.disks='/dev/mapper/vmdata-rockstor,serial=ROCKSTOR' '/dev/sdb,serial=STORAGE1' '/dev/sdc,serial=STORAGE2'
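
For anyone wanting to reproduce this, the list can be set with uci add_list, something like the following (this assumes, as in the output above, that the config file is virt and the VM section is rockstor):

# clear any existing entries, then add one list item per disk (device,options)
uci delete virt.rockstor.disks
uci add_list virt.rockstor.disks='/dev/mapper/vmdata-rockstor,serial=ROCKSTOR'
uci add_list virt.rockstor.disks='/dev/sdb,serial=STORAGE1'
uci add_list virt.rockstor.disks='/dev/sdc,serial=STORAGE2'
uci commit virt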

The script /etc/init.d/muvirt appears to add/force the type virtio-blk for the disks. I know this isn’t a forum for Rockstor, but does anybody have recommendations to share on this? Should I be forcing a SATA/AHCI device type or something similar in the /etc/init.d/muvirt script, for example?

I should note that with virtio-blk, I was able to successfully create a pool under Rockstor and write/read data to the test share. I do note, however, that SMART status is not available.
Regards,

Gabor

virtio-blk is the best type to use for now, and I’ve used it on two separate installs for quite a while without issues. I think the Rockstor doc is severely out of date given the age of the screenshots!

Using virtio-scsi might allow things like SMART status to work if you can pass through the ‘LUN’ presented by the controller, but I haven’t attempted this yet.
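
If anyone wants to experiment, the QEMU arguments I have in mind look roughly like this; completely untested on muvirt, and the device path and IDs are just placeholders:

# untested sketch: expose a whole host disk through a virtio-scsi controller;
# scsi-block forwards SCSI commands from the guest to the host block device
-device virtio-scsi-pci,id=scsi0 \
-drive if=none,id=hd0,file=/dev/sdb,format=raw \
-device scsi-block,drive=hd0,bus=scsi0.0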

Passing through your SATA controller as a PCIe device would be even better and would allow SMART to work, but I’ve had stability problems every time I have attempted to do this. I think (unproven!) it’s because the AHCI (SATA) driver appears to be sensitive to how and when interrupts arrive.
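
For reference, the passthrough itself is the usual vfio-pci procedure; this is only a sketch and the PCI address below is just an example:

# untested sketch: detach the SATA controller (example address 0000:01:00.0)
# from the host AHCI driver and bind it to vfio-pci (vfio-pci module must be loaded)
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers_probe
# then, on the QEMU command line, hand the whole controller to the VM
-device vfio-pci,host=0000:01:00.0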


Good point about the age of the post and its screenshots.

SMART was one thing I thought would be useful. I did try earlier today to modify /etc/init.d/muvirt to specify the correct devices, etc., but to no avail; that’s more of a QEMU configuration question. I’ll reconfigure to use virtio-blk, as that seemed to give good performance (short of passing through the SATA controller).

# procd_append_param command -device
# procd_append_param command "virtio-scsi-pci,id=scsi,num_queues=4"

hdnum=0
for diskref in $disks; do
        echo "Disk ${hdnum}: $diskref"
        storagedevice=$(get_storage_device "${diskref}")
        driveargs=$(get_storage_device_arguments "${diskref}")
        procd_append_param command -drive
        # procd_append_param command "if=none,id=hd${hdnum},file=${storagedevice}"
        procd_append_param command "if=none,id=hd${hdnum},file=${storagedevice}"
        procd_append_param command -device
        procd_append_param command "virtio-blk,drive=hd${hdnum}${driveargs}"
        # procd_append_param command "scsi-hd,drive=hd${hdnum}${driveargs}"
        hdnum=$((hdnum+1))
done
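
For completeness, the virtio-scsi variant I was attempting (controller uncommented, scsi-hd attached to it) looked roughly like this; I’m not sure the bus= wiring is right, which may be why it didn’t work:

# add a single virtio-scsi controller before the disk loop
procd_append_param command -device
procd_append_param command "virtio-scsi-pci,id=scsi,num_queues=4"

hdnum=0
for diskref in $disks; do
        storagedevice=$(get_storage_device "${diskref}")
        driveargs=$(get_storage_device_arguments "${diskref}")
        procd_append_param command -drive
        procd_append_param command "if=none,id=hd${hdnum},file=${storagedevice}"
        procd_append_param command -device
        # attach each disk to the virtio-scsi controller instead of virtio-blk
        procd_append_param command "scsi-hd,drive=hd${hdnum},bus=scsi.0${driveargs}"
        hdnum=$((hdnum+1))
done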

Matt, just revisiting this. I think having SMART status would be useful. If you have any pointers in the future about virtio-scsi, it would be appreciated. I know it’s about QEMU options, but I didn’t have much luck when I tried.

I may experiment with passing the controller in. How did the stability issues manifest themselves in this case?

-Gabor

I haven’t tried this (for any extended period of time) in a year or so, so hopefully the situation has changed.

Basically the controller would pass through successfully and you could use it inside the VM as you normally would, but at some random point later the SATA driver inside the VM would suddenly decide that all the disks attached to the controller had timed out on some command and treat them as if they had “crashed”. The only way to recover was to reboot the host machine, as that does a hardware reset of the controller :frowning:

EDIT 18 Sept: This is an example of what happens:

I think it could be due to the Linux AHCI driver being latency-sensitive, and perhaps other processes on the host disturbing the VM enough to cause a problem. It’s not an issue that impacts other types of cards (like network adapters), as they aren’t so “real time” in their operation.

It might be possible to mitigate this by carefully isolating the VM onto its own CPU cores: using taskset or a similar tool on the QEMU processes, using isolcpus= on the VM host so Linux doesn’t try to schedule anything on those cores, and ensuring the ‘proxy’ interrupts (vfio-msi) are pinned to the same CPU cores as the VM.
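
Something like this is what I have in mind; untested, and the core numbers, PID and IRQ number are just placeholders:

# 1. keep the host scheduler off cores 2-3 (host kernel command line)
#      isolcpus=2,3
# 2. pin the running QEMU process for the VM onto those cores
taskset -cp 2,3 <qemu-pid>
# 3. find the vfio interrupts and pin them to the same cores
grep vfio /proc/interrupts
echo 2-3 > /proc/irq/<irq-number>/smp_affinity_list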

I haven’t figured this out myself yet; there are quite a few examples for libvirt around, but very little showing the actual syntax sent to QEMU.
My hope was that virtualizing SCSI at the ‘LUN’ level would work:

-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x8
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0

(as seen in Windows10 dvd passthrough slow read · Issue #448 · virtio-win/kvm-guest-drivers-windows · GitHub)
There is also vhost-scsi.

Apparently virtio-scsi now does TRIM and native command queuing, so it would be worth looking into even if SMART doesn’t work.
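
If you go down that route, you would probably also want discard enabled on the backing drive so the guest’s TRIM actually reaches the device. An untested sketch:

# discard=unmap on the drive lets guest TRIM/UNMAP reach the backing device
-drive if=none,id=hd0,file=/dev/sdb,format=raw,discard=unmap \
-device scsi-hd,drive=hd0,bus=scsi0.0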

Other posts I’ve read suggest the SMART part won’t work though, as SMART doesn’t go through the SCSI layer, unless you have actual SAS controllers and SAS disks, which will respond to the SCSI equivalents.

I would also be interested in both virtio-scsi and isolcpus=. Is the latter already exposed in muvirt?


Thanks for raising core affinity (isolation); it would be great if we could use that.