
Monday, 30 November 2020

VirGL 3D Acceleration on KVM with LibVirt and Nvidia Drivers

The proprietary Nvidia drivers seem to cause issues with the usual 3D acceleration libvirt configuration used by virt-manager and GNOME Boxes. When the normal configuration to enable VirGL is applied, the viewer shows only a black/blank screen -
<graphics type="spice">
  <listen type="none"/>
  <gl rendernode="/dev/dri/renderD128"/>
</graphics>
<video>
  <model type="virtio" heads="1" primary="yes">
    <acceleration accel3d="yes"/>
  </model>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
The error that it returns is -
qemu_spice_gl_scanout_texture: failed to get fd for texture
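If only the black screen is visible, this message can usually be found in the QEMU log that libvirt keeps for the domain. A quick way to check (the log location and the domain name "myvm" are assumptions here; adjust them to your setup) -
# Search the domain's QEMU log for the scanout error
$ grep -i scanout_texture /var/log/libvirt/qemu/myvm.log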
Looking around, this seems to be caused by the proprietary Nvidia drivers requiring EGL support. The configuration that worked for me was this -
<graphics type="spice">
  <listen type="none"/>
</graphics>
<graphics type="egl-headless">
  <gl rendernode="/dev/dri/renderD128"/>
</graphics>
<video>
  <model type="virtio" heads="1" primary="yes">
    <acceleration accel3d="yes"/>
  </model>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
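The &lt;graphics&gt; and &lt;video&gt; sections can be edited directly in the domain XML with virsh (the domain name "myvm" below is just a placeholder). On multi-GPU systems the render node path may also differ, so it is worth confirming which node belongs to which card -
# Edit the domain XML directly (replace "myvm" with your VM's name)
$ virsh --connect qemu:///system edit myvm

# List the available DRM render nodes; /dev/dri/renderD128 is commonly
# the first GPU, but this can vary on multi-GPU systems
$ ls -l /dev/dri/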
To check that it is working, run the following commands in a terminal inside the VM -
$ dmesg |grep -i drm
[    5.326446] systemd[1]: Starting Load Kernel Module drm...
[    5.869884] [drm] pci: virtio-vga detected at 0000:00:01.0
[    5.871953] [drm] features: +virgl +edid
[    5.873360] [drm] number of scanouts: 1
[    5.873365] [drm] number of cap sets: 2
[    5.882054] [drm] cap set 0: id 1, max-version 1, max-size 308
[    5.882247] [drm] cap set 1: id 2, max-version 2, max-size 688
[    5.885640] [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
[    5.891133] virtio_gpu virtio0: fb0: virtio_gpudrmfb frame buffer device

$ glxinfo |grep -i vir
    Device: virgl (0x1010)
OpenGL renderer string: virgl
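As a further sanity check, a simple OpenGL program such as glxgears (from the mesa-utils or mesa-demos package, depending on the distribution) should render smoothly in the guest -
# Quick OpenGL test; should open a window and print a frame rate
$ glxgears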


Missing GPUGraphicsClockOffset and GPUMemoryTransferRateOffset in nvidia-settings for overclocking from the terminal (Linux)

For those that are using Linux and encountering the following error messages when trying to adjust the overclock offset in the terminal - ER...