Microsoft has released a security update—KB4571756—which disables the RemoteFX vGPU feature because of a security vulnerability. It applies to Windows 10, version 2004, and all editions of Windows Server, version 2004.
After this update is installed, any VM that has RemoteFX vGPU enabled will fail to start with one of the following error messages:
- The virtual machine cannot be started because all the RemoteFX-capable GPUs are disabled in Hyper-V Manager.
- The virtual machine cannot be started because the server has insufficient GPU resources.
If the end user tries to re-enable the RemoteFX vGPU adapter, the VM will display this error message—
We no longer support the RemoteFX 3D video adapter. If you are still using this adapter, you may become vulnerable to security risk.
What is the RemoteFX vGPU feature?
The RemoteFX vGPU feature lets multiple virtual machines share a single physical GPU. It fits well when a dedicated GPU per VM would be wasteful; instead, all VMs dynamically share the GPU for their workloads. The advantages are, of course, lower GPU cost and reduced CPU load. Think of it as running multiple DirectX applications at the same time on the same physical GPU: instead of buying four GPUs, one GPU may be enough, depending on the workload. The feature also included countermeasures that kept any single VM from overusing the physical GPU.
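As a quick illustration, on a Hyper-V host that still supports the feature (Windows 10, version 1803, or earlier), you can see which physical GPUs are available for this kind of sharing with a single cmdlet:

```powershell
# Sketch: on a Hyper-V host running Windows 10 version 1803 or earlier,
# list the physical GPUs that RemoteFX can share with virtual machines.
Get-VMRemoteFXPhysicalVideoAdapter
```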
What is the security vulnerability around RemoteFX vGPU?
RemoteFX vGPU is old. It was introduced in Windows 7 and is now affected by a remote code execution vulnerability. The vulnerability exists because Hyper-V RemoteFX vGPU on a host server fails to properly validate input from an authenticated user on a guest operating system. An attacker can exploit it by running a specially crafted application on a guest OS that attacks certain third-party video drivers running on the Hyper-V host.
Once the attack succeeds, the attacker can run arbitrary code on the host OS. Because this is an architectural issue, there is no fix for it.
Alternatives to RemoteFX vGPU
The only option is to use an alternative, either a third-party vGPU solution or, as Microsoft suggests, Discrete Device Assignment (DDA). DDA lets you pass an entire PCIe device through to a VM. Not only graphics cards: you can also pass through NVMe storage devices.
The biggest advantage of DDA, apart from being secure, is that there is no need to install drivers on the host before the device is mounted within the VM. As long as the VM can identify the device's PCIe location path, the device can be mounted into the VM. In short, passing a GPU to a VM with DDA allows the native GPU driver, and all of its capabilities, to be used within the VM. That includes DirectX 12, CUDA, and so on, which was not possible with RemoteFX vGPU.
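To give a sense of what DDA looks like in practice, here is a minimal sketch using the Hyper-V PowerShell cmdlets Microsoft documents for graphics devices. The VM name, the location path, and the MMIO sizes below are placeholders; you would substitute the values for your own hardware.

```powershell
# Sketch: passing a GPU to a VM with Discrete Device Assignment (DDA).
# "GPU-VM" and the location path are placeholders; get the real path from
# Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths for your device,
# and disable the device on the host (Device Manager or Disable-PnpDevice) first.
$vmName = "GPU-VM"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"   # example only

# The VM must be off; GPUs usually need extra MMIO space configured as well.
Stop-VM -Name $vmName
Set-VM -Name $vmName -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Detach the device from the host and assign it to the VM.
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName

Start-VM -Name $vmName
```

Inside the VM, you then install the vendor's native GPU driver as you would on a physical machine.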
How to re-enable RemoteFX vGPU
Microsoft clearly warns that you should not use RemoteFX vGPU, but if you have to, there is a way to enable it again, at your own risk.
Assuming you have already configured the RemoteFX vGPU 3D adapter, the following steps work only on Windows 10, version 1803, and earlier versions.
Configure RemoteFX vGPU with Hyper-V Manager
To configure the RemoteFX vGPU 3D by using Hyper-V Manager, follow these steps:
- Stop the Virtual Machine
- Open Hyper-V Manager and navigate to VM Settings.
- Click on Add Hardware.
- Select RemoteFX 3D Graphics Adapter, and then select Add.
Configure RemoteFX vGPU with PowerShell cmdlets
- Enable-VMRemoteFXPhysicalVideoAdapter
- Add-VMRemoteFx3dVideoAdapter
- Get-VMRemoteFx3dVideoAdapter
- Set-VMRemoteFx3dVideoAdapter
- Get-VMRemoteFXPhysicalVideoAdapter
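As a rough illustration of how these cmdlets fit together on Windows 10, version 1803, or earlier, here is a sketch; the VM name and the adapter settings are placeholders, so adjust them for your environment.

```powershell
# Sketch only, for Windows 10 version 1803 and earlier.
# "Test-VM" and the adapter settings below are placeholders.
$vmName = "Test-VM"

# Enable the RemoteFX-capable GPUs on the host.
Get-VMRemoteFXPhysicalVideoAdapter | Enable-VMRemoteFXPhysicalVideoAdapter

# Add a RemoteFX 3D video adapter to the (stopped) VM and configure it.
Stop-VM -Name $vmName
Add-VMRemoteFx3dVideoAdapter -VMName $vmName
Set-VMRemoteFx3dVideoAdapter -VMName $vmName -MonitorCount 1 `
    -MaximumResolution "1920x1200" -VRAMSizeBytes 1GB

# Verify the adapter configuration, then start the VM.
Get-VMRemoteFx3dVideoAdapter -VMName $vmName
Start-VM -Name $vmName
```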
You can read more about it in Microsoft's documentation.