[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: [PATCH 00/11] Introduce transitional mmr pci device
On 4/3/2023 1:28 PM, Michael S. Tsirkin wrote:
The intent is to provide backward compatibility to the legacy interface, not really to fix the legacy interface itself, as that may break legacy behavior.

> On Mon, Apr 03, 2023 at 03:47:56PM +0000, Parav Pandit wrote:
>> From: Michael S. Tsirkin <mst@redhat.com>
>> Sent: Monday, April 3, 2023 11:34 AM
>>
>>> Another is that we can actually work around legacy bugs in the hypervisor. For example, atomicity and alignment bugs do not exist under DMA. Consider the MAC field, writable in legacy. The problem is that this write is not atomic, so there is a window where the MAC is corrupted. If you do MMIO then you just have to copy this bug. If you do DMA then the hypervisor can buffer all of the MAC and send it to the device in one go.
>>
>> I am familiar with this bug. User feedback that we received so far is that kernels with driver support use the CVQ for setting the MAC address on a legacy device. So it may help, but it is not super important. Also, if I recollect correctly, the MAC address is configured a bit early in the if-scripts sequence, before bringing up the interface. So we haven't seen a real issue around it.
>
> It's an example; there are other bugs in legacy interfaces.
A legacy driver would do this anyway. It expects certain flows to work that had been working for it over the previous software hypervisor.

> Take inability to decline feature negotiation as an example.

A hypervisor attempting to fail what was working before will not help.

> With transport vq we can fail at the transport level and the hypervisor can decide what to do, such as stopping the guest or unplugging the device, etc.
> So something like a vq would be a step up. I would like to understand the performance angle though. What you describe is pretty bad.

Do you mean the latency is bad, or the description?