Subject: Re: [virtio-dev] Re: [PATCH v1] virtio-mmio: Specify wait needed in driver during reset



On 2021/8/17 6:03 PM, Srivatsa Vaddagiri wrote:
* Michael S. Tsirkin <mst@redhat.com> [2021-08-17 03:51:47]:

So before we move on I'd like to know whether we do something as drastic
as incrementing the version number for a theoretical or practical
benefit.
We initially stumbled on this reset issue when doing some optimization in the
Qualcomm Type-1 hypervisor for virtio. In our case, the virtio frontend and
backend drivers are in separate VMs. The Android VM that hosts the backend
driver is considered untrusted, which among other things meant the front-end
could see large latencies for its MMIO register read/write requests (largely
owing to scheduling delays). In some cases, I measured 5-10 ms for a single
MMIO register read or write request. A few of the registers are accessed in
the hot path (like VIRTIO_MMIO_QUEUE_NOTIFY or VIRTIO_MMIO_INTERRUPT_ACK),
which we wanted to keep as low-latency as possible.

The optimization we have to help reduce this latency is for the hypervisor to
acknowledge MMIO writes without waiting for the backend to respond. For example,
when the VM writes to VIRTIO_MMIO_QUEUE_NOTIFY, it causes a trap, and normally
the hypervisor would have to stall the vcpu until the backend acknowledges the
write.


This doesn't look necessary. QEMU/KVM allows an eventfd to be installed here; the vcpu is then unblocked immediately after the event is signaled, and we don't need to wait for the acknowledgement.
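
(For reference, a rough, untested sketch of the ioeventfd wiring meant here,
using the KVM_IOEVENTFD ioctl; vm_fd and the QueueNotify guest-physical
address are placeholders:)

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Register an eventfd so that a guest write to QueueNotify completes
 * immediately in KVM and the backend is woken via the eventfd, instead
 * of stalling the vcpu until the backend responds. */
static int install_notify_eventfd(int vm_fd, __u64 queue_notify_gpa)
{
	int efd = eventfd(0, EFD_NONBLOCK);
	struct kvm_ioeventfd ioev = {
		.addr  = queue_notify_gpa,	/* GPA of VIRTIO_MMIO_QUEUE_NOTIFY */
		.len   = 4,			/* 32-bit MMIO write */
		.fd    = efd,
		.flags = 0,			/* no datamatch: any write signals */
	};

	if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
		return -1;
	return efd;	/* backend polls this fd for queue notifications */
}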


In our
case, the hypervisor would unblock the vcpu immediately after injecting an
interrupt into the backend (to let the backend know that there is a
queue_notify event). Handling writes to VIRTIO_MMIO_INTERRUPT_ACK was a bit
tricky, but we managed that with a few changes in the backend (especially
around any awareness the backend had about the front-end still being in an
interrupt handler). Similarly, other registers that are written are handled
entirely in the hypervisor without requiring intervention from the backend.

Handling reset is the only open issue we have: a guest triggering reset
currently has no provision to poll for reset completion, and the hypervisor
itself cannot handle reset completely. This is where we observed a discrepancy
between PCI and MMIO in handling reset, which we wanted to address with this
discussion.
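
(For concreteness, what MMIO currently lacks is the PCI-style poll-on-status
reset. A minimal sketch of the driver side, mirroring what virtio-pci modern
does in vp_reset; regs is a placeholder for the mapped virtio-mmio window:)

#include <linux/delay.h>
#include <linux/io.h>

#define VIRTIO_MMIO_STATUS	0x070

/* Write 0 to Status to trigger reset, then poll until the device
 * reports 0 back, signaling that reset has completed. */
static void vm_reset_and_wait(void __iomem *regs)
{
	writel(0, regs + VIRTIO_MMIO_STATUS);
	while (readl(regs + VIRTIO_MMIO_STATUS) != 0)
		msleep(1);	/* backend may take a while to finish reset */
}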


It looks like we may suffer from a similar issue with other untrusted devices, like VDUSE.



I think the option we discussed earlier of a new feature bit seems less
intrusive than incrementing the MMIO version?

https://lists.oasis-open.org/archives/virtio-dev/202107/msg00168.html


Using a feature bit raises some interesting questions:

1) drivers usually reset the device before feature negotiation
2) this means the driver must mandate the new behavior even before feature negotiation is done

We don't have those issues if we increase the version (but that looks more intrusive).
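
(To illustrate the ordering problem: the spec's initialization sequence
performs the first reset before any feature bit is visible. The helpers below
are placeholders, not a real API:)

#include <stdint.h>

struct vdev;	/* placeholder device handle */
void write_status(struct vdev *d, uint8_t s);
uint64_t read_features(struct vdev *d);
void write_features(struct vdev *d, uint64_t f);

enum { ACKNOWLEDGE = 1, DRIVER = 2, FEATURES_OK = 8 };

void driver_init(struct vdev *dev, uint64_t supported)
{
	write_status(dev, 0);	/* reset: feature bits are still unknown here */
	write_status(dev, ACKNOWLEDGE);
	write_status(dev, ACKNOWLEDGE | DRIVER);
	/* features only become visible now, after the reset already happened */
	write_features(dev, read_features(dev) & supported);
	write_status(dev, ACKNOWLEDGE | DRIVER | FEATURES_OK);
}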

Thanks



- vatsa

--

Qualcomm Innovation Center, Inc. is submitting the attached "feedback" as a
non-member to the virtio-dev mailing list for consideration and inclusion.



