Subject: Re: [virtio-dev] On doorbells (queue notifications)


On Wed, Jul 15, 2020 at 05:40:33PM +0100, Alex Bennée wrote:
> 
> Stefan Hajnoczi <stefanha@redhat.com> writes:
> 
> > On Wed, Jul 15, 2020 at 02:29:04PM +0100, Alex Bennée wrote:
> >> Stefan Hajnoczi <stefanha@redhat.com> writes:
> >> > On Tue, Jul 14, 2020 at 10:43:36PM +0100, Alex Bennée wrote:
> >> >> Finally I'm curious if this is just a problem avoided by the s390
> >> >> channel approach? Does the use of messages over a channel just avoid the
> >> >> sort of bouncing back and forth that other hypervisors have to do when
> >> >> emulating a device?
> >> >
> >> > What does "bouncing back and forth" mean exactly?
> >> 
> >> Context switching between guest and hypervisor.
> >
> > I have CCed Cornelia Huck, who can explain the lifecycle of an I/O
> > request on s390 channel I/O.
> 
> Thanks.
> 
> I was also wondering about the efficiency of doorbells/notifications the
> other way. AFAIUI for both PCI and MMIO only a single write is required
> to the notify flag, which causes a trap to the hypervisor and the rest of
> the processing. The hypervisor doesn't have the cost of multiple exits to
> read the guest state, although it obviously wants to be as efficient as
> possible passing the data back up to whatever is handling the backend
> of the device so it doesn't need to do multiple context switches.
> 
> Has there been any investigation into other mechanisms for notifying the
> hypervisor of an event - for example using a HYP call or similar
> mechanism?
> 
> My gut tells me this probably doesn't make any difference as a trap to
> the hypervisor is likely to cost the same either way because you still
> need to save the guest context before actioning something, but it would
> be interesting to know if anyone has looked at it. Perhaps there is a
> benefit in partitioned systems where the core running the guest can
> return straight away after initiating what it needs to internally in the
> hypervisor to pass the notification to something that can deal with it?

It's very architecture-specific. This is something Michael Tsirkin
looked into in the past. He found that MMIO and PIO perform differently
on x86; VIRTIO supports both so the device can be configured optimally.
There was an old discussion from 2013 here:
https://lkml.org/lkml/2013/4/4/299

Without nested page tables MMIO was slower than PIO. But with nested
page tables it was faster.
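
To illustrate (a minimal sketch, not code from this thread; the notify
address is assumed to have been mapped from the device's notify
capability already): the guest-side kick for a modern virtio-pci device
is a single 16-bit store of the queue index, and that one store is what
traps to the hypervisor whether the region is mapped as MMIO or PIO.

#include <stdint.h>

/* Sketch of a virtqueue doorbell from the guest driver's point of view.
 * notify_addr is assumed to be notify_base +
 * queue_notify_off * notify_off_multiplier for this queue. */
static inline void virtqueue_kick(volatile uint16_t *notify_addr,
                                  uint16_t queue_index)
{
    /* One store is the whole doorbell; the hypervisor learns which
     * virtqueue to service from the written value alone and does not
     * have to read any other guest state. */
    *notify_addr = queue_index;
}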

Another option on x86 is using Model-Specific Registers (for hypercalls)
but this doesn't fit into the PCI device model.

A bigger issue than vmexit latency is device emulation thread wakeup
latency. There is a thread (QEMU, vhost-user, vhost, etc.) monitoring the
ioeventfd, but it may be descheduled and its physical CPU may be in a low
power state. I ran a benchmark late last year with QEMU's AioContext
adaptive polling disabled so the wakeup latency could be measured:

       CPU 0/KVM 26102 [000] 85626.737072: kvm:kvm_fast_mmio: fast mmio at gpa 0xfde03000
    IO iothread1 26099 [001] 85626.737076: syscalls:sys_exit_ppoll: 0x1
                   4 microseconds ------^

(I did not manually configure physical CPU power states or use the
idle=poll host kernel parameter.)

Each virtqueue kick had 4 microseconds of latency before the device
emulation thread had a chance to process the virtqueue. This means the
maximum I/O Operations Per Second (IOPS) is capped at 250k
(1 second / 4 microseconds = 250,000) before virtqueue processing has
even begun!
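
To make the wakeup path concrete, here is a rough sketch (illustrative
names, not QEMU's actual code) of how a doorbell is wired up with KVM's
ioeventfd mechanism: the guest's store to the registered address is
completed in the kernel (the kvm_fast_mmio event in the trace above),
and the eventfd becoming readable is what wakes the emulation thread out
of ppoll() -- that wakeup is where the 4 microseconds go.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vm_fd: an open KVM VM file descriptor.
 * notify_gpa: guest physical address of the virtqueue notify register.
 * Returns an eventfd for the device emulation thread to poll, or -1. */
int register_doorbell(int vm_fd, uint64_t notify_gpa)
{
    int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
    if (efd < 0)
        return -1;

    struct kvm_ioeventfd ioev;
    memset(&ioev, 0, sizeof(ioev));
    ioev.addr = notify_gpa; /* doorbell address to intercept in-kernel */
    ioev.len  = 2;          /* 16-bit notify write; QEMU can also use a
                             * zero-length "fast MMIO" registration    */
    ioev.fd   = efd;        /* signalled on every write to the address */

    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
        close(efd);
        return -1;
    }

    return efd;
}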

QEMU AioContext adaptive polling helps here because we skip the vmexit
entirely while the IOThread is polling the vring (for up to 32
microseconds by default).
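
For anyone who wants to experiment, the polling window is a per-IOThread
property (poll-max-ns, in nanoseconds); setting it to 0 disables
adaptive polling as in the benchmark above, and the 32768 ns default is
the 32 microseconds mentioned. An illustrative invocation (the drive and
device names are placeholders):

qemu-system-x86_64 \
    -object iothread,id=iothread1,poll-max-ns=0 \
    -drive if=none,id=drive0,file=test.img,format=raw \
    -device virtio-blk-pci,drive=drive0,iothread=iothread1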

It would be great if more people dug into this and optimized
notifications further.

Stefan
