Subject: Re: [virtio-dev] VM memory protection and zero-copy transfers.


Hello,

I ran some benchmarks to compare the performance achieved by the swiotlb
approach and by our dynamic memory granting solution with different buffer
sizes. Unsurprisingly, the swiotlb approach performs much better when the
buffers are small. In fact, for small buffers, performance is on par with the
original configuration where the entire memory is shared. Of course, these
results are specific to the platform I used and to the system workload (e.g.
CPU utilization, cache utilization).

At the moment we are not planning to add a mechanism that would decide
dynamically between copying and granting buffers based on their size, but this
experiment showed us that devices that use small packets would benefit from
going through the swiotlb. So we are considering making this configurable on a
per-device basis in our solution.
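
To make that concrete, here is a minimal sketch of the shape such a per-device
policy could take. All names below (vdev, grant_pages(), ...) are hypothetical
and not part of virtio or of our current code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-device policy: small-packet devices copy through a
 * statically shared bounce region (the swiotlb path), others grant the
 * guest pages directly (the zero-copy path). */
enum buf_policy { BUF_POLICY_BOUNCE, BUF_POLICY_GRANT };

struct vdev {
    enum buf_policy policy;  /* fixed when the device is instantiated */
    uint8_t *bounce;         /* pre-shared bounce region */
    size_t bounce_size;
};

/* Stub standing in for the real granting mechanism. */
extern uint64_t grant_pages(const void *buf, size_t len);

/* Returns the address the device should use for this buffer. */
static uint64_t vdev_export(struct vdev *dev, const void *buf, size_t len,
                            size_t off)
{
    if (dev->policy == BUF_POLICY_BOUNCE && off + len <= dev->bounce_size) {
        memcpy(dev->bounce + off, buf, len);      /* copy path */
        return (uint64_t)(uintptr_t)(dev->bounce + off);
    }
    return grant_pages(buf, len);                 /* zero-copy path */
}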

We have also experimented with the use of a virtual IOMMU on the guest side and
we have a few concerns with this option.

If we add a virtual IOMMU, we can see mapping commands being issued as virtio
buffers are exchanged between the device and the driver. However, the kernel
controls the mappings from DMA addresses to physical addresses. In theory, we
could remap the memory in the host address space to "implement" these mappings,
but we have additional constraints that make this approach problematic.
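
For reference, the remapping approach would look roughly like this. The request
layout is abridged from the virtio-iommu MAP request; s2_remap() is a
hypothetical hypervisor primitive, not something defined by virtio:

#include <stdint.h>

/* Abridged from the virtio-iommu MAP request (see the virtio-iommu
 * device specification); head/tail and endianness handling omitted. */
struct virtio_iommu_req_map {
    uint32_t domain;
    uint64_t virt_start;   /* IOVA range chosen by the guest kernel */
    uint64_t virt_end;     /* inclusive end of the range */
    uint64_t phys_start;   /* guest-physical pages backing the range */
    uint32_t flags;
};

/* Hypothetical hypervisor primitive that rewrites the host-side view of
 * guest memory; not part of virtio. */
extern int s2_remap(uint64_t iova, uint64_t gpa, uint64_t len);

/* Sketch: "implement" a guest mapping by remapping host memory so the
 * device-visible address space matches the IOVAs the guest hands out. */
static int handle_map(const struct virtio_iommu_req_map *r)
{
    uint64_t len = r->virt_end - r->virt_start + 1;

    return s2_remap(r->virt_start, r->phys_start, len);
}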

Our solution runs on systems where the physical IOMMU does not support address
translation. So we rely on having an identity mapping between the guest address
space and the physical address space to allow the guest OS to initiate DMA
transactions. If the memory that we import for virtio buffers uses translated
addresses, these buffers cannot be used in DMA transactions.
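
So on these systems, the best the handler sketched above could do is reject any
mapping that is not the identity (continuing the previous snippet):

/* On a platform whose physical IOMMU cannot translate, only identity
 * mappings (IOVA == guest-physical address) are usable for DMA. */
static int handle_map_identity_only(const struct virtio_iommu_req_map *r)
{
    if (r->virt_start != r->phys_start)
        return -1;  /* translated address: unusable for DMA here */
    return handle_map(r);
}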

We also have an issue with letting the driver control the exported memory
through an IOMMU. If we do this, we need to consider what will happen if the
guest unmaps a virtio buffer while it is in use on the device side.

Although it looks possible to recover from such a scenario when the device
only performs CPU accesses to the shared memory, things get more complicated
once the buffer may be involved in a DMA transaction.

In some previous projects, we have learned that the ability of a hardware
device and/or its associated driver to recover from an aborted transaction is
not something we can rely upon in the general case.

For this reason, in our typical memory granting scenarios, we usually "lock" the
shared memory regions to prevent the exporter from revoking the mappings until
the importer says it is ok to do so.
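
As a rough illustration, assuming a hypothetical grant-table entry shared
between exporter and importer, the lock amounts to a per-region busy flag that
revocation must honor:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical grant-table entry; the actual layout is implementation
 * specific and not defined by virtio. */
struct grant_entry {
    uint64_t gpa;    /* exported guest-physical region */
    uint64_t len;
    bool locked;     /* held by the importer while the region is in use */
};

/* Importer side. A real implementation would need an atomic
 * test-and-set here; plain accesses keep the sketch short. */
static bool grant_lock(struct grant_entry *g)
{
    if (g->locked)
        return false;
    g->locked = true;
    return true;
}

static void grant_unlock(struct grant_entry *g)
{
    g->locked = false;
}

/* Exporter side: revocation fails while the importer holds the lock. */
static bool grant_revoke(struct grant_entry *g)
{
    if (g->locked)
        return false;  /* importer still using the region */
    g->len = 0;        /* invalidate the entry */
    return true;
}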

Note that locking the mappings could be applied here as well. In this case, we
would still use the concept of shadow virtqueues, and the hypervisor would be
responsible for locking/unlocking the virtio buffers as they cycle between the
device and the driver. This design is likely to be slower than the original
implementation, as the cost of locking the mappings is significant (i.e. an
extra page table walk to validate the memory regions).
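
The per-descriptor work in the hypervisor would then look roughly like this,
reusing the hypothetical grant_lock()/grant_unlock() above; validate_region()
stands in for the extra page table walk:

struct vring_desc_view {
    uint64_t addr;
    uint32_t len;
};

/* The page table walk that makes this path expensive. */
extern bool validate_region(uint64_t addr, uint64_t len);

/* Driver made a buffer available: validate it, lock it, and only then
 * expose the descriptor through the shadow virtqueue. */
static int shadow_push_avail(struct grant_entry *g,
                             const struct vring_desc_view *d,
                             struct vring_desc_view *shadow)
{
    if (!validate_region(d->addr, d->len) || !grant_lock(g))
        return -1;   /* not safely usable by the device right now */
    *shadow = *d;
    return 0;
}

/* Device marked the buffer used: the driver may reclaim it again. */
static void shadow_pop_used(struct grant_entry *g)
{
    grant_unlock(g);
}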

As we discussed in this thread, there are a few options available to enable
virtio in configurations where the VM address spaces are isolated, and I think
they all have different trade-offs. Our approach certainly has some drawbacks,
but it also addresses some specific considerations that are relevant in our
use case. Different configurations will probably require different solutions
to this question.

What would be the next steps to move forward with adding a new feature bit
such as the one I discussed in my original email? Should we prepare a patch
against the specification and post it here for further discussion?

Baptiste

