Subject: Re: [virtio-comment] RFC: virtio-hostmem (+ Continuation of discussion from [virtio-dev] Memory sharing device)


* Frank Yang (lfy@google.com) wrote:
> virtio-hostmem is a proposed way to share host memory with the guest and
> communicate notifications. One potential use case is to have userspace
> drivers for virtual machines.
> 
> The latest version of the spec proposal can be found at
> 
> https://github.com/741g/virtio-spec/blob/master/virtio-hostmem.tex
> 
> The revision history so far:
> 
> https://github.com/741g/virtio-spec/commit/7c479f79ef6236a064471c5b1b8bc125c887b948
> - originally called virtio-user
> https://github.com/741g/virtio-spec/commit/206b9386d76f2ce18000dfc2b218375e423ac8e0
> - renamed to virtio-hostmem and removed dependence on host callbacks
> https://github.com/741g/virtio-spec/commit/e3e5539b08cfbaab22bf644fd4e50c00ec428928
> - removed a straggling mention of a host callback
> https://github.com/741g/virtio-spec/commit/61c500d5585552658a7c98ef788a625ffe1e201c
> - Added an example usage of virtio-hostmem
> 
> This first RFC email includes replies to comments from mst@redhat.com:
> 
>   > \item Guest allocates into the PCI region via config virtqueue messages.
> 
> Michael: OK so who allocates memory out of the PCI region?
> Response:
> 
> Allocation will be split between the guest address space and the host
> address space.
> 
> Guest address space: The guest driver determines the offset into the BAR at
> which to allocate the new region. The implementation of the allocator
> itself may live on the host (while the guest triggers such allocations via
> the config virtqueue messages), but ownership of region offsets and sizes
> remains with the guest. This allows easy reuse of existing guest
> ref-counting mechanisms, such as the last close() calling release(), to
> clean up the memory regions in the guest.
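
Just to make the guest side concrete, here is a rough sketch of what the
create/destroy messages on the config virtqueue might look like. The struct
layout and field names below are made up for illustration, not taken from
the spec draft; the point is only that the guest picks the BAR offset and
owns the region's lifetime.

  /* Hypothetical guest -> host messages on the config virtqueue.  The
   * guest-side allocator chooses `offset`; the host decides later (if
   * ever) what memory backs that range. */
  #include <stdint.h>

  struct virtio_hostmem_create_region {
          uint64_t region_id;  /* guest-assigned handle, reused by destroy */
          uint64_t offset;     /* guest-chosen offset into the shared BAR */
          uint64_t size;       /* region size in bytes */
  };

  struct virtio_hostmem_destroy_region {
          uint64_t region_id;  /* e.g. sent when the last close() releases it */
  };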
> 
> Host address space: The backing of such memory regions is considered
> completely optional. The host may service a guest region with memory of
> its choice, depending on how the device is used. This servicing may happen
> at any time after the guest sends the message to create a memory region,
> but before the guest destroys the memory region. In the meantime, some
> examples of how the host may respond to the allocation request:
> 
>    - The host does not back the region at all and a page fault happens.
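
As a concrete example of the "host services the region later" case, a
QEMU-side backend could back the guest-chosen range lazily, roughly along
these lines. This is only a sketch: HostmemDev and its fields are
hypothetical, and the request fields correspond to the made-up create-region
message sketched above.

  /* Sketch: splice freshly mmap'ed host memory into the device BAR at the
   * guest-chosen offset, any time between region creation and destruction. */
  #include "qemu/osdep.h"
  #include "hw/pci/pci.h"
  #include "exec/memory.h"

  typedef struct HostmemDev {
          PCIDevice parent_obj;    /* hypothetical device state */
          MemoryRegion bar_mr;     /* the shared-memory BAR */
          MemoryRegion region_mr;  /* backing for one guest region */
  } HostmemDev;

  static void hostmem_back_region(HostmemDev *dev, uint64_t offset,
                                  uint64_t size)
  {
          void *host_ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          memory_region_init_ram_ptr(&dev->region_mr, OBJECT(dev),
                                     "virtio-hostmem-region", size, host_ptr);
          memory_region_add_subregion(&dev->bar_mr, offset, &dev->region_mr);
  }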

Note that a mapping missing on the host won't necessarily turn into a
page fault in the guest; on qemu, for example, if you have a memory
region like this and the guest accesses an area with no mapping, I
think we hit a kvm error.
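
To illustrate that last point: with KVM, a guest access to a guest-physical
range that no memslot covers does not fault inside the guest at all; it
comes back to the VMM as an exit. A rough sketch of where that surfaces in a
userspace run loop (handle_unassigned_access() is just a placeholder):

  /* Fragment of a vcpu loop after KVM_RUN returns; `run` is the mmap'ed
   * struct kvm_run for the vcpu. */
  #include <linux/kvm.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void handle_unassigned_access(unsigned long long gpa, int is_write)
  {
          fprintf(stderr, "unhandled guest %s at 0x%llx\n",
                  is_write ? "write" : "read", gpa);
  }

  static void after_kvm_run(struct kvm_run *run)
  {
          switch (run->exit_reason) {
          case KVM_EXIT_MMIO:
                  /* No RAM memslot covers run->mmio.phys_addr, so KVM hands
                   * the access to userspace as if it were MMIO. */
                  handle_unassigned_access(run->mmio.phys_addr,
                                           run->mmio.is_write);
                  break;
          case KVM_EXIT_INTERNAL_ERROR:
                  /* Other unbacked accesses (e.g. instruction fetches) can
                   * show up as internal errors instead. */
                  abort();
          }
  }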

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

