Subject: Re: [virtio-dev] Memory sharing device


On Tue, Feb 12, 2019 at 11:01:21AM -0800, Frank Yang wrote:
> 
> 
> 
> On Tue, Feb 12, 2019 at 10:22 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> 
>     On Tue, Feb 12, 2019 at 07:56:58AM -0800, Frank Yang wrote:
>     > Stepping back to standardization and portability concerns, it is also not
>     > necessarily desirable to use general pipes to do what we want, because even
>     > though that device exists and is part of the spec already, that results in
>     > _de-facto_ non-portability.
> 
>     That's not different from e.g. TCP.
> 
>     > If we had some kind of spec to enumerate such
>     > 'user-defined' devices, at least we can have _de-jure_ non-portability;
>     > an enumerated device doesn't work as advertised.
> 
>     I am not sure distinguishing between different types of non-portability
>     will be in scope for virtio. Actually having devices that are portable
>     would be.
> 
> 
> The device itself is portable; the user-defined drivers that run on it will
> work or not depending on negotiating device IDs.
> 
>     ... 
> 
>     > Note that virtio-serial/virtio-vsock are not considered because they do not
>     > standardize the set of devices that operate on top of them, but in practice,
>     > are often used for fully general devices.  Spec-wise, this is not a great
>     > situation because we would still have potentially non-portable device
>     > implementations where there is no standard mechanism to determine whether or
>     > not things are portable.
> 
>     Well it's easy to add an enumeration on top of sockets, and several well
>     known solutions exist. There's an advantage to just reusing these.  
> 
> 
> Sure, but there are many unique features/desirable properties of having the
> virtio meta device, because (as explained in the spec) there are limitations
> to network/socket-based communication.
>  
> 
>     > virtio-user provides a device enumeration mechanism
>     > to better control this.
> 
>     We'll have to see what it all looks like. For the virtio PCI transport it's
>     important that you can reason about the device at a basic level based on
>     its PCI ID, and that is quite fundamental.
> 
> 
> The spec contains more details; basically, the device itself is always
> portable, and there is a configuration protocol to negotiate whether a
> particular use of the device is available. This is similar to PCI, but with
> more defined ways to operate the device in terms of callbacks in shared
> libraries on the host.
>  
> 
>     Maybe what you are looking for is a new virtio transport then?
> 
> 
>  
> Perhaps something like a virtio host memory transport? But at the same time,
> it needs to interact with shared memory, which is best set up as a PCI device.
> Can we mix transport types? In any case, the analog of PCI IDs here (the
> vendor/device/version numbers) is meaningful, with the contract being that the
> user of the device needs to match on vendor/device ID and negotiate on the
> version number.

Virtio fundamentally uses feature bits, not versions.
It's been pretty successful in maintaining compatibility
across a wide range of hypervisor/guest revisions.
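
To make that concrete, feature negotiation from the driver side looks roughly
like the sketch below. The dev_* accessors just simulate the device's feature
and status registers (on a real transport they would be config-space accesses);
the status bit values and the ordering of the steps are the ones from the spec.

    /* Minimal sketch of virtio feature negotiation, independent of transport.
     * The dev_* functions below simulate feature/status registers. */
    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_STATUS_ACKNOWLEDGE  1u
    #define VIRTIO_STATUS_DRIVER       2u
    #define VIRTIO_STATUS_DRIVER_OK    4u
    #define VIRTIO_STATUS_FEATURES_OK  8u
    #define VIRTIO_STATUS_FAILED       128u

    static uint64_t device_features = 0x7;  /* device offers bits 0..2 */
    static uint64_t driver_features;
    static uint8_t  device_status;

    static uint64_t dev_read_features(void)    { return device_features; }
    static void dev_write_features(uint64_t f) { driver_features = f; }
    static uint8_t dev_get_status(void)        { return device_status; }
    static void dev_set_status(uint8_t s)      { device_status = s; }

    /* Accept only the features both the device and the driver understand. */
    static uint64_t negotiate(uint64_t driver_supported)
    {
        dev_set_status(VIRTIO_STATUS_ACKNOWLEDGE | VIRTIO_STATUS_DRIVER);

        uint64_t agreed = dev_read_features() & driver_supported;
        dev_write_features(agreed);

        dev_set_status(dev_get_status() | VIRTIO_STATUS_FEATURES_OK);
        if (!(dev_get_status() & VIRTIO_STATUS_FEATURES_OK)) {
            dev_set_status(VIRTIO_STATUS_FAILED);  /* device rejected the subset */
            return 0;
        }
        dev_set_status(dev_get_status() | VIRTIO_STATUS_DRIVER_OK);
        return agreed;
    }

    int main(void)
    {
        /* A driver that only knows bits 0 and 1 ends up with 0x3. */
        printf("negotiated features: 0x%llx\n",
               (unsigned long long)negotiate(0x3));
        return 0;
    }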


> Wha are the advantages of defining a new virtio transport type?
> it would be something that has the IDs, and be able to handle resolving offsets
> to
> physical addresses to host memory addresses,
> in addition to dispatching to callbacks on the host.
> But it would be effectively equivalent to having a new virtio device type with
> device ID enumeration, right?

Under the virtio PCI transport, device IDs are all defined in the virtio spec.
If you want your own ID scheme you want an alternative transport.
But now what you describe looks kind of like vhost-pci to me.
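
For reference, "defined in the spec" means a fixed mapping under the PCI
transport: the vendor ID is always 0x1AF4, and a modern (non-transitional)
device uses PCI device ID 0x1040 plus its virtio device ID. A quick sketch:

    /* How the virtio PCI transport derives PCI IDs from virtio device IDs. */
    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_PCI_VENDOR_ID    0x1AF4u
    #define VIRTIO_PCI_MODERN_BASE  0x1040u

    static uint16_t virtio_pci_device_id(uint16_t virtio_device_id)
    {
        return (uint16_t)(VIRTIO_PCI_MODERN_BASE + virtio_device_id);
    }

    int main(void)
    {
        /* virtio-net is virtio device ID 1, virtio-block is 2. */
        printf("virtio-net:   %04X:%04X\n",
               VIRTIO_PCI_VENDOR_ID, (unsigned)virtio_pci_device_id(1));
        printf("virtio-block: %04X:%04X\n",
               VIRTIO_PCI_VENDOR_ID, (unsigned)virtio_pci_device_id(2));
        return 0;
    }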


> 
> 
>     > In addition, for performance considerations in applications such as
>     > graphics and media, virtio-serial/virtio-vsock have the overhead of sending
>     > actual traffic through the virtqueue, while an approach based on shared
>     > memory can result in fewer copies and virtqueue messages.  virtio-serial is
>     > also limited in being specialized for console forwarding and having a cap
>     > on the number of clients.  virtio-vsock is also not optimal in its choice
>     > of a sockets API for transport; shared memory cannot be used, arbitrary
>     > strings can be passed without any designation of which device/driver is
>     > de-facto being run, and the guest must have additional machinery to handle
>     > socket APIs.  In addition, on the host, sockets are only dependable on
>     > Linux, with less predictable behavior from Windows/macOS regarding Unix
>     > sockets.  Waiting for socket traffic on the host also requires a poll()
>     > loop, which is suboptimal for latency.  With virtio-user, only the bare set
>     > of standard driver calls (open/close/ioctl/mmap/read) is needed, and RAM is
>     > a more universal transport abstraction.  We also explicitly spec out
>     > callbacks on the host that are triggered by virtqueue messages, which
>     > results in lower latency and makes it easy to dispatch to a particular
>     > device implementation without polling.
> 
>     open/close/mmap/read seem to make sense. ioctl gives one pause.
> 
> 
> ioctl would be to send ping messages, but I'm not fixated on that choice.
> write() is also a possibility for sending ping messages; I preferred ioctl()
> because it should be clear that it's a control message, not a data message.

Yes, if the supported ioctls are white-listed and not blindly passed through
(e.g. to send a ping message), then it does not matter.
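
Something along these lines on the device side, where only a small fixed set
of control commands is accepted and anything else is rejected (the command
number, the ping payload and handle_ping() below are made up, just to show
the shape):

    /* Sketch of white-listed control messages: nothing is passed through
     * blindly, only known commands are dispatched. */
    #include <stdint.h>
    #include <stdio.h>
    #include <errno.h>

    enum user_dev_cmd {
        USER_DEV_PING = 1,       /* control message: notify the device impl */
    };

    struct user_dev_ping {
        uint64_t offset;         /* offset into the shared memory region */
        uint64_t size;
    };

    static int handle_ping(const struct user_dev_ping *p)
    {
        printf("ping: offset=%llu size=%llu\n",
               (unsigned long long)p->offset, (unsigned long long)p->size);
        return 0;
    }

    static int user_dev_control(uint32_t cmd, const void *arg)
    {
        switch (cmd) {
        case USER_DEV_PING:
            return handle_ping(arg);
        default:
            return -ENOTTY;      /* not on the white-list: reject */
        }
    }

    int main(void)
    {
        struct user_dev_ping ping = { .offset = 0, .size = 4096 };
        printf("ping rc=%d\n", user_dev_control(USER_DEV_PING, &ping));
        printf("unknown rc=%d\n", user_dev_control(42, NULL));
        return 0;
    }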


> 
>     Given open/close this begins to look a bit like virtio-fs.
>     Have you looked at that?
> 
> 
>  
> That's an interesting possibility, since virtio-fs maps host pointers as well,
> which fits our use cases.  Another alternative is to add the features unique
> to virtio-user to virtio-fs: device enumeration, memory sharing operations,
> and operation in terms of callbacks on the host.  However, virtio-fs doesn't
> seem like a good fit due to being specialized to filesystem operations.

Well everything is a file :)

> 
> 
>     --
>     MST
> 

