
Subject: Re: [virtio-dev] Memory sharing device


On Tue, Feb 12, 2019 at 07:56:58AM -0800, Frank Yang wrote:
> Stepping back to standardization and portability concerns, it is also not
> necessarily desirable to use general pipes to do what we want, because even
> though that device exists and is part of the spec already, that results in
> _de-facto_ non-portability.

That's no different from, e.g., TCP.

> If we had some kind of spec to enumerate such
> 'user-defined' devices, at least we could have _de-jure_ non-portability: an
> enumerated device that doesn't work as advertised.

I am not sure distinguishing between different types of non-portability
is in scope for virtio. Having devices that actually are portable
would be.

...

> Note that virtio-serial/virtio-vsock are not considered because they do not
> standardize the set of devices that operate on top of them, but in practice,
> are often used for fully general devices.  Spec-wise, this is not a great
> situation because we would still have potentially non-portable device
> implementations where there is no standard mechanism to determine whether or
> not things are portable.

Well, it's easy to add an enumeration protocol on top of sockets, and several
well-known solutions exist. There's an advantage to just reusing those.
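
For illustration, a minimal sketch of such an enumeration handshake over
AF_VSOCK; the port number and the identification string are made up for
this example, not taken from any spec:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid    = VMADDR_CID_HOST, /* talk to the host side */
        .svm_port   = 9999,            /* hypothetical service port */
    };
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* One-line handshake naming the protocol spoken on this
     * connection; the host dispatches on it and refuses anything
     * it does not recognize. */
    const char hello[] = "my-device-proto/1\n";
    (void)write(fd, hello, sizeof(hello) - 1);

    close(fd);
    return 0;
}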

> virtio-user provides a device enumeration mechanism
> to better control this.

We'll have to see what it all looks like. For the virtio PCI transport it's
important that you can reason about the device at a basic level based on
its PCI ID; that is quite fundamental.

Maybe what you are looking for is a new virtio transport then?
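
For reference, this is the kind of reasoning a PCI ID permits. The
constants below follow the virtio 1.x spec (vendor 0x1af4, modern device
IDs 0x1040 + virtio device number); transitional IDs are left out:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_PCI_VENDOR      0x1af4
#define VIRTIO_PCI_MODERN_BASE 0x1040
#define VIRTIO_PCI_MODERN_MAX  0x107f

/* Map a (vendor, device) PCI ID to a virtio device number, if any. */
static bool virtio_device_number(uint16_t vendor, uint16_t device,
                                 uint16_t *num)
{
    if (vendor != VIRTIO_PCI_VENDOR)
        return false;
    if (device < VIRTIO_PCI_MODERN_BASE || device > VIRTIO_PCI_MODERN_MAX)
        return false; /* transitional IDs (0x1000..0x103f) not handled here */
    *num = (uint16_t)(device - VIRTIO_PCI_MODERN_BASE);
    return true;
}

int main(void)
{
    uint16_t num;
    if (virtio_device_number(0x1af4, 0x1041, &num))
        printf("virtio device number %u (1 = net)\n", num);
    return 0;
}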


> In addition, for performance considerations in applications such as graphics
> and media, virtio-serial/virtio-vsock have the overhead of sending actual
> traffic through the virtqueue, while an approach based on shared memory can
> result in having fewer copies and virtqueue messages.  virtio-serial is also
> limited in being specialized for console forwarding and having a cap on the
> number of clients.  virtio-vsock is also not optimal in its choice of sockets
> API for transport; shared memory cannot be used, arbitrary strings can be
> passed with no de-facto designation of which device/driver is being run, and the
> guest must have additional machinery to handle socket APIs.  In addition, on
> the host, sockets are only dependable on Linux, with less predictable behavior
> from Windows/macOS regarding Unix sockets.  Waiting for socket traffic on the
> host also requires a poll() loop, which is suboptimal for latency.  With
> virtio-user, only the bare set of standard driver calls
> (open/close/ioctl/mmap/read) is needed, and RAM is a more universal transport
> abstraction.  We also explicitly spec out callbacks on host that are triggered
> by virtqueue messages, which results in lower latency and makes it easy to
> dispatch to a particular device implementation without polling.
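
A rough sketch of the guest-side flow described above, assuming a Linux
guest; the device node name, ioctl number, and semantics are all
hypothetical, not from any spec:

#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define VIRTIO_USER_PING 0 /* hypothetical ioctl: "host, look at the buffer" */

int main(void)
{
    int fd = open("/dev/virtio-user0", O_RDWR); /* hypothetical node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    size_t len = 4096;
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Fill the shared region, then signal the host through a
     * virtqueue-backed ioctl instead of copying the payload
     * through the queue itself. */
    buf[0] = 1;
    ioctl(fd, VIRTIO_USER_PING, 0);

    munmap(buf, len);
    close(fd);
    return 0;
}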

open/close/mmap/read seem to make sense. ioctl gives one pause.

Given open/close, this begins to look a bit like virtio-fs.
Have you looked at that?


-- 
MST

