
virtio-dev message


Subject: Re: [virtio-dev] Re: [PATCH v6] vsock: add vsock device

On 28 April 2016 at 15:53, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Fri, Apr 22, 2016 at 01:56:24PM +0100, Ian Campbell wrote:
>> On 21 April 2016 at 15:36, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> > The virtio vsock device is a zero-configuration socket communications
>> > device.  It is designed as a guest<->host management channel suitable
>> > for communicating with guest agents.
>> >
>> > vsock is designed with the sockets API in mind and the driver is
>> > typically implemented as an address family (at the same level as
>> > AF_INET).  Applications written for the sockets API can be ported with
>> > minimal changes (similar amount of effort as adding IPv6 support to an
>> > IPv4 application).
>> >
>> > Unlike the existing console device, which is also used for guest<->host
>> > communication, vsock allows multiple clients to connect to a server at
>> > the same time.  The console's single-client limitation forces its users
>> > to arbitrate access through a single client.  Over vsock they can
>> > connect directly and do not have to synchronize with each other.
>> >
>> > Unlike network devices, no configuration is necessary because the device
>> > comes with its address in the configuration space.
>> >
>> > The vsock device was prototyped by Gerd Hoffmann and Asias He.  I picked
>> > the code and design up from them.
>> First: do you happen to have this in a git tree, i.e. somewhere I can
>> easily run make and get a more readable document? (Bonus points if you
>> are publishing the formatted docs somewhere already ;-))
>>
>> Secondly, and the main reason for mailing: I've tripped over an
>> interesting issue recently. I'm using your v5 Linux frontend patches +
>> my own backend.
>> One property of my backend is that when a TX request requires a
>> response (e.g. a RW=>CREDIT_UPDATE, REQUEST=>RESPONSE, *=>RST) it
>> waits synchronously for an RX slot in order to send the reply.
> Assuming we keep synchronous replies during rx processing, the
> requirement is that we always have a tx vring descriptor free.  If not
> then we must wait before processing the rx ring.
> This way we avoid the deadlock where both sides send a packet at the
> same time and the rings are already filled to N-1.
> Basically this means changing the strategy in the Linux driver from
> "assume there is vring space and block if we run out" to "reserve vring
> space before taking on any work that requires synchronous replies".

IOW the frontend RX processing loop would refuse to proceed unless
there is a TX descriptor available and in its hand?

I'm not sure that the deadlock I observed was due to the RX thread
spinning waiting for a TX descriptor, though. AFAICS the frontend was
just independently processing RX and TX requests (in
virtio_transport_recv_pkt() and virtio_transport_send_one_pkt()
respectively) and it happened that both of them needed the AF_VSOCK
lock. It didn't involve the call to
virtio_transport_send_credit_update() in
virtio_transport_stream_do_dequeue(), which I think is what you are
addressing here.

> This does not require any virtio spec change.  It just requires a change
> to the driver implementation.

It would be useful if the spec at least described this frontend
implementation requirement and gave the backend some guidance on what
an acceptable reaction is if it finds a frontend not doing so (i.e.
justifying resetting the device if it needs to send a reply and no rx
slots become available in a timely manner). It'd be better if the
protocol were immune to this sort of thing by design, though.

