Subject: Re: [virtio-dev] Re: [PATCH] virtio-net: keep the packet layout intact

On 05/15/2017 10:46 PM, Michael S. Tsirkin wrote:
On Mon, May 15, 2017 at 05:29:15PM +0800, Wei Wang wrote:
Ping for comments, thanks.

On 05/11/2017 12:57 PM, Wei Wang wrote:
The current implementation may change the packet layout when
the vnet_hdr needs an endianness swap. The layout change causes
one more entry to be added to the iov[] passed from the guest, which
is a barrier to making the TX queue size 1024 due to the possible
off-by-one issue.
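For illustration, here is a minimal sketch of why the endianness swap adds an iov entry (the struct layout and helper names are hypothetical, not QEMU's actual code): the swapped header must live in a private buffer, so one guest entry covering header plus data becomes two host-side entries.

```c
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>

/* Simplified stand-in for the virtio-net header (hypothetical layout). */
struct vnet_hdr {
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
};

static uint16_t bswap16(uint16_t v) { return (uint16_t)((v << 8) | (v >> 8)); }

/* The guest hands us one iov entry covering header + data.  Swapping the
 * header fields means writing into a private copy, so the host-side list
 * grows by one entry: [swapped header][remaining data]. */
static int split_for_swap(const struct iovec *guest, struct iovec *host,
                          struct vnet_hdr *scratch)
{
    memcpy(scratch, guest->iov_base, sizeof(*scratch));
    scratch->hdr_len     = bswap16(scratch->hdr_len);
    scratch->gso_size    = bswap16(scratch->gso_size);
    scratch->csum_start  = bswap16(scratch->csum_start);
    scratch->csum_offset = bswap16(scratch->csum_offset);

    host[0].iov_base = scratch;
    host[0].iov_len  = sizeof(*scratch);
    host[1].iov_base = (char *)guest->iov_base + sizeof(*scratch);
    host[1].iov_len  = guest->iov_len - sizeof(*scratch);
    return 2;   /* one guest entry became two host entries */
}
```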
It blocks making it 512 but I don't think we can make it 1024
as entries might cross page boundaries and get split.

I agree with the performance loss issue you mentioned
below, thanks. To understand this better, could you please
shed some light on why entries can't cross page boundaries?

The virtio spec doesn't seem to say that vring_desc entries
must not span two physically contiguous pages, and I didn't
find such a restriction in the implementation either.
On the device side, the writev manual does require the iov[]
array to fit in one page, and it limits iovcnt to 1024.
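As a side note, the 1024-entry iovcnt limit mentioned above is IOV_MAX on Linux. Here is a hedged sketch of how a device implementation could stay under it by issuing multiple writev() calls; it is simplified in that it assumes each writev() consumes whole entries, which a real implementation must not rely on (partial writes can stop mid-entry).

```c
#include <limits.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef IOV_MAX
#define IOV_MAX 1024   /* Linux limit per writev(2) */
#endif

/* Hypothetical helper: write a scatter/gather list that may exceed
 * IOV_MAX by issuing multiple writev() calls.  Simplified sketch:
 * assumes every writev() consumes its n entries in full. */
static ssize_t writev_all(int fd, struct iovec *iov, int iovcnt)
{
    ssize_t total = 0;
    while (iovcnt > 0) {
        int n = iovcnt > IOV_MAX ? IOV_MAX : iovcnt;
        ssize_t r = writev(fd, iov, n);
        if (r < 0)
            return -1;
        total += r;
        iov += n;       /* simplification: no partial-write handling */
        iovcnt -= n;
    }
    return total;
}
```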

This patch changes the implementation to keep the packet layout
intact. In this case, the number of iov entries passed to writev will
equal the number obtained from the guest.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
As this comes at the cost of a full data copy, I don't think
it makes sense. We could limit the copy to the case where the sg
list does not fit in 1024 entries.

Yes, this would cause a performance loss with the layout where the
data is adjacent to the vnet_hdr. Since you prefer the other solution
below, I'll skip my ideas for avoiding that copy.
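The "only copy when the list would not fit" idea from the previous message could look roughly like this; must_copy is a hypothetical helper, and the +1 accounts for the extra entry the swapped header would otherwise add.

```c
#include <limits.h>
#include <stdbool.h>

#ifndef IOV_MAX
#define IOV_MAX 1024   /* Linux limit per writev(2) */
#endif

/* Hypothetical policy from the discussion: only pay for the full copy
 * when keeping the extra header entry would push iovcnt past the
 * writev() limit; otherwise take the zero-copy path. */
static bool must_copy(int guest_iovcnt)
{
    return guest_iovcnt + 1 > IOV_MAX;   /* +1 for the swapped-header entry */
}
```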

But I really think we should just add a max s/g field to virtio
and then we'll be free to increase the ring size.

Yes, that's also a good way to solve it. So, add a new device
property, "max_chain_size" and a feature bit to detect it?
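A sketch of that proposal as stated: the device would advertise a maximum s/g chain length via a config field, gated by a feature bit. Both VIRTIO_NET_F_MAX_CHAIN_SIZE and the config struct below are assumed names; neither exists in the spec at the time of this thread.

```c
#include <stdint.h>

/* Assumed feature bit and config field; purely illustrative of the
 * proposal, not part of the virtio spec at the time of this thread. */
#define VIRTIO_NET_F_MAX_CHAIN_SIZE  (1ULL << 40)

struct virtio_net_config_ext {
    uint16_t max_chain_size;   /* proposed device property */
};

/* Driver side: honor the device's limit only when it advertises the
 * feature; otherwise fall back to the driver's own default. */
static uint16_t effective_chain_limit(uint64_t features,
                                      const struct virtio_net_config_ext *cfg,
                                      uint16_t default_limit)
{
    if (features & VIRTIO_NET_F_MAX_CHAIN_SIZE)
        return cfg->max_chain_size;
    return default_limit;
}
```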
