virtio-dev message


Subject: Re: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets


On Wed, Sep 07, 2022 at 07:18:06PM +0000, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, September 7, 2022 3:12 PM
> 
> > > Because of the shallow queue, only 16 entries deep.
> > 
> > but why is the queue just 16 entries?
> I explained the calculation behind the 16 entries in [1].
> 
> [1] https://lore.kernel.org/netdev/PH0PR12MB54812EC7F4711C1EA4CAA119DC419@PH0PR12MB5481.namprd12.prod.outlook.com/
> 
> > does the device not support indirect?
> >
> Yes, indirect feature bit is disabled on the device.

OK that explains it.
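(For context: whether indirect descriptors are available is negotiated via feature bit 28, VIRTIO_RING_F_INDIRECT_DESC in the virtio spec. The sketch below is an illustrative Python model of that negotiation, not the kernel's actual code; the function name and parameters are made up for illustration.)

```python
# Minimal sketch of virtio feature negotiation for indirect descriptors.
# VIRTIO_RING_F_INDIRECT_DESC is bit 28 per the virtio specification;
# this helper and its parameters are illustrative, not kernel API.
VIRTIO_RING_F_INDIRECT_DESC = 28

def indirect_negotiated(device_features: int, driver_features: int) -> bool:
    # Indirect descriptors are usable only when the device offers the
    # bit and the driver accepts it.
    mask = 1 << VIRTIO_RING_F_INDIRECT_DESC
    return bool(device_features & driver_features & mask)
```

The device described in this thread leaves the bit clear, so the check fails no matter what the driver offers.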

> > because with indirect you get 256 entries, with 16 s/g each.
> > 
> Sure. I explained below that indirect comes with a 7x memory cost, which is not desired.
> (This ignores the indirect table allocation cost and the extra latency.)

Oh sure, it's a waste. I wonder what effect the patch has
on bandwidth with indirect enabled, though.


> Hence we don't want to enable indirect in this scenario.
> This optimization also works with indirect, using a smaller indirect table.
> 
> > 
> > > With driver turnaround time to repost buffers, the device is idle
> > > without any RQ buffers.
> > > With this improvement, the device has 85 buffers instead of 16 to
> > > receive packets.
> > >
> > > Enabling indirect in the device can help, at the cost of 7x higher
> > > memory per VQ in the guest VM.
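The 16-vs-85 figures quoted above follow from simple descriptor arithmetic. A hedged sketch, assuming a 256-entry receive queue, 4 KiB pages, and a 9000-byte MTU (the exact values in the linked calculation may differ):

```python
# Back-of-the-envelope reconstruction of the buffer counts in the thread.
# QUEUE_ENTRIES, PAGE, and MTU are assumptions, not values quoted verbatim.
PAGE = 4096
QUEUE_ENTRIES = 256            # descriptor ring size (assumed)
MAX_PACKET = 64 * 1024         # worst-case big-packet buffer: 64 KiB
MTU = 9000                     # jumbo-frame MTU (assumed)

# Without mergeable rx buffers or indirect descriptors, each posted
# buffer is a chain of page-sized descriptors.
pages_per_buf_before = MAX_PACKET // PAGE          # 16 pages per packet
pages_per_buf_after = -(-MTU // PAGE)              # ceil(9000/4096) = 3

bufs_before = QUEUE_ENTRIES // pages_per_buf_before  # 256 / 16 = 16
bufs_after = QUEUE_ENTRIES // pages_per_buf_after    # 256 / 3  = 85

# Posting full 64 KiB chains instead of MTU-sized ones costs roughly
# 64K / 9000 ~= 7x more memory per buffer, consistent with the "7x" above.
mem_ratio = MAX_PACKET / MTU
```

Under these assumptions the patch raises the number of postable receive buffers from 16 to 85 on the same 256-entry ring.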
