virtio-comment message



Subject: Request for virtio tx packet drop counters


Hi,
 
At the DPDK Userspace conference in Dublin and in the last monthly DPDK-VirtIO meeting we discussed a sorely missing virtio feature:

The issue is that the vhost backend (in our case OVS-DPDK) can only observe and report packet drops in the tx direction. Any packets dropped by the virtio driver inside the guest, or by the DPDK application on top, because of congestion of their virtio tx queue(s) are not visible at all on the host in general, and to the vhost backend in particular.
 
In practice, however, we frequently experience packet drops in the tx direction inside the guest, typically due to congestion of the vSwitch. But in order to observe and quantify those drops (over time), one needs access to the VM and often detailed knowledge of the application and/or driver.
 
We would like to ask that the virtio tx queue be enhanced with a packet drop counter that the virtio driver or the (DPDK) application on top would increment when deciding to drop excess packets. The vhost backend could read these counters in order to report rx packet drops from the backend perspective (analogous to rx drops reported by physical NICs). The DPDK vhost-user library already includes rx drop counters, but they are always zero.
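
To make the idea concrete, here is a minimal sketch of what we have in mind, assuming a per-queue counter shared between the guest and the backend. The structure and function names are purely illustrative, not part of the virtio spec or of any existing API:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical per-tx-queue drop counter, shared between the guest
 * driver/application and the vhost backend. */
struct virtq_tx_stats {
    atomic_uint_least64_t dropped;  /* packets the guest gave up on */
};

/* Guest side: called once the driver or application decides packets
 * are really being discarded (not merely deferred for a retry). */
static inline void
virtq_tx_note_drops(struct virtq_tx_stats *st, uint64_t n)
{
    atomic_fetch_add_explicit(&st->dropped, n, memory_order_relaxed);
}

/* Backend side: read and report as rx drops, analogous to the rx
 * drop counters of a physical NIC. */
static inline uint64_t
virtq_tx_read_drops(struct virtq_tx_stats *st)
{
    return atomic_load_explicit(&st->dropped, memory_order_relaxed);
}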
 
Note: It would be preferable if the drop counter were not unconditionally incremented by the virtio driver whenever it cannot transmit a complete packet batch due to the tx queue running full. The application (or upper stack layers) may decide to retry the transmission of the rejected packets, so that the drop count could contain many false positives. Some real-world DPDK VNFs have in the past implemented such internal buffering to work around the too-short virtio tx queues in QEMU; a sketch of this conditional accounting follows below.
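
For illustration, this is roughly how a DPDK application could do the conditional accounting. MAX_TX_RETRIES, txq_stats and virtq_tx_note_drops() are the hypothetical names from the sketch above, not existing DPDK API:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_TX_RETRIES 4  /* arbitrary retry budget, for illustration */

static void
tx_with_drop_accounting(uint16_t port, uint16_t queue,
                        struct rte_mbuf **pkts, uint16_t n,
                        struct virtq_tx_stats *txq_stats)
{
    uint16_t sent = rte_eth_tx_burst(port, queue, pkts, n);

    /* Retry before declaring a drop, so packets that are merely
     * deferred do not show up as false positives in the counter. */
    for (int retry = 0; sent < n && retry < MAX_TX_RETRIES; retry++)
        sent += rte_eth_tx_burst(port, queue, pkts + sent, n - sent);

    if (sent < n) {
        /* Only now is the excess really dropped. */
        virtq_tx_note_drops(txq_stats, n - sent);
        rte_pktmbuf_free_bulk(pkts + sent, n - sent);
    }
}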
 
Unconditional “drop” counting by the driver would have the advantage that the onset of congestion would become visible to the backend even without application changes, but we still would not be able to match real end-to-end packet drops against per-port drop counters.
 
Thanks, Jan
 

