virtio-dev message



Subject: Re: [PATCH v1 2/2] virtio-mmio: add features for virtio-mmio specification version 3


On Thu, Jan 09, 2020 at 02:15:51PM +0800, Liu, Jing2 wrote:
> 
> On 1/5/2020 7:04 PM, Michael S. Tsirkin wrote:
> > On Wed, Dec 25, 2019 at 10:50:23AM +0800, Zha Bin wrote:
> > > From: Liu Jiang<gerry@linux.alibaba.com>
> > > 
> > > Userspace VMMs (e.g. Qemu microvm, Firecracker) use virtio over MMIO
> > > devices as a lightweight machine model for the modern cloud. The
> > > standard virtio over MMIO transport layer only supports a single
> > > legacy interrupt, which is much heavier than the virtio over PCI
> > > transport layer using MSI. The legacy interrupt has a long work path
> > > and causes extra VM exits in the following cases, which considerably
> > > slows down performance:
> > > 
> > > 1) read interrupt status register
> > > 2) update interrupt status register
> > > 3) write IOAPIC EOI register
> > > 
> > > We proposed updating virtio over MMIO to version 3 [1] to add the
> > > following new features and enhance performance:
> > > 
> > > 1) Support Message Signaled Interrupts (MSI), which improves
> > >     interrupt performance for virtio multi-queue devices
> > > 2) Support per-queue doorbell, so the guest kernel may directly write
> > >     to the doorbells provided by virtio devices.
> > Do we need to come up with new "doorbell" terminology?
> > virtio spec calls these available event notifications,
> > let's stick to this.
> 
> Yes, let's keep the virtio terminology, which simply calls these notifications.
> 
> > > The following is the network tcp_rr performance testing report, tested
> > > with a virtio-pci device, a vanilla virtio-mmio device and the patched
> > > virtio-mmio device (each test was run 3 times per case):
> > > 
> > > 	netperf -t TCP_RR -H 192.168.1.36 -l 30 -- -r 32,1024
> > > 
> > > 		Virtio-PCI    Virtio-MMIO   Virtio-MMIO(MSI)
> > > 	trans/s	    9536	6939		9500
> > > 	trans/s	    9734	7029		9749
> > > 	trans/s	    9894	7095		9318
> > > 
> > > [1]https://lkml.org/lkml/2019/12/20/113
> > > 
> > > Signed-off-by: Liu Jiang<gerry@linux.alibaba.com>
> > > Signed-off-by: Zha Bin<zhabin@linux.alibaba.com>
> > > Signed-off-by: Chao Peng<chao.p.peng@linux.intel.com>
> > > Signed-off-by: Jing Liu<jing2.liu@linux.intel.com>
> > Do we need a new version though? What is wrong with
> > a feature bit? This way we can create compatible devices
> > and drivers.
> 
> We considered using one feature bit in the 24~37 range to indicate MSI
> capability, but that feature bit would only be meaningful for the MMIO
> transport layer rather than representing common feature negotiation of the
> virtio device. So we're not sure if this is a good choice.

We are not short on bits; just don't use bits below 32,
since those are for legacy devices.
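
For illustration, a minimal sketch of what gating MSI on a feature bit could
look like on the driver side; VIRTIO_F_MMIO_MSI and the bit number 40 are
hypothetical placeholders, not taken from the posted series:

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * Hypothetical transport feature bit (>= 32, so the legacy range is left
 * alone). Name and number are placeholders for illustration only.
 */
#define VIRTIO_F_MMIO_MSI	40

static bool vm_msi_supported(struct virtio_device *vdev)
{
	/*
	 * Negotiated like any other feature bit: a device or driver that
	 * does not offer/accept it simply keeps using the legacy
	 * interrupt status path, so nothing breaks.
	 */
	return virtio_has_feature(vdev, VIRTIO_F_MMIO_MSI);
}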


> > > [...]
> > > +static void mmio_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> > > +{
> > > +	struct device *dev = desc->dev;
> > > +	struct virtio_device *vdev = dev_to_virtio(dev);
> > > +	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
> > > +	void __iomem *pos = vm_dev->base;
> > > +	uint16_t cmd = VIRTIO_MMIO_MSI_CMD(VIRTIO_MMIO_MSI_CMD_UPDATE,
> > > +			desc->platform.msi_index);
> > > +
> > > +	writel(msg->address_lo, pos + VIRTIO_MMIO_MSI_ADDRESS_LOW);
> > > +	writel(msg->address_hi, pos + VIRTIO_MMIO_MSI_ADDRESS_HIGH);
> > > +	writel(msg->data, pos + VIRTIO_MMIO_MSI_DATA);
> > > +	writew(cmd, pos + VIRTIO_MMIO_MSI_COMMAND);
> > > +}
> > All this can happen when IRQ affinity changes while the device
> > is sending interrupts. An interrupt sent between the writel
> > operations will then be directed incorrectly.
> 
> When investigating kernel MSI behavior, I found that in most cases nothing
> is done during an IRQ affinity change to prevent an interrupt from arriving.
> 
> For example, migrate_one_irq() masks the irq before calling
> irq_do_set_affinity(). But in other cases, such as the user setting an irq
> affinity via /proc/, the kernel only holds desc->lock instead of
> disabling/masking the irq. In such cases, how can it prevent an interrupt
> from being sent between the writel ops?

That could be a bug too. E.g. the PCI spec explicitly says it's illegal to
change non-masked interrupts, exactly for this reason.
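
For reference, a rough sketch of the mask/update/unmask ordering that avoids
the race; VIRTIO_MMIO_MSI_CMD_MASK and VIRTIO_MMIO_MSI_CMD_UNMASK are
hypothetical commands made up here for illustration (only
VIRTIO_MMIO_MSI_CMD_UPDATE appears in the posted patch):

static void mmio_write_msi_msg_masked(struct msi_desc *desc,
				      struct msi_msg *msg)
{
	struct virtio_mmio_device *vm_dev =
		to_virtio_mmio_device(dev_to_virtio(desc->dev));
	void __iomem *pos = vm_dev->base;
	u16 idx = desc->platform.msi_index;

	/* Hypothetical: ask the device to hold this vector first. */
	writew(VIRTIO_MMIO_MSI_CMD(VIRTIO_MMIO_MSI_CMD_MASK, idx),
	       pos + VIRTIO_MMIO_MSI_COMMAND);

	/* Rewrite the message while no interrupt can be sent on it. */
	writel(msg->address_lo, pos + VIRTIO_MMIO_MSI_ADDRESS_LOW);
	writel(msg->address_hi, pos + VIRTIO_MMIO_MSI_ADDRESS_HIGH);
	writel(msg->data, pos + VIRTIO_MMIO_MSI_DATA);
	writew(VIRTIO_MMIO_MSI_CMD(VIRTIO_MMIO_MSI_CMD_UPDATE, idx),
	       pos + VIRTIO_MMIO_MSI_COMMAND);

	/* Hypothetical: release the vector again. */
	writew(VIRTIO_MMIO_MSI_CMD(VIRTIO_MMIO_MSI_CMD_UNMASK, idx),
	       pos + VIRTIO_MMIO_MSI_COMMAND);
}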



> 
> > > [...]
> > > +
> > > +/* RO: MSI feature enabled mask */
> > > +#define VIRTIO_MMIO_MSI_ENABLE_MASK	0x8000
> > I don't understand the comment. Is this a way for
> > a version 3 device to say "I want/do not want MSI"?
> > Why not just use a feature bit? We are not short on these.
> 
> This is just used to report the current MSI enabled/disabled status after
> all MSI configuration has been set up.
> 
> It is not for advertising MSI capability.
> 
> In other words, because of the concern about using a feature bit, we chose
> to bump the virtio-mmio version, so that devices at v3 have MSI capability
> and per-queue notifications.
> 
> 
> Thanks,
> 
> Jing

MSI looks like an optimization. I don't see how that
justifies incrementing a major version and breaking
compat with all existing guests.
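
On the VIRTIO_MMIO_MSI_ENABLE_MASK point above, here is a small sketch of how
a driver could check the status you describe; VIRTIO_MMIO_MSI_STATE and its
offset are hypothetical, since the register holding this bit is not shown in
the quoted hunk:

/* Hypothetical status register offset, for illustration only. */
#define VIRTIO_MMIO_MSI_STATE		0x0c8

static bool vm_msi_enabled(struct virtio_mmio_device *vm_dev)
{
	/*
	 * RO bit reported by the device once all MSI configuration has
	 * been set up, as described above.
	 */
	return readl(vm_dev->base + VIRTIO_MMIO_MSI_STATE) &
	       VIRTIO_MMIO_MSI_ENABLE_MASK;
}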

-- 
MST


