virtio-comment message



Subject: Re: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability


On Wed, Apr 12, 2023 at 01:37:59PM +0800, Jason Wang wrote:
> On Wed, Apr 12, 2023 at 1:25 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Apr 12, 2023 at 12:53:52PM +0800, Jason Wang wrote:
> > > On Wed, Apr 12, 2023 at 12:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Wed, Apr 12, 2023 at 12:07:26PM +0800, Jason Wang wrote:
> > > > > On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Apr 11, 2023 at 07:01:16PM +0000, Parav Pandit wrote:
> > > > > > >
> > > > > > > > From: virtio-dev@lists.oasis-open.org <virtio-dev@lists.oasis-open.org> On
> > > > > > > > Behalf Of Jason Wang
> > > > > > > > Sent: Monday, April 10, 2023 11:29 PM
> > > > > > >
> > > > > > > > > However, it is not backward compatible; if the device places them in the
> > > > > > > > > extended capability, it will not work.
> > > > > > > > >
> > > > > > > >
> > > > > > > > It is kind of intended since it is only used for new PCI-E features:
> > > > > > > >
> > > > > > > New fields in the new extended pci cap area are fine.
> > > > > > > Migrating old fields to be present in the new extended pci cap is not your intention, right?
> > > > > > >
> > > > > > > > "
> > > > > > > > +The location of the virtio structures that depend on the PCI Express
> > > > > > > > +capability is specified using a vendor-specific extended capability
> > > > > > > > +on the extended capabilities list in the PCI Express extended
> > > > > > > > +configuration space of the device.
> > > > > > > > "
> > > > > > > >
> > > > > > > > > To make it backward compatible, a device needs to expose the existing
> > > > > > > > > structure in the legacy area, and the extended structure for the same
> > > > > > > > > capability in the extended pci capability region.
> > > > > > > > >
> > > > > > > > > In other words, it will have to be in both places.
> > > > > > > >
> > > > > > > > Then we will run out of config space again?
> > > > > > > No.
> > > > > > > Only currently defined caps need to be placed in two places.
> > > > > > > New fields don't need to be placed in the PCI cap, because no driver is looking there.
> > > > > > >
> > > > > > > We probably already discussed this in a previous email by now.
> > > > > > >
> > > > > > > > Otherwise we need to deal with the
> > > > > > > > case where existing structures were only placed in the extended capability. Michael
> > > > > > > > suggests adding a new feature, but the driver may not negotiate that feature,
> > > > > > > > which requires more thought.
> > > > > > > >
> > > > > > > Not sure I understand the feature bit.
> > > > > >
> > > > > > This is because we have a concept of dependency between
> > > > > > features but not a concept of dependency of a feature on
> > > > > > a capability.
> > > > > >
> > > > > > > The existence of PCI transport fields is usually not dependent on the upper-layer protocol.
> > > > > > >
> > > > > > > > > We may need it even sooner than this because the AQ patch is expanding
> > > > > > > > > the structure located in the legacy area.
> > > > > > > >
> > > > > > > > Just to make sure I understand this: assuming we have adminq, is there any reason a
> > > > > > > > dedicated pcie ext cap is required?
> > > > > > > >
> > > > > > > No, that was short-sighted on my part. I responded right after the above text that AQ doesn't need the cap extension.
> > > > > >
> > > > > >
> > > > > >
> > > > > > You know, thinking about this, I begin to feel that we should
> > > > > > require that if at least one capability exists in the extended config then
> > > > > > all caps present in the regular config are *also*
> > > > > > mirrored in the extended config. IOW extended >= regular.
> > > > > > The reason is that extended config can be emulated more efficiently
> > > > > > (2x fewer exits).
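
Roughly, that rule would let a driver walk only the extended list when
anything is there, using Linux's existing helpers; virtio_parse_ext_cap()
below is hypothetical:

#include <linux/pci.h>

/* If any vendor-specific extended capability is present, scan only
 * extended config space (one access per read under ECAM) and skip
 * the legacy capability chain entirely. */
static int virtio_pci_find_caps(struct pci_dev *dev)
{
	u16 pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_VNDR);

	if (!pos)
		return -ENODEV;	/* nothing extended: legacy walk instead */

	while (pos) {
		virtio_parse_ext_cap(dev, pos);	/* hypothetical parser */
		pos = pci_find_next_ext_capability(dev, pos,
						   PCI_EXT_CAP_ID_VNDR);
	}
	return 0;
}
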
> > > > >
> > > > > Any reason for it to get fewer exits?
> > > >
> > > > For a variety of reasons having to do with buggy hardware, e.g. Linux
> > > > likes to use cf8/cfc for legacy ranges. Two accesses are required for each
> > > > read/write; extended space needs just one.
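
Concretely: a legacy config dword read is an address write to port 0xCF8
followed by a data read from 0xCFC, i.e. two trapped accesses, while an
ECAM read of extended config space is a single MMIO access. A sketch,
assuming the x86 port I/O helpers from <sys/io.h>:

#include <stdint.h>
#include <sys/io.h>	/* outl()/inl(); needs ioperm()/iopl() */

/* Legacy mechanism: two port accesses, i.e. two exits when emulated. */
static uint32_t pci_cf8_read(uint8_t bus, uint8_t dev, uint8_t fn,
			     uint8_t off)
{
	uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
			((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
			(off & 0xfc);

	outl(addr, 0xCF8);	/* exit #1: select the address */
	return inl(0xCFC);	/* exit #2: read the data */
}

/* ECAM: one MMIO read, one exit. ecam_base is the mapped MMCONFIG
 * window (how it gets mapped is outside this sketch). */
static uint32_t pci_ecam_read(volatile uint8_t *ecam_base, uint8_t bus,
			      uint8_t dev, uint8_t fn, uint16_t off)
{
	return *(volatile uint32_t *)(ecam_base +
		(((uint32_t)bus << 20) | ((uint32_t)dev << 15) |
		 ((uint32_t)fn << 12) | off));
}
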
> > > >
> > >
> > > Ok.
> > >
> > > >
> > > > > At least it has not been done in
> > > > > QEMU's current emulation. (And do we really care about the performance
> > > > > of config space access?)
> > > > >
> > > > > Thanks
> > > >
> > > > For boot speed, yes. Not for minor 5% gains, but for 2x, sure.
> > >
> > > If we care about boot speed, we should avoid using the PCI layer in the
> > > guest completely.
> > >
> > > Thanks
> >
> > Whoa. And do what? Add a ton of functionality in a PV way to MMIO?
> 
> Probably; we have microVM already, and Hyper-V dropped PCI as of Gen2.
> 
> > NUMA, MSI, power management ... the list goes on and on.
> > If you have pci on the host, it is way easier to pass that
> > through to the guest than to do a completely different thing.
> 
> It's a balance. If you want functionality, PCI is probably a must. But
> if you care only about boot speed, the boot is not slowed
> down by a single device but by the whole PCI layer.
> 
> Thanks

I don't know that the MMIO layer won't slow down too,
if we add a ton of features to it.

> >
> > --
> > MST
> >


