virtio-dev message

Subject: Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property


On Fri, Mar 27, 2020 at 05:20:56AM +0000, Bharat Bhushan wrote:
> Hi Jean,
> 
> > -----Original Message-----
> > From: Auger Eric <eric.auger@redhat.com>
> > Sent: Thursday, March 26, 2020 4:50 PM
> > To: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > Cc: virtio-dev@lists.oasis-open.org; Bharat Bhushan <bbhushan2@marvell.com>
> > Subject: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK
> > property
> > 
> > Hi Jean,
> > 
> > On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
> > > On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
> > >> Hi Jean,
> > >>
> > >> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> > >>> Add a PROBE property to declare the mapping granularity per endpoint.
> > >>> The virtio-iommu device already declares a granule in its config
> > >>> space, but when endpoints are behind different physical IOMMUs, they
> > >>> may have different mapping granules. This new property allows
> > >>> overriding the global page_size_mask for each endpoint.
> > >>>
> > >>> In the future it may be useful to describe more than one
> > >>> page_size_mask for each endpoint, and allow them to negotiate it
> > >>> during ATTACH. For example two masks could allow the driver to
> > >>> choose between 4k and 64k granule, along with their respective block
> > >>> mapping sizes. This could be added by replacing \field{reserved}
> > >>> with an array length, for example.
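
For reference, such a property could look roughly like this on the wire.
The property header below is the one the spec already defines; the second
struct and its field names are illustrative, inferred from the description
above rather than copied from the patch:

struct virtio_iommu_probe_property {
	__le16	type;	/* e.g. a PAGE_SIZE_MASK property type */
	__le16	length;
};

/* Illustrative layout, not the actual patch */
struct virtio_iommu_probe_page_size_mask {
	struct virtio_iommu_probe_property	head;
	__u8					reserved[4];	/* could become an array length */
	__le64					page_size_mask;	/* bit N set => 2^N-byte mappings */
};
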
> > >> Sorry I don't get the use case where several page size bitmaps should
> > >> be exposed.
> > >
> > > For a 4k granule you get block mappings of 2M and 1G. For a 64k
> > > granule you get 512M and 4T block mappings. If you want to communicate
> > > both options to the guest, you need two separate masks, 0x40201000 and
> > > 0x40020010000. Then the guest could choose one of the granules during
> > > attach, if we add a flag to the attach request. I'm not suggesting we
> > > do that now, just trying to make sure it can be extended if anyone
> > > actually wants it. Personally I don't think it's worth adding,
> > > especially given the additional work required in the host.
> > OK I get it now.
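
To spell out the two example masks above: bit N set means mappings of
2^N bytes are supported. A standalone check, for illustration only:

#include <stdio.h>

/* Print every mapping size encoded in a page_size_mask: bit N set
 * means 2^N-byte mappings are supported. */
static void print_mask(unsigned long long mask)
{
	for (int bit = 0; bit < 64; bit++)
		if (mask & (1ULL << bit))
			printf("  bit %d: 2^%d bytes\n", bit, bit);
}

int main(void)
{
	print_mask(0x40201000ULL);    /* bits 12, 21, 30: 4k, 2M, 1G */
	print_mask(0x40020010000ULL); /* bits 16, 29, 42: 64k, 512M, 4T */
	return 0;
}
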
> 
> Want some clarification about the two page-size-mask configurations available:
>  - Global configuration for page-size-mask
>  - per endpoint page-size-mask configuration
> 
> A PAGE_SIZE_MASK probe for an endpoint can return a zero or non-zero value.
> If it returns a non-zero value, then it overrides the global configuration.
> If the PAGE_SIZE_MASK probe for an endpoint returns a zero value, then the global page-size-mask configuration is used.
> 
> Is that correct?

Yes. If a PAGE_SIZE_MASK property is available for an endpoint, the driver
should use that mask. Otherwise it should use the global mask, which is
always provided.
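
In rough C, the selection rule is just this (helper name invented for
illustration, not from the actual driver):

/* ep_mask: value of the endpoint's PAGE_SIZE_MASK property, or 0 if
 * the property was absent. The per-endpoint mask, when present,
 * overrides the global config-space mask. */
static unsigned long long effective_page_size_mask(unsigned long long global_mask,
						   unsigned long long ep_mask)
{
	return ep_mask ? ep_mask : global_mask;
}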

I wonder, should we introduce some form of negotiation now? If the driver
doesn't know about the new probe property, it will use the global mask. At
some point it will send a MAP request that isn't aligned to the endpoint's
page granule, and the device will abort that request. If instead we add a
flag and a page mask field to the attach request, the device would know
that the driver didn't understand the per-endpoint page mask, and could
abort the attach instead.
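
For example (entirely hypothetical layout and flag name, just to
illustrate the idea; none of this is in the spec):

#define VIRTIO_IOMMU_ATTACH_F_PGSIZE	(1 << 0)	/* hypothetical */

struct virtio_iommu_req_attach_pgsize {
	struct virtio_iommu_req_head	head;
	__le32				domain;
	__le32				endpoint;
	__le32				flags;		/* VIRTIO_IOMMU_ATTACH_F_PGSIZE */
	__le64				page_size_mask;	/* granule mask the driver chose */
	struct virtio_iommu_req_tail	tail;
};

/* A device that requires the per-endpoint mask could then fail the
 * attach when the flag is missing, rather than aborting later MAPs. */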

Thanks,
Jean


