virtio-dev message



Subject: Re: [RFC 2/3] virtio-iommu: device probing and operations


On 21/04/17 10:02, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
>> Sent: Wednesday, April 19, 2017 2:46 AM
>>
>> On 18/04/17 11:26, Tian, Kevin wrote:
>>>> From: Jean-Philippe Brucker
>>>> Sent: Saturday, April 8, 2017 3:18 AM
>>>>
>>> [...]
>>>>   II. Feature bits
>>>>   ================
>>>>
>>>> VIRTIO_IOMMU_F_INPUT_RANGE (0)
>>>>  Available range of virtual addresses is described in input_range
>>>
>>> Usually only the maximum supported address bits are important.
>>> Curious whether you see a situation where the low end of the address
>>> space is not usable (since you have both start/end defined later)?
>>
>> A start address would allow us to provide something resembling a GART to the
>> guest: an IOMMU with one address space (ioasid_bits=0) and a small IOVA
>> aperture. I'm not sure how useful that would be in practice.
> 
> Intel VT-d has no such limitation, as far as I can tell. :-)
> 
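
For concreteness, a minimal sketch of how input_range and ioasid_bits
might sit in the config space; the page_sizes field and the exact layout
here are assumptions for illustration, not taken from the draft:

#include <stdint.h>

struct virtio_iommu_range {
	uint64_t start;			/* le64: lowest usable IOVA */
	uint64_t end;			/* le64: highest usable IOVA */
};

struct virtio_iommu_config {
	uint64_t page_sizes;		/* le64: page size mask (assumed field) */
	struct virtio_iommu_range input_range;	/* if VIRTIO_IOMMU_F_INPUT_RANGE */
	uint8_t ioasid_bits;		/* number of address space ID bits */
};
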
>>
>> On a related note, the virtio-iommu itself doesn't provide a
>> per-address-space aperture as it stands. For example, attaching a device
>> to an address space might restrict the available IOVA range for the whole
>> AS if that device cannot write to high memory (above 32-bit). If the guest
>> attempts to map an IOVA outside this window into the device's address
>> space, it should expect the MAP request to fail. And when attaching, if
>> the address space already has mappings outside this window, then ATTACH
>> should fail.
>>
>> This too seems to be something that ought to be communicated by firmware,
>> but bits are missing (I can't find anything equivalent to DT's dma-ranges
>> for PCI root bridges in ACPI tables, for example). In addition VFIO
>> doesn't communicate any DMA mask for devices, and doesn't check them
>> itself. I guess that the host could find out the DMA mask of devices one
>> way or another, but it is tricky to enforce, so I didn't make this a hard
>> requirement. Although I should probably add a few words about it.
> 
> If there is no such communication on bare metal, then the same applies to the pvIOMMU.
> 
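
To illustrate the failure semantics described above, a rough sketch of the
checks an implementation might do when handling MAP and ATTACH; all
structure and helper names are made up for this example:

#include <stdint.h>
#include <errno.h>

/* Hypothetical per-address-space state: the effective aperture (the
 * intersection of input_range and the ranges of all attached devices)
 * and the extent of the existing mappings. */
struct ioaddr_space {
	uint64_t aperture_start, aperture_end;
	uint64_t lowest_mapped, highest_mapped;
	int has_mappings;
};

/* MAP: reject an IOVA window that falls outside the aperture. */
static int check_map(const struct ioaddr_space *as, uint64_t iova, uint64_t size)
{
	if (!size || iova + size - 1 < iova)
		return -EINVAL;		/* empty or wrapping range */
	if (iova < as->aperture_start || iova + size - 1 > as->aperture_end)
		return -ERANGE;		/* the MAP request should fail */
	return 0;
}

/* ATTACH: reject a device whose DMA window cannot cover the mappings
 * already present in the address space (e.g. a 32-bit-only device). */
static int check_attach(const struct ioaddr_space *as,
			uint64_t dev_start, uint64_t dev_end)
{
	if (as->has_mappings &&
	    (as->lowest_mapped < dev_start || as->highest_mapped > dev_end))
		return -EINVAL;		/* the ATTACH request should fail */
	return 0;
}
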
>>
>>> [...]
>>>>   1. Attach device
>>>>   ----------------
>>>>
>>>> struct virtio_iommu_req_attach {
>>>> 	le32	address_space;
>>>> 	le32	device;
>>>> 	le32	flags/reserved;
>>>> };
>>>>
>>>> Attach a device to an address space. 'address_space' is an identifier
>>>> unique to the guest. If the address space doesn't exist in the IOMMU
>>>
>>> Based on your description, this address space ID is per operation, right?
>>> MAP/UNMAP and page-table sharing should have different ID spaces...
>>
>> I think it's simpler if we keep a single IOASID space per virtio-iommu
>> device, because the maximum number of address spaces (described by
>> ioasid_bits) might be a restriction of the pIOMMU. For page-table sharing
>> you still need to define which devices will share a page directory using
>> ATTACH requests, though that interface is not set in stone.
> 
> Got it. Yes, the VM is supposed to consume fewer IOASIDs than are
> physically available. It doesn't hurt to have one IOASID space for both
> IOVA map/unmap usage (one IOASID per device) and SVM usage (multiple
> IOASIDs per device). The former is handled by software and the latter
> is bound to hardware.
> 
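
As an illustration, a sketch of a guest driver filling the ATTACH request
quoted above (the virtqueue handling is omitted and the helper name is
hypothetical); the IOASID is drawn from the single per-device space
discussed here, whether it backs map/unmap or page-table sharing:

#include <stdint.h>
#include <endian.h>

/* Wire format of the ATTACH request quoted above (le32 fields). */
struct virtio_iommu_req_attach {
	uint32_t address_space;
	uint32_t device;
	uint32_t reserved;		/* flags/reserved */
};

/* Attach endpoint 'dev_id' to address space 'ioasid'. */
static void prepare_attach(struct virtio_iommu_req_attach *req,
			   uint32_t ioasid, uint32_t dev_id)
{
	req->address_space = htole32(ioasid);
	req->device = htole32(dev_id);
	req->reserved = 0;
}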

Hmm, I'm using "address space", indexed by IOASID, for the "classic" IOMMU
case, and "context", indexed by PASID, when talking about SVM. So in my
mind an address space can have multiple sub-address-spaces (contexts). The
number of IOASIDs is a limitation of the pIOMMU, and the number of PASIDs
is a limitation of the device. Therefore attaching a device to an address
space would update the number of available contexts in that address space.
The terminology is not ideal, and I'd be happy to change it for something
clearer.
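
To make the terminology concrete, a rough data-structure sketch (all names
are made up): an address space is indexed by IOASID and bounded by the
pIOMMU, and holds contexts indexed by PASID, bounded by the attached
devices.

#include <stddef.h>
#include <stdint.h>

/* One context per PASID; contexts live inside an address space. */
struct viommu_context {
	uint32_t pasid;
};

/* One address space per IOASID. The number of IOASIDs is bounded by the
 * pIOMMU; the number of usable contexts is bounded by the attached
 * devices' PASID capabilities. */
struct viommu_address_space {
	uint32_t ioasid;
	uint32_t max_contexts;
	struct viommu_context *contexts;
	size_t nr_contexts;
};

/* Attaching a device can only shrink the number of usable contexts, down
 * to the smallest PASID capability among the attached devices. */
static void attach_update_contexts(struct viommu_address_space *as,
				   uint32_t dev_max_pasids)
{
	if (dev_max_pasids < as->max_contexts)
		as->max_contexts = dev_max_pasids;
}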

Thanks,
Jean-Philippe

