

Subject: Re: [virtio-comment] Re: [RFC] ivshmem v2: Shared memory device specification


On 27.07.20 15:56, Michael S. Tsirkin wrote:
> On Mon, Jul 27, 2020 at 03:39:32PM +0200, Jan Kiszka wrote:
>> On 27.07.20 15:20, Michael S. Tsirkin wrote:
>>> On Mon, May 25, 2020 at 09:58:28AM +0200, Jan Kiszka wrote:
>>>> #### Vendor Specific Capability (ID 09h)
>>>>
>>>> This capability must always be present.
>>>>
>>>> | Offset | Register            | Content                                        |
>>>> |-------:|:--------------------|:-----------------------------------------------|
>>>> |    00h | ID                  | 09h                                            |
>>>> |    01h | Next Capability     | Pointer to next capability or 00h              |
>>>> |    02h | Length              | 20h if Base Address is present, 18h otherwise  |
>>>> |    03h | Privileged Control  | Bit 0 (read/write): one-shot interrupt mode    |
>>>> |        |                     | Bits 1-7: Reserved (0 on read, writes ignored) |
>>>> |    04h | State Table Size    | 32-bit size of read-only State Table           |
>>>> |    08h | R/W Section Size    | 64-bit size of common read/write section       |
>>>> |    10h | Output Section Size | 64-bit size of output sections                 |
>>>> |    18h | Base Address        | optional: 64-bit base address of shared memory |
>>>>
>>>> All registers are read-only. Writes are ignored, except to bit 0 of
>>>> the Privileged Control register.
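A minimal C sketch of the layout above, for illustration only (the struct
and field names are not from the spec):

```c
#include <stdint.h>

/* Illustrative mapping of the Vendor Specific Capability registers.
 * All fields are little-endian and read-only, except bit 0 of priv_ctrl. */
struct ivshmem_vndr_cap {
	uint8_t  id;             /* 00h: capability ID, 09h */
	uint8_t  next;           /* 01h: pointer to next capability, or 00h */
	uint8_t  len;            /* 02h: 20h if base_addr present, 18h otherwise */
	uint8_t  priv_ctrl;      /* 03h: bit 0 (r/w) = one-shot interrupt mode */
	uint32_t state_tab_size; /* 04h: size of the read-only State Table */
	uint64_t rw_size;        /* 08h: size of the common read/write section */
	uint64_t output_size;    /* 10h: size of the output sections */
	uint64_t base_addr;      /* 18h: optional shared-memory base address */
} __attribute__((packed));
```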


>>> Is there value in making this follow the virtio vendor-specific
>>> capability format? That will cost several extra bytes - do you
>>> envision having many of these in the config space?

>> Of course, this could be modeled via virtio_pci_cap as well. That
>> would add 12 unused bytes and one type byte. It might help to make the
>> device look more virtio'ish, but I'm afraid there are more differences
>> at the PCI level.

> I guess it will be useful if we ever find it handy to make an ivshmem
> device also be a virtio device. Can't say why yet, but even if we don't
> care, it vaguely seems like a good idea. I guess it will also be handy
> if you ever need another vendor-specific cap: you already get a way to
> identify it without breaking drivers.
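To illustrate the forward-compatibility point: a driver that keys on a
type byte can skip vendor capabilities it does not recognize. A rough
sketch against the Linux PCI API, assuming the virtio_pci_cap convention
of a cfg_type in byte 3 (the type value here is hypothetical):

```c
#include <linux/pci.h>

#define IVSHMEM_CFG_TYPE	0x80	/* hypothetical cfg_type value */

static void scan_vendor_caps(struct pci_dev *pdev)
{
	int pos = pci_find_capability(pdev, PCI_CAP_ID_VNDR);

	while (pos) {
		u8 cfg_type;

		/* In the virtio_pci_cap format, byte 3 identifies the structure. */
		pci_read_config_byte(pdev, pos + 3, &cfg_type);

		if (cfg_type == IVSHMEM_CFG_TYPE) {
			/* parse the ivshmem registers starting at pos + 4 */
		}
		/* Unknown types are simply skipped, so new vendor caps can
		 * be added later without breaking existing drivers. */

		pos = pci_find_next_capability(pdev, pos, PCI_CAP_ID_VNDR);
	}
}
```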


I can look into that. Those 12 wasted bytes are a bit ugly, but so far we are not short on config space, even in the non-extended range.

More problematic is that the existing specification of virtio_pci_cap assumes the capability describes a structure located in a PCI resource, rather than carrying the data inline itself, let alone a register (Privileged Control).

Would it be possible to split the types into two ranges, one for the existing structure, one for others - like ivshmem - that will only share the cfg_type field?
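For reference, the virtio_pci_cap layout in question, as defined in the
virtio 1.0 specification; everything after cfg_type serves only to locate
a structure inside a BAR:

```c
struct virtio_pci_cap {
	u8 cap_vndr;	/* Generic PCI field: PCI_CAP_ID_VNDR */
	u8 cap_next;	/* Generic PCI field: next ptr. */
	u8 cap_len;	/* Generic PCI field: capability length */
	u8 cfg_type;	/* Identifies the structure. */
	u8 bar;		/* Where to find it. */
	u8 padding[3];	/* Pad to full dword. */
	le32 offset;	/* Offset within bar. */
	le32 length;	/* Length of the structure, in bytes. */
};
```

The 12 unused bytes would be bar, padding, offset, and length; cfg_type
is the one extra type byte.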


>> I do not see a use case for having multiple of those caps above per
>> device. If someone comes around with a valid use case for multiple,
>> non-contiguous shared memory regions on one device, we would need to
>> add registers for them. But that would also only work for non-BAR
>> regions, due to the limited number of BARs.


> OK, I guess this answers the below too.

>>> Also, do we want to define an extended capability format in case this
>>> is a PCI extended capability?


>> What would be the practical benefit? Do you see PCIe caps that could
>> become useful in virtual setups?

> So if we ever have a huge number of these caps, PCIe allows many more
> caps.

>> We don't do that for regular virtio devices either, do we?

> We don't, there's a small number of these so we don't run out of config
> space.

Right. But then it would not be a problem to add PCIe support (right before adding it becomes impossible) and push new caps into the extended space, all without breaking existing drivers. It's just a cap, and the spec so far does not state that there must be no other caps, neither in current virtio nor in this ivshmem device.
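For context on the headroom question: conventional config space ends at
offset FFh, while PCIe extended capabilities occupy offsets 100h through
FFFh and use 16-bit IDs, so the extended space can hold far more
capabilities. A minimal sketch of the extended capability header (macro
names are illustrative; Linux defines equivalents in pci_regs.h):

```c
/* PCIe extended capability header: one 32-bit dword per capability,
 * starting at offset 100h in extended config space. */
#define EXT_CAP_ID(hdr)		((hdr) & 0xffff)	/* 16-bit capability ID */
#define EXT_CAP_VER(hdr)	(((hdr) >> 16) & 0xf)	/* 4-bit version number */
#define EXT_CAP_NEXT(hdr)	(((hdr) >> 20) & 0xffc)	/* next offset, 0 ends the list */
```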

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

