
Subject: RE: [virtio-comment] RE: virtio-device in µC of a SoC


Hi Stefan,

> > Maybe making a concrete proposal is a better way to handle this.
> > Please let me know if this is appropriate or not - just give me some 
> > feedback, sort of either "Go away, we'll never support something like 
> > this" or "Yes, this use case sounds valid. Let us refine/see how we could support this".
>
> Hi Joerg,
> This is the right mailing list. Maybe the people who are interested in similar use cases are busy right now, but don't be discouraged!
>
> Your proposal sounds like it's within the scope of VIRTIO and makes sense in specific circumstances. The next step to making this more 
> concrete is to send a specification patch (https://github.com/oasis-tcs/virtio-spec/) for review and voting.

Thanks a lot for the feedback. Then we'll work out a concrete specification change and feed it into the GitHub virtio-spec project.

> > 
> > Proposal about: 
> > ---------------
> > 
> > Extending virtio-mmio transport by explicit synchronization.
> > 
> > Use case: 
> > ---------
> > 
> > If the virtio device is implemented on a different CPU than the virtio 
> > driver, and the driver's accesses to the MMIO region cannot be "trapped" 
> > by the device, additional synchronization between virtio driver and 
> > device is needed.
> > The current virtio MMIO transport specification implicitly expects, 
> > especially during device initialization, that the virtio device 
> > responds synchronously to MMIO register write operations by the driver.
> > In the case of a device implemented by a hypervisor, this means the 
> > hypervisor traps the guest driver's accesses to the MMIO region.
> > In the case of a device implemented on a different CPU this no longer 
> > works. For such a case, dedicated device initialization steps 
> > require additional synchronization of the driver's MMIO write operations.
>
> In Linux there is the VIRTIO-based rpmsg inter-processor messaging bus:
> https://docs.kernel.org/staging/rpmsg.html
>
> It's not developed as part of the VIRTIO spec and unfortunately I don't know much about how it works, but it came to mind 
> when I read about your use case. I believe it does not use notifications (interrupts) though, so maybe your use case is different.

Yes, I have had a close look at rpmsg as well. The same applies to the OpenAMP approach and to the different chip vendors 
(NXP, TI, ...) implementing their flavor of rpmsg. After analysis, these approaches are, at least for our use cases, not 
appropriate for providing a hardware-independent, interchangeable solution for inter-processor communication.

The item I am most confused about is that they claim to be based on VIRTIO, but this is IMHO not the case. They
reuse the vring data structure, but I do not see how that alone makes them VIRTIO, which in my eyes is fundamentally based 
on the driver <-> device model and the interaction between the two. Rpmsg does not touch this at all. Additionally,
"VIRTIO-based" suggests that there is a standard on which it is built, but there is no rpmsg standard. There is no
rpmsg driver that could easily run against a VIRTIO device, nor a potential rpmsg device that could run on a µC CPU of a different chip vendor.

Another questionable item is that they mix SoC state management (starting/stopping CPU cores) with application 
communication, which from an architecture perspective might not always be intended to be integrated into a single 
software instance.

The third downside is that these approaches just define an API or provide a library which must potentially be licensed.
This is also not a favored architecture model. VIRTIO, on the other hand, defines a standard that supports any implementation:
if devices and drivers are developed conforming to this standard, driver/device binaries from different contributors are supposed 
to be able to work together.
Therefore we would prefer VIRTIO if it supported a transport across CPUs on a SoC sharing memory, especially
because it would open up possibilities for a whole range of device types already defined by the standard.

> > 
> > Design proposal:
> > ----------------
> > 
> > 1. Introduce a new device feature bit indicating requirement for 
> >    synchronization when using the MMIO transport.
> > 
> >    E.g. VIRTIO_F_MMIO_SYNCHRONIZE. This feature bit indicates that the device
> >    requires MMIO synchronization. It must be defined in the first 32 feature 
> >    bits of the device. 
> > 
> > 2. Driver requirements for VIRTIO_F_MMIO_SYNCHRONIZE:
> > 
> >    a. The driver must read the first 32 device feature bits at first in order
> >       to identify if MMIO access must be synchronized.
> >    b. In case VIRTIO_F_MMIO_SYNCHRONIZE is indicated by the device, the
> >       driver must wait for the synchronization acknowledgement by the device
> >       after writing to: 
> >         DeviceFeaturesSel
> >         DriverFeaturesSel
> >         DriverFeatures
> >         QueueSel
> >         Status
> >         QueueDescHigh
> >         QueueDriverHigh
> >         QueueDeviceHigh
> >         SHMSel
> >         SHMLenHigh
> >         SHMBaseHigh
> >         QueueReset
> > 
> > 3. The MMIO synchronization
> > 
> >    The MMIO synchronization is realized by 2 additional registers in the
> >    MMIO region:
> > 
> >         DriverSyncWatermark (RW)
> >         DeviceSyncWatermark (R)
> > 
> >    The synchronization works this way: 
> >    Whenever the driver has written to the registers listed in section 2. 
> >    above, it increments the value in DriverSyncWatermark by one.
> >    The driver does not continue with reading/writing any MMIO registers 
> >    before the device has set the DeviceSyncWatermark to the same value as 
> >    in DriverSyncWatermark. By updating DeviceSyncWatermark to the same
> >    value as DriverSyncWatermark the device indicates acknowledgement of
> >    the change to MMIO registers by the driver.
>
> On some CPU architectures there are memory wait instructions for monitoring the contents of a memory location. 
> DriverSyncWatermark and DeviceSyncWatermark look like ideal targets for this type of CPU instruction.

I actually don't know these instructions. The SoC architectures we have looked at don't even share caches 
between µP and µC CPU cores, so DRAM (and possibly SRAM) at the lowest level is the only common resource to 
synchronize on. But if there are instructions that wait synchronously on a memory change, even better.
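To make the handshake concrete, here is a minimal C sketch of the driver-side write path under the proposed VIRTIO_F_MMIO_SYNCHRONIZE. The register offsets and helper names are illustrative placeholders (not part of the current virtio-mmio layout), and barriers/cache maintenance are only hinted at in comments:

```c
/* Sketch of a synchronized MMIO write under the proposed
 * VIRTIO_F_MMIO_SYNCHRONIZE feature. Offsets below are assumptions
 * for illustration only. */
#include <stdint.h>

#define DRIVER_SYNC_WATERMARK 0x0f0u  /* assumed offset, RW */
#define DEVICE_SYNC_WATERMARK 0x0f4u  /* assumed offset, R  */

static inline uint32_t mmio_read32(volatile uint8_t *base, uint32_t off)
{
    return *(volatile uint32_t *)(base + off);
}

static inline void mmio_write32(volatile uint8_t *base, uint32_t off,
                                uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;
}

/* Write a register that requires synchronization, then spin until the
 * device acknowledges by matching the driver's watermark. */
static void mmio_write_sync(volatile uint8_t *base, uint32_t off,
                            uint32_t val)
{
    uint32_t wm = mmio_read32(base, DRIVER_SYNC_WATERMARK) + 1;

    mmio_write32(base, off, val);
    mmio_write32(base, DRIVER_SYNC_WATERMARK, wm);

    /* On a real SoC, a memory barrier and/or cache flush would be
     * needed here; omitted for brevity. */
    while (mmio_read32(base, DEVICE_SYNC_WATERMARK) != wm)
        ;  /* or a wait-for-memory-change instruction where available */
}
```

The driver never touches another register until DeviceSyncWatermark has caught up, which is exactly the ordering guarantee the proposal describes.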

> > 4. Issues
> > 
> >    - Driver initiated device reset at start of device initialization:
> > 
> >      According to the specification, the driver must reset the device
> >      before reading the DeviceFeatures. This is before it knows that
> >      the device needs synchronization.
> >      In this case the driver must restart the device initialization,
> >      i.e. reinitiate device reset, this time waiting for 
> >      acknowledgement of the device.
>
> This sounds fine. It doesn't stop your approach from working.
>
> > 
> >    - driver -> device notification of queue updates, i.e. QueueNotify:
> > 
> >      If the driver had to wait for device acknowledgement of 
> >      updates to QueueNotify after the device initialization phase, this
> >      would have a significant impact on performance and would 
> >      probably not be accepted by users. Therefore the requirement for 
> >      synchronization on updates to QueueNotify is dropped.
> >      Instead, the device is supposed to periodically check for updates 
> >      on QueueNotify or check the physical memory location of the
> >      driver queues directly for updates.
>
> The queue index or VIRTIO_F_NOTIFICATION_DATA value may be overwritten by the driver by the time 
> the device sees it. If the device needs to poll anyway, then it might as well poll the vring. 
> QueueNotify can be skipped entirely and is unnecessary with VIRTIO_F_MMIO_SYNCHRONIZE.

Yes. I agree.
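For illustration, a device polling the vring directly could look like the following C sketch. The structure layout follows the VIRTIO split-ring format; the polling policy itself and the function names are assumptions:

```c
/* Device-side sketch: instead of waiting for a QueueNotify write, the
 * device periodically checks the split-virtqueue available ring index
 * in shared memory. */
#include <stdint.h>

struct vring_avail {
    uint16_t flags;
    uint16_t idx;      /* incremented by the driver when buffers are added */
    uint16_t ring[];   /* descriptor chain heads */
};

/* Returns the head of the next available descriptor chain, or -1 if
 * nothing new is pending. A real device would call this periodically. */
static int poll_avail(volatile struct vring_avail *avail,
                      uint16_t *last_seen, uint16_t qsize)
{
    uint16_t head;

    if (avail->idx == *last_seen)
        return -1;                      /* nothing new */

    /* plain loads; add barriers/cache maintenance on real hardware */
    head = avail->ring[*last_seen % qsize];
    (*last_seen)++;
    return head;
}
```

Since the driver only ever appends and increments idx, the device can track its own last-seen index without any QueueNotify register at all.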

> How do Used Buffer Notifications (typically device->driver interrupts) work? Are they skipped too 
> and the driver polls the vrings?

I'd like to keep the requirements low, as I am not sure if and how µC -> µP interrupts can be configured.
So yes, I'd suggest skipping that for now. But I actually don't know how e.g. Linux would
handle a driver polling mode, so this is something we need to investigate further. Good point.
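The driver-side counterpart of skipping interrupts would be polling the used ring for completions. Again a hedged C sketch: the layout follows the VIRTIO split-ring format, while the function and parameter names are illustrative assumptions:

```c
/* Driver-side sketch: with device->driver interrupts skipped, the
 * driver polls the split-virtqueue used ring index for completed
 * buffers. */
#include <stdint.h>

struct vring_used_elem {
    uint32_t id;    /* head of the completed descriptor chain */
    uint32_t len;   /* number of bytes written by the device */
};

struct vring_used {
    uint16_t flags;
    uint16_t idx;                   /* incremented by the device */
    struct vring_used_elem ring[];
};

/* Returns 1 and fills *elem if a completion is pending, else 0. */
static int poll_used(volatile struct vring_used *used,
                     uint16_t *last_seen, uint16_t qsize,
                     struct vring_used_elem *elem)
{
    uint16_t slot = *last_seen % qsize;

    if (used->idx == *last_seen)
        return 0;

    /* copy out; add barriers/cache maintenance on real hardware */
    elem->id  = used->ring[slot].id;
    elem->len = used->ring[slot].len;
    (*last_seen)++;
    return 1;
}
```

How often to call such a poll function (timer tick, idle loop, etc.) is exactly the open question about Linux driver polling mentioned above.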

Thanks,
Joerg
