virtio-comment message



Subject: RE: [PATCH 10/11] transport-pci: Use driver notification PCI capability


> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, April 12, 2023 12:43 AM
> To: Parav Pandit <parav@nvidia.com>
> Cc: virtio-dev@lists.oasis-open.org; cohuck@redhat.com;
> virtio-comment@lists.oasis-open.org; Shahaf Shuler <shahafs@nvidia.com>;
> Satananda Burla <sburla@marvell.com>
> Subject: Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability
> 
> On Wed, Apr 12, 2023 at 04:37:05AM +0000, Parav Pandit wrote:
> >
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Wednesday, April 12, 2023 12:31 AM
> > >
> > > On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> > > > PCI devices support memory BAR regions for performant driver
> > > > notifications using the notification capability.
> > > > Enable transitional MMR devices to use it in a simpler manner.
> > > >
> > > > Co-developed-by: Satananda Burla <sburla@marvell.com>
> > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > ---
> > > >  transport-pci.tex | 28 ++++++++++++++++++++++++++++
> > > >  1 file changed, 28 insertions(+)
> > > >
> > > > diff --git a/transport-pci.tex b/transport-pci.tex
> > > > index 55a6aa0..4fd9898 100644
> > > > --- a/transport-pci.tex
> > > > +++ b/transport-pci.tex
> > > > @@ -763,6 +763,34 @@ \subsubsection{Notification structure layout}\label{sec:Virtio Transport Options
> > > >  cap.length >= queue_notify_off * notify_off_multiplier + 4
> > > >  \end{lstlisting}
> > > >
> > > > +\paragraph{Transitional MMR Interface: A note on Notification
> > > > +Capability} \label{sec:Virtio Transport Options / Virtio Over PCI
> > > > +Bus / Virtio Structure PCI Capabilities / Notification capability
> > > > +/ Transitional MMR Interface}
> > > > +
> > > > +The transitional MMR device benefits from receiving driver
> > > > +notifications at the Queue Notification address offered using the
> > > > +notification capability, rather than via the memory mapped legacy
> > > > +QueueNotify configuration register.
> > > > +
> > > > +The transitional MMR device uses the same Queue Notification address
> > > > +within a BAR for all virtqueues:
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > > +The transitional MMR device MUST support Queue Notification
> > > > +address within a BAR for all virtqueues at:
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > > +The transitional MMR driver that wants to use driver
> > > > +notifications offered using the notification capability MUST use the
> > > > +same Queue Notification address within a BAR for all virtqueues at:
> > > > +
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > Why? What exactly is going on here? Legacy drivers will not do this.
> >
> > The legacy driver does this in the queue notify register, which is sandwiched
> > in between slow configuration registers.
> > This is the notification offset for the hypervisor driver to perform the
> > notification on behalf of the guest driver, so that the acceleration available
> > for the non-transitional device can be utilized here as well.
> 
> I don't get it. What acceleration? For guests you need a separate page so the
> card can be mapped directly while config causes an exit. But the hypervisor can
> access any register without vmexits.

Typically, when the guest VM writes to the IOBAR queue notification register, a vmexit occurs.
On that occurrence, the hypervisor driver forwards the queue notification using the queue notification region defined by struct virtio_pci_notify_cap.
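
To make that forwarding path concrete, here is a minimal hypervisor-side sketch in C. It is not part of the patch; forward_queue_notify(), bar_va and the other names are hypothetical, and the structure layouts are paraphrased from the virtio spec. The point is only the address arithmetic: the doorbell lives at cap.offset + queue_notify_off * notify_off_multiplier, which for the proposed transitional MMR device collapses to cap.offset for every virtqueue.

#include <stdint.h>

/* Paraphrased from the virtio spec: generic virtio PCI capability. */
struct virtio_pci_cap {
        uint8_t  cap_vndr;      /* Generic PCI field: PCI_CAP_ID_VNDR */
        uint8_t  cap_next;      /* Generic PCI field: next pointer */
        uint8_t  cap_len;       /* Generic PCI field: capability length */
        uint8_t  cfg_type;      /* Identifies the structure */
        uint8_t  bar;           /* Where to find it */
        uint8_t  id;            /* Multiple capabilities of the same type */
        uint8_t  padding[2];    /* Pad to full dword */
        uint32_t offset;        /* Offset within bar */
        uint32_t length;        /* Length of the structure, in bytes */
};

/* Paraphrased from the virtio spec: notification capability. */
struct virtio_pci_notify_cap {
        struct virtio_pci_cap cap;
        uint32_t notify_off_multiplier; /* Multiplier for queue_notify_off */
};

/*
 * Hypothetical forwarding helper: the guest wrote the legacy IOBAR
 * QueueNotify register, the write trapped (vmexit), and the hypervisor
 * driver relays it to the device's notification region.  bar_va is the
 * hypervisor's mapping of the BAR named by notify->cap.bar.
 */
static void forward_queue_notify(uint8_t *bar_va,
                                 const struct virtio_pci_notify_cap *notify,
                                 uint32_t queue_notify_off,
                                 uint16_t vq_index)
{
        /*
         * General case: per-queue doorbell at
         *   cap.offset + queue_notify_off * notify_off_multiplier.
         * For the transitional MMR device in this patch, all virtqueues
         * share the doorbell at cap.offset, so the second term is zero.
         */
        volatile uint16_t *doorbell = (volatile uint16_t *)
                (bar_va + notify->cap.offset +
                 (uint64_t)queue_notify_off * notify->notify_off_multiplier);

        *doorbell = vq_index;   /* notify the device with the vq index */
}

With a single shared doorbell, the hypervisor only has to map one location per device rather than track a per-queue offset, which is what allows the notification acceleration used for non-transitional devices to be reused here.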


