Subject: Re: [virtio-dev][PATCH 2/2] virtio-spi: add the device specification


On Fri, Jul 7, 2023 at 3:21 AM Haixu Cui <quic_haixcui@quicinc.com> wrote:
>
> Hi Jonathon Reinhart,
>
>      Thank you so much for all your helpful advice and info!
>
>      I took a look at the latest Linux SPI driver and found that, for
> cs_setup/cs_hold/cs_inactive set in the device tree, there are the
> following two conditions:
>      1) if the SPI controller supports CS timing configuration and CS is
> not using a GPIO line, the SPI driver writes the
> cs_setup/cs_hold/cs_inactive values into the corresponding SPI CS registers;
>      2) if CS is using a GPIO line, or the SPI controller doesn't support
> CS timing configuration, software performs the
> cs_setup/cs_hold/cs_inactive delays, as implemented in the function
> spi_set_cs in drivers/spi/spi.c.
>
>      So I agree it is necessary to pass cs_setup/cs_hold/cs_inactive to
> the host for each transfer.
>
>      For condition 1, the host should write these values into the CS
> timing registers. And as you mentioned, "one might require different CS
> setup/hold times, depending on which slave_id they are talking to (on
> the same bus)"; if so, the host needs to rewrite the CS timing registers
> between two transfers talking to different slave_ids.
>
>      For condition 2, having the SPI driver in the host perform the
> cs_setup/cs_hold/cs_inactive delays seems more accurate than performing
> the delays in the guest.
>
>
>       And the latest virtio-spi transfer structure is defined as follows:
>
>       struct spi_transfer_head {
>           u8 slave_id;
>           u8 bits_per_word;
>           u8 cs_change;
>           u8 tx_nbits;
>           u8 rx_nbits;
>           u8 reserved[3];
>           u32 mode;
>           u32 freq;
>           u32 word_delay_ns;
>           u32 delay_ns;
>           u32 cs_change_delay_ns;
>           u32 cs_setup_ns;
>           u32 cs_hold_ns;
>           u32 cs_inactive_ns;
>       };
>

Hello Haixu Cui,

I think there may be some unnecessary redundancy in these fields. See below.

>       @slave_id: chipselect index used by the SPI transfer
>       @bits_per_word: the number of bits in each SPI transfer word
>       @cs_change: True to deselect device before starting the next transfer
>       @tx_nbits: number of bits used for writing
>       @rx_nbits: number of bits used for reading
>       @reserved: reserved for future use
>       @mode: the spi mode defines how data is clocked out and in
>       @freq: transfer speed
>       @word_delay_ns: delay to be inserted between consecutive words of
> a transfer, in ns unit
>       @cs_setup_ns: delay to be introduced by the controller after CS is
> asserted, in ns unit

(I am reordering these in my quoted text to be grouped appropriately.)

>       @delay_ns: delay to be introduced after this transfer before
> (optionally) changing the chipselect status, in ns unit
>       @cs_hold_ns: delay to be introduced by the controller before CS is
> deasserted, in ns unit

Aren't @delay_ns and @cs_hold_ns specifying the same thing?

Linux spi_transfer defines @delay as:
    delay to be introduced after this transfer before
    (optionally) changing the chipselect status, then starting the
    next transfer or completing this spi_message.

...and spi_device @cs_hold as: delay to be introduced by the
    controller before CS is deasserted.

(This is the period labeled "D" in my diagram.)

I guess the only difference is if you have a series of transfers, where
only the last transfer has @cs_change=true, and all preceding transfers
have @cs_change=false. In this case, @delay_ns would apply between each
transfer, and after the last transfer @delay_ns + @cs_hold_ns would apply
before CS is deasserted:

    Transfer[0] (cs_change=false)
    delay(@delay_ns)
    Transfer[1] (cs_change=false)
    delay(@delay_ns)
    Transfer[2] (cs_change=true)
    delay(@delay_ns) + delay(@cs_hold_ns)
    CS deasserted
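
To make the overlap concrete, here's a rough sketch (plain C, with
made-up helper names, not from any real API) of how a device
implementation might apply the two fields at the end of a transfer;
they simply add up at the same point, period "D" in my diagram (quoted
below):

    #include <stdint.h>

    /* Hypothetical stand-ins for whatever primitives the host uses. */
    static void ndelay(uint32_t ns)     { (void)ns; /* stub */ }
    static void cs_deassert(uint8_t id) { (void)id; /* stub */ }

    /*
     * Sketch of end-of-transfer handling for the proposed header fields:
     * @delay_ns and @cs_hold_ns end up applied back-to-back at the same
     * point in the cycle.
     */
    static void end_of_transfer(uint8_t slave_id, uint8_t cs_change,
                                uint32_t delay_ns, uint32_t cs_hold_ns)
    {
            ndelay(delay_ns);           /* "after this transfer ..."     */
            if (cs_change) {
                    ndelay(cs_hold_ns); /* "... before CS is deasserted" */
                    cs_deassert(slave_id);
            }
    }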


>       @cs_change_delay_ns: delay between cs deassert and assert when
> cs_change is set, in ns unit
>       @cs_inactive_ns: delay to be introduced by the controller after CS
> is deasserted, in ns unit

Similarly, aren't @cs_change_delay_ns and @cs_inactive_ns applied at the
same point in the cycle?

(This is the period labeled "E" in my diagram.)

Linux spi_device defines @cs_inactive as: delay to be introduced by the
    controller after CS is deasserted. If @cs_change_delay is used from
    @spi_transfer, then the two delays will be added up.

...and spi_transfer defines @cs_change_delay as: delay between cs
    deassert and assert when @cs_change is set and @spi_transfer is not
    the last in @spi_message.

It even says they will be added up.

The only difference seems to be that @cs_change_delay applies only when
the transfer is not the last in a list (an spi_message, which we don't
have here).

    CS asserted
    Transfer[0] (cs_change=true)
    CS deasserted
    delay(@cs_inactive) + delay(@cs_change_delay)
    CS asserted
    Transfer[1] (cs_change=true)
    CS deasserted
    delay(@cs_inactive)
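
As another sketch only (made-up helper again), the period-"E" handling
would then be roughly the following, where the two delays just add up
and the only remaining distinction is the Linux "not the last transfer"
condition, which has no obvious equivalent here:

    #include <stdint.h>

    /* Hypothetical nanosecond-delay helper, stubbed for illustration. */
    static void ndelay(uint32_t ns) { (void)ns; }

    /* Sketch of the delay applied once CS has been deasserted. */
    static void after_cs_deassert(int is_last_transfer,
                                  uint32_t cs_inactive_ns,
                                  uint32_t cs_change_delay_ns)
    {
            uint32_t wait_ns = cs_inactive_ns;

            if (!is_last_transfer)      /* the Linux-only distinction */
                    wait_ns += cs_change_delay_ns;

            ndelay(wait_ns);
    }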


In both cases of redundancy above, the difference in semantics between
the spi_device and spi_transfer fields only seems to apply when a series
of transfers (an spi_message or SPI_IOC_MESSAGE) is involved. Since
(AFAICT) you aren't proposing that construct here, maybe the separate
fields are not necessary? IOW, maybe @delay_ns and @cs_change_delay_ns
can be eliminated?
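
Just to illustrate my reading, the header with those two fields dropped
would then look something like this (a sketch only, not a proposal for
the final layout):

    struct spi_transfer_head {
        u8 slave_id;
        u8 bits_per_word;
        u8 cs_change;
        u8 tx_nbits;
        u8 rx_nbits;
        u8 reserved[3];
        u32 mode;
        u32 freq;
        u32 word_delay_ns;
        u32 cs_setup_ns;
        u32 cs_hold_ns;
        u32 cs_inactive_ns;
    };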

Please let me know if I've misunderstood anything.

Jonathon

>
>      Could you please help check whether this new structure contains
> enough information? I really appreciate your kind help.
>
> Best Regards
> Haixu Cui
>
> On 7/1/2023 5:59 AM, Jonathon Reinhart wrote:
> >> Hi @jrreinhart@google.com,
> >>       Thank you very much for your helpful comments.
> >>
> >>       I missed the delay and cs_change_delay parameters. I will add both
> >> of them; although cs_change_delay cannot be set from userspace, it can
> >> be set in kernel space.
> >>
> >>
> >>       For CS delays such as PRE_DELAY/POST_DELAY, it seems that they are
> >> defined in structure spi_device:
> >>
> >>       @cs_setup: delay to be introduced by the controller after CS is
> >> asserted.
> >>
> >>       @cs_hold: delay to be introduced by the controller before CS is
> >> deasserted.
> >>
> >>       @cs_inactive: delay to be introduced by the controller after CS is
> >> deasserted. If @cs_change_delay is used from @spi_transfer, then the
> >> two delays will be added up.
> >>
> >>       They show the SPI controller's ability to control the CS
> >> assertion/deassertion timing and should not be changed for each transfer
> >> (because they cannot be updated through struct spi_transfer or
> >> struct spi_ioc_transfer). I think it is better to define these parameters
> >> in the host OS rather than in the guest OS, since it is the host OS that
> >> operates the hardware SPI controller directly. Besides, this also avoids
> >> passing the same values from guest to host time after time.
> >>
> >>       What's your opinion on this topic? Again, thank you very much.
> >
> > Hi Haixu,
> >
> > I took another look at the Linux structures and attempted to map the
> > delay parameters to the various points in a SPI sequence. Please correct
> > me if I'm wrong about any of this.
> >
> > Consider this diagram where CS# is an active-low chip select, and SCLK
> > is the serial clock:
> >
> >        ___.                                                   ._____.
> >    CS#    |___________________________________________________|     |___
> >           .                                                   .     .
> >           .                                                   .     .
> >   SCLK ________NNNN___NNNN___NNNN_______NNNN___NNNN___NNNN______________
> >           .   .    . .           .     .                  .   .     .
> > Delays:  +-A-+    +B+           +--C--+                  +-D-+--E--+
> >           .   .    . .           .     .                  .   .     .
> >           0   1    2 3           4     5                  6   7     8
> >           .   .    .             .     .                  .   .
> > Terms:   .   +Word+             .     .                  .   .
> >           .   .                  .     .                  .   .
> >           .   +-------Frame------+     +-------Frame------+   .
> >           .                                                   .
> >           +-----------------Transfer--------------------------+
> >
> > Using NXP terminology:
> >
> >    From 1 to 2 is a "word"     (with e.g. 8 bits).
> >    From 1 to 4 is a "frame"    (with 3 words).
> >    From 1 to 6 is a "transfer" (with 3 frames).
> >
> > Linux does not have the concept of a frame, only a series of transfers
> > (a spi_message), each of which can specify whether CS changes or not.
> >
> > I've identified these various timing points and attempted to match them
> > to the Linux spi_device and spi_transfer parameters:
> >
> > A - CS Setup: Delay after CS asserted before clock starts.
> >      spi_device.cs_setup
> >
> > B - Word Delay: Delay between words.
> >      spi_transfer.word_delay
> >
> > C - Frame Delay: Delay between frames.
> >      (Linux does not have the concept of a SPI "frame".)
> >
> > D - CS Hold: Delay after clock stops, and before CS deasserted.
> >      spi_device.cs_hold (+ spi_transfer.delay ??)
> >
> > E - CS Inactive: Delay after CS deasserted, before asserting again.
> >      spi_device.cs_inactive + spi_transfer.cs_change_delay
> >
> >
> > While I agree with you that some of these timings are unlikely to change
> > from transfer to transfer, which subset should be specified at the
> > device level vs. per-transfer seems somewhat arbitrary. As you can see,
> > there is some overlap between them.
> >
> > If I understand correctly, it appears that the CS hold time *can* be
> > controlled on a per-transfer basis, using spi_transfer.delay
> > ("after this transfer before changing the chipselect status").
> >
> > That leaves CS setup as the only parameter that cannot be influenced on
> > a per-transfer basis.
> >
> > Theoretically, one might require different CS setup/hold times,
> > depending on which slave_id they are talking to (on the same bus).
> > In that case, one must set those spi_device parameters to the worst-case
> > (maximum) values. However, this is already a Linux limitation, so I'm
> > not sure it needs to be improved here.
> >
> > I think you've achieved parity with what the Linux kernel API allows,
> > and that's probably good enough. As you said, anything else can be
> > adjusted in the host OS -- I'm not sure how the guest would go about
> > achieving that, though. You're not proposing the use of configuration
> > space for virtio-spi, are you?
> >
> > Regards,
> > Jonathon Reinhart
> >
> >
> >>
> >> Best Regards
> >> Haixu Cui
> >>
> >> On 6/28/2023 1:06 AM, jrreinhart@google.com wrote:
> >>> Hi Haixu,
> >>>
> >>>> +The \field{word_delay} defines how long to wait between words within
> >>>> one SPI transfer,
> >>>> +in ns unit.
> >>>
> >>> I'm a little surprised to see a word_delay but no frame_delay or
> >>> transfer_delay.
> >>>
> >>> For example, many SPI peripherals require a delay after CS is asserted,
> >>> but before the first SCLK edge, allowing them to prepare to send data.
> >>> (E.g. an ADC might begin taking a sample.)
> >>>
> >>>
> >>> The Linux struct spi_transfer has three delay fields:
> >>>
> >>> * @cs_change_delay: delay between cs deassert and assert when
> >>> *      @cs_change is set and @spi_transfer is not the last in
> >>> *      @spi_message
> >>> * @delay: delay to be introduced after this transfer before
> >>> *    (optionally) changing the chipselect status, then starting the
> >>> *    next transfer or completing this @spi_message.
> >>> * @word_delay: inter word delay to be introduced after each word size
> >>> *    (set by bits_per_word) transmission.
> >>>
> >>> The userspace spidev.h API has only two:
> >>>
> >>> * @delay_usecs: If nonzero, how long to delay after the last bit
> >>> *       transfer before optionally deselecting the device before the
> >>> *       next transfer.
> >>> * @word_delay_usecs: If nonzero, how long to wait between words within
> >>> *       one transfer. This property needs explicit support in the SPI
> >>> *       controller, otherwise it is silently ignored.
> >>>
> >>> The NXP i.MX RT500 SPI (master) controller, for example, has four
> >>> configurable delays:
> >>>
> >>> - PRE_DELAY: After CS assertion, before first SCLK edge.
> >>> - POST_DELAY: After a transfer, before CS deassertion.
> >>> - FRAME_DELAY: Between frames (which are arbitrarily defined by
> >>>     software).
> >>> - TRANSFER_DELAY: Minimum CS deassertion time between transfers.
> >>>
> >>> The NVIDIA Tegra114 SPI controller has:
> >>>
> >>> - nvidia,cs-setup-clk-count: CS setup timing parameter.
> >>> - nvidia,cs-hold-clk-count: CS hold timing parameter.
> >>>
> >>>
> >>> I understand that accurately representing all possible delays might be
> >>> difficult or futile, but I'm curious to understand why, of all the
> >>> possible delays, inter-word delay was chosen for inclusion.
> >>>
> >>> In a microcontroller, delays around CS (de)assertion can be customized
> >>> by using a GPIO -- rather than the hardware SPI block -- to drive the CS
> >>> signal. This way, delays can be inserted where needed. To do so with a
> >>> virtualized SPI controller might prove difficult however, as the virtual
> >>> channel carrying a CS GPIO signal might not be synchronized to the
> >>> channel carrying the SPI data.
> >>>
> >>> Curious to hear your thoughts.
> >>>
> >>> Thanks,
> >>> Jonathon Reinhart

