Subject: Re: [virtio-dev] Re: clarifying the handling of responses for virtio-rpmb


Harald Mommer <hmo@opensynergy.com> writes:

> Hello,
>
> I have had my hands in a virtio RPMB device implementation for the
> last few weeks. During development I had to apply some patches to the
> virtio RPMB driver:
>
>   * Change the device id from 0xFFFF to 28
>
>   * (Add some debug facilities. Needed to see the frames. At first I
>     got no request frames on the device side, nothing.)
>
>   * Fix descriptor directions. For the outgoing frames num_in was
>     incremented instead of num_out.
>
> The frames in the for-loop may be outgoing or intended for incoming
> data. The RPMB_F_WRITE flag decides what to do with each frame:
>
>    for (i = 0; i < ncmds; i++) {
>          ...
>
>          /* out (device-readable) frames must all come before the in
>           * (device-writable) frames in sgs[] for virtqueue_add_sgs() */
>          if (cmds[i].flags & RPMB_F_WRITE)
>              sgs[num_out++ + num_in] = &frame[i];  /* driver -> device */
>          else
>              sgs[num_out + num_in++] = &frame[i];  /* device -> driver */
>      }
>
>   * Now got too much data compared to the virtio spec. Removed those
>     additional frames at the beginning by disabling some pieces of
>     code in the virtio RPMB driver.
>
> You are probably puzzled by something which I think is a bug in the
> virtio RPMB driver regarding the descriptor directions. It could be
> that some device implementations do not really care about the provided
> descriptor directions; in that case this may go unnoticed for a while.
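
For reference, once the sgs[] array is collected the usual driver-side
pattern is to hand it to virtqueue_add_sgs() with the out/in split made
explicit. A minimal sketch (not the actual driver code; the helper name
and token are mine):

    #include <linux/scatterlist.h>
    #include <linux/virtio.h>

    /* Sketch only: queue one RPMB command sequence. virtqueue_add_sgs()
     * expects all device-readable (out) entries in sgs[] before the
     * device-writable (in) ones, with matching counts, which is why a
     * swapped num_out/num_in shows up as wrong descriptor directions.
     */
    static int rpmb_vq_queue_cmds(struct virtqueue *vq,
                                  struct scatterlist **sgs,
                                  unsigned int num_out,
                                  unsigned int num_in,
                                  void *token)
    {
            int err;

            err = virtqueue_add_sgs(vq, sgs, num_out, num_in, token,
                                    GFP_KERNEL);
            if (err)
                    return err;

            virtqueue_kick(vq);
            return 0;
    }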

I wonder if we've ended up making very similar changes to the virtio
driver? I suspect so, because the original driver had a whole bunch of
command frames for something that never made it into the final spec.

FWIW my current hacked up tree is here:

  https://git.linaro.org/people/alex.bennee/linux.git/log/?h=testing/ivshmem-and-rpm-aug2020

I was pondering whether it was worth removing the file-system
integration patches and just posting a series which implements the rpmb
char device, the userspace tool and the virtio-rpmb driver?

>
>
> On 10.09.20 at 15:08, Alex Bennée wrote:
>>
>> Alex Bennée <alex.bennee@linaro.org> writes:
>>
>>> Hi,
>>>
>>> The specification lists a number of commands that have responses:
>>>
>>>    The operation of a virtio RPMB device is driven by the requests placed
>>>    on the virtqueue. The type of request can be program key
>>>    (VIRTIO_RPMB_REQ_PROGRAM_KEY), get write counter
>>>    (VIRTIO_RPMB_REQ_GET_WRITE_COUNTER), write
>>>    (VIRTIO_RPMB_REQ_DATA_WRITE), and read (VIRTIO_RPMB_REQ_DATA_READ). A
>>>    program key or write request can also combine with a result read
>>>    (VIRTIO_RPMB_REQ_RESULT_READ) for a returned result.
>>>
>>> Now that I'm deep in the guts of virtqueues doing the
>>> implementation, I'm trying to clarify exactly what frames should be
>>> sent to the device and whether they should be out_sgs or in_sgs. I
>>> suspect there is some difference between the original prototype
>>> interface and what we have now.
>>>
>>> Some operations obviously have an implied response
>>> (VIRTIO_RPMB_REQ_GET_WRITE_COUNTER and VIRTIO_RPMB_REQ_DATA_READ). As
>>> far as I can tell the frame layout should be simple:
>>>
>>>    sg[0] (out_sg=1) - frame with command
>>>    sg[1..n] (in_sg=1..n) - space for the reply to be filled in by the device
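
To make that concrete, the read-type setup I have in mind looks roughly
like this (sketch only; the frame variables are illustrative, assuming a
single response frame):

    /* Sketch: GET_WRITE_COUNTER / DATA_READ layout. One device-readable
     * request frame, one device-writable frame for the reply (more
     * in_sgs if the read spans several frames). */
    struct scatterlist req_sg, resp_sg;
    struct scatterlist *sgs[] = { &req_sg, &resp_sg };
    int err;

    sg_init_one(&req_sg, req_frame, sizeof(*req_frame));
    sg_init_one(&resp_sg, resp_frame, sizeof(*resp_frame));

    err = virtqueue_add_sgs(vq, sgs, 1 /* out */, 1 /* in */,
                            req_frame, GFP_KERNEL);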
>>>
>>> However the language for the program key and data write says they can
>>> be combined with a VIRTIO_RPMB_REQ_RESULT_READ to optionally return a
>>> result. My question is: is this result read meant to be in a separate
>>> request frame and response frame, so you get:
>>>
>>>   sg[0] - VIRTIO_RPMB_REQ_PROGRAM_KEY/VIRTIO_RPMB_REQ_DATA_WRITE frame
>>>   sg[1] - VIRTIO_RPMB_REQ_RESULT_READ (out_sg=2)
>>>   sg[2] - empty frame for response (in_sg=1)
> This is what works after applying the direction patch above to the
> virtio driver, and it also makes sense to me. See also my comment
> below on the rpmb_ioctl() code.
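
Right, so concretely the write path ends up as three descriptors, two
out and one in. A sketch of what that looks like (again illustrative
variable names, not the final code):

    /* Sketch: PROGRAM_KEY / DATA_WRITE followed by RESULT_READ. Two
     * device-readable frames (the write request and the RESULT_READ
     * request), one device-writable frame for the result. */
    struct scatterlist cmd_sg, res_req_sg, res_sg;
    struct scatterlist *sgs[] = { &cmd_sg, &res_req_sg, &res_sg };
    int err;

    sg_init_one(&cmd_sg, write_frame, sizeof(*write_frame));
    sg_init_one(&res_req_sg, result_read_frame,
                sizeof(*result_read_frame));
    sg_init_one(&res_sg, result_frame, sizeof(*result_frame));

    err = virtqueue_add_sgs(vq, sgs, 2 /* out */, 1 /* in */,
                            write_frame, GFP_KERNEL);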
>>>
>>> or:
>>>
>>>   sg[0] - VIRTIO_RPMB_REQ_PROGRAM_KEY/VIRTIO_RPMB_REQ_DATA_WRITE frame (out_sg=1)
>>>   sg[1] - VIRTIO_RPMB_REQ_RESULT_READ (in_sg=1)
> Makes no sense to me. VIRTIO_RPMB_REQ_RESULT_READ is a request
> (command) in the same way as the other requests.
>>>
>>> where the result frame is filled in and sent back?
>>>
>>> I must say I'm a little confused by the logic in rpmb_ioctl (in the
>>> userspace tool) which creates both out_frames and resp frames:
>
> I was also confused, but it's not that complicated (after some hours).
> For REQ_PROGRAM_KEY/REQ_WRITE_DATA an additional REQ_RESULT_READ is
> always added. So in the end the last descriptor is always an incoming
> frame, to be filled either with the RESULT_READ data or with the
> response data for REQ_GET_WRITE_COUNTER/REQ_DATA_READ.
>
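
So, summarising the resulting descriptor layouts as I read the code
below (my table, not from the spec):

    Request type              cmd[0]         cmd[1]             cmd[2]
    PROGRAM_KEY/WRITE_DATA    request (out)  RESULT_READ (out)  result frame (in)
    GET_WRITE_COUNTER/READ    request (out)  response (in)      -

i.e. the last descriptor is always the device-writable one, as you say.
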
>>>    static int rpmb_ioctl(uint8_t frame_type, int fd, uint16_t req,
>>>                          const void *frames_in, unsigned int cnt_in,
>>>                          void *frames_out, unsigned int cnt_out)
>>>    {
>>>            int ret;
>>>            struct __packed {
>>>                    struct rpmb_ioc_seq_cmd h;
>>>                    struct rpmb_ioc_cmd cmd[3];
>>>            } iseq = {};
>>>
>>>            void *frame_res = NULL;
>>>            int i;
>>>            uint32_t flags;
>>>
>>>            rpmb_dbg("RPMB OP: %s\n", rpmb_op_str(req));
>>>            dbg_dump_frame(frame_type, "In Frame: ", frames_in, cnt_in);
>>>
>>>            i = 0;
>>>            flags = RPMB_F_WRITE;
>>>            if (req == RPMB_WRITE_DATA || req == RPMB_PROGRAM_KEY)
>>>                    flags |= RPMB_F_REL_WRITE;
>>>            rpmb_ioc_cmd_set(iseq.cmd[i], flags, frames_in, cnt_in);
>>>            i++;
>>>
>>>            if (req == RPMB_WRITE_DATA || req == RPMB_PROGRAM_KEY) {
>>>                    frame_res = rpmb_frame_alloc(frame_type, 0);
>>>                    if (!frame_res)
>>>                            return -ENOMEM;
>>>                    rpmb_frame_set(frame_type, frame_res,
>>>                                   RPMB_RESULT_READ, 0, 0, 0);
>>>                    rpmb_ioc_cmd_set(iseq.cmd[i], RPMB_F_WRITE, frame_res, 0);
>>>                    i++;
>>>            }
>>>
>>>            rpmb_ioc_cmd_set(iseq.cmd[i], 0, frames_out, cnt_out);
>>>            i++;
>>>
>>>            iseq.h.num_of_cmds = i;
>>>            ret = ioctl(fd, RPMB_IOC_SEQ_CMD, &iseq);
>>>            if (ret < 0)
>>>                    rpmb_err("ioctl failure %d: %s.\n", ret, strerror(errno));
>>>
>>>            ret = rpmb_check_req_resp(frame_type, req, frames_out);
>>>
>>>            dbg_dump_frame(frame_type, "Res Frame: ", frame_res, 1);
>>>            dbg_dump_frame(frame_type, "Out Frame: ", frames_out, cnt_out);
>>>            free(frame_res);
>>>            return ret;
>>>    }
>>>
>>> although I'm guessing this might just be an impedance mismatch
>>> between the chardev ioctl interface for rpmb and the virtio FE
>>> driver, which is only one potential consumer of these rpmb ioc
>>> commands?
>>>
>>> Can anyone clarify?
>> Ping?
>>
>> --
>> Alex Bennée
>>


-- 
Alex Bennée

