
virtio-comment message


Subject: Re: [Qemu-devel] [PATCH RFC 2/2] tests/virtio-blk: add test for WRITE_ZEROES command

On Sun, Jan 27, 2019 at 01:42:03PM -0500, Michael S. Tsirkin wrote:
> On Sun, Jan 27, 2019 at 12:57:20PM +0000, Stefan Hajnoczi wrote:
> > On Fri, Jan 25, 2019 at 02:17:01PM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Jan 25, 2019 at 03:12:45PM +0000, Stefan Hajnoczi wrote:
> > > > Based on the Linux guest driver code and the lack of more evidence in
> > > > the spec, I'm pretty sure data[] doesn't need to be padded to 512 bytes
> > > > for discard/write zero requests.
> > > 
> > > OK. Must devices support such padding?
> > 
> > I see no reason to require padding.  Host APIs and physical storage
> > controllers do not require it.
> I'm not talking about requiring padding, I'm talking about
> supporting padding on the device side.
> One reason would be ease of extending the spec for guest.
> E.g. imagine adding more fields to the command.
> With support for padding the guest simply adds fields
> to its structure, and the unused ones are ignored
> by device. Without support for padding you need two
> versions of the structure.

The simplest way is to say each struct virtio_blk_discard_write_zeroes
(currently 16 bytes long) is padded to 512 bytes, but this wastes a lot
of memory.

Stefano and I looked at the status of multiple segment discard/write
zeroes in Linux: only the NVMe driver supports multiple segments.  Even
SCSI sd doesn't support multiple segments.

Userspace APIs do not offer a way for applications to submit multiple
segments in a single call.  Only the block I/O scheduler creates
requests with multiple segments.

SPDK's virtio-blk driver and vhost-user-blk device backend also only
support 1 segment per request.

Given this situation, I'm not sure it's worth the complexity to spec out
a fancy padding scheme that no one will implement.  Wasting 512 - 16
bytes per segment also seems unattractive.  IMO it's acceptable to
introduce a feature bit if struct virtio_blk_discard_write_zeroes needs
extension in the future.

> Another reason would be compatibility.  For better or worse the spec
> does make it look like 512 byte padding is present. This patch was merged
> very recently, so I agree it's not too late to clarify that it's not
> required, but it seems a bit too much to disallow it completely.
> Let's assume for a minute a device turns up that already
> implemented the 512 byte assumption? If padding is allowed then we can
> write a driver that works with both devices.

The Linux guest driver doesn't honor 512 byte padding, so device
backends would already need to make an exception if we spec 512 byte
padding as mandatory.

Let's begin VIRTIO 1.1 with the state of real-world implementations:
data[] must be a multiple of sizeof(struct
virtio_blk_discard_write_zeroes).

I'll send a patch to the virtio TC and we can discuss further in that
thread.


Attachment: signature.asc
Description: PGP signature
