
Subject: Re: [virtio] [OASIS Issue Tracker] Created: (VIRTIO-28) Implement new balloon device (ID 13)

Daniel Kiper <daniel.kiper@oracle.com> writes:
> On Tue, Oct 29, 2013 at 10:44:53AM +1030, Rusty Russell wrote:
>> "Michael S. Tsirkin" <mst@redhat.com> writes:
>> > OK there are some more issues to consider around deflate.
>> >
>> >
>> > 1.  On KVM, we actually want to change QEMU so that pagefaults don't
>> > work either.  Specifically, we want to skip pages in the balloon for
>> > migration.
>> > However, migration is done in userspace while pagefaults
>> > are done in kernel.
>> > I think the implication is that
>> > -	 you should be able to ask guest to inflate balloon
>> > 	 with pages that can be paged in (when you don't want to migrate
>> > 	 and want max local performance) or with pages that can not be paged in
>> > 	(when you want to migrate faster), dynamically, not through a
>> > 	device feature
>> > -	 "will notify before use" feature should be per a bunch of pages actually.
>> I am always reluctant to implement a spec for things which don't exist.
>> This is the cause of the current "negative feature" mess.
>> So if we *ever* want to ask for pages, let's make the driver always
>> ask for pages.  You place a buffer in the queue and the device fills it
>> with page addresses you can now use.
> You mean PFNs?

PFN << PAGE_BITS.  Since we're dealing with different size pages, using
exact addresses is clearer, I think.

>> +3. To withdraw pages from the balloon, the same structure should be
>> +   placed in the todevq queue, with the page array writable:
>> +
>> +	struct virtio_balloon_pages {
>> +		u32 type; // VIRTIO_BALLOON_REQ_PAGES
>> +		u64 page[];
> What is the size of this array?

It's implied by the length of the request.

>> +	};
>> +
>> +   The device may not fill the entire page array.  The contents
>> +   of the pages received will be undefined.  The device should
>>     keep count of how many pages remain in the balloon so it can
>>     correctly respond to future resize requests.
> What happens if the driver requests more pages than are in the balloon?
> Are we going to support such cases? I am asking in the context
> of memory hotplug support.

I don't think so.  The device won't fill the entire array in that case
(remember, virtio gets a "used" field returned, which says how many
bytes were written by the device).

Memory hotplug is properly the realm of platform specific methods
(eg. ACPI), so I think it's outside the virtio spec.

Ballooning is simpler, but has proven useful.  Both can coexist.

