Subject: Re: [virtio] [OASIS Issue Tracker] Created: (VIRTIO-28) Implement new balloon device (ID 13)


On Tue, Oct 29, 2013 at 05:37:26PM +0100, Daniel Kiper wrote:
> On Tue, Oct 29, 2013 at 11:15:59PM +1030, Rusty Russell wrote:
> > Daniel Kiper <daniel.kiper@oracle.com> writes:
> > > On Tue, Oct 29, 2013 at 10:44:53AM +1030, Rusty Russell wrote:
> > >> "Michael S. Tsirkin" <mst@redhat.com> writes:
> > >> > OK there are some more issues to consider around deflate.
> > >> >
> > >> >
> > >> > 1.  On KVM, we actually want to change QEMU so that pagefaults don't
> > >> > work either.  Specifically, we want to skip pages in the balloon for
> > >> > migration.
> > >> > However, migration is done in userspace while pagefaults
> > >> > are handled in the kernel.
> > >> > I think the implication is that
> > >> > -	 you should be able to ask the guest to inflate the balloon
> > >> > 	 with pages that can be paged in (when you don't want to migrate
> > >> > 	 and want max local performance) or with pages that cannot be paged in
> > >> > 	 (when you want to migrate faster), dynamically, not through a
> > >> > 	 device feature
> > >> > -	 the "will notify before use" feature should actually be per bunch of pages.
> > >>
> > >> I am always reluctant to implement a spec for things which don't exist.
> > >> This is the cause of the current "negative feature" mess with
> > >> VIRTIO_BALLOON_F_MUST_TELL_HOST.
> > >>
> > >> So if we *ever* want to ask for pages, let's make the driver always
> > >> ask for pages.  You place a buffer in the queue and the device fills it
> > >> with page addresses you can now use.
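
For illustration, a minimal driver-side sketch of that "ask for pages"
flow, assuming the Linux virtio API and the struct layout proposed
further down; req_vq and the fixed array size are placeholders, not
part of the proposal:

	#include <linux/types.h>
	#include <linux/virtio.h>
	#include <linux/scatterlist.h>

	#define VIRTIO_BALLOON_REQ_PAGES	2

	struct virtio_balloon_pages {
		u32 type;		/* VIRTIO_BALLOON_REQ_PAGES */
		u64 page[64];		/* filled in by the device */
	};

	static int balloon_request_pages(struct virtqueue *req_vq,
					 struct virtio_balloon_pages *req)
	{
		struct scatterlist out, in;
		struct scatterlist *sgs[] = { &out, &in };
		int err;

		req->type = VIRTIO_BALLOON_REQ_PAGES;
		/* Device-readable part: the request type. */
		sg_init_one(&out, &req->type, sizeof(req->type));
		/* Device-writable part: filled with page addresses. */
		sg_init_one(&in, req->page, sizeof(req->page));

		err = virtqueue_add_sgs(req_vq, sgs, 1, 1, req, GFP_KERNEL);
		if (err)
			return err;
		virtqueue_kick(req_vq);
		return 0;
	}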
> > >
> > > You mean PFNs?
> >
> > PFN << PAGE_BITS.  Since we're dealing with different size pages, using
> > exact addresses is clearer, I think.
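
In other words, each entry carries a byte address rather than a raw
frame number.  For 4 KiB pages the conversion is just a shift (a
sketch; PAGE_BITS here is log2 of the page size, i.e. PAGE_SHIFT in
Linux terms):

	#define PAGE_BITS	12	/* log2(4096) */

	u64 addr = (u64)pfn << PAGE_BITS;	/* PFN -> exact address */
	u64 pfn2 = addr >> PAGE_BITS;		/* exact address -> PFN */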
>
> OK, that makes sense. However, I have done a quick review of the existing
> Xen balloon driver, and PFNs or exact addresses probably will not work on
> Xen, even on HVM. This must be confirmed; I will do that after releasing
> the first draft.
>
> > >> +3. To withdraw pages from the balloon, the same structure should be
> > >> +   placed in the todevq queue, with the page array writable:
> > >> +
> > >> +	struct virtio_balloon_pages {
> > >> +#define VIRTIO_BALLOON_REQ_PAGES	2
> > >> +		u32 type; // VIRTIO_BALLOON_REQ_PAGES
> > >> +		u64 page[];
> > >
> > > What is the size of this array?
> >
> > It's implied by the length of the request.
>
> OK.
>
> > >> +	};
> > >> +
> > >> +   The device may not fill the entire page array.  The contents
> > >> +   of the pages received will be undefined.  The device should
> > >>     keep count of how many pages remain in the balloon so it can
> > >>     correctly respond to future resize requests.
> > >
> > > What happens if the driver requests more pages than are in the balloon?
> > > Are we going to support such cases? I am asking in the context
> > > of memory hotplug support.
> >
> > I don't think so.  The device won't fill the entire array in that case
> > (remember, virtio gets a "used" field returned, which says how many
> > bytes were written by the device).
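
For completeness, a sketch of the driver-side completion under that
rule, assuming the Linux virtqueue_get_buf() API; reclaim_page() is a
hypothetical helper standing in for whatever the driver does with a
returned page:

	unsigned int len, i;
	struct virtio_balloon_pages *req;

	/* len is the number of bytes the device wrote. */
	req = virtqueue_get_buf(req_vq, &len);
	if (req) {
		/* The device may return fewer pages than requested. */
		unsigned int npages = len / sizeof(req->page[0]);

		for (i = 0; i < npages; i++)
			reclaim_page(req->page[i]);	/* hypothetical */
	}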
> >
> > Memory hotplug is properly the realm of platform-specific methods
> > (e.g. ACPI), so I think it's outside the virtio spec.
>
> In general, ACPI is used as a notifier which says that new memory was
> installed in the system. If memory hotplug support is built into the
> system, it creates all the structures needed to use the newly added
> memory. However, the memory must be activated by the user. This process
> works in a similar way to balloon deflate.
>
> A notification mechanism other than ACPI could be used, but the memory
> hotplug mechanism itself is quite generic and could be used everywhere.
> We use that feature in the Xen implementation. Hence, we could use the
> same in the VIRTIO balloon driver. We just need a device which gives
> pages to the VM without any limitations (i.e. subject only to limits
> established by the admin, and not limited by the amount of memory
> assigned at boot). This way guest memory could be extended without
> stopping the guest. If we just used the balloon it would not work that way.
>
> > Ballooning is simpler, but has been shown to be useful.  Both can coexist.
>
> Right. We did that in the Xen balloon driver. It supports ballooning,
> but memory hotplug is used when more memory is requested than was
> assigned to the VM at boot. Afterwards everything works as usual; even
> the balloon driver works the same way on hotplugged memory as on memory
> allocated at boot. So we just need a device which gives pages to the VM
> without any limitations.
>
> Of course, we could consider a separate device for memory hotplug, but
> I do not think that makes sense.

Any comments?

Daniel

