
virtio-dev message


Subject: Re: [virtio] New virtio balloon...

On Mon, 03 Feb 2014 13:37:17 +1030
Rusty Russell <rusty@au1.ibm.com> wrote:

> >> > - how to accommodate memory pressure in guest?
> >> >   Let's add a field telling the host how hard we
> >> >   want our memory back
> >> 
> >> That's very hard to define across guests.  Should we be using stats for
> >> that instead?  In fact, should we allow gratuitous stats sending,
> >> instead of a simple NEED_MEM flag?
> >> 
> >> > - assume you want to over-commit host and start
> >> >   inflating balloon.
> >> >   If low on memory it might be better for guest to
> >> >   wait a bit before inflating.
> >> >   Also, if host asks for a lot of memory a ton of
> >> >   allocations will slow guest significantly.
> >> >   But for the guest to do the right thing we need the host to tell
> >> >   the guest what its memory and time constraints are.
> >> >   Let's add a field telling the guest how hard we
> >> >   want it to give us memory (e.g. a time limit)
> >> 
> >> We can't have intelligence at both ends, I think.  We've chosen a
> >> host-led model, so we should stick to that.
> >
> > I'm saying let's control speed of allocations from host,
> > that's still host-led?
> You want the guest to wait a bit, and control the rate at which it
> allocates memory.  If that's what we want, let's get the host to delay
> telling it to inflate, and then inflate slowly.  Otherwise we have to
> debug both host and guest sides when we hit performance problems.
> I changed the STATS_REPLY to STATS, and included a "want more mem"
> flag.  The implication is that the host compares stats across different
> guests.

When would the host do that? Can you elaborate a bit on how this would
be used?
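For concreteness, a STATS message carrying a "want more mem" flag as
Rusty describes might be laid out roughly like this; the struct and flag
names below are illustrative guesses, not taken from any spec draft:

```c
/* Hypothetical wire layout for a guest->host STATS message that
 * carries a "want more memory" flag alongside the usual tag/value
 * stat entries.  All names here are illustrative, not from the
 * virtio specification. */
#include <stdint.h>

#define VIRTIO_BALLOON_STATS_F_WANT_MEM  (1u << 0)

struct virtio_balloon_stat {
    uint16_t tag;    /* e.g. swap-in count, major faults, free memory */
    uint64_t val;
} __attribute__((packed));

struct virtio_balloon_stats_msg {
    uint32_t flags;  /* VIRTIO_BALLOON_STATS_F_WANT_MEM when guest is low */
    struct virtio_balloon_stat stats[];  /* gratuitous stats follow */
};
```

With a layout like this the host never has to poll: any stats send can
piggyback the flag, and the host weighs the accompanying numbers itself.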

I feel that what you're proposing is not far away from automatic
ballooning. Basically, my current idea for automatic ballooning is,
more or less:

 1. QEMU registers for vmpressure events in the host 
    (see Documentation/cgroups/memory.txt "Memory Pressure" section)

 2. The virtio-balloon driver in the guest registers for
    in-kernel memory pressure notification (not upstream yet)

 3. When the host is under pressure, QEMU is notified and it asks the
    guest to inflate its balloon by some amount

 4. When the guest is under pressure, QEMU is notified by the
    virtio-balloon driver and QEMU asks the guest to deflate by
    some value
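Step 1 uses the cgroup eventfd interface described in that memory.txt
section: write "<event_fd> <pressure_fd> <level>" to cgroup.event_control
and then read the eventfd. A minimal sketch (the memcg path and the
"medium" level are assumptions; real QEMU would use its own cgroup):

```c
/* Sketch of step 1: register an eventfd for memcg pressure events,
 * per Documentation/cgroups/memory.txt "Memory Pressure".  Error
 * handling is minimal; paths/level are illustrative. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Build the line written to cgroup.event_control:
 * "<event_fd> <pressure_fd> <level>" */
int format_event_control(char *buf, size_t len,
                         int efd, int pfd, const char *level)
{
    return snprintf(buf, len, "%d %d %s", efd, pfd, level);
}

int register_vmpressure(const char *memcg, const char *level)
{
    char path[256], line[64];
    int efd, pfd, cfd;

    efd = eventfd(0, 0);
    snprintf(path, sizeof(path), "%s/memory.pressure_level", memcg);
    pfd = open(path, O_RDONLY);
    snprintf(path, sizeof(path), "%s/cgroup.event_control", memcg);
    cfd = open(path, O_WRONLY);
    if (efd < 0 || pfd < 0 || cfd < 0)
        return -1;
    format_event_control(line, sizeof(line), efd, pfd, level);
    if (write(cfd, line, strlen(line)) < 0)
        return -1;
    close(cfd);
    return efd;   /* read(efd, ...) blocks until a pressure event fires */
}
```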

Now, doing one inflate/deflate per event is not very good. I'm trying
to find a way where we use the number of events to determine:

 A. When memory should be moved from the guest to the host

 B. When memory should be moved from the host to the guest

 C. When memory shouldn't move (i.e. when both guest and host experience
    similar pressure)
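One way to state the A/B/C rule is as a comparison of the recent
pressure-event counts QEMU has seen on each side, with a hysteresis
margin so comparable pressure moves nothing. This is only a sketch of
the idea; the function name and margin are made-up tuning knobs:

```c
/* Hypothetical decision rule for cases A/B/C above: compare recent
 * pressure-event counts QEMU observed for host and guest.  The
 * margin provides hysteresis; its value is a made-up tunable. */

enum balloon_action {
    BALLOON_INFLATE,   /* A: move memory from the guest to the host */
    BALLOON_DEFLATE,   /* B: move memory from the host to the guest */
    BALLOON_HOLD,      /* C: similar pressure, don't move memory */
};

enum balloon_action decide(unsigned host_events, unsigned guest_events,
                           unsigned margin)
{
    if (host_events > guest_events + margin)
        return BALLOON_INFLATE;   /* host hurts more: reclaim from guest */
    if (guest_events > host_events + margin)
        return BALLOON_DEFLATE;   /* guest hurts more: give memory back */
    return BALLOON_HOLD;          /* both sides comparable: do nothing */
}
```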

Note that there's no "central authority" that has information about
all guests to decide how to do that. Each QEMU instance has to decide
for itself, based on the information it has about the host and about
its guest.
