Subject: RE: [Qemu-devel] [virtio-dev] Re: [PATCH v5 4/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT


On Monday, March 26, 2018 7:09 PM, Daniel P. Berrangé wrote:
> 
> As far as libvirt is concerned there are three sets of threads it provides
> control over
> 
>  - vCPUs - each vCPU in KVM has a thread. Libvirt provides per-thread
>    tunable control
> 
>  - IOThreads - each named I/O thread can be associated with one or more
>    devices. Libvirt provides per-thread tunable control.
> 
>  - Emulator - any other QEMU thread which isn't a vCPU thread or IO thread
>    gets called an emulator thread by libvirt. There is no per-thread
>    tunable control - we can only set tunables for the entire set of
>    emulator threads at once.
> 


Hi Daniel,
Thanks for sharing the details; they are very helpful. I still have a question:

There is no fundamental difference between an iothread and our optimization thread (which is similar to the migration thread: it is created when migration begins and terminated when migration is done) - both are pthreads and each has a name. Could we also add similar per-thread tunable control in libvirt for such threads?

For example, in QEMU we could add a new migration QMP command, migrate_enable_free_page_optimization (just like other commands, e.g. migrate_set_speed), and this command would create the optimization thread. In this way, creation of the thread would be under libvirt's control, and libvirt could then support tuning the thread (e.g. pinning it to any pCPU), right?
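
Just to illustrate the mechanism (a standalone sketch in plain POSIX C, not QEMU code; the thread name "free-page-opt" and the pCPU number are made up for illustration): since the optimization thread is an ordinary named pthread, pinning it comes down to a pthread_setaffinity_np() call, which is the kind of per-thread control libvirt would expose:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <unistd.h>

    /* Thread body standing in for the optimization thread. */
    static void *opt_thread_fn(void *arg)
    {
        (void)arg;
        sleep(1);                 /* placeholder for the real work */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        cpu_set_t cpus;

        pthread_create(&tid, NULL, opt_thread_fn, NULL);

        /* Give the thread a name (15 chars max plus NUL), the same way
           QEMU names its migration and I/O threads. */
        pthread_setname_np(tid, "free-page-opt");

        /* Pin the thread to pCPU 2 -- the per-thread tunable control
           that libvirt could apply once it knows the thread exists. */
        CPU_ZERO(&cpus);
        CPU_SET(2, &cpus);
        pthread_setaffinity_np(tid, sizeof(cpus), &cpus);

        pthread_join(tid, NULL);
        return 0;
    }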


> So, if this balloon driver thread needs to support tuning controls separately
> from other general purpose QEMU threads, then it would ideally use
> iothread infrastructure.
> 
> I don't particularly understand what this code is doing, but please consider
> whether NUMA has any impact on the work done in this thread. Specifically
> when the guest has multiple virtual NUMA nodes, each associated with a
> specific host NUMA node. If there is any memory intensive work being done
> here, then it might need to be executed on the correct host NUMA node
> according to the memory region being touched.
> 
 
I think it would not be significantly impacted by NUMA, because this optimization thread doesn't access guest memory much apart from the virtqueue (even with an iothread, we may still not know which pCPU to pin it to so that it matches the virtqueue in the vNUMA case). Essentially, it gets a free page's address and length, then clears the corresponding bits in the migration dirty bitmap, which is allocated by QEMU itself (see the sketch below).
So I think adding the tunable support would be nicer, but I'm not sure it would be required.
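
To illustrate that last point, here is a minimal standalone sketch (plain C; the function names, page size, and addresses are invented for illustration, not QEMU's actual code) of the core operation: translate a reported free range into page-granularity bit positions and clear them in the dirty bitmap:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                   /* 4 KiB pages, as on x86 */
    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Clear nr bits starting at bit 'start' -- the bitmap-clearing
       step described above. */
    static void bitmap_clear_range(unsigned long *bitmap, uint64_t start,
                                   uint64_t nr)
    {
        for (uint64_t bit = start; bit < start + nr; bit++) {
            bitmap[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
        }
    }

    /* Hypothetical handler: the guest reports a free range as a
       (guest physical address, length) pair; drop it from the dirty
       bitmap so migration skips those pages. */
    static void skip_free_pages(unsigned long *dirty_bitmap, uint64_t gpa,
                                uint64_t len)
    {
        bitmap_clear_range(dirty_bitmap, gpa >> PAGE_SHIFT,
                           len >> PAGE_SHIFT);
    }

    int main(void)
    {
        unsigned long dirty[8];       /* covers 512 pages, all dirty */

        for (int i = 0; i < 8; i++) {
            dirty[i] = ~0UL;
        }

        /* Guest reports 16 free pages starting at GPA 1 MiB. */
        skip_free_pages(dirty, 0x100000, (uint64_t)16 << PAGE_SHIFT);
        printf("bitmap word 4 after clearing: %#lx\n", dirty[4]);
        return 0;
    }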

Best,
Wei

