
Subject: Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

On 10/2/19 10:25 AM, Alexander Duyck wrote:

>>> My suggestion would be to look at reworking the patch set and
>>> post numbers for my patch set versus the bitmap approach and we can
>>> look at them then.
>> Agreed. However, in order to fix an issue I have to reproduce it first.
> With the tweak I have suggested above it should make it much easier to
> reproduce. Basically all you need is to have the allocation competing
> against hinting. Currently the hinting isn't doing this because the
> allocations are mostly coming out of 4K pages instead of higher order
> ones.
> Alternatively you could just make the suggestion I had proposed about
> using spin_lock/unlock_irq in your worker thread and that resolved it
> for me.
>>>  I would prefer not to spend my time fixing and
>>> tuning a patch set that I am still not convinced is viable.
>> You  don't have to, I can fix the issues in my patch-set. :)
> Sounds good. Hopefully the stuff I pointed out above helps you to get
> a reproduction and resolve the issues.

So I did observe a significant performance drop when running my v12 patch-set [1]
with the suggested test setup. However, after making the changes described below,
the performance improved significantly.
I used my v12 patch-set, which I posted earlier, and made the following changes
on top of it:
1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
   pages that can be reported at a time from 16 to 32. The intent of these
   changes was to bring my configuration closer to what Alexander is using
   (a rough sketch follows this list).
2. Made an additional change in my bitmap scanning logic to avoid acquiring
   the spinlock if the page is already allocated.
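
For reference, a rough sketch of what change (1) could look like in
page_reporting.h. PAGE_REPORTING_MIN_ORDER appears in the diff further down,
but the batch-size macro name and exact form here are my guesses, not the
actual v12 code:

/* Illustrative only: report (MAX_ORDER - 1) pages; MAX_ORDER comes from linux/mmzone.h. */
#define PAGE_REPORTING_MIN_ORDER	(MAX_ORDER - 1)
/* Hypothetical name for the per-request capacity, bumped from 16 to 32. */
#define PAGE_REPORTING_MAX_PAGES	32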

On a 16 vCPU, 30 GB, single-NUMA guest affined to a single host NUMA node, I ran
the modified will-it-scale/page_fault test a number of times and calculated the
average number of processes and threads launched on the 16th vCPU, to compare
the impact of my patch-set against an unmodified kernel.

%Drop in number of processes launched on 16th vCPU = 1-2%
%Drop in number of threads launched on 16th vCPU   = 5-6%

Other observations:
- I also tried running Alexander's latest v11 page-reporting patch-set and
  observed a similar average degradation in the number of processes and
  threads.
- I didn't include the linear component recorded by will-it-scale because, for
  some reason, it was fluctuating too much even with an unmodified kernel. If
  required, I can investigate this further.

Note: If there is a better way to analyze the will-it-scale/page_fault results,
please do let me know.

Other setup details:
Following are the configurations I used to run my tests:
- Set host THP to always.
- Set guest THP to madvise.
- Added the suggested madvise call in the page_fault source code (a rough
  sketch follows).
@Alexander please let me know if I missed something.
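
For clarity, roughly where the suggested madvise() call sits in the
will-it-scale page_fault testcase; the function, size, and surrounding code
here are illustrative, not the actual test source:

#include <stddef.h>
#include <sys/mman.h>

#define MEMSIZE (128UL * 1024 * 1024)	/* illustrative size */

static void *map_test_region(void)
{
	void *buf = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;

	/* Make the region THP-eligible when guest THP is set to "madvise". */
	madvise(buf, MEMSIZE, MADV_HUGEPAGE);
	return buf;
}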

The current state of my v13:
I still have to look into Michal's suggestion of using the page-isolation APIs
instead of isolating pages manually. However, I believe that at this moment our
objective is to decide which approach to proceed with, which is why I decided to
post these numbers based on small changes on top of v12 instead of posting a new
series.

Following are the changes which I have made on top of my v12:

page_reporting.h change:

page_reporting.c change:
@@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config
                /* Process only if the page is still online */
                page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
+               if (!page || !PageBuddy(page)) {
+                       clear_bit(setbit, zone->bitmap);
+                       atomic_dec(&zone->free_pages);
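
For clarity, a hedged reconstruction of the full scanning-loop change. Field
names such as base_pfn and nbits are paraphrased from my series and the loop
body is abbreviated, so treat this as a sketch rather than the literal v12
code:

	for_each_set_bit(setbit, zone->bitmap, zone->nbits) {
		/* Process only if the page is still online. */
		page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
					  zone->base_pfn);
		/*
		 * Skip pages that have already been allocated (or taken
		 * offline) without touching zone->lock; just drop the bit
		 * and the free-page count.
		 */
		if (!page || !PageBuddy(page)) {
			clear_bit(setbit, zone->bitmap);
			atomic_dec(&zone->free_pages);
			continue;
		}

		spin_lock_irq(&zone->lock);
		/* ... re-check under the lock, isolate and report the page ... */
		spin_unlock_irq(&zone->lock);
	}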

@Alexander in case you decide to give it a try and find different results,
please do let me know.

[1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@redhat.com/

