Re: [tip:x86/platform] x86/hyper-v: Use hypercall for remote TLB flush

From: Vitaly Kuznetsov
Date: Wed Aug 16 2017 - 12:42:56 EST


Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> writes:

> Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
>
>> On Fri, Aug 11, 2017 at 09:16:29AM -0700, Linus Torvalds wrote:
>>> On Fri, Aug 11, 2017 at 2:03 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>> >
>>> > I'm sure we talked about using HAVE_RCU_TABLE_FREE for x86 (and yes that
>>> > would make it work again), but this was some years ago and I cannot
>>> > readily find those emails.
>>>
>>> I think the only time we really talked about HAVE_RCU_TABLE_FREE for
>>> x86 (at least that I was cc'd on) was not because of RCU freeing, but
>>> because we just wanted to use the generic page table lookup code on
>>> x86 *despite* not using RCU freeing.
>>>
>>> And we just ended up renaming HAVE_GENERIC_RCU_GUP as HAVE_GENERIC_GUP.
>>>
>>> There was only passing mention of maybe making x86 use RCU, but the
>>> discussion was really about why the IF flag meant that x86 didn't need
>>> to, iirc.
>>>
>>> I don't recall us ever discussing *really* making x86 use RCU.
>>
>> Google finds me this:
>>
>> https://lwn.net/Articles/500188/
>>
>> Which includes:
>>
>> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg72918.html
>>
>> which does as was suggested here, selects HAVE_RCU_TABLE_FREE for
>> PARAVIRT_TLB_FLUSH.
>>
>> But yes, this is very much virt specific nonsense, native would never
>> need this.
>
> In case we decide to go HAVE_RCU_TABLE_FREE for all PARAVIRT-enabled
> kernels (as it seems to be the easiest/fastest way to fix Xen PV) - what
> do you think about the required testing? Any suggestion for a
> specifically crafted micro benchmark in addition to standard
> ebizzy/kernbench/...?

In the meantime I tested HAVE_RCU_TABLE_FREE with kernbench on bare
metal with PARAVIRT enabled in the config (the enablement patch I used
is attached; I know it breaks other architectures). The results are:

6-CPU host:

Average Half load -j 3 Run (std deviation):
                    CURRENT                HAVE_RCU_TABLE_FREE
                    =======                ===================
Elapsed Time        400.498 (0.179679)     399.909 (0.162853)
User Time           1098.72 (0.278536)     1097.59 (0.283894)
System Time         100.301 (0.201629)     99.736 (0.196254)
Percent CPU         299 (0)                299 (0)
Context Switches    5774.1 (69.2121)       5744.4 (79.4162)
Sleeps              87621.2 (78.1093)      87586.1 (99.7079)

Average Optimal load -j 24 Run (std deviation):
                    CURRENT                HAVE_RCU_TABLE_FREE
                    =======                ===================
Elapsed Time        219.03 (0.652534)      218.959 (0.598674)
User Time           1119.51 (21.3284)      1118.81 (21.7793)
System Time         100.499 (0.389308)     99.8335 (0.251423)
Percent CPU         432.5 (136.974)        432.45 (136.922)
Context Switches    81827.4 (78029.5)      81818.5 (78051)
Sleeps              97124.8 (9822.4)       97207.9 (9955.04)

16-CPU host:

Average Half load -j 8 Run (std deviation):
                    CURRENT                HAVE_RCU_TABLE_FREE
                    =======                ===================
Elapsed Time        213.538 (3.7891)       212.5 (3.10939)
User Time           1306.4 (1.83399)       1307.65 (1.01364)
System Time         194.59 (0.864378)      195.478 (0.794588)
Percent CPU         702.6 (13.5388)        707 (11.1131)
Context Switches    21189.2 (1199.4)       21288.2 (552.388)
Sleeps              89390.2 (482.325)      89677 (277.06)

Average Optimal load -j 64 Run (std deviation):
                    CURRENT                HAVE_RCU_TABLE_FREE
                    =======                ===================
Elapsed Time        137.866 (0.787928)     138.438 (0.218792)
User Time           1488.92 (192.399)      1489.92 (192.135)
System Time         234.981 (42.5806)      236.09 (42.8138)
Percent CPU         1057.1 (373.826)       1057.1 (369.114)
Context Switches    187514 (175324)        187358 (175060)
Sleeps              112633 (24535.5)       111743 (23297.6)

As you can see, there's no notable difference. I'll think of a
microbenchmark though.
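
Something along these lines might do as a starting point: each thread
keeps mapping, touching and unmapping an anonymous region, so page
tables are constantly populated and freed while the sibling threads
keep the mm active on other CPUs. Just a rough sketch, the thread
count, mapping size and iteration count below are arbitrary:

/* Rough sketch: every thread repeatedly maps an anonymous region,
 * touches each page (populating page tables) and unmaps it again,
 * so page tables are freed and remote TLB flushes are sent to the
 * CPUs running the sibling threads. Build with: gcc -O2 -pthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define NTHREADS	8
#define MAP_SIZE	(64UL << 20)	/* 64 MB per iteration */
#define ITERATIONS	1000

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERATIONS; i++) {
		char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return NULL;
		for (unsigned long off = 0; off < MAP_SIZE; off += 4096)
			p[off] = 1;	/* populate page tables */
		munmap(p, MAP_SIZE);	/* free them and flush TLBs */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];
	struct timespec s, e;

	clock_gettime(CLOCK_MONOTONIC, &s);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &e);

	printf("elapsed: %.2f s\n", (e.tv_sec - s.tv_sec) +
	       (e.tv_nsec - s.tv_nsec) / 1e9);
	return 0;
}

Comparing elapsed time (and perhaps the TLB shootdown counts from
/proc/interrupts) with and without HAVE_RCU_TABLE_FREE should show
whether freeing page tables through RCU costs anything on bare metal.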

>
> Additionally, I see another option for us: enable 'rcu table free' on
> boot (e.g. by taking tlb_remove_table to pv_ops and doing boot-time
> patching for it) so bare metal and other hypervisors are not affected
> by the change.

It seems there's no need for that, and we can keep things simple and
just select HAVE_RCU_TABLE_FREE for all PARAVIRT-enabled kernels...
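
Just to spell out what "simple" would look like: select
HAVE_RCU_TABLE_FREE if PARAVIRT in arch/x86/Kconfig and switch the x86
page-table freeing helpers from tlb_remove_page() to
tlb_remove_table(). A sketch of the idea only (the attached patch is
what I actually tested):

/* arch/x86/mm/pgtable.c -- the pmd/pud helpers would get the same
 * treatment. With HAVE_RCU_TABLE_FREE the table page is batched and
 * only freed after an RCU grace period, so lockless GUP walkers stay
 * safe even when the TLB flush is a hypercall and no IPI is sent. */
void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
{
	pgtable_page_dtor(pte);
	paravirt_release_pte(page_to_pfn(pte));
	tlb_remove_table(tlb, pte);	/* was: tlb_remove_page(tlb, pte) */
}

A !PARAVIRT build would also need a trivial tlb_remove_table()
fallback that just maps to tlb_remove_page().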

--
Vitaly