Re: [PATCH v7 2/8] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range

From: Alex Shi
Date: Thu May 24 2012 - 02:43:33 EST


On 05/23/2012 10:51 PM, Jan Beulich wrote:

>>>> On 23.05.12 at 16:15, Alex Shi <alex.shi@xxxxxxxxx> wrote:
>> @@ -1269,11 +1270,11 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
>> cpumask_and(to_cpumask(args->mask), cpus, cpu_online_mask);
>> cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask));
>>
>> - if (va == TLB_FLUSH_ALL) {
>> + if (start == TLB_FLUSH_ALL) {
>> args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
>> } else {
>> args->op.cmd = MMUEXT_INVLPG_MULTI;
>> - args->op.arg1.linear_addr = va;
>> + args->op.arg1.linear_addr = start;
>> }
>>
>> MULTI_mmuext_op(mcs.mc, &args->op, 1, NULL, DOMID_SELF);
>
> Unless there is an implicit assumption that 'start' and 'end' are on
> the same page (which I doubt, as then it would be pointless to
> add 'end' here), this one is definitely wrong - you'd either have
> to issue multiple MMUEXT_INVLPG_MULTI-s, or you'd have to
> also use MMUEXT_TLB_FLUSH_MULTI for the multi-page case.

Thanks for the comments!
So, would the following change be safer for PV?

- if (va == TLB_FLUSH_ALL) {
- args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
- } else {
- args->op.cmd = MMUEXT_INVLPG_MULTI;
- args->op.arg1.linear_addr = va;
- }
+ args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
+ if (start != TLB_FLUSH_ALL)
+ args->op.arg1.linear_addr = start;

MULTI_mmuext_op(mcs.mc, &args->op, 1, NULL, DOMID_SELF);

Do you know why the old patch passed Yongjie's performance testing?
https://lkml.org/lkml/2012/5/4/20

---