Re: [PATCH v4 3/7] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range

From: Alex Shi
Date: Thu May 10 2012 - 04:52:22 EST


>

> Ok, question:
>
> we're comparing TLB size with the amount of pages mapped by this mm
> struct. AFAICT, that doesn't mean that all those mapped pages do have
> respective entries in the TLB, does it?
>
> If so, then the actual entries number is kinda inaccurate, no? We don't
> really know how many TLB entries actually belong to this mm struct. Or am I
> missing something?


No, we cannot know the exact number of TLB entries used by the mm. But
usually, by the time a process is doing mprotect/munmap etc. system calls,
it has already touched much of its memory and filled lots of TLB entries.

This point is implicitly taken into account in the balance-point
calculation; check the following equation:
(512 - X) * 100ns (assumed TLB refill cost) =
X (TLB flush entries) * 100ns (assumed invlpg cost)
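
Just to spell out the break-even arithmetic (my own back-of-the-envelope
working, taking the 512-entry TLB and the two assumed 100 ns costs above at
face value):

\[
(512 - X) \times 100\,\text{ns} = X \times 100\,\text{ns}
\quad\Longrightarrow\quad X = \frac{512}{2} = 256
\]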

The X value we got is far lower than the theoretical value. That means
either the remaining TLB entries are not that many, or the TLB refill cost
is much lower than assumed due to the hardware prefetcher.

>

>> + if ((end - start)/PAGE_SIZE > act_entries/FLUSHALL_BAR)
>
> Oh, in a later patch you do this:
>
> + if ((end - start) >> PAGE_SHIFT >
> + act_entries >> tlb_flushall_factor)
>
> and the tlb_flushall_factor factor is 5 or 6 but the division by 16
> (FLUSHALL_BAR) was a >> 4. So, is this to assume that it is not 16 but
> actually more than 32 or even 64 TLB entries where a full TLB flush
> makes sense and one-by-one if less?


Yes, the FLUSHALL_BAR is just a guessed value here. Taking your advice, I
modified the macro benchmark a little and got a more sensible value in a
later patch.
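
For illustration only, here is a minimal C sketch of the kind of cut-off
check being discussed. The names (tlb_entries, tlb_flushall_factor,
prefer_full_flush) are placeholders of mine, not the identifiers from the
actual patch:

/*
 * Illustrative sketch only -- not the patch code.  tlb_entries and
 * tlb_flushall_factor stand in for whatever the real patch uses to
 * describe the TLB size and the benchmark-tuned shift.
 */
#include <stdbool.h>

#define PAGE_SHIFT	12

unsigned int tlb_entries = 512;		/* assumed dTLB entry count */
unsigned int tlb_flushall_factor = 6;	/* tuned via the benchmark  */

/*
 * Return true when the range covers so many pages that one full TLB
 * flush is expected to be cheaper than flushing page by page with
 * invlpg.
 */
bool prefer_full_flush(unsigned long start, unsigned long end)
{
	unsigned long pages = (end - start) >> PAGE_SHIFT;

	return pages > (tlb_entries >> tlb_flushall_factor);
}

With these numbers the check stays with one-by-one flushing for ranges of
up to 512 >> 6 = 8 pages and falls back to a full flush beyond that.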

BTW, I found an 8% performance increase for kbuild on SNB EP, averaged over
multiple test runs, while the run-to-run variation is up to 15%.