Re: [PATCH 2/6] x86: mm: rip out complicated, out-of-date, buggy TLB flushing

From: Mel Gorman
Date: Thu Apr 24 2014 - 14:00:43 EST


On Thu, Apr 24, 2014 at 09:58:11AM -0700, Dave Hansen wrote:
> On 04/24/2014 01:45 AM, Mel Gorman wrote:
> >> +/*
> >> + * See Documentation/x86/tlb.txt for details. We choose 33
> >> + * because it is large enough to cover the vast majority (at
> >> + * least 95%) of allocations, and is small enough that we are
> >> + * confident it will not cause too much overhead. Each single
> >> + * flush is about 100 cycles, so this caps the maximum overhead
> >> + * at _about_ 3,000 cycles.
> >> + */
> >> +/* in units of pages */
> >> +unsigned long tlb_single_page_flush_ceiling = 1;
> >> +
> >
> > This comment is premature. The documentation file does not exist yet and
> > 33 means nothing yet. Out of curiosity though, how confident are you
> > that a TLB flush is generally 100 cycles across different generations
> > and manufacturers of CPUs? I'm not suggesting you change it or auto-tune
> > it, am just curious.
>
> Yeah, the comment belongs in the later patch where I set it to 33.
>
> I looked at this on the last few generations of Intel CPUs. "100
> cycles" was a very general statement, and not precise at all. My laptop
> averages out to 113 cycles overall, but the flushes of 25 pages averaged
> 96 cycles/page while the flushes of 2 pages averaged 219 cycles/page.
>
> Those cycles include some costs from the instrumentation as well.
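
For what it's worth, the policy I understand the ceiling to implement is
roughly the following (a sketch only, not your patch; the function name is
made up and the helpers are just the existing ones from asm/tlbflush.h):

	/*
	 * Sketch of the intended policy: flush individual pages while the
	 * range is small, fall back to a full TLB flush once the number of
	 * single-page flushes would cost more than a full flush plus refill.
	 */
	static void flush_range_sketch(unsigned long start, unsigned long end)
	{
		unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
		unsigned long addr;

		if (nr_pages > tlb_single_page_flush_ceiling) {
			local_flush_tlb();	/* one full flush instead */
			return;
		}

		/* ~100 cycles per INVLPG, so at most ~33 * ~100 cycles at the ceiling */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			__flush_tlb_single(addr);
	}

So the ceiling bounds the worst case of the single-page path, and your
measurements above are what justify where it is placed.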
>
> I did not test on other CPU manufacturers, but this should be pretty
> easy to reproduce. I'm happy to help folks re-run it on other hardware.
>
> I also believe with the modalias stuff we've got in sysfs for the CPU
> objects we can do this in the future with udev rules instead of
> hard-coding it in the kernel.
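
The modalias route would be nice. Something along these lines could do it
from userspace once the ceiling is exported as a writable file (the rule
below is purely illustrative; the modalias match string and the path are
made up here, since this series does not name one yet):

	# /etc/udev/rules.d/99-tlb-flush-ceiling.rules (hypothetical)
	# On CPU add, pick a ceiling for a particular x86 vendor/family.
	ACTION=="add", SUBSYSTEM=="cpu", ENV{MODALIAS}=="cpu:type:x86,ven0000fam0006*", \
		RUN+="/bin/sh -c 'echo 33 > /sys/kernel/debug/x86/tlb_single_page_flush_ceiling'"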
>

You convinced me. Regardless of whether you move the comment or update
the changelog;

Acked-by: Mel Gorman <mgorman@xxxxxxx>

--
Mel Gorman
SUSE Labs