Re: [PATCH] mm: cache largest vma

From: Michel Lespinasse
Date: Mon Nov 11 2013 - 02:43:54 EST


On Sun, Nov 10, 2013 at 8:12 PM, Davidlohr Bueso <davidlohr@xxxxxx> wrote:
> 2) Oracle Data mining (4K pages)
> +------------------------+----------+------------------+---------+
> | mmap_cache type | hit-rate | cycles (billion) | stddev |
> +------------------------+----------+------------------+---------+
> | no mmap_cache | - | 63.35 | 0.20207 |
> | current mmap_cache | 65.66% | 19.55 | 0.35019 |
> | mmap_cache+largest VMA | 71.53% | 15.84 | 0.26764 |
> | 4 element hash table | 70.75% | 15.90 | 0.25586 |
> | per-thread mmap_cache | 86.42% | 11.57 | 0.29462 |
> +------------------------+----------+------------------+---------+
>
> This workload sure makes the point of how much we can benefit from caching
> the vma; otherwise find_vma() can cost more than 220% extra cycles. We
> clearly win here by having a per-thread cache instead of a per-address-space
> one. I also tried the same workload with 2Mb hugepages and the results
> are much closer to the kernel build, but with the per-thread vma cache
> still winning over the rest of the alternatives.
>
> All in all I think that we should probably have a per-thread vma cache.
> Please let me know if there is some other workload you'd like me to try
> out. If folks agree then I can cleanup the patch and send it out.

Per-thread cache sounds interesting - with per-mm caches there is a
real risk that some modern threaded apps pay the cost of cache updates
without seeing much of the benefit. However, how do you cheaply handle
invalidations for the per-thread cache?

If you have a nice, simple scheme for invalidations, I could see a
per-thread LRU cache working well.
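One cheap invalidation scheme (a hypothetical sketch, not the patch under
discussion - the struct and function names below are made up for
illustration) is a per-mm sequence counter: every mapping change bumps the
counter, and each thread's cache records the counter value it was filled
under, so stale entries are detected on lookup without ever walking the
per-thread caches at invalidation time:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for vm_area_struct / mm_struct. */
struct vma { unsigned long start, end; };

struct mm {
	uint32_t seqnum;	/* bumped on every mapping change */
	/* ... VMA rbtree would live here ... */
};

/* One of these per thread. */
struct thread_vma_cache {
	uint32_t seqnum;	/* mm->seqnum at fill time */
	struct vma *vma;	/* cached lookup result */
};

/* Invalidate every thread's cache in O(1): just bump the counter. */
static void mm_invalidate_caches(struct mm *mm)
{
	mm->seqnum++;
}

/* A stale sequence number is treated as a miss. */
static struct vma *cache_lookup(struct mm *mm, struct thread_vma_cache *c,
				unsigned long addr)
{
	if (c->seqnum == mm->seqnum && c->vma &&
	    addr >= c->vma->start && addr < c->vma->end)
		return c->vma;
	return NULL;
}

static void cache_fill(struct mm *mm, struct thread_vma_cache *c,
		       struct vma *vma)
{
	c->seqnum = mm->seqnum;
	c->vma = vma;
}
```

The point is that mmap/munmap/mremap pay one increment regardless of thread
count, while the per-lookup overhead is a single extra compare; counter
wraparound would need handling in a real implementation.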

That said, the difficulty with this kind of measurement
(instrumenting code to fish out the cost of a particular function) is
that it would be easy to lose cycles somewhere else - for example in
keeping the cache up to date - and miss that cost in the instrumented
measurement.

--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.