Re: [RFC][PATCH 4/6] arm, mm: Convert arm to generic tlb

From: Peter Zijlstra
Date: Thu May 17 2012 - 13:12:09 EST


On Thu, 2012-05-17 at 18:01 +0100, Catalin Marinas wrote:
> > So the RCU code came from ppc in commit
> > 267239116987d64850ad2037d8e0f3071dc3b5ce, which has similar behaviour.
> > Also I suspect the mm_users < 2 test will be incorrect for ARM since
> > even the one user can be concurrent with your speculation engine.
>
> That's correct.

(I'm not sending this... really :-)

---
commit cd94154cc6a28dd9dc271042c1a59c08d26da886
Author: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Date: Wed Apr 11 14:28:07 2012 +0200

[S390] fix tlb flushing for page table pages

Git commit 36409f6353fc2d7b6516e631415f938eadd92ffa "use generic RCU
page-table freeing code" introduced a tlb flushing bug. Partially revert
the above git commit and go back to s390 specific page table flush code.

For s390 the TLB can contain three types of entries, "normal" TLB
page-table entries, TLB combined region-and-segment-table (CRST) entries
and real-space entries. Linux does not use real-space entries which
leaves normal TLB entries and CRST entries. The CRST entries are
intermediate steps in the page-table translation called translation paths.
For example, a 4K page access in a three-level page table setup will
create two CRST TLB entries and one page-table TLB entry. The advantage
of that approach is that a page access next to the previous one can reuse
the CRST entries and needs just a single read from memory to create the
page-table TLB entry. The disadvantage is that the TLB flushing rules are
more complicated: before any page table may be freed, the TLB needs to be
flushed.

In short: the generic RCU page-table freeing code is incorrect for the
CRST entries; in particular, the check for mm_users < 2 is troublesome.

This is applicable to 3.0+ kernels.

Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
