Inefficient TLB flushing

From: Jack Steiner
Date: Wed Nov 12 2003 - 15:02:50 EST




Something appears broken in TLB flushing on IA64 (and possibly other
architectures). Functionally it works, but performance is bad on
systems with large cpu counts.

The result is that TLB flushing in exit_mmap() is frequently being done via
IPIs to all cpus rather than with a "ptc" instruction or with a new context.



Take a look at exit_mmap() (in mm/mmap.c):

void exit_mmap(struct mm_struct *mm)
{
(1)	tlb = tlb_gather_mmu(mm, 1);
	unmap_vmas(&tlb, ...
	...
(2)	tlb_finish_mmu(tlb, 0, MM_VM_SIZE(mm));
	...
}


static inline void
tlb_finish_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
	...
	ia64_tlb_flush_mmu(tlb, start, end);
	...
}


static inline void
ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
	...
(3)	if (tlb->fullmm) {
		flush_tlb_mm(tlb->mm);
(4)	} else if (unlikely (end - start >= 1TB ...
		/* This should be very rare and is not worth optimizing for. */
(5)		flush_tlb_all();
	...

flush_tlb_all() flushes by sending an IPI to every cpu.
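For contrast, the two paths look roughly like this on IA64 (paraphrased from memory of
the 2.6 headers and arch/ia64/kernel/smp.c; abbreviated, so treat the details as
approximate):

static inline void
flush_tlb_mm (struct mm_struct *mm)
{
	/* Cheap: forget the old region ID and allocate a fresh context. */
	mm->context = 0;
	if (mm == current->active_mm) {
		get_new_mmu_context(mm);
		activate_context(mm);
	}
}

void
smp_flush_tlb_all (void)		/* flush_tlb_all() on SMP */
{
	/* Expensive: interrupt every cpu and purge its entire TLB. */
	on_each_cpu((void (*)(void *))local_flush_tlb_all, NULL, 1, 1);
}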


<start> and <end> are passed from exit_mmap() at line (2) as 0..MM_VM_SIZE(mm), which
on IA64 is 0..0xa000000000000000. So if tlb->fullmm is false, the expression at (4) is
always TRUE and ia64_tlb_flush_mmu() calls flush_tlb_all(), which uses IPIs to
do the TLB flushing.
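Spelling the comparison at (4) out with those numbers:

	end - start = 0xa000000000000000 - 0 = 0xa000000000000000
	1TB         =          0x10000000000

so whenever the fullmm==0 branch is reached on this path, the range test succeeds and
flush_tlb_all() at (5) is used.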


Normally, tlb->fullmm at line (3) is 1, since it is initialized to 1 at line (1). However,
in unmap_vmas(), if a reschedule is pending (need_resched()) during VMA teardown, the
pending flush is completed and the mmu_gather is reinitialized with tlb_gather_mmu(mm, 0).
When the call to tlb_finish_mmu() is then made at line (2), the TLB flushing is done with IPIs.
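For reference, the second argument to tlb_gather_mmu() is what ends up in tlb->fullmm
(abbreviated from the generic mmu_gather code; the IA64 version is equivalent for this
purpose, but I have not quoted it exactly):

static inline struct mmu_gather *
tlb_gather_mmu (struct mm_struct *mm, unsigned int full_mm_flush)
{
	struct mmu_gather *tlb = &per_cpu(mmu_gathers, smp_processor_id());

	tlb->mm = mm;
	tlb->fullmm = full_mm_flush;	/* passing 0 here loses the "whole mm" hint */
	...
	return tlb;
}

So after the reinitialization in unmap_vmas(), the final tlb_finish_mmu() at (2) sees
fullmm == 0 even though the entire address space is being torn down.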


Does this analysis look correct? AFAICT, this bug (inefficiency) may apply to other
architectures as well, although I can't assess its performance impact there.


Here is the patch that I am currently testing:


--- /usr/tmp/TmpDir.19957-0/linux/mm/memory.c_1.79	Wed Nov 12 13:56:25 2003
+++ linux/mm/memory.c	Wed Nov 12 12:57:25 2003
@@ -574,9 +574,10 @@
 			if ((long)zap_bytes > 0)
 				continue;
 			if (need_resched()) {
+				int fullmm = (*tlbp)->fullmm;
 				tlb_finish_mmu(*tlbp, tlb_start, start);
 				cond_resched_lock(&mm->page_table_lock);
-				*tlbp = tlb_gather_mmu(mm, 0);
+				*tlbp = tlb_gather_mmu(mm, fullmm);
 				tlb_start_valid = 0;
 			}
 			zap_bytes = ZAP_BLOCK_SIZE;
--
Thanks

Jack Steiner (steiner@xxxxxxx) 651-683-5302
Principal Engineer SGI - Silicon Graphics, Inc.

