RE: Hugetlb demand paging for -mm tree

From: Chen, Kenneth W
Date: Mon Aug 09 2004 - 14:03:47 EST


William Lee Irwin III wrote on Saturday, August 07, 2004 1:36 AM
> On Thu, Aug 05, 2004 at 06:39:59AM -0700, Chen, Kenneth W wrote:
> > +static void scrub_one_pmd(pmd_t * pmd)
> > +{
> > +	struct page *page;
> > +
> > +	if (pmd && !pmd_none(*pmd) && !pmd_huge(*pmd)) {
> > +		page = pmd_page(*pmd);
> > +		pmd_clear(pmd);
> > +		dec_page_state(nr_page_table_pages);
> > +		page_cache_release(page);
> > +	}
> > +}
>
> This is needed because we're only freeing pagetables at pgd granularity
> at munmap() -time. It makes more sense to refine it to pmd granularity
> instead of this cleanup pass, as it's a memory leak beyond just hugetlb
> data structure corruption.
>

That would be nice and would ease the pain on x86. OTOH, leaving the pte
pages persistent for now may help mmap/munmap-intensive workloads, since
unmap_region() frees pte allocations only at pgd granularity.
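
To illustrate the kind of workload I mean, here is a userspace sketch (not
part of the patch) of a map/touch/unmap loop; with pgd-granularity freeing,
iterations after the first can reuse the still-allocated pte pages instead
of re-allocating them on every fault:

	/*
	 * Illustrative only: a mmap/munmap-intensive pattern.  If munmap()
	 * frees pte pages only at pgd granularity, the next iteration's
	 * faults find the pte pages still in place.
	 */
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 1UL << 22;	/* 4MB */
		int i;

		for (i = 0; i < 10000; i++) {
			char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p == MAP_FAILED)
				return 1;
			memset(p, 0, len);	/* fault in every page */
			munmap(p, len);
		}
		return 0;
	}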


> I wonder why this bugfix was rolled into the demand paging patch instead
> of shipped separately. And for that matter, this fix applies to mainline.

In mainline, the bug fix went into hugetlb_prefault() for the prefaulting
case. It went into that function rather than into huge_pte_alloc() and
huge_pte_offset() to avoid scrubbing on every pte lookup. One thing we can
do for the demand paging case is to scrub at the time the hugetlb vma is
first mmap'ed, so the penalty is paid upfront instead of at every pte
allocation/lookup.
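
As a rough sketch of that idea (the helper name, call site, and locking are
illustrative, not the actual patch), the mmap path could walk the vma's
range once and apply scrub_one_pmd() to each pmd slot before any hugetlb
fault can happen:

	/*
	 * Sketch only: scrub leftover normal pmd entries once when the
	 * hugetlb vma is set up.  Assumes a 3-level page table and the
	 * scrub_one_pmd() quoted above; addr/end are hpage-aligned and
	 * therefore pmd-aligned.  Would be called from something like
	 * hugetlbfs_file_mmap() with mmap_sem held for writing.
	 */
	static void scrub_hugetlb_range(struct mm_struct *mm,
					unsigned long addr, unsigned long end)
	{
		pgd_t *pgd;
		pmd_t *pmd;

		spin_lock(&mm->page_table_lock);
		for (; addr < end; addr += PMD_SIZE) {
			pgd = pgd_offset(mm, addr);
			if (pgd_none(*pgd))
				continue;
			pmd = pmd_offset(pgd, addr);
			scrub_one_pmd(pmd);
		}
		spin_unlock(&mm->page_table_lock);
	}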

