Re: [PATCH] [BUGFIX] mm: hugepages can cause negative commitlimit

From: Russ Anderson
Date: Fri May 27 2011 - 18:22:32 EST


On Thu, May 26, 2011 at 06:07:53PM -0300, Rafael Aquini wrote:
> On Fri, May 20, 2011 at 07:30:32PM -0300, Rafael Aquini wrote:
> > On Fri, May 20, 2011 at 01:04:11PM -0700, Andrew Morton wrote:
> > > On Thu, 19 May 2011 17:11:01 -0500
> > > Russ Anderson <rja@xxxxxxx> wrote:
> > >
> > > > OK, I see your point. The root problem is that hugepages allocated at boot
> > > > are subtracted from totalram_pages, but hugepages allocated at run time are not.
> > > > Correct me if I've misstated it or if there are other conditions.
> > > >
> > > > By "allocated at run time" I mean "echo 1 > /proc/sys/vm/nr_hugepages".
> > > > That allocation will not change totalram_pages but will change
> > > > hugetlb_total_pages().
> > > >
> > > > How best to fix this inconsistency? Should totalram_pages include or exclude
> > > > hugepages? What are the implications?
> > >
> > > The problem is that hugetlb_total_pages() is trying to account for two
> > > different things, while totalram_pages accounts for only one of those
> > > things, yes?
> > >
> > > One fix would be to stop accounting for huge pages in totalram_pages
> > > altogether. That might break other things so careful checking would be
> > > needed.
> > >
> > > Or we stop accounting for the boot-time allocated huge pages in
> > > hugetlb_total_pages(). Split the two things apart altogether and
> > > account for boot-time allocated and runtime-allocated pages separately. This
> > > sounds saner to me - it reflects what's actually happening in the kernel.
> >
> > Perhaps we can just reinstate the number of pages "stolen" by the early boot
> > allocation later, when hugetlb_init() calls gather_bootmem_prealloc():
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 8ee3bd8..d606c9c 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1111,6 +1111,7 @@ static void __init gather_bootmem_prealloc(void)
> > 		WARN_ON(page_count(page) != 1);
> > 		prep_compound_huge_page(page, h->order);
> > 		prep_new_huge_page(h, page, page_to_nid(page));
> > +		totalram_pages += 1 << h->order;
> > 	}
> > }
>
> Howdy Russ,
>
> Were you able to confirm whether that proposed change fixes the issue you reported?

Sorry, I have been distracted. I will get to it shortly.

> Although I've tested it with usual-size hugepages and it did not mess things up,
> I'm not able to test it with GB hugepages, as I do not have any processor with the "pdpe1gb" flag available.
>
> Thanks in advance!
> Cheers!
> --
> Rafael Aquini <aquini@xxxxxxxxx>

--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@xxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/