RE: Hugepages demand paging V1 [4/4]: Numa patch

From: Chen, Kenneth W
Date: Mon Oct 25 2004 - 16:16:34 EST


Christoph Lameter wrote on Friday, October 22, 2004 12:37 PM
> > On Thu, Oct 21, 2004 at 09:58:54PM -0700, Christoph Lameter wrote:
> > > Changelog
> > > * NUMA enhancements (rough first implementation)
> > > * Do not begin search for huge page memory at the first node
> > > but start at the current node and then search previous and
> > > the following nodes for memory.
> > > Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
> >
> > dequeue_huge_page() seems to want a nodemask, not a vma, though I
> > suppose it's not particularly pressing.
>
> How about this variation following __alloc_pages:
>
> @@ -32,14 +32,17 @@
> + struct zonelist *zonelist = NODE_DATA(nid)->node_zonelists;
> + struct zone **zones = zonelist->zones;
> + struct zone *z;
> + int i;
> +
> + for(i=0; (z = zones[i])!= NULL; i++) {
> + nid = z->zone_pgdat->node_id;
> + if (list_empty(&hugepage_freelists[node_id]))
> + break;
> }

There must be typos in that if statement. Two fatal errors here: you
don't really mean to break out of the for loop when there are no
hugetlb pages on that node, do you? The test needs to be inverted so
the loop stops at the first node that does have a free page. Also,
the variable used to index the freelist is wrong; it should be nid,
otherwise this code won't even compile. That line should be:

+ if (!list_empty(&hugepage_freelists[nid]))


Also, since this is generic code, we should scan the zonelist that
includes ZONE_HIGHMEM. Otherwise this will likely break x86 NUMA
machines, where most memory (and therefore the hugetlb pool, which is
allocated GFP_HIGHUSER) sits in highmem.
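
Putting both fixes together, the function could look something like
the sketch below. This is only a sketch against my reading of the 2.6
zonelist layout: the (GFP_HIGHUSER & GFP_ZONEMASK) index is my
assumption for picking the highmem-capable zonelist, and
hugepage_freelists / free_huge_pages are the existing hugetlb.c
bookkeeping:

static struct page *dequeue_huge_page(void)
{
	int nid = numa_node_id();
	/* Assumed index: pick the zonelist that covers ZONE_HIGHMEM,
	 * the same way alloc_pages_node() would for a GFP_HIGHUSER
	 * allocation. */
	struct zonelist *zonelist = NODE_DATA(nid)->node_zonelists +
					(GFP_HIGHUSER & GFP_ZONEMASK);
	struct zone **zones = zonelist->zones;
	struct zone *z;
	struct page *page = NULL;
	int i;

	/* The zonelist is ordered to start at the local node, so this
	 * keeps the start-at-current-node behavior of the patch.
	 * Stop at the first node with a free huge page. */
	for (i = 0; (z = zones[i]) != NULL; i++) {
		nid = z->zone_pgdat->node_id;
		if (!list_empty(&hugepage_freelists[nid]))
			break;
	}
	if (z) {
		page = list_entry(hugepage_freelists[nid].next,
				  struct page, lru);
		list_del(&page->lru);
		free_huge_pages--;
	}
	return page;
}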

- Ken

