Re: [BUGFIX][PATCH] Fix usemap initialization v3

From: KAMEZAWA Hiroyuki
Date: Sun Apr 27 2008 - 20:38:52 EST


On Sun, 27 Apr 2008 12:18:17 -0700
Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> >
> > ---
> > mm/page_alloc.c | 14 ++++++++++++--
> > 1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > Index: linux-2.6.25/mm/page_alloc.c
> > ===================================================================
> > --- linux-2.6.25.orig/mm/page_alloc.c
> > +++ linux-2.6.25/mm/page_alloc.c
> > @@ -2518,7 +2518,9 @@ void __meminit memmap_init_zone(unsigned
> > struct page *page;
> > unsigned long end_pfn = start_pfn + size;
> > unsigned long pfn;
> > + struct zone *z;
> >
> > + z = &NODE_DATA(nid)->node_zones[zone];
> > for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> > /*
> > * There can be holes in boot-time mem_map[]s
> > @@ -2536,7 +2538,6 @@ void __meminit memmap_init_zone(unsigned
> > init_page_count(page);
> > reset_page_mapcount(page);
> > SetPageReserved(page);
> > -
> > /*
> > * Mark the block movable so that blocks are reserved for
> > * movable at startup. This will force kernel allocations
> > @@ -2545,8 +2546,15 @@ void __meminit memmap_init_zone(unsigned
> > * kernel allocations are made. Later some blocks near
> > * the start are marked MIGRATE_RESERVE by
> > * setup_zone_migrate_reserve()
> > + *
> > + * The pageblock bitmap is allocated for the zone's valid pfn
> > + * range, but the memmap may also cover invalid pages (for
> > + * alignment). Check here so that set_pageblock_migratetype()
> > + * is never called for a pfn outside the zone.
> > */
> > - if ((pfn & (pageblock_nr_pages-1)))
> > + if ((z->zone_start_pfn <= pfn)
> > + && (pfn < z->zone_start_pfn + z->spanned_pages)
> > + && !(pfn & (pageblock_nr_pages - 1)))
> > set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> >
> > INIT_LIST_HEAD(&page->lru);
> > @@ -4460,6 +4468,8 @@ void set_pageblock_flags_group(struct pa
> > pfn = page_to_pfn(page);
> > bitmap = get_pageblock_bitmap(zone, pfn);
> > bitidx = pfn_to_bitidx(zone, pfn);
> > + VM_BUG_ON(pfn < zone->zone_start_pfn);
> > + VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
> >
> > for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
> > if (flags & value)
>
> Do we think this is needed in 2.6.25.x?
>
Yes, I think so.

Thanks,
-Kame

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/