Re: [RFC PATCH v2 0/4] mm: reclaim zbud pages on migration and compaction
From: Dave Hansen
Date: Mon Aug 12 2013 - 12:48:21 EST
On 08/11/2013 07:25 PM, Minchan Kim wrote:
> +int set_pinned_page(struct pin_page_owner *owner,
> +		    struct page *page, void *private)
> +{
> +	struct pin_page_info *pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +
> +	if (!pinfo)
> +		return -ENOMEM;
> +
> +	INIT_HLIST_NODE(&pinfo->hlist);
> +	pinfo->owner = owner;
> +
> +	pinfo->pfn = page_to_pfn(page);
> +	pinfo->private = private;
> +
> +	spin_lock(&hash_lock);
> +	hash_add(pin_page_hash, &pinfo->hlist, pinfo->pfn);
> +	spin_unlock(&hash_lock);
> +
> +	SetPinnedPage(page);
> +	return 0;
> +}
I definitely agree that we're getting to the point where we need to look
at this more generically. We've got at least four use-cases that have a
need for deterministically relocating memory:
1. CMA (many sub use cases)
2. Memory hot-remove
3. Memory power management
4. Runtime hugetlb-GB page allocations
Whatever we do, it _should_ be good enough to largely let us replace
PG_slab with this new bit.