Re: [PATCH v5 2/5] allow mapping page-less memremaped areas into KVA

From: Matthew Wilcox
Date: Thu Aug 13 2015 - 13:35:59 EST


On Wed, Aug 12, 2015 at 11:01:09PM -0400, Dan Williams wrote:
> +static inline __pfn_t page_to_pfn_t(struct page *page)
> +{
> + __pfn_t pfn = { .val = page_to_pfn(page) << PAGE_SHIFT, };
> +
> + return pfn;
> +}

static inline __pfn_t page_to_pfn_t(struct page *page)
{
	__pfn_t __pfn;
	unsigned long pfn = page_to_pfn(page);

	/* make sure the pfn still fits once shifted into ->val */
	BUG_ON(pfn > (-1UL >> PFN_SHIFT));
	__pfn.val = pfn << PFN_SHIFT;

	return __pfn;
}

I have a problem with PFN_SHIFT being equal to PAGE_SHIFT. Consider a
32-bit kernel; you're asserting that no memory represented by a struct
page can have a physical address above 4GB.

You only need three bits for flags so far ... how about making PFN_SHIFT
be 6? That supports physical addresses up to 2^38 (256GB). That should
be enough, but hardware designers have done some strange things in the
past (I know that HP made PA-RISC hardware that can run 32-bit kernels
with memory between 64GB and 68GB, and they can't be the only strange
hardware people out there).
