Re: [PATCH 18 of 66] add pmd mangling functions to x86

From: Andrea Arcangeli
Date: Mon Nov 29 2010 - 12:01:11 EST


On Mon, Nov 29, 2010 at 10:23:11AM +0000, Mel Gorman wrote:
> > > > @@ -353,7 +353,7 @@ static inline unsigned long pmd_page_vad
> > > > * Currently stuck as a macro due to indirect forward reference to
> > > > * linux/mmzone.h's __section_mem_map_addr() definition:
> > > > */
> > > > -#define pmd_page(pmd) pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)
> > > > +#define pmd_page(pmd) pfn_to_page((pmd_val(pmd) & PTE_PFN_MASK) >> PAGE_SHIFT)
> > > >
> > >
> > > Why is it now necessary to use PTE_PFN_MASK?
> >
> > Just for the NX bit, that couldn't be set before the pmd could be
> > marked PSE.
> >
>
> Sorry, I still am missing something. PTE_PFN_MASK is this
>
> #define PTE_PFN_MASK ((pteval_t)PHYSICAL_PAGE_MASK)
> #define PHYSICAL_PAGE_MASK (((signed long)PAGE_MASK) & __PHYSICAL_MASK)
>
> I'm not seeing how PTE_PFN_MASK affects the NX bit (bit 63).

It simply clears it by ANDing with a mask whose high bits are zero:
NX is bit 63 of the pmd value, so after the >> PAGE_SHIFT it would
otherwise land at bit 51 and remain erroneously set in the pfn passed
to pfn_to_page.

Clearing bit 63 wasn't needed before because NX couldn't be set on a
non-huge pmd.