Re: [tip:x86/security] x86: Add NX protection for kernel data

From: matthieu castet
Date: Sun Jan 23 2011 - 09:28:14 EST


Konrad Rzeszutek Wilk wrote:
On Fri, Jan 21, 2011 at 10:41:54PM +0100, matthieu castet wrote:
Konrad Rzeszutek Wilk wrote:
- * .data and .bss should always be writable.
+ * .data and .bss should always be writable, but xen won't like
+ * if we make page table rw (that live in .data or .bss)
*/
+#ifdef CONFIG_X86_32
if (within(address, (unsigned long)_sdata, (unsigned long)_edata) ||
- within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))
- pgprot_val(required) |= _PAGE_RW;
+ within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop)) {
+ unsigned int level;
+ if (lookup_address(address, &level) && (level != PG_LEVEL_4K))
+ pgprot_val(forbidden) |= _PAGE_RW;
+ }
+#endif
#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)

fyi, it does make it boot.
Hold it.. ccache is a wonderful tool but I think I've just "rebuilt" the
binaries with the .bss HPAGE_ALIGN alignment by mistake, so this path never
got taken.


Ok,

ATM I see the following solutions to the problem:
1) remove the data/bss check in static_protections; it was introduced by the NX patches (64edc8ed), but I am not sure it
is really needed anymore.
2) add ". = ALIGN(HPAGE_SIZE)" somewhere after the init section. But if we don't want it allocated in the image, we
should put it before .bss. And if we want it freed after init, we should put it before .init.end.
This means moving .smp_locks (and .data_nosave when x86 support is added) before the init section. I have no idea of the impact.
3) add some logic in arch/x86/xen/mmu.c that will ignore the RW page setting for page tables marked RO.
4) make static_protections take an old_prot argument, and only apply the RW .data/.bss requirement if the page is already RW.

If possible I will go for 1).

Sounds good. Just send me the patch and I will test it.

Ok, that gives you the attached patch.

I don't know if I should keep the printk or not.


Matthieu

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 8b830ca..eec93c5 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -256,7 +256,6 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
unsigned long pfn)
{
pgprot_t forbidden = __pgprot(0);
- pgprot_t required = __pgprot(0);

/*
* The BIOS area between 640k and 1Mb needs to be executable for
@@ -283,11 +282,13 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
__pa((unsigned long)__end_rodata) >> PAGE_SHIFT))
pgprot_val(forbidden) |= _PAGE_RW;
/*
- * .data and .bss should always be writable.
+ * .data and .bss should always be writable, but xen won't like
+ * if we make page table rw (that live in .data or .bss)
*/
if (within(address, (unsigned long)_sdata, (unsigned long)_edata) ||
within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))
- pgprot_val(required) |= _PAGE_RW;
+ if ((pgprot_val(prot) & _PAGE_RW) == 0)
+ printk(KERN_INFO "RO page for 0x%lx in bss/data.\n", address);

#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
/*
@@ -327,7 +328,6 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
#endif

prot = __pgprot(pgprot_val(prot) & ~pgprot_val(forbidden));
- prot = __pgprot(pgprot_val(prot) | pgprot_val(required));

return prot;
}