Re: [PATCH v3 2/3] x86: query dynamic DEBUG_PAGEALLOC setting

From: Christian Borntraeger
Date: Thu Jan 28 2016 - 04:48:47 EST


On 01/27/2016 11:17 PM, David Rientjes wrote:
> On Wed, 27 Jan 2016, Christian Borntraeger wrote:
>
>> We can use debug_pagealloc_enabled() to check whether the identity
>> mapping can be backed by 2MB pages. We can also add the state to the
>> dump_stack output.
>>
>> The patch does not touch the code for the 1GB pages, which ignored
>> CONFIG_DEBUG_PAGEALLOC. Do we need to fence this as well?
>>
>> Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
>> Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> ---
>> arch/x86/kernel/dumpstack.c | 5 ++---
>> arch/x86/mm/init.c | 7 ++++---
>> arch/x86/mm/pageattr.c | 14 ++++----------
>> 3 files changed, 10 insertions(+), 16 deletions(-)
>>
>> diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
>> index 9c30acf..32e5699 100644
>> --- a/arch/x86/kernel/dumpstack.c
>> +++ b/arch/x86/kernel/dumpstack.c
>> @@ -265,9 +265,8 @@ int __die(const char *str, struct pt_regs *regs, long err)
>> #ifdef CONFIG_SMP
>> printk("SMP ");
>> #endif
>> -#ifdef CONFIG_DEBUG_PAGEALLOC
>> - printk("DEBUG_PAGEALLOC ");
>> -#endif
>> + if (debug_pagealloc_enabled())
>> + printk("DEBUG_PAGEALLOC ");
>> #ifdef CONFIG_KASAN
>> printk("KASAN");
>> #endif
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index 493f541..39823fd 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -150,13 +150,14 @@ static int page_size_mask;
>>
>> static void __init probe_page_size_mask(void)
>> {
>> -#if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
>> +#if !defined(CONFIG_KMEMCHECK)
>> /*
>> - * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
>> + * For CONFIG_KMEMCHECK or pagealloc debugging, identity mapping will
>> + * use small pages.
>> * This will simplify cpa(), which otherwise needs to support splitting
>> * large pages into small in interrupt context, etc.
>> */
>> - if (cpu_has_pse)
>> + if (cpu_has_pse && !debug_pagealloc_enabled())
>> page_size_mask |= 1 << PG_LEVEL_2M;
>> #endif
>>
>
> I would have thought free_init_pages() would be modified to use
> debug_pagealloc_enabled() as well?


Indeed, I only touched the identity mapping and the dump_stack output.
The question is: do we really want to change free_init_pages() as well?
Unmapping during runtime causes significant overhead, but unmapping
after init imposes almost no runtime overhead. Of course, things get
murky now as to what is enabled and what is not.
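
For reference, a conversion along the lines David suggests would look
roughly like the sketch below. This is simplified from memory and is
not a diff against the actual arch/x86/mm/init.c; the helpers
(set_memory_np(), free_reserved_area(), ...) are the existing ones,
everything else is abbreviated:

----snip----
/* simplified sketch, not the exact arch/x86/mm/init.c code */
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
	/* alignment checks etc. omitted */

	if (debug_pagealloc_enabled()) {
		/*
		 * Keep the memory, just unmap it, so that buggy
		 * init-section accesses fault instead of silently
		 * hitting reused pages.
		 */
		set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
		return;
	}

	/* otherwise really give the pages back */
	set_memory_nx(begin, (end - begin) >> PAGE_SHIFT);
	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
	free_reserved_area((void *)begin, (void *)end,
			   POISON_FREE_INITMEM, what);
}
----snip----

In other words, the #ifdef CONFIG_DEBUG_PAGEALLOC block would simply
become a runtime check, the same pattern as in the identity mapping
hunk above.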

After my patch "mm/debug_pagealloc: Ask users for default setting of debug_pagealloc"
(currently in -mm), the Kconfig help text now reads:
----snip----
By default this option will have a small overhead, e.g. by not
allowing the kernel mapping to be backed by large pages on some
architectures. Even bigger overhead comes when the debugging is
enabled by DEBUG_PAGEALLOC_ENABLE_DEFAULT or the debug_pagealloc
command line parameter.
----snip----
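
For context, the dynamic check itself is only a boot-time flag query.
Roughly like this (simplified from memory, the exact mainline code may
differ):

----snip----
/* simplified sketch of the dynamic toggle, details may differ */

/* include/linux/mm.h */
#ifdef CONFIG_DEBUG_PAGEALLOC
extern bool _debug_pagealloc_enabled;

static inline bool debug_pagealloc_enabled(void)
{
	return _debug_pagealloc_enabled;
}
#else
static inline bool debug_pagealloc_enabled(void)
{
	return false;
}
#endif

/* mm/page_alloc.c: Kconfig sets the default, the boot parameter overrides */
bool _debug_pagealloc_enabled __read_mostly
		= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);

static int __init early_debug_pagealloc(char *buf)
{
	if (!buf)
		return -EINVAL;
	if (strcmp(buf, "on") == 0)
		_debug_pagealloc_enabled = true;
	return 0;
}
early_param("debug_pagealloc", early_debug_pagealloc);
----snip----

So with the Kconfig default off and no debug_pagealloc=on on the command
line, the helper returns false, the identity mapping keeps using large
pages, and only the small per-default overhead mentioned in the Kconfig
text above remains.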

So I am tempted NOT to change free_init_pages(), but the x86 maintainers
can certainly decide differently. Ingo, Thomas, H. Peter, please advise.