Re: [PATCH 5/7] x86, mm: Break down init_all_memory_mapping

From: Konrad Rzeszutek Wilk
Date: Wed Oct 10 2012 - 10:07:15 EST


On Tue, Oct 09, 2012 at 04:58:33PM -0700, Yinghai Lu wrote:
> Will replace that will top-down page table initialization.

s/will/with?

> new one need to take range.

Huh? I have no idea what this patch does from your description.
It says it will replace something (not identified) with
top-down table initialization.

And the code is not that simple - you should explain _how_ you
are changing it. From what it did before to what it does _now_.

Think of the audience of kernel engineers who have memory loss
and will forget everything in three months - the perfect time when
the merge window opens and bugs start appearing. They (ok, maybe
it is only me who needs this) need the documentation to figure
out what went wrong. Please explain it.


>
> Signed-off-by: Yinghai Lu <yinghai@xxxxxxxxxx>
> ---
> arch/x86/mm/init.c | 41 +++++++++++++++++++----------------------
> 1 files changed, 19 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index ebf76ce..23ce4db 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -393,40 +393,30 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
> * Depending on the alignment of E820 ranges, this may possibly result in using
> * smaller size (i.e. 4K instead of 2M or 1G) page tables.
> */
> -static void __init init_all_memory_mapping(void)
> +static void __init init_all_memory_mapping(unsigned long all_start,
> + unsigned long all_end)
> {
> unsigned long start_pfn, end_pfn;
> int i;
>
> - /* the ISA range is always mapped regardless of memory holes */
> - init_memory_mapping(0, ISA_END_ADDRESS);
> -
> for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> u64 start = start_pfn << PAGE_SHIFT;
> u64 end = end_pfn << PAGE_SHIFT;
>
> - if (end <= ISA_END_ADDRESS)
> + if (end <= all_start)
> continue;
>
> - if (start < ISA_END_ADDRESS)
> - start = ISA_END_ADDRESS;
> -#ifdef CONFIG_X86_32
> - /* on 32 bit, we only map up to max_low_pfn */
> - if ((start >> PAGE_SHIFT) >= max_low_pfn)
> + if (start < all_start)
> + start = all_start;
> +
> + if (start >= all_end)
> continue;
>
> - if ((end >> PAGE_SHIFT) > max_low_pfn)
> - end = max_low_pfn << PAGE_SHIFT;
> -#endif
> - init_memory_mapping(start, end);
> - }
> + if (end > all_end)
> + end = all_end;
>
> -#ifdef CONFIG_X86_64
> - if (max_pfn > max_low_pfn) {
> - /* can we preseve max_low_pfn ?*/
> - max_low_pfn = max_pfn;
> + init_memory_mapping(start, end);
> }
> -#endif
> }
>
> void __init init_mem_mapping(void)
> @@ -456,8 +446,15 @@ void __init init_mem_mapping(void)
> (pgt_buf_top << PAGE_SHIFT) - 1);
>
> max_pfn_mapped = 0; /* will get exact value next */
> - init_all_memory_mapping();
> -
> + /* the ISA range is always mapped regardless of memory holes */
> + init_memory_mapping(0, ISA_END_ADDRESS);
> + init_all_memory_mapping(ISA_END_ADDRESS, end);
> +#ifdef CONFIG_X86_64
> + if (max_pfn > max_low_pfn) {
> + /* can we preseve max_low_pfn ?*/
> + max_low_pfn = max_pfn;
> + }
> +#endif
> /*
> * Reserve the kernel pagetable pages we used (pgt_buf_start -
> * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
> --
> 1.7.7
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>