Re: [PATCH] x86, 64bit: Fix a possible bug in switchover in head_64.S

From: Yinghai Lu
Date: Tue May 14 2013 - 01:51:27 EST


On Mon, May 13, 2013 at 5:37 AM, Zhang Yanfei <zhangyanfei.yes@xxxxxxxxx> wrote:
> From: Zhang Yanfei <zhangyanfei@xxxxxxxxxxxxxx>

> It seems line 119 has a potential bug. For example, suppose the
> kernel is loaded at physical address 511G+1008M, which is
> 000000000 111111111 111111000 000000000000000000000
> (PGD index / PUD index / PMD index / 2M page offset), and the kernel
> _end is 512G+2M, which is
> 000000001 000000000 000000001 000000000000000000000
> So in this example, when the 2nd page is used to set up the PUD
> (lines 114~119), rax is 511.
> Line 118 puts rdx, which holds the address of the PMD page (the 3rd
> page), into entry 511 of the PUD table. But in line 119, the entry
> calculated from (4096+8)(%rbx,%rax,8) exceeds the PUD page. IMO, the
> entry written in line 119 should wrap around to entry 0 of the PUD
> table.
>
> Sorry, I do not have a machine with more than 512GB of memory, so I
> cannot test whether my guess is right. Please correct me if I am wrong.
>
> Signed-off-by: Zhang Yanfei <zhangyanfei@xxxxxxxxxxxxxx>
> ---
> arch/x86/kernel/head_64.S | 7 ++++++-
> 1 files changed, 6 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 08f7e80..2395d8f 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -116,8 +116,13 @@ startup_64:
>  	shrq	$PUD_SHIFT, %rax
>  	andl	$(PTRS_PER_PUD-1), %eax
>  	movq	%rdx, (4096+0)(%rbx,%rax,8)
> +	cmp	$511, %rax
> +	je	1f
>  	movq	%rdx, (4096+8)(%rbx,%rax,8)
> -
> +	jmp	2f
> +1:
> +	movq	%rdx, (4096)(%rbx)
> +2:
>  	addq	$8192, %rbx
>  	movq	%rdi, %rax
>  	shrq	$PMD_SHIFT, %rdi

Yes, that is a problem.

I did test this code path before, crossing 1T and 2T;
maybe we just never access those entries during the switchover...
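
To make the overflow concrete, here is a small C check (mine, just an
illustration, not from either patch) that redoes Zhang's index
arithmetic with the same shift and mask:

#include <stdio.h>
#include <stdint.h>

/* Same constants the kernel uses for 4-level paging. */
#define PUD_SHIFT	30
#define PTRS_PER_PUD	512

int main(void)
{
	/* Zhang's example: kernel loaded at 511G+1008M, _end at 512G+2M. */
	uint64_t load = 511ULL * (1ULL << 30) + 1008ULL * (1ULL << 20);
	uint64_t end  = (1ULL << 39) + 2ULL * (1ULL << 20);

	uint64_t idx = (load >> PUD_SHIFT) & (PTRS_PER_PUD - 1);

	/* PUD index of the load address: 511, the last entry. */
	printf("load PUD index: %llu\n", (unsigned long long)idx);

	/*
	 * The current code stores to entry idx and entry idx+1 without
	 * masking the second index, so for idx == 511 the second store
	 * lands 8 bytes past the end of the PUD page.
	 */
	printf("unmasked idx+1: %llu\n", (unsigned long long)(idx + 1));

	/* Masked, it wraps to 0, which matches the PUD index of _end. */
	printf("masked idx+1:   %llu\n",
	       (unsigned long long)((idx + 1) & (PTRS_PER_PUD - 1)));
	printf("_end PUD index: %llu\n",
	       (unsigned long long)((end >> PUD_SHIFT) & (PTRS_PER_PUD - 1)));

	return 0;
}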

The change could be simpler and avoid jmps.

Please check the attached patch; it does not use jmp:

index 08f7e80..321d65e 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -115,8 +115,10 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PUD_SHIFT, %rax
 	andl	$(PTRS_PER_PUD-1), %eax
-	movq	%rdx, (4096+0)(%rbx,%rax,8)
-	movq	%rdx, (4096+8)(%rbx,%rax,8)
+	movq	%rdx, 4096(%rbx,%rax,8)
+	incl	%eax
+	andl	$(PTRS_PER_PUD-1), %eax
+	movq	%rdx, 4096(%rbx,%rax,8)
 
 	addq	$8192, %rbx
 	movq	%rdi, %rax
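
In C terms, the fixed sequence does the equivalent of the following
(a sketch with my own names; set_pud_pair, pud, idx and pmd are
illustrative, not kernel identifiers):

#include <stdint.h>

#define PTRS_PER_PUD	512

/*
 * Store the PMD address into the PUD entry covering the load address
 * and into the next entry, masking the second index so it wraps to 0
 * instead of running 8 bytes past the end of the PUD page.
 */
void set_pud_pair(uint64_t *pud, unsigned int idx, uint64_t pmd)
{
	pud[idx] = pmd;				/* movq %rdx, 4096(%rbx,%rax,8) */
	idx = (idx + 1) & (PTRS_PER_PUD - 1);	/* incl %eax; andl $(PTRS_PER_PUD-1), %eax */
	pud[idx] = pmd;				/* second movq */
}

For idx == 511 the masked increment yields 0, the wraparound Zhang
asked for, with no cmp/je/jmp needed.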

And we need to cc stable.

Yinghai

Attachment: fix_wrap.patch
Description: Binary data