Re: [PATCH] x86, 64-bit: Move K8 B step iret fixup to fault entry asm (v2)

From: Ingo Molnar
Date: Tue Nov 03 2009 - 13:11:07 EST



* Brian Gerst <brgerst@xxxxxxxxx> wrote:

> Move the handling of truncated %rip from an iret fault to the fault
> entry path.
>
> This allows x86-64 to use the standard search_extable() function.
>
> v2: Fixed jump to error_swapgs to be unconditional.

v1 is already in the tip:x86/asm topic tree. Mind sending a delta fix
against:

http://people.redhat.com/mingo/tip.git/README

?

Also, i'm having second thoughts about the change:

> Signed-off-by: Brian Gerst <brgerst@xxxxxxxxx>
> ---
>  arch/x86/include/asm/uaccess.h |    1 -
>  arch/x86/kernel/entry_64.S     |   11 ++++++++---
>  arch/x86/mm/extable.c          |   31 -------------------------------
>  3 files changed, 8 insertions(+), 35 deletions(-)
>
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index d2c6c93..abd3e0e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -570,7 +570,6 @@ extern struct movsl_mask {
>  #ifdef CONFIG_X86_32
>  # include "uaccess_32.h"
>  #else
> -# define ARCH_HAS_SEARCH_EXTABLE
>  # include "uaccess_64.h"
>  #endif
>
> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
> index b5c061f..1579a6c 100644
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -1491,12 +1491,17 @@ error_kernelspace:
>  	leaq irq_return(%rip),%rcx
>  	cmpq %rcx,RIP+8(%rsp)
>  	je error_swapgs
> -	movl %ecx,%ecx			/* zero extend */
> -	cmpq %rcx,RIP+8(%rsp)
> -	je error_swapgs
> +	movl %ecx,%eax			/* zero extend */
> +	cmpq %rax,RIP+8(%rsp)
> +	je bstep_iret
>  	cmpq $gs_change,RIP+8(%rsp)
>  	je error_swapgs
>  	jmp error_sti
> +
> +bstep_iret:
> +	/* Fix truncated RIP */
> +	movq %rcx,RIP+8(%rsp)
> +	jmp error_swapgs
>  END(error_entry)
>
>
> diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
> index 61b41ca..d0474ad 100644
> --- a/arch/x86/mm/extable.c
> +++ b/arch/x86/mm/extable.c
> @@ -35,34 +35,3 @@ int fixup_exception(struct pt_regs *regs)
>
>  	return 0;
>  }
> -
> -#ifdef CONFIG_X86_64
> -/*
> - * Need to defined our own search_extable on X86_64 to work around
> - * a B stepping K8 bug.
> - */
> -const struct exception_table_entry *
> -search_extable(const struct exception_table_entry *first,
> -	       const struct exception_table_entry *last,
> -	       unsigned long value)
> -{
> -	/* B stepping K8 bug */
> -	if ((value >> 32) == 0)
> -		value |= 0xffffffffUL << 32;
> -
> -	while (first <= last) {
> -		const struct exception_table_entry *mid;
> -		long diff;
> -
> -		mid = (last - first) / 2 + first;
> -		diff = mid->insn - value;
> -		if (diff == 0)
> -			return mid;
> -		else if (diff < 0)
> -			first = mid + 1;
> -		else
> -			last = mid - 1;
> -	}
> -	return NULL;
> -}
> -#endif
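
For reference: with ARCH_HAS_SEARCH_EXTABLE gone, x86-64 now falls
back to the generic search_extable() in lib/extable.c - essentially
the same binary search minus the sign-extension step. Roughly
(quoting from memory, details may differ):

	const struct exception_table_entry *
	search_extable(const struct exception_table_entry *first,
		       const struct exception_table_entry *last,
		       unsigned long value)
	{
		while (first <= last) {
			const struct exception_table_entry *mid;

			mid = ((last - first) >> 1) + first;
			if (mid->insn < value)
				first = mid + 1;
			else if (mid->insn > value)
				last = mid - 1;
			else
				return mid;
		}
		return NULL;
	}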

Is this the only way we can end up with a truncated 64-bit RIP being
passed to search_exception_tables()/search_extable()? Before your
commit we basically had a last-ditch safety net in 64-bit kernels that
fixed up zero-extended, truncated RIPs - no matter how they got there
(via known or unknown errata).
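
To make that concrete - a minimal userspace sketch of the arithmetic
involved (the kernel address below is made up for illustration): on a
B-step K8 the faulting RIP can show up with its upper 32 bits cleared,
and the deleted safety net simply filled them back in before the
extable search:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* made-up kernel text address, for illustration only */
		uint64_t rip   = 0xffffffff81234567ULL;
		/* the erratum: upper half lost, value is zero-extended */
		uint64_t trunc = (uint32_t)rip;
		uint64_t fixed = trunc;

		/* the deleted last-ditch check and fixup */
		if ((fixed >> 32) == 0)
			fixed |= 0xffffffffULL << 32;

		printf("real RIP:      %#llx\n", (unsigned long long)rip);
		printf("truncated RIP: %#llx\n", (unsigned long long)trunc);
		printf("fixed RIP:     %#llx\n", (unsigned long long)fixed);
		return 0;
	}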

Thanks,

Ingo