Re: [PATCH -next V7 1/7] riscv: ftrace: Fixup panic by disabling preemption

From: Mark Rutland
Date: Thu Jan 12 2023 - 07:59:25 EST


On Thu, Jan 12, 2023 at 12:16:02PM +0000, Mark Rutland wrote:
> Hi Guo,
>
> On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@xxxxxxxxxx wrote:
> > From: Andy Chiu <andy.chiu@xxxxxxxxxx>
> >
> > In RISC-V, we must use an AUIPC + JALR pair to encode an immediate when
> > forming a jump to a target more than 4K away. This may go wrong if we want
> > to enable kernel preemption and remove the dependency on stop_machine()
> > from the patching code. For example, if a task is switched out right after
> > the AUIPC, and the ftrace function is changed before the task is switched
> > back in, it will jump to an address formed by mixing the updated bits 11:0
> > with the previous bits XLEN-1:12.
> >
> > p: patched area performed by dynamic ftrace
> > ftrace_prologue:
> > p| REG_S ra, -SZREG(sp)
> > p| auipc ra, 0x? ------------> preempted
> > ...
> > change ftrace function
> > ...
> > p| jalr -?(ra) <------------- switched back
> > p| REG_L ra, -SZREG(sp)
> > func:
> > xxx
> > ret
>
> As mentioned on the last posting, I don't think this is sufficient to fix the
> issue. I've replied with more detail there:
>
> https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
>
> Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> executing the ftrace_prologue while another CPU is patching the
> ftrace_prologue, you have the exact same issue.
>
> For example, if CPU X is in the prologue and fetches the old AUIPC but the
> new JALR (because it races with CPU Y modifying those), CPU X will branch
> to the wrong address. The race window is much smaller in the absence of
> preemption, but it's still there (and will be exacerbated in virtual
> machines since the hypervisor can preempt a vCPU at any time).
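
To make that mixed fetch concrete, here's a quick user-space sketch of the
AUIPC/JALR immediate split (the addresses are made up and this is purely an
illustration, not kernel code):

#include <stdint.h>
#include <stdio.h>

/* Split (target - pc) the way AUIPC/JALR consume it:
 * hi20 goes in the AUIPC, lo12 in the JALR. */
static void split_offset(int64_t off, int64_t *hi20, int64_t *lo12)
{
	*hi20 = (off + 0x800) >> 12;	/* rounded so lo12 fits in [-2048, 2047] */
	*lo12 = off - (*hi20 << 12);
}

int main(void)
{
	uint64_t pc         = 0xffffffff80200000ull;	/* pc of the AUIPC (made up) */
	uint64_t old_target = 0xffffffff80ffe800ull;	/* old trampoline (made up) */
	uint64_t new_target = 0xffffffff81001234ull;	/* new trampoline (made up) */
	int64_t old_hi, old_lo, new_hi, new_lo;

	split_offset((int64_t)(old_target - pc), &old_hi, &old_lo);
	split_offset((int64_t)(new_target - pc), &new_hi, &new_lo);

	/* Matched pairs reconstruct their targets exactly ... */
	printf("old pair:   %#llx\n",
	       (unsigned long long)(pc + ((uint64_t)old_hi << 12) + (uint64_t)old_lo));
	printf("new pair:   %#llx\n",
	       (unsigned long long)(pc + ((uint64_t)new_hi << 12) + (uint64_t)new_lo));
	/* ... but fetching the *old* AUIPC with the *new* JALR does not: */
	printf("mixed pair: %#llx\n",
	       (unsigned long long)(pc + ((uint64_t)old_hi << 12) + (uint64_t)new_lo));
	return 0;
}

... which lands somewhere that is neither the old nor the new trampoline.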

With that in mind, I think your current implementations of ftrace_make_call()
and ftrace_make_nop() have a similar bug. A caller might execute:

NOP // not yet patched to AUIPC

< AUIPC and JALR instructions both patched >

JALR

... and go to the wrong place.

Assuming individual instruction fetches are atomic, and that you only ever
branch to the same trampoline, you could fix that by always leaving the AUIPC
in place, so that you only patch the JALR to enable/disable the callsite.
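
For illustration, a rough sketch of what the patch side could then look like;
patch_insn() below is a stand-in for whatever text-patching helper the port
uses (not an existing API), and the slot offset assumes the prologue layout
sketched above:

#include <stdint.h>

#define RISCV_INSN_NOP	0x00000013u	/* addi x0, x0, 0 */

/* Stand-in for the port's text-patching helper: a single aligned 32-bit
 * store (real code also needs fence.i / icache maintenance). */
static int patch_insn(void *addr, uint32_t insn)
{
	*(volatile uint32_t *)addr = insn;
	return 0;
}

/* Assumed layout: the JALR sits after REG_S and AUIPC in the prologue
 * quoted above; the AUIPC itself is emitted once and never touched again. */
static void *jalr_slot(unsigned long callsite)
{
	return (void *)(callsite + 2 * 4);
}

int sketch_make_call(unsigned long callsite, uint32_t jalr_insn)
{
	/* Only the JALR word changes; a concurrent caller sees either the
	 * old word or the new one, never a mismatched AUIPC/JALR pair. */
	return patch_insn(jalr_slot(callsite), jalr_insn);
}

int sketch_make_nop(unsigned long callsite)
{
	return patch_insn(jalr_slot(callsite), RISCV_INSN_NOP);
}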

Depending on your calling convention, if you have two free GPRs, you might be
able to avoid the stacking of RA by always saving it to a GPR in the callsite,
using a different GPR for the address generation, and having the ftrace
trampoline restore the original RA value, e.g.

MV GPR1, ra
AUIPC GPR2, high_bits_of(ftrace_caller)
JALR ra, low_bits_of(ftrace_caller)(GPR2) // only patch this

... which'd save an instruction per callsite.
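
If it helps, here's roughly what the encodings for that sequence look like,
with t1/t2 picked arbitrarily as GPR1/GPR2 for the example; since the
trampoline address is fixed, the MV and AUIPC words never change, and only
the final JALR word would ever be rewritten:

#include <stdint.h>

enum { REG_RA = 1, REG_T1 = 6, REG_T2 = 7 };	/* ra, t1, t2 */

/* I-type encoder, used for MV (ADDI rd, rs1, 0) and JALR
 * (funct3 is 0 for both, so it is omitted here). */
static uint32_t rv_itype(uint32_t opcode, uint32_t rd, uint32_t rs1, int32_t imm)
{
	return ((uint32_t)imm & 0xfff) << 20 | rs1 << 15 | rd << 7 | opcode;
}

/* U-type encoder: AUIPC rd, hi20. */
static uint32_t rv_auipc(uint32_t rd, int32_t hi20)
{
	return ((uint32_t)hi20 & 0xfffff) << 12 | rd << 7 | 0x17;
}

/* Build the three-instruction callsite for a fixed trampoline address.
 * Only insn[2] (the JALR) would ever be patched at runtime. */
void emit_callsite(uint32_t insn[3], uint64_t callsite, uint64_t tramp)
{
	int64_t off  = (int64_t)(tramp - (callsite + 4));	/* AUIPC is the 2nd insn */
	int64_t hi20 = (off + 0x800) >> 12;
	int64_t lo12 = off - (hi20 << 12);

	insn[0] = rv_itype(0x13, REG_T1, REG_RA, 0);		  /* mv    t1, ra        */
	insn[1] = rv_auipc(REG_T2, (int32_t)hi20);		  /* auipc t2, hi20      */
	insn[2] = rv_itype(0x67, REG_RA, REG_T2, (int32_t)lo12); /* jalr  ra, lo12(t2)  */
}

(The trampoline would then move the value saved in t1 back into ra before
returning, as above.)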

Thanks,
Mark.