Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code

From: Sandeepa Prabhu
Date: Wed Sep 25 2013 - 10:35:50 EST


On 25 September 2013 16:14, Jiang Liu <liuj97@xxxxxxxxx> wrote:
> From: Jiang Liu <jiang.liu@xxxxxxxxxx>
>
> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
> to patch kernel and module code.
>
> Function aarch64_insn_patch_text() is the heavyweight version, which may
> use stop_machine() to serialize all online CPUs, and function
> __aarch64_insn_patch_text() is the lightweight version without explicit
> serialization.
Hi Jiang,

I have written kprobes support for aarch64 and need both versions of
this functionality (the lightweight one and the stop_machine() one).
I would like to rebase kprobes on these APIs; however, the
stop_machine() version would need slight changes, which I explain
below.
[Kprobes cannot share the instruction-encoding support of jump labels,
though, since decoding and simulation are quite different for
kprobes/uprobes and are built around single-stepping.]

>
> Signed-off-by: Jiang Liu <jiang.liu@xxxxxxxxxx>
> Cc: Jiang Liu <liuj97@xxxxxxxxx>
> ---
> arch/arm64/include/asm/insn.h | 2 ++
> arch/arm64/kernel/insn.c | 64 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 66 insertions(+)
>
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index e7d1bc8..0ea7193 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop, 0xFFFFFFFF, 0xD503201F)
> enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>
> bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>
> #endif /* _ASM_ARM64_INSN_H */
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 8541c3a..50facfc 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -15,6 +15,8 @@
> * along with this program. If not, see <http://www.gnu.org/licenses/>.
> */
> #include <linux/kernel.h>
> +#include <linux/stop_machine.h>
> +#include <asm/cacheflush.h>
> #include <asm/insn.h>
>
> static int aarch64_insn_cls[] = {
> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
> return __aarch64_insn_hotpatch_safe(old_insn) &&
> __aarch64_insn_hotpatch_safe(new_insn);
> }
> +
> +struct aarch64_insn_patch {
> +        void *text_addr;
> +        u32 *new_insns;
> +        int insn_cnt;
> +};
> +
> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +        int i;
> +        u32 *tp = addr;
> +
> +        /* instructions must be word aligned */
> +        if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +                return -EINVAL;
> +
> +        for (i = 0; i < cnt; i++)
> +                tp[i] = insns[i];
> +
> +        flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
> +
> +        return 0;
> +}
Looks fine, but do you need to check for the CPU's big-endian mode here?
(I think swab32() is needed if EL1 runs in big-endian mode.)
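
To illustrate what I mean, the write loop above could become something
like this (is_el1_big_endian() is the helper from my attached file, and
the check could of course be hoisted out of the loop):

        for (i = 0; i < cnt; i++)
                /* data stores are byte-swapped w.r.t. the (always LE)
                 * instruction stream when EL1 runs big-endian */
                tp[i] = is_el1_big_endian() ? swab32(insns[i]) : insns[i];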

> +
> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> +{
> +        struct aarch64_insn_patch *pp = arg;
> +
> +        return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
> +                                         pp->insn_cnt);
> +}
> +
> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +        int ret;
> +        bool safe = false;
> +
> +        /* instructions must be word aligned */
> +        if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +                return -EINVAL;
> +
> +        if (cnt == 1)
> +                safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
> +
> +        if (safe) {
> +                ret = __aarch64_insn_patch_text(addr, insns, cnt);
> +        } else {

Can you move the code below this into a separate API that just applies
the patch under stop_machine(), and then add a wrapper function for the
jump-label-specific handling that checks aarch64_insn_hotpatch_safe()?
Also, it would be good to move the patching code out of insn.c into
patch.c (refer to arch/arm/kernel/patch.c).
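
Roughly what I have in mind, as a sketch only (the _sync name is just
illustrative, not a final API):

int __kprobes aarch64_insn_patch_text_sync(void *addr, u32 *insns, int cnt)
{
        struct aarch64_insn_patch patch = {
                .text_addr = addr,
                .new_insns = insns,
                .insn_cnt = cnt,
        };

        /* instructions must be word aligned */
        if (cnt <= 0 || ((uintptr_t)addr & 0x3))
                return -EINVAL;

        /* always serialize all online CPUs via stop_machine() */
        return stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
}

int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
{
        /* jump labels can take the fast path for a single hotpatch-safe insn */
        if (cnt == 1 && !((uintptr_t)addr & 0x3) &&
            aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]))
                return __aarch64_insn_patch_text(addr, insns, cnt);

        return aarch64_insn_patch_text_sync(addr, insns, cnt);
}

Kprobes could then call the _sync variant directly and skip the
hotpatch-safe check altogether.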

Please refer to the attached file (my current implementation) to get a
sense of exactly what kprobes would need (ignore the big-endian part
for now). I think splitting the code should be straightforward, and we
can avoid two different implementations. Please let me know if this can
be done; I will rebase my patches on top of your next version.
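
For context, a kprobes call site using patch_text() from the attached
file would look roughly like this (the function name and the BRK
encoding below are placeholders, not the final kprobes immediate):

static void __kprobes arm_kprobe_example(void *addr)
{
        u32 brk = 0xd4200000;   /* BRK #0, placeholder encoding */

        /* a breakpoint is never hotpatch-safe, so kprobes always needs
         * the stop_machine() variant */
        patch_text(addr, brk);
}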

Thanks,
Sandeepa
> +                struct aarch64_insn_patch patch = {
> +                        .text_addr = addr,
> +                        .new_insns = insns,
> +                        .insn_cnt = cnt,
> +                };
> +
> +                /*
> +                 * Execute __aarch64_insn_patch_text() on every online CPU,
> +                 * which ensures serialization among all online CPUs.
> +                 */
> +                ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
> +        }
> +
> +        return ret;
> +}
> --
> 1.8.1.2
>
/*
 * arch/arm64/kernel/patch_text.c
 *
 * Copyright (C) 2013 Linaro Limited.
 * Based on arch/arm/kernel/patch.c
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/stop_machine.h>
#include <linux/swab.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>

#include "patch_text.h"

#define opcode_to_mem_le(x) (x)
#define opcode_to_mem_be(x) swab32(x)

/*
 * Get the CPU endian mode from SCTLR_EL1.EE:
 * 0x1 = Big Endian, 0x0 = Little Endian.
 */
static inline unsigned int is_el1_big_endian(void)
{
        u32 sctlr;

        asm volatile ("mrs %0, sctlr_el1" : "=r" (sctlr));
        return (sctlr >> 25) & 0x1;
}

struct patch {
        void *addr;
        unsigned int insn;
};

/* Patch text - Supported for kernel in AArch64 mode only */
void __kprobes __patch_text(void *addr, unsigned int insn)
{
        int size = sizeof(u32);

        /* address 32-bit alignment check */
        if ((unsigned long)addr % size) {
                pr_warn("text patching failed @unaligned addr %p\n", addr);
                return;
        }

        if (is_el1_big_endian())
                *(u32 *) addr = opcode_to_mem_be(insn);
        else
                *(u32 *) addr = opcode_to_mem_le(insn);

        /* sync Icache, ISB is issued as part of this */
        flush_icache_range((uintptr_t) (addr), (uintptr_t) (addr) + size);
}

static int __kprobes patch_text_stop_machine(void *data)
{
        struct patch *patch = data;

        __patch_text(patch->addr, patch->insn);
        return 0;
}

void __kprobes patch_text(void *addr, unsigned int insn)
{
        struct patch patch = {
                .addr = addr,
                .insn = insn,
        };
        stop_machine(patch_text_stop_machine, &patch, cpu_online_mask);
}