Re: [PATCH] perf/x86: Reset the counter to prevent the leak for a RDPMC task

From: Liang, Kan
Date: Thu Jul 30 2020 - 11:54:40 EST




On 7/30/2020 8:58 AM, peterz@xxxxxxxxxxxxx wrote:
> On Thu, Jul 30, 2020 at 05:38:15AM -0700, kan.liang@xxxxxxxxxxxxxxx wrote:
>> From: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
>>
>> The counter value of a perf task may leak to another RDPMC task.
>
> Sure, but nowhere did you explain why that is a problem.

>> The RDPMC instruction is only available on the X86 platform, so the
>> fix is applied only there.
>
> ARM64 can also do it, although I'm not sure what the current state of
> things is here.

>> After applying the patch,
>>
>> $ taskset -c 0 ./rdpmc_read_all_counters
>> index 0x0 value 0x0
>> index 0x1 value 0x0
>> index 0x2 value 0x0
>> index 0x3 value 0x0
>>
>> index 0x0 value 0x0
>> index 0x1 value 0x0
>> index 0x2 value 0x0
>> index 0x3 value 0x0
>
> You forgot about:
>
>  - telling us why it's a problem,

A non-privileged RDPMC user can read counter values that belong to other perf users. It is a security issue. I will add it in the next version.
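For example, once RDPMC is enabled for unprivileged use (e.g. via /sys/devices/cpu/rdpmc), a task can read any counter directly. A minimal sketch of such a reader follows (my illustration, not the actual rdpmc_read_all_counters source; the count of 4 counters is an assumption):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Read performance counter 'counter' via the RDPMC instruction. */
static inline uint64_t rdpmc(uint32_t counter)
{
	uint32_t lo, hi;

	asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
	return lo | ((uint64_t)hi << 32);
}

int main(void)
{
	/* Assume 4 general-purpose counters for the sake of the example. */
	for (uint32_t i = 0; i < 4; i++)
		printf("index 0x%x value 0x%" PRIx64 "\n", i, rdpmc(i));
	return 0;
}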

>  - telling us how badly it affects performance.

I once ran a performance test on an HSX machine. There was no notable slowdown with the patch. I will add the performance data in the next version.


> I would feel much better if we only did this on context switches to
> tasks that have RDPMC enabled.

AFAIK, at least on X86, we can only enable/disable RDPMC globally. How can we tell whether a specific task has RDPMC enabled or disabled?


> So on del() mark the counter dirty (if we don't already have state that
> implies this), but don't WRMSR. And then on
> __perf_event_task_sched_in(), _after_ programming the new tasks'
> counters, check for inactive dirty counters and wipe those -- IFF RDPMC
> is on for that task.


The generic code doesn't have the counters' information. It looks like we need to add a new callback to clean up the dirty counters, as below.

In the X86-specific implementation of pmu_cleanup(), we can check and wipe all inactive dirty counters (see the sketch after the diff).

Is it OK?

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1cbf57dc2ac8..3daaf0a7746d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3774,6 +3781,15 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 	if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree))
 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	perf_event_sched_in(cpuctx, ctx, task);
+
+	/*
+	 * Some leftovers from the previous task may still exist on the unused
+	 * counters. The new task may illegally read the counters, e.g. via
+	 * RDPMC. The information from the previous task would be leaked. Clean
+	 * up the PMU before enabling it.
+	 */
+	if (ctx->pmu->pmu_cleanup)
+		ctx->pmu->pmu_cleanup(ctx->pmu);
 	perf_pmu_enable(ctx->pmu);
 
 unlock:
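On the X86 side, the callback could look roughly like the sketch below. It assumes x86_pmu_del() is changed to set the freed counter's bit in a new cpuc->dirty bitmap (a field I'm assuming would be added to struct cpu_hw_events) instead of resetting the counter immediately, so that only inactive dirty counters are wiped here.

static void x86_pmu_cleanup(struct pmu *pmu)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		/* The counter was reused by the new task; leave it alone. */
		if (test_bit(i, cpuc->active_mask))
			continue;

		if (i >= INTEL_PMC_IDX_FIXED)
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 +
			       (i - INTEL_PMC_IDX_FIXED), 0);
		else
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
}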



Thanks,
Kan