Re: [PATCH 2/4] perf,hw_breakpoint: add lockless reservation for hw_breaks

From: Frederic Weisbecker
Date: Wed Jan 27 2010 - 12:57:05 EST


On Tue, Jan 26, 2010 at 01:25:19PM -0600, Jason Wessel wrote:
> @@ -250,11 +326,16 @@ int reserve_bp_slot(struct perf_event *b
>
> mutex_lock(&nr_bp_mutex);
>
> + ret = dbg_hw_breakpoint_alloc(bp->cpu);
> + if (ret)
> + goto end;
> +

This is totally breaking all the constraints that try to
make the reservation cpu-wide/task-wide aware.

Basically, you just reduced the reservation to 4 breakpoints
per cpu.

The current constraints are able to host thousands of
task-wide breakpoints, provided none of these tasks has
more than 4 breakpoints. What you've just added here breaks
all this flexibility and reduces every breakpoint to a
per-cpu breakpoint (or a system-wide one), ignoring the
per-task contexts and non-pinned events.
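
To picture the difference, here is a toy model of the
accounting (not the actual kernel/hw_breakpoint.c code;
HBP_NUM = 4 is the x86 debug register count):

#define HBP_NUM 4	/* debug registers per cpu on x86 */

/*
 * Toy model: a task-bound breakpoint only competes with the
 * breakpoints of the single task currently scheduled on a
 * given cpu, plus that cpu's pinned per-cpu breakpoints. So
 * we check against the busiest task, not the sum over all
 * tasks.
 */
static bool task_bp_fits(unsigned int cpu_pinned,
			 unsigned int busiest_task_pinned)
{
	return cpu_pinned + busiest_task_pinned + 1 <= HBP_NUM;
}

Hence a thousand tasks with 4 breakpoints each are fine,
whereas a scheme that burns a raw per-cpu slot for every
breakpoint saturates after 4 of them, whoever owns them.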

Now I still don't understand why you refuse to use
a best effort approach wrt locking.

A simple mutex_is_locked() check would tell you if someone
is trying to reserve a breakpoint. And this is
safe since the whole system is stopped at this time,
right? So once you've ensured nobody is fighting against
you for the reservation, you can be sure you are alone
until the end of your reservation.
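
Something like this sketch, assuming a hypothetical
__reserve_bp_slot() helper that does the constraint
accounting without taking nr_bp_mutex itself:

/*
 * Sketch only: called from the debugger while the rest of
 * the system is stopped, so if nr_bp_mutex is not held right
 * now, nobody can take it until we return.
 */
static int dbg_reserve_bp_slot(struct perf_event *bp)
{
	if (mutex_is_locked(&nr_bp_mutex))
		return -EBUSY;	/* someone was mid-reservation, back off */

	return __reserve_bp_slot(bp);	/* hypothetical lockless helper */
}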

Or, if it is not guaranteed that the system is stopped when
you reserve a breakpoint for kgdb, you can use
mutex_trylock(). Basically this is the same approach.
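
Roughly, with the same hypothetical helper:

/*
 * Sketch only: if the system may not be fully stopped, grab
 * the mutex atomically or give up; never sleep from debugger
 * context.
 */
static int dbg_reserve_bp_slot(struct perf_event *bp)
{
	int ret;

	if (!mutex_trylock(&nr_bp_mutex))
		return -EBUSY;

	ret = __reserve_bp_slot(bp);
	mutex_unlock(&nr_bp_mutex);

	return ret;
}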

If you are fighting against another breakpoint reservation,
it means you are really unlucky: that only happens while
someone is creating such an event through a perf syscall,
ptrace or ftrace.

Yes, a user can create a perf/ftrace/ptrace breakpoint while
another user creates one through kgdb; if the reservations
happen at the same time, either both can make it or kgdb
will fail. This *might* happen once in the universe's
lifetime, should we really care about that?

I can write a patch for that if you want.
