On 2025-04-21 10:56 a.m., Liang, Kan wrote:
On 2025-04-19 12:50 a.m., Luo Gengkun wrote:
On 2025/4/19 10:25, Luo Gengkun wrote:
Only the PMU with perf_sw_context can be placed in the same group with
other types.
On 2025/4/14 22:29, Liang, Kan wrote:
On 2025-04-12 5:14 a.m., Luo Gengkun wrote:
The following perf command can trigger a warning in
intel_pmu_lbr_counters_reorder():

 # perf record -e "{cpu-clock,cycles/call-graph="lbr"/}" -- sleep 1

The reason is that an LBR event is placed in a non-LBR group, and the
previous implementation cannot force the leader to be an LBR event in
this case.

Perf should only force the LBR leader for the branch counters case, so
perf only needs to reset the LBRs for the leader.
I don't think the leader restriction should be applied to other cases.
Yes, the commit message should be updated. The code implementation only
restricts the leader to be an LBR event.
The hw_perf_event contains a union; hw.flags is used only for hardware
events. And is_branch_counters_group() will check whether the
group_leader supports BRANCH_COUNTERS. So if a software event becomes
the group_leader, whose hw.flags is -1, this check will always pass.

I think the default flags for all events is 0. Can you point me to where
it is changed to -1?

Thanks,
Kan
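For reference, the check being discussed looks roughly like this (a
paraphrased sketch of is_branch_counters_group() in
arch/x86/events/perf_event.h; the exact upstream form may differ):

  /*
   * Sketch: the leader's hw.flags is only meaningful for X86 hardware
   * events. If the group leader is a software event, the same storage
   * is reused by other members of the hw_perf_event union, so this bit
   * test can pass by accident.
   */
  static inline bool is_branch_counters_group(struct perf_event *event)
  {
          return event->group_leader->hw.flags & PERF_X86_EVENT_BRANCH_COUNTERS;
  }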
For software events, perf uses an hrtimer. Therefore, when
perf_swevent_init_hrtimer() is called, it changes the value of hw.flags
too.
Thanks,
Gengkun
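To make the aliasing concrete, here is a minimal user-space mock (not the
kernel definitions; struct hw_perf_event in include/linux/perf_event.h has
many more members) showing how writing the software hrtimer member can
change what later reads back as hw.flags:

  #include <stdio.h>
  #include <string.h>

  /* Stand-in for struct hrtimer, which is larger than the hardware fields. */
  struct mock_hrtimer { unsigned long node[4]; int state; };

  struct mock_hw_perf_event {
          union {
                  struct {                /* hardware events */
                          unsigned long config;
                          int idx;
                          int flags;      /* e.g. PERF_X86_EVENT_BRANCH_COUNTERS */
                  };
                  struct {                /* software events */
                          struct mock_hrtimer hrtimer;
                  };
          };
  };

  int main(void)
  {
          struct mock_hw_perf_event hw = { .flags = 0 };

          /* Simulate the software path initializing the hrtimer member
           * (perf_swevent_init_hrtimer() in the kernel); the value that
           * ends up aliasing flags is whatever those writes leave there. */
          memset(&hw.hrtimer, 0xff, sizeof(hw.hrtimer));

          printf("hw.flags now reads %d (0x%x)\n", hw.flags, hw.flags);
          return 0;
  }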
It seems that using a union is dangerous, because different types of
perf_events can be placed in the same group. Currently, a large amount
of code directly accesses the hw of the leader, which is insecure.

For X86, the topdown, ACR and branch counters will touch the
leader->hw.flags. The topdown and ACR have already checked the leader
before updating the flags. The branch counters missed it. I think a
check is required for the branch counters as well, which should be good
enough to address the issue.
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 16f8aea33243..406f58b3b5d4 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4256,6 +4256,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
* group, which requires the extra space to store the counters.
*/
leader = event->group_leader;
+ /*
+ * The leader's hw.flags will be used to determine a
+ * branch counter logging group. Force it to be an X86 event.
+ */
+ if (!is_x86_event(leader))
+ return -EINVAL;
if (branch_sample_call_stack(leader))
return -EINVAL;
if (branch_sample_counters(leader)) {
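For context, is_x86_event() (arch/x86/events/core.c) tests whether the
event belongs to one of the x86 core PMUs, roughly along these lines
(paraphrased; the upstream version also walks the hybrid PMUs):

  static int is_x86_event(struct perf_event *event)
  {
          return event->pmu == &pmu;      /* the x86 core PMU */
  }

so a software leader such as cpu-clock, which lives on a different PMU,
would be rejected with -EINVAL before its hw.flags is consulted.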
The above check may not be enough, since
intel_pmu_lbr_counters_reorder() can be invoked without a branch
counters event.
I've posted a fix to address the issue. Please take a look.
https://lore.kernel.org/lkml/20250423221015.268949-1-kan.liang@xxxxxxxxxxxxxxx/
Thanks,
Kan
This part of the logic needs to be redesigned to avoid similar
problems. And I am happy to work on this.

OK. Please share your idea.
Thanks,
Kan
Thanks,
Gengkun
To fix this problem, use has_branch_stack() to judge whether the leader
is an LBR event.
Fixes: 33744916196b ("perf/x86/intel: Support branch counters logging")
Signed-off-by: Luo Gengkun <luogengkun@xxxxxxxxxxxxxxx>
---
arch/x86/events/intel/core.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 09d2d66c9f21..c6b394019e54 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4114,6 +4114,13 @@ static int intel_pmu_hw_config(struct perf_event *event)
event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
}
+ /*
+ * Force the leader to be a LBR event. So LBRs can be reset
+ * with the leader event. See intel_pmu_lbr_del() for details.
+ */
+ if (has_branch_stack(event) && !has_branch_stack(event->group_leader))
+ return -EINVAL;
+
if (branch_sample_counters(event)) {
struct perf_event *leader, *sibling;
int num = 0;
@@ -4157,13 +4164,6 @@ static int intel_pmu_hw_config(struct perf_event *event)
~(PERF_SAMPLE_BRANCH_PLM_ALL |
PERF_SAMPLE_BRANCH_COUNTERS)))
event->hw.flags &= ~PERF_X86_EVENT_NEEDS_BRANCH_STACK;
-
- /*
- * Force the leader to be a LBR event. So LBRs can be reset
- * with the leader event. See intel_pmu_lbr_del() for details.
- */
- if (!intel_pmu_needs_branch_stack(leader))
- return -EINVAL;
}
if (intel_pmu_needs_branch_stack(event)) {