Commit 95d0220

perf/core: Split perf_event_read() and perf_event_count()
jira LE-1907
Rebuild_History Non-Buildable kernel-3.10.0-514.el7
commit-author Sukadev Bhattiprolu <[email protected]>
commit 01add3e

perf_event_read() does two things:

 - call the PMU to read/update the counter value, and
 - compute the total count of the event and its children

Not all callers need both. perf_event_reset() for instance needs the
first piece but doesn't need the second. Similarly, when we implement
the ability to read a group of events using the transaction interface,
we would need the two pieces done independently.

Break up perf_event_read() and have it just read/update the counter,
and have the callers compute the total count if necessary.

Signed-off-by: Sukadev Bhattiprolu <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
(cherry picked from commit 01add3e)
Signed-off-by: Jonathan Maple <[email protected]>
1 parent 6f51544 commit 95d0220

File tree

1 file changed (+8, -6 lines)

kernel/events/core.c

Lines changed: 8 additions & 6 deletions
@@ -3278,7 +3278,7 @@ u64 perf_event_read_local(struct perf_event *event)
 	return val;
 }
 
-static u64 perf_event_read(struct perf_event *event)
+static void perf_event_read(struct perf_event *event)
 {
 	/*
 	 * If event is enabled and currently active on a CPU, update the
@@ -3304,8 +3304,6 @@ static u64 perf_event_read(struct perf_event *event)
 		update_event_times(event);
 		raw_spin_unlock_irqrestore(&ctx->lock, flags);
 	}
-
-	return perf_event_count(event);
 }
 
 /*
@@ -3734,14 +3732,18 @@ u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running)
 	*running = 0;
 
 	mutex_lock(&event->child_mutex);
-	total += perf_event_read(event);
+
+	perf_event_read(event);
+	total += perf_event_count(event);
+
 	*enabled += event->total_time_enabled +
 			atomic64_read(&event->child_total_time_enabled);
 	*running += event->total_time_running +
 			atomic64_read(&event->child_total_time_running);
 
 	list_for_each_entry(child, &event->child_list, child_list) {
-		total += perf_event_read(child);
+		perf_event_read(child);
+		total += perf_event_count(child);
 		*enabled += child->total_time_enabled;
 		*running += child->total_time_running;
 	}
@@ -3901,7 +3903,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 
 static void _perf_event_reset(struct perf_event *event)
 {
-	(void)perf_event_read(event);
+	perf_event_read(event);
 	local64_set(&event->count, 0);
 	perf_event_update_userpage(event);
 }
