Commit c7ab62b

huangrui authored and Ingo Molnar committed
perf/x86/amd/power: Add AMD accumulated power reporting mechanism
Introduce an AMD accumulated power reporting mechanism for the Family
15h, Model 60h processor that can be used to calculate the average
power consumed by a processor during a measurement interval. The
feature support is indicated by CPUID Fn8000_0007_EDX[12]. This
feature will be implemented both in hwmon and perf. The current design
provides one event to report per package/processor power consumption
by counting each compute unit power value.

Here are the gory details of how the computation is done:

* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period
* Jmax: max compute unit accumulated power which is indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]
* Jx/Jy: compute unit accumulated power which is indicated by
  MSR_C001007a[CpuSwPwrAcc]
* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.

     N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc.

     Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.

     Jx = value read from CpuSwPwrAcc and
     Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.

     Jy = value read from CpuSwPwrAcc and
     Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over
   time period (y-x). Unit of the result is micro-Watt:

     if (Jy < Jx) // Rollover has occurred
             Jdelta = (Jy + Jmax) - Jx
     else
             Jdelta = Jy - Jx
     PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)

Simple example:

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
    CHK     include/config/kernel.release
    CHK     include/generated/uapi/linux/version.h
    CHK     include/generated/utsrelease.h
    CHK     include/generated/timeconst.h
    CHK     include/generated/bounds.h
    CHK     include/generated/asm-offsets.h
    CALL    scripts/checksyscalls.sh
    CHK     include/generated/compile.h
    SKIPPED include/generated/compile.h
  Building modules, stage 2.
  Kernel: arch/x86/boot/bzImage is ready  (#40)
    MODPOST 4225 modules

   Performance counter stats for 'system wide':

               183.44 mWatts power/power-pkg/

       341.837270111 seconds time elapsed

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

   Performance counter stats for 'system wide':

                 0.18 mWatts power/power-pkg/

        10.012551815 seconds time elapsed

Suggested-by: Peter Zijlstra <[email protected]>
Suggested-by: Ingo Molnar <[email protected]>
Suggested-by: Borislav Petkov <[email protected]>
Signed-off-by: Huang Rui <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vince Weaver <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
[ Fixed the modular build. ]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 01fe03f commit c7ab62b

5 files changed: +369 -2 lines changed


arch/x86/Kconfig

Lines changed: 9 additions & 0 deletions
@@ -1206,6 +1206,15 @@ config MICROCODE_OLD_INTERFACE
 	def_bool y
 	depends on MICROCODE
 
+config PERF_EVENTS_AMD_POWER
+	depends on PERF_EVENTS && CPU_SUP_AMD
+	tristate "AMD Processor Power Reporting Mechanism"
+	---help---
+	  Provide power reporting mechanism support for AMD processors.
+	  Currently, it leverages X86_FEATURE_ACC_POWER
+	  (CPUID Fn8000_0007_EDX[12]) interface to calculate the
+	  average power consumption on Family 15h processors.
+
 config X86_MSR
 	tristate "/dev/cpu/*/msr - Model-specific register support"
 	---help---

arch/x86/events/Makefile

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 obj-y					+= core.o
 
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd/core.o amd/uncore.o
+obj-$(CONFIG_PERF_EVENTS_AMD_POWER)	+= amd/power.o
 obj-$(CONFIG_X86_LOCAL_APIC)		+= amd/ibs.o msr.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd/iommu.o

arch/x86/events/amd/power.c

Lines changed: 353 additions & 0 deletions
@@ -0,0 +1,353 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "../perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR	0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR	0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
+#define AMD_POWER_EVENT_MASK		0xFFULL
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power from
+ * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
+ * which are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static void event_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_pwr_acc, new_pwr_acc, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+	prev_pwr_acc = hwc->pwr_acc;
+	prev_ptsc = hwc->ptsc;
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, new_pwr_acc);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	/*
+	 * Calculate the CU power consumption over a time period, the unit of
+	 * final value (delta) is micro-Watts. Then add it to the event count.
+	 */
+	if (new_pwr_acc < prev_pwr_acc) {
+		delta = max_cu_acc_power + new_pwr_acc;
+		delta -= prev_pwr_acc;
+	} else
+		delta = new_pwr_acc - prev_pwr_acc;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+}
+
+static void __pmu_event_start(struct perf_event *event)
+{
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, event->hw.ptsc);
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, event->hw.pwr_acc);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	__pmu_event_start(event);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if software counter update is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(event);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    /* no sampling */
+	    event->attr.sample_period)
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	event_update(event);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+/*
+ * Currently it only supports to report the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	/* system-wide only */
+	.task_ctx_nr	= perf_invalid_context,
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static void power_cpu_exit(int cpu)
+{
+	int target;
+
+	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask))
+		return;
+
+	/*
+	 * Find a new CPU on the same compute unit, if was set in cpumask
+	 * and still some CPUs on compute unit. Then migrate event and
+	 * context to new CPU.
+	 */
+	target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
+	if (target < nr_cpumask_bits) {
+		cpumask_set_cpu(target, &cpu_mask);
+		perf_pmu_migrate_context(&pmu_class, cpu, target);
+	}
+}
+
+static void power_cpu_init(int cpu)
+{
+	int target;
+
+	/*
+	 * 1) If any CPU is set at cpu_mask in the same compute unit, do
+	 *    nothing.
+	 * 2) If no CPU is set at cpu_mask in the same compute unit,
+	 *    set current STARTING CPU.
+	 *
+	 * Note: if there is a CPU aside of the new one already in the
+	 * sibling mask, then it is also in cpu_mask.
+	 */
+	target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
+	if (target >= nr_cpumask_bits)
+		cpumask_set_cpu(cpu, &cpu_mask);
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DOWN_FAILED:
+	case CPU_STARTING:
+		power_cpu_init(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		power_cpu_exit(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block power_cpu_notifier_nb = {
+	.notifier_call	= power_cpu_notifier,
+	.priority	= CPU_PRI_PERF,
+};
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int cpu, target, ret;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &max_cu_acc_power)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit. */
+	for_each_online_cpu(cpu) {
+		target = cpumask_first(topology_sibling_cpumask(cpu));
+		if (!cpumask_test_cpu(target, &cpu_mask))
+			cpumask_set_cpu(target, &cpu_mask);
+	}
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	__register_cpu_notifier(&power_cpu_notifier_nb);
+
+	pr_info("AMD Power PMU detected\n");
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+module_init(amd_power_pmu_init);
+
+static void __exit amd_power_pmu_exit(void)
+{
+	cpu_notifier_register_begin();
+	__unregister_cpu_notifier(&power_cpu_notifier_nb);
+	cpu_notifier_register_done();
+
+	perf_pmu_unregister(&pmu_class);
+}
+module_exit(amd_power_pmu_exit);
+
+MODULE_AUTHOR("Huang Rui <[email protected]>");
+MODULE_DESCRIPTION("AMD Processor Power Reporting Mechanism");
+MODULE_LICENSE("GPL v2");

arch/x86/events/core.c

Lines changed: 2 additions & 2 deletions
@@ -1602,8 +1602,7 @@ __init struct attribute **merge_attr(struct attribute **a, struct attribute **b)
 	return new;
 }
 
-ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
-			  char *page)
+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr, char *page)
 {
 	struct perf_pmu_events_attr *pmu_attr = \
 		container_of(attr, struct perf_pmu_events_attr, attr);
@@ -1615,6 +1614,7 @@ ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 
 	return x86_pmu.events_sysfs_show(page, config);
 }
+EXPORT_SYMBOL_GPL(events_sysfs_show);
 
 EVENT_ATTR(cpu-cycles,		CPU_CYCLES	);
 EVENT_ATTR(instructions,	INSTRUCTIONS	);
