From patchwork Mon Feb 22 09:10:44 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 62511
From: Wang Nan
To: Alexei Starovoitov, Arnaldo Carvalho de Melo, Brendan Gregg
CC: Adrian Hunter, Cody P Schafer, "David S. Miller", He Kuang,
	Jérémie Galarneau, Jiri Olsa, Kirill Smelkov, Li Zefan,
	Masami Hiramatsu, Namhyung Kim, Peter Zijlstra, Wang Nan
Subject: [PATCH 17/48] perf core: Reduce perf event output overhead by new overflow handler
Date: Mon, 22 Feb 2016 09:10:44 +0000
Message-ID: <1456132275-98875-18-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1456132275-98875-1-git-send-email-wangnan0@huawei.com>
References: <1456132275-98875-1-git-send-email-wangnan0@huawei.com>

Create onward- and backward-specific overflow handlers and select between
them at event allocation time according to the event's backward setting.
Normal sampling events then no longer need to check an event's backward
setting on every sample. This is the last patch of the backward-writing
patch set; after it, no extra overhead is introduced on the fast path of
sample output.
Signed-off-by: Wang Nan
Cc: He Kuang
Cc: Alexei Starovoitov
Cc: Arnaldo Carvalho de Melo
Cc: Brendan Gregg
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: Zefan Li
Cc: pi3orama@163.com
---
 include/linux/perf_event.h  | 17 +++++++++++++++--
 kernel/events/core.c        | 41 ++++++++++++++++++++++++++++++++++++-----
 kernel/events/ring_buffer.c | 12 ++++++++++++
 3 files changed, 63 insertions(+), 7 deletions(-)

-- 
1.8.3.4

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 0ce1015..e466cc6 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -827,9 +827,15 @@ extern int perf_event_overflow(struct perf_event *event,
 				 struct perf_sample_data *data,
 				 struct pt_regs *regs);
 
+extern void perf_event_output_onward(struct perf_event *event,
+				     struct perf_sample_data *data,
+				     struct pt_regs *regs);
+extern void perf_event_output_backward(struct perf_event *event,
+				       struct perf_sample_data *data,
+				       struct pt_regs *regs);
 extern void perf_event_output(struct perf_event *event,
-				struct perf_sample_data *data,
-				struct pt_regs *regs);
+			      struct perf_sample_data *data,
+			      struct pt_regs *regs);
 
 extern void
 perf_event_header__init_id(struct perf_event_header *header,
@@ -1036,6 +1042,13 @@ static inline bool is_write_backward(struct perf_event *event)
 
 extern int perf_output_begin(struct perf_output_handle *handle,
 			     struct perf_event *event, unsigned int size);
+extern int perf_output_begin_onward(struct perf_output_handle *handle,
+				    struct perf_event *event,
+				    unsigned int size);
+extern int perf_output_begin_backward(struct perf_output_handle *handle,
+				       struct perf_event *event,
+				       unsigned int size);
+
 extern void perf_output_end(struct perf_output_handle *handle);
 extern unsigned int perf_output_copy(struct perf_output_handle *handle,
 			     const void *buf, unsigned int len);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 9353154..ce70f54 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5531,9 +5531,13 @@ void perf_prepare_sample(struct perf_event_header *header,
 	}
 }
 
-void perf_event_output(struct perf_event *event,
-			struct perf_sample_data *data,
-			struct pt_regs *regs)
+static void __always_inline
+__perf_event_output(struct perf_event *event,
+		    struct perf_sample_data *data,
+		    struct pt_regs *regs,
+		    int (*output_begin)(struct perf_output_handle *,
+					struct perf_event *,
+					unsigned int))
 {
 	struct perf_output_handle handle;
 	struct perf_event_header header;
@@ -5543,7 +5547,7 @@ void perf_event_output(struct perf_event *event,
 
 	perf_prepare_sample(&header, data, event, regs);
 
-	if (perf_output_begin(&handle, event, header.size))
+	if (output_begin(&handle, event, header.size))
 		goto exit;
 
 	perf_output_sample(&handle, &header, data, event);
@@ -5554,6 +5558,30 @@ exit:
 	rcu_read_unlock();
 }
 
+void
+perf_event_output_onward(struct perf_event *event,
+			 struct perf_sample_data *data,
+			 struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin_onward);
+}
+
+void
+perf_event_output_backward(struct perf_event *event,
+			   struct perf_sample_data *data,
+			   struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin_backward);
+}
+
+void
+perf_event_output(struct perf_event *event,
+		  struct perf_sample_data *data,
+		  struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin);
+}
+
 /*
  * read event_id
  */
@@ -7868,8 +7896,11 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 	if (overflow_handler) {
 		event->overflow_handler = overflow_handler;
 		event->overflow_handler_context = context;
+	} else if (is_write_backward(event)) {
+		event->overflow_handler = perf_event_output_backward;
+		event->overflow_handler_context = NULL;
 	} else {
-		event->overflow_handler = perf_event_output;
+		event->overflow_handler = perf_event_output_onward;
 		event->overflow_handler_context = NULL;
 	}
 
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 80b1fa7..7e30e012 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -230,6 +230,18 @@ out:
 	return -ENOSPC;
 }
 
+int perf_output_begin_onward(struct perf_output_handle *handle,
+			     struct perf_event *event, unsigned int size)
+{
+	return __perf_output_begin(handle, event, size, false);
+}
+
+int perf_output_begin_backward(struct perf_output_handle *handle,
+			       struct perf_event *event, unsigned int size)
+{
+	return __perf_output_begin(handle, event, size, true);
+}
+
 int perf_output_begin(struct perf_output_handle *handle,
 		      struct perf_event *event, unsigned int size)
 {
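
The fast-path saving comes from a general pattern: pass a compile-time-known
callback into an always-inline helper, and the compiler emits one
specialized, branch-free copy per wrapper, with no indirect call and no
per-sample direction test. Below is a minimal user-space sketch of that
pattern, not kernel code; all names in it (struct handle, emit_sample,
output_begin_forward/backward) are invented for illustration, and only its
shape mirrors __perf_event_output() and its wrappers.

/*
 * Standalone sketch of the specialization pattern used by
 * __perf_event_output() in the patch above.  All identifiers are
 * invented for this example.
 */
#include <stdio.h>

struct handle {
	long head;		/* toy stand-in for the ring-buffer head */
};

/* Stand-ins for __perf_output_begin(..., backward): the forward variant
 * advances the head, the backward variant moves it down. */
static int output_begin_forward(struct handle *h, unsigned int size)
{
	h->head += size;
	return 0;
}

static int output_begin_backward(struct handle *h, unsigned int size)
{
	h->head -= size;
	return 0;
}

/* Shared body, analogous to __perf_event_output().  The kernel spells the
 * attribute __always_inline; plain "inline" alone would only be a hint. */
static inline __attribute__((always_inline)) void
emit_sample(struct handle *h, unsigned int size,
	    int (*begin)(struct handle *, unsigned int))
{
	if (begin(h, size))
		return;
	printf("wrote %u bytes, head now %ld\n", size, h->head);
}

/* Per-direction wrappers, analogous to perf_event_output_onward() and
 * perf_event_output_backward(): each compiles to its own specialized copy. */
void emit_sample_forward(struct handle *h, unsigned int size)
{
	emit_sample(h, size, output_begin_forward);
}

void emit_sample_backward(struct handle *h, unsigned int size)
{
	emit_sample(h, size, output_begin_backward);
}

int main(void)
{
	struct handle h = { .head = 0 };

	emit_sample_forward(&h, 16);	/* head becomes 16 */
	emit_sample_backward(&h, 16);	/* head back to 0  */
	return 0;
}

Building this with -O2 and inspecting the assembly should show
emit_sample_forward() and emit_sample_backward() each containing a direct
add or subtract, with no call through the function pointer, which is the
same effect the patch aims for on the sampling fast path.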