From patchwork Mon Jan 18 11:52:01 2016
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 59923
From: Wang Nan <wangnan0@huawei.com>
Cc: Wang Nan, He Kuang, Alexei Starovoitov, Arnaldo Carvalho de Melo,
	Brendan Gregg, David S. Miller, Jiri Olsa, Masami Hiramatsu,
	Namhyung Kim
Subject: [PATCH] perf core: Introduce new ioctl options to pause and resume ring buffer
Date: Mon, 18 Jan 2016 11:52:01 +0000
Message-ID: <1453117921-122482-1-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <20160112141430.GH6357@twins.programming.kicks-ass.net>
References: <20160112141430.GH6357@twins.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

Add a pair of ioctl() commands to pause and resume ring-buffer output.

In some situations we want to read from the ring buffer only after
ensuring that nothing can write to it for the duration of the read.
Without this patch the only way to achieve that is to disable every
event attached to the ring buffer.

This is preparation for supporting an overwritable ring buffer with
TAILSIZE selected.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: He Kuang
Cc: Alexei Starovoitov
Cc: Arnaldo Carvalho de Melo
Cc: Brendan Gregg
Cc: David S. Miller
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Zefan Li
Cc: pi3orama@163.com
---
 include/uapi/linux/perf_event.h |  2 ++
 kernel/events/core.c            | 14 ++++++++++++++
 kernel/events/internal.h        | 11 +++++++++++
 kernel/events/ring_buffer.c     |  4 +++-
 4 files changed, 30 insertions(+), 1 deletion(-)

-- 
1.8.3.4

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 4e8dde8..9508070 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -402,6 +402,8 @@ struct perf_event_attr {
 #define PERF_EVENT_IOC_SET_FILTER	_IOW('$', 6, char *)
 #define PERF_EVENT_IOC_ID		_IOR('$', 7, __u64 *)
 #define PERF_EVENT_IOC_SET_BPF		_IOW('$', 8, __u32)
+#define PERF_EVENT_IOC_PAUSE_OUTPUT	_IO ('$', 9)
+#define PERF_EVENT_IOC_RESUME_OUTPUT	_IO ('$', 10)
 
 enum perf_event_ioc_flags {
 	PERF_IOC_FLAG_GROUP		= 1U << 0,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2d59b59..d5a0c34 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4241,6 +4241,20 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
 	case PERF_EVENT_IOC_SET_BPF:
 		return perf_event_set_bpf_prog(event, arg);
 
+	case PERF_EVENT_IOC_PAUSE_OUTPUT:
+	case PERF_EVENT_IOC_RESUME_OUTPUT: {
+		struct ring_buffer *rb;
+
+		rcu_read_lock();
+		rb = rcu_dereference(event->rb);
+		if (!rb) {
+			rcu_read_unlock();
+			return -EINVAL;
+		}
+		rb_toggle_paused(rb, cmd == PERF_EVENT_IOC_PAUSE_OUTPUT);
+		rcu_read_unlock();
+		return 0;
+	}
 	default:
 		return -ENOTTY;
 	}
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 2bbad9c..6a93d1b 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -18,6 +18,7 @@ struct ring_buffer {
 #endif
 	int				nr_pages;	/* nr of data pages */
 	int				overwrite;	/* can overwrite itself */
+	int				paused;		/* ring buffer output paused */
 
 	atomic_t			poll;		/* POLL_ for wakeups */
@@ -65,6 +66,16 @@ static inline void rb_free_rcu(struct rcu_head *rcu_head)
 	rb_free(rb);
 }
 
+static inline void
+rb_toggle_paused(struct ring_buffer *rb, bool pause)
+{
+	if (!pause && rb->nr_pages)
+		rb->paused = 0;
+	else
+		rb->paused = 1;
+}
+
 extern struct ring_buffer *
 rb_alloc(int nr_pages, long watermark, int cpu, int flags);
 
 extern void perf_event_wakeup(struct perf_event *event);
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 5f8bd89..11a1676 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -125,7 +125,7 @@ int perf_output_begin(struct perf_output_handle *handle,
 	if (unlikely(!rb))
 		goto out;
 
-	if (unlikely(!rb->nr_pages))
+	if (unlikely(rb->paused))
 		goto out;
 
 	handle->rb = rb;
@@ -245,6 +245,8 @@ ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
 	INIT_LIST_HEAD(&rb->event_list);
 	spin_lock_init(&rb->event_lock);
 	init_irq_work(&rb->irq_work, rb_irq_work);
+
+	rb->paused = rb->nr_pages ? 0 : 1;
 }
 
 static void ring_buffer_put_async(struct ring_buffer *rb)