From patchwork Fri Feb 26 09:32:00 2016
X-Patchwork-Submitter: Wang Nan <wangnan0@huawei.com>
X-Patchwork-Id: 63026
From: Wang Nan <wangnan0@huawei.com>
To: Alexei Starovoitov, Arnaldo Carvalho de Melo
CC: Jiri Olsa, Li Zefan, Peter Zijlstra, Wang Nan, He Kuang,
    Brendan Gregg, Masami Hiramatsu, Namhyung Kim,
    linux-kernel@vger.kernel.org
Subject: [PATCH 12/46] perf core: Prepare writing into ring buffer from end
Date: Fri, 26 Feb 2016 09:32:00 +0000
Message-ID: <1456479154-136027-13-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1456479154-136027-1-git-send-email-wangnan0@huawei.com>
References: <1456479154-136027-1-git-send-email-wangnan0@huawei.com>

Convert perf_output_begin() to __perf_output_begin() and make the
latter able to write records from the end of the ring buffer.
Following commits will make use of the new 'backward' flag.

This patch doesn't introduce any extra performance overhead: since
__perf_output_begin() is __always_inline and 'backward' is a
compile-time constant at every call site, the existing forward path
compiles to the same code as before.
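For reviewers, here is a standalone userspace model of the space check
this patch introduces (not part of the patch; CIRC_SPACE/CIRC_CNT are
copied from <linux/circ_buf.h>, and the 16-byte buffer plus the head
and tail values are made up purely for illustration). It shows why the
head/tail arguments swap for a backward writer:

	/*
	 * Standalone sketch of ring_buffer_has_space(). Not kernel
	 * code: the macros come from <linux/circ_buf.h>, the numbers
	 * are hypothetical.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
	#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

	static bool ring_buffer_has_space(unsigned long head, unsigned long tail,
					  unsigned long data_size, unsigned int size,
					  bool backward)
	{
		/*
		 * A forward writer consumes the space in front of head,
		 * a backward writer the space behind it, hence the
		 * swapped arguments.
		 */
		if (!backward)
			return CIRC_SPACE(head, tail, data_size) >= size;
		else
			return CIRC_SPACE(tail, head, data_size) >= size;
	}

	int main(void)
	{
		unsigned long head = 8, tail = 4, data_size = 16;

		/* 11 bytes free in front of head: a 4-byte record fits. */
		printf("forward:  %d\n",
		       ring_buffer_has_space(head, tail, data_size, 4, false));
		/* Only 3 bytes free behind head: the same record does not. */
		printf("backward: %d\n",
		       ring_buffer_has_space(head, tail, data_size, 4, true));
		return 0;
	}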
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: He Kuang
Cc: Alexei Starovoitov
Cc: Arnaldo Carvalho de Melo
Cc: Brendan Gregg
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: Zefan Li
Cc: pi3orama@163.com
---
 kernel/events/ring_buffer.c | 42 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

-- 
1.8.3.4

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 22e1a47..37c11c6 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -102,8 +102,21 @@ out:
 	preempt_enable();
 }
 
-int perf_output_begin(struct perf_output_handle *handle,
-		      struct perf_event *event, unsigned int size)
+static bool __always_inline
+ring_buffer_has_space(unsigned long head, unsigned long tail,
+		      unsigned long data_size, unsigned int size,
+		      bool backward)
+{
+	if (!backward)
+		return CIRC_SPACE(head, tail, data_size) >= size;
+	else
+		return CIRC_SPACE(tail, head, data_size) >= size;
+}
+
+static int __always_inline
+__perf_output_begin(struct perf_output_handle *handle,
+		    struct perf_event *event, unsigned int size,
+		    bool backward)
 {
 	struct ring_buffer *rb;
 	unsigned long tail, offset, head;
@@ -146,9 +159,12 @@ int perf_output_begin(struct perf_output_handle *handle,
 	do {
 		tail = READ_ONCE(rb->user_page->data_tail);
 		offset = head = local_read(&rb->head);
-		if (!rb->overwrite &&
-		    unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size))
-			goto fail;
+		if (!rb->overwrite) {
+			if (unlikely(!ring_buffer_has_space(head, tail,
+							    perf_data_size(rb),
+							    size, backward)))
+				goto fail;
+		}
 
 		/*
 		 * The above forms a control dependency barrier separating the
@@ -162,9 +178,17 @@ int perf_output_begin(struct perf_output_handle *handle,
 		 * See perf_output_put_handle().
 		 */
 
-		head += size;
+		if (!backward)
+			head += size;
+		else
+			head -= size;
 	} while (local_cmpxchg(&rb->head, offset, head) != offset);
 
+	if (backward) {
+		offset = head;
+		head = (u64)(-head);
+	}
+
 	/*
 	 * We rely on the implied barrier() by local_cmpxchg() to ensure
 	 * none of the data stores below can be lifted up by the compiler.
@@ -206,6 +230,12 @@ out:
 	return -ENOSPC;
 }
 
+int perf_output_begin(struct perf_output_handle *handle,
+		      struct perf_event *event, unsigned int size)
+{
+	return __perf_output_begin(handle, event, size, false);
+}
+
 unsigned int perf_output_copy(struct perf_output_handle *handle,
 			      const void *buf, unsigned int len)
 {
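Reviewer's note, not part of the patch: the changelog says following
commits will utilize the 'backward' flag. A backward entry point would
presumably be a thin wrapper mirroring perf_output_begin() above; the
name and placement in this sketch are assumptions, not something this
patch adds:

	/*
	 * Hypothetical sketch only: this patch adds no backward
	 * caller. Reserves space at the end of the ring buffer by
	 * passing backward = true.
	 */
	int perf_output_begin_backward(struct perf_output_handle *handle,
				       struct perf_event *event, unsigned int size)
	{
		return __perf_output_begin(handle, event, size, true);
	}

Because __perf_output_begin() is __always_inline and 'backward' is
constant at each call site, such wrappers specialize at compile time,
which is why the changelog can claim no extra overhead on the existing
forward path.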