From patchwork Fri Oct 13 23:16:28 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 115638
Delivered-To: patch@linaro.org
From: Wang Nan
Subject: [PATCH] perf tool: Don't discard prev in backward mode
Date: Sat, 14 Oct 2017 07:16:28 +0800
Message-ID: <20171013231628.27509-1-wangnan0@huawei.com>
X-Mailer: git-send-email 2.9.3
X-Mailing-List: linux-kernel@vger.kernel.org

perf record can switch its output file. The new output should only contain
the data collected after the switch. However, in overwrite backward mode the
new output still contains data from the old output, which also brings extra
overhead.

At the end of each mmap_read, the position up to which the ring buffer has
been processed is saved in md->prev. The next mmap_read should stop at
md->prev if that data has not been overwritten, which avoids processing the
same data twice. However, md->prev is currently discarded, so the next
mmap_read has to process the whole valid ring buffer, which probably
includes already-processed data.

Avoid calling backward_rb_find_range() when md->prev is still available.
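To make the bookkeeping concrete, here is a minimal user-space sketch of the
idea (it is not part of the patch; struct rb_reader, next_window() and the
numbers in main() are invented for illustration): the position saved at the
end of the previous read bounds the next read window, and a salvage scan is
only needed when that window no longer fits in the ring.

/*
 * Illustrative sketch only, not kernel code.  It models how a saved
 * read position ("prev") limits the next read window, with a fallback
 * when the writer has overwritten unread data.
 */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned long long u64;

struct rb_reader {
	u64 mask;	/* ring size - 1; ring size is a power of two */
	u64 prev;	/* position saved at the end of the previous read */
};

/* Compute the range [*start, *end) to process on the next read. */
static int next_window(struct rb_reader *r, u64 head, bool backward,
		       u64 *start, u64 *end)
{
	/* Same idea as the patched perf_mmap__push(): reuse prev. */
	*start = backward ? head : r->prev;
	*end   = backward ? r->prev : head;

	if (*end - *start > r->mask + 1) {
		/*
		 * More than one ring worth of data: something was
		 * overwritten.  A salvage scan (backward_rb_find_range()
		 * in the real code) would be needed here.
		 */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct rb_reader r = { .mask = 4095, .prev = 8192 };
	u64 start, end;

	/* Backward mode: head moved from 8192 down to 6144 since last read. */
	if (!next_window(&r, 6144, true, &start, &end))
		printf("process [%llu, %llu), %llu bytes\n",
		       start, end, end - start);

	r.prev = 6144;	/* remember where this read stopped */
	return 0;
}

In the patch itself this window computation replaces rb_find_range(), and
backward_rb_find_range() is only called on the overflow path.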
Signed-off-by: Wang Nan
Cc: Liang Kan
---
 tools/perf/util/mmap.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

-- 
2.9.3

Tested-by: Kan Liang

diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 9fe5f9c..df1de55 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -287,18 +287,6 @@ static int backward_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64 *end)
 	return -1;
 }
 
-static int rb_find_range(void *data, int mask, u64 head, u64 old,
-			 u64 *start, u64 *end, bool backward)
-{
-	if (!backward) {
-		*start = old;
-		*end = head;
-		return 0;
-	}
-
-	return backward_rb_find_range(data, mask, head, start, end);
-}
-
 int perf_mmap__push(struct perf_mmap *md, bool overwrite, bool backward,
 		    void *to, int push(void *to, void *buf, size_t size))
 {
@@ -310,19 +298,28 @@ int perf_mmap__push(struct perf_mmap *md, bool overwrite, bool backward,
 	void *buf;
 	int rc = 0;
 
-	if (rb_find_range(data, md->mask, head, old, &start, &end, backward))
-		return -1;
+	start = backward ? head : old;
+	end = backward ? old : head;
 
 	if (start == end)
 		return 0;
 
 	size = end - start;
 	if (size > (unsigned long)(md->mask) + 1) {
-		WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
+		if (!backward) {
+			WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
 
-		md->prev = head;
-		perf_mmap__consume(md, overwrite || backward);
-		return 0;
+			md->prev = head;
+			perf_mmap__consume(md, overwrite || backward);
+			return 0;
+		}
+
+		/*
+		 * Backward ring buffer is full. We still have a chance to read
+		 * most of data from it.
+		 */
+		if (backward_rb_find_range(data, md->mask, head, &start, &end))
+			return -1;
 	}
 
 	if ((start & md->mask) + size != (end & md->mask)) {