From patchwork Fri May 13 07:56:00 2016
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 67742
From: Wang Nan
Cc: He Kuang, Arnaldo Carvalho de Melo, Jiri Olsa, Masami Hiramatsu,
	Namhyung Kim, Zefan Li
Subject: [PATCH 03/17] perf tools: Automatically add new channel according to evlist
Date: Fri, 13 May 2016 07:56:00 +0000
Message-ID: <1463126174-119290-4-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1463126174-119290-1-git-send-email-wangnan0@huawei.com>
References: <1463126174-119290-1-git-send-email-wangnan0@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

perf_evlist__channel_find() can be used to find a proper channel based
on the properties of an evsel. If no matching channel exists, it can
create a new one for it. After this patch there is no need to create
the default channel explicitly.
Signed-off-by: Wang Nan
Signed-off-by: He Kuang
Cc: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Zefan Li
Cc: pi3orama@163.com
---
 tools/perf/builtin-record.c |  5 -----
 tools/perf/util/evlist.c    | 47 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 42 insertions(+), 10 deletions(-)

-- 
1.8.3.4

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 6e44834..3140378 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -317,11 +317,6 @@ try_again:
 	}
 
 	perf_evlist__channel_reset(evlist);
-	rc = perf_evlist__channel_add(evlist, 0, true);
-	if (rc < 0)
-		goto out;
-	rc = 0;
-
 	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages, false,
 				 opts->auxtrace_mmap_pages,
 				 opts->auxtrace_snapshot_mode) < 0) {
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 6c11b9e..47a8f1f 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1017,6 +1017,43 @@ static int __perf_evlist__mmap(struct perf_evlist *evlist, int idx,
 	return 0;
 }
 
+static unsigned long
+perf_evlist__channel_for_evsel(struct perf_evsel *evsel __maybe_unused)
+{
+	return 0;
+}
+
+static int
+perf_evlist__channel_find(struct perf_evlist *evlist,
+			  struct perf_evsel *evsel,
+			  bool add_new)
+{
+	unsigned long flag = perf_evlist__channel_for_evsel(evsel);
+	int i;
+
+	flag |= PERF_EVLIST__CHANNEL_ENABLED;
+	for (i = 0; i < perf_evlist__channel_nr(evlist); i++)
+		if (evlist->channel_flags[i] == flag)
+			return i;
+	if (add_new)
+		return perf_evlist__channel_add(evlist, flag, false);
+	return -ENOENT;
+}
+
+static int
+perf_evlist__channel_complete(struct perf_evlist *evlist)
+{
+	struct perf_evsel *evsel;
+	int err;
+
+	evlist__for_each(evlist, evsel) {
+		err = perf_evlist__channel_find(evlist, evsel, true);
+		if (err < 0)
+			return err;
+	}
+	return 0;
+}
+
 static int perf_evlist__mmap_per_evsel(struct perf_evlist *evlist, int idx,
 				       struct mmap_params *mp, int cpu,
 				       int thread, int *output)
@@ -1244,6 +1281,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 			 bool overwrite, unsigned int auxtrace_pages,
 			 bool auxtrace_overwrite)
 {
+	int err;
 	struct perf_evsel *evsel;
 	const struct cpu_map *cpus = evlist->cpus;
 	const struct thread_map *threads = evlist->threads;
@@ -1251,6 +1289,10 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 		.prot = PROT_READ | (overwrite ? 0 : PROT_WRITE),
 	};
 
+	err = perf_evlist__channel_complete(evlist);
+	if (err)
+		return err;
+
 	if (evlist->mmap == NULL && perf_evlist__alloc_mmap(evlist) < 0)
 		return -ENOMEM;
 
@@ -1281,12 +1323,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages,
 		      bool overwrite)
 {
-	int err;
-
 	perf_evlist__channel_reset(evlist);
-	err = perf_evlist__channel_add(evlist, 0, true);
-	if (err < 0)
-		return err;
 	return perf_evlist__mmap_ex(evlist, pages, overwrite, 0, false);
 }
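
(Not part of the patch.) For readers without the perf tree at hand, below is a
minimal standalone sketch of the find-or-add lookup that the new
perf_evlist__channel_find() performs. The toy_* names, CHANNEL_MAX and
CHANNEL_ENABLED are simplified stand-ins invented for this example; the real
code lives in tools/perf/util/evlist.c and works on the evlist's channel_flags
array with PERF_EVLIST__CHANNEL_ENABLED. Only the "look up by flags, optionally
add when missing" control flow mirrors the diff above.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define CHANNEL_MAX	8
#define CHANNEL_ENABLED	(1UL << 0)	/* stand-in for PERF_EVLIST__CHANNEL_ENABLED */

struct toy_evlist {
	unsigned long channel_flags[CHANNEL_MAX];
	int nr_channels;
};

/* Append a new channel with the given flags; returns its index. */
static int toy_channel_add(struct toy_evlist *evlist, unsigned long flags)
{
	if (evlist->nr_channels >= CHANNEL_MAX)
		return -ENOSPC;
	evlist->channel_flags[evlist->nr_channels] = flags;
	return evlist->nr_channels++;
}

/*
 * Find a channel whose flags match; create one when missing and add_new
 * is set. Same shape as perf_evlist__channel_find() in the diff.
 */
static int toy_channel_find(struct toy_evlist *evlist, unsigned long flags,
			    bool add_new)
{
	int i;

	flags |= CHANNEL_ENABLED;
	for (i = 0; i < evlist->nr_channels; i++)
		if (evlist->channel_flags[i] == flags)
			return i;
	if (add_new)
		return toy_channel_add(evlist, flags);
	return -ENOENT;
}

int main(void)
{
	struct toy_evlist evlist = { .nr_channels = 0 };

	/* First lookup misses and creates channel 0 ... */
	printf("first:  %d\n", toy_channel_find(&evlist, 0, true));
	/* ... the second lookup with the same flags reuses it. */
	printf("second: %d\n", toy_channel_find(&evlist, 0, true));
	/* A lookup that may not add a channel reports an error. */
	printf("miss:   %d\n", toy_channel_find(&evlist, 1UL << 1, false));
	return 0;
}

The first two lookups both print index 0 (the second reuses the channel the
first created), while the last prints a negative errno (-ENOENT) because it was
not allowed to add a new channel; perf_evlist__channel_complete() relies on the
add_new=true path to make sure every evsel has a channel before mmap.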