From patchwork Fri Feb 19 11:44:28 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 62299
From: Wang Nan
To: Alexei Starovoitov, Arnaldo Carvalho de Melo, Brendan Gregg
Cc: Adrian Hunter, Cody P Schafer, "David S. Miller", He Kuang,
    Jérémie Galarneau, Jiri Olsa, Kirill Smelkov, Li Zefan,
    Masami Hiramatsu, Namhyung Kim, Peter Zijlstra, Wang Nan
Subject: [PATCH 40/55] perf tools: Add evlist channel helpers
Date: Fri, 19 Feb 2016 11:44:28 +0000
Message-ID: <1455882283-79592-41-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1455882283-79592-1-git-send-email-wangnan0@huawei.com>
References: <1455882283-79592-1-git-send-email-wangnan0@huawei.com>

This commit introduces several helpers to support the concept of
channels. A channel holds a group of evsels that are configured
differently from the other groups. Channels will be used for
overwritable evsels, which let perf record some events continuously
while capturing snapshots of other events only when something happens.
Tracking events (mmap, mmap2, fork, exit, ...) are another group of
events worth putting into a separate channel.

Channels are represented by an array of channel flags. Each channel
contains evlist->nr_mmaps mmaps. Channels are configured before
perf_evlist__mmap_ex(); during that function, nr_mmaps mmaps for each
channel are allocated together as one big array.
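To make the layout concrete, here is a rough standalone sketch (not
part of the patch; the helper and constant names below are purely
illustrative) of the channel-major indexing described above:

	#include <stdio.h>

	/* Illustrative only: the patch itself starts with one channel. */
	#define NR_CHANNELS	2

	/*
	 * All mmaps live in one big array: the nr_mmaps maps of channel 0
	 * first, then the nr_mmaps maps of channel 1, and so on.
	 */
	static int big_array_idx(int nr_mmaps, int channel, int idx_in_channel)
	{
		return channel * nr_mmaps + idx_in_channel;
	}

	int main(void)
	{
		int nr_mmaps = 4;

		/* mmap 2 of channel 1 sits at slot 1 * 4 + 2 = 6 */
		printf("%d\n", big_array_idx(nr_mmaps, 1, 2));
		return 0;
	}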
perf_evlist__channel_idx() converts between a channel number plus a
per-channel index and an index into the big array. For API functions
that accept an idx, _ex() versions are introduced which select an mmap
from a specific channel.

Signed-off-by: Wang Nan
Signed-off-by: He Kuang
Cc: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Zefan Li
Cc: pi3orama@163.com
---
 tools/perf/builtin-record.c |   6 ++
 tools/perf/util/evlist.c    | 132 ++++++++++++++++++++++++++++++++++++++++++--
 tools/perf/util/evlist.h    |  58 +++++++++++++++++++
 3 files changed, 190 insertions(+), 6 deletions(-)

-- 
1.8.3.4

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 3a7de24..24c776c 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -356,6 +356,12 @@ try_again:
 		goto out;
 	}
 
+	perf_evlist__channel_reset(evlist);
+	rc = perf_evlist__channel_add(evlist, 0, true);
+	if (rc < 0)
+		goto out;
+	rc = 0;
+
 	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages, false,
 				 opts->auxtrace_mmap_pages,
 				 opts->auxtrace_snapshot_mode) < 0) {
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index fef465a..a6b52fc 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -679,14 +679,51 @@ static struct perf_evsel *perf_evlist__event2evsel(struct perf_evlist *evlist,
 	return NULL;
 }
 
-union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
+int perf_evlist__channel_idx(struct perf_evlist *evlist,
+			     int *p_channel, int *p_idx)
+{
+	int channel = *p_channel;
+	int _idx = *p_idx;
+
+	if (_idx < 0)
+		return -EINVAL;
+	/*
+	 * Negative channel means caller explicitly use real index.
+	 */
+	if (channel < 0) {
+		channel = perf_evlist__idx_channel(evlist, _idx);
+		_idx = _idx % evlist->nr_mmaps;
+	}
+	if (channel < 0)
+		return channel;
+	if (channel >= PERF_EVLIST__NR_CHANNELS)
+		return -E2BIG;
+	if (_idx >= evlist->nr_mmaps)
+		return -E2BIG;
+
+	*p_channel = channel;
+	*p_idx = evlist->nr_mmaps * channel + _idx;
+	return 0;
+}
+
+union perf_event *perf_evlist__mmap_read_ex(struct perf_evlist *evlist,
+					    int channel, int idx)
 {
+	int err = perf_evlist__channel_idx(evlist, &channel, &idx);
 	struct perf_mmap *md = &evlist->mmap[idx];
 	u64 head;
-	u64 old = md->prev;
-	unsigned char *data = md->base + page_size;
+	u64 old;
+	unsigned char *data;
 	union perf_event *event = NULL;
 
+	if (err || !perf_evlist__channel_is_enabled(evlist, channel)) {
+		pr_err("ERROR: invalid mmap index: channel %d, idx: %d\n",
+		       channel, idx);
+		return NULL;
+	}
+	old = md->prev;
+	data = md->base + page_size;
+
 	/*
 	 * Check if event was unmapped due to a POLLHUP/POLLERR.
 	 */
@@ -748,6 +785,11 @@ union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
 	return event;
 }
 
+union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
+{
+	return perf_evlist__mmap_read_ex(evlist, -1, idx);
+}
+
 static bool perf_mmap__empty(struct perf_mmap *md)
 {
 	return perf_mmap__read_head(md) == md->prev && !md->auxtrace_mmap.base;
@@ -766,10 +808,18 @@ static void perf_evlist__mmap_put(struct perf_evlist *evlist, int idx)
 		__perf_evlist__munmap(evlist, idx);
 }
 
-void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx)
+void perf_evlist__mmap_consume_ex(struct perf_evlist *evlist,
+				  int channel, int idx)
 {
+	int err = perf_evlist__channel_idx(evlist, &channel, &idx);
 	struct perf_mmap *md = &evlist->mmap[idx];
 
+	if (err || !perf_evlist__channel_is_enabled(evlist, channel)) {
+		pr_err("ERROR: invalid mmap index: channel %d, idx: %d\n",
+		       channel, idx);
+		return;
+	}
+
 	if (!evlist->overwrite) {
 		u64 old = md->prev;
@@ -780,6 +830,11 @@ void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx)
 	perf_evlist__mmap_put(evlist, idx);
 }
 
+void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx)
+{
+	perf_evlist__mmap_consume_ex(evlist, -1, idx);
+}
+
 int __weak auxtrace_mmap__mmap(struct auxtrace_mmap *mm __maybe_unused,
 			       struct auxtrace_mmap_params *mp __maybe_unused,
 			       void *userpg __maybe_unused,
@@ -825,7 +880,7 @@ void perf_evlist__munmap(struct perf_evlist *evlist)
 	if (evlist->mmap == NULL)
 		return;
 
-	for (i = 0; i < evlist->nr_mmaps; i++)
+	for (i = 0; i < perf_evlist__mmap_nr(evlist); i++)
 		__perf_evlist__munmap(evlist, i);
 
 	zfree(&evlist->mmap);
@@ -833,10 +888,17 @@ static int perf_evlist__alloc_mmap(struct perf_evlist *evlist)
 {
+	int total_mmaps;
+
 	evlist->nr_mmaps = cpu_map__nr(evlist->cpus);
 	if (cpu_map__empty(evlist->cpus))
 		evlist->nr_mmaps = thread_map__nr(evlist->threads);
-	evlist->mmap = zalloc(evlist->nr_mmaps * sizeof(struct perf_mmap));
+
+	total_mmaps = perf_evlist__mmap_nr(evlist);
+	if (!total_mmaps)
+		return -EINVAL;
+
+	evlist->mmap = zalloc(total_mmaps * sizeof(struct perf_mmap));
 	return evlist->mmap != NULL ? 0 : -ENOMEM;
 }
@@ -1137,6 +1199,12 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages,
 		      bool overwrite)
 {
+	int err;
+
+	perf_evlist__channel_reset(evlist);
+	err = perf_evlist__channel_add(evlist, 0, true);
+	if (err < 0)
+		return err;
 	return perf_evlist__mmap_ex(evlist, pages, overwrite, 0, false);
 }
@@ -1764,3 +1832,55 @@ perf_evlist__find_evsel_by_str(struct perf_evlist *evlist,
 
 	return NULL;
 }
+
+int perf_evlist__channel_nr(struct perf_evlist *evlist)
+{
+	int i;
+
+	for (i = PERF_EVLIST__NR_CHANNELS - 1; i >= 0; i--) {
+		unsigned long flags = evlist->channel_flags[i];
+
+		if (flags & PERF_EVLIST__CHANNEL_ENABLED)
+			return i + 1;
+	}
+	return 0;
+}
+
+int perf_evlist__mmap_nr(struct perf_evlist *evlist)
+{
+	return evlist->nr_mmaps * perf_evlist__channel_nr(evlist);
+}
+
+void perf_evlist__channel_reset(struct perf_evlist *evlist)
+{
+	int i;
+
+	BUG_ON(evlist->mmap);
+
+	for (i = 0; i < PERF_EVLIST__NR_CHANNELS; i++)
+		evlist->channel_flags[i] = 0;
+}
+
+int perf_evlist__channel_add(struct perf_evlist *evlist,
+			     unsigned long flag,
+			     bool is_default)
+{
+	int n = perf_evlist__channel_nr(evlist);
+	unsigned long *flags = evlist->channel_flags;
+
+	BUG_ON(evlist->mmap);
+
+	if (n >= PERF_EVLIST__NR_CHANNELS) {
+		pr_debug("ERROR: too many channels. Increase PERF_EVLIST__NR_CHANNELS\n");
+		return -ENOSPC;
+	}
+
+	if (is_default) {
+		memmove(&flags[1], &flags[0],
+			sizeof(evlist->channel_flags) -
+			sizeof(evlist->channel_flags[0]));
+		n = 0;
+	}
+	flags[n] = flag | PERF_EVLIST__CHANNEL_ENABLED;
+	return n;
+}
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index a0d1522..1812652 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -20,6 +20,11 @@ struct record_opts;
 #define PERF_EVLIST__HLIST_BITS 8
 #define PERF_EVLIST__HLIST_SIZE (1 << PERF_EVLIST__HLIST_BITS)
 
+#define PERF_EVLIST__NR_CHANNELS 1
+enum perf_evlist_mmap_flag {
+	PERF_EVLIST__CHANNEL_ENABLED = 1,
+};
+
 /**
  * struct perf_mmap - perf's ring buffer mmap details
  *
@@ -52,6 +57,7 @@ struct perf_evlist {
 		pid_t pid;
 	} workload;
 	struct fdarray pollfd;
+	unsigned long channel_flags[PERF_EVLIST__NR_CHANNELS];
 	struct perf_mmap *mmap;
 	struct thread_map *threads;
 	struct cpu_map *cpus;
@@ -116,9 +122,61 @@ struct perf_evsel *perf_evlist__id2evsel_strict(struct perf_evlist *evlist,
 
 struct perf_sample_id *perf_evlist__id2sid(struct perf_evlist *evlist, u64 id);
 
+union perf_event *perf_evlist__mmap_read_ex(struct perf_evlist *evlist,
+					    int channel, int idx);
 union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx);
 
+void perf_evlist__mmap_consume_ex(struct perf_evlist *evlist,
+				  int channel, int idx);
 void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
+int perf_evlist__mmap_nr(struct perf_evlist *evlist);
+
+int perf_evlist__channel_nr(struct perf_evlist *evlist);
+void perf_evlist__channel_reset(struct perf_evlist *evlist);
+int perf_evlist__channel_add(struct perf_evlist *evlist,
+			     unsigned long flag,
+			     bool is_default);
+
+static inline bool
+__perf_evlist__channel_check(struct perf_evlist *evlist, int channel,
+			     enum perf_evlist_mmap_flag bits)
+{
+	if (channel >= PERF_EVLIST__NR_CHANNELS)
+		return false;
+
+	return (evlist->channel_flags[channel] & bits) ? true : false;
+}
+#define perf_evlist__channel_check(e, c, b) \
+	__perf_evlist__channel_check(e, c, PERF_EVLIST__CHANNEL_##b)
+
+static inline bool
+perf_evlist__channel_is_enabled(struct perf_evlist *evlist, int channel)
+{
+	return perf_evlist__channel_check(evlist, channel, ENABLED);
+}
+
+static inline int
+perf_evlist__idx_channel(struct perf_evlist *evlist, int idx)
+{
+	int channel = idx / evlist->nr_mmaps;
+
+	if (channel >= PERF_EVLIST__NR_CHANNELS)
+		return -E2BIG;
+	return channel;
+}
+
+int perf_evlist__channel_idx(struct perf_evlist *evlist,
+			     int *p_channel, int *p_idx);
+
+static inline struct perf_mmap *
+perf_evlist__get_mmap(struct perf_evlist *evlist,
+		      int channel, int idx)
+{
+	if (perf_evlist__channel_idx(evlist, &channel, &idx))
+		return NULL;
+
+	return &evlist->mmap[idx];
+}
 
 int perf_evlist__open(struct perf_evlist *evlist);
 void perf_evlist__close(struct perf_evlist *evlist);
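
Not part of the patch: a rough sketch of how a consumer could drive the
_ex() helpers above once the evlist is opened. The functions here
(drain_channel, setup_default_channel) are hypothetical; they only
assume the declarations added by this patch plus the existing
perf-internal headers, and drop error handling and poll() integration
for brevity.

	/* Empty every per-channel mmap of one channel. */
	static int drain_channel(struct perf_evlist *evlist, int channel)
	{
		int idx;

		for (idx = 0; idx < evlist->nr_mmaps; idx++) {
			union perf_event *event;

			while ((event = perf_evlist__mmap_read_ex(evlist,
								  channel, idx))) {
				/* hand 'event' to the tool's handler here */
				perf_evlist__mmap_consume_ex(evlist, channel, idx);
			}
		}
		return 0;
	}

	/* Setup mirrors what builtin-record.c does in this patch. */
	static int setup_default_channel(struct perf_evlist *evlist,
					 struct record_opts *opts)
	{
		int err;

		perf_evlist__channel_reset(evlist);
		err = perf_evlist__channel_add(evlist, 0, true); /* default */
		if (err < 0)
			return err;

		return perf_evlist__mmap_ex(evlist, opts->mmap_pages,
					    false, 0, false);
	}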