From patchwork Mon Nov 13 01:38:09 2017
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 118693
From: Wang Nan
Subject: [PATCH 7/7] perf tools: Remove prot field in mmap param
Date: Mon, 13 Nov 2017 01:38:09 +0000
Message-ID: <20171113013809.212417-8-wangnan0@huawei.com>
In-Reply-To: <20171113013809.212417-1-wangnan0@huawei.com>
References: <20171113013809.212417-1-wangnan0@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

After removing the concept of 'overwrite' at the code level, prot is now
determined by write_backward alone. There's no need to pass prot down from
perf_evlist__mmap_ex().
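With this change the protection flags are computed per evsel, right where the
ring buffer is mapped. Simplified, the new logic in perf_evlist__mmap_per_evsel()
boils down to the following (illustrative sketch only, not part of the diff below):

	int prot = PROT_READ;

	/* only a forward ring buffer needs to be writable from user space;
	 * a backward (write_backward) one is mapped read-only */
	if (!evsel->attr.write_backward)
		prot |= PROT_WRITE;

	/* prot is handed straight to perf_mmap__mmap() and mmap(2), so
	 * struct mmap_params no longer needs a prot field */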
Signed-off-by: Wang Nan
---
 tools/perf/util/evlist.c | 17 ++++++-----------
 tools/perf/util/mmap.c   |  4 ++--
 tools/perf/util/mmap.h   |  4 ++--
 3 files changed, 10 insertions(+), 15 deletions(-)

-- 
2.10.1

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 4948d3d..0d713e0 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -799,28 +799,23 @@ perf_evlist__should_poll(struct perf_evlist *evlist __maybe_unused,
 }
 
 static int perf_evlist__mmap_per_evsel(struct perf_evlist *evlist, int idx,
-				       struct mmap_params *_mp, int cpu_idx,
+				       struct mmap_params *mp, int cpu_idx,
 				       int thread, int *_output, int *_output_backward)
 {
 	struct perf_evsel *evsel;
 	int revent;
 	int evlist_cpu = cpu_map__cpu(evlist->cpus, cpu_idx);
-	struct mmap_params *mp;
 
 	evlist__for_each_entry(evlist, evsel) {
 		struct perf_mmap *maps = evlist->mmap;
-		struct mmap_params rdonly_mp;
 		int *output = _output;
 		int fd;
 		int cpu;
+		int prot = PROT_READ;
 
-		mp = _mp;
 		if (evsel->attr.write_backward) {
 			output = _output_backward;
 			maps = evlist->backward_mmap;
-			rdonly_mp = *_mp;
-			rdonly_mp.prot &= ~PROT_WRITE;
-			mp = &rdonly_mp;
 
 			if (!maps) {
 				maps = perf_evlist__alloc_mmap(evlist);
@@ -830,6 +825,8 @@ static int perf_evlist__mmap_per_evsel(struct perf_evlist *evlist, int idx,
 				if (evlist->bkw_mmap_state == BKW_MMAP_NOTREADY)
 					perf_evlist__toggle_bkw_mmap(evlist, BKW_MMAP_RUNNING);
 			}
+		} else {
+			prot |= PROT_WRITE;
 		}
 
 		if (evsel->system_wide && thread)
@@ -844,7 +841,7 @@ static int perf_evlist__mmap_per_evsel(struct perf_evlist *evlist, int idx,
 		if (*output == -1) {
 			*output = fd;
 
-			if (perf_mmap__mmap(&maps[idx], mp, *output) < 0)
+			if (perf_mmap__mmap(&maps[idx], mp, prot, *output) < 0)
 				return -1;
 		} else {
 			if (ioctl(fd, PERF_EVENT_IOC_SET_OUTPUT, *output) != 0)
@@ -1064,9 +1061,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 	struct perf_evsel *evsel;
 	const struct cpu_map *cpus = evlist->cpus;
 	const struct thread_map *threads = evlist->threads;
-	struct mmap_params mp = {
-		.prot = PROT_READ | PROT_WRITE,
-	};
+	struct mmap_params mp;
 
 	if (!evlist->mmap)
 		evlist->mmap = perf_evlist__alloc_mmap(evlist);
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 703ed41..40e91a0 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -219,7 +219,7 @@ void perf_mmap__munmap(struct perf_mmap *map)
 	auxtrace_mmap__munmap(&map->auxtrace_mmap);
 }
 
-int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd)
+int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int prot, int fd)
 {
 	/*
 	 * The last one will be done at perf_evlist__mmap_consume(), so that we
@@ -237,7 +237,7 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd)
 	refcount_set(&map->refcnt, 2);
 	map->prev = 0;
 	map->mask = mp->mask;
-	map->base = mmap(NULL, perf_mmap__mmap_len(map), mp->prot,
+	map->base = mmap(NULL, perf_mmap__mmap_len(map), prot,
 			 MAP_SHARED, fd, 0);
 	if (map->base == MAP_FAILED) {
 		pr_debug2("failed to mmap perf event ring buffer, error %d\n",
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index 2c3d291..1f6fcc6 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -53,11 +53,11 @@ enum bkw_mmap_state {
 };
 
 struct mmap_params {
-	int prot, mask;
+	int mask;
 	struct auxtrace_mmap_params auxtrace_mp;
 };
 
-int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd);
+int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int prot, int fd);
 void perf_mmap__munmap(struct perf_mmap *map);
 
 void perf_mmap__get(struct perf_mmap *map);