From patchwork Tue Nov 19 05:19:03 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 179671
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mathieu Poirier, Suzuki K Poulose, Sasha Levin
Subject: [PATCH 4.19 346/422] coresight: perf: Fix per cpu path management
Date: Tue, 19 Nov 2019 06:19:03 +0100
Message-Id: <20191119051421.434014178@linuxfoundation.org>
In-Reply-To: <20191119051400.261610025@linuxfoundation.org>
References: <20191119051400.261610025@linuxfoundation.org>
User-Agent: quilt/0.66

From: Suzuki K Poulose

[ Upstream commit 5ecabe4a76e8cdb61fa3e24862d9ca240a1c4ddf ]

We create a coresight trace path for each online CPU when we start the
event. We rely on the number of online CPUs, allocate an array sized to
that count to hold the paths, and then use the raw CPU id as the index
into the array. This is problematic because offline CPUs can cause us to
access beyond the actual array size (e.g., on a dual-core SMP system with
CPU0 offline, CPU1 would access past the end of the array). The solution
is to switch to a per-cpu array for holding the path.
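To make the failure mode concrete, here is a rough user-space sketch
(illustration only, not kernel code): the online CPU set and count below
are made up, and a plain pointer array stands in for the coresight path
array.

/*
 * Illustration only, not part of this patch: with CPU0 offline, an array
 * sized by the number of *online* CPUs is too small to be indexed by the
 * raw CPU id.  The values below are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	int online_cpus[] = { 1 };	/* hypothetical: CPU0 offline, CPU1 online */
	int num_online = 1;		/* the array is sized from this count... */
	void **path = calloc(num_online, sizeof(*path));

	for (size_t i = 0; i < sizeof(online_cpus) / sizeof(online_cpus[0]); i++) {
		int cpu = online_cpus[i];	/* ...but indexed by the raw CPU id */

		if (cpu >= num_online)
			printf("CPU%d would index past the %d-slot array\n",
			       cpu, num_online);
	}

	free(path);
	return 0;
}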
Cc: Mathieu Poirier
Signed-off-by: Suzuki K Poulose
Signed-off-by: Mathieu Poirier
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
---
 .../hwtracing/coresight/coresight-etm-perf.c  | 55 ++++++++++++++-----
 1 file changed, 40 insertions(+), 15 deletions(-)

--
2.20.1

diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index 0f5e03e4df22c..4b53d55788a07 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -12,6 +12,7 @@
 #include <linux/mm.h>
 #include <linux/init.h>
 #include <linux/perf_event.h>
+#include <linux/percpu-defs.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/workqueue.h>
@@ -33,7 +34,7 @@ struct etm_event_data {
 	struct work_struct work;
 	cpumask_t mask;
 	void *snk_config;
-	struct list_head **path;
+	struct list_head * __percpu *path;
 };
 
 static DEFINE_PER_CPU(struct perf_output_handle, ctx_handle);
@@ -61,6 +62,18 @@ static const struct attribute_group *etm_pmu_attr_groups[] = {
 	NULL,
 };
 
+static inline struct list_head **
+etm_event_cpu_path_ptr(struct etm_event_data *data, int cpu)
+{
+	return per_cpu_ptr(data->path, cpu);
+}
+
+static inline struct list_head *
+etm_event_cpu_path(struct etm_event_data *data, int cpu)
+{
+	return *etm_event_cpu_path_ptr(data, cpu);
+}
+
 static void etm_event_read(struct perf_event *event) {}
 
 static int etm_addr_filters_alloc(struct perf_event *event)
@@ -120,23 +133,26 @@ static void free_event_data(struct work_struct *work)
 	 */
 	if (event_data->snk_config) {
 		cpu = cpumask_first(mask);
-		sink = coresight_get_sink(event_data->path[cpu]);
+		sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
 		if (sink_ops(sink)->free_buffer)
 			sink_ops(sink)->free_buffer(event_data->snk_config);
 	}
 
 	for_each_cpu(cpu, mask) {
-		if (!(IS_ERR_OR_NULL(event_data->path[cpu])))
-			coresight_release_path(event_data->path[cpu]);
+		struct list_head **ppath;
+
+		ppath = etm_event_cpu_path_ptr(event_data, cpu);
+		if (!(IS_ERR_OR_NULL(*ppath)))
+			coresight_release_path(*ppath);
+		*ppath = NULL;
 	}
 
-	kfree(event_data->path);
+	free_percpu(event_data->path);
 	kfree(event_data);
 }
 
 static void *alloc_event_data(int cpu)
 {
-	int size;
 	cpumask_t *mask;
 	struct etm_event_data *event_data;
 
@@ -147,7 +163,6 @@ static void *alloc_event_data(int cpu)
 
 	/* Make sure nothing disappears under us */
 	get_online_cpus();
-	size = num_online_cpus();
 
 	mask = &event_data->mask;
 	if (cpu != -1)
@@ -164,8 +179,8 @@ static void *alloc_event_data(int cpu)
 	 * unused memory when dealing with single CPU trace scenarios is small
 	 * compared to the cost of searching through an optimized array.
 	 */
-	event_data->path = kcalloc(size,
-				   sizeof(struct list_head *), GFP_KERNEL);
+	event_data->path = alloc_percpu(struct list_head *);
+
 	if (!event_data->path) {
 		kfree(event_data);
 		return NULL;
 	}
@@ -213,6 +228,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
 
 	/* Setup the path for each CPU in a trace session */
 	for_each_cpu(cpu, mask) {
+		struct list_head *path;
 		struct coresight_device *csdev;
 
 		csdev = per_cpu(csdev_src, cpu);
@@ -224,9 +240,11 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
 		 * list of devices from source to sink that can be
 		 * referenced later when the path is actually needed.
 		 */
-		event_data->path[cpu] = coresight_build_path(csdev, sink);
-		if (IS_ERR(event_data->path[cpu]))
+		path = coresight_build_path(csdev, sink);
+		if (IS_ERR(path))
 			goto err;
+
+		*etm_event_cpu_path_ptr(event_data, cpu) = path;
 	}
 
 	if (!sink_ops(sink)->alloc_buffer)
@@ -255,6 +273,7 @@ static void etm_event_start(struct perf_event *event, int flags)
 	struct etm_event_data *event_data;
 	struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
 	struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
+	struct list_head *path;
 
 	if (!csdev)
 		goto fail;
@@ -267,8 +286,9 @@ static void etm_event_start(struct perf_event *event, int flags)
 	if (!event_data)
 		goto fail;
 
+	path = etm_event_cpu_path(event_data, cpu);
 	/* We need a sink, no need to continue without one */
-	sink = coresight_get_sink(event_data->path[cpu]);
+	sink = coresight_get_sink(path);
 	if (WARN_ON_ONCE(!sink || !sink_ops(sink)->set_buffer))
 		goto fail_end_stop;
 
@@ -278,7 +298,7 @@ static void etm_event_start(struct perf_event *event, int flags)
 		goto fail_end_stop;
 
 	/* Nothing will happen without a path */
-	if (coresight_enable_path(event_data->path[cpu], CS_MODE_PERF))
+	if (coresight_enable_path(path, CS_MODE_PERF))
 		goto fail_end_stop;
 
 	/* Tell the perf core the event is alive */
@@ -306,6 +326,7 @@ static void etm_event_stop(struct perf_event *event, int mode)
 	struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
 	struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
 	struct etm_event_data *event_data = perf_get_aux(handle);
+	struct list_head *path;
 
 	if (event->hw.state == PERF_HES_STOPPED)
 		return;
@@ -313,7 +334,11 @@ static void etm_event_stop(struct perf_event *event, int mode)
 	if (!csdev)
 		return;
 
-	sink = coresight_get_sink(event_data->path[cpu]);
+	path = etm_event_cpu_path(event_data, cpu);
+	if (!path)
+		return;
+
+	sink = coresight_get_sink(path);
 	if (!sink)
 		return;
 
@@ -344,7 +369,7 @@ static void etm_event_stop(struct perf_event *event, int mode)
 	}
 
 	/* Disabling the path make its elements available to other sessions */
-	coresight_disable_path(event_data->path[cpu]);
+	coresight_disable_path(path);
 }
 
 static int etm_event_add(struct perf_event *event, int mode)
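As a side note on the shape of the fix (a minimal user-space analogue, not
the kernel per-cpu API): the two helpers added above split "this CPU's
slot" into an address accessor and a value accessor, so callers never
compute an array index themselves. In the sketch below a fixed array
indexed by CPU id stands in for the per-cpu storage; DEMO_NR_CPUS and the
demo_* names are invented for the illustration.

/*
 * User-space analogue of etm_event_cpu_path_ptr()/etm_event_cpu_path(),
 * for illustration only.  A fixed array covering every possible CPU id
 * stands in for alloc_percpu()/per_cpu_ptr(); DEMO_NR_CPUS is made up.
 */
#include <stdio.h>

#define DEMO_NR_CPUS 4

struct demo_event_data {
	void *path[DEMO_NR_CPUS];	/* one slot per possible CPU id */
};

/* Address of the slot for @cpu (cf. etm_event_cpu_path_ptr()). */
static void **demo_cpu_path_ptr(struct demo_event_data *data, int cpu)
{
	return &data->path[cpu];
}

/* Contents of the slot for @cpu (cf. etm_event_cpu_path()). */
static void *demo_cpu_path(struct demo_event_data *data, int cpu)
{
	return *demo_cpu_path_ptr(data, cpu);
}

int main(void)
{
	struct demo_event_data data = { { 0 } };
	int dummy_path = 0;

	*demo_cpu_path_ptr(&data, 3) = &dummy_path;	/* store through the slot's address */
	printf("slot for CPU3: %p\n", demo_cpu_path(&data, 3));
	return 0;
}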