From patchwork Wed May 3 17:35:54 2017
X-Patchwork-Submitter: Amit Pundir
X-Patchwork-Id: 98501
From: Amit Pundir
To: Greg KH
Cc: stable@vger.kernel.org, Peter Zijlstra, Arnaldo Carvalho de Melo,
    Jiri Olsa, Linus Torvalds, Ingo Molnar
Subject: [PATCH for-3.18 3/7] perf: Tighten (and fix) the grouping condition
Date: Wed, 3 May 2017 23:05:54 +0530
Message-Id: <1493832958-12489-4-git-send-email-amit.pundir@linaro.org>
In-Reply-To: <1493832958-12489-1-git-send-email-amit.pundir@linaro.org>
References: <1493832958-12489-1-git-send-email-amit.pundir@linaro.org>
X-Mailing-List: stable@vger.kernel.org

From: Peter Zijlstra

The fix
from 9fc81d87420d ("perf: Fix events installation during moving group") was incomplete in that it failed to recognise that creating a group with events for different CPUs is semantically broken -- they cannot be co-scheduled. Furthermore, it leads to real breakage where, when we create an event for CPU Y and then migrate it to form a group on CPU X, the code gets confused where the counter is programmed -- triggered in practice as well by me via the perf fuzzer. Fix this by tightening the rules for creating groups. Only allow grouping of counters that can be co-scheduled in the same context. This means for the same task and/or the same cpu. Fixes: 9fc81d87420d ("perf: Fix events installation during moving group") Signed-off-by: Peter Zijlstra (Intel) Cc: Arnaldo Carvalho de Melo Cc: Jiri Olsa Cc: Linus Torvalds Link: http://lkml.kernel.org/r/20150123125834.090683288@infradead.org Signed-off-by: Ingo Molnar (cherry picked from commit c3c87e770458aa004bd7ed3f29945ff436fd6511) Signed-off-by: Amit Pundir --- include/linux/perf_event.h | 6 ------ kernel/events/core.c | 15 +++++++++++++-- 2 files changed, 13 insertions(+), 8 deletions(-) -- 2.7.4 diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index df8904fea40c..482ccff29bc9 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -455,11 +455,6 @@ struct perf_event { #endif /* CONFIG_PERF_EVENTS */ }; -enum perf_event_context_type { - task_context, - cpu_context, -}; - /** * struct perf_event_context - event context structure * @@ -467,7 +462,6 @@ enum perf_event_context_type { */ struct perf_event_context { struct pmu *pmu; - enum perf_event_context_type type; /* * Protect the states of the events in the list, * nr_active, and the list: diff --git a/kernel/events/core.c b/kernel/events/core.c index 3ebad2556698..26c40faa8ea4 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -6803,7 +6803,6 @@ skip_type: __perf_event_init_context(&cpuctx->ctx); lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex); lockdep_set_class(&cpuctx->ctx.lock, &cpuctx_lock); - cpuctx->ctx.type = cpu_context; cpuctx->ctx.pmu = pmu; __perf_cpu_hrtimer_init(cpuctx, cpu); @@ -7445,7 +7444,19 @@ SYSCALL_DEFINE5(perf_event_open, * task or CPU context: */ if (move_group) { - if (group_leader->ctx->type != ctx->type) + /* + * Make sure we're both on the same task, or both + * per-cpu events. + */ + if (group_leader->ctx->task != ctx->task) + goto err_context; + + /* + * Make sure we're both events for the same CPU; + * grouping events for different CPUs is broken; since + * you can never concurrently schedule them anyhow. + */ + if (group_leader->cpu != event->cpu) goto err_context; } else { if (group_leader->ctx != ctx)