From patchwork Fri Nov  7 16:25:27 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 40441
From: Mark Rutland <mark.rutland@arm.com>
To:
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 02/11] perf: allow for PMU-specific event filtering
Date: Fri, 7 Nov 2014 16:25:27 +0000
Message-Id: <1415377536-12841-3-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>, Peter Zijlstra, will.deacon@arm.com,
 linux-kernel@vger.kernel.org, Arnaldo Carvalho de Melo, Ingo Molnar,
 Paul Mackerras

In certain circumstances it may not be possible to schedule particular
events due to constraints other than a lack of hardware counters (e.g.
on big.LITTLE systems where CPUs support different events). The core
perf event code does not distinguish these cases and pessimistically
assumes that any failure to schedule an event is due to a lack of
hardware counters, ending event group scheduling early despite hardware
counters remaining available.

When such an unschedulable event exists in a ctx->flexible_groups list
it can unnecessarily prevent event groups following it in the list from
being scheduled until it is rotated to the end of the list. This can
result in events being scheduled for only a portion of the time they
would otherwise be eligible, and for short-running programs unfortunate
initial list ordering can result in no events being counted.

This patch adds a new (optional) filter_match function pointer to
struct pmu which backends can use to tell the perf core whether or not
it is worth attempting to schedule an event. This plugs into the
existing event_filter_match logic, and makes it possible to avoid the
scheduling problem described above.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
---
 include/linux/perf_event.h | 5 +++++
 kernel/events/core.c       | 8 +++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 893a0d0..80c5f5f 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -263,6 +263,11 @@ struct pmu {
 	 * flush branch stack on context-switches (needed in cpu-wide mode)
 	 */
 	void (*flush_branch_stack)	(void);
+
+	/*
+	 * Filter events for PMU-specific reasons.
+	 */
+	int (*filter_match)		(struct perf_event *event); /* optional */
 };
 
 /**
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2b02c9f..770b276 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1428,11 +1428,17 @@ static int __init perf_workqueue_init(void)
 
 core_initcall(perf_workqueue_init);
 
+static inline int pmu_filter_match(struct perf_event *event)
+{
+	struct pmu *pmu = event->pmu;
+	return pmu->filter_match ? pmu->filter_match(event) : 1;
+}
+
 static inline int event_filter_match(struct perf_event *event)
 {
 	return (event->cpu == -1 || event->cpu == smp_processor_id())
-	    && perf_cgroup_match(event);
+	    && perf_cgroup_match(event) && pmu_filter_match(event);
 }
 
 static void