From patchwork Fri Nov 7 16:25:33 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 40437
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, will.deacon@arm.com, Mark Rutland, Grant Likely, Rob Herring
Subject: [PATCH 08/11] arm: perf: add functions to parse affinity from dt
Date: Fri, 7 Nov 2014 16:25:33 +0000
Message-Id: <1415377536-12841-9-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>

Depending on hardware configuration, some devices may only be accessible
from certain CPUs, may have interrupts wired up to a subset of CPUs, or
may have operations which affect subsets of CPUs. To handle these devices
it is necessary to describe this affinity information in devicetree.

This patch adds functions for parsing the CPU affinity of properties from
devicetree, based on Lorenzo's topology binding, allowing subsets of CPUs
to be associated with interrupts, hardware ports, etc. The functions can
be used to build cpumasks and also to test whether an affinity property
targets only one CPU, independent of the current configuration (e.g. when
the kernel supports fewer CPUs than are physically present). This is
useful for dealing with mixed SPI/PPI devices.

A device may have an arbitrary number of affinity properties; the meaning
of each is device-specific and should be specified in the device's
binding document.

For example, an affinity property describing interrupt routing may
consist of a phandle pointing to a subtree of the topology nodes,
indicating the set of CPUs an interrupt originates from or may be taken
on. Bindings may place restrictions on the topology nodes that can be
referenced - for describing coherency controls, an affinity property may
indicate that a whole cluster (including any non-CPU logic it contains)
is affected by some configuration.
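As a rough illustration only (the topology layout, node labels, and the
"interrupt-affinity" property name below are hypothetical examples and
are not defined by this patch), a binding following the cpu-map topology
binding might describe a per-cluster affinity like so:

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		cpu-map {
			cluster0: cluster0 {
				core0 {
					cpu = <&CPU0>;
				};
				core1 {
					cpu = <&CPU1>;
				};
			};
		};

		CPU0: cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a15";
			reg = <0>;
		};

		CPU1: cpu@1 {
			device_type = "cpu";
			compatible = "arm,cortex-a15";
			reg = <1>;
		};
	};

	pmu {
		compatible = "arm,cortex-a15-pmu";
		/* example affinity property: phandle to a /cpus/cpu-map subtree */
		interrupt-affinity = <&cluster0>;
	};

Here the phandle targets the cluster0 subtree, so the property covers
every CPU referenced beneath that node.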
Signed-off-by: Mark Rutland
Cc: Grant Likely
Cc: Rob Herring
---
 arch/arm/kernel/perf_event_cpu.c | 127 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)

diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index ce35149..dfcaba5 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -294,6 +295,132 @@ static int probe_current_pmu(struct arm_pmu *pmu)
 	return ret;
 }
 
+/*
+ * Test if the node is within the topology tree.
+ * Walk up to the root, keeping refcounts balanced.
+ */
+static bool is_topology_node(struct device_node *node)
+{
+	struct device_node *np, *cpu_map;
+	bool ret = false;
+
+	cpu_map = of_find_node_by_path("/cpus/cpu-map");
+	if (!cpu_map)
+		return false;
+
+	/*
+	 * of_get_next_parent decrements the refcount of the provided node.
+	 * Increment it first to keep things balanced.
+	 */
+	for (np = of_node_get(node); np; np = of_get_next_parent(np)) {
+		if (np != cpu_map)
+			continue;
+
+		ret = true;
+		break;
+	}
+
+	of_node_put(np);
+	of_node_put(cpu_map);
+	return ret;
+}
+
+static int cpu_node_to_id(struct device_node *node)
+{
+	int cpu;
+	for_each_possible_cpu(cpu)
+		if (of_cpu_device_node_get(cpu) == node)
+			return cpu;
+
+	return -EINVAL;
+}
+
+static int arm_dt_affine_build_mask(struct device_node *affine,
+				    cpumask_t *mask)
+{
+	struct device_node *child, *parent = NULL;
+	int ret = -EINVAL;
+
+	if (!is_topology_node(affine))
+		return -EINVAL;
+
+	child = of_node_get(affine);
+	if (!child)
+		goto out_invalid;
+
+	parent = of_get_parent(child);
+	if (!parent)
+		goto out_invalid;
+
+	if (!cpumask_empty(mask))
+		goto out_invalid;
+
+	/*
+	 * Depth-first search over the topology tree, iterating over leaf nodes
+	 * and adding all referenced CPUs to the cpumask. Almost all of the
+	 * of_* iterators are built for breadth-first search, which means we
+	 * have to do a little more work to ensure refcounts are balanced.
+	 */
+	do {
+		struct device_node *tmp, *cpu_node;
+		int cpu;
+
+		/* head down to the leaf */
+		while ((tmp = of_get_next_child(child, NULL))) {
+			of_node_put(parent);
+			parent = child;
+			child = tmp;
+		}
+
+		/*
+		 * In some cases cpu_node might be NULL, but cpu_node_to_id
+		 * will handle this (albeit slowly) and we don't need another
+		 * error path.
+		 */
+		cpu_node = of_parse_phandle(child, "cpu", 0);
+		cpu = cpu_node_to_id(cpu_node);
+
+		if (cpu < 0)
+			pr_warn("Invalid or unused node in topology description '%s', skipping\n",
+				child->full_name);
+		else
+			cpumask_set_cpu(cpu, mask);
+
+		of_node_put(cpu_node);
+
+		/*
+		 * Find the next sibling, or transitively a parent's sibling.
+		 * Don't go further up the tree than the affine node we were
+		 * handed.
+		 */
+		while (child != affine &&
+		       !(child = of_get_next_child(parent, child))) {
+			child = parent;
+			parent = of_get_parent(parent);
+		}
+
+	} while (child != affine); /* all children covered. Time to stop */
+
+	ret = 0;
+
+out_invalid:
+	of_node_put(child);
+	of_node_put(parent);
+	return ret;
+}
+
+static int arm_dt_affine_get_mask(struct device_node *node, char *prop,
+				  int idx, cpumask_t *mask)
+{
+	int ret = -EINVAL;
+	struct device_node *affine = of_parse_phandle(node, prop, idx);
+
+	ret = arm_dt_affine_build_mask(affine, mask);
+
+	of_node_put(affine);
+	return ret;
+}
+
 static int cpu_pmu_device_probe(struct platform_device *pdev)
 {
 	const struct of_device_id *of_id;
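For illustration only, and not part of the patch: the helpers above are
static to perf_event_cpu.c, so any caller would live in the same file. A
minimal sketch of such a caller, assuming a hypothetical
"interrupt-affinity" property and an example function name, might look
like this (the includes are listed for completeness; they already exist
in the file):

	#include <linux/cpumask.h>
	#include <linux/of.h>

	/*
	 * Hypothetical caller for the helpers added above. "interrupt-affinity"
	 * is an example property name; each device binding defines its own.
	 */
	static int example_get_irq_affinity(struct device_node *dn, cpumask_t *mask)
	{
		int ret;

		/* arm_dt_affine_build_mask() rejects a non-empty mask, so start clean */
		cpumask_clear(mask);

		ret = arm_dt_affine_get_mask(dn, "interrupt-affinity", 0, mask);
		if (ret)
			return ret;

		/* no possible CPU matched the referenced topology subtree */
		if (cpumask_empty(mask))
			return -ENODEV;

		return 0;
	}

Note that arm_dt_affine_build_mask() bails out when handed a non-empty
mask, hence the cpumask_clear() before the call, and that a mask which
ends up empty (e.g. because the subtree only references CPUs unknown to
the kernel) is treated as an error by this caller.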