From patchwork Thu Nov 28 10:15:46 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 180386
From: Sudeep Holla
To: linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: Sudeep Holla, "Rafael J. Wysocki", Liviu Dudau, Viresh Kumar,
    Dietmar Eggemann, Lorenzo Pieralisi, Morten Rasmussen, Lukasz Luba
Subject: [PATCH 1/2] ARM: vexpress: Set-up shared OPP table instead of
 individual for each CPU
Date: Thu, 28 Nov 2019 10:15:46 +0000
Message-Id: <20191128101547.519-1-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-pm@vger.kernel.org

Currently we add an individual copy of the same OPP table for each CPU
within the cluster. This is redundant and doesn't reflect reality.
We can't use the core cpumask to set policy->cpus in
ve_spc_cpufreq_init() anymore as it gets called via
cpuhp_cpufreq_online()->cpufreq_online()->cpufreq_driver->init() and
the cpumask gets updated upon CPU hotplug operations. It may also cause
issues when the vexpress_spc_cpufreq driver is built as a module.

Since ve_spc_clk_init is a built-in device initcall, we should be able
to use the same topology_core_cpumask to set the OPP sharing cpumask
via dev_pm_opp_set_sharing_cpus and use the same later in the driver
via dev_pm_opp_get_sharing_cpus.

Cc: Liviu Dudau
Cc: Lorenzo Pieralisi
Tested-by: Dietmar Eggemann
Signed-off-by: Sudeep Holla
---
 arch/arm/mach-vexpress/spc.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/arch/arm/mach-vexpress/spc.c b/arch/arm/mach-vexpress/spc.c
index 354e0e7025ae..1da11bdb1dfb 100644
--- a/arch/arm/mach-vexpress/spc.c
+++ b/arch/arm/mach-vexpress/spc.c
@@ -551,8 +551,9 @@ static struct clk *ve_spc_clk_register(struct device *cpu_dev)
 
 static int __init ve_spc_clk_init(void)
 {
-	int cpu;
+	int cpu, cluster;
 	struct clk *clk;
+	bool init_opp_table[MAX_CLUSTERS] = { false };
 
 	if (!info)
 		return 0; /* Continue only if SPC is initialised */
@@ -578,8 +579,17 @@ static int __init ve_spc_clk_init(void)
 			continue;
 		}
 
+		cluster = topology_physical_package_id(cpu_dev->id);
+		if (init_opp_table[cluster])
+			continue;
+
 		if (ve_init_opp_table(cpu_dev))
 			pr_warn("failed to initialise cpu%d opp table\n", cpu);
+		else if (dev_pm_opp_set_sharing_cpus(cpu_dev,
+			 topology_core_cpumask(cpu_dev->id)))
+			pr_warn("failed to mark OPPs shared for cpu%d\n", cpu);
+		else
+			init_opp_table[cluster] = true;
 	}
 
 	platform_device_register_simple("vexpress-spc-cpufreq", -1, NULL, 0);
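For reviewers, the once-per-cluster bookkeeping the patch introduces can be
sketched in plain, userspace C. This is only an illustrative model, not kernel
code: cluster_of() is a hypothetical stand-in for
topology_physical_package_id() on a TC2-like system (CPUs 0-1 in one cluster,
CPUs 2-4 in the other), and the counter stands in for the calls to
ve_init_opp_table() and dev_pm_opp_set_sharing_cpus().

```c
#include <stdbool.h>

#define MAX_CLUSTERS 2

/* Hypothetical stand-in for topology_physical_package_id():
 * CPUs 0-1 sit in cluster 0, CPUs 2-4 in cluster 1, mimicking
 * the Versatile Express TC2 big.LITTLE layout. */
static int cluster_of(int cpu)
{
	return cpu < 2 ? 0 : 1;
}

/* Mirror of the patch's loop: walk every CPU but set up only one
 * OPP table per cluster, tracked by init_opp_table[]. Returns how
 * many times the table set-up would run; the real code would call
 * ve_init_opp_table() and dev_pm_opp_set_sharing_cpus() there. */
static int count_opp_table_inits(int nr_cpus)
{
	bool init_opp_table[MAX_CLUSTERS] = { false };
	int cpu, cluster, inits = 0;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		cluster = cluster_of(cpu);
		if (init_opp_table[cluster])
			continue;	/* this cluster's table is shared already */
		inits++;
		init_opp_table[cluster] = true;
	}
	return inits;
}
```

With 5 CPUs across 2 clusters the set-up runs exactly twice, which is the
whole point of the patch: one shared table per cluster instead of one copy
per CPU.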