From patchwork Tue Jun 24 04:11:09 2014
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 32400
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Abhilash Kesavan, Doug Anderson, Andrew Bresticker
Cc: Kevin Hilman, Olof Johansson, Lorenzo Pieralisi,
	linux-samsung-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linaro-kernel@lists.linaro.org
Subject: [PATCH 1/3] ARM: MCPM: provide infrastructure to allow for MCPM loopback
Date: Tue, 24 Jun 2014 00:11:09 -0400
Message-id: <1403583071-5650-2-git-send-email-nicolas.pitre@linaro.org>
In-reply-to: <1403583071-5650-1-git-send-email-nicolas.pitre@linaro.org>
References: <1403583071-5650-1-git-send-email-nicolas.pitre@linaro.org>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

The kernel already has the responsibility to handle resources such as
the CCI when hotplugging CPUs, during the booting of secondary CPUs,
and when resuming from suspend/idle. It would be more coherent and less
confusing if the CCI for the boot CPU (or cluster) were also initialized
by the kernel, rather than expecting the firmware/bootloader to do it
in that one case only. After all, the kernel already has all the
necessary code and the bootloader shouldn't have to care at all.

The CCI may be turned on only when the cache is off.
Leveraging the CPU suspend code to loop back through the low-level MCPM
entry point is all that is needed to properly turn on the CCI from the
kernel by using the same code as for secondary boot.

Let's provide a generic MCPM loopback function that can be invoked by
backend initialization code to set things (CCI or similar) on the boot
CPU just as it is done for the other CPUs.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Doug Anderson
---
 arch/arm/common/mcpm_entry.c | 52 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/mcpm.h  | 16 ++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c
index f91136ab44..5e7284a3f8 100644
--- a/arch/arm/common/mcpm_entry.c
+++ b/arch/arm/common/mcpm_entry.c
@@ -12,11 +12,13 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/irqflags.h>
+#include <linux/cpu_pm.h>
 
 #include <asm/mcpm.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/cputype.h>
+#include <asm/suspend.h>
 
 extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
 
@@ -146,6 +148,56 @@ int mcpm_cpu_powered_up(void)
 	return 0;
 }
 
+#ifdef CONFIG_ARM_CPU_SUSPEND
+
+static int __init nocache_trampoline(unsigned long _arg)
+{
+	void (*cache_disable)(void) = (void *)_arg;
+	unsigned int mpidr = read_cpuid_mpidr();
+	unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	phys_reset_t phys_reset;
+
+	mcpm_set_entry_vector(cpu, cluster, cpu_resume);
+	setup_mm_for_reboot();
+
+	__mcpm_cpu_going_down(cpu, cluster);
+	BUG_ON(!__mcpm_outbound_enter_critical(cpu, cluster));
+	cache_disable();
+	__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	__mcpm_cpu_down(cpu, cluster);
+
+	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+	phys_reset(virt_to_phys(mcpm_entry_point));
+	BUG();
+}
+
+int __init mcpm_loopback(void (*cache_disable)(void))
+{
+	int ret;
+
+	/*
+	 * We're going to soft-restart the current CPU through the
+	 * low-level MCPM code by leveraging the suspend/resume
+	 * infrastructure. Let's play it safe by using cpu_pm_enter()
+	 * in case the CPU init code path resets the VFP or similar.
+	 */
+	local_irq_disable();
+	local_fiq_disable();
+	ret = cpu_pm_enter();
+	if (!ret) {
+		ret = cpu_suspend((unsigned long)cache_disable, nocache_trampoline);
+		cpu_pm_exit();
+	}
+	local_fiq_enable();
+	local_irq_enable();
+	if (ret)
+		pr_err("%s returned %d\n", __func__, ret);
+	return ret;
+}
+
+#endif
+
 struct sync_struct mcpm_sync;
 
 /*
diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h
index 94060adba1..ff73affd45 100644
--- a/arch/arm/include/asm/mcpm.h
+++ b/arch/arm/include/asm/mcpm.h
@@ -217,6 +217,22 @@ int __mcpm_cluster_state(unsigned int cluster);
 int __init mcpm_sync_init(
 	void (*power_up_setup)(unsigned int affinity_level));
 
+/**
+ * mcpm_loopback - make a run through the MCPM low-level code
+ *
+ * @cache_disable: pointer to function performing cache disabling
+ *
+ * This exercises the MCPM machinery by soft resetting the CPU and branching
+ * to the MCPM low-level entry code before returning to the caller.
+ * The @cache_disable function must do the necessary cache disabling to
+ * let the regular kernel init code turn it back on as if the CPU was
+ * hotplugged in. The MCPM state machine is set as if the cluster was
+ * initialized meaning the power_up_setup callback passed to mcpm_sync_init()
+ * will be invoked for all affinity levels. This may be useful to initialize
+ * some resources such as enabling the CCI that requires the cache to be off,
+ * or simply for testing purposes.
+ */
+int __init mcpm_loopback(void (*cache_disable)(void));
+
 void __init mcpm_smp_set_ops(void);
 
 #else