From patchwork Thu Aug 29 11:34:26 2019
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 172618
From: Viresh Kumar
To: stable@vger.kernel.org, Julien Thierry, Mark Rutland
Cc: Viresh Kumar, linux-arm-kernel@lists.infradead.org, Catalin Marinas,
    Marc Zyngier, Will Deacon, Russell King, Vincent Guittot,
    mark.brown@arm.com
Subject: [PATCH ARM64 v4.4 V3 41/44] arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
Date: Thu, 29 Aug 2019 17:04:26 +0530
X-Mailing-List: stable@vger.kernel.org

From: Marc Zyngier

commit f2d3b2e8759a5833df6f022e42df2d581e6d843c upstream.

One of the major improvements of SMCCC v1.1 is that it only clobbers the
first 4 registers, on both 32-bit and 64-bit.

This means that it becomes very easy to provide an inline version of the
SMC call primitive, and to avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.

Reviewed-by: Robin Murphy
Tested-by: Ard Biesheuvel
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
Signed-off-by: Viresh Kumar
---
 include/linux/arm-smccc.h | 141 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

-- 
2.21.0.rc0.269.g1a574e7a288b

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 4c45fd75db5d..60c2ad6316d8 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -122,5 +122,146 @@ asmlinkage void arm_smccc_hvc(unsigned long a0, unsigned long a1,
 			unsigned long a5, unsigned long a6, unsigned long a7,
 			struct arm_smccc_res *res);
 
+/* SMCCC v1.1 implementation madness follows */
+#ifdef CONFIG_ARM64
+
+#define SMCCC_SMC_INST	"smc #0"
+#define SMCCC_HVC_INST	"hvc #0"
+
+#elif defined(CONFIG_ARM)
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+
+#define SMCCC_SMC_INST	__SMC(0)
+#define SMCCC_HVC_INST	__HVC(0)
+
+#endif
+
+#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __count_args(...)						\
+	___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __constraint_write_0						\
+	"+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_1						\
+	"+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_2						\
+	"+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3)
+#define __constraint_write_3						\
+	"+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
+#define __constraint_write_4	__constraint_write_3
+#define __constraint_write_5	__constraint_write_4
+#define __constraint_write_6	__constraint_write_5
+#define __constraint_write_7	__constraint_write_6
+
+#define __constraint_read_0
+#define __constraint_read_1
+#define __constraint_read_2
+#define __constraint_read_3
+#define __constraint_read_4	"r" (r4)
+#define __constraint_read_5	__constraint_read_4, "r" (r5)
+#define __constraint_read_6	__constraint_read_5, "r" (r6)
+#define __constraint_read_7	__constraint_read_6, "r" (r7)
+
+#define __declare_arg_0(a0, res)					\
+	struct arm_smccc_res *___res = res;				\
+	register u32 r0 asm("r0") = a0;					\
+	register unsigned long r1 asm("r1");				\
+	register unsigned long r2 asm("r2");				\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_1(a0, a1, res)					\
+	struct arm_smccc_res *___res = res;				\
+	register u32 r0 asm("r0") = a0;					\
+	register typeof(a1) r1 asm("r1") = a1;				\
+	register unsigned long r2 asm("r2");				\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_2(a0, a1, a2, res)				\
+	struct arm_smccc_res *___res = res;				\
+	register u32 r0 asm("r0") = a0;					\
+	register typeof(a1) r1 asm("r1") = a1;				\
+	register typeof(a2) r2 asm("r2") = a2;				\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_3(a0, a1, a2, a3, res)				\
+	struct arm_smccc_res *___res = res;				\
+	register u32 r0 asm("r0") = a0;					\
+	register typeof(a1) r1 asm("r1") = a1;				\
+	register typeof(a2) r2 asm("r2") = a2;				\
+	register typeof(a3) r3 asm("r3") = a3
+
+#define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
+	__declare_arg_3(a0, a1, a2, a3, res);				\
+	register typeof(a4) r4 asm("r4") = a4
+
+#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
+	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
+	register typeof(a5) r5 asm("r5") = a5
+
+#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
+	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
+	register typeof(a6) r6 asm("r6") = a6
+
+#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
+	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
+	register typeof(a7) r7 asm("r7") = a7
+
+#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+#define __declare_args(count, ...)  ___declare_args(count, __VA_ARGS__)
+
+#define ___constraints(count)						\
+	: __constraint_write_ ## count					\
+	: __constraint_read_ ## count					\
+	: "memory"
+#define __constraints(count)	___constraints(count)
+
+/*
+ * We have an output list that is not necessarily used, and GCC feels
+ * entitled to optimise the whole sequence away. "volatile" is what
+ * makes it stick.
+ */
+#define __arm_smccc_1_1(inst, ...)					\
+	do {								\
+		__declare_args(__count_args(__VA_ARGS__), __VA_ARGS__);\
+		asm volatile(inst "\n"					\
+			     __constraints(__count_args(__VA_ARGS__)));	\
+		if (___res)						\
+			*___res = (typeof(*___res)){r0, r1, r2, r3};	\
+	} while (0)
+
+/*
+ * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make SMC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the SMC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the SMC instruction if not NULL.
+ */
+#define arm_smccc_1_1_smc(...)	__arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__)
+
+/*
+ * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make HVC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the HVC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the HVC instruction if not NULL.
+ */
+#define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
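
For reference, a minimal usage sketch of the new primitive (not part of the
patch): it mirrors how later patches in this series query the firmware for
the ARCH_WORKAROUND_1 mitigation over SMCCC v1.1. The constants
ARM_SMCCC_ARCH_FEATURES_FUNC_ID and ARM_SMCCC_ARCH_WORKAROUND_1 are assumed
to be provided by other patches in the series, and the function name below
is illustrative only.

#include <linux/arm-smccc.h>

/*
 * Usage sketch only: ask the firmware whether ARM_SMCCC_ARCH_WORKAROUND_1
 * is implemented. Both function IDs are assumed to be defined elsewhere
 * in this series; error handling is omitted.
 */
static int probe_workaround_1(void)
{
	struct arm_smccc_res res;

	/*
	 * Expands inline: the two arguments are loaded into registers 0
	 * and 1, "smc #0" is issued, and registers 0 to 3 are copied back
	 * into res.
	 */
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);

	/* A negative value in res.a0 means the feature is not implemented. */
	return (int)res.a0;
}

Because SMCCC v1.1 guarantees that only registers 0 to 3 are clobbered, the
macro expands to a few register moves around the SMC instruction instead of
a call into the out-of-line arm_smccc_smc() helper.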