From patchwork Tue Feb 18 19:58:25 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown, Ard Biesheuvel
Subject: [PATCH 01/18] arm64: crypto: Modernize some extra assembly annotations
Date: Tue, 18 Feb 2020 19:58:25 +0000
Message-Id: <20200218195842.34156-2-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

A couple of functions were missed in the modernisation of assembly macros;
update them too.
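For reference, the modern annotations expand to roughly the following (a
sketch based on the generic include/linux/linkage.h definitions; the exact
expansions vary between kernel versions):

    /* illustrative expansion only, not part of this series */
    SYM_FUNC_START(name)        /* .globl name ; ALIGN ; name:                */
    SYM_FUNC_END(name)          /* .type name, %function ; .size name, .-name */
    SYM_FUNC_START_LOCAL(name)  /* ALIGN ; name:  (no .globl, local binding)  */

The END macro is what records a symbol's type and size for tools such as
objdump and for the kernel's own symbol handling; ENTRY/ENDPROC carried the
same information under less descriptive names.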
Signed-off-by: Mark Brown
Reviewed-by: Ard Biesheuvel
---
 arch/arm64/crypto/ghash-ce-core.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 084c6a30b03a..6b958dcdf136 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -587,20 +587,20 @@ CPU_LE( rev w8, w8 )
  * struct ghash_key const *k, u64 dg[], u8 ctr[],
  * int rounds, u8 tag)
  */
-ENTRY(pmull_gcm_encrypt)
+SYM_FUNC_START(pmull_gcm_encrypt)
 	pmull_gcm_do_crypt	1
-ENDPROC(pmull_gcm_encrypt)
+SYM_FUNC_END(pmull_gcm_encrypt)

 /*
  * void pmull_gcm_decrypt(int blocks, u8 dst[], const u8 src[],
  * struct ghash_key const *k, u64 dg[], u8 ctr[],
  * int rounds, u8 tag)
  */
-ENTRY(pmull_gcm_decrypt)
+SYM_FUNC_START(pmull_gcm_decrypt)
 	pmull_gcm_do_crypt	0
-ENDPROC(pmull_gcm_decrypt)
+SYM_FUNC_END(pmull_gcm_decrypt)

-pmull_gcm_ghash_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_ghash_4x)
 	movi		MASK.16b, #0xe1
 	shl		MASK.2d, MASK.2d, #57

@@ -681,9 +681,9 @@ pmull_gcm_ghash_4x:
 	eor		XL.16b, XL.16b, T2.16b

 	ret
-ENDPROC(pmull_gcm_ghash_4x)
+SYM_FUNC_END(pmull_gcm_ghash_4x)

-pmull_gcm_enc_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_enc_4x)
 	ld1		{KS0.16b}, [x5]			// load upper counter
 	sub		w10, w8, #4
 	sub		w11, w8, #3
@@ -746,7 +746,7 @@ pmull_gcm_enc_4x:
 	eor		INP3.16b, INP3.16b, KS3.16b

 	ret
-ENDPROC(pmull_gcm_enc_4x)
+SYM_FUNC_END(pmull_gcm_enc_4x)

 	.section	".rodata", "a"
 	.align		6
Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 03/18] arm64: entry: Annotate vector table and handlers as code Date: Tue, 18 Feb 2020 19:58:27 +0000 Message-Id: <20200218195842.34156-4-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. The vector table and handlers aren't normal C style code so should be annotated as CODE. Signed-off-by: Mark Brown --- arch/arm64/kernel/entry.S | 76 +++++++++++++++++++-------------------- 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 9461d812ae27..1454f3ea2e2e 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -465,7 +465,7 @@ alternative_endif .pushsection ".entry.text", "ax" .align 11 -ENTRY(vectors) +SYM_CODE_START(vectors) kernel_ventry 1, sync_invalid // Synchronous EL1t kernel_ventry 1, irq_invalid // IRQ EL1t kernel_ventry 1, fiq_invalid // FIQ EL1t @@ -492,7 +492,7 @@ ENTRY(vectors) kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0 kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0 #endif -END(vectors) +SYM_CODE_END(vectors) #ifdef CONFIG_VMAP_STACK /* @@ -534,57 +534,57 @@ __bad_stack: ASM_BUG() .endm -el0_sync_invalid: +SYM_CODE_START_LOCAL(el0_sync_invalid) inv_entry 0, BAD_SYNC -ENDPROC(el0_sync_invalid) +SYM_CODE_END(el0_sync_invalid) -el0_irq_invalid: +SYM_CODE_START_LOCAL(el0_irq_invalid) inv_entry 0, BAD_IRQ -ENDPROC(el0_irq_invalid) +SYM_CODE_END(el0_irq_invalid) -el0_fiq_invalid: +SYM_CODE_START_LOCAL(el0_fiq_invalid) inv_entry 0, BAD_FIQ -ENDPROC(el0_fiq_invalid) +SYM_CODE_END(el0_fiq_invalid) -el0_error_invalid: +SYM_CODE_START_LOCAL(el0_error_invalid) inv_entry 0, BAD_ERROR -ENDPROC(el0_error_invalid) +SYM_CODE_END(el0_error_invalid) #ifdef CONFIG_COMPAT -el0_fiq_invalid_compat: +SYM_CODE_START_LOCAL(el0_fiq_invalid_compat) inv_entry 0, BAD_FIQ, 32 -ENDPROC(el0_fiq_invalid_compat) +SYM_CODE_END(el0_fiq_invalid_compat) #endif -el1_sync_invalid: +SYM_CODE_START_LOCAL(el1_sync_invalid) inv_entry 1, BAD_SYNC -ENDPROC(el1_sync_invalid) +SYM_CODE_END(el1_sync_invalid) -el1_irq_invalid: +SYM_CODE_START_LOCAL(el1_irq_invalid) inv_entry 1, BAD_IRQ -ENDPROC(el1_irq_invalid) +SYM_CODE_END(el1_irq_invalid) -el1_fiq_invalid: +SYM_CODE_START_LOCAL(el1_fiq_invalid) inv_entry 1, BAD_FIQ -ENDPROC(el1_fiq_invalid) +SYM_CODE_END(el1_fiq_invalid) -el1_error_invalid: +SYM_CODE_START_LOCAL(el1_error_invalid) inv_entry 1, BAD_ERROR -ENDPROC(el1_error_invalid) +SYM_CODE_END(el1_error_invalid) /* * EL1 mode handlers. 
  */
 	.align	6
-el1_sync:
+SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_sync_handler
 	kernel_exit 1
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 	.align	6
-el1_irq:
+SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
 	kernel_entry 1
 	gic_prio_irq_setup pmr=x20, tmp=x1
 	enable_da_f
@@ -639,42 +639,42 @@ alternative_else_nop_endif
 #endif

 	kernel_exit 1
-ENDPROC(el1_irq)
+SYM_CODE_END(el1_irq)

 /*
  * EL0 mode handlers.
  */
 	.align	6
-el0_sync:
+SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
 	kernel_entry 0
 	mov	x0, sp
 	bl	el0_sync_handler
 	b	ret_to_user
-ENDPROC(el0_sync)
+SYM_CODE_END(el0_sync)

 #ifdef CONFIG_COMPAT
 	.align	6
-el0_sync_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
 	kernel_entry 0, 32
 	mov	x0, sp
 	bl	el0_sync_compat_handler
 	b	ret_to_user
-ENDPROC(el0_sync_compat)
+SYM_CODE_END(el0_sync_compat)

 	.align	6
-el0_irq_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
 	kernel_entry 0, 32
 	b	el0_irq_naked
-ENDPROC(el0_irq_compat)
+SYM_CODE_END(el0_irq_compat)

-el0_error_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
 	kernel_entry 0, 32
 	b	el0_error_naked
-ENDPROC(el0_error_compat)
+SYM_CODE_END(el0_error_compat)
 #endif

 	.align	6
-el0_irq:
+SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
 	kernel_entry 0
 el0_irq_naked:
 	gic_prio_irq_setup pmr=x20, tmp=x0
@@ -696,9 +696,9 @@ el0_irq_naked:
 	bl	trace_hardirqs_on
 #endif
 	b	ret_to_user
-ENDPROC(el0_irq)
+SYM_CODE_END(el0_irq)

-el1_error:
+SYM_CODE_START_LOCAL(el1_error)
 	kernel_entry 1
 	mrs	x1, esr_el1
 	gic_prio_kentry_setup tmp=x2
@@ -706,9 +706,9 @@ el1_error:
 	mov	x0, sp
 	bl	do_serror
 	kernel_exit 1
-ENDPROC(el1_error)
+SYM_CODE_END(el1_error)

-el0_error:
+SYM_CODE_START_LOCAL(el0_error)
 	kernel_entry 0
 el0_error_naked:
 	mrs	x25, esr_el1
@@ -720,7 +720,7 @@ el0_error_naked:
 	bl	do_serror
 	enable_da_f
 	b	ret_to_user
-ENDPROC(el0_error)
+SYM_CODE_END(el0_error)

 /*
  * Ok, we need to do extra processing, enter the slow path.
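The FUNC and CODE variants differ mainly in the symbol type they record;
roughly, per the generic linkage.h definitions (an illustrative sketch, not
taken from this series):

    SYM_CODE_START(name)   /* .globl name ; ALIGN ; name:         */
    SYM_CODE_END(name)     /* .size name, .-name  -- STT_NOTYPE   */
    SYM_FUNC_END(name)     /* .size name, .-name  -- STT_FUNC     */

Exception vectors and handlers are entered by the CPU with register state
that does not follow the AAPCS, so typing them STT_FUNC would mislead tools
that assume C calling conventions; SYM_CODE_* keeps the size information
without claiming a function type.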
From patchwork Tue Feb 18 19:58:29 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 05/18] arm64: entry: Additional annotation conversions for entry.S
Date: Tue, 18 Feb 2020 19:58:29 +0000
Message-Id: <20200218195842.34156-6-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel, new macros have been introduced. These replace ENTRY and
ENDPROC with separate annotations for standard C-callable functions, data,
and code with different calling conventions. Update the remaining
annotations in the entry.S code to the new macros.
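One detail worth noting in the diff below: tramp_vectors takes its 2^11-byte
alignment from an explicit .align directive, so it uses the _NOALIGN variant,
which suppresses the default alignment the plain macro would otherwise emit
(illustrative sketch of the pattern):

 	.align	11				// explicit 2KB alignment
 SYM_CODE_START_NOALIGN(tramp_vectors)		// _NOALIGN: don't emit the
 						// default ALIGN; the label must
 						// sit exactly on the boundary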
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d535cb8a7413..fbf69fe94412 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -832,7 +832,7 @@ alternative_else_nop_endif
 	.endm

 	.align	11
-ENTRY(tramp_vectors)
+SYM_CODE_START_NOALIGN(tramp_vectors)
 	.space	0x400

 	tramp_ventry
@@ -844,15 +844,15 @@ ENTRY(tramp_vectors)
 	tramp_ventry	32
 	tramp_ventry	32
 	tramp_ventry	32
-END(tramp_vectors)
+SYM_CODE_END(tramp_vectors)

-ENTRY(tramp_exit_native)
+SYM_CODE_START(tramp_exit_native)
 	tramp_exit
-END(tramp_exit_native)
+SYM_CODE_END(tramp_exit_native)

-ENTRY(tramp_exit_compat)
+SYM_CODE_START(tramp_exit_compat)
 	tramp_exit	32
-END(tramp_exit_compat)
+SYM_CODE_END(tramp_exit_compat)

 	.ltorg
 	.popsection				// .entry.tramp.text
@@ -874,7 +874,7 @@ __entry_tramp_data_start:
 * Previous and next are guaranteed not to be the same.
 *
 */
-ENTRY(cpu_switch_to)
+SYM_FUNC_START(cpu_switch_to)
 	mov	x10, #THREAD_CPU_CONTEXT
 	add	x8, x0, x10
 	mov	x9, sp
@@ -896,7 +896,7 @@ ENTRY(cpu_switch_to)
 	mov	sp, x9
 	msr	sp_el0, x1
 	ret
-ENDPROC(cpu_switch_to)
+SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)

 /*

From patchwork Tue Feb 18 19:58:33 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 09/18] arm64: head.S: Convert to modern annotations for assembly functions
Date: Tue, 18 Feb 2020 19:58:33 +0000
Message-Id: <20200218195842.34156-10-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel, new macros have been introduced. These replace ENTRY and
ENDPROC and also add a new annotation for static functions, which
previously had no ENTRY equivalent. Update the annotations in the core
kernel code to the new macros.
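One observable effect of the new _LOCAL variants is that the symbols keep
local binding, which shows up in the symbol table along these lines
(illustrative nm output, addresses elided; not taken from a real build):

    $ nm vmlinux | grep -w -e el2_setup -e set_cpu_boot_mode_flag
    <addr> T el2_setup                  // global: SYM_FUNC_START
    <addr> t set_cpu_boot_mode_flag     // local:  SYM_FUNC_START_LOCAL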
Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 09/18] arm64: head.S: Convert to modern annotations for assembly functions Date: Tue, 18 Feb 2020 19:58:33 +0000 Message-Id: <20200218195842.34156-10-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros. Signed-off-by: Mark Brown --- arch/arm64/kernel/head.S | 56 ++++++++++++++++++++-------------------- 1 file changed, 28 insertions(+), 28 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 989b1944cb71..716c946c98e9 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -275,7 +275,7 @@ ENDPROC(preserve_boot_args) * - first few MB of the kernel linear mapping to jump to once the MMU has * been enabled */ -__create_page_tables: +SYM_FUNC_START_LOCAL(__create_page_tables) mov x28, lr /* @@ -403,7 +403,7 @@ __create_page_tables: bl __inval_dcache_area ret x28 -ENDPROC(__create_page_tables) +SYM_FUNC_END(__create_page_tables) .ltorg /* @@ -411,7 +411,7 @@ ENDPROC(__create_page_tables) * * x0 = __PHYS_OFFSET */ -__primary_switched: +SYM_FUNC_START_LOCAL(__primary_switched) adrp x4, init_thread_union add sp, x4, #THREAD_SIZE adr_l x5, init_task @@ -456,7 +456,7 @@ __primary_switched: mov x29, #0 mov x30, #0 b start_kernel -ENDPROC(__primary_switched) +SYM_FUNC_END(__primary_switched) /* * end early head section, begin head code that is also used for @@ -475,7 +475,7 @@ EXPORT_SYMBOL(kimage_vaddr) * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if * booted in EL1 or EL2 respectively. */ -ENTRY(el2_setup) +SYM_FUNC_START(el2_setup) msr SPsel, #1 // We want to use SP_EL{1,2} mrs x0, CurrentEL cmp x0, #CurrentEL_EL2 @@ -636,13 +636,13 @@ install_el2_stub: msr elr_el2, lr mov w0, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2 eret -ENDPROC(el2_setup) +SYM_FUNC_END(el2_setup) /* * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed * in w0. See arch/arm64/include/asm/virt.h for more info. */ -set_cpu_boot_mode_flag: +SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag) adr_l x1, __boot_cpu_mode cmp w0, #BOOT_CPU_MODE_EL2 b.ne 1f @@ -651,7 +651,7 @@ set_cpu_boot_mode_flag: dmb sy dc ivac, x1 // Invalidate potentially stale cache line ret -ENDPROC(set_cpu_boot_mode_flag) +SYM_FUNC_END(set_cpu_boot_mode_flag) /* * These values are written with the MMU off, but read with the MMU on. @@ -683,7 +683,7 @@ ENTRY(__early_cpu_boot_status) * This provides a "holding pen" for platforms to hold all secondary * cores are held until we're ready for them to initialise. 
 */
-ENTRY(secondary_holding_pen)
+SYM_FUNC_START(secondary_holding_pen)
 	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	mrs	x0, mpidr_el1
@@ -695,19 +695,19 @@ pen:	ldr	x4, [x3]
 	b.eq	secondary_startup
 	wfe
 	b	pen
-ENDPROC(secondary_holding_pen)
+SYM_FUNC_END(secondary_holding_pen)

 /*
 * Secondary entry point that jumps straight into the kernel. Only to
 * be used where CPUs are brought online dynamically by the kernel.
 */
-ENTRY(secondary_entry)
+SYM_FUNC_START(secondary_entry)
 	bl	el2_setup			// Drop to EL1
 	bl	set_cpu_boot_mode_flag
 	b	secondary_startup
-ENDPROC(secondary_entry)
+SYM_FUNC_END(secondary_entry)

-secondary_startup:
+SYM_FUNC_START_LOCAL(secondary_startup)
 	/*
 	 * Common entry point for secondary CPUs.
 	 */
@@ -717,9 +717,9 @@ secondary_startup:
 	bl	__enable_mmu
 	ldr	x8, =__secondary_switched
 	br	x8
-ENDPROC(secondary_startup)
+SYM_FUNC_END(secondary_startup)

-__secondary_switched:
+SYM_FUNC_START_LOCAL(__secondary_switched)
 	adr_l	x5, vectors
 	msr	vbar_el1, x5
 	isb
@@ -734,13 +734,13 @@ __secondary_switched:
 	mov	x29, #0
 	mov	x30, #0
 	b	secondary_start_kernel
-ENDPROC(__secondary_switched)
+SYM_FUNC_END(__secondary_switched)

-__secondary_too_slow:
+SYM_FUNC_START_LOCAL(__secondary_too_slow)
 	wfe
 	wfi
 	b	__secondary_too_slow
-ENDPROC(__secondary_too_slow)
+SYM_FUNC_END(__secondary_too_slow)

 /*
 * The booting CPU updates the failed status @__early_cpu_boot_status,
@@ -772,7 +772,7 @@ ENDPROC(__secondary_too_slow)
 * Checks if the selected granule size is supported by the CPU.
 * If it isn't, park the CPU
 */
-ENTRY(__enable_mmu)
+SYM_FUNC_START(__enable_mmu)
 	mrs	x2, ID_AA64MMFR0_EL1
 	ubfx	x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4
 	cmp	x2, #ID_AA64MMFR0_TGRAN_SUPPORTED
@@ -796,9 +796,9 @@ ENTRY(__enable_mmu)
 	dsb	nsh
 	isb
 	ret
-ENDPROC(__enable_mmu)
+SYM_FUNC_END(__enable_mmu)

-ENTRY(__cpu_secondary_check52bitva)
+SYM_FUNC_START(__cpu_secondary_check52bitva)
 #ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l	x0, vabits_actual
 	cmp	x0, #52
@@ -816,9 +816,9 @@ ENTRY(__cpu_secondary_check52bitva)
 #endif
 2:	ret
-ENDPROC(__cpu_secondary_check52bitva)
+SYM_FUNC_END(__cpu_secondary_check52bitva)

-__no_granule_support:
+SYM_FUNC_START_LOCAL(__no_granule_support)
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
 	update_early_cpu_boot_status \
 		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
 1:	wfe
 	wfi
 	b	1b
-ENDPROC(__no_granule_support)
+SYM_FUNC_END(__no_granule_support)

 #ifdef CONFIG_RELOCATABLE
-__relocate_kernel:
+SYM_FUNC_START_LOCAL(__relocate_kernel)
 	/*
 	 * Iterate over each entry in the relocation table, and apply the
 	 * relocations in place.
@@ -931,10 +931,10 @@ __relocate_kernel:
 #endif
 	ret
-ENDPROC(__relocate_kernel)
+SYM_FUNC_END(__relocate_kernel)
 #endif

-__primary_switch:
+SYM_FUNC_START_LOCAL(__primary_switch)
 #ifdef CONFIG_RANDOMIZE_BASE
 	mov	x19, x0				// preserve new SCTLR_EL1 value
 	mrs	x20, sctlr_el1			// preserve old SCTLR_EL1 value
@@ -977,4 +977,4 @@ __primary_switch:
 	ldr	x8, =__primary_switched
 	adrp	x0, __PHYS_OFFSET
 	br	x8
-ENDPROC(__primary_switch)
+SYM_FUNC_END(__primary_switch)

From patchwork Tue Feb 18 19:58:34 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 10/18] arm64: head: Annotate stext and preserve_boot_args as code
Date: Tue, 18 Feb 2020 19:58:34 +0000
Message-Id: <20200218195842.34156-11-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions,
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. Neither stext nor preserve_boot_args is called with the usual
AAPCS calling conventions, and they should therefore be annotated as code.
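As a concrete illustration of the distinction: an AAPCS function takes its
arguments in x0-x7, preserves x19-x29 and returns through x30, whereas
preserve_boot_args is entered with the bootloader's x0..x3 live and leaves
via a tail call. An annotated sketch of the code from the diff below:

 SYM_CODE_START_LOCAL(preserve_boot_args)
 	mov	x21, x0			// stashes the FDT pointer in a
 					// "callee-saved" register that is
 					// never restored
 	...
 	b	__inval_dcache_area	// tail call: control never returns here
 SYM_CODE_END(preserve_boot_args)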
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/head.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 716c946c98e9..c334863991e7 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -105,7 +105,7 @@ pe_header:
 *  x24        __primary_switch() .. relocate_kernel()
 *                                        current RELR displacement
 */
-ENTRY(stext)
+SYM_CODE_START(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
 	adrp	x23, __PHYS_OFFSET
@@ -120,12 +120,12 @@ ENTRY(stext)
 	 */
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
-ENDPROC(stext)
+SYM_CODE_END(stext)

 /*
 * Preserve the arguments passed by the bootloader in x0 .. x3
 */
-preserve_boot_args:
+SYM_CODE_START_LOCAL(preserve_boot_args)
 	mov	x21, x0				// x21=FDT

 	adr_l	x0, boot_args			// record the contents of
@@ -137,7 +137,7 @@ preserve_boot_args:
 	mov	x1, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
-ENDPROC(preserve_boot_args)
+SYM_CODE_END(preserve_boot_args)

 /*
 * Macro to create a table entry to the next page.
Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 12/18] arm64: kernel: Convert to modern annotations for assembly functions Date: Tue, 18 Feb 2020 19:58:36 +0000 Message-Id: <20200218195842.34156-13-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros. Signed-off-by: Mark Brown --- arch/arm64/kernel/cpu-reset.S | 4 +- arch/arm64/kernel/efi-entry.S | 4 +- arch/arm64/kernel/efi-rt-wrapper.S | 4 +- arch/arm64/kernel/entry-fpsimd.S | 20 ++++----- arch/arm64/kernel/hibernate-asm.S | 16 +++---- arch/arm64/kernel/hyp-stub.S | 20 ++++----- arch/arm64/kernel/probes/kprobes_trampoline.S | 4 +- arch/arm64/kernel/reloc_test_syms.S | 44 +++++++++---------- arch/arm64/kernel/relocate_kernel.S | 4 +- arch/arm64/kernel/sleep.S | 12 ++--- arch/arm64/kernel/smccc-call.S | 8 ++-- 11 files changed, 70 insertions(+), 70 deletions(-) diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 32c7bf858dd9..4033e2efa8cf 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -29,7 +29,7 @@ * branch to what would be the reset vector. It must be executed with the * flat identity mapping. */ -ENTRY(__cpu_soft_restart) +SYM_FUNC_START(__cpu_soft_restart) /* Clear sctlr_el1 flags. */ mrs x12, sctlr_el1 ldr x13, =SCTLR_ELx_FLAGS @@ -47,6 +47,6 @@ ENTRY(__cpu_soft_restart) mov x1, x3 // arg1 mov x2, x4 // arg2 br x8 -ENDPROC(__cpu_soft_restart) +SYM_FUNC_END(__cpu_soft_restart) .popsection diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S index 304d5b02ca67..de6ced92950e 100644 --- a/arch/arm64/kernel/efi-entry.S +++ b/arch/arm64/kernel/efi-entry.S @@ -25,7 +25,7 @@ * we want to be. The kernel image wants to be placed at TEXT_OFFSET * from start of RAM. */ -ENTRY(entry) +SYM_CODE_START(entry) /* * Create a stack frame to save FP/LR with extra space * for image_addr variable passed to efi_entry(). @@ -117,4 +117,4 @@ efi_load_fail: ret entry_end: -ENDPROC(entry) +SYM_CODE_END(entry) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..1192c4bb48df 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -5,7 +5,7 @@ #include -ENTRY(__efi_rt_asm_wrapper) +SYM_FUNC_START(__efi_rt_asm_wrapper) stp x29, x30, [sp, #-32]! 
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/cpu-reset.S                 |  4 +-
 arch/arm64/kernel/efi-entry.S                 |  4 +-
 arch/arm64/kernel/efi-rt-wrapper.S            |  4 +-
 arch/arm64/kernel/entry-fpsimd.S              | 20 ++++-----
 arch/arm64/kernel/hibernate-asm.S             | 16 +++----
 arch/arm64/kernel/hyp-stub.S                  | 20 ++++-----
 arch/arm64/kernel/probes/kprobes_trampoline.S |  4 +-
 arch/arm64/kernel/reloc_test_syms.S           | 44 +++++++++----------
 arch/arm64/kernel/relocate_kernel.S           |  4 +-
 arch/arm64/kernel/sleep.S                     | 12 ++---
 arch/arm64/kernel/smccc-call.S                |  8 ++--
 11 files changed, 70 insertions(+), 70 deletions(-)

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 32c7bf858dd9..4033e2efa8cf 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -29,7 +29,7 @@
 * branch to what would be the reset vector. It must be executed with the
 * flat identity mapping.
 */
-ENTRY(__cpu_soft_restart)
+SYM_FUNC_START(__cpu_soft_restart)
 	/* Clear sctlr_el1 flags. */
 	mrs	x12, sctlr_el1
 	ldr	x13, =SCTLR_ELx_FLAGS
@@ -47,6 +47,6 @@ ENTRY(__cpu_soft_restart)
 	mov	x1, x3				// arg1
 	mov	x2, x4				// arg2
 	br	x8
-ENDPROC(__cpu_soft_restart)
+SYM_FUNC_END(__cpu_soft_restart)

 .popsection
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 304d5b02ca67..de6ced92950e 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -25,7 +25,7 @@
 * we want to be. The kernel image wants to be placed at TEXT_OFFSET
 * from start of RAM.
 */
-ENTRY(entry)
+SYM_CODE_START(entry)
 	/*
 	 * Create a stack frame to save FP/LR with extra space
 	 * for image_addr variable passed to efi_entry().
@@ -117,4 +117,4 @@ efi_load_fail:
 	ret

 entry_end:
-ENDPROC(entry)
+SYM_CODE_END(entry)
diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
index 3fc71106cb2b..1192c4bb48df 100644
--- a/arch/arm64/kernel/efi-rt-wrapper.S
+++ b/arch/arm64/kernel/efi-rt-wrapper.S
@@ -5,7 +5,7 @@

 #include

-ENTRY(__efi_rt_asm_wrapper)
+SYM_FUNC_START(__efi_rt_asm_wrapper)
 	stp	x29, x30, [sp, #-32]!
 	mov	x29, sp

@@ -35,4 +35,4 @@ ENTRY(__efi_rt_asm_wrapper)
 	b.ne	0f
 	ret
 0:	b	efi_handle_corrupted_x18	// tail call
-ENDPROC(__efi_rt_asm_wrapper)
+SYM_FUNC_END(__efi_rt_asm_wrapper)
diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S
index 0f24eae8f3cc..f880dd63ddc3 100644
--- a/arch/arm64/kernel/entry-fpsimd.S
+++ b/arch/arm64/kernel/entry-fpsimd.S
@@ -16,34 +16,34 @@
 *
 * x0 - pointer to struct fpsimd_state
 */
-ENTRY(fpsimd_save_state)
+SYM_FUNC_START(fpsimd_save_state)
 	fpsimd_save x0, 8
 	ret
-ENDPROC(fpsimd_save_state)
+SYM_FUNC_END(fpsimd_save_state)

 /*
 * Load the FP registers.
 *
 * x0 - pointer to struct fpsimd_state
 */
-ENTRY(fpsimd_load_state)
+SYM_FUNC_START(fpsimd_load_state)
 	fpsimd_restore x0, 8
 	ret
-ENDPROC(fpsimd_load_state)
+SYM_FUNC_END(fpsimd_load_state)

 #ifdef CONFIG_ARM64_SVE
-ENTRY(sve_save_state)
+SYM_FUNC_START(sve_save_state)
 	sve_save 0, x1, 2
 	ret
-ENDPROC(sve_save_state)
+SYM_FUNC_END(sve_save_state)

-ENTRY(sve_load_state)
+SYM_FUNC_START(sve_load_state)
 	sve_load 0, x1, x2, 3, x4
 	ret
-ENDPROC(sve_load_state)
+SYM_FUNC_END(sve_load_state)

-ENTRY(sve_get_vl)
+SYM_FUNC_START(sve_get_vl)
 	_sve_rdvl 0, 1
 	ret
-ENDPROC(sve_get_vl)
+SYM_FUNC_END(sve_get_vl)
 #endif /* CONFIG_ARM64_SVE */
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 38bcd4d4e43b..2fad0ec846e3 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -65,7 +65,7 @@
 * x5: physical address of a zero page that remains zero after resume
 */
 .pushsection ".hibernate_exit.text", "ax"
-ENTRY(swsusp_arch_suspend_exit)
+SYM_CODE_START(swsusp_arch_suspend_exit)
 	/*
 	 * We execute from ttbr0, change ttbr1 to our copied linear map tables
 	 * with a break-before-make via the zero page
@@ -112,7 +112,7 @@ ENTRY(swsusp_arch_suspend_exit)
 3:	ret

 	.ltorg
-ENDPROC(swsusp_arch_suspend_exit)
+SYM_CODE_END(swsusp_arch_suspend_exit)

 /*
 * Restore the hyp stub.
@@ -121,15 +121,15 @@ ENDPROC(swsusp_arch_suspend_exit)
 *
 * x24: The physical address of __hyp_stub_vectors
 */
-el1_sync:
+SYM_CODE_START_LOCAL(el1_sync)
 	msr	vbar_el2, x24
 	eret
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 .macro invalid_vector	label
-\label:
+SYM_CODE_START_LOCAL(\label)
 	b \label
-ENDPROC(\label)
+SYM_CODE_END(\label)
 .endm

 	invalid_vector	el2_sync_invalid
@@ -143,7 +143,7 @@ ENDPROC(\label)

 /* el2 vectors - switch el2 here while we restore the memory image.
 */
 	.align 11
-ENTRY(hibernate_el2_vectors)
+SYM_CODE_START(hibernate_el2_vectors)
 	ventry	el2_sync_invalid		// Synchronous EL2t
 	ventry	el2_irq_invalid			// IRQ EL2t
 	ventry	el2_fiq_invalid			// FIQ EL2t
@@ -163,6 +163,6 @@ ENTRY(hibernate_el2_vectors)
 	ventry	el1_irq_invalid			// IRQ 32-bit EL1
 	ventry	el1_fiq_invalid			// FIQ 32-bit EL1
 	ventry	el1_error_invalid		// Error 32-bit EL1
-END(hibernate_el2_vectors)
+SYM_CODE_END(hibernate_el2_vectors)

 .popsection
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 73d46070b315..b095a42daa77 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -21,7 +21,7 @@

 	.align 11

-ENTRY(__hyp_stub_vectors)
+SYM_CODE_START(__hyp_stub_vectors)
 	ventry	el2_sync_invalid		// Synchronous EL2t
 	ventry	el2_irq_invalid			// IRQ EL2t
 	ventry	el2_fiq_invalid			// FIQ EL2t
@@ -41,11 +41,11 @@ ENTRY(__hyp_stub_vectors)
 	ventry	el1_irq_invalid			// IRQ 32-bit EL1
 	ventry	el1_fiq_invalid			// FIQ 32-bit EL1
 	ventry	el1_error_invalid		// Error 32-bit EL1
-ENDPROC(__hyp_stub_vectors)
+SYM_CODE_END(__hyp_stub_vectors)

 	.align 11

-el1_sync:
+SYM_CODE_START_LOCAL(el1_sync)
 	cmp	x0, #HVC_SET_VECTORS
 	b.ne	2f
 	msr	vbar_el2, x1
@@ -68,12 +68,12 @@ el1_sync:

 9:	mov	x0, xzr
 	eret
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 .macro invalid_vector	label
-\label:
+SYM_CODE_START_LOCAL(\label)
 	b \label
-ENDPROC(\label)
+SYM_CODE_END(\label)
 .endm

 	invalid_vector	el2_sync_invalid
@@ -106,15 +106,15 @@ ENDPROC(\label)
 * initialisation entry point.
 */

-ENTRY(__hyp_set_vectors)
+SYM_FUNC_START(__hyp_set_vectors)
 	mov	x1, x0
 	mov	x0, #HVC_SET_VECTORS
 	hvc	#0
 	ret
-ENDPROC(__hyp_set_vectors)
+SYM_FUNC_END(__hyp_set_vectors)

-ENTRY(__hyp_reset_vectors)
+SYM_FUNC_START(__hyp_reset_vectors)
 	mov	x0, #HVC_RESET_VECTORS
 	hvc	#0
 	ret
-ENDPROC(__hyp_reset_vectors)
+SYM_FUNC_END(__hyp_reset_vectors)
diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S
index 45dce03aaeaf..890ca72c5a51 100644
--- a/arch/arm64/kernel/probes/kprobes_trampoline.S
+++ b/arch/arm64/kernel/probes/kprobes_trampoline.S
@@ -61,7 +61,7 @@
 	ldp x28, x29, [sp, #S_X28]
 	.endm

-ENTRY(kretprobe_trampoline)
+SYM_CODE_START(kretprobe_trampoline)
 	sub sp, sp, #S_FRAME_SIZE

 	save_all_base_regs
@@ -79,4 +79,4 @@ ENTRY(kretprobe_trampoline)
 	add sp, sp, #S_FRAME_SIZE
 	ret

-ENDPROC(kretprobe_trampoline)
+SYM_CODE_END(kretprobe_trampoline)
diff --git a/arch/arm64/kernel/reloc_test_syms.S b/arch/arm64/kernel/reloc_test_syms.S
index 16a34f188f26..53e8cdfe80e1 100644
--- a/arch/arm64/kernel/reloc_test_syms.S
+++ b/arch/arm64/kernel/reloc_test_syms.S
@@ -5,81 +5,81 @@

 #include

-ENTRY(absolute_data64)
+SYM_CODE_START(absolute_data64)
 	ldr	x0, 0f
 	ret
 0:	.quad	sym64_abs
-ENDPROC(absolute_data64)
+SYM_CODE_END(absolute_data64)

-ENTRY(absolute_data32)
+SYM_CODE_START(absolute_data32)
 	ldr	w0, 0f
 	ret
 0:	.long	sym32_abs
-ENDPROC(absolute_data32)
+SYM_CODE_END(absolute_data32)

-ENTRY(absolute_data16)
+SYM_CODE_START(absolute_data16)
 	adr	x0, 0f
 	ldrh	w0, [x0]
 	ret
 0:	.short	sym16_abs, 0
-ENDPROC(absolute_data16)
+SYM_CODE_END(absolute_data16)

-ENTRY(signed_movw)
+SYM_CODE_START(signed_movw)
 	movz	x0, #:abs_g2_s:sym64_abs
 	movk	x0, #:abs_g1_nc:sym64_abs
 	movk	x0, #:abs_g0_nc:sym64_abs
 	ret
-ENDPROC(signed_movw)
+SYM_CODE_END(signed_movw)

-ENTRY(unsigned_movw)
+SYM_CODE_START(unsigned_movw)
 	movz	x0, #:abs_g3:sym64_abs
 	movk	x0, #:abs_g2_nc:sym64_abs
 	movk	x0, #:abs_g1_nc:sym64_abs
 	movk	x0, #:abs_g0_nc:sym64_abs
 	ret
-ENDPROC(unsigned_movw)
+SYM_CODE_END(unsigned_movw)

 	.align	12
 	.space	0xff8
-ENTRY(relative_adrp)
+SYM_CODE_START(relative_adrp)
 	adrp	x0, sym64_rel
 	add	x0, x0, #:lo12:sym64_rel
 	ret
-ENDPROC(relative_adrp)
+SYM_CODE_END(relative_adrp)

 	.align	12
 	.space	0xffc
-ENTRY(relative_adrp_far)
+SYM_CODE_START(relative_adrp_far)
 	adrp	x0, memstart_addr
 	add	x0, x0, #:lo12:memstart_addr
 	ret
-ENDPROC(relative_adrp_far)
+SYM_CODE_END(relative_adrp_far)

-ENTRY(relative_adr)
+SYM_CODE_START(relative_adr)
 	adr	x0, sym64_rel
 	ret
-ENDPROC(relative_adr)
+SYM_CODE_END(relative_adr)

-ENTRY(relative_data64)
+SYM_CODE_START(relative_data64)
 	adr	x1, 0f
 	ldr	x0, [x1]
 	add	x0, x0, x1
 	ret
 0:	.quad	sym64_rel - .
-ENDPROC(relative_data64)
+SYM_CODE_END(relative_data64)

-ENTRY(relative_data32)
+SYM_CODE_START(relative_data32)
 	adr	x1, 0f
 	ldr	w0, [x1]
 	add	x0, x0, x1
 	ret
 0:	.long	sym64_rel - .
-ENDPROC(relative_data32)
+SYM_CODE_END(relative_data32)

-ENTRY(relative_data16)
+SYM_CODE_START(relative_data16)
 	adr	x1, 0f
 	ldrsh	w0, [x1]
 	add	x0, x0, x1
 	ret
 0:	.short	sym64_rel - ., 0
-ENDPROC(relative_data16)
+SYM_CODE_END(relative_data16)
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index c1d7db71a726..dc65443f139d 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -26,7 +26,7 @@
 * control_code_page, a special page which has been set up to be preserved
 * during the copy operation.
 */
-ENTRY(arm64_relocate_new_kernel)
+SYM_CODE_START(arm64_relocate_new_kernel)
 	/* Setup the list loop variables. */
 	mov	x18, x2				/* x18 = dtb address */
@@ -111,7 +111,7 @@ ENTRY(arm64_relocate_new_kernel)
 	mov	x3, xzr
 	br	x17

-ENDPROC(arm64_relocate_new_kernel)
+SYM_CODE_END(arm64_relocate_new_kernel)

 .ltorg
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index f5b04dd8a710..a079551aa7b0 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -61,7 +61,7 @@
 *
 * x0 = struct sleep_stack_data area
 */
-ENTRY(__cpu_suspend_enter)
+SYM_FUNC_START(__cpu_suspend_enter)
 	stp	x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS]
 	stp	x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16]
 	stp	x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32]
@@ -94,10 +94,10 @@ ENTRY(__cpu_suspend_enter)
 	ldp	x29, lr, [sp], #16
 	mov	x0, #1
 	ret
-ENDPROC(__cpu_suspend_enter)
+SYM_FUNC_END(__cpu_suspend_enter)

 	.pushsection ".idmap.text", "awx"
-ENTRY(cpu_resume)
+SYM_FUNC_START(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
@@ -105,11 +105,11 @@ ENTRY(cpu_resume)
 	bl	__enable_mmu
 	ldr	x8, =_cpu_resume
 	br	x8
-ENDPROC(cpu_resume)
+SYM_FUNC_END(cpu_resume)
 	.ltorg
 	.popsection

-ENTRY(_cpu_resume)
+SYM_FUNC_START(_cpu_resume)
 	mrs	x1, mpidr_el1
 	adr_l	x8, mpidr_hash		// x8 = struct mpidr_hash virt address
@@ -145,4 +145,4 @@ ENTRY(_cpu_resume)
 	ldp	x29, lr, [x29]
 	mov	x0, #0
 	ret
-ENDPROC(_cpu_resume)
+SYM_FUNC_END(_cpu_resume)
diff --git a/arch/arm64/kernel/smccc-call.S b/arch/arm64/kernel/smccc-call.S
index 54655273d1e0..1f93809528a4 100644
--- a/arch/arm64/kernel/smccc-call.S
+++ b/arch/arm64/kernel/smccc-call.S
@@ -30,9 +30,9 @@
 * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
 * struct arm_smccc_quirk *quirk)
 */
-ENTRY(__arm_smccc_smc)
+SYM_FUNC_START(__arm_smccc_smc)
 	SMCCC	smc
-ENDPROC(__arm_smccc_smc)
+SYM_FUNC_END(__arm_smccc_smc)
 EXPORT_SYMBOL(__arm_smccc_smc)

 /*
@@ -41,7 +41,7 @@ EXPORT_SYMBOL(__arm_smccc_smc)
 * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
 * struct arm_smccc_quirk *quirk)
 */
-ENTRY(__arm_smccc_hvc)
+SYM_FUNC_START(__arm_smccc_hvc)
 	SMCCC	hvc
-ENDPROC(__arm_smccc_hvc)
+SYM_FUNC_END(__arm_smccc_hvc)
 EXPORT_SYMBOL(__arm_smccc_hvc)

From patchwork Tue Feb 18 19:58:38 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 14/18] arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
Date: Tue, 18 Feb 2020 19:58:38 +0000
Message-Id: <20200218195842.34156-15-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

We have recently introduced new macros, SYM_CODE_START() and
SYM_CODE_END(), for annotating assembly symbols that aren't C functions,
in an effort to clarify and simplify our annotations of assembly files.

Using these for __bp_harden_hyp_vecs is more involved than for most
symbols, as this symbol is annotated quite unusually: rather than just
having the explicit symbol, we define _start and _end symbols which are
then used to compute the length. This does not play at all nicely with
the new-style macros. Since the size of the vectors is a known constant
which won't vary, the simplest thing to do is to drop the separate _start
and _end symbols and use a #define for the size.
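The .org pair that replaces the old _end symbol doubles as a build-time
size assertion; the idiom used in the diff below works like this (sketch):

 SYM_CODE_START(__bp_harden_hyp_vecs)
 	.rept	BP_HARDEN_EL2_SLOTS
 	generate_vectors
 	.endr
 1:	.org	__bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ
 	.org	1b
 SYM_CODE_END(__bp_harden_hyp_vecs)

The first .org moves the location counter to start-plus-expected-size and
the second moves it back to label 1. If the generated vectors were larger
than the constant the first .org would have to move the counter backwards,
and if they were smaller the second would; either mismatch is an assembler
error, so the symbol is guaranteed to be exactly __BP_HARDEN_HYP_VECS_SZ
bytes long.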
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_mmu.h | 9 ++++-----
 arch/arm64/include/asm/mmu.h     | 4 +++-
 arch/arm64/kernel/cpu_errata.c   | 2 +-
 arch/arm64/kvm/hyp/hyp-entry.S   | 6 ++++--
 4 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 53d846f1bfe7..b5f723cf9599 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -480,7 +480,7 @@ static inline void *kvm_get_hyp_vector(void)
 	int slot = -1;

 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
-		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs_start));
+		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
 		slot = data->hyp_vectors_slot;
 	}

@@ -509,14 +509,13 @@ static inline int kvm_map_vectors(void)
 	 *  HBP +  HEL2 -> use hardened vertors and use exec mapping
 	 */
 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
-		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs_start);
+		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
 		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
 	}

 	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
-		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs_start);
-		unsigned long size = (__bp_harden_hyp_vecs_end -
-				      __bp_harden_hyp_vecs_start);
+		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
+		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;

 		/*
 		 * Always allocate a spare vector slot, as we don't
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index e4d862420bb4..a3324d6ccbfe 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -13,6 +13,7 @@
 #define TTBR_ASID_MASK	(UL(0xffff) << 48)

 #define BP_HARDEN_EL2_SLOTS 4
+#define __BP_HARDEN_HYP_VECS_SZ (BP_HARDEN_EL2_SLOTS * SZ_2K)

 #ifndef __ASSEMBLY__

@@ -45,7 +46,8 @@ struct bp_hardening_data {

 #if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
      defined(CONFIG_HARDEN_EL2_VECTORS))
-extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
+
+extern char __bp_harden_hyp_vecs[];
 extern atomic_t arm64_el2_vector_last_slot;
 #endif  /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 703ad0a84f99..0af2201cefda 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -119,7 +119,7 @@ extern char __smccc_workaround_1_smc_end[];
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
 {
-	void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K);
+	void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
 	int i;

 	for (i = 0; i < SZ_2K; i += 0x80)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 0aea8f9ab23d..1e2ab928a92f 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -312,11 +312,13 @@ alternative_cb_end
 	.endm

 	.align	11
-ENTRY(__bp_harden_hyp_vecs_start)
+SYM_CODE_START(__bp_harden_hyp_vecs)
 	.rept BP_HARDEN_EL2_SLOTS
 	generate_vectors
 	.endr
-ENTRY(__bp_harden_hyp_vecs_end)
+1:	.org	__bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ
+	.org	1b
+SYM_CODE_END(__bp_harden_hyp_vecs)

 	.popsection
From patchwork Tue Feb 18 19:58:39 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 15/18] arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
Date: Tue, 18 Feb 2020 19:58:39 +0000
Message-Id: <20200218195842.34156-16-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel, new macros have been introduced. These replace ENTRY and
ENDPROC with separate annotations for standard C-callable functions, data,
and code with different calling conventions.

Using these for __smccc_workaround_1_smc is more involved than for most
symbols, as this symbol is annotated quite unusually: rather than just
having the explicit symbol, we define _start and _end symbols which are
then used to compute the length. This does not play at all nicely with the
new-style macros. Instead, define a constant for the size of the function
and use it both in the C code and for .org-based size checks in the
assembly code.
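With the length expressed as a constant (36 bytes, i.e. nine 4-byte AArch64
instructions), the C side can declare the symbol as a fixed-size array and
compute the end pointer instead of importing a second symbol from assembly
(a sketch of the pattern used in the diff below):

    extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];

    smccc_start = __smccc_workaround_1_smc;
    smccc_end   = __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ;

The same constant feeds the .org-based size check in hyp-entry.S, so the C
and assembly views of the length cannot drift apart.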
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_asm.h |  4 ++++
 arch/arm64/kernel/cpu_errata.c   | 14 ++++++--------
 arch/arm64/kvm/hyp/hyp-entry.S   |  6 ++++--
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 44a243754c1b..7c7eeeaab9fa 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -36,6 +36,8 @@
 */
 #define KVM_VECTOR_PREAMBLE	(2 * AARCH64_INSN_SIZE)

+#define __SMCCC_WORKAROUND_1_SMC_SZ 36
+
 #ifndef __ASSEMBLY__

 #include
@@ -75,6 +77,8 @@ extern void __vgic_v3_init_lrs(void);

 extern u32 __kvm_get_mdcr_el2(void);

+extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0af2201cefda..6a2ca339741c 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include

 static bool __maybe_unused
@@ -113,9 +114,6 @@ atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

 #ifdef CONFIG_KVM_INDIRECT_VECTORS
-extern char __smccc_workaround_1_smc_start[];
-extern char __smccc_workaround_1_smc_end[];
-
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
 {
@@ -163,9 +161,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
 	raw_spin_unlock(&bp_lock);
 }
 #else
-#define __smccc_workaround_1_smc_start		NULL
-#define __smccc_workaround_1_smc_end		NULL
-
 static void install_bp_hardening_cb(bp_hardening_cb_t fn,
 				    const char *hyp_vecs_start,
 				    const char *hyp_vecs_end)
@@ -239,11 +234,14 @@ static int detect_harden_bp_fw(void)
 		smccc_end = NULL;
 		break;

+#if IS_ENABLED(CONFIG_KVM_ARM_HOST)
 	case SMCCC_CONDUIT_SMC:
 		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		smccc_start = __smccc_workaround_1_smc;
+		smccc_end = __smccc_workaround_1_smc +
+			__SMCCC_WORKAROUND_1_SMC_SZ;
 		break;
+#endif

 	default:
 		return -1;
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 1e2ab928a92f..c2a13ab3c471 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -322,7 +322,7 @@ SYM_CODE_END(__bp_harden_hyp_vecs)

 	.popsection

-ENTRY(__smccc_workaround_1_smc_start)
+SYM_CODE_START(__smccc_workaround_1_smc)
 	esb
 	sub	sp, sp, #(8 * 4)
 	stp	x2, x3, [sp, #(8 * 0)]
@@ -332,5 +332,7 @@ ENTRY(__smccc_workaround_1_smc_start)
 	ldp	x2, x3, [sp, #(8 * 0)]
 	ldp	x0, x1, [sp, #(8 * 2)]
 	add	sp, sp, #(8 * 4)
-ENTRY(__smccc_workaround_1_smc_end)
+1:	.org	__smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
+	.org	1b
+SYM_CODE_END(__smccc_workaround_1_smc)
 #endif
From patchwork Tue Feb 18 19:58:42 2020
From: Mark Brown
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon, Marc Zyngier,
    James Morse, Julien Thierry, Suzuki K Poulose
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-crypto@vger.kernel.org, Mark Brown
Subject: [PATCH 18/18] arm64: vdso32: Convert to modern assembler annotations
Date: Tue, 18 Feb 2020 19:58:42 +0000
Message-Id: <20200218195842.34156-19-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>
References: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions,
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. Use these for the compat VDSO, allowing us to drop the custom
ARM_ENTRY() and ARM_ENDPROC() macros.
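The custom wrappers being dropped only added the %function type that the
old ENTRY/END pair lacked, and the generic macros now provide that
themselves, so the conversion is one-for-one. Rough correspondence,
assuming the generic linkage.h definitions:

    ARM_ENTRY(name)    =  ENTRY(name)                       ~ SYM_FUNC_START(name)
    ARM_ENDPROC(name)  =  .type name, %function; END(name)  ~ SYM_FUNC_END(name)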
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/vdso32/sigreturn.S | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/vdso32/sigreturn.S b/arch/arm64/kernel/vdso32/sigreturn.S
index 1a81277c2d09..620524969696 100644
--- a/arch/arm64/kernel/vdso32/sigreturn.S
+++ b/arch/arm64/kernel/vdso32/sigreturn.S
@@ -10,13 +10,6 @@
 #include
 #include

-#define ARM_ENTRY(name)		\
-	ENTRY(name)
-
-#define ARM_ENDPROC(name)	\
-	.type name, %function;	\
-	END(name)
-
 	.text

 	.arm
@@ -24,39 +17,39 @@
 	.save {r0-r15}
 	.pad #COMPAT_SIGFRAME_REGS_OFFSET
 	nop
-ARM_ENTRY(__kernel_sigreturn_arm)
+SYM_FUNC_START(__kernel_sigreturn_arm)
 	mov r7, #__NR_compat_sigreturn
 	svc #0
 	.fnend
-ARM_ENDPROC(__kernel_sigreturn_arm)
+SYM_FUNC_END(__kernel_sigreturn_arm)

 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
 	nop
-ARM_ENTRY(__kernel_rt_sigreturn_arm)
+SYM_FUNC_START(__kernel_rt_sigreturn_arm)
 	mov r7, #__NR_compat_rt_sigreturn
 	svc #0
 	.fnend
-ARM_ENDPROC(__kernel_rt_sigreturn_arm)
+SYM_FUNC_END(__kernel_rt_sigreturn_arm)

 	.thumb
 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_SIGFRAME_REGS_OFFSET
 	nop
-ARM_ENTRY(__kernel_sigreturn_thumb)
+SYM_FUNC_START(__kernel_sigreturn_thumb)
 	mov r7, #__NR_compat_sigreturn
 	svc #0
 	.fnend
-ARM_ENDPROC(__kernel_sigreturn_thumb)
+SYM_FUNC_END(__kernel_sigreturn_thumb)

 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
 	nop
-ARM_ENTRY(__kernel_rt_sigreturn_thumb)
+SYM_FUNC_START(__kernel_rt_sigreturn_thumb)
 	mov r7, #__NR_compat_rt_sigreturn
 	svc #0
 	.fnend
-ARM_ENDPROC(__kernel_rt_sigreturn_thumb)
+SYM_FUNC_END(__kernel_rt_sigreturn_thumb)