From patchwork Fri Nov 11 18:37:02 2016
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 81891
Subject: [9/9][RFC][AArch64] Accelerate -fstack-protector through pointer
 authentication extension
To: gcc-patches
References: <72418e98-a400-c503-e8ce-c3fbe1ecc4a7@foss.arm.com>
 <64dd1b38-ff0a-5df0-1d3c-2fbf083e2697@foss.arm.com>
 <532363d6-0b33-491f-264d-9cd627713bf6@foss.arm.com>
 <172bd740-755c-5267-3a9d-692c84d25395@foss.arm.com>
 <9333e644-4daa-38e3-690e-2ea3473b0f29@foss.arm.com>
 <1ae89b2b-9819-12f8-5341-776c3b02e5b3@foss.arm.com>
Cc: "Richard Earnshaw (lists)" , James Greenhalgh
From: Jiong Wang
Message-ID:
Date: Fri, 11 Nov 2016 18:37:02 +0000
In-Reply-To: <1ae89b2b-9819-12f8-5341-776c3b02e5b3@foss.arm.com>

This patch accelerates GCC's existing -fstack-protector using the ARMv8.3-A
pointer authentication instructions.

AArch64 currently has the following stack layout:

   |  caller's LR
   |  ....
   |------------
   |  canary          <- sentinel for -fstack-protector
   |  locals (buffer located here)
   |------------
   |
   |  other callees
   |
   |  callee's LR     <- sentinel for -msign-return-address
   |------------
   |

We can swap the locals and callee-saved areas:

   |  ...
   |  vararg
   |------------
   |  other callees
   |------------
   |  LR
   |------------
   |  locals (buffer located here)

We then sign LR and make it serve as the canary value.

There are several benefits to this approach:

  * It is essentially -msign-return-address plus swapping the locals and
    callee-saved areas.
  * It requires almost no changes to the prologue and epilogue, so it avoids
    making them more complex.
  * It needs no additional runtime support; libssp is not required.

The runtime overhead before and after this patch is:

o canary insert

  The default GCC SSP runtime loads the canary from the global variable
  "__stack_chk_guard", initialized in libssp:

    adrp    x19, _GLOBAL_OFFSET_TABLE_
    ldr     x19, [x19, #:gotpage_lo15:__stack_chk_guard]
    ldr     x2, [x19]
    str     x2, [x29, 56]

  This patch accelerates it into:

    sign lr

o canary check

  The default GCC SSP runtime reloads the canary from the stack, compares it
  with the original value and branches to the abort function:

    ldr     x2, [x29, 56]
    ldr     x1, [x19]
    eor     x1, x2, x1
    cbnz    x1, .L5
    ...
    ret
  .L5:
    bl      __stack_chk_fail

  This is accelerated into:

    aut lr + ret

  or

    retaa

  If the canary value (the signed LR) fails authentication, the return to an
  invalid address will cause an exception.

NOTE: because the original LR is signed, this approach requires a DWARF
change, so the binary needs a new libgcc to make sure C++ exception handling
works correctly.  Given that this acceleration already requires the user to
specify -mstack-protector-dialect=pauth, the target platform should in most
cases have the new libgcc installed; otherwise the new pointer authentication
features could not be used at all.
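For illustration, here is the kind of function the protection applies to,
together with the intended shape of the generated code.  This is only a
sketch: the example source, the compile command (including the -march flag)
and the exact mnemonics are assumptions made for this mail, not output
produced by the patch.

  /* Hypothetical example, not part of the patch: a function whose local
     buffer makes -fstack-protector-strong insert a canary.  Compiled with
     something like (flags are an assumption for illustration)

       gcc -O2 -fstack-protector-strong -march=armv8.3-a \
           -mstack-protector-dialect=pauth example.c

     the intent is a prologue/epilogue of the "sign lr" ... "retaa" shape
     shown above, with no load or re-check of __stack_chk_guard.  */

  #include <stdio.h>
  #include <string.h>

  void
  greet (const char *name)
  {
    char buf[64];                 /* local buffer -> canary inserted  */

    strncpy (buf, name, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    printf ("hello, %s\n", buf);
  }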
gcc/
2016-11-11  Jiong Wang

        * config/aarch64/aarch64-opts.h (aarch64_stack_protector_type): New
        enum.
        * config/aarch64/aarch64.c (aarch64_layout_frame): Swap callees and
        locals when -mstack-protector-dialect=pauth specified.
        (aarch64_expand_prologue): Use AARCH64_PAUTH_SSP_OR_RA_SIGN instead
        of AARCH64_ENABLE_RETURN_ADDRESS_SIGN.
        (aarch64_expand_epilogue): Likewise.
        (aarch64_override_options): Sanity check for ILP32 and TARGET_PAUTH.
        * config/aarch64/aarch64.md (*do_return): Use
        AARCH64_PAUTH_SSP_OR_RA_SIGN instead of
        AARCH64_ENABLE_RETURN_ADDRESS_SIGN.
        * config/aarch64/aarch64.h (AARCH64_PAUTH_SSP_OPTION,
        AARCH64_PAUTH_SSP, AARCH64_PAUTH_SSP_OR_RA_SIGN, LINK_SSP_SPEC): New
        defines.
        * config/aarch64/aarch64.opt (-mstack-protector-dialect=): New
        option.
        * doc/invoke.texi (AArch64 Options): Document
        -mstack-protector-dialect=.

diff --git a/gcc/config/aarch64/aarch64-opts.h b/gcc/config/aarch64/aarch64-opts.h
index 41c14b38a6188d399eb04baca2896e033c03ff1b..ff464ea5675146d62f0b676fe776f882fc1b8d80 100644
--- a/gcc/config/aarch64/aarch64-opts.h
+++ b/gcc/config/aarch64/aarch64-opts.h
@@ -99,4 +99,10 @@ enum aarch64_function_type {
   AARCH64_FUNCTION_ALL
 };
 
+/* GCC standard stack protector (Canary insertion based) types for AArch64.  */
+enum aarch64_stack_protector_type {
+  STACK_PROTECTOR_TRAD,
+  STACK_PROTECTOR_PAUTH
+};
+
 #endif
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 907e8bdf5b4961b3107dcd5a481de28335e4be89..73ef2677a11450fe21f765011317bd3367ef0d94 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -982,4 +982,25 @@ enum aarch64_pauth_action_type
   AARCH64_PAUTH_AUTH
 };
 
+/* Pointer authentication accelerated -fstack-protector.  */
+#define AARCH64_PAUTH_SSP_OPTION \
+  (TARGET_PAUTH && aarch64_stack_protector_dialect == STACK_PROTECTOR_PAUTH)
+
+#define AARCH64_PAUTH_SSP \
+  (crtl->stack_protect_guard && AARCH64_PAUTH_SSP_OPTION)
+
+#define AARCH64_PAUTH_SSP_OR_RA_SIGN \
+  (AARCH64_PAUTH_SSP || AARCH64_ENABLE_RETURN_ADDRESS_SIGN)
+
+#ifndef TARGET_LIBC_PROVIDES_SSP
+#define LINK_SSP_SPEC "%{!mstack-protector-dialect=pauth:\
+  %{fstack-protector|fstack-protector-all\
+  |fstack-protector-strong|fstack-protector-explicit:\
+  -lssp_nonshared -lssp}}"
+#endif
+
+/* Don't use GCC default SSP runtime if pointer authentication acceleration
+   enabled.  */
+#define ENABLE_DEFAULT_SSP_RUNTIME !(AARCH64_PAUTH_SSP_OPTION)
+
 #endif  /* GCC_AARCH64_H */
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index cae177dca511fdb909ef82c972d3bbdebab215e2..c469baf92268ff894f5cf0ea9f5dbd4180714b98 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -2993,6 +2993,15 @@ aarch64_layout_frame (void)
         = cfun->machine->frame.frame_size - cfun->machine->frame.initial_adjust;
     }
 
+  if (AARCH64_PAUTH_SSP)
+    {
+      cfun->machine->frame.callee_adjust = varargs_and_saved_regs_size;
+      cfun->machine->frame.final_adjust
+        = cfun->machine->frame.frame_size - cfun->machine->frame.callee_adjust;
+      cfun->machine->frame.hard_fp_offset = cfun->machine->frame.callee_adjust;
+      cfun->machine->frame.locals_offset = cfun->machine->frame.hard_fp_offset;
+    }
+
   cfun->machine->frame.laid_out = true;
 }
 
@@ -3203,7 +3212,7 @@ aarch64_save_callee_saves (machine_mode mode, HOST_WIDE_INT start_offset,
 
       RTX_FRAME_RELATED_P (insn) = 1;
 
-      if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN && lr_pair_reg != INVALID_REGNUM)
+      if (AARCH64_PAUTH_SSP_OR_RA_SIGN && lr_pair_reg != INVALID_REGNUM)
         {
           rtx cfi_ops = NULL_RTX;
 
@@ -3335,7 +3344,7 @@ aarch64_expand_prologue (void)
 
   /* Do return address signing for all functions, even those for which LR is
      not pushed onto stack.  */
-  if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN)
+  if (AARCH64_PAUTH_SSP_OR_RA_SIGN)
     {
       insn = emit_insn (gen_sign_reg (gen_rtx_REG (Pmode, LR_REGNUM),
                                       gen_rtx_REG (Pmode, LR_REGNUM),
@@ -3368,7 +3377,7 @@ aarch64_expand_prologue (void)
       aarch64_push_regs (reg1, reg2, callee_adjust);
       /* Generate return address signing dwarf annotation when
         omit-frame-pointer.  */
-      if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN
+      if (AARCH64_PAUTH_SSP_OR_RA_SIGN
           && (reg1 == LR_REGNUM || reg2 == LR_REGNUM))
         {
           rtx cfi_ops = NULL_RTX;
@@ -3503,7 +3512,7 @@ aarch64_expand_epilogue (bool for_sibcall)
       rtx new_cfa = plus_constant (Pmode, stack_pointer_rtx, initial_adjust);
       cfi_ops = alloc_reg_note (REG_CFA_DEF_CFA, new_cfa, cfi_ops);
-      if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN)
+      if (AARCH64_PAUTH_SSP_OR_RA_SIGN)
         REG_NOTES (insn) = aarch64_attach_ra_auth_dwarf_note (cfi_ops, 0);
       else
         REG_NOTES (insn) = cfi_ops;
@@ -3528,7 +3537,7 @@ aarch64_expand_epilogue (bool for_sibcall)
      authentication, as the following stack adjustment will update CFA to
     handler's CFA while we want to use the CFA of the function which calls
     __builtin_eh_return.  */
-  if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN
+  if (AARCH64_PAUTH_SSP_OR_RA_SIGN
       && (for_sibcall || !TARGET_PAUTH || crtl->calls_eh_return))
     {
       insn = emit_insn (gen_auth_reg (gen_rtx_REG (Pmode, LR_REGNUM),
@@ -8737,6 +8746,14 @@ aarch64_override_options (void)
   if (aarch64_ra_sign_scope != AARCH64_FUNCTION_NONE && TARGET_ILP32)
     error ("Return address signing is only supported on LP64");
 
+  if (aarch64_stack_protector_dialect == STACK_PROTECTOR_PAUTH && TARGET_ILP32)
+    error ("Pointer authentication based -fstack-protector is only supported "
+           "on LP64.");
+
+  if (aarch64_stack_protector_dialect == STACK_PROTECTOR_PAUTH && !TARGET_PAUTH)
+    error ("Pointer authentication based -fstack-protector is only supported "
+           "on architecture with pointer authentication extension.");
+
   /* Make sure we properly set up the explicit options.  */
   if ((aarch64_cpu_string && valid_cpu)
       || (aarch64_tune_string && valid_tune))
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 754ea00d2f3027f0d4c57e1a2c1ea06d35135259..4fd94d23b7570fbcdb931e1e0a03257088c42955 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -581,9 +581,7 @@
   [(return)]
   ""
   {
-    if (AARCH64_ENABLE_RETURN_ADDRESS_SIGN
-        && TARGET_PAUTH
-        && !crtl->calls_eh_return)
+    if (AARCH64_PAUTH_SSP_OR_RA_SIGN && TARGET_PAUTH && !crtl->calls_eh_return)
       {
         if (aarch64_pauth_key == AARCH64_PAUTH_IKEY_A)
           return "retaa";
diff --git a/gcc/config/aarch64/aarch64.opt b/gcc/config/aarch64/aarch64.opt
index 0b172021dbba918ecc5a7d953cfcbdf8edfe0b5f..781f68f6b7789c386c4e2a21fef797ea8fc48810 100644
--- a/gcc/config/aarch64/aarch64.opt
+++ b/gcc/config/aarch64/aarch64.opt
@@ -198,3 +198,17 @@ Common Var(flag_mlow_precision_div) Optimization
 Enable the division approximation.  Enabling this reduces
 precision of division results to about 16 bits for
 single precision and to 32 bits for double precision.
+
+Enum
+Name(stack_protector_type) Type(enum aarch64_stack_protector_type)
+The possible stack protector dialects:
+
+EnumValue
+Enum(stack_protector_type) String(trad) Value(STACK_PROTECTOR_TRAD)
+
+EnumValue
+Enum(stack_protector_type) String(pauth) Value(STACK_PROTECTOR_PAUTH)
+
+mstack-protector-dialect=
+Target RejectNegative Joined Enum(stack_protector_type) Var(aarch64_stack_protector_dialect) Init(STACK_PROTECTOR_TRAD) Save
+Specify stack protector dialect.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index e7f842f9d21bd4e94eec9955851d8632837ba2c3..32c5cee54c8897bfee3d135a548fa70ed555e1fe 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -13357,6 +13357,12 @@ Select the key used for return address signing.
 Permissible values are @samp{a_key} for A key and @samp{b_key} for B key.
 @samp{a_key} is the default value.
 
+@item -mstack-protector-dialect=@var{dialect}
+@opindex mstack-protector-dialect
+Select the dialect for GCC -fstack-protector.  @samp{trad} for GCC default
+implementation and @samp{pauth} for pointer authentication accelerated
+implementation for AArch64 LP64.
+
 @end table
 
 @subsubsection @option{-march} and @option{-mcpu} Feature Modifiers
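As a usage note (not part of this patch): a DejaGnu-style testcase along the
following lines could exercise the new dialect.  This is only a sketch; the
dg- options are an assumption, and whichever -march/-mcpu setting enables the
pointer authentication extension would presumably need to be added as well.

  /* Hypothetical testcase sketch, not included in this patch.  */
  /* { dg-do compile } */
  /* { dg-options "-O2 -fstack-protector-all -mstack-protector-dialect=pauth" } */

  char global[32];

  void
  foo (void)
  {
    char buf[32];

    __builtin_memcpy (buf, global, sizeof buf);
    __builtin_memcpy (global, buf, sizeof buf);
  }

  /* With the pauth dialect no reference to the libssp guard is expected.  */
  /* { dg-final { scan-assembler-not "__stack_chk_guard" } } */
  /* { dg-final { scan-assembler-not "__stack_chk_fail" } } */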