From patchwork Fri Aug 18 08:10:32 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 110362
From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: Add a full_integral_type_p helper function
Date: Fri, 18 Aug 2017 09:10:32 +0100
Message-ID: <87mv6xmh3b.fsf@linaro.org>

There are several places that test whether:

  TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t))

for some integer type T.  With SVE variable-length modes, this would
need to become:

  TYPE_PRECISION (t) == GET_MODE_PRECISION (SCALAR_TYPE_MODE (t))

(or SCALAR_INT_TYPE_MODE, it doesn't matter which in this case).
But rather than add the "SCALAR_" everywhere, it seemed neater to
introduce a new helper function that tests whether T is an integral
type that has the same number of bits as its underlying mode.  This
patch does that, calling it full_integral_type_p.

It isn't possible to use TYPE_MODE in tree.h because vector_type_mode
is defined in stor-layout.h, so for now the function accesses the mode
field directly.  After the 77-patch machine_mode series (thanks again
Jeff for the reviews) it would use SCALAR_TYPE_MODE instead.
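To make the "precision narrower than the mode" case concrete, here is
a minimal C-level sketch (illustrative only, not part of the patch;
the exact modes and precisions are target-dependent):

  /* For s.f below, the field's type has TYPE_PRECISION 3 but is laid
     out in a wider containing mode (e.g. an 8-bit QImode-class mode),
     so full_integral_type_p would be false for it.  A plain
     "unsigned int" (precision 32 matching a 32-bit SImode on typical
     targets) would pass.  */
  struct s
  {
    unsigned int f : 3;	/* precision 3, mode wider than 3 bits */
  };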
Of the changes that didn't previously have an INTEGRAL_TYPE_P check:

- for fold_single_bit_test_into_sign_test it is obvious from the
  integer_foop tests that this is restricted to integral types.

- vect_recog_vector_vector_shift_pattern is inherently restricted
  to integral types.

- the register_edge_assert_for_2 hunk is dominated by:

    TREE_CODE (val) == INTEGER_CST

- the ubsan_instrument_shift hunk is preceded by an early exit:

    if (!INTEGRAL_TYPE_P (type0))
      return NULL_TREE;

- the second and third match.pd hunks are from:

    /* Fold (X << C1) & C2 into (X << C1) & (C2 | ((1 << C1) - 1))
       (X >> C1) & C2 into (X >> C1) & (C2 | ~((type) -1 >> C1))
       if the new mask might be further optimized.  */

I'm a bit confused about:

  /* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
     when profitable.
     For bitwise binary operations apply operand conversions to the
     binary operation result instead of to the operands.  This allows
     to combine successive conversions and bitwise binary operations.
     We combine the above two cases by using a conditional convert.  */
  (for bitop (bit_and bit_ior bit_xor)
   (simplify
    (bitop (convert @0) (convert? @1))
    (if (((TREE_CODE (@1) == INTEGER_CST
	   && INTEGRAL_TYPE_P (TREE_TYPE (@0))
	   && int_fits_type_p (@1, TREE_TYPE (@0)))
	  || types_match (@0, @1))
	 /* ???  This transform conflicts with fold-const.c doing
	    Convert (T)(x & c) into (T)x & (T)c, if c is an integer
	    constants (if x has signed type, the sign bit cannot be set
	    in c).  This folds extension into the BIT_AND_EXPR.
	    Restrict it to GIMPLE to avoid endless recursions.  */
	 && (bitop != BIT_AND_EXPR || GIMPLE)
	 && (/* That's a good idea if the conversion widens the operand, thus
		after hoisting the conversion the operation will be narrower.  */
	     TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
	     /* It's also a good idea if the conversion is to a non-integer
		mode.  */
	     || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
	     /* Or if the precision of TO is not the same as the precision
		of its mode.  */
	     || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
     (convert (bitop @0 (convert @1))))))

though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
rely on @0 and @1 being integral (although conversions from float would
use FLOAT_EXPR), but then what is:

    /* It's also a good idea if the conversion is to a non-integer
       mode.  */
    || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT

letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those it
would be better to apply the scalar rules to the element type.

Either way, having allowed all non-INT modes, using full_integral_type_p
for the remaining condition seems correct.  If the feeling is that this
isn't a useful abstraction, I can just update each site individually to
cope with variable-sized modes.
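For reference, this is the kind of source-level rewrite the pattern
quoted above performs (a sketch only; the concrete types are just an
example):

  /* With 8-bit unsigned char operands widened to 32-bit int, the
     conversion widens the operands, so the pattern hoists it past
     the bitwise operation:

       (int) a & (int) b  -->  (int) (a & b)

     leaving the AND to happen in the narrower type.  */
  int
  hoist_example (unsigned char a, unsigned char b)
  {
    return (int) a & (int) b;
  }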
Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard


2017-08-18  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* tree.h (scalar_type_is_full_p): New function.
	(full_integral_type_p): Likewise.
	* fold-const.c (fold_single_bit_test_into_sign_test): Use
	full_integral_type_p.
	* match.pd: Likewise.
	* tree-ssa-forwprop.c (simplify_rotate): Likewise.
	* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
	(vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
	(adjust_bool_pattern): Likewise.
	* tree-vrp.c (register_edge_assert_for_2): Likewise.
	* ubsan.c (instrument_si_overflow): Likewise.

gcc/c-family/
	* c-ubsan.c (ubsan_instrument_shift): Use full_integral_type_p.

diff --git a/gcc/c-family/c-ubsan.c b/gcc/c-family/c-ubsan.c
index b1386db..20f78e7 100644
--- a/gcc/c-family/c-ubsan.c
+++ b/gcc/c-family/c-ubsan.c
@@ -131,8 +131,8 @@ ubsan_instrument_shift (location_t loc, enum tree_code code,
 
   /* If this is not a signed operation, don't perform overflow checks.
      Also punt on bit-fields.  */
-  if (TYPE_OVERFLOW_WRAPS (type0)
-      || GET_MODE_BITSIZE (TYPE_MODE (type0)) != TYPE_PRECISION (type0)
+  if (!full_integral_type_p (type0)
+      || TYPE_OVERFLOW_WRAPS (type0)
       || !sanitize_flags_p (SANITIZE_SHIFT_BASE))
     ;
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 0a5b168..1985a14 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -6672,8 +6672,7 @@ fold_single_bit_test_into_sign_test (location_t loc,
       if (arg00 != NULL_TREE
	  /* This is only a win if casting to a signed type is cheap,
	     i.e. when arg00's type is not a partial mode.  */
-	  && TYPE_PRECISION (TREE_TYPE (arg00))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
+	  && full_integral_type_p (TREE_TYPE (arg00)))
	{
	  tree stype = signed_type_for (TREE_TYPE (arg00));
	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
diff --git a/gcc/match.pd b/gcc/match.pd
index 0e36f46..9ad9930 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -992,7 +992,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
	   /* Or if the precision of TO is not the same as the
	      precision of its mode.  */
-	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
+	   || !full_integral_type_p (type)))
    (convert (bitop @0 (convert @1))))))
 
 (for bitop (bit_and bit_ior)
@@ -1920,8 +1920,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
	 if (shift == LSHIFT_EXPR)
	   zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
	 else if (shift == RSHIFT_EXPR
-		  && (TYPE_PRECISION (shift_type)
-		      == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
+		  && full_integral_type_p (shift_type))
	   {
	     prec = TYPE_PRECISION (TREE_TYPE (@3));
	     tree arg00 = @0;
@@ -1931,8 +1930,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
		 && TYPE_UNSIGNED (TREE_TYPE (@0)))
	       {
		 tree inner_type = TREE_TYPE (@0);
-		 if ((TYPE_PRECISION (inner_type)
-		      == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
+		 if (full_integral_type_p (inner_type)
		     && TYPE_PRECISION (inner_type) < prec)
		   {
		     prec = TYPE_PRECISION (inner_type);
@@ -3225,9 +3223,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
      ncmp (ge lt)
  (simplify
   (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
-  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
+  (if (full_integral_type_p (TREE_TYPE (@0))
       && element_precision (@2) >= element_precision (@0)
       && wi::only_sign_bit_p (@1, element_precision (@0)))
   (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
@@ -4021,19 +4017,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (plus minus)
  (simplify
   (convert (op:s (convert@2 @0) (convert?@3 @1)))
-  (if (INTEGRAL_TYPE_P (type)
-       /* We check for type compatibility between @0 and @1 below,
-	  so there's no need to check that @1/@3 are integral types.  */
-       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
       /* The precision of the type of each operand must match the
	  precision of the mode of each operand, similarly for the
	  result.  */
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-       && (TYPE_PRECISION (TREE_TYPE (@1))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+       && full_integral_type_p (TREE_TYPE (@0))
+       && full_integral_type_p (TREE_TYPE (@1))
+       && full_integral_type_p (type)
       /* The inner conversion must be a widening conversion.  */
       && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
       && types_match (@0, type)
@@ -4055,19 +4045,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (minus plus)
  (simplify
   (bit_and (op:s (convert@2 @0) (convert@3 @1)) INTEGER_CST@4)
-  (if (INTEGRAL_TYPE_P (type)
-       /* We check for type compatibility between @0 and @1 below,
-	  so there's no need to check that @1/@3 are integral types.  */
-       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
       /* The precision of the type of each operand must match the
	  precision of the mode of each operand, similarly for the
	  result.  */
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-       && (TYPE_PRECISION (TREE_TYPE (@1))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+       && full_integral_type_p (TREE_TYPE (@0))
+       && full_integral_type_p (TREE_TYPE (@1))
+       && full_integral_type_p (type)
       /* The inner conversion must be a widening conversion.  */
       && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
       && types_match (@0, @1)
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index 5719b99..20d5c86 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -1528,8 +1528,7 @@ simplify_rotate (gimple_stmt_iterator *gsi)
 
   /* Only create rotates in complete modes.  Other cases are not
      expanded properly.  */
-  if (!INTEGRAL_TYPE_P (rtype)
-      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
+  if (!full_integral_type_p (rtype))
    return false;
 
   for (i = 0; i < 2; i++)
@@ -1606,11 +1605,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
	  defcodefor_name (def_arg2[i], &cdef_code[i],
			   &cdef_arg1[i], &cdef_arg2[i]);
	  if (CONVERT_EXPR_CODE_P (cdef_code[i])
-	      && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
+	      && full_integral_type_p (TREE_TYPE (cdef_arg1[i]))
	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
	    {
	      def_arg2_alt[i] = cdef_arg1[i];
	      defcodefor_name (def_arg2_alt[i], &cdef_code[i],
@@ -1636,11 +1633,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
	    }
	  defcodefor_name (cdef_arg2[i], &code, &tem, NULL);
	  if (CONVERT_EXPR_CODE_P (code)
-	      && INTEGRAL_TYPE_P (TREE_TYPE (tem))
+	      && full_integral_type_p (TREE_TYPE (tem))
	      && TYPE_PRECISION (TREE_TYPE (tem))
		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
	      && (tem == def_arg2[1 - i]
		  || tem == def_arg2_alt[1 - i]))
	    {
@@ -1664,11 +1659,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
	  defcodefor_name (cdef_arg1[i], &code, &tem, NULL);
	  if (CONVERT_EXPR_CODE_P (code)
-	      && INTEGRAL_TYPE_P (TREE_TYPE (tem))
+	      && full_integral_type_p (TREE_TYPE (tem))
	      && TYPE_PRECISION (TREE_TYPE (tem))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
	    defcodefor_name (tem, &code, &tem, NULL);
 
	  if (code == NEGATE_EXPR)
@@ -1680,11 +1673,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
	    }
	  defcodefor_name (tem, &code, &tem, NULL);
	  if (CONVERT_EXPR_CODE_P (code)
-	      && INTEGRAL_TYPE_P (TREE_TYPE (tem))
+	      && full_integral_type_p (TREE_TYPE (tem))
	      && TYPE_PRECISION (TREE_TYPE (tem))
		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
	      && (tem == def_arg2[1 - i]
		  || tem == def_arg2_alt[1 - i]))
	    {
diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
index 17d1083..a96f784 100644
--- a/gcc/tree-vect-patterns.c
+++ b/gcc/tree-vect-patterns.c
@@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != SSA_NAME
       || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
-      || TYPE_PRECISION (TREE_TYPE (oprnd1))
-	 != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
+      || !full_integral_type_p (TREE_TYPE (oprnd1))
      || TYPE_PRECISION (TREE_TYPE (lhs))
	 != TYPE_PRECISION (TREE_TYPE (oprnd0)))
    return NULL;
@@ -2469,8 +2468,7 @@ vect_recog_mult_pattern (vec<gimple *> *stmts,
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || !INTEGRAL_TYPE_P (itype)
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
    return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -2584,8 +2582,7 @@ vect_recog_divmod_pattern (vec<gimple *> *stmts,
   itype = TREE_TYPE (oprnd0);
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || TREE_CODE (itype) != INTEGER_TYPE
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
    return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -3385,9 +3382,8 @@ adjust_bool_pattern (tree var, tree out_type,
     do_compare:
       gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
       if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
-	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
-	  || (TYPE_PRECISION (TREE_TYPE (rhs1))
-	      != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
+	  || !full_integral_type_p (TREE_TYPE (rhs1))
+	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
	{
	  machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
	  itype
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 657a8d1..cfa9a97 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -5245,7 +5245,7 @@ register_edge_assert_for_2 (tree name, edge e,
	  && tree_fits_uhwi_p (cst2)
	  && INTEGRAL_TYPE_P (TREE_TYPE (name2))
	  && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
-	  && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
+	  && full_integral_type_p (TREE_TYPE (val)))
	{
	  mask = wi::mask (tree_to_uhwi (cst2), false, prec);
	  val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
diff --git a/gcc/tree.h b/gcc/tree.h
index 46debc1..237f234 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -5394,4 +5394,25 @@ struct builtin_structptr_type
   const char *str;
 };
 extern const builtin_structptr_type builtin_structptr_types[6];
+
+/* Return true if the mode underlying scalar type T has the same number
+   of bits as T does.  Examples of when this is false include bitfields
+   that are narrower than the mode that contains them.  */
+
+inline bool
+scalar_type_is_full_p (const_tree t)
+{
+  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
+	  == TYPE_PRECISION (t));
+}
+
+/* Return true if T is an integral type that has the same number of bits
+   as its underlying mode.  */
+
+inline bool
+full_integral_type_p (const_tree t)
+{
+  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
+}
+
 #endif /* GCC_TREE_H */
diff --git a/gcc/ubsan.c b/gcc/ubsan.c
index 49e38fa..40f5f3e 100644
--- a/gcc/ubsan.c
+++ b/gcc/ubsan.c
@@ -1582,9 +1582,8 @@ instrument_si_overflow (gimple_stmt_iterator gsi)
   /* If this is not a signed operation, don't instrument anything here.
      Also punt on bit-fields.  */
-  if (!INTEGRAL_TYPE_P (lhsinner)
-      || TYPE_OVERFLOW_WRAPS (lhsinner)
-      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
+  if (!full_integral_type_p (lhsinner)
+      || TYPE_OVERFLOW_WRAPS (lhsinner))
    return;
 
   switch (code)
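As a closing illustration of the predicate itself, here is a
stand-alone model (purely illustrative; the struct and the numbers
below are not GCC internals, just a sketch of the
precision-versus-mode check under typical target assumptions):

  #include <stdbool.h>
  #include <stdio.h>

  /* Models just enough of a GCC type for the predicate: whether it
     is integral, its TYPE_PRECISION, and the precision of the mode
     that carries it.  */
  struct type_model
  {
    bool is_integral;
    unsigned precision;
    unsigned mode_bits;
  };

  /* Mirrors full_integral_type_p: integral, and the type uses every
     bit of its underlying mode.  */
  static bool
  model_full_integral_type_p (const struct type_model *t)
  {
    return t->is_integral && t->precision == t->mode_bits;
  }

  int
  main (void)
  {
    struct type_model int32 = { true, 32, 32 };  /* full */
    struct type_model boolean = { true, 1, 8 };  /* bool in an 8-bit mode: not full */
    struct type_model bf3 = { true, 3, 8 };      /* 3-bit bit-field: not full */

    printf ("%d %d %d\n",
	    model_full_integral_type_p (&int32),
	    model_full_integral_type_p (&boolean),
	    model_full_integral_type_p (&bf3));
    return 0;
  }

Compiled and run, this prints "1 0 0": only the type whose precision
fills its mode passes, which is exactly the case the sites patched
above want to restrict themselves to.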