From patchwork Mon Oct 23 17:06:13 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 116759
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [015/nnn] poly_int: ao_ref and vn_reference_op_t
References: <871sltvm7r.fsf@linaro.org>
Date: Mon, 23 Oct 2017 18:06:13 +0100
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of
 "Mon, 23 Oct 2017 17:54:32 +0100")
Message-ID: <873769ssje.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0

This patch changes the offset, size and max_size fields of ao_ref from
HOST_WIDE_INT to poly_int64 and propagates the change through the code
that references it.  This includes changing the off field of
vn_reference_op_struct in the same way.

2017-10-23  Richard Sandiford
            Alan Hayward
            David Sherwood

gcc/
        * inchash.h (inchash::hash::add_poly_int): New function.
        * tree-ssa-alias.h (ao_ref::offset, ao_ref::size, ao_ref::max_size):
        Use poly_int64 rather than HOST_WIDE_INT.
        (ao_ref::max_size_known_p): New function.
        * tree-ssa-sccvn.h (vn_reference_op_struct::off): Use poly_int64_pod
        rather than HOST_WIDE_INT.
        * tree-ssa-alias.c (ao_ref_base): Apply get_ref_base_and_extent
        to temporaries until its interface is adjusted to match.
        (ao_ref_init_from_ptr_and_size): Handle polynomial offsets and sizes.
        (aliasing_component_refs_p, decl_refs_may_alias_p)
        (indirect_ref_may_alias_decl_p, indirect_refs_may_alias_p): Take
        the offsets and max_sizes as poly_int64s instead of HOST_WIDE_INTs.
        (refs_may_alias_p_1, stmt_kills_ref_p): Adjust for changes to
        ao_ref fields.
        * alias.c (ao_ref_from_mem): Likewise.
        * tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
        * tree-ssa-dse.c (valid_ao_ref_for_dse, normalize_ref)
        (clear_bytes_written_by, setup_live_bytes_from_ref, compute_trims)
        (maybe_trim_complex_store, maybe_trim_constructor_store)
        (live_bytes_read, dse_classify_store): Likewise.
        * tree-ssa-sccvn.c (vn_reference_compute_hash, vn_reference_eq)
        (copy_reference_ops_from_ref, ao_ref_init_from_vn_reference)
        (fully_constant_vn_reference_p, valueize_refs_1): Likewise.
        (vn_reference_lookup_3): Likewise.

Index: gcc/inchash.h
===================================================================
--- gcc/inchash.h	2017-10-23 17:01:43.314993320 +0100
+++ gcc/inchash.h	2017-10-23 17:01:52.303181137 +0100
@@ -57,6 +57,14 @@ hashval_t iterative_hash_hashval_t (hash
     val = iterative_hash_hashval_t (v, val);
   }
 
+  /* Add polynomial value V, treating each element as an unsigned int.  */
+  template<unsigned int N, typename T>
+  void add_poly_int (const poly_int_pod<N, T> &v)
+  {
+    for (unsigned int i = 0; i < N; ++i)
+      add_int (v.coeffs[i]);
+  }
+
   /* Add HOST_WIDE_INT value V.  */
   void add_hwi (HOST_WIDE_INT v)
   {
Index: gcc/tree-ssa-alias.h
===================================================================
--- gcc/tree-ssa-alias.h	2017-10-23 16:52:20.058356365 +0100
+++ gcc/tree-ssa-alias.h	2017-10-23 17:01:52.304179714 +0100
@@ -80,11 +80,11 @@ struct ao_ref
      the following fields are not yet computed.  */
   tree base;
   /* The offset relative to the base.  */
-  HOST_WIDE_INT offset;
+  poly_int64 offset;
   /* The size of the access.  */
-  HOST_WIDE_INT size;
+  poly_int64 size;
   /* The maximum possible extent of the access or -1 if unconstrained.  */
-  HOST_WIDE_INT max_size;
+  poly_int64 max_size;

   /* The alias set of the access or -1 if not yet computed.
*/ alias_set_type ref_alias_set; @@ -94,8 +94,18 @@ struct ao_ref /* Whether the memory is considered a volatile access. */ bool volatile_p; + + bool max_size_known_p () const; }; +/* Return true if the maximum size is known, rather than the special -1 + marker. */ + +inline bool +ao_ref::max_size_known_p () const +{ + return known_size_p (max_size); +} /* In tree-ssa-alias.c */ extern void ao_ref_init (ao_ref *, tree); Index: gcc/tree-ssa-sccvn.h =================================================================== --- gcc/tree-ssa-sccvn.h 2017-10-23 16:52:20.058356365 +0100 +++ gcc/tree-ssa-sccvn.h 2017-10-23 17:01:52.305178291 +0100 @@ -93,7 +93,7 @@ typedef struct vn_reference_op_struct /* For storing TYPE_ALIGN for array ref element size computation. */ unsigned align : 6; /* Constant offset this op adds or -1 if it is variable. */ - HOST_WIDE_INT off; + poly_int64_pod off; tree type; tree op0; tree op1; Index: gcc/tree-ssa-alias.c =================================================================== --- gcc/tree-ssa-alias.c 2017-10-23 17:01:51.044974644 +0100 +++ gcc/tree-ssa-alias.c 2017-10-23 17:01:52.304179714 +0100 @@ -635,11 +635,15 @@ ao_ref_init (ao_ref *r, tree ref) ao_ref_base (ao_ref *ref) { bool reverse; + HOST_WIDE_INT offset, size, max_size; if (ref->base) return ref->base; - ref->base = get_ref_base_and_extent (ref->ref, &ref->offset, &ref->size, - &ref->max_size, &reverse); + ref->base = get_ref_base_and_extent (ref->ref, &offset, &size, + &max_size, &reverse); + ref->offset = offset; + ref->size = size; + ref->max_size = max_size; return ref->base; } @@ -679,7 +683,8 @@ ao_ref_alias_set (ao_ref *ref) void ao_ref_init_from_ptr_and_size (ao_ref *ref, tree ptr, tree size) { - HOST_WIDE_INT t, size_hwi, extra_offset = 0; + HOST_WIDE_INT t; + poly_int64 size_hwi, extra_offset = 0; ref->ref = NULL_TREE; if (TREE_CODE (ptr) == SSA_NAME) { @@ -689,11 +694,10 @@ ao_ref_init_from_ptr_and_size (ao_ref *r ptr = gimple_assign_rhs1 (stmt); else if (is_gimple_assign (stmt) && gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR - && TREE_CODE (gimple_assign_rhs2 (stmt)) == INTEGER_CST) + && ptrdiff_tree_p (gimple_assign_rhs2 (stmt), &extra_offset)) { ptr = gimple_assign_rhs1 (stmt); - extra_offset = BITS_PER_UNIT - * int_cst_value (gimple_assign_rhs2 (stmt)); + extra_offset *= BITS_PER_UNIT; } } @@ -717,8 +721,8 @@ ao_ref_init_from_ptr_and_size (ao_ref *r } ref->offset += extra_offset; if (size - && tree_fits_shwi_p (size) - && (size_hwi = tree_to_shwi (size)) <= HOST_WIDE_INT_MAX / BITS_PER_UNIT) + && poly_int_tree_p (size, &size_hwi) + && coeffs_in_range_p (size_hwi, 0, HOST_WIDE_INT_MAX / BITS_PER_UNIT)) ref->max_size = ref->size = size_hwi * BITS_PER_UNIT; else ref->max_size = ref->size = -1; @@ -779,11 +783,11 @@ same_type_for_tbaa (tree type1, tree typ aliasing_component_refs_p (tree ref1, alias_set_type ref1_alias_set, alias_set_type base1_alias_set, - HOST_WIDE_INT offset1, HOST_WIDE_INT max_size1, + poly_int64 offset1, poly_int64 max_size1, tree ref2, alias_set_type ref2_alias_set, alias_set_type base2_alias_set, - HOST_WIDE_INT offset2, HOST_WIDE_INT max_size2, + poly_int64 offset2, poly_int64 max_size2, bool ref2_is_decl) { /* If one reference is a component references through pointers try to find a @@ -825,7 +829,7 @@ aliasing_component_refs_p (tree ref1, offset2 -= offadj; get_ref_base_and_extent (base1, &offadj, &sztmp, &msztmp, &reverse); offset1 -= offadj; - return ranges_overlap_p (offset1, max_size1, offset2, max_size2); + return ranges_may_overlap_p (offset1, max_size1, 
offset2, max_size2); } /* If we didn't find a common base, try the other way around. */ refp = &ref1; @@ -844,7 +848,7 @@ aliasing_component_refs_p (tree ref1, offset1 -= offadj; get_ref_base_and_extent (base2, &offadj, &sztmp, &msztmp, &reverse); offset2 -= offadj; - return ranges_overlap_p (offset1, max_size1, offset2, max_size2); + return ranges_may_overlap_p (offset1, max_size1, offset2, max_size2); } /* If we have two type access paths B1.path1 and B2.path2 they may @@ -1090,9 +1094,9 @@ nonoverlapping_component_refs_p (const_t static bool decl_refs_may_alias_p (tree ref1, tree base1, - HOST_WIDE_INT offset1, HOST_WIDE_INT max_size1, + poly_int64 offset1, poly_int64 max_size1, tree ref2, tree base2, - HOST_WIDE_INT offset2, HOST_WIDE_INT max_size2) + poly_int64 offset2, poly_int64 max_size2) { gcc_checking_assert (DECL_P (base1) && DECL_P (base2)); @@ -1102,7 +1106,7 @@ decl_refs_may_alias_p (tree ref1, tree b /* If both references are based on the same variable, they cannot alias if the accesses do not overlap. */ - if (!ranges_overlap_p (offset1, max_size1, offset2, max_size2)) + if (!ranges_may_overlap_p (offset1, max_size1, offset2, max_size2)) return false; /* For components with variable position, the above test isn't sufficient, @@ -1124,12 +1128,11 @@ decl_refs_may_alias_p (tree ref1, tree b static bool indirect_ref_may_alias_decl_p (tree ref1 ATTRIBUTE_UNUSED, tree base1, - HOST_WIDE_INT offset1, - HOST_WIDE_INT max_size1 ATTRIBUTE_UNUSED, + poly_int64 offset1, poly_int64 max_size1, alias_set_type ref1_alias_set, alias_set_type base1_alias_set, tree ref2 ATTRIBUTE_UNUSED, tree base2, - HOST_WIDE_INT offset2, HOST_WIDE_INT max_size2, + poly_int64 offset2, poly_int64 max_size2, alias_set_type ref2_alias_set, alias_set_type base2_alias_set, bool tbaa_p) { @@ -1185,14 +1188,15 @@ indirect_ref_may_alias_decl_p (tree ref1 is bigger than the size of the decl we can't possibly access the decl via that pointer. */ if (DECL_SIZE (base2) && COMPLETE_TYPE_P (TREE_TYPE (ptrtype1)) - && TREE_CODE (DECL_SIZE (base2)) == INTEGER_CST - && TREE_CODE (TYPE_SIZE (TREE_TYPE (ptrtype1))) == INTEGER_CST + && poly_int_tree_p (DECL_SIZE (base2)) + && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (ptrtype1))) /* ??? This in turn may run afoul when a decl of type T which is a member of union type U is accessed through a pointer to type U and sizeof T is smaller than sizeof U. 
*/ && TREE_CODE (TREE_TYPE (ptrtype1)) != UNION_TYPE && TREE_CODE (TREE_TYPE (ptrtype1)) != QUAL_UNION_TYPE - && tree_int_cst_lt (DECL_SIZE (base2), TYPE_SIZE (TREE_TYPE (ptrtype1)))) + && must_lt (wi::to_poly_widest (DECL_SIZE (base2)), + wi::to_poly_widest (TYPE_SIZE (TREE_TYPE (ptrtype1))))) return false; if (!ref2) @@ -1203,8 +1207,8 @@ indirect_ref_may_alias_decl_p (tree ref1 dbase2 = ref2; while (handled_component_p (dbase2)) dbase2 = TREE_OPERAND (dbase2, 0); - HOST_WIDE_INT doffset1 = offset1; - offset_int doffset2 = offset2; + poly_int64 doffset1 = offset1; + poly_offset_int doffset2 = offset2; if (TREE_CODE (dbase2) == MEM_REF || TREE_CODE (dbase2) == TARGET_MEM_REF) doffset2 -= mem_ref_offset (dbase2) << LOG2_BITS_PER_UNIT; @@ -1252,11 +1256,11 @@ indirect_ref_may_alias_decl_p (tree ref1 static bool indirect_refs_may_alias_p (tree ref1 ATTRIBUTE_UNUSED, tree base1, - HOST_WIDE_INT offset1, HOST_WIDE_INT max_size1, + poly_int64 offset1, poly_int64 max_size1, alias_set_type ref1_alias_set, alias_set_type base1_alias_set, tree ref2 ATTRIBUTE_UNUSED, tree base2, - HOST_WIDE_INT offset2, HOST_WIDE_INT max_size2, + poly_int64 offset2, poly_int64 max_size2, alias_set_type ref2_alias_set, alias_set_type base2_alias_set, bool tbaa_p) { @@ -1330,7 +1334,7 @@ indirect_refs_may_alias_p (tree ref1 ATT /* But avoid treating arrays as "objects", instead assume they can overlap by an exact multiple of their element size. */ && TREE_CODE (TREE_TYPE (ptrtype1)) != ARRAY_TYPE) - return ranges_overlap_p (offset1, max_size1, offset2, max_size2); + return ranges_may_overlap_p (offset1, max_size1, offset2, max_size2); /* Do type-based disambiguation. */ if (base1_alias_set != base2_alias_set @@ -1365,8 +1369,8 @@ indirect_refs_may_alias_p (tree ref1 ATT refs_may_alias_p_1 (ao_ref *ref1, ao_ref *ref2, bool tbaa_p) { tree base1, base2; - HOST_WIDE_INT offset1 = 0, offset2 = 0; - HOST_WIDE_INT max_size1 = -1, max_size2 = -1; + poly_int64 offset1 = 0, offset2 = 0; + poly_int64 max_size1 = -1, max_size2 = -1; bool var1_p, var2_p, ind1_p, ind2_p; gcc_checking_assert ((!ref1->ref @@ -2444,14 +2448,17 @@ stmt_kills_ref_p (gimple *stmt, ao_ref * handling constant offset and size. */ /* For a must-alias check we need to be able to constrain the access properly. */ - if (ref->max_size == -1) + if (!ref->max_size_known_p ()) return false; - HOST_WIDE_INT size, offset, max_size, ref_offset = ref->offset; + HOST_WIDE_INT size, max_size, const_offset; + poly_int64 ref_offset = ref->offset; bool reverse; tree base - = get_ref_base_and_extent (lhs, &offset, &size, &max_size, &reverse); + = get_ref_base_and_extent (lhs, &const_offset, &size, &max_size, + &reverse); /* We can get MEM[symbol: sZ, index: D.8862_1] here, so base == ref->base does not always hold. */ + poly_int64 offset = const_offset; if (base != ref->base) { /* Try using points-to info. 
*/ @@ -2468,18 +2475,13 @@ stmt_kills_ref_p (gimple *stmt, ao_ref * if (!tree_int_cst_equal (TREE_OPERAND (base, 1), TREE_OPERAND (ref->base, 1))) { - offset_int off1 = mem_ref_offset (base); + poly_offset_int off1 = mem_ref_offset (base); off1 <<= LOG2_BITS_PER_UNIT; off1 += offset; - offset_int off2 = mem_ref_offset (ref->base); + poly_offset_int off2 = mem_ref_offset (ref->base); off2 <<= LOG2_BITS_PER_UNIT; off2 += ref_offset; - if (wi::fits_shwi_p (off1) && wi::fits_shwi_p (off2)) - { - offset = off1.to_shwi (); - ref_offset = off2.to_shwi (); - } - else + if (!off1.to_shwi (&offset) || !off2.to_shwi (&ref_offset)) size = -1; } } @@ -2488,12 +2490,9 @@ stmt_kills_ref_p (gimple *stmt, ao_ref * } /* For a must-alias check we need to be able to constrain the access properly. */ - if (size != -1 && size == max_size) - { - if (offset <= ref_offset - && offset + size >= ref_offset + ref->max_size) - return true; - } + if (size == max_size + && known_subrange_p (ref_offset, ref->max_size, offset, size)) + return true; } if (is_gimple_call (stmt)) @@ -2526,19 +2525,19 @@ stmt_kills_ref_p (gimple *stmt, ao_ref * { /* For a must-alias check we need to be able to constrain the access properly. */ - if (ref->max_size == -1) + if (!ref->max_size_known_p ()) return false; tree dest = gimple_call_arg (stmt, 0); tree len = gimple_call_arg (stmt, 2); - if (!tree_fits_shwi_p (len)) + if (!poly_int_tree_p (len)) return false; tree rbase = ref->base; - offset_int roffset = ref->offset; + poly_offset_int roffset = ref->offset; ao_ref dref; ao_ref_init_from_ptr_and_size (&dref, dest, len); tree base = ao_ref_base (&dref); - offset_int offset = dref.offset; - if (!base || dref.size == -1) + poly_offset_int offset = dref.offset; + if (!base || !known_size_p (dref.size)) return false; if (TREE_CODE (base) == MEM_REF) { @@ -2551,9 +2550,9 @@ stmt_kills_ref_p (gimple *stmt, ao_ref * rbase = TREE_OPERAND (rbase, 0); } if (base == rbase - && offset <= roffset - && (roffset + ref->max_size - <= offset + (wi::to_offset (len) << LOG2_BITS_PER_UNIT))) + && known_subrange_p (roffset, ref->max_size, offset, + wi::to_poly_offset (len) + << LOG2_BITS_PER_UNIT)) return true; break; } Index: gcc/alias.c =================================================================== --- gcc/alias.c 2017-10-23 16:52:20.058356365 +0100 +++ gcc/alias.c 2017-10-23 17:01:52.303181137 +0100 @@ -331,9 +331,9 @@ ao_ref_from_mem (ao_ref *ref, const_rtx /* If MEM_OFFSET/MEM_SIZE get us outside of ref->offset/ref->max_size drop ref->ref. */ if (MEM_OFFSET (mem) < 0 - || (ref->max_size != -1 - && ((MEM_OFFSET (mem) + MEM_SIZE (mem)) * BITS_PER_UNIT - > ref->max_size))) + || (ref->max_size_known_p () + && may_gt ((MEM_OFFSET (mem) + MEM_SIZE (mem)) * BITS_PER_UNIT, + ref->max_size))) ref->ref = NULL_TREE; /* Refine size and offset we got from analyzing MEM_EXPR by using @@ -344,19 +344,18 @@ ao_ref_from_mem (ao_ref *ref, const_rtx /* The MEM may extend into adjacent fields, so adjust max_size if necessary. */ - if (ref->max_size != -1 - && ref->size > ref->max_size) - ref->max_size = ref->size; + if (ref->max_size_known_p ()) + ref->max_size = upper_bound (ref->max_size, ref->size); - /* If MEM_OFFSET and MEM_SIZE get us outside of the base object of + /* If MEM_OFFSET and MEM_SIZE might get us outside of the base object of the MEM_EXPR punt. This happens for STRICT_ALIGNMENT targets a lot. 
*/ if (MEM_EXPR (mem) != get_spill_slot_decl (false) - && (ref->offset < 0 + && (may_lt (ref->offset, 0) || (DECL_P (ref->base) && (DECL_SIZE (ref->base) == NULL_TREE - || TREE_CODE (DECL_SIZE (ref->base)) != INTEGER_CST - || wi::ltu_p (wi::to_offset (DECL_SIZE (ref->base)), - ref->offset + ref->size))))) + || !poly_int_tree_p (DECL_SIZE (ref->base)) + || may_lt (wi::to_poly_offset (DECL_SIZE (ref->base)), + ref->offset + ref->size))))) return false; return true; Index: gcc/tree-ssa-dce.c =================================================================== --- gcc/tree-ssa-dce.c 2017-10-23 16:52:20.058356365 +0100 +++ gcc/tree-ssa-dce.c 2017-10-23 17:01:52.304179714 +0100 @@ -488,13 +488,9 @@ mark_aliased_reaching_defs_necessary_1 ( { /* For a must-alias check we need to be able to constrain the accesses properly. */ - if (size != -1 && size == max_size - && ref->max_size != -1) - { - if (offset <= ref->offset - && offset + size >= ref->offset + ref->max_size) - return true; - } + if (size == max_size + && known_subrange_p (ref->offset, ref->max_size, offset, size)) + return true; /* Or they need to be exactly the same. */ else if (ref->ref /* Make sure there is no induction variable involved Index: gcc/tree-ssa-dse.c =================================================================== --- gcc/tree-ssa-dse.c 2017-10-23 16:52:20.058356365 +0100 +++ gcc/tree-ssa-dse.c 2017-10-23 17:01:52.304179714 +0100 @@ -128,13 +128,12 @@ initialize_ao_ref_for_dse (gimple *stmt, valid_ao_ref_for_dse (ao_ref *ref) { return (ao_ref_base (ref) - && ref->max_size != -1 - && ref->size != 0 - && ref->max_size == ref->size - && ref->offset >= 0 - && (ref->offset % BITS_PER_UNIT) == 0 - && (ref->size % BITS_PER_UNIT) == 0 - && (ref->size != -1)); + && known_size_p (ref->max_size) + && maybe_nonzero (ref->size) + && must_eq (ref->max_size, ref->size) + && must_ge (ref->offset, 0) + && multiple_p (ref->offset, BITS_PER_UNIT) + && multiple_p (ref->size, BITS_PER_UNIT)); } /* Try to normalize COPY (an ao_ref) relative to REF. Essentially when we are @@ -144,25 +143,31 @@ valid_ao_ref_for_dse (ao_ref *ref) static bool normalize_ref (ao_ref *copy, ao_ref *ref) { + if (!ordered_p (copy->offset, ref->offset)) + return false; + /* If COPY starts before REF, then reset the beginning of COPY to match REF and decrease the size of COPY by the number of bytes removed from COPY. */ - if (copy->offset < ref->offset) + if (may_lt (copy->offset, ref->offset)) { - HOST_WIDE_INT diff = ref->offset - copy->offset; - if (copy->size <= diff) + poly_int64 diff = ref->offset - copy->offset; + if (may_le (copy->size, diff)) return false; copy->size -= diff; copy->offset = ref->offset; } - HOST_WIDE_INT diff = copy->offset - ref->offset; - if (ref->size <= diff) + poly_int64 diff = copy->offset - ref->offset; + if (may_le (ref->size, diff)) return false; /* If COPY extends beyond REF, chop off its size appropriately. */ - HOST_WIDE_INT limit = ref->size - diff; - if (copy->size > limit) + poly_int64 limit = ref->size - diff; + if (!ordered_p (limit, copy->size)) + return false; + + if (may_gt (copy->size, limit)) copy->size = limit; return true; } @@ -183,15 +188,15 @@ clear_bytes_written_by (sbitmap live_byt /* Verify we have the same base memory address, the write has a known size and overlaps with REF. 
*/ + HOST_WIDE_INT start, size; if (valid_ao_ref_for_dse (&write) && operand_equal_p (write.base, ref->base, OEP_ADDRESS_OF) - && write.size == write.max_size - && normalize_ref (&write, ref)) - { - HOST_WIDE_INT start = write.offset - ref->offset; - bitmap_clear_range (live_bytes, start / BITS_PER_UNIT, - write.size / BITS_PER_UNIT); - } + && must_eq (write.size, write.max_size) + && normalize_ref (&write, ref) + && (write.offset - ref->offset).is_constant (&start) + && write.size.is_constant (&size)) + bitmap_clear_range (live_bytes, start / BITS_PER_UNIT, + size / BITS_PER_UNIT); } /* REF is a memory write. Extract relevant information from it and @@ -201,12 +206,14 @@ clear_bytes_written_by (sbitmap live_byt static bool setup_live_bytes_from_ref (ao_ref *ref, sbitmap live_bytes) { + HOST_WIDE_INT const_size; if (valid_ao_ref_for_dse (ref) - && (ref->size / BITS_PER_UNIT + && ref->size.is_constant (&const_size) + && (const_size / BITS_PER_UNIT <= PARAM_VALUE (PARAM_DSE_MAX_OBJECT_SIZE))) { bitmap_clear (live_bytes); - bitmap_set_range (live_bytes, 0, ref->size / BITS_PER_UNIT); + bitmap_set_range (live_bytes, 0, const_size / BITS_PER_UNIT); return true; } return false; @@ -231,9 +238,15 @@ compute_trims (ao_ref *ref, sbitmap live the REF to compute the trims. */ /* Now identify how much, if any of the tail we can chop off. */ - int last_orig = (ref->size / BITS_PER_UNIT) - 1; - int last_live = bitmap_last_set_bit (live); - *trim_tail = (last_orig - last_live) & ~0x1; + HOST_WIDE_INT const_size; + if (ref->size.is_constant (&const_size)) + { + int last_orig = (const_size / BITS_PER_UNIT) - 1; + int last_live = bitmap_last_set_bit (live); + *trim_tail = (last_orig - last_live) & ~0x1; + } + else + *trim_tail = 0; /* Identify how much, if any of the head we can chop off. */ int first_orig = 0; @@ -267,7 +280,7 @@ maybe_trim_complex_store (ao_ref *ref, s least half the size of the object to ensure we're trimming the entire real or imaginary half. By writing things this way we avoid more O(n) bitmap operations. */ - if (trim_tail * 2 >= ref->size / BITS_PER_UNIT) + if (must_ge (trim_tail * 2 * BITS_PER_UNIT, ref->size)) { /* TREE_REALPART is live */ tree x = TREE_REALPART (gimple_assign_rhs1 (stmt)); @@ -276,7 +289,7 @@ maybe_trim_complex_store (ao_ref *ref, s gimple_assign_set_lhs (stmt, y); gimple_assign_set_rhs1 (stmt, x); } - else if (trim_head * 2 >= ref->size / BITS_PER_UNIT) + else if (must_ge (trim_head * 2 * BITS_PER_UNIT, ref->size)) { /* TREE_IMAGPART is live */ tree x = TREE_IMAGPART (gimple_assign_rhs1 (stmt)); @@ -326,7 +339,8 @@ maybe_trim_constructor_store (ao_ref *re return; /* The number of bytes for the new constructor. */ - int count = (ref->size / BITS_PER_UNIT) - head_trim - tail_trim; + poly_int64 ref_bytes = exact_div (ref->size, BITS_PER_UNIT); + poly_int64 count = ref_bytes - head_trim - tail_trim; /* And the new type for the CONSTRUCTOR. Essentially it's just a char array large enough to cover the non-trimmed parts of @@ -483,15 +497,15 @@ live_bytes_read (ao_ref use_ref, ao_ref { /* We have already verified that USE_REF and REF hit the same object. Now verify that there's actually an overlap between USE_REF and REF. 
*/ - if (normalize_ref (&use_ref, ref)) + HOST_WIDE_INT start, size; + if (normalize_ref (&use_ref, ref) + && (use_ref.offset - ref->offset).is_constant (&start) + && use_ref.size.is_constant (&size)) { - HOST_WIDE_INT start = use_ref.offset - ref->offset; - HOST_WIDE_INT size = use_ref.size; - /* If USE_REF covers all of REF, then it will hit one or more live bytes. This avoids useless iteration over the bitmap below. */ - if (start == 0 && size == ref->size) + if (start == 0 && must_eq (size, ref->size)) return true; /* Now check if any of the remaining bits in use_ref are set in LIVE. */ @@ -592,8 +606,8 @@ dse_classify_store (ao_ref *ref, gimple ao_ref use_ref; ao_ref_init (&use_ref, gimple_assign_rhs1 (use_stmt)); if (valid_ao_ref_for_dse (&use_ref) - && use_ref.base == ref->base - && use_ref.size == use_ref.max_size + && must_eq (use_ref.base, ref->base) + && must_eq (use_ref.size, use_ref.max_size) && !live_bytes_read (use_ref, ref, live_bytes)) { /* If this statement has a VDEF, then it is the Index: gcc/tree-ssa-sccvn.c =================================================================== --- gcc/tree-ssa-sccvn.c 2017-10-23 16:52:20.058356365 +0100 +++ gcc/tree-ssa-sccvn.c 2017-10-23 17:01:52.305178291 +0100 @@ -547,7 +547,7 @@ vn_reference_compute_hash (const vn_refe hashval_t result; int i; vn_reference_op_t vro; - HOST_WIDE_INT off = -1; + poly_int64 off = -1; bool deref = false; FOR_EACH_VEC_ELT (vr1->operands, i, vro) @@ -556,17 +556,17 @@ vn_reference_compute_hash (const vn_refe deref = true; else if (vro->opcode != ADDR_EXPR) deref = false; - if (vro->off != -1) + if (may_ne (vro->off, -1)) { - if (off == -1) + if (must_eq (off, -1)) off = 0; off += vro->off; } else { - if (off != -1 - && off != 0) - hstate.add_int (off); + if (may_ne (off, -1) + && may_ne (off, 0)) + hstate.add_poly_int (off); off = -1; if (deref && vro->opcode == ADDR_EXPR) @@ -632,7 +632,7 @@ vn_reference_eq (const_vn_reference_t co j = 0; do { - HOST_WIDE_INT off1 = 0, off2 = 0; + poly_int64 off1 = 0, off2 = 0; vn_reference_op_t vro1, vro2; vn_reference_op_s tem1, tem2; bool deref1 = false, deref2 = false; @@ -643,7 +643,7 @@ vn_reference_eq (const_vn_reference_t co /* Do not look through a storage order barrier. */ else if (vro1->opcode == VIEW_CONVERT_EXPR && vro1->reverse) return false; - if (vro1->off == -1) + if (must_eq (vro1->off, -1)) break; off1 += vro1->off; } @@ -654,11 +654,11 @@ vn_reference_eq (const_vn_reference_t co /* Do not look through a storage order barrier. */ else if (vro2->opcode == VIEW_CONVERT_EXPR && vro2->reverse) return false; - if (vro2->off == -1) + if (must_eq (vro2->off, -1)) break; off2 += vro2->off; } - if (off1 != off2) + if (may_ne (off1, off2)) return false; if (deref1 && vro1->opcode == ADDR_EXPR) { @@ -784,24 +784,23 @@ copy_reference_ops_from_ref (tree ref, v { tree this_offset = component_ref_field_offset (ref); if (this_offset - && TREE_CODE (this_offset) == INTEGER_CST) + && poly_int_tree_p (this_offset)) { tree bit_offset = DECL_FIELD_BIT_OFFSET (TREE_OPERAND (ref, 1)); if (TREE_INT_CST_LOW (bit_offset) % BITS_PER_UNIT == 0) { - offset_int off - = (wi::to_offset (this_offset) + poly_offset_int off + = (wi::to_poly_offset (this_offset) + (wi::to_offset (bit_offset) >> LOG2_BITS_PER_UNIT)); - if (wi::fits_shwi_p (off) - /* Probibit value-numbering zero offset components - of addresses the same before the pass folding - __builtin_object_size had a chance to run - (checking cfun->after_inlining does the - trick here). 
*/ - && (TREE_CODE (orig) != ADDR_EXPR - || off != 0 - || cfun->after_inlining)) - temp.off = off.to_shwi (); + /* Probibit value-numbering zero offset components + of addresses the same before the pass folding + __builtin_object_size had a chance to run + (checking cfun->after_inlining does the + trick here). */ + if (TREE_CODE (orig) != ADDR_EXPR + || maybe_nonzero (off) + || cfun->after_inlining) + off.to_shwi (&temp.off); } } } @@ -820,16 +819,15 @@ copy_reference_ops_from_ref (tree ref, v if (! temp.op2) temp.op2 = size_binop (EXACT_DIV_EXPR, TYPE_SIZE_UNIT (eltype), size_int (TYPE_ALIGN_UNIT (eltype))); - if (TREE_CODE (temp.op0) == INTEGER_CST - && TREE_CODE (temp.op1) == INTEGER_CST + if (poly_int_tree_p (temp.op0) + && poly_int_tree_p (temp.op1) && TREE_CODE (temp.op2) == INTEGER_CST) { - offset_int off = ((wi::to_offset (temp.op0) - - wi::to_offset (temp.op1)) - * wi::to_offset (temp.op2) - * vn_ref_op_align_unit (&temp)); - if (wi::fits_shwi_p (off)) - temp.off = off.to_shwi(); + poly_offset_int off = ((wi::to_poly_offset (temp.op0) + - wi::to_poly_offset (temp.op1)) + * wi::to_offset (temp.op2) + * vn_ref_op_align_unit (&temp)); + off.to_shwi (&temp.off); } } break; @@ -918,9 +916,9 @@ ao_ref_init_from_vn_reference (ao_ref *r unsigned i; tree base = NULL_TREE; tree *op0_p = &base; - offset_int offset = 0; - offset_int max_size; - offset_int size = -1; + poly_offset_int offset = 0; + poly_offset_int max_size; + poly_offset_int size = -1; tree size_tree = NULL_TREE; alias_set_type base_alias_set = -1; @@ -936,11 +934,11 @@ ao_ref_init_from_vn_reference (ao_ref *r if (mode == BLKmode) size_tree = TYPE_SIZE (type); else - size = int (GET_MODE_BITSIZE (mode)); + size = GET_MODE_BITSIZE (mode); } if (size_tree != NULL_TREE - && TREE_CODE (size_tree) == INTEGER_CST) - size = wi::to_offset (size_tree); + && poly_int_tree_p (size_tree)) + size = wi::to_poly_offset (size_tree); /* Initially, maxsize is the same as the accessed element size. In the following it will only grow (or become -1). */ @@ -963,7 +961,7 @@ ao_ref_init_from_vn_reference (ao_ref *r { vn_reference_op_t pop = &ops[i-1]; base = TREE_OPERAND (op->op0, 0); - if (pop->off == -1) + if (must_eq (pop->off, -1)) { max_size = -1; offset = 0; @@ -1008,12 +1006,12 @@ ao_ref_init_from_vn_reference (ao_ref *r parts manually. */ tree this_offset = DECL_FIELD_OFFSET (field); - if (op->op1 || TREE_CODE (this_offset) != INTEGER_CST) + if (op->op1 || !poly_int_tree_p (this_offset)) max_size = -1; else { - offset_int woffset = (wi::to_offset (this_offset) - << LOG2_BITS_PER_UNIT); + poly_offset_int woffset = (wi::to_poly_offset (this_offset) + << LOG2_BITS_PER_UNIT); woffset += wi::to_offset (DECL_FIELD_BIT_OFFSET (field)); offset += woffset; } @@ -1023,14 +1021,15 @@ ao_ref_init_from_vn_reference (ao_ref *r case ARRAY_RANGE_REF: case ARRAY_REF: /* We recorded the lower bound and the element size. 
*/ - if (TREE_CODE (op->op0) != INTEGER_CST - || TREE_CODE (op->op1) != INTEGER_CST + if (!poly_int_tree_p (op->op0) + || !poly_int_tree_p (op->op1) || TREE_CODE (op->op2) != INTEGER_CST) max_size = -1; else { - offset_int woffset - = wi::sext (wi::to_offset (op->op0) - wi::to_offset (op->op1), + poly_offset_int woffset + = wi::sext (wi::to_poly_offset (op->op0) + - wi::to_poly_offset (op->op1), TYPE_PRECISION (TREE_TYPE (op->op0))); woffset *= wi::to_offset (op->op2) * vn_ref_op_align_unit (op); woffset <<= LOG2_BITS_PER_UNIT; @@ -1077,7 +1076,7 @@ ao_ref_init_from_vn_reference (ao_ref *r /* We discount volatiles from value-numbering elsewhere. */ ref->volatile_p = false; - if (!wi::fits_shwi_p (size) || wi::neg_p (size)) + if (!size.to_shwi (&ref->size) || may_lt (ref->size, 0)) { ref->offset = 0; ref->size = -1; @@ -1085,21 +1084,15 @@ ao_ref_init_from_vn_reference (ao_ref *r return true; } - ref->size = size.to_shwi (); - - if (!wi::fits_shwi_p (offset)) + if (!offset.to_shwi (&ref->offset)) { ref->offset = 0; ref->max_size = -1; return true; } - ref->offset = offset.to_shwi (); - - if (!wi::fits_shwi_p (max_size) || wi::neg_p (max_size)) + if (!max_size.to_shwi (&ref->max_size) || may_lt (ref->max_size, 0)) ref->max_size = -1; - else - ref->max_size = max_size.to_shwi (); return true; } @@ -1344,7 +1337,7 @@ fully_constant_vn_reference_p (vn_refere && (!INTEGRAL_TYPE_P (ref->type) || TYPE_PRECISION (ref->type) % BITS_PER_UNIT == 0)) { - HOST_WIDE_INT off = 0; + poly_int64 off = 0; HOST_WIDE_INT size; if (INTEGRAL_TYPE_P (ref->type)) size = TYPE_PRECISION (ref->type); @@ -1362,7 +1355,7 @@ fully_constant_vn_reference_p (vn_refere ++i; break; } - if (operands[i].off == -1) + if (must_eq (operands[i].off, -1)) return NULL_TREE; off += operands[i].off; if (operands[i].opcode == MEM_REF) @@ -1388,6 +1381,7 @@ fully_constant_vn_reference_p (vn_refere return build_zero_cst (ref->type); else if (ctor != error_mark_node) { + HOST_WIDE_INT const_off; if (decl) { tree res = fold_ctor_reference (ref->type, ctor, @@ -1400,10 +1394,10 @@ fully_constant_vn_reference_p (vn_refere return res; } } - else + else if (off.is_constant (&const_off)) { unsigned char buf[MAX_BITSIZE_MODE_ANY_MODE / BITS_PER_UNIT]; - int len = native_encode_expr (ctor, buf, size, off); + int len = native_encode_expr (ctor, buf, size, const_off); if (len > 0) return native_interpret_expr (ref->type, buf, len); } @@ -1495,17 +1489,16 @@ valueize_refs_1 (vec /* If it transforms a non-constant ARRAY_REF into a constant one, adjust the constant offset. 
*/ else if (vro->opcode == ARRAY_REF - && vro->off == -1 - && TREE_CODE (vro->op0) == INTEGER_CST - && TREE_CODE (vro->op1) == INTEGER_CST + && must_eq (vro->off, -1) + && poly_int_tree_p (vro->op0) + && poly_int_tree_p (vro->op1) && TREE_CODE (vro->op2) == INTEGER_CST) { - offset_int off = ((wi::to_offset (vro->op0) - - wi::to_offset (vro->op1)) - * wi::to_offset (vro->op2) - * vn_ref_op_align_unit (vro)); - if (wi::fits_shwi_p (off)) - vro->off = off.to_shwi (); + poly_offset_int off = ((wi::to_poly_offset (vro->op0) + - wi::to_poly_offset (vro->op1)) + * wi::to_offset (vro->op2) + * vn_ref_op_align_unit (vro)); + off.to_shwi (&vro->off); } } @@ -1821,10 +1814,11 @@ vn_reference_lookup_3 (ao_ref *ref, tree vn_reference_t vr = (vn_reference_t)vr_; gimple *def_stmt = SSA_NAME_DEF_STMT (vuse); tree base = ao_ref_base (ref); - HOST_WIDE_INT offset, maxsize; + HOST_WIDE_INT offseti, maxsizei; static vec lhs_ops; ao_ref lhs_ref; bool lhs_ref_ok = false; + poly_int64 copy_size; /* If the reference is based on a parameter that was determined as pointing to readonly memory it doesn't change. */ @@ -1903,14 +1897,14 @@ vn_reference_lookup_3 (ao_ref *ref, tree if (*disambiguate_only) return (void *)-1; - offset = ref->offset; - maxsize = ref->max_size; - /* If we cannot constrain the size of the reference we cannot test if anything kills it. */ - if (maxsize == -1) + if (!ref->max_size_known_p ()) return (void *)-1; + poly_int64 offset = ref->offset; + poly_int64 maxsize = ref->max_size; + /* We can't deduce anything useful from clobbers. */ if (gimple_clobber_p (def_stmt)) return (void *)-1; @@ -1921,7 +1915,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree if (is_gimple_reg_type (vr->type) && gimple_call_builtin_p (def_stmt, BUILT_IN_MEMSET) && integer_zerop (gimple_call_arg (def_stmt, 1)) - && tree_fits_uhwi_p (gimple_call_arg (def_stmt, 2)) + && poly_int_tree_p (gimple_call_arg (def_stmt, 2)) && TREE_CODE (gimple_call_arg (def_stmt, 0)) == ADDR_EXPR) { tree ref2 = TREE_OPERAND (gimple_call_arg (def_stmt, 0), 0); @@ -1930,13 +1924,11 @@ vn_reference_lookup_3 (ao_ref *ref, tree bool reverse; base2 = get_ref_base_and_extent (ref2, &offset2, &size2, &maxsize2, &reverse); - size2 = tree_to_uhwi (gimple_call_arg (def_stmt, 2)) * 8; - if ((unsigned HOST_WIDE_INT)size2 / 8 - == tree_to_uhwi (gimple_call_arg (def_stmt, 2)) - && maxsize2 != -1 + tree len = gimple_call_arg (def_stmt, 2); + if (known_size_p (maxsize2) && operand_equal_p (base, base2, 0) - && offset2 <= offset - && offset2 + size2 >= offset + maxsize) + && known_subrange_p (offset, maxsize, offset2, + wi::to_poly_offset (len) << LOG2_BITS_PER_UNIT)) { tree val = build_zero_cst (vr->type); return vn_reference_lookup_or_insert_for_pieces @@ -1955,10 +1947,9 @@ vn_reference_lookup_3 (ao_ref *ref, tree bool reverse; base2 = get_ref_base_and_extent (gimple_assign_lhs (def_stmt), &offset2, &size2, &maxsize2, &reverse); - if (maxsize2 != -1 + if (known_size_p (maxsize2) && operand_equal_p (base, base2, 0) - && offset2 <= offset - && offset2 + size2 >= offset + maxsize) + && known_subrange_p (offset, maxsize, offset2, size2)) { tree val = build_zero_cst (vr->type); return vn_reference_lookup_or_insert_for_pieces @@ -1968,13 +1959,17 @@ vn_reference_lookup_3 (ao_ref *ref, tree /* 3) Assignment from a constant. We can use folds native encode/interpret routines to extract the assigned bits. 
*/ - else if (ref->size == maxsize + else if (must_eq (ref->size, maxsize) && is_gimple_reg_type (vr->type) && !contains_storage_order_barrier_p (vr->operands) && gimple_assign_single_p (def_stmt) && CHAR_BIT == 8 && BITS_PER_UNIT == 8 - && maxsize % BITS_PER_UNIT == 0 - && offset % BITS_PER_UNIT == 0 + /* native_encode and native_decode operate on arrays of bytes + and so fundamentally need a compile-time size and offset. */ + && maxsize.is_constant (&maxsizei) + && maxsizei % BITS_PER_UNIT == 0 + && offset.is_constant (&offseti) + && offseti % BITS_PER_UNIT == 0 && (is_gimple_min_invariant (gimple_assign_rhs1 (def_stmt)) || (TREE_CODE (gimple_assign_rhs1 (def_stmt)) == SSA_NAME && is_gimple_min_invariant (SSA_VAL (gimple_assign_rhs1 (def_stmt)))))) @@ -1990,8 +1985,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree && size2 % BITS_PER_UNIT == 0 && offset2 % BITS_PER_UNIT == 0 && operand_equal_p (base, base2, 0) - && offset2 <= offset - && offset2 + size2 >= offset + maxsize) + && known_subrange_p (offseti, maxsizei, offset2, size2)) { /* We support up to 512-bit values (for V8DFmode). */ unsigned char buffer[64]; @@ -2008,14 +2002,14 @@ vn_reference_lookup_3 (ao_ref *ref, tree /* Make sure to interpret in a type that has a range covering the whole access size. */ if (INTEGRAL_TYPE_P (vr->type) - && ref->size != TYPE_PRECISION (vr->type)) - type = build_nonstandard_integer_type (ref->size, + && maxsizei != TYPE_PRECISION (vr->type)) + type = build_nonstandard_integer_type (maxsizei, TYPE_UNSIGNED (type)); tree val = native_interpret_expr (type, buffer - + ((offset - offset2) + + ((offseti - offset2) / BITS_PER_UNIT), - ref->size / BITS_PER_UNIT); + maxsizei / BITS_PER_UNIT); /* If we chop off bits because the types precision doesn't match the memory access size this is ok when optimizing reads but not when called from the DSE code during @@ -2038,7 +2032,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree /* 4) Assignment from an SSA name which definition we may be able to access pieces from. */ - else if (ref->size == maxsize + else if (must_eq (ref->size, maxsize) && is_gimple_reg_type (vr->type) && !contains_storage_order_barrier_p (vr->operands) && gimple_assign_single_p (def_stmt) @@ -2054,15 +2048,14 @@ vn_reference_lookup_3 (ao_ref *ref, tree && maxsize2 != -1 && maxsize2 == size2 && operand_equal_p (base, base2, 0) - && offset2 <= offset - && offset2 + size2 >= offset + maxsize + && known_subrange_p (offset, maxsize, offset2, size2) /* ??? We can't handle bitfield precision extracts without either using an alternate type for the BIT_FIELD_REF and then doing a conversion or possibly adjusting the offset according to endianness. */ && (! INTEGRAL_TYPE_P (vr->type) - || ref->size == TYPE_PRECISION (vr->type)) - && ref->size % BITS_PER_UNIT == 0) + || must_eq (ref->size, TYPE_PRECISION (vr->type))) + && multiple_p (ref->size, BITS_PER_UNIT)) { code_helper rcode = BIT_FIELD_REF; tree ops[3]; @@ -2090,7 +2083,6 @@ vn_reference_lookup_3 (ao_ref *ref, tree || handled_component_p (gimple_assign_rhs1 (def_stmt)))) { tree base2; - HOST_WIDE_INT maxsize2; int i, j, k; auto_vec rhs; vn_reference_op_t vro; @@ -2101,8 +2093,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree /* See if the assignment kills REF. 
*/ base2 = ao_ref_base (&lhs_ref); - maxsize2 = lhs_ref.max_size; - if (maxsize2 == -1 + if (!lhs_ref.max_size_known_p () || (base != base2 && (TREE_CODE (base) != MEM_REF || TREE_CODE (base2) != MEM_REF @@ -2129,15 +2120,15 @@ vn_reference_lookup_3 (ao_ref *ref, tree may fail when comparing types for compatibility. But we really don't care here - further lookups with the rewritten operands will simply fail if we messed up types too badly. */ - HOST_WIDE_INT extra_off = 0; + poly_int64 extra_off = 0; if (j == 0 && i >= 0 && lhs_ops[0].opcode == MEM_REF - && lhs_ops[0].off != -1) + && may_ne (lhs_ops[0].off, -1)) { - if (lhs_ops[0].off == vr->operands[i].off) + if (must_eq (lhs_ops[0].off, vr->operands[i].off)) i--, j--; else if (vr->operands[i].opcode == MEM_REF - && vr->operands[i].off != -1) + && may_ne (vr->operands[i].off, -1)) { extra_off = vr->operands[i].off - lhs_ops[0].off; i--, j--; @@ -2163,11 +2154,11 @@ vn_reference_lookup_3 (ao_ref *ref, tree copy_reference_ops_from_ref (gimple_assign_rhs1 (def_stmt), &rhs); /* Apply an extra offset to the inner MEM_REF of the RHS. */ - if (extra_off != 0) + if (maybe_nonzero (extra_off)) { if (rhs.length () < 2 || rhs[0].opcode != MEM_REF - || rhs[0].off == -1) + || must_eq (rhs[0].off, -1)) return (void *)-1; rhs[0].off += extra_off; rhs[0].op0 = int_const_binop (PLUS_EXPR, rhs[0].op0, @@ -2198,7 +2189,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree if (!ao_ref_init_from_vn_reference (&r, vr->set, vr->type, vr->operands)) return (void *)-1; /* This can happen with bitfields. */ - if (ref->size != r.size) + if (may_ne (ref->size, r.size)) return (void *)-1; *ref = r; @@ -2221,20 +2212,20 @@ vn_reference_lookup_3 (ao_ref *ref, tree || TREE_CODE (gimple_call_arg (def_stmt, 0)) == SSA_NAME) && (TREE_CODE (gimple_call_arg (def_stmt, 1)) == ADDR_EXPR || TREE_CODE (gimple_call_arg (def_stmt, 1)) == SSA_NAME) - && tree_fits_uhwi_p (gimple_call_arg (def_stmt, 2))) + && poly_int_tree_p (gimple_call_arg (def_stmt, 2), ©_size)) { tree lhs, rhs; ao_ref r; - HOST_WIDE_INT rhs_offset, copy_size, lhs_offset; + poly_int64 rhs_offset, lhs_offset; vn_reference_op_s op; - HOST_WIDE_INT at; + poly_uint64 mem_offset; + poly_int64 at, byte_maxsize; /* Only handle non-variable, addressable refs. */ - if (ref->size != maxsize - || offset % BITS_PER_UNIT != 0 - || ref->size % BITS_PER_UNIT != 0) + if (may_ne (ref->size, maxsize) + || !multiple_p (offset, BITS_PER_UNIT, &at) + || !multiple_p (maxsize, BITS_PER_UNIT, &byte_maxsize)) return (void *)-1; - at = offset / BITS_PER_UNIT; /* Extract a pointer base and an offset for the destination. 
*/ lhs = gimple_call_arg (def_stmt, 0); @@ -2252,17 +2243,19 @@ vn_reference_lookup_3 (ao_ref *ref, tree } if (TREE_CODE (lhs) == ADDR_EXPR) { + HOST_WIDE_INT tmp_lhs_offset; tree tem = get_addr_base_and_unit_offset (TREE_OPERAND (lhs, 0), - &lhs_offset); + &tmp_lhs_offset); + lhs_offset = tmp_lhs_offset; if (!tem) return (void *)-1; if (TREE_CODE (tem) == MEM_REF - && tree_fits_uhwi_p (TREE_OPERAND (tem, 1))) + && poly_int_tree_p (TREE_OPERAND (tem, 1), &mem_offset)) { lhs = TREE_OPERAND (tem, 0); if (TREE_CODE (lhs) == SSA_NAME) lhs = SSA_VAL (lhs); - lhs_offset += tree_to_uhwi (TREE_OPERAND (tem, 1)); + lhs_offset += mem_offset; } else if (DECL_P (tem)) lhs = build_fold_addr_expr (tem); @@ -2280,15 +2273,17 @@ vn_reference_lookup_3 (ao_ref *ref, tree rhs = SSA_VAL (rhs); if (TREE_CODE (rhs) == ADDR_EXPR) { + HOST_WIDE_INT tmp_rhs_offset; tree tem = get_addr_base_and_unit_offset (TREE_OPERAND (rhs, 0), - &rhs_offset); + &tmp_rhs_offset); + rhs_offset = tmp_rhs_offset; if (!tem) return (void *)-1; if (TREE_CODE (tem) == MEM_REF - && tree_fits_uhwi_p (TREE_OPERAND (tem, 1))) + && poly_int_tree_p (TREE_OPERAND (tem, 1), &mem_offset)) { rhs = TREE_OPERAND (tem, 0); - rhs_offset += tree_to_uhwi (TREE_OPERAND (tem, 1)); + rhs_offset += mem_offset; } else if (DECL_P (tem)) rhs = build_fold_addr_expr (tem); @@ -2299,15 +2294,13 @@ vn_reference_lookup_3 (ao_ref *ref, tree && TREE_CODE (rhs) != ADDR_EXPR) return (void *)-1; - copy_size = tree_to_uhwi (gimple_call_arg (def_stmt, 2)); - /* The bases of the destination and the references have to agree. */ if (TREE_CODE (base) == MEM_REF) { if (TREE_OPERAND (base, 0) != lhs - || !tree_fits_uhwi_p (TREE_OPERAND (base, 1))) + || !poly_int_tree_p (TREE_OPERAND (base, 1), &mem_offset)) return (void *) -1; - at += tree_to_uhwi (TREE_OPERAND (base, 1)); + at += mem_offset; } else if (!DECL_P (base) || TREE_CODE (lhs) != ADDR_EXPR @@ -2316,12 +2309,10 @@ vn_reference_lookup_3 (ao_ref *ref, tree /* If the access is completely outside of the memcpy destination area there is no aliasing. */ - if (lhs_offset >= at + maxsize / BITS_PER_UNIT - || lhs_offset + copy_size <= at) + if (!ranges_may_overlap_p (lhs_offset, copy_size, at, byte_maxsize)) return NULL; /* And the access has to be contained within the memcpy destination. */ - if (lhs_offset > at - || lhs_offset + copy_size < at + maxsize / BITS_PER_UNIT) + if (!known_subrange_p (at, byte_maxsize, lhs_offset, copy_size)) return (void *)-1; /* Make room for 2 operands in the new reference. */ @@ -2359,7 +2350,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree if (!ao_ref_init_from_vn_reference (&r, vr->set, vr->type, vr->operands)) return (void *)-1; /* This can happen with bitfields. */ - if (ref->size != r.size) + if (may_ne (ref->size, r.size)) return (void *)-1; *ref = r; Index: gcc/tree-ssa-uninit.c =================================================================== --- gcc/tree-ssa-uninit.c 2017-10-23 16:52:20.058356365 +0100 +++ gcc/tree-ssa-uninit.c 2017-10-23 17:01:52.305178291 +0100 @@ -294,15 +294,15 @@ warn_uninitialized_vars (bool warn_possi /* Do not warn if the access is fully outside of the variable. 
*/ + poly_int64 decl_size; if (DECL_P (base) - && ref.size != -1 - && ref.max_size == ref.size - && (ref.offset + ref.size <= 0 - || (ref.offset >= 0 + && known_size_p (ref.size) + && must_eq (ref.max_size, ref.size) + && (must_le (ref.offset + ref.size, 0) + || (must_ge (ref.offset, 0) && DECL_SIZE (base) - && TREE_CODE (DECL_SIZE (base)) == INTEGER_CST - && compare_tree_int (DECL_SIZE (base), - ref.offset) <= 0))) + && poly_int_tree_p (DECL_SIZE (base), &decl_size) + && must_le (decl_size, ref.offset)))) continue; /* Do not warn if the access is then used for a BIT_INSERT_EXPR. */
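
For readers new to this series, here is a stand-alone sketch (not part of
the patch) of the two range predicates the code above switches to,
ranges_may_overlap_p and known_subrange_p, modelled only for the simple
case of compile-time-constant offsets and known, non-negative sizes.  The
real GCC helpers are templates over poly_int values, distinguish "known"
from "maybe" answers when offsets are not ordered at compile time, and
handle more of the -1 "unknown size" cases; none of that is reproduced
here.

/* Minimal model of the range helpers used in this patch, for plain 64-bit
   constants only.  An illustration, not GCC's implementation.  */

#include <cassert>
#include <cstdint>

typedef int64_t hwi;  /* stand-in for HOST_WIDE_INT / constant poly_int64 */

/* -1 is the sentinel for "size unknown/unconstrained".  The new
   ao_ref::max_size_known_p in the patch is a wrapper around this test.  */
static bool
known_size_p (hwi size)
{
  return size != -1;
}

/* True unless [POS1, POS1 + SIZE1) and [POS2, POS2 + SIZE2) are certainly
   disjoint.  For known, positive sizes this is the ordinary overlap test
   that ranges_overlap_p used to perform.  */
static bool
ranges_may_overlap_p (hwi pos1, hwi size1, hwi pos2, hwi size2)
{
  return pos1 < pos2 + size2 && pos2 < pos1 + size1;
}

/* True if [POS1, POS1 + SIZE1) is known to lie within [POS2, POS2 + SIZE2).
   This is the predicate that replaces the open-coded
     pos2 <= pos1 && pos1 + size1 <= pos2 + size2
   checks in stmt_kills_ref_p and mark_aliased_reaching_defs_necessary_1.  */
static bool
known_subrange_p (hwi pos1, hwi size1, hwi pos2, hwi size2)
{
  return known_size_p (size1)
         && known_size_p (size2)
         && pos2 <= pos1
         && pos1 + size1 <= pos2 + size2;
}

int
main ()
{
  /* An 8-byte store at offset 4 fully covers a 4-byte access at offset 8...  */
  assert (known_subrange_p (8, 4, 4, 8));
  /* ...so the two ranges certainly overlap as well.  */
  assert (ranges_may_overlap_p (8, 4, 4, 8));
  /* Accesses at [0, 4) and [4, 8) cannot overlap.  */
  assert (!ranges_may_overlap_p (0, 4, 4, 4));
  /* -1 marks an unknown size.  */
  assert (!known_size_p (-1) && known_size_p (8));
  return 0;
}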