From patchwork Thu Mar 21 17:12:00 2024
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 781590
From: Adhemerval Zanella
To: libc-alpha@sourceware.org
Cc: Wilco Dijkstra, Noah Goldstein, H. J. Lu
Subject: [PATCH] x86_64: Remove avx512 strstr implementation
Date: Thu, 21 Mar 2024 14:12:00 -0300
Message-Id: <20240321171200.1177053-1-adhemerval.zanella@linaro.org>
X-Mailer: git-send-email 2.34.1

As indicated in a recent thread, this is a simple brute-force algorithm
that checks the whole needle at a matching character pair (and does so
1 byte at a time after the first 64 bytes of a needle).  It also never
skips ahead, so it can end up retrying a full needle match at every
haystack position, which the generic implementation avoids.

As indicated by Wilco, a 4x larger needle and 16x larger haystack gives
a clear 65x slowdown for both basic_strstr and __strstr_avx512:

  "ifuncs": ["basic_strstr", "twoway_strstr", "__strstr_avx512",
             "__strstr_sse2_unaligned", "__strstr_generic"],

  {
    "len_haystack": 65536,
    "len_needle": 1024,
    "align_haystack": 0,
    "align_needle": 0,
    "fail": 1,
    "desc": "Difficult bruteforce needle",
    "timings": [4.0948e+07, 15094.5, 3.20818e+07, 108558, 10839.2]
  },
  {
    "len_haystack": 1048576,
    "len_needle": 4096,
    "align_haystack": 0,
    "align_needle": 0,
    "fail": 1,
    "desc": "Difficult bruteforce needle",
    "timings": [2.69767e+09, 100797, 2.08535e+09, 495706, 82666.9]
  }

PS: I don't have an AVX512-capable machine to verify these issues, but
skimming through the code it does seem to follow what Wilco has
described.
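
For clarity, the access pattern being described amounts to the following
(a minimal illustrative sketch, not the removed code; bruteforce_strstr is
a made-up name): the whole needle is re-checked at every candidate
position and the scan never skips ahead, so a near-periodic needle in a
periodic haystack costs O(len_haystack * len_needle) comparisons.

    /* Illustrative sketch only: quadratic brute-force matching.  The full
       needle is re-compared at each position and no shift table is used,
       which is the behaviour the benchmark numbers above expose.  */
    #include <stddef.h>
    #include <string.h>

    static const char *
    bruteforce_strstr (const char *hay, const char *ned)
    {
      size_t ned_len = strlen (ned);
      if (ned_len == 0)
        return hay;
      for (size_t i = 0; hay[i] != '\0'; i++)
        /* Full needle comparison at every haystack position.  */
        if (strncmp (hay + i, ned, ned_len) == 0)
          return hay + i;
      return NULL;
    }

The two-way based generic implementation avoids this by shifting past
positions that cannot match, which is where the 65x gap comes from.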
Reviewed-by: Noah Goldstein
---
 sysdeps/x86_64/multiarch/Makefile          |   3 -
 sysdeps/x86_64/multiarch/ifunc-impl-list.c |   6 -
 sysdeps/x86_64/multiarch/strstr-avx512.c   | 218 ---------------------
 sysdeps/x86_64/multiarch/strstr.c          |  25 +--
 4 files changed, 4 insertions(+), 248 deletions(-)
 delete mode 100644 sysdeps/x86_64/multiarch/strstr-avx512.c

diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index d3d2270394..696cb66991 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -117,7 +117,6 @@ sysdep_routines += \
   strrchr-evex512 \
   strrchr-sse2 \
   strspn-sse4 \
-  strstr-avx512 \
   strstr-sse2-unaligned \
   varshift \
 # sysdep_routines
@@ -125,8 +124,6 @@ sysdep_routines += \
 CFLAGS-strcspn-sse4.c += -msse4
 CFLAGS-strpbrk-sse4.c += -msse4
 CFLAGS-strspn-sse4.c += -msse4
-
-CFLAGS-strstr-avx512.c += -mavx512f -mavx512vl -mavx512dq -mavx512bw -mbmi -mbmi2 -O3
 endif
 
 ifeq ($(subdir),wcsmbs)
diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index c4a21d4b7c..0bbb71bbbf 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -790,12 +790,6 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   /* Support sysdeps/x86_64/multiarch/strstr.c.  */
   IFUNC_IMPL (i, name, strstr,
-              IFUNC_IMPL_ADD (array, i, strstr,
-                              (CPU_FEATURE_USABLE (AVX512VL)
-                               && CPU_FEATURE_USABLE (AVX512BW)
-                               && CPU_FEATURE_USABLE (AVX512DQ)
-                               && CPU_FEATURE_USABLE (BMI2)),
-                              __strstr_avx512)
               IFUNC_IMPL_ADD (array, i, strstr, 1, __strstr_sse2_unaligned)
               IFUNC_IMPL_ADD (array, i, strstr, 1, __strstr_generic))
 
diff --git a/sysdeps/x86_64/multiarch/strstr-avx512.c b/sysdeps/x86_64/multiarch/strstr-avx512.c
deleted file mode 100644
index 3ac53accbd..0000000000
--- a/sysdeps/x86_64/multiarch/strstr-avx512.c
+++ /dev/null
@@ -1,218 +0,0 @@
-/* strstr optimized with 512-bit AVX-512 instructions
-   Copyright (C) 2022-2024 Free Software Foundation, Inc.
-   This file is part of the GNU C Library.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
-
-   The GNU C Library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with the GNU C Library; if not, see
-   <https://www.gnu.org/licenses/>.  */
-
-#include <immintrin.h>
-#include <inttypes.h>
-#include <stdbool.h>
-#include <string.h>
-
-#define FULL_MMASK64 0xffffffffffffffff
-#define ONE_64BIT 0x1ull
-#define ZMM_SIZE_IN_BYTES 64
-#define PAGESIZE 4096
-
-#define cvtmask64_u64(...) (uint64_t) (__VA_ARGS__)
-#define kshiftri_mask64(x, y) ((x) >> (y))
-#define kand_mask64(x, y) ((x) & (y))
-
-/*
-   Returns the index of the first edge within the needle, returns 0 if no edge
-   is found. Example: 'ab' is the first edge in 'aaaaaaaaaabaarddg'
- */
-static inline size_t
-find_edge_in_needle (const char *ned)
-{
-  size_t ind = 0;
-  while (ned[ind + 1] != '\0')
-    {
-      if (ned[ind] != ned[ind + 1])
-        return ind;
-      else
-        ind = ind + 1;
-    }
-  return 0;
-}
-
-/*
-   Compare needle with haystack byte by byte at specified location
- */
-static inline bool
-verify_string_match (const char *hay, const size_t hay_index, const char *ned,
-                     size_t ind)
-{
-  while (ned[ind] != '\0')
-    {
-      if (ned[ind] != hay[hay_index + ind])
-        return false;
-      ind = ind + 1;
-    }
-  return true;
-}
-
-/*
-   Compare needle with haystack at specified location. The first 64 bytes are
-   compared using a ZMM register.
- */
-static inline bool
-verify_string_match_avx512 (const char *hay, const size_t hay_index,
-                            const char *ned, const __mmask64 ned_mask,
-                            const __m512i ned_zmm)
-{
-  /* check first 64 bytes using zmm and then scalar */
-  __m512i hay_zmm = _mm512_loadu_si512 (hay + hay_index); // safe to do so
-  __mmask64 match = _mm512_mask_cmpneq_epi8_mask (ned_mask, hay_zmm, ned_zmm);
-  if (match != 0x0) // failed the first few chars
-    return false;
-  else if (ned_mask == FULL_MMASK64)
-    return verify_string_match (hay, hay_index, ned, ZMM_SIZE_IN_BYTES);
-  return true;
-}
-
-char *
-__strstr_avx512 (const char *haystack, const char *ned)
-{
-  char first = ned[0];
-  if (first == '\0')
-    return (char *)haystack;
-  if (ned[1] == '\0')
-    return (char *)strchr (haystack, ned[0]);
-
-  size_t edge = find_edge_in_needle (ned);
-
-  /* ensure haystack is as long as the pos of edge in needle */
-  for (int ii = 0; ii < edge; ++ii)
-    {
-      if (haystack[ii] == '\0')
-        return NULL;
-    }
-
-  /*
-     Load 64 bytes of the needle and save it to a zmm register
-     Read one cache line at a time to avoid loading across a page boundary
-   */
-  __mmask64 ned_load_mask = _bzhi_u64 (
-      FULL_MMASK64, 64 - ((uintptr_t) (ned) & 63));
-  __m512i ned_zmm = _mm512_maskz_loadu_epi8 (ned_load_mask, ned);
-  __mmask64 ned_nullmask
-      = _mm512_mask_testn_epi8_mask (ned_load_mask, ned_zmm, ned_zmm);
-
-  if (__glibc_unlikely (ned_nullmask == 0x0))
-    {
-      ned_zmm = _mm512_loadu_si512 (ned);
-      ned_nullmask = _mm512_testn_epi8_mask (ned_zmm, ned_zmm);
-      ned_load_mask = ned_nullmask ^ (ned_nullmask - ONE_64BIT);
-      if (ned_nullmask != 0x0)
-        ned_load_mask = ned_load_mask >> 1;
-    }
-  else
-    {
-      ned_load_mask = ned_nullmask ^ (ned_nullmask - ONE_64BIT);
-      ned_load_mask = ned_load_mask >> 1;
-    }
-  const __m512i ned0 = _mm512_set1_epi8 (ned[edge]);
-  const __m512i ned1 = _mm512_set1_epi8 (ned[edge + 1]);
-
-  /*
-     Read the bytes of haystack in the current cache line
-   */
-  size_t hay_index = edge;
-  __mmask64 loadmask = _bzhi_u64 (
-      FULL_MMASK64, 64 - ((uintptr_t) (haystack + hay_index) & 63));
-  /* First load is a partial cache line */
-  __m512i hay0 = _mm512_maskz_loadu_epi8 (loadmask, haystack + hay_index);
-  /* Search for NULL and compare only till null char */
-  uint64_t nullmask
-      = cvtmask64_u64 (_mm512_mask_testn_epi8_mask (loadmask, hay0, hay0));
-  uint64_t cmpmask = nullmask ^ (nullmask - ONE_64BIT);
-  cmpmask = cmpmask & cvtmask64_u64 (loadmask);
-  /* Search for the 2 characters of needle */
-  __mmask64 k0 = _mm512_cmpeq_epi8_mask (hay0, ned0);
-  __mmask64 k1 = _mm512_cmpeq_epi8_mask (hay0, ned1);
-  k1 = kshiftri_mask64 (k1, 1);
-  /* k2 masks tell us if both chars from needle match */
-  uint64_t k2 = cvtmask64_u64 (kand_mask64 (k0, k1)) & cmpmask;
-  /* For every match, search for the entire needle for a full match */
-  while (k2)
-    {
-      uint64_t bitcount = _tzcnt_u64 (k2);
-      k2 = _blsr_u64 (k2);
-      size_t match_pos = hay_index + bitcount - edge;
-      if (((uintptr_t) (haystack + match_pos) & (PAGESIZE - 1))
-          < PAGESIZE - 1 - ZMM_SIZE_IN_BYTES)
-        {
-          /*
-           * Use vector compare as long as you are not crossing a page
-           */
-          if (verify_string_match_avx512 (haystack, match_pos, ned,
-                                          ned_load_mask, ned_zmm))
-            return (char *)haystack + match_pos;
-        }
-      else
-        {
-          if (verify_string_match (haystack, match_pos, ned, 0))
-            return (char *)haystack + match_pos;
-        }
-    }
-  /* We haven't checked for potential match at the last char yet */
-  haystack = (const char *)(((uintptr_t) (haystack + hay_index) | 63));
-  hay_index = 0;
-
-  /*
-     Loop over one cache line at a time to prevent reading over page
-     boundary
-   */
-  __m512i hay1;
-  while (nullmask == 0)
-    {
-      hay0 = _mm512_loadu_si512 (haystack + hay_index);
-      hay1 = _mm512_load_si512 (haystack + hay_index
-                                + 1); // Always 64 byte aligned
-      nullmask = cvtmask64_u64 (_mm512_testn_epi8_mask (hay1, hay1));
-      /* Compare only till null char */
-      cmpmask = nullmask ^ (nullmask - ONE_64BIT);
-      k0 = _mm512_cmpeq_epi8_mask (hay0, ned0);
-      k1 = _mm512_cmpeq_epi8_mask (hay1, ned1);
-      /* k2 masks tell us if both chars from needle match */
-      k2 = cvtmask64_u64 (kand_mask64 (k0, k1)) & cmpmask;
-      /* For every match, compare full strings for potential match */
-      while (k2)
-        {
-          uint64_t bitcount = _tzcnt_u64 (k2);
-          k2 = _blsr_u64 (k2);
-          size_t match_pos = hay_index + bitcount - edge;
-          if (((uintptr_t) (haystack + match_pos) & (PAGESIZE - 1))
-              < PAGESIZE - 1 - ZMM_SIZE_IN_BYTES)
-            {
-              /*
-               * Use vector compare as long as you are not crossing a page
-               */
-              if (verify_string_match_avx512 (haystack, match_pos, ned,
-                                              ned_load_mask, ned_zmm))
-                return (char *)haystack + match_pos;
-            }
-          else
-            {
-              /* Compare byte by byte */
-              if (verify_string_match (haystack, match_pos, ned, 0))
-                return (char *)haystack + match_pos;
-            }
-        }
-      hay_index += ZMM_SIZE_IN_BYTES;
-    }
-  return NULL;
-}
diff --git a/sysdeps/x86_64/multiarch/strstr.c b/sysdeps/x86_64/multiarch/strstr.c
index a513bac5c3..828308668b 100644
--- a/sysdeps/x86_64/multiarch/strstr.c
+++ b/sysdeps/x86_64/multiarch/strstr.c
@@ -35,32 +35,15 @@
 extern __typeof (__redirect_strstr) __strstr_sse2_unaligned attribute_hidden;
 extern __typeof (__redirect_strstr) __strstr_generic attribute_hidden;
-extern __typeof (__redirect_strstr) __strstr_avx512 attribute_hidden;
 
 #include "init-arch.h"
 
 /* Avoid DWARF definition DIE on ifunc symbol so that GDB can handle
    ifunc symbol properly.  */
 extern __typeof (__redirect_strstr) __libc_strstr;
-
-static inline void *
-IFUNC_SELECTOR (void)
-{
-  const struct cpu_features *cpu_features = __get_cpu_features ();
-
-  if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512)
-      && CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
-      && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
-      && CPU_FEATURE_USABLE_P (cpu_features, AVX512DQ)
-      && CPU_FEATURE_USABLE_P (cpu_features, BMI2))
-    return __strstr_avx512;
-
-  if (CPU_FEATURES_ARCH_P (cpu_features, Fast_Unaligned_Load))
-    return __strstr_sse2_unaligned;
-
-  return __strstr_generic;
-}
-
-libc_ifunc_redirected (__redirect_strstr, __libc_strstr, IFUNC_SELECTOR ());
+libc_ifunc (__libc_strstr,
+            HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+            ? __strstr_sse2_unaligned
+            : __strstr_generic)
 
 #undef strstr
 strong_alias (__libc_strstr, strstr)
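
For anyone wanting a quick stand-alone sanity check of whichever strstr
ends up selected after this change, a small harness like the one below
(illustrative only, not part of the patch or of glibc's benchtests) builds
the same kind of "Difficult bruteforce needle" input as the first benchmark
entry above: an all-'a' haystack searched for an all-'a' needle that ends
in 'b', so the search nearly matches at every position and then fails.

    /* Illustrative smoke test, assuming the sizes of the first benchmark
       entry above (65536-byte haystack, 1024-byte needle, failing match).  */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int
    main (void)
    {
      const size_t hay_len = 65536, ned_len = 1024;
      char *hay = malloc (hay_len + 1);
      char *ned = malloc (ned_len + 1);
      if (hay == NULL || ned == NULL)
        return 1;
      memset (hay, 'a', hay_len);
      hay[hay_len] = '\0';
      memset (ned, 'a', ned_len);
      ned[ned_len - 1] = 'b';   /* Force a mismatch at the last byte.  */
      ned[ned_len] = '\0';

      struct timespec t0, t1;
      clock_gettime (CLOCK_MONOTONIC, &t0);
      char *res = strstr (hay, ned);
      clock_gettime (CLOCK_MONOTONIC, &t1);

      double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
      printf ("match: %s, time: %.0f ns\n", res == NULL ? "none" : "found", ns);
      free (hay);
      free (ned);
      return 0;
    }

A single call is of course noisy; glibc's own bench-strstr remains the
proper measurement tool, this is only a smoke test for comparing builds
with and without the patch on AVX512 hardware.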