From patchwork Mon May 15 13:57:59 2023
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 684197
From: Mathieu Desnoyers
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Paul E. McKenney,
    Boqun Feng, H. Peter Anvin, Paul Turner, linux-api@vger.kernel.org,
    Shuah Khan, linux-kselftest@vger.kernel.org, Mathieu Desnoyers
Subject: [PATCH 2/4] selftests/rseq: Implement rseq_unqual_scalar_typeof
Date: Mon, 15 May 2023 09:57:59 -0400
Message-Id: <20230515135801.15220-3-mathieu.desnoyers@efficios.com>
In-Reply-To: <20230515135801.15220-1-mathieu.desnoyers@efficios.com>
References: <20230515135801.15220-1-mathieu.desnoyers@efficios.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

Allow defining variables and performing casts with a typeof operator
which removes the volatile and const qualifiers.

This prevents declaring a stack variable with a volatile qualifier from
within a macro, which would generate sub-optimal assembler.

This is imported from the "librseq" project.

Signed-off-by: Mathieu Desnoyers
---
 tools/testing/selftests/rseq/compiler.h | 26 +++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/tools/testing/selftests/rseq/compiler.h b/tools/testing/selftests/rseq/compiler.h
index f47092bddeba..49d62fbd6dda 100644
--- a/tools/testing/selftests/rseq/compiler.h
+++ b/tools/testing/selftests/rseq/compiler.h
@@ -33,4 +33,30 @@
 #define RSEQ_COMBINE_TOKENS(_tokena, _tokenb)				\
 	RSEQ__COMBINE_TOKENS(_tokena, _tokenb)
 
+#ifdef __cplusplus
+#define rseq_unqual_scalar_typeof(x)					\
+	std::remove_cv<std::remove_reference<decltype(x)>::type>::type
+#else
+#define rseq_scalar_type_to_expr(type)					\
+	unsigned type: (unsigned type)0,				\
+	signed type: (signed type)0
+
+/*
+ * Use C11 _Generic to express unqualified type from expression. This removes
+ * volatile qualifier from expression type.
+ */
+#define rseq_unqual_scalar_typeof(x)					\
+	__typeof__(							\
+		_Generic((x),						\
+			char: (char)0,					\
+			rseq_scalar_type_to_expr(char),			\
+			rseq_scalar_type_to_expr(short),		\
+			rseq_scalar_type_to_expr(int),			\
+			rseq_scalar_type_to_expr(long),			\
+			rseq_scalar_type_to_expr(long long),		\
+			default: (x)					\
+		)							\
+	)
+#endif
+
 #endif /* RSEQ_COMPILER_H_ */

From patchwork Mon May 15 13:58:00 2023
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 684196
From: Mathieu Desnoyers
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Paul E. McKenney,
    Boqun Feng, H. Peter Anvin, Paul Turner, linux-api@vger.kernel.org,
    Shuah Khan, linux-kselftest@vger.kernel.org, Mathieu Desnoyers,
    Catalin Marinas, Will Deacon
Subject: [PATCH 3/4] selftests/rseq: Fix arm64 buggy load-acquire/store-release macros
Date: Mon, 15 May 2023 09:58:00 -0400
Message-Id: <20230515135801.15220-4-mathieu.desnoyers@efficios.com>
In-Reply-To: <20230515135801.15220-1-mathieu.desnoyers@efficios.com>
References: <20230515135801.15220-1-mathieu.desnoyers@efficios.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

The arm64 load-acquire/store-release macros from the Linux kernel rseq
selftests are buggy. Replace them with a working implementation.
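The rework below relies on the rseq_unqual_scalar_typeof() macro introduced in
the previous patch. For illustration only (not part of this patch), here is a
minimal sketch of what that macro provides; the "compiler.h" include path is an
assumption and simply refers to the selftests header patched above:

  #include <stdio.h>
  #include "compiler.h"	/* assumed: the rseq selftests header patched above */

  int main(void)
  {
  	volatile int counter = 42;

  	/*
  	 * __typeof__(counter) would be "volatile int"; declaring a macro
  	 * temporary with that type forces extra loads/stores.
  	 * rseq_unqual_scalar_typeof(counter) yields plain "int" instead.
  	 */
  	rseq_unqual_scalar_typeof(counter) tmp = counter;

  	tmp++;			/* ordinary, non-volatile arithmetic */
  	printf("%d\n", tmp);	/* prints 43 */
  	return 0;
  }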
Signed-off-by: Mathieu Desnoyers
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Peter Zijlstra
---
 tools/testing/selftests/rseq/rseq-arm64.h | 58 ++++++++++++-----------
 1 file changed, 30 insertions(+), 28 deletions(-)

diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h
index 85b90977e7e6..21e1626a7235 100644
--- a/tools/testing/selftests/rseq/rseq-arm64.h
+++ b/tools/testing/selftests/rseq/rseq-arm64.h
@@ -27,59 +27,61 @@
 #define rseq_smp_load_acquire(p)					\
 __extension__ ({							\
-	__typeof(*p) ____p1;						\
-	switch (sizeof(*p)) {						\
+	union { rseq_unqual_scalar_typeof(*(p)) __val; char __c[sizeof(*(p))]; } __u; \
+	switch (sizeof(*(p))) {						\
 	case 1:								\
-		asm volatile ("ldarb %w0, %1"				\
-			: "=r" (*(__u8 *)p)				\
-			: "Q" (*p) : "memory");				\
+		__asm__ __volatile__ ("ldarb %w0, %1"			\
+			: "=r" (*(__u8 *)__u.__c)			\
+			: "Q" (*(p)) : "memory");			\
 		break;							\
 	case 2:								\
-		asm volatile ("ldarh %w0, %1"				\
-			: "=r" (*(__u16 *)p)				\
-			: "Q" (*p) : "memory");				\
+		__asm__ __volatile__ ("ldarh %w0, %1"			\
+			: "=r" (*(__u16 *)__u.__c)			\
+			: "Q" (*(p)) : "memory");			\
 		break;							\
 	case 4:								\
-		asm volatile ("ldar %w0, %1"				\
-			: "=r" (*(__u32 *)p)				\
-			: "Q" (*p) : "memory");				\
+		__asm__ __volatile__ ("ldar %w0, %1"			\
+			: "=r" (*(__u32 *)__u.__c)			\
+			: "Q" (*(p)) : "memory");			\
 		break;							\
 	case 8:								\
-		asm volatile ("ldar %0, %1"				\
-			: "=r" (*(__u64 *)p)				\
-			: "Q" (*p) : "memory");				\
+		__asm__ __volatile__ ("ldar %0, %1"			\
+			: "=r" (*(__u64 *)__u.__c)			\
+			: "Q" (*(p)) : "memory");			\
 		break;							\
 	}								\
-	____p1;								\
+	(rseq_unqual_scalar_typeof(*(p)))__u.__val;			\
 })
 
 #define rseq_smp_acquire__after_ctrl_dep()	rseq_smp_rmb()
 
 #define rseq_smp_store_release(p, v)					\
 do {									\
-	switch (sizeof(*p)) {						\
+	union { rseq_unqual_scalar_typeof(*(p)) __val; char __c[sizeof(*(p))]; } __u = \
+		{ .__val = (rseq_unqual_scalar_typeof(*(p))) (v) };	\
+	switch (sizeof(*(p))) {						\
 	case 1:								\
-		asm volatile ("stlrb %w1, %0"				\
-			: "=Q" (*p)					\
-			: "r" ((__u8)v)					\
+		__asm__ __volatile__ ("stlrb %w1, %0"			\
+			: "=Q" (*(p))					\
+			: "r" (*(__u8 *)__u.__c)			\
 			: "memory");					\
 		break;							\
 	case 2:								\
-		asm volatile ("stlrh %w1, %0"				\
-			: "=Q" (*p)					\
-			: "r" ((__u16)v)				\
+		__asm__ __volatile__ ("stlrh %w1, %0"			\
+			: "=Q" (*(p))					\
+			: "r" (*(__u16 *)__u.__c)			\
 			: "memory");					\
 		break;							\
 	case 4:								\
-		asm volatile ("stlr %w1, %0"				\
-			: "=Q" (*p)					\
-			: "r" ((__u32)v)				\
+		__asm__ __volatile__ ("stlr %w1, %0"			\
+			: "=Q" (*(p))					\
+			: "r" (*(__u32 *)__u.__c)			\
 			: "memory");					\
 		break;							\
 	case 8:								\
-		asm volatile ("stlr %1, %0"				\
-			: "=Q" (*p)					\
-			: "r" ((__u64)v)				\
+		__asm__ __volatile__ ("stlr %1, %0"			\
+			: "=Q" (*(p))					\
+			: "r" (*(__u64 *)__u.__c)			\
 			: "memory");					\
 		break;							\
 	}								\
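
For context only (not part of the patch), a minimal sketch of how the fixed
macros are meant to be used; on arm64 they expand to the ldar/stlr sequences
above. The flag/payload variables are illustrative, and including "rseq.h"
(which pulls in the patched rseq-arm64.h and the __u8/__u64 types on arm64) is
an assumption about the build setup:

  #include <stdint.h>
  #include "rseq.h"	/* assumed: pulls in the patched rseq-arm64.h on arm64 */

  static uint64_t payload;
  static uint64_t flag;

  /* Producer: write the payload, then publish it with a release store. */
  static void publish(uint64_t value)
  {
  	payload = value;
  	rseq_smp_store_release(&flag, 1);	/* stlr: payload write ordered before flag */
  }

  /* Consumer: the acquire load of the flag orders the later payload read. */
  static int try_consume(uint64_t *out)
  {
  	if (rseq_smp_load_acquire(&flag) != 1)	/* ldar */
  		return 0;
  	*out = payload;
  	return 1;
  }

Because the temporaries inside the macros are declared with
rseq_unqual_scalar_typeof(), the same code would keep working, without
generating volatile temporaries, even if flag were declared volatile.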