From patchwork Sun Oct 6 17:21:01 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 175302
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Nick Desaulniers,
    Nathan Chancellor, Andrew Murray, Arnd Bergmann, Will Deacon,
    Sasha Levin
Subject: [PATCH 4.14 25/68] arm64: fix unreachable code issue with cmpxchg
Date: Sun, 6 Oct 2019 19:21:01 +0200
Message-Id: <20191006171119.691801996@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191006171108.150129403@linuxfoundation.org>
References: <20191006171108.150129403@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0

From: Arnd Bergmann

[ Upstream commit 920fdab7b3ce98c14c840261e364f490f3679a62 ]

On arm64 builds with clang, __cmpxchg_mb is sometimes not inlined when
CONFIG_OPTIMIZE_INLINING is set. Clang then fails a compile-time
assertion, because it cannot tell at compile time what the size of the
argument is:

mm/memcontrol.o: In function `__cmpxchg_mb':
memcontrol.c:(.text+0x1a4c): undefined reference to `__compiletime_assert_175'
memcontrol.c:(.text+0x1a4c): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `__compiletime_assert_175'

Mark all of the cmpxchg() style functions as __always_inline to ensure
that the compiler can see the result.
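
To illustrate the failure mode, here is a minimal standalone sketch of
the pattern these helpers follow (not the kernel's actual
implementation; __cmpxchg_sketch, __sketch_bad_cmpxchg_size and
cmpxchg_sketch are invented names): the size dispatch only links
cleanly when the helper is fully inlined, so that "size" folds to a
compile-time constant and the call in the default branch is eliminated.

    /*
     * Minimal sketch, not kernel code: names are illustrative only.
     */
    #define __always_inline inline __attribute__((__always_inline__))

    /* Deliberately never defined: any call that survives optimization
     * becomes an undefined reference at link time, analogous to the
     * __compiletime_assert_175 symbol in the error quoted above. */
    extern void __sketch_bad_cmpxchg_size(void);

    static __always_inline unsigned long
    __cmpxchg_sketch(volatile void *ptr, unsigned long old,
                     unsigned long new, int size)
    {
            switch (size) {
            case 1:         /* 1-byte compare-and-swap would go here */
            case 2:         /* 2-byte variant */
            case 4:         /* 4-byte variant */
            case 8:         /* 8-byte variant */
                    break;
            default:
                    __sketch_bad_cmpxchg_size(); /* must be optimized away */
            }
            return old;
    }

    /* Callers pass sizeof(*ptr), which is only visible as a constant
     * while the helper body is inlined into the call site. */
    #define cmpxchg_sketch(ptr, o, n)                               \
            __cmpxchg_sketch((ptr), (unsigned long)(o),             \
                             (unsigned long)(n), sizeof(*(ptr)))

With plain "static inline" and CONFIG_OPTIMIZE_INLINING, the compiler
is free to emit an out-of-line copy in which "size" is a runtime value,
so the call to the never-defined symbol survives and shows up as the
undefined reference above; __always_inline removes that freedom.
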
Acked-by: Nick Desaulniers
Reported-by: Nathan Chancellor
Link: https://github.com/ClangBuiltLinux/linux/issues/648
Reviewed-by: Nathan Chancellor
Tested-by: Nathan Chancellor
Reviewed-by: Andrew Murray
Tested-by: Andrew Murray
Signed-off-by: Arnd Bergmann
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 arch/arm64/include/asm/cmpxchg.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

-- 
2.20.1

diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index 0f2e1ab5e1666..9b2e2e2e728ae 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -73,7 +73,7 @@ __XCHG_CASE( , , mb_8, dmb ish, nop, , a, l, "memory")
 #undef __XCHG_CASE
 
 #define __XCHG_GEN(sfx)						\
-static inline unsigned long __xchg##sfx(unsigned long x,	\
+static __always_inline unsigned long __xchg##sfx(unsigned long x,	\
 					volatile void *ptr,	\
 					int size)		\
 {								\
@@ -115,7 +115,7 @@ __XCHG_GEN(_mb)
 #define xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
 
 #define __CMPXCHG_GEN(sfx)					\
-static inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
+static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
 					   unsigned long old,	\
 					   unsigned long new,	\
 					   int size)		\
@@ -248,7 +248,7 @@ __CMPWAIT_CASE( , , 8);
 #undef __CMPWAIT_CASE
 
 #define __CMPWAIT_GEN(sfx)					\
-static inline void __cmpwait##sfx(volatile void *ptr,		\
+static __always_inline void __cmpwait##sfx(volatile void *ptr,	\
 				  unsigned long val,		\
 				  int size)			\
 {								\
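
For readers unfamiliar with the token-pasting generators touched above,
the ordering variants are all stamped out of the same macros. The
following is a simplified sketch, not the file's real macro body (which
dispatches on size and calls per-width helpers); the suffix list
reflects the usual relaxed/acquire/release/full-barrier naming, and
__always_inline comes from the kernel's compiler attribute headers.

    /* Simplified sketch of the generator pattern patched above. */
    #define __CMPXCHG_GEN(sfx)						\
    static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
    						unsigned long old,	\
    						unsigned long new,	\
    						int size)		\
    { /* switch (size) { ... } */ return old; }

    __CMPXCHG_GEN()      /* defines __cmpxchg()     */
    __CMPXCHG_GEN(_acq)  /* defines __cmpxchg_acq() */
    __CMPXCHG_GEN(_rel)  /* defines __cmpxchg_rel() */
    __CMPXCHG_GEN(_mb)   /* defines __cmpxchg_mb(), the symbol named in
                            the link error quoted in the changelog */

Because every variant comes from the three generator macros changed
here (__XCHG_GEN, __CMPXCHG_GEN, __CMPWAIT_GEN), adjusting the
storage-class specifier in those macros is enough to make every
generated helper __always_inline.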