From patchwork Wed May 12 14:50:11 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 435554
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Arnd Bergmann,
 "Peter Zijlstra (Intel)", Jens Axboe, Nathan Chancellor
Subject: [PATCH 5.4 240/244] smp: Fix smp_call_function_single_async prototype
Date: Wed, 12 May 2021 16:50:11 +0200
Message-Id: <20210512144750.678746102@linuxfoundation.org>
In-Reply-To: <20210512144743.039977287@linuxfoundation.org>
References: <20210512144743.039977287@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Arnd Bergmann

commit 1139aeb1c521eb4a050920ce6c64c36c4f2a3ab7 upstream.

As of commit 966a967116e6 ("smp: Avoid using two cache lines for struct
call_single_data"), the smp code prefers 32-byte aligned call_single_data
objects for performance reasons, but the block layer includes an instance
of this structure in the main 'struct request' that is more sensitive to
size than to performance here, see 4ccafe032005 ("block: unalign
call_single_data in struct request").

The result is a violation of the calling conventions that clang correctly
points out:

  block/blk-mq.c:630:39: warning: passing 8-byte aligned argument to
  32-byte aligned parameter 2 of 'smp_call_function_single_async' may
  result in an unaligned pointer access [-Walign-mismatch]
          smp_call_function_single_async(cpu, &rq->csd);

It does seem that using call_single_data without cache-line alignment
should still be allowed by the smp code, so just change the function
prototype so it accepts both, but leave the default alignment unchanged
for the other users. This seems better to me than adding a local hack to
shut up an otherwise correct warning in the caller.
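For illustration, a compile-only sketch of the mismatch clang is flagging,
using simplified, illustrative definitions rather than the kernel's exact
layout ('struct request' is reduced to the one member that matters, and
complete_request_on_cpu() is a made-up caller):

	/* Simplified stand-ins, for illustration only. */
	typedef void (*smp_call_func_t)(void *info);

	struct __call_single_data {
		void		*llist;		/* stand-in for struct llist_node */
		smp_call_func_t	func;
		void		*info;
		unsigned int	flags;
	};

	/*
	 * The aligned typedef the smp code prefers for its own csd objects;
	 * it raises the required alignment to sizeof(struct), 32 bytes on
	 * a 64-bit build.
	 */
	typedef struct __call_single_data call_single_data_t
		__attribute__((aligned(sizeof(struct __call_single_data))));

	/* Old prototype: the parameter demands the 32-byte alignment. */
	int smp_call_function_single_async(int cpu, call_single_data_t *csd);

	/* The block layer deliberately embeds the unaligned base struct. */
	struct request {
		struct __call_single_data csd;	/* only naturally (8-byte) aligned */
	};

	int complete_request_on_cpu(struct request *rq, int cpu)
	{
		/*
		 * clang -Walign-mismatch: 8-byte aligned argument passed to
		 * the 32-byte aligned parameter of
		 * smp_call_function_single_async().
		 */
		return smp_call_function_single_async(cpu, &rq->csd);
	}

Widening the parameter to 'struct __call_single_data *' (natural, 8-byte
alignment) removes the mismatch at the call site while leaving the aligned
per-cpu call_single_data_t instances untouched.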
Signed-off-by: Arnd Bergmann
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Jens Axboe
Link: https://lkml.kernel.org/r/20210505211300.3174456-1-arnd@kernel.org
[nc: Fix conflicts]
Signed-off-by: Nathan Chancellor
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/smp.h |    2 +-
 kernel/smp.c        |   10 +++++-----
 kernel/up.c         |    2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -57,7 +57,7 @@ void on_each_cpu_cond_mask(bool (*cond_f
 			smp_call_func_t func, void *info, bool wait,
 			gfp_t gfp_flags, const struct cpumask *mask);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
 
 #ifdef CONFIG_SMP
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -104,12 +104,12 @@ void __init call_function_init(void)
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
  */
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
 {
 	smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
 }
 
-static __always_inline void csd_lock(call_single_data_t *csd)
+static __always_inline void csd_lock(struct __call_single_data *csd)
 {
 	csd_lock_wait(csd);
 	csd->flags |= CSD_FLAG_LOCK;
@@ -122,7 +122,7 @@ static __always_inline void csd_lock(cal
 	smp_wmb();
 }
 
-static __always_inline void csd_unlock(call_single_data_t *csd)
+static __always_inline void csd_unlock(struct __call_single_data *csd)
 {
 	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
@@ -139,7 +139,7 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(cal
  * for execution on the given CPU. data must already have
  * ->func, ->info, and ->flags set.
  */
-static int generic_exec_single(int cpu, call_single_data_t *csd,
+static int generic_exec_single(int cpu, struct __call_single_data *csd,
 			       smp_call_func_t func, void *info)
 {
 	if (cpu == smp_processor_id()) {
@@ -332,7 +332,7 @@ EXPORT_SYMBOL(smp_call_function_single);
  * NOTE: Be careful, there is unfortunately no current debugging facility to
  * validate the correctness of this serialization.
  */
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	int err = 0;
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -24,7 +24,7 @@ int smp_call_function_single(int cpu, vo
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	unsigned long flags;
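For callers, the convention after this change can be sketched as follows.
'struct my_dev', my_ipi_handler() and my_poke_cpu() are hypothetical names
used purely for illustration, not kernel APIs; only struct __call_single_data
and smp_call_function_single_async() come from <linux/smp.h>:

	#include <linux/smp.h>

	struct my_dev {
		int				target_cpu;
		struct __call_single_data	csd;	/* plain 8-byte alignment is enough */
	};

	static void my_ipi_handler(void *info)
	{
		struct my_dev *dev = info;

		/* Runs from the IPI on dev->target_cpu. */
		(void)dev;
	}

	static int my_poke_cpu(struct my_dev *dev)
	{
		/*
		 * The csd must not be reused until the previous asynchronous
		 * call has completed; as the comment in kernel/smp.c notes,
		 * there is no debugging facility to catch a violation.
		 */
		dev->csd.func = my_ipi_handler;
		dev->csd.info = dev;

		return smp_call_function_single_async(dev->target_cpu, &dev->csd);
	}

Whether the embedding structure uses the aligned call_single_data_t typedef
or the plain struct, the same prototype now accepts both.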