From patchwork Wed May 12 14:50:38 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 435586
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Arnd Bergmann,
    "Peter Zijlstra (Intel)", Jens Axboe, Nathan Chancellor
Subject: [PATCH 5.10 528/530] smp: Fix smp_call_function_single_async prototype
Date: Wed, 12 May 2021 16:50:38 +0200
Message-Id: <20210512144837.095582917@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512144819.664462530@linuxfoundation.org>
References: <20210512144819.664462530@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Arnd Bergmann

commit 1139aeb1c521eb4a050920ce6c64c36c4f2a3ab7 upstream.

As of commit 966a967116e6 ("smp: Avoid using two cache lines for
struct call_single_data"), the smp code prefers 32-byte aligned
call_single_data objects for performance reasons, but the block layer
includes an instance of this structure in the main 'struct request'
that is more sensitive to size than to performance here, see
4ccafe032005 ("block: unalign call_single_data in struct request").

The result is a violation of the calling conventions that clang
correctly points out:

block/blk-mq.c:630:39: warning: passing 8-byte aligned argument to
32-byte aligned parameter 2 of 'smp_call_function_single_async' may
result in an unaligned pointer access [-Walign-mismatch]
        smp_call_function_single_async(cpu, &rq->csd);

It does seem that the usage of the call_single_data without cache line
alignment should still be allowed by the smp code, so just change the
function prototype so it accepts both, but leave the default alignment
unchanged for the other users. This seems better to me than adding a
local hack to shut up an otherwise correct warning in the caller.
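The mismatch clang flags here can be reproduced outside the kernel.
Below is a minimal stand-alone sketch of the same pattern; every name
in it is a hypothetical stand-in chosen to mirror the shapes involved
('base' for struct __call_single_data, 'aligned_base_t' for
call_single_data_t, 'container' for struct request), not the kernel's
own definitions. Compiling it with a recent clang (-Wall -c) should
emit the same -Walign-mismatch warning for the pre-patch prototype
shape, while the post-patch shape stays silent:

	/* Illustrative sketch only; assumes a 64-bit target where
	 * pointers are 8 bytes and 'struct base' is 32 bytes. */
	struct base {			/* plays struct __call_single_data */
		void *a, *b, *c, *d;	/* 32 bytes, natural 8-byte alignment */
	};

	/* plays call_single_data_t: same layout, 32-byte alignment */
	typedef struct base aligned_base_t
		__attribute__((__aligned__(sizeof(struct base))));

	struct container {		/* plays struct request */
		int tag;
		struct base csd;	/* deliberately the unaligned base type */
	};

	/* pre-patch shape: parameter promises 32-byte alignment */
	extern int take_aligned(int cpu, aligned_base_t *csd);
	/* post-patch shape: parameter accepts any struct base */
	extern int take_any(int cpu, struct base *csd);

	int demo(int cpu, struct container *rq)
	{
		take_aligned(cpu, &rq->csd);	/* clang: -Walign-mismatch */
		return take_any(cpu, &rq->csd);	/* fine: no over-aligned param */
	}

The prototype change works because the typedef and the base struct
describe the same layout; only the alignment promise on the parameter
differs, so accepting the base struct type admits both the aligned and
the packed instances without touching any caller.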
Signed-off-by: Arnd Bergmann
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Jens Axboe
Link: https://lkml.kernel.org/r/20210505211300.3174456-1-arnd@kernel.org
[nc: Fix conflicts, modify rq_csd_init]
Signed-off-by: Nathan Chancellor
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/smp.h |    2 +-
 kernel/sched/core.c |    2 +-
 kernel/smp.c        |   20 ++++++++++----------
 kernel/up.c         |    2 +-
 4 files changed, 13 insertions(+), 13 deletions(-)

--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -74,7 +74,7 @@ void on_each_cpu_cond(smp_cond_func_t co
 void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 			   void *info, bool wait, const struct cpumask *mask);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
 
 #ifdef CONFIG_SMP
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -321,7 +321,7 @@ void update_rq_clock(struct rq *rq)
 }
 
 static inline void
-rq_csd_init(struct rq *rq, call_single_data_t *csd, smp_call_func_t func)
+rq_csd_init(struct rq *rq, struct __call_single_data *csd, smp_call_func_t func)
 {
 	csd->flags = 0;
 	csd->func = func;
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -110,7 +110,7 @@ static DEFINE_PER_CPU(void *, cur_csd_in
 static atomic_t csd_bug_count = ATOMIC_INIT(0);
 
 /* Record current CSD work for current CPU, NULL to erase. */
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
 {
 	if (!csd) {
 		smp_mb(); /* NULL cur_csd after unlock. */
@@ -125,7 +125,7 @@ static void csd_lock_record(call_single_
 	/* Or before unlock, as the case may be. */
 }
 
-static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
+static __always_inline int csd_lock_wait_getcpu(struct __call_single_data *csd)
 {
 	unsigned int csd_type;
 
@@ -140,7 +140,7 @@ static __always_inline int csd_lock_wait
  * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
  * so waiting on other types gets much less information.
  */
-static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
+static __always_inline bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *ts1, int *bug_id)
 {
 	int cpu = -1;
 	int cpux;
@@ -204,7 +204,7 @@ static __always_inline bool csd_lock_wai
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
 */
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
 {
 	int bug_id = 0;
 	u64 ts0, ts1;
@@ -219,17 +219,17 @@ static __always_inline void csd_lock_wai
 }
 
 #else
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
 {
 }
 
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
 {
 	smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
 }
 #endif
 
-static __always_inline void csd_lock(call_single_data_t *csd)
+static __always_inline void csd_lock(struct __call_single_data *csd)
 {
 	csd_lock_wait(csd);
 	csd->flags |= CSD_FLAG_LOCK;
@@ -242,7 +242,7 @@ static __always_inline void csd_lock(cal
 	smp_wmb();
 }
 
-static __always_inline void csd_unlock(call_single_data_t *csd)
+static __always_inline void csd_unlock(struct __call_single_data *csd)
 {
 	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
 
@@ -276,7 +276,7 @@ void __smp_call_single_queue(int cpu, st
  * for execution on the given CPU. data must already have
  * ->func, ->info, and ->flags set.
  */
-static int generic_exec_single(int cpu, call_single_data_t *csd)
+static int generic_exec_single(int cpu, struct __call_single_data *csd)
 {
 	if (cpu == smp_processor_id()) {
 		smp_call_func_t func = csd->func;
@@ -542,7 +542,7 @@ EXPORT_SYMBOL(smp_call_function_single);
 * NOTE: Be careful, there is unfortunately no current debugging facility to
 * validate the correctness of this serialization.
 */
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	int err = 0;
 
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -25,7 +25,7 @@ int smp_call_function_single(int cpu, vo
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	unsigned long flags;