From patchwork Tue Mar 4 09:21:00 2025
From: Marco Elver <elver@google.com>
Date: Tue, 4 Mar 2025 10:21:00 +0100
Message-ID: <20250304092417.2873893-2-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 01/34] compiler_types: Move lock checking attributes to compiler-capability-analysis.h

The conditional definition of lock checking macros and attributes is
about to become more complex. Factor them out into their own header for
better readability, and to make it obvious which features are supported
by which mode (currently only Sparse). This is the first step towards
generalizing to "capability analysis".

No functional change intended.

Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Bart Van Assche
---
 include/linux/compiler-capability-analysis.h | 32 ++++++++++++++++++++
 include/linux/compiler_types.h               | 18 ++---------
 2 files changed, 34 insertions(+), 16 deletions(-)
 create mode 100644 include/linux/compiler-capability-analysis.h

diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
new file mode 100644
index 000000000000..7546ddb83f86
--- /dev/null
+++ b/include/linux/compiler-capability-analysis.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros and attributes for compiler-based static capability analysis.
+ */
+
+#ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+#define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+
+#ifdef __CHECKER__
+
+/* Sparse context/lock checking support. */
+# define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __acquires(x)		__attribute__((context(x,0,1)))
+# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
+# define __releases(x)		__attribute__((context(x,1,0)))
+# define __acquire(x)		__context__(x,1)
+# define __release(x)		__context__(x,-1)
+# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+
+#else /* !__CHECKER__ */
+
+# define __must_hold(x)
+# define __acquires(x)
+# define __cond_acquires(x)
+# define __releases(x)
+# define __acquire(x)	(void)0
+# define __release(x)	(void)0
+# define __cond_lock(x, c) (c)
+
+#endif /* __CHECKER__ */
+
+#endif /* _LINUX_COMPILER_CAPABILITY_ANALYSIS_H */

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 981cc3d7e3aa..4a458e41293c 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -24,6 +24,8 @@
 # define BTF_TYPE_TAG(value) /* nothing */
 #endif
 
+#include <linux/compiler-capability-analysis.h>
+
 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
 #ifdef __CHECKER__
 /* address spaces */
@@ -34,14 +36,6 @@
 # define __rcu		__attribute__((noderef, address_space(__rcu)))
 static inline void __chk_user_ptr(const volatile void __user *ptr) { }
 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
-/* context/locking */
-# define __must_hold(x)	__attribute__((context(x,1,1)))
-# define __acquires(x)	__attribute__((context(x,0,1)))
-# define __cond_acquires(x) __attribute__((context(x,0,-1)))
-# define __releases(x)	__attribute__((context(x,1,0)))
-# define __acquire(x)	__context__(x,1)
-# define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 /* other */
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -62,14 +56,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 # define __chk_user_ptr(x)	(void)0
 # define __chk_io_ptr(x)	(void)0
-/* context/locking */
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
 /* other */
 # define __force
 # define __nocast
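
As a sketch of how the Sparse annotations being moved here are typically
used (not part of the patch; struct and function names are hypothetical),
an annotated locking API lets Sparse, run via `make C=1`, report a
"context imbalance" if a caller returns with the lock still held or
releases it without acquiring it:

    #include <linux/spinlock.h>

    struct foo {
            spinlock_t lock;
    };

    /* Tells Sparse this function acquires f->lock and does not release it. */
    static void foo_lock(struct foo *f) __acquires(&f->lock)
    {
            spin_lock(&f->lock);
    }

    /* Tells Sparse this function releases f->lock, which must be held on entry. */
    static void foo_unlock(struct foo *f) __releases(&f->lock)
    {
            spin_unlock(&f->lock);
    }
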
From patchwork Tue Mar 4 09:21:01 2025
From: Marco Elver <elver@google.com>
Date: Tue, 4 Mar 2025 10:21:01 +0100
Message-ID: <20250304092417.2873893-3-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 02/34] compiler-capability-analysis: Add infrastructure for Clang's capability analysis

Capability analysis is a C language extension, which enables statically
checking that user-definable "capabilities" are acquired and released
where required. An obvious application is lock-safety checking for the
kernel's various synchronization primitives (each of which represents a
"capability"), and checking that locking rules are not violated.

Clang originally called the feature "Thread Safety Analysis" [1], and
some terminology still uses the thread-safety-analysis-only names. This
was later changed and the feature became more flexible, gaining the
ability to define custom "capabilities". Its foundations can be found in
"capability systems" [2], used to specify the permissibility of
operations to depend on some capability being held (or not held).

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
[2] https://www.cs.cornell.edu/talc/papers/capabilities.pdf

Because the feature is not just able to express capabilities related to
synchronization primitives, the naming chosen for the kernel departs
from Clang's initial "Thread Safety" nomenclature and refers to the
feature as "Capability Analysis" to avoid confusion. The implementation
still makes references to the older terminology in some places, such as
`-Wthread-safety` being the warning-enabling option, which also still
appears in diagnostic messages.

See more details in the kernel-doc documentation added in this and the
subsequent changes.

A Clang version that supports -Wthread-safety-pointer is recommended,
but not required: https://github.com/llvm/llvm-project/commit/de10e44b6fe7

Signed-off-by: Marco Elver <elver@google.com>
---
v2:
 * Rename the new -Wthread-safety feature to -Wthread-safety-pointer
   (was -Wthread-safety-addressof).
 * Introduce __capability_unsafe() function attribute.
 * Rename __var_guarded_by to simply __guarded_by. The initial idea was
   to be explicit about whether the variable or the pointed-to data is
   guarded, but having a shorter attribute name is likely better
   long-term.
 * Rename __ref_guarded_by to __pt_guarded_by (pointed-to guarded by).
---
 Makefile                                     |   1 +
 include/linux/compiler-capability-analysis.h | 394 ++++++++++++++++++-
 lib/Kconfig.debug                            |  29 ++
 scripts/Makefile.capability-analysis         |   7 +
 scripts/Makefile.lib                         |  10 +
 5 files changed, 434 insertions(+), 7 deletions(-)
 create mode 100644 scripts/Makefile.capability-analysis

diff --git a/Makefile b/Makefile
index 70bdbf2218fc..3a945098515e 100644
--- a/Makefile
+++ b/Makefile
@@ -1082,6 +1082,7 @@ include-$(CONFIG_KCOV) += scripts/Makefile.kcov
 include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct
 include-$(CONFIG_AUTOFDO_CLANG) += scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller
+include-$(CONFIG_WARN_CAPABILITY_ANALYSIS) += scripts/Makefile.capability-analysis
 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins
 
 include $(addprefix $(srctree)/, $(include-y))

diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index 7546ddb83f86..c47d9ed18303 100644
--- a/include/linux/compiler-capability-analysis.h
+++ b/include/linux/compiler-capability-analysis.h
@@ -6,26 +6,406 @@
 #ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
 #define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
 
+#if defined(WARN_CAPABILITY_ANALYSIS)
+
+/*
+ * The below attributes are used to define new capability types. Internal only.
+ */
+# define __cap_type(name)			__attribute__((capability(#name)))
+# define __acquires_cap(var)			__attribute__((acquire_capability(var)))
+# define __acquires_shared_cap(var)		__attribute__((acquire_shared_capability(var)))
+# define __try_acquires_cap(ret, var)		__attribute__((try_acquire_capability(ret, var)))
+# define __try_acquires_shared_cap(ret, var)	__attribute__((try_acquire_shared_capability(ret, var)))
+# define __releases_cap(var)			__attribute__((release_capability(var)))
+# define __releases_shared_cap(var)		__attribute__((release_shared_capability(var)))
+# define __asserts_cap(var)			__attribute__((assert_capability(var)))
+# define __asserts_shared_cap(var)		__attribute__((assert_shared_capability(var)))
+# define __returns_cap(var)			__attribute__((lock_returned(var)))
+
+/*
+ * The below are used to annotate code being checked. Internal only.
+ */
+# define __excludes_cap(var)			__attribute__((locks_excluded(var)))
+# define __requires_cap(var)			__attribute__((requires_capability(var)))
+# define __requires_shared_cap(var)		__attribute__((requires_shared_capability(var)))
+
+/**
+ * __guarded_by - struct member and globals attribute, declares variable
+ *                protected by capability
+ * @var: the capability instance that guards the member or global
+ *
+ * Declares that the struct member or global variable must be guarded by the
+ * given capability @var. Read operations on the data require shared access,
+ * while write operations require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long counter __guarded_by(&lock);
+ *	};
+ */
+# define __guarded_by(var)	__attribute__((guarded_by(var)))
+
+/**
+ * __pt_guarded_by - struct member and globals attribute, declares pointed-to
+ *                   data is protected by capability
+ * @var: the capability instance that guards the member or global
+ *
+ * Declares that the data pointed to by the struct member pointer or global
+ * pointer must be guarded by the given capability @var. Read operations on the
+ * data require shared access, while write operations require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long *counter __pt_guarded_by(&lock);
+ *	};
+ */
+# define __pt_guarded_by(var)	__attribute__((pt_guarded_by(var)))
+
+/**
+ * struct_with_capability() - declare or define a capability struct
+ * @name: struct name
+ *
+ * Helper to declare or define a struct type with capability of the same name.
+ *
+ * .. code-block:: c
+ *
+ *	struct_with_capability(my_handle) {
+ *		int foo;
+ *		long bar;
+ *	};
+ *
+ *	struct some_state {
+ *		...
+ *	};
+ *	// ... declared elsewhere ...
+ *	struct_with_capability(some_state);
+ *
+ * Note: The implementation defines several helper functions that can acquire,
+ * release, and assert the capability.
+ */
+# define struct_with_capability(name)							\
+	struct __cap_type(name) name;							\
+	static __always_inline void __acquire_cap(const struct name *var)		\
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__acquires_cap(var) { }							\
+	static __always_inline void __acquire_shared_cap(const struct name *var)	\
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__acquires_shared_cap(var) { }						\
+	static __always_inline bool __try_acquire_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__try_acquires_cap(1, var)						\
+	{ return ret; }									\
+	static __always_inline bool __try_acquire_shared_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__try_acquires_shared_cap(1, var)					\
+	{ return ret; }									\
+	static __always_inline void __release_cap(const struct name *var)		\
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__releases_cap(var) { }							\
+	static __always_inline void __release_shared_cap(const struct name *var)	\
+		__attribute__((overloadable)) __no_capability_analysis			\
+		__releases_shared_cap(var) { }						\
+	static __always_inline void __assert_cap(const struct name *var)		\
+		__attribute__((overloadable)) __asserts_cap(var) { }			\
+	static __always_inline void __assert_shared_cap(const struct name *var)	\
+		__attribute__((overloadable)) __asserts_shared_cap(var) { }		\
+	struct name
+
+/**
+ * disable_capability_analysis() - disables capability analysis
+ *
+ * Disables capability analysis. Must be paired with a later
+ * enable_capability_analysis().
+ */
+# define disable_capability_analysis()			\
+	__diag_push();					\
+	__diag_ignore_all("-Wunknown-warning-option", "") \
+	__diag_ignore_all("-Wthread-safety", "")	\
+	__diag_ignore_all("-Wthread-safety-pointer", "")
+
+/**
+ * enable_capability_analysis() - re-enables capability analysis
+ *
+ * Re-enables capability analysis. Must be paired with a prior
+ * disable_capability_analysis().
+ */
+# define enable_capability_analysis() __diag_pop()
+
+/**
+ * __no_capability_analysis - function attribute, disables capability analysis
+ *
+ * Function attribute denoting that capability analysis is disabled for the
+ * whole function. Prefer use of `capability_unsafe()` where possible.
+ */
+# define __no_capability_analysis	__attribute__((no_thread_safety_analysis))
+
+#else /* !WARN_CAPABILITY_ANALYSIS */
+
+# define __cap_type(name)
+# define __acquires_cap(var)
+# define __acquires_shared_cap(var)
+# define __try_acquires_cap(ret, var)
+# define __try_acquires_shared_cap(ret, var)
+# define __releases_cap(var)
+# define __releases_shared_cap(var)
+# define __asserts_cap(var)
+# define __asserts_shared_cap(var)
+# define __returns_cap(var)
+# define __guarded_by(var)
+# define __pt_guarded_by(var)
+# define __excludes_cap(var)
+# define __requires_cap(var)
+# define __requires_shared_cap(var)
+# define __acquire_cap(var)			do { } while (0)
+# define __acquire_shared_cap(var)		do { } while (0)
+# define __try_acquire_cap(var, ret)		(ret)
+# define __try_acquire_shared_cap(var, ret)	(ret)
+# define __release_cap(var)			do { } while (0)
+# define __release_shared_cap(var)		do { } while (0)
+# define __assert_cap(var)			do { (void)(var); } while (0)
+# define __assert_shared_cap(var)		do { (void)(var); } while (0)
+# define struct_with_capability(name)		struct name
+# define disable_capability_analysis()
+# define enable_capability_analysis()
+# define __no_capability_analysis
+
+#endif /* WARN_CAPABILITY_ANALYSIS */
+
+/**
+ * capability_unsafe() - disable capability checking for contained code
+ *
+ * Disables capability checking for contained statements or expression.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_data {
+ *		spinlock_t lock;
+ *		int counter __guarded_by(&lock);
+ *	};
+ *
+ *	int foo(struct some_data *d)
+ *	{
+ *		// ...
+ *		// other code that is still checked ...
+ *		// ...
+ *		return capability_unsafe(d->counter);
+ *	}
+ */
+#define capability_unsafe(...)			\
+({						\
+	disable_capability_analysis();		\
+	__VA_ARGS__;				\
+	enable_capability_analysis()		\
+})
+
+/**
+ * __capability_unsafe() - function attribute, disable capability checking
+ * @comment: comment explaining why opt-out is safe
+ *
+ * Function attribute denoting that capability analysis is disabled for the
+ * whole function. Forces adding an inline comment as argument.
+ */
+#define __capability_unsafe(comment)	__no_capability_analysis
+
+/**
+ * token_capability() - declare an abstract global capability instance
+ * @name: token capability name
+ *
+ * Helper that declares an abstract global capability instance @name that can
+ * be used as a token capability, but is not backed by a real data structure
+ * (linker error if accidentally referenced). The type name is
+ * `__capability_@name`.
+ */
+#define token_capability(name)					\
+	struct_with_capability(__capability_##name) {};		\
+	extern const struct __capability_##name *name
+
+/**
+ * token_capability_instance() - declare another instance of a global capability
+ * @cap: token capability previously declared with token_capability()
+ * @name: name of additional global capability instance
+ *
+ * Helper that declares an additional instance @name of the same token
+ * capability class @cap. This is helpful where multiple related token
+ * capabilities are declared, as it also allows using the same underlying type
+ * (`__capability_@cap`) as function arguments.
+ */
+#define token_capability_instance(cap, name)			\
+	extern const struct __capability_##cap *name
+
+/*
+ * Common keywords for static capability analysis. Both Clang's capability
+ * analysis and Sparse's context tracking are currently supported.
+ */
 #ifdef __CHECKER__
 
 /* Sparse context/lock checking support. */
 # define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __must_not_hold(x)
 # define __acquires(x)		__attribute__((context(x,0,1)))
 # define __cond_acquires(x)	__attribute__((context(x,0,-1)))
 # define __releases(x)		__attribute__((context(x,1,0)))
 # define __acquire(x)		__context__(x,1)
 # define __release(x)		__context__(x,-1)
 # define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+/* For Sparse, there's no distinction between exclusive and shared locks. */
+# define __must_hold_shared	__must_hold
+# define __acquires_shared	__acquires
+# define __cond_acquires_shared __cond_acquires
+# define __releases_shared	__releases
+# define __acquire_shared	__acquire
+# define __release_shared	__release
+# define __cond_lock_shared	__cond_lock
 
 #else /* !__CHECKER__ */
 
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x, c) (c)
+/**
+ * __must_hold() - function attribute, caller must hold exclusive capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x exclusively.
+ */
+# define __must_hold(x)		__requires_cap(x)
+
+/**
+ * __must_not_hold() - function attribute, caller must not hold capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must not hold the given
+ * capability instance @x.
+ */
+# define __must_not_hold(x)	__excludes_cap(x)
+
+/**
+ * __acquires() - function attribute, function acquires capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x exclusively, but does not release it.
+ */
+# define __acquires(x)		__acquires_cap(x)
+
+/**
+ * __cond_acquires() - function attribute, function conditionally
+ *                     acquires a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x exclusively, but does not release it.
+ */
+# define __cond_acquires(x)	__try_acquires_cap(1, x)
+
+/**
+ * __releases() - function attribute, function releases a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x exclusively. The capability must be held on entry.
+ */
+# define __releases(x)		__releases_cap(x)
+
+/**
+ * __acquire() - function to acquire capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x exclusively.
+ */
+# define __acquire(x)		__acquire_cap(x)
+
+/**
+ * __release() - function to release capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x.
+ */
+# define __release(x)		__release_cap(x)
+
+/**
+ * __cond_lock() - function that conditionally acquires a capability
+ *                 exclusively
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x
+ * exclusively, if the boolean expression @c is true. The result of @c is the
+ * return value, to be able to create a capability-enabled interface; for
+ * example:
+ *
+ * .. code-block:: c
+ *
+ *	#define spin_trylock(l) __cond_lock(l, _spin_trylock(l))
+ */
+# define __cond_lock(x, c)	__try_acquire_cap(x, c)
+
+/**
+ * __must_hold_shared() - function attribute, caller must hold shared capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x with shared access.
+ */
+# define __must_hold_shared(x)	__requires_shared_cap(x)
+
+/**
+ * __acquires_shared() - function attribute, function acquires capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x with shared access, but does not release it.
+ */
+# define __acquires_shared(x)	__acquires_shared_cap(x)
+
+/**
+ * __cond_acquires_shared() - function attribute, function conditionally
+ *                            acquires a capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x with shared access, but does not release it.
+ */
+# define __cond_acquires_shared(x) __try_acquires_shared_cap(1, x)
+
+/**
+ * __releases_shared() - function attribute, function releases a
+ *                       capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x with shared access. The capability must be held on entry.
+ */
+# define __releases_shared(x)	__releases_shared_cap(x)
+
+/**
+ * __acquire_shared() - function to acquire capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x with shared
+ * access.
+ */
+# define __acquire_shared(x)	__acquire_shared_cap(x)
+
+/**
+ * __release_shared() - function to release capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x with shared
+ * access.
+ */
+# define __release_shared(x)	__release_shared_cap(x)
+
+/**
+ * __cond_lock_shared() - function that conditionally acquires a capability
+ *                        shared
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x with shared
+ * access, if the boolean expression @c is true. The result of @c is the return
+ * value, to be able to create a capability-enabled interface.
+ */
+# define __cond_lock_shared(x, c) __try_acquire_shared_cap(x, c)
 
 #endif /* __CHECKER__ */

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1af972a92d06..f30099051294 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,6 +603,35 @@ config DEBUG_FORCE_WEAK_PER_CPU
 	  To ensure that generic code follows the above rules, this option
 	  forces all percpu variables to be defined as weak.
 
+config WARN_CAPABILITY_ANALYSIS
+	bool "Compiler capability-analysis warnings"
+	depends on CC_IS_CLANG && $(cc-option,-Wthread-safety -fexperimental-late-parse-attributes)
+	# Branch profiling re-defines "if", which messes with the compiler's
+	# ability to analyze __cond_acquires(..), resulting in false positives.
+	depends on !TRACE_BRANCH_PROFILING
+	default y
+	help
+	  Capability analysis is a C language extension, which enables
+	  statically checking that user-definable "capabilities" are acquired
+	  and released where required.
+
+	  Clang's name for the feature, "Thread Safety Analysis", is its
+	  original name; the feature was later expanded into a generic
+	  "Capability Analysis" framework.
+
+	  Produces warnings by default. Select CONFIG_WERROR if you wish to
+	  turn these warnings into errors.
+
+config WARN_CAPABILITY_ANALYSIS_ALL
+	bool "Enable capability analysis for all source files"
+	depends on WARN_CAPABILITY_ANALYSIS
+	depends on EXPERT && !COMPILE_TEST
+	help
+	  Enable tree-wide capability analysis. This is likely to produce a
+	  large number of false positives - enable at your own risk.
+
+	  If unsure, say N.
+
 endmenu # "Compiler options"
 
 menu "Generic Kernel Debugging Instruments"

diff --git a/scripts/Makefile.capability-analysis b/scripts/Makefile.capability-analysis
new file mode 100644
index 000000000000..b7b36cca47f4
--- /dev/null
+++ b/scripts/Makefile.capability-analysis
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+
+capability-analysis-cflags := -DWARN_CAPABILITY_ANALYSIS \
+	-fexperimental-late-parse-attributes -Wthread-safety \
+	$(call cc-option,-Wthread-safety-pointer)
+
+export CFLAGS_CAPABILITY_ANALYSIS := $(capability-analysis-cflags)

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index cad20f0e66ee..08910001ee64 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -191,6 +191,16 @@ _c_flags += $(if $(patsubst n%,, \
 		 -D__KCSAN_INSTRUMENT_BARRIERS__)
 endif
 
+#
+# Enable capability analysis flags only where explicitly opted in.
+# (depends on variables CAPABILITY_ANALYSIS_obj.o, CAPABILITY_ANALYSIS)
+#
+ifeq ($(CONFIG_WARN_CAPABILITY_ANALYSIS),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(CAPABILITY_ANALYSIS_$(target-stem).o)$(CAPABILITY_ANALYSIS)$(if $(is-kernel-object),$(CONFIG_WARN_CAPABILITY_ANALYSIS_ALL))), \
+		$(CFLAGS_CAPABILITY_ANALYSIS))
+endif
+
 #
 # Enable AutoFDO build flags except some files or directories we don't want to
 # enable (depends on variables AUTOFDO_PROFILE_obj.o and AUTOFDO_PROFILE).
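
The token_capability() helpers above carry no usage example in their
kernel-doc; a hedged sketch of the intended pattern follows (all names
hypothetical, not from the patch). The extern instance is never defined,
so it must only ever appear in annotations and the no-op helpers, which
compile away:

    /* Abstract capability with no backing object. */
    token_capability(my_context);

    /* May only be called while "in" my_context. */
    static void do_work(void) __must_hold(my_context)
    {
            /* ... */
    }

    static void enter_and_work(void)
    {
            __acquire(my_context);  /* no-op; informs the analysis only */
            do_work();
            __release(my_context);
    }
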
From patchwork Tue Mar 4 09:21:02 2025
From: Marco Elver <elver@google.com>
Date: Tue, 4 Mar 2025 10:21:02 +0100
Message-ID: <20250304092417.2873893-4-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 03/34] compiler-capability-analysis: Add test stub

Add a simple test stub to which we will add common supported patterns
that should not generate false positives for each newly supported
capability.

Signed-off-by: Marco Elver <elver@google.com>
---
 lib/Kconfig.debug              | 14 ++++++++++++++
 lib/Makefile                   |  3 +++
 lib/test_capability-analysis.c | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)
 create mode 100644 lib/test_capability-analysis.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index f30099051294..8abaf7dab3f8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2764,6 +2764,20 @@ config LINEAR_RANGES_TEST
 	  If unsure, say N.
 
+config CAPABILITY_ANALYSIS_TEST
+	bool "Compiler capability-analysis warnings test"
+	depends on EXPERT
+	help
+	  This builds the test for compiler-based capability analysis. The test
+	  does not add executable code to the kernel, but is meant to test that
+	  common patterns supported by the analysis do not result in false
+	  positive warnings.
+
+	  When adding support for new capabilities, it is strongly recommended
+	  to add supported patterns to this test.
+
+	  If unsure, say N.
+
 config CMDLINE_KUNIT_TEST
 	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
 	depends on KUNIT

diff --git a/lib/Makefile b/lib/Makefile
index d5cfc7afbbb8..1dbb59175eb0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -394,6 +394,9 @@ obj-$(CONFIG_CRC_KUNIT_TEST) += crc_kunit.o
 obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
 obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o
 
+CAPABILITY_ANALYSIS_test_capability-analysis.o := y
+obj-$(CONFIG_CAPABILITY_ANALYSIS_TEST) += test_capability-analysis.o
+
 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o

diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
new file mode 100644
index 000000000000..a0adacce30ff
--- /dev/null
+++ b/lib/test_capability-analysis.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Compile-only tests for common patterns that should not generate false
+ * positive errors when compiled with Clang's capability analysis.
+ */
+
+#include <linux/build_bug.h>
+
+/*
+ * Test that helper macros work as expected.
+ */
+static void __used test_common_helpers(void)
+{
+	BUILD_BUG_ON(capability_unsafe(3) != 3);		/* plain expression */
+	BUILD_BUG_ON(capability_unsafe((void)2; 3;) != 3);	/* does not swallow semi-colon */
+	BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3);	/* does not swallow commas */
+	capability_unsafe(do { } while (0));			/* works with void statements */
+}
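
A sketch of the kind of pattern this stub is meant to accumulate once
individual primitives become capability-aware in later patches of the
series (hypothetical code, not part of this patch):

    struct test_spinlock_data {
            spinlock_t lock;
            int counter __guarded_by(&lock);
    };

    /* Compiles without warnings: the guarded access happens under the lock. */
    static void __used test_spin_lock(struct test_spinlock_data *d)
    {
            spin_lock(&d->lock);
            d->counter++;
            spin_unlock(&d->lock);
    }
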
From patchwork Tue Mar 4 09:21:03 2025
From: Marco Elver <elver@google.com>
Date: Tue, 4 Mar 2025 10:21:03 +0100
Message-ID: <20250304092417.2873893-5-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 04/34] Documentation: Add documentation for Compiler-Based Capability Analysis

Adds documentation in Documentation/dev-tools/capability-analysis.rst,
and adds it to the index and cross-references from Sparse's document.

Signed-off-by: Marco Elver <elver@google.com>
---
v2:
 * Remove cross-reference to Sparse, since we plan to remove Sparse
   support anyway.
 * Mention __no_capability_analysis should be avoided.
---
 .../dev-tools/capability-analysis.rst         | 145 ++++++++++++++++++
 Documentation/dev-tools/index.rst             |   1 +
 2 files changed, 146 insertions(+)
 create mode 100644 Documentation/dev-tools/capability-analysis.rst

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
new file mode 100644
index 000000000000..4b9c93cc8fcd
--- /dev/null
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -0,0 +1,145 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright (C) 2025, Google LLC.
+
+.. _capability-analysis:
+
+Compiler-Based Capability Analysis
+==================================
+
+Capability analysis is a C language extension, which enables statically
+checking that user-definable "capabilities" are acquired and released where
+required. An obvious application is lock-safety checking for the kernel's
+various synchronization primitives (each of which represents a "capability"),
+and checking that locking rules are not violated.
+
+The Clang compiler currently supports the full set of capability analysis
+features. To enable for Clang, configure the kernel with::
+
+	CONFIG_WARN_CAPABILITY_ANALYSIS=y
+
+The analysis is *opt-in by default*, and requires declaring which modules and
+subsystems should be analyzed in the respective `Makefile`::
+
+	CAPABILITY_ANALYSIS_mymodule.o := y
+
+Or for all translation units in the directory::
+
+	CAPABILITY_ANALYSIS := y
+
+It is possible to enable the analysis tree-wide; however, this currently
+results in numerous false positive warnings and is *not* generally
+recommended::
+
+	CONFIG_WARN_CAPABILITY_ANALYSIS_ALL=y
+
+Programming Model
+-----------------
+
+The below describes the programming model around using capability-enabled
+types.
+
+.. note::
+   Enabling capability analysis can be seen as enabling a dialect of Linux C
+   with a capability system. Some valid patterns involving complex
+   control-flow are constrained (such as conditional acquisition and later
+   conditional release in the same function, or returning pointers to
+   capabilities from functions).
+
+Capability analysis is a way to specify permissibility of operations to depend
+on capabilities being held (or not held). Typically we are interested in
+protecting data and code by requiring some capability to be held, for example a
+specific lock. The analysis ensures that the caller cannot perform the
+operation without holding the appropriate capability.
+
+Capabilities are associated with named structs, along with functions that
+operate on capability-enabled struct instances to acquire and release the
+associated capability.
+
+Capabilities can be held either exclusively or shared. This mechanism allows
+assigning more precise privileges when holding a capability, typically to
+distinguish where a thread may only read (shared) or also write (exclusive) to
+guarded data.
+
+The set of capabilities that are actually held by a given thread at a given
+point in program execution is a run-time concept. The static analysis works by
+calculating an approximation of that set, called the capability environment.
+The capability environment is calculated for every program point, and describes
+the set of capabilities that are statically known to be held, or not held, at
+that particular point. This environment is a conservative approximation of the
+full set of capabilities that will actually be held by a thread at run-time.
+
+More details are also documented `here
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_.
+
+.. note::
+   Clang's analysis explicitly does not infer capabilities acquired or
+   released by inline functions. It requires explicit annotations to (a)
+   assert that it's not a bug if a capability is released or acquired, and
+   (b) to retain consistency between inline and non-inline function
+   declarations.
+
+Supported Kernel Primitives
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Currently the following synchronization primitives are supported:
+
+For capabilities with an initialization function (e.g., `spin_lock_init()`),
+calling this function on the capability instance before initializing any
+guarded members or globals prevents the compiler from issuing warnings about
+unguarded initialization.
+
+Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
+capability analysis that the associated synchronization primitive is held
+after the assertion. This avoids false positives in complex control-flow
+scenarios and encourages the use of Lockdep where static analysis is limited.
+For example, this is useful when a function doesn't *always* require a lock,
+making `__must_hold()` inappropriate.
+
+Keywords
+~~~~~~~~
+
+.. kernel-doc:: include/linux/compiler-capability-analysis.h
+   :identifiers: struct_with_capability
+                 token_capability token_capability_instance
+                 __guarded_by __pt_guarded_by
+                 __must_hold
+                 __must_not_hold
+                 __acquires
+                 __cond_acquires
+                 __releases
+                 __must_hold_shared
+                 __acquires_shared
+                 __cond_acquires_shared
+                 __releases_shared
+                 __acquire
+                 __release
+                 __cond_lock
+                 __acquire_shared
+                 __release_shared
+                 __cond_lock_shared
+                 capability_unsafe
+                 __capability_unsafe
+                 disable_capability_analysis enable_capability_analysis
+
+.. note::
+   The function attribute `__no_capability_analysis` is reserved for internal
+   implementation of capability-enabled primitives, and should be avoided in
+   normal code.
+
+Background
+----------
+
+Clang originally called the feature `Thread Safety Analysis
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some
+terminology still using the thread-safety-analysis-only names. This was later
+changed and the feature became more flexible, gaining the ability to define
+custom "capabilities".
+
+Indeed, its foundations can be found in `capability systems
+<https://www.cs.cornell.edu/talc/papers/capabilities.pdf>`_, used to specify
+the permissibility of operations to depend on some capability being held (or
+not held).
+
+Because the feature is not just able to express capabilities related to
+synchronization primitives, the naming chosen for the kernel departs from
+Clang's initial "Thread Safety" nomenclature and refers to the feature as
+"Capability Analysis" to avoid confusion. The implementation still makes
+references to the older terminology in some places, such as `-Wthread-safety`
+being the warning option that also still appears in diagnostic messages.
diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 65c54b27a60b..62ac23f797cd 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -18,6 +18,7 @@ Documentation/process/debugging/index.rst
    :maxdepth: 2
 
    testing-overview
+   capability-analysis
    checkpatch
    clang-format
    coccinelle
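
The documentation's point about lockdep assertions is perhaps easiest to
see in code (an editorial sketch reusing the counter_state type from the
illustration after patch 02; assumes capability-aware spinlocks and
lockdep_assert_held(), as later patches in this series provide):

    /* No __must_hold(): not every caller path takes the lock itself. */
    static void update_stats(struct counter_state *s)
    {
            lockdep_assert_held(&s->lock);
            /* After the assertion, the analysis assumes &s->lock is held,
             * so the guarded access below produces no warning. */
            s->count++;
    }
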
Date: Tue, 4 Mar 2025 10:21:04 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-6-elver@google.com>
Subject: [PATCH v2 05/34] checkpatch: Warn about capability_unsafe() without comment
From: Marco Elver
To: elver@google.com

Warn about applications of capability_unsafe() without a comment, to
encourage documenting the reasoning behind why it was deemed safe.

Signed-off-by: Marco Elver
---
 scripts/checkpatch.pl | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 7b28ad331742..c28efdb1d404 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -6693,6 +6693,14 @@ sub process {
 		}
 	}
 
+# check for capability_unsafe without a comment.
+		if ($line =~ /\bcapability_unsafe\b/) {
+			if (!ctx_has_comment($first_line, $linenr)) {
+				WARN("CAPABILITY_UNSAFE",
+				     "capability_unsafe without comment\n" .
+				     $herecurr);
+			}
+		}
+
 # check of hardware specific defines
 	if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ &&
 	    $realfile !~ m@include/asm-@) {
 		CHK("ARCH_DEFINES",
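
As a hedged illustration of the convention this enforces (the surrounding
code and names are invented), a capability_unsafe() use should carry a
comment justifying why it is safe:

	/* Safe: object not yet published, no concurrent access possible. */
	val = capability_unsafe(obj->counter);

A bare capability_unsafe(obj->counter) with no comment in the preceding
context now triggers a "capability_unsafe without comment" warning.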
From patchwork Tue Mar 4 09:21:05 2025

Date: Tue, 4 Mar 2025 10:21:05 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-7-elver@google.com>
Subject: [PATCH v2 06/34] cleanup: Basic compatibility with capability analysis
From: Marco Elver
To: elver@google.com

Because the scoped cleanup helpers used for lock guards wrap acquire and
release around their own constructors/destructors, which store pointers
to the passed locks in a separate struct, we currently cannot accurately
annotate *destructors* with the lock they release. While it is possible
to annotate the constructor to say which lock is acquired, that alone
would result in false positives claiming the lock was not released on
function return.

Instead, to avoid false positives, we can claim that the constructor
"asserts" that the taken lock is held. This ensures we can still benefit
from the analysis where scoped guards are used to protect access to
guarded variables, while avoiding false positives. The only downside is
false negatives where we might accidentally lock the same lock again:

	raw_spin_lock(&my_lock);
	...
	guard(raw_spinlock)(&my_lock);	// no warning

Arguably, lockdep will immediately catch issues like this.

While Clang's analysis supports scoped guards in C++ [1], there is
currently no way to apply this to C. Better support for Linux's scoped
guard design could be added in future if deemed critical.

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html#scoped-capability

Signed-off-by: Marco Elver
Reviewed-by: Bart Van Assche
---
 include/linux/cleanup.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index ec00e3f7af2b..93a166549add 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -223,7 +223,7 @@ const volatile void * __must_check_fn(const volatile void *val)
 * @exit is an expression using '_T' -- similar to FREE above.
 * @init is an expression in @init_args resulting in @type
 *
- * EXTEND_CLASS(name, ext, init, init_args...):
+ * EXTEND_CLASS(name, ext, ctor_attrs, init, init_args...):
 *	extends class @name to @name@ext with the new constructor
 *
 * CLASS(name, var)(args...):
@@ -243,15 +243,18 @@ const volatile void * __must_check_fn(const volatile void *val)
 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...)	\
 typedef _type class_##_name##_t;					\
 static inline void class_##_name##_destructor(_type *p)		\
+	__no_capability_analysis					\
 { _type _T = *p; _exit; }						\
 static inline _type class_##_name##_constructor(_init_args)		\
+	__no_capability_analysis					\
 { _type t = _init; return t; }
 
-#define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
+#define EXTEND_CLASS(_name, ext, ctor_attrs, _init, _init_args...)	\
 typedef class_##_name##_t class_##_name##ext##_t;			\
 static inline void class_##_name##ext##_destructor(class_##_name##_t *p)\
 { class_##_name##_destructor(p); }					\
 static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+	__no_capability_analysis ctor_attrs				\
 { class_##_name##_t t = _init; return t; }
 
 #define CLASS(_name, var)						\
@@ -299,7 +302,7 @@ static __maybe_unused const bool class_##_name##_is_conditional = _is_cond
 
 #define DEFINE_GUARD_COND(_name, _ext, _condlock)			\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
-	EXTEND_CLASS(_name, _ext,					\
+	EXTEND_CLASS(_name, _ext,,					\
 		     ({ void *_t = _T; if (_T && !(_condlock)) _t = NULL; _t; }), \
 		     class_##_name##_t _T)				\
 	static inline void * class_##_name##_ext##_lock_ptr(class_##_name##_t *_T) \
@@ -371,6 +374,7 @@ typedef struct {							\
 } class_##_name##_t;							\
 									\
 static inline void class_##_name##_destructor(class_##_name##_t *_T)	\
+	__no_capability_analysis					\
 {									\
 	if (_T->lock) { _unlock; }					\
 }									\
@@ -383,6 +387,7 @@ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T)	\
 
 #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock)			\
 static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
+	__no_capability_analysis __asserts_cap(l)			\
 {									\
 	class_##_name##_t _t = { .lock = l }, *_T = &_t;		\
 	_lock;								\
@@ -391,6 +396,7 @@ static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
 
 #define __DEFINE_LOCK_GUARD_0(_name, _lock)				\
 static inline class_##_name##_t class_##_name##_constructor(void)	\
+	__no_capability_analysis					\
 {									\
 	class_##_name##_t _t = { .lock = (void*)1 },			\
 			  *_T __maybe_unused = &_t;			\
@@ -410,7 +416,7 @@ __DEFINE_LOCK_GUARD_0(_name, _lock)
 
 #define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock)		\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
-	EXTEND_CLASS(_name, _ext,					\
+	EXTEND_CLASS(_name, _ext, __asserts_cap(l),			\
 		     ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
 			if (_T->lock && !(_condlock)) _T->lock = NULL;	\
 			_t; }),						\
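
A small sketch of the effect (type and function names invented): because the
guard constructor asserts the capability, accesses to guarded members inside
the scope pass the analysis without annotating the function itself:

	struct counter_stats {
		raw_spinlock_t lock;
		int counter __guarded_by(&lock);
	};

	static void stats_inc(struct counter_stats *s)
	{
		guard(raw_spinlock)(&s->lock);	/* constructor asserts s->lock is held */
		s->counter++;			/* no warning inside the guard scope */
	}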
From patchwork Tue Mar 4 09:21:06 2025
Date: Tue, 4 Mar 2025 10:21:06 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-8-elver@google.com>
Subject: [PATCH v2 07/34] lockdep: Annotate lockdep assertions for capability analysis
From: Marco Elver
To: elver@google.com

Clang's capability analysis can be made aware of functions that assert
that capabilities/locks are held.

Presence of these annotations causes the analysis to assume the
capability is held after calls to the annotated function, and avoid
false positives with complex control-flow; for example, where not all
control-flow paths in a function require a held lock, and therefore
marking the function with __must_hold(..) is inappropriate.

Signed-off-by: Marco Elver
---
 include/linux/lockdep.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 67964dc4db95..5cea929b2219 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -282,16 +282,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
 
 #define lockdep_assert_held(l)	\
-	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
+	do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assert_cap(l); } while (0)
 
 #define lockdep_assert_not_held(l)	\
 	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
 
 #define lockdep_assert_held_write(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 0))
+	do { lockdep_assert(lockdep_is_held_type(l, 0)); __assert_cap(l); } while (0)
 
 #define lockdep_assert_held_read(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 1))
+	do { lockdep_assert(lockdep_is_held_type(l, 1)); __assert_shared_cap(l); } while (0)
 
 #define lockdep_assert_held_once(l)		\
 	lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
@@ -389,10 +389,10 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert(c)			do { } while (0)
 #define lockdep_assert_once(c)			do { } while (0)
 
-#define lockdep_assert_held(l)			do { (void)(l); } while (0)
+#define lockdep_assert_held(l)			__assert_cap(l)
 #define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
+#define lockdep_assert_held_write(l)		__assert_cap(l)
+#define lockdep_assert_held_read(l)		__assert_shared_cap(l)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
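
A brief sketch of the control-flow case described above, assuming an
invented struct foo with a spinlock_t lock and an int counter
__guarded_by(&lock): the helper does not *always* need the lock, so
__must_hold() would be wrong, but the assertion covers the path that does:

	static void update_if_locked(struct foo *f, bool locked)
	{
		if (!locked)
			return;
		lockdep_assert_held(&f->lock);	/* analysis assumes f->lock held below */
		f->counter++;			/* guarded access, no warning */
	}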
From patchwork Tue Mar 4 09:21:07 2025
([2002:a17:906:bf4a:b0:abf:4a05:fc97]) (user=elver job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:5203:b0:5e4:9348:72e3 with SMTP id 4fb4d7f45d1cf-5e4d6b4c2f8mr42858591a12.21.1741080330671; Tue, 04 Mar 2025 01:25:30 -0800 (PST) Date: Tue, 4 Mar 2025 10:21:07 +0100 In-Reply-To: <20250304092417.2873893-1-elver@google.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250304092417.2873893-1-elver@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250304092417.2873893-9-elver@google.com> Subject: [PATCH v2 08/34] locking/rwlock, spinlock: Support Clang's capability analysis From: Marco Elver To: elver@google.com Cc: "David S. Miller" , Luc Van Oostenryck , "Paul E. McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ingo Molnar , Jann Horn , Jiri Slaby , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Peter Zijlstra , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org, linux-serial@vger.kernel.org Add support for Clang's capability analysis for raw_spinlock_t, spinlock_t, and rwlock. This wholesale conversion is required because all three of them are interdependent. To avoid warnings in constructors, the initialization functions mark a capability as acquired when initialized before guarded variables. The test verifies that common patterns do not generate false positives. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 3 +- include/linux/rwlock.h | 25 ++-- include/linux/rwlock_api_smp.h | 29 +++- include/linux/rwlock_rt.h | 35 +++-- include/linux/rwlock_types.h | 10 +- include/linux/spinlock.h | 45 +++--- include/linux/spinlock_api_smp.h | 14 +- include/linux/spinlock_api_up.h | 71 +++++----- include/linux/spinlock_rt.h | 21 +-- include/linux/spinlock_types.h | 10 +- include/linux/spinlock_types_raw.h | 5 +- lib/test_capability-analysis.c | 128 ++++++++++++++++++ 12 files changed, 299 insertions(+), 97 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 4b9c93cc8fcd..ddda3dc0d8d3 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -78,7 +78,8 @@ More details are also documented `here Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. Currently the following synchronization primitives are supported: +Currently the following synchronization primitives are supported: +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`. 
For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 5b87c6f4a243..3c8971201ec7 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -22,23 +22,24 @@ do { \ static struct lock_class_key __key; \ \ __rwlock_init((lock), #lock, &__key); \ + __assert_cap(lock); \ } while (0) #else # define rwlock_init(lock) \ - do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assert_cap(lock); } while (0) #endif #ifdef CONFIG_DEBUG_SPINLOCK - extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock); + extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock); extern int do_raw_read_trylock(rwlock_t *lock); - extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock); + extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock); extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock); extern int do_raw_write_trylock(rwlock_t *lock); extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock); #else -# define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) +# define do_raw_read_lock(rwlock) do {__acquire_shared(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) # define do_raw_read_trylock(rwlock) arch_read_trylock(&(rwlock)->raw_lock) -# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) +# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release_shared(lock); } while (0) # define do_raw_write_lock(rwlock) do {__acquire(lock); arch_write_lock(&(rwlock)->raw_lock); } while (0) # define do_raw_write_trylock(rwlock) arch_write_trylock(&(rwlock)->raw_lock) # define do_raw_write_unlock(rwlock) do {arch_write_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) @@ -49,7 +50,7 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_lock(lock, _raw_read_trylock(lock)) +#define read_trylock(lock) __cond_lock_shared(lock, _raw_read_trylock(lock)) #define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) #define write_lock(lock) _raw_write_lock(lock) @@ -112,12 +113,12 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) -#define write_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - write_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define write_trylock_irqsave(lock, flags) \ + __cond_lock(lock, ({ \ + local_irq_save(flags); \ + _raw_write_trylock(lock) ? \ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 31d3d1116323..3e975105a606 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -15,12 +15,12 @@ * Released under the General Public License (GPL). 
*/ -void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acquires(lock); -void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); -void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock); unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) __acquires(lock); @@ -28,11 +28,11 @@ unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock) __acquires(lock); int __lockfunc _raw_read_trylock(rwlock_t *lock); int __lockfunc _raw_write_trylock(rwlock_t *lock); -void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) @@ -145,6 +145,7 @@ static inline int __raw_write_trylock(rwlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) static inline void __raw_read_lock(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -152,6 +153,7 @@ static inline void __raw_read_lock(rwlock_t *lock) } static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { unsigned long flags; @@ -163,6 +165,7 @@ static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) } static inline void __raw_read_lock_irq(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -171,6 +174,7 @@ static inline void __raw_read_lock_irq(rwlock_t *lock) } static inline void __raw_read_lock_bh(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -178,6 +182,7 @@ static inline void __raw_read_lock_bh(rwlock_t *lock) } static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; @@ -189,6 +194,7 @@ static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) } static inline void __raw_write_lock_irq(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -197,6 +203,7 @@ static inline void __raw_write_lock_irq(rwlock_t *lock) } static inline void __raw_write_lock_bh(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -204,6 
+211,7 @@ static inline void __raw_write_lock_bh(rwlock_t *lock) } static inline void __raw_write_lock(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -211,6 +219,7 @@ static inline void __raw_write_lock(rwlock_t *lock) } static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); @@ -220,6 +229,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ static inline void __raw_write_unlock(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -227,6 +237,7 @@ static inline void __raw_write_unlock(rwlock_t *lock) } static inline void __raw_read_unlock(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -235,6 +246,7 @@ static inline void __raw_read_unlock(rwlock_t *lock) static inline void __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -243,6 +255,7 @@ __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) } static inline void __raw_read_unlock_irq(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -251,6 +264,7 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock) } static inline void __raw_read_unlock_bh(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -259,6 +273,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock) static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -267,6 +282,7 @@ static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, } static inline void __raw_write_unlock_irq(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -275,6 +291,7 @@ static inline void __raw_write_unlock_irq(rwlock_t *lock) } static inline void __raw_write_unlock_bh(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 7d81fc6918ee..742172a06702 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -22,28 +22,32 @@ do { \ \ init_rwbase_rt(&(rwl)->rwbase); \ __rt_rwlock_init(rwl, #rwl, &__key); \ + __assert_cap(rwl); \ } while (0) -extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); +extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); extern int rt_read_trylock(rwlock_t *rwlock); -extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); +extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock); extern int rt_write_trylock(rwlock_t *rwlock); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); static __always_inline void read_lock(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } static __always_inline void read_lock_bh(rwlock_t *rwlock) + __acquires_shared(rwlock) 
{ local_bh_disable(); rt_read_lock(rwlock); } static __always_inline void read_lock_irq(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } @@ -55,37 +59,43 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock)) +#define read_trylock(lock) __cond_lock_shared(lock, rt_read_trylock(lock)) static __always_inline void read_unlock(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void read_unlock_bh(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); local_bh_enable(); } static __always_inline void read_unlock_irq(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void write_lock(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } #ifdef CONFIG_DEBUG_LOCK_ALLOC static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) + __acquires(rwlock) { rt_write_lock_nested(rwlock, subclass); } @@ -94,12 +104,14 @@ static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) #endif static __always_inline void write_lock_bh(rwlock_t *rwlock) + __acquires(rwlock) { local_bh_disable(); rt_write_lock(rwlock); } static __always_inline void write_lock_irq(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } @@ -114,33 +126,34 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock) #define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) #define write_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags = 0; \ - __locked = write_trylock(lock); \ - __locked; \ -}) + __cond_lock(lock, ({ \ + typecheck(unsigned long, flags); \ + flags = 0; \ + rt_write_trylock(lock); \ + })) static __always_inline void write_unlock(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } static __always_inline void write_unlock_bh(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); local_bh_enable(); } static __always_inline void write_unlock_irq(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases(rwlock) { rt_write_unlock(rwlock); } diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..231489cc30f2 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -22,7 +22,7 @@ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar * Released under the General Public License (GPL). 
*/ -typedef struct { +struct_with_capability(rwlock) { arch_rwlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -31,7 +31,8 @@ typedef struct { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; #define RWLOCK_MAGIC 0xdeaf1eed @@ -54,13 +55,14 @@ typedef struct { #include -typedef struct { +struct_with_capability(rwlock) { struct rwbase_rt rwbase; atomic_t readers; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; #define __RWLOCK_RT_INITIALIZER(name) \ { \ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 63dd8cf3c3c2..09124713b115 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -106,11 +106,12 @@ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \ + __assert_cap(lock); \ } while (0) #else # define raw_spin_lock_init(lock) \ - do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assert_cap(lock); } while (0) #endif #define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock) @@ -286,19 +287,19 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) #define raw_spin_trylock_bh(lock) \ __cond_lock(lock, _raw_spin_trylock_bh(lock)) -#define raw_spin_trylock_irq(lock) \ -({ \ - local_irq_disable(); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_enable(); 0; }); \ -}) +#define raw_spin_trylock_irq(lock) \ + __cond_lock(lock, ({ \ + local_irq_disable(); \ + _raw_spin_trylock(lock) ? \ + 1 : ({ local_irq_enable(); 0; }); \ + })) -#define raw_spin_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define raw_spin_trylock_irqsave(lock, flags) \ + __cond_lock(lock, ({ \ + local_irq_save(flags); \ + _raw_spin_trylock(lock) ? 
\ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) #ifndef CONFIG_PREEMPT_RT /* Include rwlock functions for !RT */ @@ -334,6 +335,7 @@ do { \ \ __raw_spin_lock_init(spinlock_check(lock), \ #lock, &__key, LD_WAIT_CONFIG); \ + __assert_cap(lock); \ } while (0) #else @@ -342,21 +344,25 @@ do { \ do { \ spinlock_check(_lock); \ *(_lock) = __SPIN_LOCK_UNLOCKED(_lock); \ + __assert_cap(_lock); \ } while (0) #endif static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock(&lock->rlock); } static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_bh(&lock->rlock); } static __always_inline int spin_trylock(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock(&lock->rlock); } @@ -372,6 +378,7 @@ do { \ } while (0) static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_irq(&lock->rlock); } @@ -379,47 +386,53 @@ static __always_inline void spin_lock_irq(spinlock_t *lock) #define spin_lock_irqsave(lock, flags) \ do { \ raw_spin_lock_irqsave(spinlock_check(lock), flags); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) #define spin_lock_irqsave_nested(lock, flags, subclass) \ do { \ raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock(&lock->rlock); } static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_bh(&lock->rlock); } static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irq(&lock->rlock); } static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irqrestore(&lock->rlock, flags); } static __always_inline int spin_trylock_bh(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_bh(&lock->rlock); } static __always_inline int spin_trylock_irq(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_irq(&lock->rlock); } #define spin_trylock_irqsave(lock, flags) \ -({ \ - raw_spin_trylock_irqsave(spinlock_check(lock), flags); \ -}) + __cond_lock(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags)) /** * spin_is_locked() - Check whether a spinlock is locked. 
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h index 9ecb0ab504e3..fab02d8bf0c9 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock) unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) __acquires(lock); -int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock); -int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock); +int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock); +int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock); void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock); @@ -84,6 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) #endif static inline int __raw_spin_trylock(raw_spinlock_t *lock) + __cond_acquires(lock) { preempt_disable(); if (do_raw_spin_trylock(lock)) { @@ -102,6 +103,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; @@ -113,6 +115,7 @@ static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) } static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -121,6 +124,7 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) } static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -128,6 +132,7 @@ static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) } static inline void __raw_spin_lock(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -137,6 +142,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ static inline void __raw_spin_unlock(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -145,6 +151,7 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock) static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -153,6 +160,7 @@ static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, } static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -161,6 +169,7 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) } static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -168,6 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) } static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) + __cond_acquires(lock) { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); if (do_raw_spin_trylock(lock)) { diff --git a/include/linux/spinlock_api_up.h 
b/include/linux/spinlock_api_up.h index 819aeba1c87e..018f5aabc1be 100644 --- a/include/linux/spinlock_api_up.h +++ b/include/linux/spinlock_api_up.h @@ -24,68 +24,77 @@ * flags straight, to suppress compiler warnings of unused lock * variables, and to add the proper checker annotations: */ -#define ___LOCK(lock) \ - do { __acquire(lock); (void)(lock); } while (0) +#define ___LOCK_void(lock) \ + do { (void)(lock); } while (0) -#define __LOCK(lock) \ - do { preempt_disable(); ___LOCK(lock); } while (0) +#define ___LOCK_(lock) \ + do { __acquire(lock); ___LOCK_void(lock); } while (0) -#define __LOCK_BH(lock) \ - do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock); } while (0) +#define ___LOCK_shared(lock) \ + do { __acquire_shared(lock); ___LOCK_void(lock); } while (0) -#define __LOCK_IRQ(lock) \ - do { local_irq_disable(); __LOCK(lock); } while (0) +#define __LOCK(lock, ...) \ + do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0) -#define __LOCK_IRQSAVE(lock, flags) \ - do { local_irq_save(flags); __LOCK(lock); } while (0) +#define __LOCK_BH(lock, ...) \ + do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__VA_ARGS__(lock); } while (0) -#define ___UNLOCK(lock) \ +#define __LOCK_IRQ(lock, ...) \ + do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define __LOCK_IRQSAVE(lock, flags, ...) \ + do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define ___UNLOCK_(lock) \ do { __release(lock); (void)(lock); } while (0) -#define __UNLOCK(lock) \ - do { preempt_enable(); ___UNLOCK(lock); } while (0) +#define ___UNLOCK_shared(lock) \ + do { __release_shared(lock); (void)(lock); } while (0) -#define __UNLOCK_BH(lock) \ +#define __UNLOCK(lock, ...) \ + do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0) + +#define __UNLOCK_BH(lock, ...) \ do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \ - ___UNLOCK(lock); } while (0) + ___UNLOCK_##__VA_ARGS__(lock); } while (0) -#define __UNLOCK_IRQ(lock) \ - do { local_irq_enable(); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQ(lock, ...) \ + do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0) -#define __UNLOCK_IRQRESTORE(lock, flags) \ - do { local_irq_restore(flags); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQRESTORE(lock, flags, ...) 
\ + do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0) #define _raw_spin_lock(lock) __LOCK(lock) #define _raw_spin_lock_nested(lock, subclass) __LOCK(lock) -#define _raw_read_lock(lock) __LOCK(lock) +#define _raw_read_lock(lock) __LOCK(lock, shared) #define _raw_write_lock(lock) __LOCK(lock) #define _raw_write_lock_nested(lock, subclass) __LOCK(lock) #define _raw_spin_lock_bh(lock) __LOCK_BH(lock) -#define _raw_read_lock_bh(lock) __LOCK_BH(lock) +#define _raw_read_lock_bh(lock) __LOCK_BH(lock, shared) #define _raw_write_lock_bh(lock) __LOCK_BH(lock) #define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock) -#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock) +#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock, shared) #define _raw_write_lock_irq(lock) __LOCK_IRQ(lock) #define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) +#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags, shared) #define _raw_write_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_spin_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_read_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_write_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock); 1; }) +#define _raw_spin_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_read_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_write_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock, void); 1; }) #define _raw_spin_unlock(lock) __UNLOCK(lock) -#define _raw_read_unlock(lock) __UNLOCK(lock) +#define _raw_read_unlock(lock) __UNLOCK(lock, shared) #define _raw_write_unlock(lock) __UNLOCK(lock) #define _raw_spin_unlock_bh(lock) __UNLOCK_BH(lock) #define _raw_write_unlock_bh(lock) __UNLOCK_BH(lock) -#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock) +#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock, shared) #define _raw_spin_unlock_irq(lock) __UNLOCK_IRQ(lock) -#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock) +#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock, shared) #define _raw_write_unlock_irq(lock) __UNLOCK_IRQ(lock) #define _raw_spin_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) #define _raw_read_unlock_irqrestore(lock, flags) \ - __UNLOCK_IRQRESTORE(lock, flags) + __UNLOCK_IRQRESTORE(lock, flags, shared) #define _raw_write_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index f6499c37157d..1f55601e1321 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -20,6 +20,7 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name, do { \ rt_mutex_base_init(&(slock)->lock); \ __rt_spin_lock_init(slock, name, key, percpu); \ + __assert_cap(slock); \ } while (0) #define _spin_lock_init(slock, percpu) \ @@ -40,6 +41,7 @@ extern int rt_spin_trylock_bh(spinlock_t *lock); extern int rt_spin_trylock(spinlock_t *lock); static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -82,6 +84,7 @@ static __always_inline void spin_lock(spinlock_t *lock) __spin_lock_irqsave_nested(lock, flags, subclass) static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) { /* Investigate: Drop bh when blocking ? 
*/ local_bh_disable(); @@ -89,6 +92,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock) } static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -101,23 +105,27 @@ static __always_inline void spin_lock_irq(spinlock_t *lock) } while (0) static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); local_bh_enable(); } static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) + __releases(lock) { rt_spin_unlock(lock); } @@ -132,14 +140,11 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, __cond_lock(lock, rt_spin_trylock(lock)) #define spin_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags = 0; \ - __locked = spin_trylock(lock); \ - __locked; \ -}) + __cond_lock(lock, ({ \ + typecheck(unsigned long, flags); \ + flags = 0; \ + rt_spin_trylock(lock); \ + })) #define spin_is_contended(lock) (((void)(lock), 0)) diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h index 2dfa35ffec76..2c5db5b5b990 100644 --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -14,7 +14,7 @@ #ifndef CONFIG_PREEMPT_RT /* Non PREEMPT_RT kernels map spinlock to raw_spinlock */ -typedef struct spinlock { +struct_with_capability(spinlock) { union { struct raw_spinlock rlock; @@ -26,7 +26,8 @@ typedef struct spinlock { }; #endif }; -} spinlock_t; +}; +typedef struct spinlock spinlock_t; #define ___SPIN_LOCK_INITIALIZER(lockname) \ { \ @@ -47,12 +48,13 @@ typedef struct spinlock { /* PREEMPT_RT kernels map spinlock to rt_mutex */ #include -typedef struct spinlock { +struct_with_capability(spinlock) { struct rt_mutex_base lock; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} spinlock_t; +}; +typedef struct spinlock spinlock_t; #define __SPIN_LOCK_UNLOCKED(name) \ { \ diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h index 91cb36b65a17..07792ff2c2b5 100644 --- a/include/linux/spinlock_types_raw.h +++ b/include/linux/spinlock_types_raw.h @@ -11,7 +11,7 @@ #include -typedef struct raw_spinlock { +struct_with_capability(raw_spinlock) { arch_spinlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -20,7 +20,8 @@ typedef struct raw_spinlock { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} raw_spinlock_t; +}; +typedef struct raw_spinlock raw_spinlock_t; #define SPINLOCK_MAGIC 0xdead4ead diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index a0adacce30ff..84060bace61d 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -5,6 +5,7 @@ */ #include +#include /* * Test that helper macros work as expected. 
@@ -16,3 +17,130 @@ static void __used test_common_helpers(void) BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3); /* does not swallow commas */ capability_unsafe(do { } while (0)); /* works with void statements */ } + +#define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op) \ + struct test_##class##_data { \ + type lock; \ + int counter __guarded_by(&lock); \ + int *pointer __pt_guarded_by(&lock); \ + }; \ + static void __used test_##class##_init(struct test_##class##_data *d) \ + { \ + type_init(&d->lock); \ + d->counter = 0; \ + } \ + static void __used test_##class(struct test_##class##_data *d) \ + { \ + unsigned long flags; \ + d->pointer++; \ + type_lock(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock(&d->lock); \ + type_lock##_irq(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_irq(&d->lock); \ + type_lock##_bh(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_bh(&d->lock); \ + type_lock##_irqsave(&d->lock, flags); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_irqrestore(&d->lock, flags); \ + } \ + static void __used test_##class##_trylock(struct test_##class##_data *d) \ + { \ + if (type_trylock(&d->lock)) { \ + op(d->counter); \ + type_unlock(&d->lock); \ + } \ + } \ + static void __used test_##class##_assert(struct test_##class##_data *d) \ + { \ + lockdep_assert_held(&d->lock); \ + op(d->counter); \ + } \ + static void __used test_##class##_guard(struct test_##class##_data *d) \ + { \ + { guard(class)(&d->lock); op(d->counter); } \ + { guard(class##_irq)(&d->lock); op(d->counter); } \ + { guard(class##_irqsave)(&d->lock); op(d->counter); } \ + } + +#define TEST_OP_RW(x) (x)++ +#define TEST_OP_RO(x) ((void)(x)) + +TEST_SPINLOCK_COMMON(raw_spinlock, + raw_spinlock_t, + raw_spin_lock_init, + raw_spin_lock, + raw_spin_unlock, + raw_spin_trylock, + TEST_OP_RW); +static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d) +{ + unsigned long flags; + + if (raw_spin_trylock_irq(&d->lock)) { + d->counter++; + raw_spin_unlock_irq(&d->lock); + } + if (raw_spin_trylock_irqsave(&d->lock, flags)) { + d->counter++; + raw_spin_unlock_irqrestore(&d->lock, flags); + } + scoped_cond_guard(raw_spinlock_try, return, &d->lock) { + d->counter++; + } +} + +TEST_SPINLOCK_COMMON(spinlock, + spinlock_t, + spin_lock_init, + spin_lock, + spin_unlock, + spin_trylock, + TEST_OP_RW); +static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d) +{ + unsigned long flags; + + if (spin_trylock_irq(&d->lock)) { + d->counter++; + spin_unlock_irq(&d->lock); + } + if (spin_trylock_irqsave(&d->lock, flags)) { + d->counter++; + spin_unlock_irqrestore(&d->lock, flags); + } + scoped_cond_guard(spinlock_try, return, &d->lock) { + d->counter++; + } +} + +TEST_SPINLOCK_COMMON(write_lock, + rwlock_t, + rwlock_init, + write_lock, + write_unlock, + write_trylock, + TEST_OP_RW); +static void __used test_write_trylock_extra(struct test_write_lock_data *d) +{ + unsigned long flags; + + if (write_trylock_irqsave(&d->lock, flags)) { + d->counter++; + write_unlock_irqrestore(&d->lock, flags); + } +} + +TEST_SPINLOCK_COMMON(read_lock, + rwlock_t, + rwlock_init, + read_lock, + read_unlock, + read_trylock, + TEST_OP_RO);
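To illustrate what the `shared` distinction used throughout this patch buys, here is a minimal sketch (the structure and functions below are hypothetical, not part of the patch): reads of a __guarded_by member are permitted while the capability is held shared, writes only while it is held exclusively.

	#include <linux/spinlock.h>

	/* Hypothetical example structure, for illustration only. */
	struct stats {
		rwlock_t lock;
		u64 sum __guarded_by(&lock);
	};

	static u64 stats_read(struct stats *s)
	{
		u64 ret;

		read_lock(&s->lock);	/* acquires the capability shared */
		ret = s->sum;		/* reading the guarded member is OK */
		read_unlock(&s->lock);
		return ret;
	}

	static void stats_add(struct stats *s, u64 v)
	{
		write_lock(&s->lock);	/* acquires the capability exclusively */
		s->sum += v;		/* writing requires the exclusive hold */
		write_unlock(&s->lock);
	}

This mirrors what the TEST_OP_RO vs. TEST_OP_RW instantiations above exercise for read_lock and write_lock respectively.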
From patchwork Tue Mar 4 09:21:08 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870226
Date: Tue, 4 Mar 2025 10:21:08 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-10-elver@google.com>
Subject: [PATCH v2 09/34] compiler-capability-analysis: Change __cond_acquires to take return value
From: Marco Elver
To: elver@google.com

While Sparse is oblivious to the return value of conditional acquire functions, Clang's capability analysis needs to know the return value which indicates successful acquisition. Add the additional argument, and convert existing uses.

Notably, Clang's interpretation of the value merely relates to the use in a later conditional branch, i.e. 1 ==> capability acquired in branch taken if condition non-zero, and 0 ==> capability acquired in branch taken if condition is zero. Given the precise value does not matter, introduce symbolic variants to use instead of either 0 or 1, which should be more intuitive.

No functional change intended.

Signed-off-by: Marco Elver
---
v2:
* Use symbolic values for __cond_acquires() and __cond_acquires_shared() (suggested by Bart).
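To make the new contract concrete: the first argument names the abstract return value on which the capability is held. A sketch with hypothetical functions (not part of this patch):

	extern bool my_trylock(spinlock_t *lock) __cond_acquires(true, lock);
	extern int my_lock_interruptible(spinlock_t *lock) __cond_acquires(0, lock);

	static void example(spinlock_t *lock)
	{
		if (my_trylock(lock)) {
			/* non-zero return: the analysis treats 'lock' as held here */
			spin_unlock(lock);
		}
		if (my_lock_interruptible(lock) == 0) {
			/* zero return: the analysis treats 'lock' as held here */
			spin_unlock(lock);
		}
	}

Since only zero vs. non-zero is distinguished, 'true' and 'nonzero' (and likewise 'false', '0', 'NULL') map to the same underlying annotation.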
--- fs/dlm/lock.c | 2 +- include/linux/compiler-capability-analysis.h | 31 ++++++++++++++++---- include/linux/refcount.h | 6 ++-- include/linux/spinlock.h | 6 ++-- include/linux/spinlock_api_smp.h | 8 ++--- net/ipv4/tcp_sigpool.c | 2 +- 6 files changed, 38 insertions(+), 17 deletions(-) diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c index c8ff88f1cdcf..6799cb0c8f50 100644 --- a/fs/dlm/lock.c +++ b/fs/dlm/lock.c @@ -343,7 +343,7 @@ void dlm_hold_rsb(struct dlm_rsb *r) /* TODO move this to lib/refcount.c */ static __must_check bool dlm_refcount_dec_and_write_lock_bh(refcount_t *r, rwlock_t *lock) -__cond_acquires(lock) + __cond_acquires(true, lock) { if (refcount_dec_not_one(r)) return false; diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h index c47d9ed18303..832727fea140 100644 --- a/include/linux/compiler-capability-analysis.h +++ b/include/linux/compiler-capability-analysis.h @@ -240,7 +240,7 @@ # define __must_hold(x) __attribute__((context(x,1,1))) # define __must_not_hold(x) # define __acquires(x) __attribute__((context(x,0,1))) -# define __cond_acquires(x) __attribute__((context(x,0,-1))) +# define __cond_acquires(ret, x) __attribute__((context(x,0,-1))) # define __releases(x) __attribute__((context(x,1,0))) # define __acquire(x) __context__(x,1) # define __release(x) __context__(x,-1) @@ -283,15 +283,32 @@ */ # define __acquires(x) __acquires_cap(x) +/* + * Clang's analysis does not care precisely about the value, only that it is + * either zero or non-zero. So the __cond_acquires() interface might be + * misleading if we say that @ret is the value returned if acquired. Instead, + * provide symbolic variants which we translate. + */ +#define __cond_acquires_impl_true(x, ...) __try_acquires##__VA_ARGS__##_cap(1, x) +#define __cond_acquires_impl_false(x, ...) __try_acquires##__VA_ARGS__##_cap(0, x) +#define __cond_acquires_impl_nonzero(x, ...) __try_acquires##__VA_ARGS__##_cap(1, x) +#define __cond_acquires_impl_0(x, ...) __try_acquires##__VA_ARGS__##_cap(0, x) +#define __cond_acquires_impl_nonnull(x, ...) __try_acquires##__VA_ARGS__##_cap(1, x) +#define __cond_acquires_impl_NULL(x, ...) __try_acquires##__VA_ARGS__##_cap(0, x) + /** * __cond_acquires() - function attribute, function conditionally * acquires a capability exclusively + * @ret: abstract value returned by function if capability acquired * @x: capability instance pointer * * Function attribute declaring that the function conditionally acquires the - * given capability instance @x exclusively, but does not release it. + * given capability instance @x exclusively, but does not release it. The + * function return value @ret denotes when the capability is acquired. + * + * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. */ -# define __cond_acquires(x) __try_acquires_cap(1, x) +# define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x) /** * __releases() - function attribute, function releases a capability exclusively @@ -358,12 +375,16 @@ /** * __cond_acquires_shared() - function attribute, function conditionally * acquires a capability shared + * @ret: abstract value returned by function if capability acquired * @x: capability instance pointer * * Function attribute declaring that the function conditionally acquires the - * given capability instance @x with shared access, but does not release it. + * given capability instance @x with shared access, but does not release it. The + * function return value @ret denotes when the capability is acquired. 
+ * + * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. */ -# define __cond_acquires_shared(x) __try_acquires_shared_cap(1, x) +# define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared) /** * __releases_shared() - function attribute, function releases a diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 35f039ecb272..88a6e292271d 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -353,9 +353,9 @@ static inline void refcount_dec(refcount_t *r) extern __must_check bool refcount_dec_if_one(refcount_t *r); extern __must_check bool refcount_dec_not_one(refcount_t *r); -extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock); -extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock); +extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(true, lock); +extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(true, lock); extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock, - unsigned long *flags) __cond_acquires(lock); + unsigned long *flags) __cond_acquires(true, lock); #endif /* _LINUX_REFCOUNT_H */ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 09124713b115..12369fa9e3bb 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -362,7 +362,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock) } static __always_inline int spin_trylock(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(true, lock) __no_capability_analysis { return raw_spin_trylock(&lock->rlock); } @@ -420,13 +420,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned lo } static __always_inline int spin_trylock_bh(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(true, lock) __no_capability_analysis { return raw_spin_trylock_bh(&lock->rlock); } static __always_inline int spin_trylock_irq(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(true, lock) __no_capability_analysis { return raw_spin_trylock_irq(&lock->rlock); } diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h index fab02d8bf0c9..a77b76003ebb 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock) unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) __acquires(lock); -int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock); -int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock); +int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(true, lock); +int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(true, lock); void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock); @@ -84,7 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) #endif static inline int __raw_spin_trylock(raw_spinlock_t *lock) - __cond_acquires(lock) + __cond_acquires(true, lock) { preempt_disable(); if (do_raw_spin_trylock(lock)) { @@ -177,7 +177,7 @@ static inline void 
__raw_spin_unlock_bh(raw_spinlock_t *lock) } static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) - __cond_acquires(lock) + __cond_acquires(true, lock) { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); if (do_raw_spin_trylock(lock)) {
diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c index d8a4f192873a..10b2e5970c40 100644 --- a/net/ipv4/tcp_sigpool.c +++ b/net/ipv4/tcp_sigpool.c @@ -257,7 +257,7 @@ void tcp_sigpool_get(unsigned int id) } EXPORT_SYMBOL_GPL(tcp_sigpool_get); -int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH) +int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH) { struct crypto_ahash *hash;
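The tcp_sigpool conversion above is a good example of the '0' variant: a zero return from tcp_sigpool_start() means the RCU-BH read-side critical section is held until tcp_sigpool_end(). A sketch of a caller under that assumption (the function name and error handling here are illustrative only):

	static int use_sigpool(unsigned int id)
	{
		struct tcp_sigpool hp;

		if (tcp_sigpool_start(id, &hp))
			return -EBUSY;	/* non-zero: nothing acquired */

		/* RCU-BH read-side critical section; use the pooled context. */

		tcp_sigpool_end(&hp);
		return 0;
	}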
From patchwork Tue Mar 4 09:21:09 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870637
Date: Tue, 4 Mar 2025 10:21:09 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-11-elver@google.com>
Subject: [PATCH v2 10/34] locking/mutex: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

Add support for Clang's capability analysis for mutex.

Signed-off-by: Marco Elver
---
.../dev-tools/capability-analysis.rst | 2 +- include/linux/mutex.h | 29 +++++---- include/linux/mutex_types.h | 4 +- lib/test_capability-analysis.c | 64 +++++++++++++++++++ 4 files changed, 82 insertions(+), 17 deletions(-)
diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index ddda3dc0d8d3..0000214056c2 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -79,7 +79,7 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.
For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 2bf91b57591b..f71ad9ec96d0 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -62,6 +62,7 @@ do { \ static struct lock_class_key __key; \ \ __mutex_init((mutex), #mutex, &__key); \ + __assert_cap(mutex); \ } while (0) /** @@ -154,14 +155,14 @@ static inline int __devm_mutex_init(struct device *dev, struct mutex *lock) * Also see Documentation/locking/mutex-design.rst. */ #ifdef CONFIG_DEBUG_LOCK_ALLOC -extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); +extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock); extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, - unsigned int subclass); + unsigned int subclass) __cond_acquires(0, lock); extern int __must_check mutex_lock_killable_nested(struct mutex *lock, - unsigned int subclass); -extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass); + unsigned int subclass) __cond_acquires(0, lock); +extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); #define mutex_lock(lock) mutex_lock_nested(lock, 0) #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0) @@ -175,10 +176,10 @@ do { \ } while (0) #else -extern void mutex_lock(struct mutex *lock); -extern int __must_check mutex_lock_interruptible(struct mutex *lock); -extern int __must_check mutex_lock_killable(struct mutex *lock); -extern void mutex_lock_io(struct mutex *lock); +extern void mutex_lock(struct mutex *lock) __acquires(lock); +extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock); +extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock); +extern void mutex_lock_io(struct mutex *lock) __acquires(lock); # define mutex_lock_nested(lock, subclass) mutex_lock(lock) # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock) @@ -193,13 +194,13 @@ extern void mutex_lock_io(struct mutex *lock); * * Returns 1 if the mutex has been acquired successfully, and 0 on contention. 
*/ -extern int mutex_trylock(struct mutex *lock); -extern void mutex_unlock(struct mutex *lock); +extern int mutex_trylock(struct mutex *lock) __cond_acquires(true, lock); +extern void mutex_unlock(struct mutex *lock) __releases(lock); -extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); +extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(true, lock); -DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T)) -DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T)) -DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T) == 0) +DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock) == 0) #endif /* __LINUX_MUTEX_H */
diff --git a/include/linux/mutex_types.h b/include/linux/mutex_types.h index fdf7f515fde8..e1a5ea12d53c 100644 --- a/include/linux/mutex_types.h +++ b/include/linux/mutex_types.h @@ -38,7 +38,7 @@ * - detects multi-task circular deadlocks and prints out all affected * locks and tasks (and only those tasks) */ -struct mutex { +struct_with_capability(mutex) { atomic_long_t owner; raw_spinlock_t wait_lock; #ifdef CONFIG_MUTEX_SPIN_ON_OWNER @@ -59,7 +59,7 @@ struct mutex { */ #include <linux/rtmutex.h> -struct mutex { +struct_with_capability(mutex) { struct rt_mutex_base rtmutex; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map;
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 84060bace61d..286723b47328 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -5,6 +5,7 @@ */ #include <linux/build_bug.h> +#include <linux/mutex.h> #include <linux/spinlock.h> /* @@ -144,3 +145,66 @@ TEST_SPINLOCK_COMMON(read_lock, read_unlock, read_trylock, TEST_OP_RO); + +struct test_mutex_data { + struct mutex mtx; + int counter __guarded_by(&mtx); +}; + +static void __used test_mutex_init(struct test_mutex_data *d) +{ + mutex_init(&d->mtx); + d->counter = 0; +} + +static void __used test_mutex_lock(struct test_mutex_data *d) +{ + mutex_lock(&d->mtx); + d->counter++; + mutex_unlock(&d->mtx); + mutex_lock_io(&d->mtx); + d->counter++; + mutex_unlock(&d->mtx); +} + +static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a) +{ + if (!mutex_lock_interruptible(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (!mutex_lock_killable(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (mutex_trylock(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (atomic_dec_and_mutex_lock(a, &d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } +} + +static void __used test_mutex_assert(struct test_mutex_data *d) +{ + lockdep_assert_held(&d->mtx); + d->counter++; +} + +static void __used test_mutex_guard(struct test_mutex_data *d) +{ + guard(mutex)(&d->mtx); + d->counter++; +} + +static void __used test_mutex_cond_guard(struct test_mutex_data *d) +{ + scoped_cond_guard(mutex_try, return, &d->mtx) { + d->counter++; + } + scoped_cond_guard(mutex_intr, return, &d->mtx) { + d->counter++; + } +}
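With the DEFINE_LOCK_GUARD_1() conversion above, scoped users continue to work unchanged; a minimal sketch (hypothetical structure and function) of the pattern the tests exercise:

	#include <linux/cleanup.h>
	#include <linux/mutex.h>

	struct my_obj {
		struct mutex lock;
		int refs __guarded_by(&lock);
	};

	static int my_obj_get(struct my_obj *o)
	{
		guard(mutex)(&o->lock);	/* unlocked automatically at scope exit */
		return ++o->refs;	/* OK: o->lock is held for the whole scope */
	}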
From patchwork Tue Mar 4 09:21:10 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870225
(PST) Date: Tue, 4 Mar 2025 10:21:10 +0100 In-Reply-To: <20250304092417.2873893-1-elver@google.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250304092417.2873893-1-elver@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250304092417.2873893-12-elver@google.com> Subject: [PATCH v2 11/34] locking/seqlock: Support Clang's capability analysis From: Marco Elver To: elver@google.com Cc: "David S. Miller" , Luc Van Oostenryck , "Paul E. McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ingo Molnar , Jann Horn , Jiri Slaby , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Peter Zijlstra , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org, linux-serial@vger.kernel.org Add support for Clang's capability analysis for seqlock_t. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/seqlock.h | 24 +++++++++++ include/linux/seqlock_types.h | 5 ++- lib/test_capability-analysis.c | 43 +++++++++++++++++++ 4 files changed, 71 insertions(+), 3 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 0000214056c2..e4b333fffb4d 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -79,7 +79,7 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index 5ce48eab7a2a..c914eb9714e9 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s) do { \ spin_lock_init(&(sl)->lock); \ seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \ + __assert_cap(sl); \ } while (0) /** @@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s) * Return: count, to be passed to read_seqretry() */ static inline unsigned read_seqbegin(const seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { return read_seqcount_begin(&sl->seqcount); } @@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl) * Return: true if a read section retry is required, else false */ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) + __releases_shared(sl) __no_capability_analysis { return read_seqcount_retry(&sl->seqcount, start); } @@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) * _irqsave or _bh variants of this function instead. 
*/ static inline void write_seqlock(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl) * critical section of given seqlock_t. */ static inline void write_sequnlock(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock(&sl->lock); @@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl) * other write side sections, can be invoked from softirq contexts. */ static inline void write_seqlock_bh(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock_bh(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl) * write_seqlock_bh(). */ static inline void write_sequnlock_bh(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_bh(&sl->lock); @@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl) * other write sections, can be invoked from hardirq contexts. */ static inline void write_seqlock_irq(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock_irq(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl) * seqlock_t write side section opened with write_seqlock_irq(). */ static inline void write_sequnlock_irq(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_irq(&sl->lock); } static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { unsigned long flags; @@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) */ static inline void write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_irqrestore(&sl->lock, flags); @@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) * The opened read section must be closed with read_sequnlock_excl(). */ static inline void read_seqlock_excl(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock(&sl->lock); } @@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock(&sl->lock); } @@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl) * from softirq contexts. */ static inline void read_seqlock_excl_bh(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock_bh(&sl->lock); } @@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl_bh(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock_bh(&sl->lock); } @@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl) * hardirq context. 
*/ static inline void read_seqlock_excl_irq(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock_irq(&sl->lock); } @@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl_irq(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock_irq(&sl->lock); } static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { unsigned long flags; @@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl) */ static inline void read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags) + __releases_shared(sl) __no_capability_analysis { spin_unlock_irqrestore(&sl->lock, flags); } @@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags) * parameter of the next read_seqbegin_or_lock() iteration. */ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq) + __acquires_shared(lock) __no_capability_analysis { if (!(*seq & 1)) /* Even */ *seq = read_seqbegin(lock); @@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq) * Return: true if a read section retry is required, false otherwise */ static inline int need_seqretry(seqlock_t *lock, int seq) + __releases_shared(lock) __no_capability_analysis { return !(seq & 1) && read_seqretry(lock, seq); } @@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq) * with read_seqbegin_or_lock() and validated by need_seqretry(). */ static inline void done_seqretry(seqlock_t *lock, int seq) + __no_capability_analysis { if (seq & 1) read_sequnlock_excl(lock); @@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq) */ static inline unsigned long read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq) + __acquires_shared(lock) __no_capability_analysis { unsigned long flags = 0; @@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq) */ static inline void done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags) + __no_capability_analysis { if (seq & 1) read_sequnlock_excl_irqrestore(lock, flags); diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h index dfdf43e3fa3d..9775d6f1a234 100644 --- a/include/linux/seqlock_types.h +++ b/include/linux/seqlock_types.h @@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex) * - Comments on top of seqcount_t * - Documentation/locking/seqlock.rst */ -typedef struct { +struct_with_capability(seqlock) { /* * Make sure that readers don't starve writers on PREEMPT_RT: use * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK(). 
*/ seqcount_spinlock_t seqcount; spinlock_t lock; -} seqlock_t; +}; +typedef struct seqlock seqlock_t; #endif /* __LINUX_SEQLOCK_TYPES_H */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 286723b47328..74d287740bb8 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -6,6 +6,7 @@ #include <linux/build_bug.h> #include <linux/mutex.h> +#include <linux/seqlock.h> #include <linux/spinlock.h> /* @@ -208,3 +209,45 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d) d->counter++; } } + +struct test_seqlock_data { + seqlock_t sl; + int counter __guarded_by(&sl); +}; + +static void __used test_seqlock_init(struct test_seqlock_data *d) +{ + seqlock_init(&d->sl); + d->counter = 0; +} + +static void __used test_seqlock_reader(struct test_seqlock_data *d) +{ + unsigned int seq; + + do { + seq = read_seqbegin(&d->sl); + (void)d->counter; + } while (read_seqretry(&d->sl, seq)); +} + +static void __used test_seqlock_writer(struct test_seqlock_data *d) +{ + unsigned long flags; + + write_seqlock(&d->sl); + d->counter++; + write_sequnlock(&d->sl); + + write_seqlock_irq(&d->sl); + d->counter++; + write_sequnlock_irq(&d->sl); + + write_seqlock_bh(&d->sl); + d->counter++; + write_sequnlock_bh(&d->sl); + + write_seqlock_irqsave(&d->sl, flags); + d->counter++; + write_sequnlock_irqrestore(&d->sl, flags); +}
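The reader side is worth spelling out: read_seqbegin() and read_seqretry() behave as a shared acquire/release pair for the analysis, which is why the canonical retry loop checks out. A sketch with a hypothetical structure:

	#include <linux/seqlock.h>

	struct snapshot {
		seqlock_t lock;
		u64 a __guarded_by(&lock);
		u64 b __guarded_by(&lock);
	};

	static u64 snapshot_sum(struct snapshot *s)
	{
		unsigned int seq;
		u64 sum;

		do {
			seq = read_seqbegin(&s->lock);		/* shared acquire */
			sum = s->a + s->b;			/* reads only */
		} while (read_seqretry(&s->lock, seq));		/* shared release */

		return sum;
	}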
From patchwork Tue Mar 4 09:21:11 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870636
Date: Tue, 4 Mar 2025 10:21:11 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-13-elver@google.com>
Subject: [PATCH v2 12/34] bit_spinlock: Include missing <asm/processor.h>
From: Marco Elver
To: elver@google.com

Including <linux/bit_spinlock.h> into an empty TU will result in the compiler complaining:

./include/linux/bit_spinlock.h:34:4: error: call to undeclared function 'cpu_relax'; <...>
   34 |    cpu_relax();
      |    ^
1 error generated.

Include <asm/processor.h> to allow including bit_spinlock.h where it is not otherwise included.
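For reference, the failure reproduces with a translation unit containing nothing but this include (sketch):

	/* repro.c - without the added include, cpu_relax() is undeclared: */
	#include <linux/bit_spinlock.h>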
Signed-off-by: Marco Elver
---
include/linux/bit_spinlock.h | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h index bbc4730a6505..f1174a2fcc4d 100644 --- a/include/linux/bit_spinlock.h +++ b/include/linux/bit_spinlock.h @@ -7,6 +7,8 @@ #include <linux/atomic.h> #include <linux/bug.h> +#include <asm/processor.h> /* for cpu_relax() */ + /* * bit-based spin_lock() *
From patchwork Tue Mar 4 09:21:12 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870224
Date: Tue, 4 Mar 2025 10:21:12 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-14-elver@google.com>
Subject: [PATCH v2 13/34] bit_spinlock: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

The annotations for bit_spinlock.h have simply been using "bitlock" as the token. For Sparse, that was likely sufficient in most cases. But Clang's capability analysis is more precise, and we need to ensure we can distinguish different bitlocks.

To do so, add a token capability, and a macro __bitlock(bitnum, addr) that is used to construct unique per-bitlock tokens. Add the appropriate test.

<linux/list_bl.h> is implicitly included through other includes, and requires 2 annotations to indicate that acquisition (without release) and release (without prior acquisition) of its bitlock is intended.

Signed-off-by: Marco Elver
---
.../dev-tools/capability-analysis.rst | 3 ++- include/linux/bit_spinlock.h | 22 +++++++++++--- include/linux/list_bl.h | 2 ++ lib/test_capability-analysis.c | 26 +++++++++++++++++++ 4 files changed, 48 insertions(+), 5 deletions(-)
diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index e4b333fffb4d..65972d1e9570 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -79,7 +79,8 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, +`bit_spinlock`.
For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h index f1174a2fcc4d..22ab3c143407 100644 --- a/include/linux/bit_spinlock.h +++ b/include/linux/bit_spinlock.h @@ -9,6 +9,16 @@ #include <asm/processor.h> /* for cpu_relax() */ +/* + * For static capability analysis, we need a unique token for each possible bit + * that can be used as a bit_spinlock. The easiest way to do that is to create a + * fake capability that we can cast to with the __bitlock(bitnum, addr) macro + * below, which will give us unique instances for each (bit, addr) pair that the + * static analysis can use. + */ +struct_with_capability(__capability_bitlock) { }; +#define __bitlock(bitnum, addr) (struct __capability_bitlock *)(bitnum + (addr)) + /* * bit-based spin_lock() * @@ -16,6 +26,7 @@ * are significantly faster. */ static inline void bit_spin_lock(int bitnum, unsigned long *addr) + __acquires(__bitlock(bitnum, addr)) { /* * Assuming the lock is uncontended, this never enters @@ -34,13 +45,14 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr) preempt_disable(); } #endif - __acquire(bitlock); + __acquire(__bitlock(bitnum, addr)); } /* * Return true if it was acquired */ static inline int bit_spin_trylock(int bitnum, unsigned long *addr) + __cond_acquires(true, __bitlock(bitnum, addr)) { preempt_disable(); #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) @@ -49,7 +61,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr) return 0; } #endif - __acquire(bitlock); + __acquire(__bitlock(bitnum, addr)); return 1; } @@ -57,6 +69,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr) * bit-based spin_unlock() */ static inline void bit_spin_unlock(int bitnum, unsigned long *addr) + __releases(__bitlock(bitnum, addr)) { #ifdef CONFIG_DEBUG_SPINLOCK BUG_ON(!test_bit(bitnum, addr)); @@ -65,7 +78,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr) clear_bit_unlock(bitnum, addr); #endif preempt_enable(); - __release(bitlock); + __release(__bitlock(bitnum, addr)); } /* @@ -74,6 +87,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr) * protecting the rest of the flags in the word. */ static inline void __bit_spin_unlock(int bitnum, unsigned long *addr) + __releases(__bitlock(bitnum, addr)) { #ifdef CONFIG_DEBUG_SPINLOCK BUG_ON(!test_bit(bitnum, addr)); @@ -82,7 +96,7 @@ static inline void __bit_spin_unlock(int bitnum, unsigned long *addr) __clear_bit_unlock(bitnum, addr); #endif preempt_enable(); - __release(bitlock); + __release(__bitlock(bitnum, addr)); } /*
diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h index ae1b541446c9..df9eebe6afca 100644 --- a/include/linux/list_bl.h +++ b/include/linux/list_bl.h @@ -144,11 +144,13 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n) } static inline void hlist_bl_lock(struct hlist_bl_head *b) + __acquires(__bitlock(0, b)) { bit_spin_lock(0, (unsigned long *)b); } static inline void hlist_bl_unlock(struct hlist_bl_head *b) + __releases(__bitlock(0, b)) { __bit_spin_unlock(0, (unsigned long *)b); }
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 74d287740bb8..ad362d5a7916 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -4,6 +4,7 @@ * positive errors when compiled with Clang's capability analysis.
  */
+#include <linux/bit_spinlock.h>
 #include
 #include
 #include
@@ -251,3 +252,28 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
         d->counter++;
         write_sequnlock_irqrestore(&d->sl, flags);
 }
+
+struct test_bit_spinlock_data {
+        unsigned long bits;
+        int counter __guarded_by(__bitlock(3, &bits));
+};
+
+static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
+{
+        /*
+         * Note, the analysis seems to have false negatives, because it won't
+         * precisely recognize the bit of the fake __bitlock() token.
+         */
+        bit_spin_lock(3, &d->bits);
+        d->counter++;
+        bit_spin_unlock(3, &d->bits);
+
+        bit_spin_lock(3, &d->bits);
+        d->counter++;
+        __bit_spin_unlock(3, &d->bits);
+
+        if (bit_spin_trylock(3, &d->bits)) {
+                d->counter++;
+                bit_spin_unlock(3, &d->bits);
+        }
+}
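To make the new token concrete, here is a minimal usage sketch (not part of
the patch; `struct inode_map` and its members are invented for illustration).
Bit 0 of `flags` acts as the lock bit, and __guarded_by(__bitlock(...)) lets
the analysis check that `nr_entries` is only touched while that bit spinlock
is held:

  struct inode_map {
          unsigned long flags;    /* bit 0 is the lock bit */
          int nr_entries __guarded_by(__bitlock(0, &flags));
  };

  static void inode_map_inc(struct inode_map *m)
  {
          bit_spin_lock(0, &m->flags);    /* acquires __bitlock(0, &m->flags) */
          m->nr_entries++;                /* OK: the capability is held */
          bit_spin_unlock(0, &m->flags);  /* releases the capability */
  }

Accessing nr_entries without holding bit 0 of the same flags word would be
flagged by the analysis, which is exactly the distinction the plain "bitlock"
token could not make.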
From patchwork Tue Mar 4 09:21:13 2025
Date: Tue, 4 Mar 2025 10:21:13 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-15-elver@google.com>
Subject: [PATCH v2 14/34] rcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>

Improve the existing annotations to properly support Clang's capability
analysis.

The old annotations distinguished between RCU, RCU_BH, and RCU_SCHED.
However, to be able to express "hold the RCU read lock" without caring
whether the normal, _bh(), or _sched() variant was used, we have to drop
the strict distinction between the variants: change the _bh() and
_sched() variants to also acquire "RCU".

When (and if) we introduce capabilities to denote more generally that
"IRQ", "BH", or "PREEMPT" are disabled, it would make sense to acquire
these capabilities instead of RCU_BH and RCU_SCHED respectively.

The above change also simplifies introducing __guarded_by support, where
only the "RCU" capability needs to be held: introduce __rcu_guarded,
where Clang's capability analysis warns if a pointer is dereferenced
without any of the RCU locks held, or updated without the appropriate
helpers.

| Note: A limitation of the compiler's analysis is re-entrancy; a pattern
| such as the below will result in a warning:
|
|   rcu_read_lock();      // acquires RCU
|   ..
|   rcu_read_lock_bh();   // error: acquiring __capability_RCU 'RCU' that is already held
|   ..
|   rcu_read_unlock_bh(); // releases RCU
|   ..
|   rcu_read_unlock();    // error: releasing __capability_RCU 'RCU' that was not held
|
| Such patterns should generally be uncommon, and initial usage in enabled
| subsystems did not result in any false positives due to re-entrancy.
| Until the compiler supports re-entrancy, keeping the analysis disabled
| for code relying on re-entrancy is the only option.

The primitives rcu_assign_pointer() and friends are wrapped with
capability_unsafe(), which enforces using them to update RCU-protected
pointers marked with __rcu_guarded.

Signed-off-by: Marco Elver <elver@google.com>
---
v2:
 * Reword commit message and point out re-entrancy caveat.
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/cleanup.h               |  4 +
 include/linux/rcupdate.h              | 73 +++++++++++++------
 lib/test_capability-analysis.c        | 68 +++++++++++++++++
 4 files changed, 123 insertions(+), 24 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 65972d1e9570..a14d796bcd0e 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`.
+`bit_spinlock`, RCU.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index 93a166549add..7d70d308357a 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -404,6 +404,10 @@ static inline class_##_name##_t class_##_name##_constructor(void)      \
                 return _t;                                              \
         }
 
+#define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock)               \
+static inline class_##_name##_t class_##_name##_constructor(void) _lock;\
+static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock
+
 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...)          \
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false);                            \
 __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__)               \
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 48e5c03df1dd..ef8875c4e621 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -31,6 +31,16 @@
 #include
 #include
 
+token_capability(RCU);
+token_capability_instance(RCU, RCU_SCHED);
+token_capability_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b)      (ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)      (ULONG_MAX / 2 < (a) - (b))
 
@@ -431,7 +441,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 
 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static inline bool lockdep_assert_rcu_helper(bool c, const struct __capability_RCU *cap)
+        __asserts_shared_cap(RCU) __asserts_shared_cap(cap)
 {
         return debug_lockdep_rcu_enabled() &&
                (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -444,7 +455,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
  */
 #define lockdep_assert_in_rcu_read_lock() \
-        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))
 
 /**
  * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -454,7 +465,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * actual rcu_read_lock_bh() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_bh() \
-        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map)))
+        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH))
 
 /**
  * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
@@ -464,7 +475,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * instead an actual rcu_read_lock_sched() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_sched() \
-        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)))
+        WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED))
 
 /**
  * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
@@ -482,17 +493,17 @@ static inline bool lockdep_assert_rcu_helper(bool c)
         WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) &&          \
                                                !lock_is_held(&rcu_bh_lock_map) &&       \
                                                !lock_is_held(&rcu_sched_lock_map) &&    \
-                                               preemptible()))
+                                               preemptible(), RCU))
 
 #else /* #ifdef CONFIG_PROVE_RCU */
 
 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
 #define rcu_sleep_check() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0)
-#define lockdep_assert_in_rcu_reader() do { } while (0)
+#define lockdep_assert_in_rcu_read_lock() __assert_shared_cap(RCU)
+#define lockdep_assert_in_rcu_read_lock_bh() __assert_shared_cap(RCU_BH)
+#define lockdep_assert_in_rcu_read_lock_sched() __assert_shared_cap(RCU_SCHED)
+#define lockdep_assert_in_rcu_reader() __assert_shared_cap(RCU)
 
 #endif /* #else #ifdef CONFIG_PROVE_RCU */
 
@@ -512,11 +523,11 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 #endif /* #else #ifdef __CHECKER__ */
 
 #define __unrcu_pointer(p, local)                                       \
-({                                                                      \
+capability_unsafe(                                                      \
         typeof(*p) *local = (typeof(*p) *__force)(p);                   \
         rcu_check_sparse(p, __rcu);                                     \
         ((typeof(*p) __force __kernel *)(local));                       \
-})
+)
 /**
  * unrcu_pointer - mark a pointer as not being RCU protected
  * @p: pointer needing to lose its __rcu property
@@ -592,7 +603,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * other macros that it invokes.
  */
 #define rcu_assign_pointer(p, v)                                        \
-do {                                                                    \
+capability_unsafe(                                                      \
         uintptr_t _r_a_p__v = (uintptr_t)(v);                           \
         rcu_check_sparse(p, __rcu);                                     \
                                                                         \
@@ -600,7 +611,7 @@ do {                                                                    \
                 WRITE_ONCE((p), (typeof(p))(_r_a_p__v));                \
         else                                                            \
                 smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-} while (0)
+)
 
 /**
  * rcu_replace_pointer() - replace an RCU pointer, returning its old value
@@ -843,9 +854,10 @@ do {                                                                    \
  * only when acquiring spinlocks that are subject to priority inheritance.
  */
 static __always_inline void rcu_read_lock(void)
+        __acquires_shared(RCU)
 {
         __rcu_read_lock();
-        __acquire(RCU);
+        __acquire_shared(RCU);
         rcu_lock_acquire(&rcu_lock_map);
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_lock() used illegally while idle");
@@ -874,11 +886,12 @@ static __always_inline void rcu_read_lock(void)
  * See rcu_read_lock() for more information.
  */
 static inline void rcu_read_unlock(void)
+        __releases_shared(RCU)
 {
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_unlock() used illegally while idle");
         rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
-        __release(RCU);
+        __release_shared(RCU);
         __rcu_read_unlock();
 }
 
@@ -897,9 +910,11 @@ static inline void rcu_read_unlock(void)
  * was invoked from some other task.
  */
 static inline void rcu_read_lock_bh(void)
+        __acquires_shared(RCU) __acquires_shared(RCU_BH)
 {
         local_bh_disable();
-        __acquire(RCU_BH);
+        __acquire_shared(RCU);
+        __acquire_shared(RCU_BH);
         rcu_lock_acquire(&rcu_bh_lock_map);
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_lock_bh() used illegally while idle");
@@ -911,11 +926,13 @@ static inline void rcu_read_lock_bh(void)
  * See rcu_read_lock_bh() for more information.
  */
 static inline void rcu_read_unlock_bh(void)
+        __releases_shared(RCU) __releases_shared(RCU_BH)
 {
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_unlock_bh() used illegally while idle");
         rcu_lock_release(&rcu_bh_lock_map);
-        __release(RCU_BH);
+        __release_shared(RCU_BH);
+        __release_shared(RCU);
         local_bh_enable();
 }
 
@@ -935,9 +952,11 @@ static inline void rcu_read_unlock_bh(void)
  * rcu_read_lock_sched() was invoked from an NMI handler.
  */
 static inline void rcu_read_lock_sched(void)
+        __acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
         preempt_disable();
-        __acquire(RCU_SCHED);
+        __acquire_shared(RCU);
+        __acquire_shared(RCU_SCHED);
         rcu_lock_acquire(&rcu_sched_lock_map);
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_lock_sched() used illegally while idle");
@@ -945,9 +964,11 @@ static inline void rcu_read_lock_sched(void)
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_lock_sched_notrace(void)
+        __acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
         preempt_disable_notrace();
-        __acquire(RCU_SCHED);
+        __acquire_shared(RCU);
+        __acquire_shared(RCU_SCHED);
 }
 
 /**
@@ -956,18 +977,22 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
  * See rcu_read_lock_sched() for more information.
  */
 static inline void rcu_read_unlock_sched(void)
+        __releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
         RCU_LOCKDEP_WARN(!rcu_is_watching(),
                          "rcu_read_unlock_sched() used illegally while idle");
         rcu_lock_release(&rcu_sched_lock_map);
-        __release(RCU_SCHED);
+        __release_shared(RCU_SCHED);
+        __release_shared(RCU);
         preempt_enable();
 }
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_unlock_sched_notrace(void)
+        __releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
-        __release(RCU_SCHED);
+        __release_shared(RCU_SCHED);
+        __release_shared(RCU);
         preempt_enable_notrace();
 }
 
@@ -1010,10 +1035,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
  * ordering guarantees for either the CPU or the compiler.
  */
 #define RCU_INIT_POINTER(p, v) \
-        do { \
+        capability_unsafe( \
                 rcu_check_sparse(p, __rcu); \
                 WRITE_ONCE(p, RCU_INITIALIZER(v)); \
-        } while (0)
+        )
 
 /**
  * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
@@ -1172,4 +1197,6 @@ DEFINE_LOCK_GUARD_0(rcu,
         } while (0),
         rcu_read_unlock())
 
+DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU));
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index ad362d5a7916..050fa7c9fcba 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <linux/rcupdate.h>
 #include
 #include
 
@@ -277,3 +278,70 @@ static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
                 bit_spin_unlock(3, &d->bits);
         }
 }
+
+/*
+ * Test that we can mark a variable guarded by RCU, and we can dereference and
+ * write to the pointer with RCU's primitives.
+ */
+struct test_rcu_data {
+        long __rcu_guarded *data;
+};
+
+static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
+{
+        rcu_read_lock();
+        (void)rcu_dereference(d->data);
+        rcu_read_unlock();
+
+        rcu_read_lock_bh();
+        (void)rcu_dereference(d->data);
+        rcu_read_unlock_bh();
+
+        rcu_read_lock_sched();
+        (void)rcu_dereference(d->data);
+        rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_guard(struct test_rcu_data *d)
+{
+        guard(rcu)();
+        (void)rcu_dereference(d->data);
+}
+
+static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
+{
+        rcu_assign_pointer(d->data, NULL);
+        RCU_INIT_POINTER(d->data, NULL);
+        (void)unrcu_pointer(d->data);
+}
+
+static void wants_rcu_held(void)       __must_hold_shared(RCU) { }
+static void wants_rcu_held_bh(void)    __must_hold_shared(RCU_BH) { }
+static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }
+
+static void __used test_rcu_lock_variants(void)
+{
+        rcu_read_lock();
+        wants_rcu_held();
+        rcu_read_unlock();
+
+        rcu_read_lock_bh();
+        wants_rcu_held_bh();
+        rcu_read_unlock_bh();
+
+        rcu_read_lock_sched();
+        wants_rcu_held_sched();
+        rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_assert_variants(void)
+{
+        lockdep_assert_in_rcu_read_lock();
+        wants_rcu_held();
+
+        lockdep_assert_in_rcu_read_lock_bh();
+        wants_rcu_held_bh();
+
+        lockdep_assert_in_rcu_read_lock_sched();
+        wants_rcu_held_sched();
+}
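Beyond the tests above, a hypothetical sketch of what __rcu_guarded buys a
user (not part of the patch; names invented, and it assumes a configuration
object has been published before any reader runs):

  struct cfg {
          int threshold;
  };

  static struct cfg __rcu_guarded *active_cfg;

  static int cfg_read_threshold(void)
  {
          int val;

          rcu_read_lock();        /* acquires shared "RCU" */
          val = rcu_dereference(active_cfg)->threshold;
          rcu_read_unlock();
          return val;
  }

  static void cfg_publish(struct cfg *new_cfg)
  {
          /* Updates must go through the wrapped helpers. */
          rcu_assign_pointer(active_cfg, new_cfg);
  }

Dereferencing active_cfg directly, without rcu_read_lock() and
rcu_dereference(), would now be flagged by the analysis; plain assignment
instead of rcu_assign_pointer()/RCU_INIT_POINTER() likewise.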
From patchwork Tue Mar 4 09:21:14 2025
Date: Tue, 4 Mar 2025 10:21:14 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-16-elver@google.com>
Subject: [PATCH v2 15/34] srcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ingo Molnar , Jann Horn , Jiri Slaby , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Peter Zijlstra , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org, linux-serial@vger.kernel.org Add support for Clang's capability analysis for SRCU. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/srcu.h | 61 +++++++++++++------ lib/test_capability-analysis.c | 24 ++++++++ 3 files changed, 66 insertions(+), 21 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index a14d796bcd0e..918e35d110df 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -80,7 +80,7 @@ Supported Kernel Primitives Currently the following synchronization primitives are supported: `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, -`bit_spinlock`, RCU. +`bit_spinlock`, RCU, SRCU (`srcu_struct`). For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/srcu.h b/include/linux/srcu.h index d7ba46e74f58..fde8bba191a5 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -21,7 +21,7 @@ #include #include -struct srcu_struct; +struct_with_capability(srcu_struct); #ifdef CONFIG_DEBUG_LOCK_ALLOC @@ -60,14 +60,14 @@ int init_srcu_struct(struct srcu_struct *ssp); void call_srcu(struct srcu_struct *ssp, struct rcu_head *head, void (*func)(struct rcu_head *head)); void cleanup_srcu_struct(struct srcu_struct *ssp); -int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); -void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); +int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp); +void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp); #ifdef CONFIG_TINY_SRCU #define __srcu_read_lock_lite __srcu_read_lock #define __srcu_read_unlock_lite __srcu_read_unlock #else // #ifdef CONFIG_TINY_SRCU -int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp); -void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases(ssp); +int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires_shared(ssp); +void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases_shared(ssp); #endif // #else // #ifdef CONFIG_TINY_SRCU void synchronize_srcu(struct srcu_struct *ssp); @@ -110,14 +110,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned } #ifdef CONFIG_NEED_SRCU_NMI_SAFE -int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp); -void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp); +int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp); +void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp); #else static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) + __acquires_shared(ssp) { return __srcu_read_lock(ssp); } 
 
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+        __releases_shared(ssp)
 {
         __srcu_read_unlock(ssp, idx);
 }
@@ -189,6 +191,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }
+
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -202,9 +212,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * to 1. The @c argument will normally be a logical expression containing
  * lockdep_is_held() calls.
  */
-#define srcu_dereference_check(p, ssp, c) \
-        __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-                                (c) || srcu_read_lock_held(ssp), __rcu)
+#define srcu_dereference_check(p, ssp, c)                               \
+({                                                                      \
+        __srcu_read_lock_must_hold(ssp);                                \
+        __acquire_shared_cap(RCU);                                      \
+        __auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+                                (c) || srcu_read_lock_held(ssp), __rcu); \
+        __release_shared_cap(RCU);                                      \
+        __v;                                                            \
+})
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -247,7 +263,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
  * from another.
  */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+        __acquires_shared(ssp)
 {
         int retval;
 
@@ -274,7 +291,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
  * where RCU is watching, that is, from contexts where it would be legal
  * to invoke rcu_read_lock().  Otherwise, lockdep will complain.
  */
-static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_lite(struct srcu_struct *ssp)
+        __acquires_shared(ssp)
 {
         int retval;
 
@@ -295,7 +313,8 @@ static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
  * then none of the other flavors may be used, whether before, during,
  * or after.
  */
-static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+        __acquires_shared(ssp)
 {
         int retval;
 
@@ -307,7 +326,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
-srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
+srcu_read_lock_notrace(struct srcu_struct *ssp)
+        __acquires_shared(ssp)
 {
         int retval;
 
@@ -337,7 +357,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
  * Calls to srcu_down_read() may be nested, similar to the manner in
 * which calls to down_read() may be nested.
  */
-static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_down_read(struct srcu_struct *ssp)
+        __acquires_shared(ssp)
 {
         WARN_ON_ONCE(in_nmi());
         srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -352,7 +373,7 @@ static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
  * Exit an SRCU read-side critical section.
  */
 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-        __releases(ssp)
+        __releases_shared(ssp)
 {
         WARN_ON_ONCE(idx & ~0x1);
         srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -368,7 +389,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
  * Exit a light-weight SRCU read-side critical section.
  */
 static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
-        __releases(ssp)
+        __releases_shared(ssp)
 {
         WARN_ON_ONCE(idx & ~0x1);
         srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_LITE);
@@ -384,7 +405,7 @@ static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
  * Exit an SRCU read-side critical section, but in an NMI-safe manner.
  */
 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-        __releases(ssp)
+        __releases_shared(ssp)
 {
         WARN_ON_ONCE(idx & ~0x1);
         srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -394,7 +415,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
 {
         srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
         __srcu_read_unlock(ssp, idx);
@@ -409,7 +430,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
  * the same context as the maching srcu_down_read().
  */
 static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-        __releases(ssp)
+        __releases_shared(ssp)
 {
         WARN_ON_ONCE(idx & ~0x1);
         WARN_ON_ONCE(in_nmi());
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 050fa7c9fcba..63d81ad1562f 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/srcu.h>
 
 /*
  * Test that helper macros work as expected.
@@ -345,3 +346,26 @@ static void __used test_rcu_assert_variants(void)
         lockdep_assert_in_rcu_read_lock_sched();
         wants_rcu_held_sched();
 }
+
+struct test_srcu_data {
+        struct srcu_struct srcu;
+        long __rcu_guarded *data;
+};
+
+static void __used test_srcu(struct test_srcu_data *d)
+{
+        init_srcu_struct(&d->srcu);
+
+        int idx = srcu_read_lock(&d->srcu);
+        long *data = srcu_dereference(d->data, &d->srcu);
+        (void)data;
+        srcu_read_unlock(&d->srcu, idx);
+
+        rcu_assign_pointer(d->data, NULL);
+}
+
+static void __used test_srcu_guard(struct test_srcu_data *d)
+{
+        guard(srcu)(&d->srcu);
+        (void)srcu_dereference(d->data, &d->srcu);
+}
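A hypothetical reader sketch (not from the patch; the struct and its fields
are invented, and it assumes data has been assigned before readers run),
showing what the fake-acquire strategy in srcu_dereference_check() buys: the
pointer stays __rcu_guarded, yet each dereference is tied to the right
srcu_struct:

  struct dev_state {
          struct srcu_struct srcu;
          long __rcu_guarded *data;
  };

  static long dev_read(struct dev_state *s)
  {
          long val;
          int idx;

          idx = srcu_read_lock(&s->srcu);          /* acquires shared s->srcu */
          val = *srcu_dereference(s->data, &s->srcu);
          srcu_read_unlock(&s->srcu, idx);         /* releases it again */
          return val;
  }

Calling srcu_dereference() without srcu_read_lock() on the same srcu_struct
is expected to produce a warning, since __srcu_read_lock_must_hold() requires
the passed ssp to be held.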
From patchwork Tue Mar 4 09:21:15 2025
Date: Tue, 4 Mar 2025 10:21:15 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-17-elver@google.com>
Subject: [PATCH v2 16/34] kref: Add capability-analysis annotations
From: Marco Elver <elver@google.com>

Mark functions that conditionally acquire the passed lock.
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/kref.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index 88e82ab1367c..9bc6abe57572 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -81,6 +81,7 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 static inline int kref_put_mutex(struct kref *kref,
                                  void (*release)(struct kref *kref),
                                  struct mutex *mutex)
+        __cond_acquires(true, mutex)
 {
         if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
                 release(kref);
@@ -102,6 +103,7 @@ static inline int kref_put_mutex(struct kref *kref,
 static inline int kref_put_lock(struct kref *kref,
                                 void (*release)(struct kref *kref),
                                 spinlock_t *lock)
+        __cond_acquires(true, lock)
 {
         if (refcount_dec_and_lock(&kref->refcount, lock)) {
                 release(kref);
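A hypothetical caller sketch (not part of the patch; `struct obj` and its
helpers are invented): with __cond_acquires(true, lock), the analysis knows
the lock is held exactly when kref_put_lock() returns true, so the caller
must release it on that path:

  struct obj {
          struct kref ref;
          spinlock_t lock;
  };

  static void obj_release(struct kref *ref)
  {
          /* Final reference gone; freeing happens in the caller below. */
  }

  static void obj_put(struct obj *o)
  {
          if (kref_put_lock(&o->ref, obj_release, &o->lock)) {
                  /* Returned true: &o->lock is held here. */
                  spin_unlock(&o->lock);
                  kfree(o);
          }
  }

Forgetting the spin_unlock() in the true branch, or unlocking in the false
branch, would be reported as a capability mismatch.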
From patchwork Tue Mar 4 09:21:16 2025
Date: Tue, 4 Mar 2025 10:21:16 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-18-elver@google.com>
Subject: [PATCH v2 17/34] locking/rwsem: Support Clang's capability analysis
From: Marco Elver <elver@google.com>

Add support for Clang's capability analysis for rw_semaphore.

Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/rwsem.h                 | 56 +++++++++-------
 lib/test_capability-analysis.c        | 64 +++++++++++++++++++
 3 files changed, 97 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 918e35d110df..7e4d94d65043 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`).
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index c8b543d428b0..98aa623ad9bf 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
         atomic_long_t count;
         /*
          * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }
 
 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -119,6 +121,7 @@ do {                                                    \
         static struct lock_class_key __key;            \
                                                         \
         __init_rwsem((sem), #sem, &__key);              \
+        __assert_cap(sem);                              \
 } while (0)
 
 /*
@@ -136,7 +139,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)
 
 #include
 
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
         struct rwbase_rt        rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
         struct lockdep_map      dep_map;
@@ -160,6 +163,7 @@ do {                                                    \
         static struct lock_class_key __key;            \
                                                         \
         __init_rwsem((sem), #sem, &__key);              \
+        __assert_cap(sem);                              \
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -168,11 +172,13 @@ static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 }
 
 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         WARN_ON(!rwsem_is_locked(sem));
 }
 
 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -190,6 +196,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
  */
 
 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         if (IS_ENABLED(CONFIG_LOCKDEP))
                 lockdep_assert_held(sem);
@@ -198,6 +205,7 @@ static inline void rwsem_assert_held(const struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+        __asserts_cap(sem)
 {
         if (IS_ENABLED(CONFIG_LOCKDEP))
                 lockdep_assert_held_write(sem);
@@ -208,47 +216,47 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
-extern int __must_check down_read_killable(struct rw_semaphore *sem);
+extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem);
+extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-extern int down_read_trylock(struct rw_semaphore *sem);
+extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(true, sem);
 
 /*
  * lock for writing
  */
-extern void down_write(struct rw_semaphore *sem);
-extern int __must_check down_write_killable(struct rw_semaphore *sem);
+extern void down_write(struct rw_semaphore *sem) __acquires(sem);
+extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem);
 
 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
  */
-extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(true, sem);
 
 /*
  * release a read lock
  */
-extern void up_read(struct rw_semaphore *sem);
+extern void up_read(struct rw_semaphore *sem) __releases_shared(sem);
 
 /*
  * release a write lock
  */
-extern void up_write(struct rw_semaphore *sem);
+extern void up_write(struct rw_semaphore *sem) __releases(sem);
 
-DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T))
-DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T))
-DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T) == 0)
+DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock) == 0)
 
-DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T))
-DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T))
+DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock))
 
 /*
  * downgrade write lock to read lock
  */
-extern void downgrade_write(struct rw_semaphore *sem);
+extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 /*
@@ -264,11 +272,11 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * lockdep_set_class() at lock initialization time.
  * See Documentation/locking/lockdep-design.rst for more details.)
  */
-extern void down_read_nested(struct rw_semaphore *sem, int subclass);
-extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void down_write_nested(struct rw_semaphore *sem, int subclass);
-extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem);
+extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem);
+extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem);
+extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem);
 
 # define down_write_nest_lock(sem, nest_lock)                   \
 do {                                                            \
@@ -282,8 +290,8 @@ do {                                                            \
  * [ This API should be avoided as much as possible - the
  *   proper abstraction for this case is completions. ]
  */
-extern void down_read_non_owner(struct rw_semaphore *sem);
-extern void up_read_non_owner(struct rw_semaphore *sem);
+extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem);
+extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem);
 #else
 # define down_read_nested(sem, subclass)                down_read(sem)
 # define down_read_killable_nested(sem, subclass)       down_read_killable(sem)
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 63d81ad1562f..7ccb163ab5b1 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/rwsem.h>
 #include
 #include
 #include
@@ -255,6 +256,69 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
         write_sequnlock_irqrestore(&d->sl, flags);
 }
 
+struct test_rwsem_data {
+        struct rw_semaphore sem;
+        int counter __guarded_by(&sem);
+};
+
+static void __used test_rwsem_init(struct test_rwsem_data *d)
+{
+        init_rwsem(&d->sem);
+        d->counter = 0;
+}
+
+static void __used test_rwsem_reader(struct test_rwsem_data *d)
+{
+        down_read(&d->sem);
+        (void)d->counter;
+        up_read(&d->sem);
+
+        if (down_read_trylock(&d->sem)) {
+                (void)d->counter;
+                up_read(&d->sem);
+        }
+}
+
+static void __used test_rwsem_writer(struct test_rwsem_data *d)
+{
+        down_write(&d->sem);
+        d->counter++;
+        up_write(&d->sem);
+
+        down_write(&d->sem);
+        d->counter++;
+        downgrade_write(&d->sem);
+        (void)d->counter;
+        up_read(&d->sem);
+
+        if (down_write_trylock(&d->sem)) {
+                d->counter++;
+                up_write(&d->sem);
+        }
+}
+
+static void __used test_rwsem_assert(struct test_rwsem_data *d)
+{
+        rwsem_assert_held_nolockdep(&d->sem);
+        d->counter++;
+}
+
+static void __used test_rwsem_guard(struct test_rwsem_data *d)
+{
+        { guard(rwsem_read)(&d->sem); (void)d->counter; }
+        { guard(rwsem_write)(&d->sem); d->counter++; }
+}
+
+static void __used test_rwsem_cond_guard(struct test_rwsem_data *d)
+{
+        scoped_cond_guard(rwsem_read_try, return, &d->sem) {
+                (void)d->counter;
+        }
+        scoped_cond_guard(rwsem_write_try, return, &d->sem) {
+                d->counter++;
+        }
+}
+
 struct test_bit_spinlock_data {
         unsigned long bits;
         int counter __guarded_by(__bitlock(3, &bits));
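One detail worth highlighting from the annotations above: downgrade_write()
is marked __releases(sem) __acquires_shared(sem), i.e. it swaps the exclusive
capability for a shared one. A hypothetical sketch (not part of the patch;
names invented):

  struct stats {
          struct rw_semaphore sem;
          u64 total __guarded_by(&sem);
  };

  static u64 stats_add_and_read(struct stats *s, u64 n)
  {
          u64 snapshot;

          down_write(&s->sem);            /* exclusive: may write */
          s->total += n;
          downgrade_write(&s->sem);       /* now shared: may only read */
          snapshot = s->total;
          up_read(&s->sem);               /* release the shared capability */
          return snapshot;
  }

Writing to s->total after the downgrade, or calling up_write() instead of
up_read() at the end, is expected to be flagged by the analysis.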
From patchwork Tue Mar 4 09:21:17 2025
Date: Tue, 4 Mar 2025 10:21:17 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-19-elver@google.com>
Subject: [PATCH v2 18/34] locking/local_lock: Include missing headers
From: Marco Elver <elver@google.com>
McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ingo Molnar , Jann Horn , Jiri Slaby , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Peter Zijlstra , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org, linux-serial@vger.kernel.org Including into an empty TU will result in the compiler complaining: ./include/linux/local_lock.h: In function ‘class_local_lock_irqsave_constructor’: ./include/linux/local_lock_internal.h:95:17: error: implicit declaration of function ‘local_irq_save’; <...> 95 | local_irq_save(flags); \ | ^~~~~~~~~~~~~~ As well as (some architectures only, such as 'sh'): ./include/linux/local_lock_internal.h: In function ‘local_lock_acquire’: ./include/linux/local_lock_internal.h:33:20: error: ‘current’ undeclared (first use in this function) 33 | l->owner = current; Include missing headers to allow including local_lock.h where the required headers are not otherwise included. Signed-off-by: Marco Elver --- include/linux/local_lock_internal.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 8dd71fbbb6d2..420866c1c70b 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -4,7 +4,9 @@ #endif #include +#include #include +#include #ifndef CONFIG_PREEMPT_RT From patchwork Tue Mar 4 09:21:18 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 870221 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 30C281FDE2C for ; Tue, 4 Mar 2025 09:26:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741080364; cv=none; b=FLOMqFROEABr2zivn2WvtstMIKV5lfPXbNkKtEzYPCobOhg/WjtVUVGQn7y816/roxtSGmI/vB3GBwL19HtRZwyRtNkg2L+tCyz12X7Hg9ym96niZvefU/Z+zWC2gBpf5xGqnFzWdL0ZOy/8X6Sapz2MTrJayX9o9YwIfeQhLzQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741080364; c=relaxed/simple; bh=5Pp0cWVWeR/iKG+6F+FtpYqvROINCcN+zxDEvkyjIk0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=D+vHvv6MNhzfKNCg1NOIVDJS954dFAp1F/DxRQWeHd7KJCLbdol8OWbxLiefWEZAw0RSy4wG1hlhfPmtKfHfETQR+tW31W1X1jPH6TL3ROpErRQsHlGK9e7xOlMlilkSg0MpMvxRIGm1ri61XEfJft02xxmVaiihcIksNl8GLbQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--elver.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=GarN8Gsq; arc=none smtp.client-ip=209.85.208.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--elver.bounces.google.com Authentication-Results: 
From patchwork Tue Mar 4 09:21:18 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870221
Date: Tue, 4 Mar 2025 10:21:18 +0100
Message-ID: <20250304092417.2873893-20-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 19/34] locking/local_lock: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

Add support for Clang's capability analysis for local_lock_t.
Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/local_lock.h | 18 ++++---- include/linux/local_lock_internal.h | 41 ++++++++++++++--- lib/test_capability-analysis.c | 46 +++++++++++++++++++ 4 files changed, 90 insertions(+), 17 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 7e4d94d65043..e892a5292841 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -80,7 +80,7 @@ Supported Kernel Primitives Currently the following synchronization primitives are supported: `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, -`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`. +`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h index 091dc0b6bdfb..63fadcf66216 100644 --- a/include/linux/local_lock.h +++ b/include/linux/local_lock.h @@ -51,12 +51,12 @@ #define local_unlock_irqrestore(lock, flags) \ __local_unlock_irqrestore(lock, flags) -DEFINE_GUARD(local_lock, local_lock_t __percpu*, - local_lock(_T), - local_unlock(_T)) -DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*, - local_lock_irq(_T), - local_unlock_irq(_T)) +DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu, + local_lock(_T->lock), + local_unlock(_T->lock)) +DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu, + local_lock_irq(_T->lock), + local_unlock_irq(_T->lock)) DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, local_lock_irqsave(_T->lock, _T->flags), local_unlock_irqrestore(_T->lock, _T->flags), @@ -68,8 +68,8 @@ DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, #define local_unlock_nested_bh(_lock) \ __local_unlock_nested_bh(_lock) -DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*, - local_lock_nested_bh(_T), - local_unlock_nested_bh(_T)) +DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu, + local_lock_nested_bh(_T->lock), + local_unlock_nested_bh(_T->lock)) #endif diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 420866c1c70b..01830f75d9a3 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -10,12 +10,13 @@ #ifndef CONFIG_PREEMPT_RT -typedef struct { +struct_with_capability(local_lock) { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; struct task_struct *owner; #endif -} local_lock_t; +}; +typedef struct local_lock local_lock_t; #ifdef CONFIG_DEBUG_LOCK_ALLOC # define LOCAL_LOCK_DEBUG_INIT(lockname) \ @@ -62,6 +63,7 @@ do { \ 0, LD_WAIT_CONFIG, LD_WAIT_INV, \ LD_LOCK_PERCPU); \ local_lock_debug_init(lock); \ + __assert_cap(lock); \ } while (0) #define __spinlock_nested_bh_init(lock) \ @@ -73,40 +75,47 @@ do { \ 0, LD_WAIT_CONFIG, LD_WAIT_INV, \ LD_LOCK_NORMAL); \ local_lock_debug_init(lock); \ + __assert_cap(lock); \ } while (0) #define __local_lock(lock) \ do { \ preempt_disable(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_lock_irq(lock) \ do { \ local_irq_disable(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_lock_irqsave(lock, flags) \ do { \ local_irq_save(flags); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } 
while (0) #define __local_unlock(lock) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ preempt_enable(); \ } while (0) #define __local_unlock_irq(lock) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ local_irq_enable(); \ } while (0) #define __local_unlock_irqrestore(lock, flags) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ local_irq_restore(flags); \ } while (0) @@ -115,19 +124,37 @@ do { \ do { \ lockdep_assert_in_softirq(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_unlock_nested_bh(lock) \ - local_lock_release(this_cpu_ptr(lock)) + do { \ + __release(lock); \ + local_lock_release(this_cpu_ptr(lock)); \ + } while (0) #else /* !CONFIG_PREEMPT_RT */ +#include + /* * On PREEMPT_RT local_lock maps to a per CPU spinlock, which protects the * critical section while staying preemptible. */ typedef spinlock_t local_lock_t; +/* + * Because the compiler only knows about the base per-CPU variable, use this + * helper function to make the compiler think we lock/unlock the @base variable, + * and hide the fact we actually pass the per-CPU instance @pcpu to lock/unlock + * functions. + */ +static inline local_lock_t *__local_lock_alias(local_lock_t __percpu *base, local_lock_t *pcpu) + __returns_cap(base) +{ + return pcpu; +} + #define INIT_LOCAL_LOCK(lockname) __LOCAL_SPIN_LOCK_UNLOCKED((lockname)) #define __local_lock_init(l) \ @@ -138,7 +165,7 @@ typedef spinlock_t local_lock_t; #define __local_lock(__lock) \ do { \ migrate_disable(); \ - spin_lock(this_cpu_ptr((__lock))); \ + spin_lock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \ } while (0) #define __local_lock_irq(lock) __local_lock(lock) @@ -152,7 +179,7 @@ typedef spinlock_t local_lock_t; #define __local_unlock(__lock) \ do { \ - spin_unlock(this_cpu_ptr((__lock))); \ + spin_unlock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \ migrate_enable(); \ } while (0) @@ -163,12 +190,12 @@ typedef spinlock_t local_lock_t; #define __local_lock_nested_bh(lock) \ do { \ lockdep_assert_in_softirq_func(); \ - spin_lock(this_cpu_ptr(lock)); \ + spin_lock(__local_lock_alias(lock, this_cpu_ptr(lock))); \ } while (0) #define __local_unlock_nested_bh(lock) \ do { \ - spin_unlock(this_cpu_ptr((lock))); \ + spin_unlock(__local_lock_alias(lock, this_cpu_ptr((lock)))); \ } while (0) #endif /* CONFIG_PREEMPT_RT */ diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 7ccb163ab5b1..81c8e74548a9 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -6,7 +6,9 @@ #include #include +#include #include +#include #include #include #include @@ -433,3 +435,47 @@ static void __used test_srcu_guard(struct test_srcu_data *d) guard(srcu)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } + +struct test_local_lock_data { + local_lock_t lock; + int counter __guarded_by(&lock); +}; + +static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = { + .lock = INIT_LOCAL_LOCK(lock), +}; + +static void __used test_local_lock_init(struct test_local_lock_data *d) +{ + local_lock_init(&d->lock); + d->counter = 0; +} + +static void __used test_local_lock(void) +{ + unsigned long flags; + + local_lock(&test_local_lock_data.lock); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock(&test_local_lock_data.lock); + + local_lock_irq(&test_local_lock_data.lock); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock_irq(&test_local_lock_data.lock); + + 
local_lock_irqsave(&test_local_lock_data.lock, flags);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_irqrestore(&test_local_lock_data.lock, flags);
+
+	local_lock_nested_bh(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_nested_bh(&test_local_lock_data.lock);
+}
+
+static void __used test_local_lock_guard(void)
+{
+	{ guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+}
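The move from DEFINE_GUARD to DEFINE_LOCK_GUARD_1 makes the guard object carry the lock pointer, which is what lets the analysis tie the guard's scope to the capability. A usage sketch reusing the per-CPU test data above:

  static void guarded_increment(void)
  {
          guard(local_lock)(&test_local_lock_data.lock);
          this_cpu_add(test_local_lock_data.counter, 1);
          /* capability released automatically at end of scope */
  }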
From patchwork Tue Mar 4 09:21:19 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870632
Date: Tue, 4 Mar 2025 10:21:19 +0100
Message-ID: <20250304092417.2873893-21-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 20/34] locking/ww_mutex: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

Add support for Clang's capability analysis for ww_mutex.

The programming model for ww_mutex is subtly more complex than other
locking primitives when using ww_acquire_ctx. Encoding the respective
pre-conditions for ww_mutex lock/unlock based on ww_acquire_ctx state
using Clang's capability analysis makes incorrect use of the API harder.

Signed-off-by: Marco Elver
---
v2:
 * New patch.
--- .../dev-tools/capability-analysis.rst | 3 +- include/linux/ww_mutex.h | 21 ++++-- lib/test_capability-analysis.c | 65 +++++++++++++++++++ 3 files changed, 82 insertions(+), 7 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index e892a5292841..51ea94b0f4cc 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -80,7 +80,8 @@ Supported Kernel Primitives Currently the following synchronization primitives are supported: `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, -`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`. +`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`, +`ww_mutex`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h index 45ff6f7a872b..e1d5455bd075 100644 --- a/include/linux/ww_mutex.h +++ b/include/linux/ww_mutex.h @@ -44,7 +44,7 @@ struct ww_class { unsigned int is_wait_die; }; -struct ww_mutex { +struct_with_capability(ww_mutex) { struct WW_MUTEX_BASE base; struct ww_acquire_ctx *ctx; #ifdef DEBUG_WW_MUTEXES @@ -52,7 +52,7 @@ struct ww_mutex { #endif }; -struct ww_acquire_ctx { +struct_with_capability(ww_acquire_ctx) { struct task_struct *task; unsigned long stamp; unsigned int acquired; @@ -107,6 +107,7 @@ struct ww_acquire_ctx { */ static inline void ww_mutex_init(struct ww_mutex *lock, struct ww_class *ww_class) + __asserts_cap(lock) { ww_mutex_base_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key); lock->ctx = NULL; @@ -141,6 +142,7 @@ static inline void ww_mutex_init(struct ww_mutex *lock, */ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx, struct ww_class *ww_class) + __acquires(ctx) __no_capability_analysis { ctx->task = current; ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp); @@ -179,6 +181,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx, * data structures. */ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx) + __releases(ctx) __acquires_shared(ctx) __no_capability_analysis { #ifdef DEBUG_WW_MUTEXES lockdep_assert_held(ctx); @@ -196,6 +199,7 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx) * mutexes have been released with ww_mutex_unlock. */ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx) + __releases_shared(ctx) __no_capability_analysis { #ifdef CONFIG_DEBUG_LOCK_ALLOC mutex_release(&ctx->first_lock_dep_map, _THIS_IP_); @@ -245,7 +249,8 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx) * * A mutex acquired with this function must be released with ww_mutex_unlock. */ -extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx); +extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) + __cond_acquires(0, lock) __must_hold(ctx); /** * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible @@ -278,7 +283,8 @@ extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acq * A mutex acquired with this function must be released with ww_mutex_unlock. 
 */
 extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
-						    struct ww_acquire_ctx *ctx);
+						    struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx);
 
 /**
  * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
@@ -305,6 +311,7 @@ extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
  */
 static inline void
 ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+	__acquires(lock) __must_hold(ctx) __no_capability_analysis
 {
 	int ret;
 #ifdef DEBUG_WW_MUTEXES
@@ -342,6 +349,7 @@ ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 static inline int __must_check
 ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 				 struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx)
 {
 #ifdef DEBUG_WW_MUTEXES
 	DEBUG_LOCKS_WARN_ON(!ctx->contending_lock);
@@ -349,10 +357,11 @@ ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 	return ww_mutex_lock_interruptible(lock, ctx);
 }
 
-extern void ww_mutex_unlock(struct ww_mutex *lock);
+extern void ww_mutex_unlock(struct ww_mutex *lock) __releases(lock);
 
 extern int __must_check ww_mutex_trylock(struct ww_mutex *lock,
-					 struct ww_acquire_ctx *ctx);
+					 struct ww_acquire_ctx *ctx)
+	__cond_acquires(true, lock) __must_hold(ctx);
 
 /***
  * ww_mutex_destroy - mark a w/w mutex unusable
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 81c8e74548a9..853fdc53840f 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Test that helper macros work as expected.
@@ -479,3 +480,67 @@ static void __used test_local_lock_guard(void)
 	{ guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
 	{ guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
 }
+
+static DEFINE_WD_CLASS(ww_class);
+
+struct test_ww_mutex_data {
+	struct ww_mutex mtx;
+	int counter __guarded_by(&mtx);
+};
+
+static void __used test_ww_mutex_init(struct test_ww_mutex_data *d)
+{
+	ww_mutex_init(&d->mtx, &ww_class);
+	d->counter = 0;
+}
+
+static void __used test_ww_mutex_lock_noctx(struct test_ww_mutex_data *d)
+{
+	if (!ww_mutex_lock(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (!ww_mutex_lock_interruptible(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (ww_mutex_trylock(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	ww_mutex_lock_slow(&d->mtx, NULL);
+	d->counter++;
+	ww_mutex_unlock(&d->mtx);
+}
+
+static void __used test_ww_mutex_lock_ctx(struct test_ww_mutex_data *d)
+{
+	struct ww_acquire_ctx ctx;
+
+	ww_acquire_init(&ctx, &ww_class);
+
+	if (!ww_mutex_lock(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (!ww_mutex_lock_interruptible(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (ww_mutex_trylock(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	ww_mutex_lock_slow(&d->mtx, &ctx);
+	d->counter++;
+	ww_mutex_unlock(&d->mtx);
+
+	ww_acquire_done(&ctx);
+	ww_acquire_fini(&ctx);
+}
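A sketch of the misuse these pre-conditions are meant to catch (diagnostic behavior assumed from the attributes above: ww_acquire_done() releases the exclusive ctx capability and re-acquires it shared, while the lock functions require it exclusive):

  static void bad_ww_usage(struct test_ww_mutex_data *d)
  {
          struct ww_acquire_ctx ctx;

          ww_acquire_init(&ctx, &ww_class);  /* __acquires(ctx) */
          ww_acquire_done(&ctx);             /* __releases(ctx) __acquires_shared(ctx) */

          ww_mutex_lock(&d->mtx, &ctx);      /* expected to be flagged: __must_hold(ctx) unsatisfied */
          ww_mutex_unlock(&d->mtx);
          ww_acquire_fini(&ctx);
  }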
From patchwork Tue Mar 4 09:21:20 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870220
Date: Tue, 4 Mar 2025 10:21:20 +0100
Message-ID: <20250304092417.2873893-22-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 21/34] debugfs: Make debugfs_cancellation a capability struct
From: Marco Elver
To: elver@google.com

When compiling include/linux/debugfs.h with CAPABILITY_ANALYSIS enabled,
we can see this error:

  ./include/linux/debugfs.h:239:17: error: use of undeclared identifier 'cancellation'
    239 | void __acquires(cancellation)

Move the __acquires(..) attribute after the declaration, so that the
compiler can see the cancellation function argument, as well as making
struct debugfs_cancellation a real capability to benefit from Clang's
capability analysis.
Signed-off-by: Marco Elver
---
 include/linux/debugfs.h | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index fa2568b4380d..c6a429381887 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -240,18 +240,16 @@ ssize_t debugfs_read_file_str(struct file *file, char __user *user_buf,
  * @cancel: callback to call
  * @cancel_data: extra data for the callback to call
  */
-struct debugfs_cancellation {
+struct_with_capability(debugfs_cancellation) {
 	struct list_head list;
 	void (*cancel)(struct dentry *, void *);
 	void *cancel_data;
 };
 
-void __acquires(cancellation)
-debugfs_enter_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
-void __releases(cancellation)
-debugfs_leave_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
+void debugfs_enter_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __acquires(cancellation);
+void debugfs_leave_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __releases(cancellation);
 
 #else
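The same rule applies beyond debugfs: an attribute that names a function parameter can only appear after the declarator that introduces that parameter. A hypothetical sketch (struct foo and its lock are made up for illustration):

  struct foo {
          spinlock_t lock;
  };

  /* Broken: 'f' is not yet in scope where the attribute appears.
   *   void __acquires(&f->lock) foo_lock(struct foo *f);
   */

  /* Works: the attribute follows the full declaration. */
  void foo_lock(struct foo *f) __acquires(&f->lock);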
From patchwork Tue Mar 4 09:21:21 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870631
Date: Tue, 4 Mar 2025 10:21:21 +0100
Message-ID: <20250304092417.2873893-23-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 22/34] compiler-capability-analysis: Remove Sparse support
From: Marco Elver
To: elver@google.com

Remove Sparse support as discussed at [1]. The kernel codebase is still
scattered with numerous places that try to appease Sparse's context
tracking ("annotation for sparse", "fake out sparse", "work around
sparse", etc.). Eventually, as more subsystems enable Clang's capability
analysis, these places will show up and need adjustment or removal of
the workarounds altogether.

Link: https://lore.kernel.org/all/20250207083335.GW7145@noisy.programming.kicks-ass.net/ [1]
Link: https://lore.kernel.org/all/Z6XTKTo_LMj9KmbY@elver.google.com/ [2]
Cc: "Luc Van Oostenryck"
Cc: Peter Zijlstra
Signed-off-by: Marco Elver
---
v2:
 * New patch.
--- Documentation/dev-tools/sparse.rst | 19 ------- include/linux/compiler-capability-analysis.h | 56 ++++++-------------- include/linux/rcupdate.h | 15 +----- 3 files changed, 17 insertions(+), 73 deletions(-) diff --git a/Documentation/dev-tools/sparse.rst b/Documentation/dev-tools/sparse.rst index dc791c8d84d1..37b20170835d 100644 --- a/Documentation/dev-tools/sparse.rst +++ b/Documentation/dev-tools/sparse.rst @@ -53,25 +53,6 @@ sure that bitwise types don't get mixed up (little-endian vs big-endian vs cpu-endian vs whatever), and there the constant "0" really _is_ special. -Using sparse for lock checking ------------------------------- - -The following macros are undefined for gcc and defined during a sparse -run to use the "context" tracking feature of sparse, applied to -locking. These annotations tell sparse when a lock is held, with -regard to the annotated function's entry and exit. - -__must_hold - The specified lock is held on function entry and exit. - -__acquires - The specified lock is held on function exit, but not entry. - -__releases - The specified lock is held on function entry, but not exit. - -If the function enters and exits without the lock held, acquiring and -releasing the lock inside the function in a balanced way, no -annotation is needed. The three annotations above are for cases where -sparse would otherwise report a context imbalance. - Getting sparse -------------- diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h index 832727fea140..741f88e1177f 100644 --- a/include/linux/compiler-capability-analysis.h +++ b/include/linux/compiler-capability-analysis.h @@ -231,30 +231,8 @@ extern const struct __capability_##cap *name /* - * Common keywords for static capability analysis. Both Clang's capability - * analysis and Sparse's context tracking are currently supported. + * Common keywords for static capability analysis. */ -#ifdef __CHECKER__ - -/* Sparse context/lock checking support. */ -# define __must_hold(x) __attribute__((context(x,1,1))) -# define __must_not_hold(x) -# define __acquires(x) __attribute__((context(x,0,1))) -# define __cond_acquires(ret, x) __attribute__((context(x,0,-1))) -# define __releases(x) __attribute__((context(x,1,0))) -# define __acquire(x) __context__(x,1) -# define __release(x) __context__(x,-1) -# define __cond_lock(x, c) ((c) ? ({ __acquire(x); 1; }) : 0) -/* For Sparse, there's no distinction between exclusive and shared locks. */ -# define __must_hold_shared __must_hold -# define __acquires_shared __acquires -# define __cond_acquires_shared __cond_acquires -# define __releases_shared __releases -# define __acquire_shared __acquire -# define __release_shared __release -# define __cond_lock_shared __cond_acquire - -#else /* !__CHECKER__ */ /** * __must_hold() - function attribute, caller must hold exclusive capability @@ -263,7 +241,7 @@ * Function attribute declaring that the caller must hold the given capability * instance @x exclusively. */ -# define __must_hold(x) __requires_cap(x) +#define __must_hold(x) __requires_cap(x) /** * __must_not_hold() - function attribute, caller must not hold capability @@ -272,7 +250,7 @@ * Function attribute declaring that the caller must not hold the given * capability instance @x. 
*/ -# define __must_not_hold(x) __excludes_cap(x) +#define __must_not_hold(x) __excludes_cap(x) /** * __acquires() - function attribute, function acquires capability exclusively @@ -281,7 +259,7 @@ * Function attribute declaring that the function acquires the given * capability instance @x exclusively, but does not release it. */ -# define __acquires(x) __acquires_cap(x) +#define __acquires(x) __acquires_cap(x) /* * Clang's analysis does not care precisely about the value, only that it is @@ -308,7 +286,7 @@ * * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. */ -# define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x) +#define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x) /** * __releases() - function attribute, function releases a capability exclusively @@ -317,7 +295,7 @@ * Function attribute declaring that the function releases the given capability * instance @x exclusively. The capability must be held on entry. */ -# define __releases(x) __releases_cap(x) +#define __releases(x) __releases_cap(x) /** * __acquire() - function to acquire capability exclusively @@ -325,7 +303,7 @@ * * No-op function that acquires the given capability instance @x exclusively. */ -# define __acquire(x) __acquire_cap(x) +#define __acquire(x) __acquire_cap(x) /** * __release() - function to release capability exclusively @@ -333,7 +311,7 @@ * * No-op function that releases the given capability instance @x. */ -# define __release(x) __release_cap(x) +#define __release(x) __release_cap(x) /** * __cond_lock() - function that conditionally acquires a capability @@ -352,7 +330,7 @@ * * #define spin_trylock(l) __cond_lock(&lock, _spin_trylock(&lock)) */ -# define __cond_lock(x, c) __try_acquire_cap(x, c) +#define __cond_lock(x, c) __try_acquire_cap(x, c) /** * __must_hold_shared() - function attribute, caller must hold shared capability @@ -361,7 +339,7 @@ * Function attribute declaring that the caller must hold the given capability * instance @x with shared access. */ -# define __must_hold_shared(x) __requires_shared_cap(x) +#define __must_hold_shared(x) __requires_shared_cap(x) /** * __acquires_shared() - function attribute, function acquires capability shared @@ -370,7 +348,7 @@ * Function attribute declaring that the function acquires the given * capability instance @x with shared access, but does not release it. */ -# define __acquires_shared(x) __acquires_shared_cap(x) +#define __acquires_shared(x) __acquires_shared_cap(x) /** * __cond_acquires_shared() - function attribute, function conditionally @@ -384,7 +362,7 @@ * * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. */ -# define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared) +#define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared) /** * __releases_shared() - function attribute, function releases a @@ -394,7 +372,7 @@ * Function attribute declaring that the function releases the given capability * instance @x with shared access. The capability must be held on entry. */ -# define __releases_shared(x) __releases_shared_cap(x) +#define __releases_shared(x) __releases_shared_cap(x) /** * __acquire_shared() - function to acquire capability shared @@ -403,7 +381,7 @@ * No-op function that acquires the given capability instance @x with shared * access. 
 */
-# define __acquire_shared(x)	__acquire_shared_cap(x)
+#define __acquire_shared(x)	__acquire_shared_cap(x)
 
 /**
  * __release_shared() - function to release capability shared
@@ -412,7 +390,7 @@
  * No-op function that releases the given capability instance @x with shared
  * access.
  */
-# define __release_shared(x)	__release_shared_cap(x)
+#define __release_shared(x)	__release_shared_cap(x)
 
 /**
  * __cond_lock_shared() - function that conditionally acquires a capability
@@ -426,8 +404,6 @@
  * access, if the boolean expression @c is true. The result of @c is the return
  * value, to be able to create a capability-enabled interface.
  */
-# define __cond_lock_shared(x, c) __try_acquire_shared_cap(x, c)
-
-#endif /* __CHECKER__ */
+#define __cond_lock_shared(x, c) __try_acquire_shared_cap(x, c)
 
 #endif /* _LINUX_COMPILER_CAPABILITY_ANALYSIS_H */
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index ef8875c4e621..75a2e8c30a3f 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1183,20 +1183,7 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 extern int rcu_expedited;
 extern int rcu_normal;
 
-DEFINE_LOCK_GUARD_0(rcu,
-	do {
-		rcu_read_lock();
-		/*
-		 * sparse doesn't call the cleanup function,
-		 * so just release immediately and don't track
-		 * the context. We don't need to anyway, since
-		 * the whole point of the guard is to not need
-		 * the explicit unlock.
-		 */
-		__release(RCU);
-	} while (0),
-	rcu_read_unlock())
-
+DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock())
 DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU));
 
 #endif /* __LINUX_RCUPDATE_H */
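With the guard body reduced to plain rcu_read_lock()/rcu_read_unlock() and the capability attributes supplied via DECLARE_LOCK_GUARD_0_ATTRS, a reader can be written as below (a sketch; the pointer and its updaters are assumed):

  static int __rcu *shared_ptr;

  static int read_shared(void)
  {
          guard(rcu)();   /* rcu_read_lock() + shared RCU capability for this scope */
          return *rcu_dereference(shared_ptr);    /* assumes shared_ptr is non-NULL here */
  }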
From patchwork Tue Mar 4 09:21:22 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870219
Date: Tue, 4 Mar 2025 10:21:22 +0100
Message-ID: <20250304092417.2873893-24-elver@google.com>
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Subject: [PATCH v2 23/34] compiler-capability-analysis: Remove __cond_lock() function-like helper
From: Marco Elver
To: elver@google.com

As discussed in [1], removing __cond_lock() will improve the readability
of trylock code. Now that Sparse context tracking support has been
removed, we can also remove __cond_lock().
Change existing APIs to either drop __cond_lock() completely, or make use of the __cond_acquires() function attribute instead. In particular, spinlock and rwlock implementations required switching over to inline helpers rather than statement-expressions for their trylock_* variants. Link: https://lore.kernel.org/all/20250207082832.GU7145@noisy.programming.kicks-ass.net/ [1] Suggested-by: Peter Zijlstra Signed-off-by: Marco Elver --- v2: * New patch. --- .../dev-tools/capability-analysis.rst | 2 - Documentation/mm/process_addrs.rst | 6 +- .../net/wireless/intel/iwlwifi/iwl-trans.c | 4 +- .../net/wireless/intel/iwlwifi/iwl-trans.h | 6 +- .../wireless/intel/iwlwifi/pcie/internal.h | 5 +- .../net/wireless/intel/iwlwifi/pcie/trans.c | 4 +- include/linux/compiler-capability-analysis.h | 41 ------------- include/linux/mm.h | 33 ++-------- include/linux/rwlock.h | 11 +--- include/linux/rwlock_api_smp.h | 14 ++++- include/linux/rwlock_rt.h | 21 ++++--- include/linux/sched/signal.h | 14 +---- include/linux/spinlock.h | 45 +++++--------- include/linux/spinlock_api_smp.h | 20 ++++++ include/linux/spinlock_api_up.h | 61 ++++++++++++++++--- include/linux/spinlock_rt.h | 26 ++++---- kernel/signal.c | 4 +- kernel/time/posix-timers.c | 10 +-- lib/dec_and_lock.c | 8 +-- mm/memory.c | 4 +- mm/pgtable-generic.c | 19 +++--- tools/include/linux/compiler_types.h | 2 - 22 files changed, 160 insertions(+), 200 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 51ea94b0f4cc..d11e88ab9882 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -113,10 +113,8 @@ Keywords __releases_shared __acquire __release - __cond_lock __acquire_shared __release_shared - __cond_lock_shared capability_unsafe __capability_unsafe disable_capability_analysis enable_capability_analysis diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst index 81417fa2ed20..073480ba7585 100644 --- a/Documentation/mm/process_addrs.rst +++ b/Documentation/mm/process_addrs.rst @@ -540,7 +540,7 @@ To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock` or :c:func:`!pte_offset_map` can be used depending on stability requirements. These map the page table into kernel memory if required, take the RCU lock, and depending on variant, may also look up or acquire the PTE lock. -See the comment on :c:func:`!__pte_offset_map_lock`. +See the comment on :c:func:`!pte_offset_map_lock`. Atomicity ^^^^^^^^^ @@ -624,7 +624,7 @@ must be released via :c:func:`!pte_unmap_unlock`. .. note:: There are some variants on this, such as :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but for brevity we do not explore this. See the comment for - :c:func:`!__pte_offset_map_lock` for more details. + :c:func:`!pte_offset_map_lock` for more details. When modifying data in ranges we typically only wish to allocate higher page tables as necessary, using these locks to avoid races or overwriting anything, @@ -643,7 +643,7 @@ At the leaf page table, that is the PTE, we can't entirely rely on this pattern as we have separate PMD and PTE locks and a THP collapse for instance might have eliminated the PMD entry as well as the PTE from under us. 
-This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry +This is why :c:func:`!pte_offset_map_lock` locklessly retrieves the PMD entry for the PTE, carefully checking it is as expected, before acquiring the PTE-specific lock, and then *again* checking that the PMD entry is as expected. diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c index 49c8507d1a6b..64394f6dc156 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c @@ -528,11 +528,11 @@ int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs, } IWL_EXPORT_SYMBOL(iwl_trans_read_config32); -bool _iwl_trans_grab_nic_access(struct iwl_trans *trans) +bool iwl_trans_grab_nic_access(struct iwl_trans *trans) { return iwl_trans_pcie_grab_nic_access(trans); } -IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access); +IWL_EXPORT_SYMBOL(iwl_trans_grab_nic_access); void __releases(nic_access) iwl_trans_release_nic_access(struct iwl_trans *trans) diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h index f6234065dbdd..8b37fd6c5221 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h @@ -1133,11 +1133,7 @@ int iwl_trans_sw_reset(struct iwl_trans *trans, bool retake_ownership); void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, u32 mask, u32 value); -bool _iwl_trans_grab_nic_access(struct iwl_trans *trans); - -#define iwl_trans_grab_nic_access(trans) \ - __cond_lock(nic_access, \ - likely(_iwl_trans_grab_nic_access(trans))) +bool iwl_trans_grab_nic_access(struct iwl_trans *trans); void __releases(nic_access) iwl_trans_release_nic_access(struct iwl_trans *trans); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index 856b7e9f717d..84ce40b2ec5e 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -558,10 +558,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans); void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions, struct device *dev); -bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans); -#define _iwl_trans_pcie_grab_nic_access(trans) \ - __cond_lock(nic_access_nobh, \ - likely(__iwl_trans_pcie_grab_nic_access(trans))) +bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans); void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev); void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c index c917ed4c19bc..caed7d7434f3 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c @@ -2405,7 +2405,7 @@ EXPORT_SYMBOL(iwl_trans_pcie_reset); * This version doesn't disable BHs but rather assumes they're * already disabled. 
*/ -bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) +bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) { int ret; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -2488,7 +2488,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) bool ret; local_bh_disable(); - ret = __iwl_trans_pcie_grab_nic_access(trans); + ret = _iwl_trans_pcie_grab_nic_access(trans); if (ret) { /* keep BHs disabled until iwl_trans_pcie_release_nic_access */ return ret; diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h index 741f88e1177f..c10938d2f102 100644 --- a/include/linux/compiler-capability-analysis.h +++ b/include/linux/compiler-capability-analysis.h @@ -93,12 +93,6 @@ __attribute__((overloadable)) __no_capability_analysis __acquires_cap(var) { } \ static __always_inline void __acquire_shared_cap(const struct name *var) \ __attribute__((overloadable)) __no_capability_analysis __acquires_shared_cap(var) { } \ - static __always_inline bool __try_acquire_cap(const struct name *var, bool ret) \ - __attribute__((overloadable)) __no_capability_analysis __try_acquires_cap(1, var) \ - { return ret; } \ - static __always_inline bool __try_acquire_shared_cap(const struct name *var, bool ret) \ - __attribute__((overloadable)) __no_capability_analysis __try_acquires_shared_cap(1, var) \ - { return ret; } \ static __always_inline void __release_cap(const struct name *var) \ __attribute__((overloadable)) __no_capability_analysis __releases_cap(var) { } \ static __always_inline void __release_shared_cap(const struct name *var) \ @@ -156,8 +150,6 @@ # define __requires_shared_cap(var) # define __acquire_cap(var) do { } while (0) # define __acquire_shared_cap(var) do { } while (0) -# define __try_acquire_cap(var, ret) (ret) -# define __try_acquire_shared_cap(var, ret) (ret) # define __release_cap(var) do { } while (0) # define __release_shared_cap(var) do { } while (0) # define __assert_cap(var) do { (void)(var); } while (0) @@ -313,25 +305,6 @@ */ #define __release(x) __release_cap(x) -/** - * __cond_lock() - function that conditionally acquires a capability - * exclusively - * @x: capability instance pinter - * @c: boolean expression - * - * Return: result of @c - * - * No-op function that conditionally acquires capability instance @x - * exclusively, if the boolean expression @c is true. The result of @c is the - * return value, to be able to create a capability-enabled interface; for - * example: - * - * .. code-block:: c - * - * #define spin_trylock(l) __cond_lock(&lock, _spin_trylock(&lock)) - */ -#define __cond_lock(x, c) __try_acquire_cap(x, c) - /** * __must_hold_shared() - function attribute, caller must hold shared capability * @x: capability instance pointer @@ -392,18 +365,4 @@ */ #define __release_shared(x) __release_shared_cap(x) -/** - * __cond_lock_shared() - function that conditionally acquires a capability - * shared - * @x: capability instance pinter - * @c: boolean expression - * - * Return: result of @c - * - * No-op function that conditionally acquires capability instance @x with shared - * access, if the boolean expression @c is true. The result of @c is the return - * value, to be able to create a capability-enabled interface. 
- */ -#define __cond_lock_shared(x, c) __try_acquire_shared_cap(x, c) - #endif /* _LINUX_COMPILER_CAPABILITY_ANALYSIS_H */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 7b1068ddcbb7..dbf4eb414bd1 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2732,15 +2732,8 @@ static inline int pte_devmap(pte_t pte) } #endif -extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr, - spinlock_t **ptl); -static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, - spinlock_t **ptl) -{ - pte_t *ptep; - __cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl)); - return ptep; -} +extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, + spinlock_t **ptl); #ifdef __PAGETABLE_P4D_FOLDED static inline int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, @@ -3023,31 +3016,15 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) return true; } -pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); -static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, - pmd_t *pmdvalp) -{ - pte_t *pte; +pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); - __cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); - return pte; -} static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) { return __pte_offset_map(pmd, addr, NULL); } -pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, - unsigned long addr, spinlock_t **ptlp); -static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, - unsigned long addr, spinlock_t **ptlp) -{ - pte_t *pte; - - __cond_lock(RCU, __cond_lock(*ptlp, - pte = __pte_offset_map_lock(mm, pmd, addr, ptlp))); - return pte; -} +pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, + unsigned long addr, spinlock_t **ptlp); pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd, unsigned long addr, spinlock_t **ptlp); diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 3c8971201ec7..701de800c36e 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -50,8 +50,8 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_lock_shared(lock, _raw_read_trylock(lock)) -#define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) +#define read_trylock(lock) _raw_read_trylock(lock) +#define write_trylock(lock) _raw_write_trylock(lock) #define write_lock(lock) _raw_write_lock(lock) #define read_lock(lock) _raw_read_lock(lock) @@ -113,12 +113,7 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) -#define write_trylock_irqsave(lock, flags) \ - __cond_lock(lock, ({ \ - local_irq_save(flags); \ - _raw_write_trylock(lock) ? 
\ - 1 : ({ local_irq_restore(flags); 0; }); \ - })) +#define write_trylock_irqsave(lock, flags) _raw_write_trylock_irqsave(lock, &(flags)) #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 3e975105a606..b289c3089ab7 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -26,8 +26,8 @@ unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) __acquires(lock); unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock) __acquires(lock); -int __lockfunc _raw_read_trylock(rwlock_t *lock); -int __lockfunc _raw_write_trylock(rwlock_t *lock); +int __lockfunc _raw_read_trylock(rwlock_t *lock) __cond_acquires_shared(true, lock); +int __lockfunc _raw_write_trylock(rwlock_t *lock) __cond_acquires(true, lock); void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock); @@ -41,6 +41,16 @@ void __lockfunc _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) __releases(lock); +static inline bool _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) +{ + local_irq_save(*flags); + if (_raw_write_trylock(lock)) + return true; + local_irq_restore(*flags); + return false; +} + #ifdef CONFIG_INLINE_READ_LOCK #define _raw_read_lock(lock) __raw_read_lock(lock) #endif diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 742172a06702..dc34b48a6158 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -26,11 +26,11 @@ do { \ } while (0) extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); -extern int rt_read_trylock(rwlock_t *rwlock); +extern int rt_read_trylock(rwlock_t *rwlock) __cond_acquires_shared(true, rwlock); extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock); -extern int rt_write_trylock(rwlock_t *rwlock); +extern int rt_write_trylock(rwlock_t *rwlock) __cond_acquires(true, rwlock); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); static __always_inline void read_lock(rwlock_t *rwlock) @@ -59,7 +59,7 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define read_trylock(lock) __cond_lock_shared(lock, rt_read_trylock(lock)) +#define read_trylock(lock) rt_read_trylock(lock) static __always_inline void read_unlock(rwlock_t *rwlock) __releases_shared(rwlock) @@ -123,14 +123,15 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) +#define write_trylock(lock) rt_write_trylock(lock) -#define write_trylock_irqsave(lock, flags) \ - __cond_lock(lock, ({ \ - typecheck(unsigned long, flags); \ - flags = 0; \ - rt_write_trylock(lock); \ - })) +static __always_inline bool _write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags) + __cond_acquires(true, rwlock) +{ + *flags = 0; + return rt_write_trylock(rwlock); +} +#define write_trylock_irqsave(lock, flags) _write_trylock_irqsave(lock, &(flags)) static __always_inline void write_unlock(rwlock_t *rwlock) __releases(rwlock) diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h index d5d03d919df8..82c486b67e92 100644 
--- a/include/linux/sched/signal.h +++ b/include/linux/sched/signal.h @@ -732,18 +732,8 @@ static inline int thread_group_empty(struct task_struct *p) #define delay_group_leader(p) \ (thread_group_leader(p) && !thread_group_empty(p)) -extern struct sighand_struct *__lock_task_sighand(struct task_struct *task, - unsigned long *flags); - -static inline struct sighand_struct *lock_task_sighand(struct task_struct *task, - unsigned long *flags) -{ - struct sighand_struct *ret; - - ret = __lock_task_sighand(task, flags); - (void)__cond_lock(&task->sighand->siglock, ret); - return ret; -} +extern struct sighand_struct *lock_task_sighand(struct task_struct *task, + unsigned long *flags); static inline void unlock_task_sighand(struct task_struct *task, unsigned long *flags) diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 12369fa9e3bb..3cfd85b25648 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -213,7 +213,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) * various methods are defined as nops in the case they are not * required. */ -#define raw_spin_trylock(lock) __cond_lock(lock, _raw_spin_trylock(lock)) +#define raw_spin_trylock(lock) _raw_spin_trylock(lock) #define raw_spin_lock(lock) _raw_spin_lock(lock) @@ -284,22 +284,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) } while (0) #define raw_spin_unlock_bh(lock) _raw_spin_unlock_bh(lock) -#define raw_spin_trylock_bh(lock) \ - __cond_lock(lock, _raw_spin_trylock_bh(lock)) +#define raw_spin_trylock_bh(lock) _raw_spin_trylock_bh(lock) -#define raw_spin_trylock_irq(lock) \ - __cond_lock(lock, ({ \ - local_irq_disable(); \ - _raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_enable(); 0; }); \ - })) +#define raw_spin_trylock_irq(lock) _raw_spin_trylock_irq(lock) -#define raw_spin_trylock_irqsave(lock, flags) \ - __cond_lock(lock, ({ \ - local_irq_save(flags); \ - _raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ - })) +#define raw_spin_trylock_irqsave(lock, flags) _raw_spin_trylock_irqsave(lock, &(flags)) #ifndef CONFIG_PREEMPT_RT /* Include rwlock functions for !RT */ @@ -431,8 +420,12 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock) return raw_spin_trylock_irq(&lock->rlock); } -#define spin_trylock_irqsave(lock, flags) \ - __cond_lock(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags)) +static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) __no_capability_analysis +{ + return raw_spin_trylock_irqsave(spinlock_check(lock), *flags); +} +#define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags)) /** * spin_is_locked() - Check whether a spinlock is locked. @@ -510,23 +503,17 @@ static inline int rwlock_needbreak(rwlock_t *lock) * Decrements @atomic by 1. If the result is 0, returns true and locks * @lock. Returns false for all other cases. 
*/ -extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock); -#define atomic_dec_and_lock(atomic, lock) \ - __cond_lock(lock, _atomic_dec_and_lock(atomic, lock)) +extern int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) __cond_acquires(true, lock); extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, - unsigned long *flags); -#define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ - __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) + unsigned long *flags) __cond_acquires(true, lock); +#define atomic_dec_and_lock_irqsave(atomic, lock, flags) _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)) -extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); -#define atomic_dec_and_raw_lock(atomic, lock) \ - __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) +extern int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) __cond_acquires(true, lock); extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, - unsigned long *flags); -#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ - __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) + unsigned long *flags) __cond_acquires(true, lock); +#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)) int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, size_t max_size, unsigned int cpu_mult, diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h index a77b76003ebb..1b1896595cbc 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -95,6 +95,26 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock) return 0; } +static __always_inline bool _raw_spin_trylock_irq(raw_spinlock_t *lock) + __cond_acquires(true, lock) +{ + local_irq_disable(); + if (_raw_spin_trylock(lock)) + return true; + local_irq_enable(); + return false; +} + +static __always_inline bool _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) +{ + local_irq_save(*flags); + if (_raw_spin_trylock(lock)) + return true; + local_irq_restore(*flags); + return false; +} + /* * If lockdep is enabled then we use the non-preemption spin-ops * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h index 018f5aabc1be..a9d5c7c66e03 100644 --- a/include/linux/spinlock_api_up.h +++ b/include/linux/spinlock_api_up.h @@ -24,14 +24,11 @@ * flags straight, to suppress compiler warnings of unused lock * variables, and to add the proper checker annotations: */ -#define ___LOCK_void(lock) \ - do { (void)(lock); } while (0) - #define ___LOCK_(lock) \ - do { __acquire(lock); ___LOCK_void(lock); } while (0) + do { __acquire(lock); (void)(lock); } while (0) #define ___LOCK_shared(lock) \ - do { __acquire_shared(lock); ___LOCK_void(lock); } while (0) + do { __acquire_shared(lock); (void)(lock); } while (0) #define __LOCK(lock, ...) 
\ do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0) @@ -78,10 +75,56 @@ #define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) #define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags, shared) #define _raw_write_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_spin_trylock(lock) ({ __LOCK(lock, void); 1; }) -#define _raw_read_trylock(lock) ({ __LOCK(lock, void); 1; }) -#define _raw_write_trylock(lock) ({ __LOCK(lock, void); 1; }) -#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock, void); 1; }) + +static __always_inline int _raw_spin_trylock(raw_spinlock_t *lock) + __cond_acquires(true, lock) +{ + __LOCK(lock); + return 1; +} + +static __always_inline int _raw_spin_trylock_bh(raw_spinlock_t *lock) + __cond_acquires(true, lock) +{ + __LOCK_BH(lock); + return 1; +} + +static __always_inline int _raw_spin_trylock_irq(raw_spinlock_t *lock) + __cond_acquires(true, lock) +{ + __LOCK_IRQ(lock); + return 1; +} + +static __always_inline int _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) +{ + __LOCK_IRQSAVE(lock, *(flags)); + return 1; +} + +static __always_inline int _raw_read_trylock(rwlock_t *lock) + __cond_acquires_shared(true, lock) +{ + __LOCK(lock, shared); + return 1; +} + +static __always_inline int _raw_write_trylock(rwlock_t *lock) + __cond_acquires(true, lock) +{ + __LOCK(lock); + return 1; +} + +static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) +{ + __LOCK_IRQSAVE(lock, *(flags)); + return 1; +} + #define _raw_spin_unlock(lock) __UNLOCK(lock) #define _raw_read_unlock(lock) __UNLOCK(lock, shared) #define _raw_write_unlock(lock) __UNLOCK(lock) diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index 1f55601e1321..d11ecb0ed571 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -37,8 +37,8 @@ extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock) extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock); extern void rt_spin_unlock(spinlock_t *lock) __releases(lock); extern void rt_spin_lock_unlock(spinlock_t *lock); -extern int rt_spin_trylock_bh(spinlock_t *lock); -extern int rt_spin_trylock(spinlock_t *lock); +extern int rt_spin_trylock_bh(spinlock_t *lock) __cond_acquires(true, lock); +extern int rt_spin_trylock(spinlock_t *lock) __cond_acquires(true, lock); static __always_inline void spin_lock(spinlock_t *lock) __acquires(lock) @@ -130,21 +130,19 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, rt_spin_unlock(lock); } -#define spin_trylock(lock) \ - __cond_lock(lock, rt_spin_trylock(lock)) +#define spin_trylock(lock) rt_spin_trylock(lock) -#define spin_trylock_bh(lock) \ - __cond_lock(lock, rt_spin_trylock_bh(lock)) +#define spin_trylock_bh(lock) rt_spin_trylock_bh(lock) -#define spin_trylock_irq(lock) \ - __cond_lock(lock, rt_spin_trylock(lock)) +#define spin_trylock_irq(lock) rt_spin_trylock(lock) -#define spin_trylock_irqsave(lock, flags) \ - __cond_lock(lock, ({ \ - typecheck(unsigned long, flags); \ - flags = 0; \ - rt_spin_trylock(lock); \ - })) +static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) + __cond_acquires(true, lock) +{ + *flags = 0; + return rt_spin_trylock(lock); +} +#define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags)) #define spin_is_contended(lock) (((void)(lock), 0)) diff 
--git a/kernel/signal.c b/kernel/signal.c index 875e97f6205a..8ae095eb1b78 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -1354,8 +1354,8 @@ int zap_other_threads(struct task_struct *p) return count; } -struct sighand_struct *__lock_task_sighand(struct task_struct *tsk, - unsigned long *flags) +struct sighand_struct *lock_task_sighand(struct task_struct *tsk, + unsigned long *flags) { struct sighand_struct *sighand; diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c index 1b675aee99a9..8d84409fb3e6 100644 --- a/kernel/time/posix-timers.c +++ b/kernel/time/posix-timers.c @@ -59,14 +59,6 @@ static const struct k_clock clock_realtime, clock_monotonic; #error "SIGEV_THREAD_ID must not share bit with other SIGEV values!" #endif -static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags); - -#define lock_timer(tid, flags) \ -({ struct k_itimer *__timr; \ - __cond_lock(&__timr->it_lock, __timr = __lock_timer(tid, flags)); \ - __timr; \ -}) - static int hash(struct signal_struct *sig, unsigned int nr) { return hash_32(hash32_ptr(sig) ^ nr, HASH_BITS(posix_timers_hashtable)); @@ -507,7 +499,7 @@ COMPAT_SYSCALL_DEFINE3(timer_create, clockid_t, which_clock, } #endif -static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags) +static struct k_itimer *lock_timer(timer_t timer_id, unsigned long *flags) { struct k_itimer *timr; diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c index 1dcca8f2e194..8c7c398fd770 100644 --- a/lib/dec_and_lock.c +++ b/lib/dec_and_lock.c @@ -18,7 +18,7 @@ * because the spin-lock and the decrement must be * "atomic". */ -int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) +int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) { /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ if (atomic_add_unless(atomic, -1, 1)) @@ -32,7 +32,7 @@ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) return 0; } -EXPORT_SYMBOL(_atomic_dec_and_lock); +EXPORT_SYMBOL(atomic_dec_and_lock); int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, unsigned long *flags) @@ -50,7 +50,7 @@ int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, } EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave); -int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) +int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) { /* Subtract 1 from counter unless that drops it to 0 (ie. 
it was 1) */ if (atomic_add_unless(atomic, -1, 1)) @@ -63,7 +63,7 @@ int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) raw_spin_unlock(lock); return 0; } -EXPORT_SYMBOL(_atomic_dec_and_raw_lock); +EXPORT_SYMBOL(atomic_dec_and_raw_lock); int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, unsigned long *flags) diff --git a/mm/memory.c b/mm/memory.c index b4d3d4893267..3bbcdb2f3f34 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2076,8 +2076,8 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr) return pmd; } -pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr, - spinlock_t **ptl) +pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, + spinlock_t **ptl) { pmd_t *pmd = walk_to_pmd(mm, addr); diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index 5a882f2b10f9..cc202648c8d8 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -279,7 +279,7 @@ static unsigned long pmdp_get_lockless_start(void) { return 0; } static void pmdp_get_lockless_end(unsigned long irqflags) { } #endif -pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) +pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) { unsigned long irqflags; pmd_t pmdval; @@ -331,13 +331,12 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd, } /* - * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation - * __pte_offset_map_lock() below, is usually called with the pmd pointer for - * addr, reached by walking down the mm's pgd, p4d, pud for addr: either while - * holding mmap_lock or vma lock for read or for write; or in truncate or rmap - * context, while holding file's i_mmap_lock or anon_vma lock for read (or for - * write). In a few cases, it may be used with pmd pointing to a pmd_t already - * copied to or constructed on the stack. + * pte_offset_map_lock(mm, pmd, addr, ptlp) is usually called with the pmd + * pointer for addr, reached by walking down the mm's pgd, p4d, pud for addr: + * either while holding mmap_lock or vma lock for read or for write; or in + * truncate or rmap context, while holding file's i_mmap_lock or anon_vma lock + * for read (or for write). In a few cases, it may be used with pmd pointing to + * a pmd_t already copied to or constructed on the stack. * * When successful, it returns the pte pointer for addr, with its page table * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent @@ -388,8 +387,8 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd, * table, and may not use RCU at all: "outsiders" like khugepaged should avoid * pte_offset_map() and co once the vma is detached from mm or mm_users is zero. */ -pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, - unsigned long addr, spinlock_t **ptlp) +pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, + unsigned long addr, spinlock_t **ptlp) { spinlock_t *ptl; pmd_t pmdval; diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h index d09f9dc172a4..067a5b4e0f7b 100644 --- a/tools/include/linux/compiler_types.h +++ b/tools/include/linux/compiler_types.h @@ -20,7 +20,6 @@ # define __releases(x) __attribute__((context(x,1,0))) # define __acquire(x) __context__(x,1) # define __release(x) __context__(x,-1) -# define __cond_lock(x,c) ((c) ? 
({ __acquire(x); 1; }) : 0)
 #else /* __CHECKER__ */
 /* context/locking */
 # define __must_hold(x)
@@ -28,7 +27,6 @@
 # define __releases(x)
 # define __acquire(x)	(void)0
 # define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
 #endif /* __CHECKER__ */
 
 /* Compiler specific macros. */
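For reference, the shape of this conversion as a minimal sketch (assuming the
capability-analysis attributes introduced earlier in this series; the names
my_lock/_my_trylock are illustrative, not kernel API):

#include <linux/compiler-capability-analysis.h>

struct my_lock;
extern bool _my_trylock(struct my_lock *l);

/*
 * Before: a statement-expression wrapper so the checker saw a conditional
 * acquire:
 *
 *	#define my_trylock(l) __cond_lock(l, _my_trylock(l))
 *
 * After: the contract is a function attribute on an inline helper; the
 * analysis knows 'l' is held iff the helper returned true.
 */
static __always_inline bool my_trylock(struct my_lock *l)
	__cond_acquires(true, l)
{
	return _my_trylock(l);
}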
From patchwork Tue Mar 4 09:21:23 2025
Date: Tue, 4 Mar 2025 10:21:23 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-25-elver@google.com>
Subject: [PATCH v2 24/34] compiler-capability-analysis: Introduce header suppressions
From: Marco Elver
To: elver@google.com

While we can opt in individual subsystems that add the required
annotations, such subsystems inevitably include headers from other
subsystems which may not yet have the right annotations, which then
results in false-positive warnings.

Making all common headers compatible by adding annotations currently
requires an excessive number of __no_capability_analysis annotations, or
carefully analyzing non-trivial cases to add the correct annotations.
While this is desirable long-term, providing an incremental path causes
less churn and fewer headaches for maintainers not yet interested in
dealing with such warnings.

Rather than clutter headers unnecessarily and mandate that all subsystem
maintainers keep their headers working with capability analysis,
suppress all -Wthread-safety warnings in headers, and explicitly opt in
only those headers with capability-enabled primitives.

This bumps the required Clang version to version 20+.

With this in place, we can start enabling the analysis on more complex
subsystems in subsequent changes.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst       |  2 ++
 lib/Kconfig.debug                           |  4 ++-
 scripts/Makefile.capability-analysis        |  4 +++
 scripts/capability-analysis-suppression.txt | 32 +++++++++++++++++++
 4 files changed, 41 insertions(+), 1 deletion(-)
 create mode 100644 scripts/capability-analysis-suppression.txt

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index d11e88ab9882..5c87d7659995 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -17,6 +17,8 @@ features. To enable for Clang, configure the kernel with::
 
 	CONFIG_WARN_CAPABILITY_ANALYSIS=y
 
+The feature requires Clang 20 or later.
+
 The analysis is *opt-in by default*, and requires declaring which modules
 and subsystems should be analyzed in the respective `Makefile`::
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8abaf7dab3f8..8b13353517a9 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -605,7 +605,7 @@ config DEBUG_FORCE_WEAK_PER_CPU
 
 config WARN_CAPABILITY_ANALYSIS
 	bool "Compiler capability-analysis warnings"
-	depends on CC_IS_CLANG && $(cc-option,-Wthread-safety -fexperimental-late-parse-attributes)
+	depends on CC_IS_CLANG && $(cc-option,-Wthread-safety -fexperimental-late-parse-attributes --warning-suppression-mappings=/dev/null)
 	# Branch profiling re-defines "if", which messes with the compiler's
 	# ability to analyze __cond_acquires(..), resulting in false positives.
 	depends on !TRACE_BRANCH_PROFILING
@@ -619,6 +619,8 @@ config WARN_CAPABILITY_ANALYSIS
 	  the original name of the feature; it was later expanded to be a
 	  generic "Capability Analysis" framework.
 
+	  Requires Clang 20 or later.
+
 	  Produces warnings by default. Select CONFIG_WERROR if you wish to
 	  turn these warnings into errors.
 
diff --git a/scripts/Makefile.capability-analysis b/scripts/Makefile.capability-analysis
index b7b36cca47f4..2a3e493a9d06 100644
--- a/scripts/Makefile.capability-analysis
+++ b/scripts/Makefile.capability-analysis
@@ -4,4 +4,8 @@ capability-analysis-cflags := -DWARN_CAPABILITY_ANALYSIS \
 	-fexperimental-late-parse-attributes -Wthread-safety \
 	$(call cc-option,-Wthread-safety-pointer)
 
+ifndef CONFIG_WARN_CAPABILITY_ANALYSIS_ALL
+capability-analysis-cflags += --warning-suppression-mappings=$(srctree)/scripts/capability-analysis-suppression.txt
+endif
+
 export CFLAGS_CAPABILITY_ANALYSIS := $(capability-analysis-cflags)
diff --git a/scripts/capability-analysis-suppression.txt b/scripts/capability-analysis-suppression.txt
new file mode 100644
index 000000000000..0a5392fee710
--- /dev/null
+++ b/scripts/capability-analysis-suppression.txt
@@ -0,0 +1,32 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# The suppressions file should only match common paths such as header files.
+# For individual subsystems use Makefile directive CAPABILITY_ANALYSIS := [yn].
+#
+# The suppressions are ignored when CONFIG_WARN_CAPABILITY_ANALYSIS_ALL is
+# selected.
+
+[thread-safety]
+src:*arch/*/include/*
+src:*include/acpi/*
+src:*include/asm-generic/*
+src:*include/linux/*
+src:*include/net/*
+
+# Opt-in headers:
+src:*include/linux/bit_spinlock.h=emit
+src:*include/linux/cleanup.h=emit
+src:*include/linux/kref.h=emit
+src:*include/linux/list*.h=emit
+src:*include/linux/local_lock*.h=emit
+src:*include/linux/lockdep.h=emit
+src:*include/linux/mutex*.h=emit
+src:*include/linux/rcupdate.h=emit
+src:*include/linux/refcount.h=emit
+src:*include/linux/rhashtable.h=emit
+src:*include/linux/rwlock*.h=emit
+src:*include/linux/rwsem.h=emit
+src:*include/linux/seqlock*.h=emit
+src:*include/linux/spinlock*.h=emit
+src:*include/linux/srcu.h=emit
+src:*include/linux/ww_mutex.h=emit
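To illustrate the false positives this avoids, a minimal sketch of a
hypothetical, unannotated header helper (not from the tree): an inline
function that acquires a lock it does not release trips -Wthread-safety in
every translation unit that includes the header:

/* hypothetical_header.h -- illustrative only */
#include <linux/spinlock.h>

struct obj {
	spinlock_t lock;
};

/*
 * Missing __acquires(&o->lock): Clang reports that 'o->lock' is still
 * held at the end of the function -- a false positive for this
 * acquire/release-style API, repeated wherever the header is included.
 * The suppression mappings above silence such warnings until a header is
 * properly annotated and opted back in with "=emit".
 */
static inline void obj_enter(struct obj *o)
{
	spin_lock(&o->lock);
}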
From patchwork Tue Mar 4 09:21:24 2025
Date: Tue, 4 Mar 2025 10:21:24 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-26-elver@google.com>
Subject: [PATCH v2 25/34] compiler: Let data_race() imply disabled capability analysis
From: Marco Elver
To: elver@google.com

Many patterns that involve data-racy accesses often deliberately ignore
normal synchronization rules to avoid taking a lock. If we have a
lock-guarded variable on which we do a lock-less data-racy access,
rather than having to write capability_unsafe(data_race(..)), simply
make the data_race(..) macro imply capability-unsafety. The data_race()
macro already denotes the intent that something subtly unsafe is about
to happen, so it should be clear enough as-is.

Signed-off-by: Marco Elver
---
v2:
 * New patch.
---
 include/linux/compiler.h       | 2 ++
 lib/test_capability-analysis.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 155385754824..c837464369df 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -186,7 +186,9 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 #define data_race(expr)							\
 ({									\
 	__kcsan_disable_current();					\
+	disable_capability_analysis();					\
 	__auto_type __v = (expr);					\
+	enable_capability_analysis();					\
 	__kcsan_enable_current();					\
 	__v;								\
 })
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 853fdc53840f..13e7732c38a2 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -92,6 +92,8 @@ static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data
 {
 	unsigned long flags;
 
+	data_race(d->counter++); /* no warning */
+
 	if (raw_spin_trylock_irq(&d->lock)) {
 		d->counter++;
 		raw_spin_unlock_irq(&d->lock);
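The effect on callers, as a minimal sketch (assuming a __guarded_by()
annotated field as introduced earlier in this series; names are
illustrative):

#include <linux/spinlock.h>

struct stats {
	spinlock_t lock;
	unsigned long hits __guarded_by(&lock);
};

/*
 * Lock-less diagnostic read: previously this would have required
 * capability_unsafe(data_race(s->hits)); with this patch, data_race()
 * alone silences both KCSAN and the capability analysis.
 */
static unsigned long stats_peek(struct stats *s)
{
	return data_race(s->hits);
}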
From patchwork Tue Mar 4 09:21:25 2025
Date: Tue, 4 Mar 2025 10:21:25 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-27-elver@google.com>
Subject: [PATCH v2 26/34] kfence: Enable capability analysis
From: Marco Elver
To: elver@google.com

Enable capability analysis for the KFENCE subsystem. Notably,
kfence_handle_page_fault() required a minor restructure, which also
fixed a subtle race; arguably that function is more readable now.

Signed-off-by: Marco Elver
---
v2:
 * Remove disable/enable_capability_analysis() around headers.
 * Use __capability_unsafe() instead of __no_capability_analysis.
--- mm/kfence/Makefile | 2 ++ mm/kfence/core.c | 20 +++++++++++++------- mm/kfence/kfence.h | 14 ++++++++------ mm/kfence/report.c | 4 ++-- 4 files changed, 25 insertions(+), 15 deletions(-) diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile index 2de2a58d11a1..b3640bdc3c69 100644 --- a/mm/kfence/Makefile +++ b/mm/kfence/Makefile @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +CAPABILITY_ANALYSIS := y + obj-y := core.o report.o CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 102048821c22..f75c3c11c0be 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -132,8 +132,8 @@ struct kfence_metadata *kfence_metadata __read_mostly; static struct kfence_metadata *kfence_metadata_init __read_mostly; /* Freelist with available objects. */ -static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist); -static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ +DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ +static struct list_head kfence_freelist __guarded_by(&kfence_freelist_lock) = LIST_HEAD_INIT(kfence_freelist); /* * The static key to set up a KFENCE allocation; or if static keys are not used @@ -253,6 +253,7 @@ static bool kfence_unprotect(unsigned long addr) } static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta) + __must_hold(&meta->lock) { unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2; unsigned long pageaddr = (unsigned long)&__kfence_pool[offset]; @@ -288,6 +289,7 @@ static inline bool kfence_obj_allocated(const struct kfence_metadata *meta) static noinline void metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next, unsigned long *stack_entries, size_t num_stack_entries) + __must_hold(&meta->lock) { struct kfence_track *track = next == KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_track; @@ -485,7 +487,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g alloc_covered_add(alloc_stack_hash, 1); /* Set required slab fields. */ - slab = virt_to_slab((void *)meta->addr); + slab = virt_to_slab(addr); slab->slab_cache = cache; slab->objects = 1; @@ -514,6 +516,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie) { struct kcsan_scoped_access assert_page_exclusive; + u32 alloc_stack_hash; unsigned long flags; bool init; @@ -546,9 +549,10 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z /* Mark the object as freed. */ metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0); init = slab_want_init_on_free(meta->cache); + alloc_stack_hash = meta->alloc_stack_hash; raw_spin_unlock_irqrestore(&meta->lock, flags); - alloc_covered_add(meta->alloc_stack_hash, -1); + alloc_covered_add(alloc_stack_hash, -1); /* Check canary bytes for memory corruption. */ check_canary(meta); @@ -593,6 +597,7 @@ static void rcu_guarded_free(struct rcu_head *h) * which partial initialization succeeded. 
 */
 static unsigned long kfence_init_pool(void)
+	__capability_unsafe(/* constructor */)
 {
 	unsigned long addr;
 	struct page *pages;
@@ -1192,6 +1197,7 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 {
 	const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
 	struct kfence_metadata *to_report = NULL;
+	unsigned long unprotected_page = 0;
 	enum kfence_error_type error_type;
 	unsigned long flags;
@@ -1225,9 +1231,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 		if (!to_report)
 			goto out;
 
-		raw_spin_lock_irqsave(&to_report->lock, flags);
-		to_report->unprotected_page = addr;
 		error_type = KFENCE_ERROR_OOB;
+		unprotected_page = addr;
 
 		/*
 		 * If the object was freed before we took the lock we can still
@@ -1239,7 +1244,6 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 		if (!to_report)
 			goto out;
 
-		raw_spin_lock_irqsave(&to_report->lock, flags);
 		error_type = KFENCE_ERROR_UAF;
 		/*
 		 * We may race with __kfence_alloc(), and it is possible that a
@@ -1251,6 +1255,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 
 out:
 	if (to_report) {
+		raw_spin_lock_irqsave(&to_report->lock, flags);
+		to_report->unprotected_page = unprotected_page;
 		kfence_report_error(addr, is_write, regs, to_report, error_type);
 		raw_spin_unlock_irqrestore(&to_report->lock, flags);
 	} else {
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index dfba5ea06b01..f9caea007246 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -34,6 +34,8 @@
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64
 
+extern raw_spinlock_t kfence_freelist_lock;
+
 /* KFENCE object states. */
 enum kfence_object_state {
 	KFENCE_OBJECT_UNUSED,		/* Object is unused. */
@@ -53,7 +55,7 @@ struct kfence_track {
 
 /* KFENCE metadata per guarded allocation. */
 struct kfence_metadata {
-	struct list_head list;		/* Freelist node; access under kfence_freelist_lock. */
+	struct list_head list __guarded_by(&kfence_freelist_lock);	/* Freelist node. */
 	struct rcu_head rcu_head;	/* For delayed freeing. */
 
 	/*
@@ -91,13 +93,13 @@ struct kfence_metadata {
 	 * In case of an invalid access, the page that was unprotected; we
 	 * optimistically only store one address.
 	 */
-	unsigned long unprotected_page;
+	unsigned long unprotected_page __guarded_by(&lock);
 
 	/* Allocation and free stack information. */
-	struct kfence_track alloc_track;
-	struct kfence_track free_track;
+	struct kfence_track alloc_track __guarded_by(&lock);
+	struct kfence_track free_track __guarded_by(&lock);
 
 	/* For updating alloc_covered on frees. */
-	u32 alloc_stack_hash;
+	u32 alloc_stack_hash __guarded_by(&lock);
 
 #ifdef CONFIG_MEMCG
 	struct slabobj_ext obj_exts;
 #endif
@@ -141,6 +143,6 @@ enum kfence_error_type {
 void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs,
 			 const struct kfence_metadata *meta, enum kfence_error_type type);
 
-void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock);
 
 #endif /* MM_KFENCE_KFENCE_H */
diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 10e6802a2edf..787e87c26926 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -106,6 +106,7 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
 
 static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta,
 			       bool show_alloc)
+	__must_hold(&meta->lock)
 {
 	const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track;
 	u64 ts_sec = track->ts_nsec;
@@ -207,8 +208,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
 		return;
 
-	if (meta)
-		lockdep_assert_held(&meta->lock);
 	/*
 	 * Because we may generate reports in printk-unfriendly parts of the
 	 * kernel, such as scheduler code, the use of printk() could deadlock.
@@ -263,6 +262,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0);
 
 	if (meta) {
+		lockdep_assert_held(&meta->lock);
 		pr_err("\n");
 		kfence_print_object(NULL, meta);
 	}
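The annotation pattern used in this patch, reduced to a minimal sketch
(illustrative only, not kfence code): __guarded_by() ties a field to its
lock, and __must_hold() pushes the locking requirement out to callers,
where the analysis checks it:

#include <linux/spinlock.h>

struct meta {
	raw_spinlock_t lock;
	int state __guarded_by(&lock);
};

static void update_state(struct meta *m, int next)
	__must_hold(&m->lock)
{
	m->state = next;	/* ok: lock held by contract */
}

static void set_state(struct meta *m, int next)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&m->lock, flags);
	update_state(m, next);	/* call site verified to hold m->lock */
	raw_spin_unlock_irqrestore(&m->lock, flags);
}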
header.b="YkpOU6rI" Received: by mail-wr1-f73.google.com with SMTP id ffacd0b85a97d-390de58dc4eso5096075f8f.0 for ; Tue, 04 Mar 2025 01:26:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1741080382; x=1741685182; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=HvhGmPYbdkTfK7RRSri3eLNkf3gftCvKGp0dWVvi1U4=; b=YkpOU6rIQRE3T+9pGTQ/wYLR4rkDtgTFzzW/YjF+78WgFpHWaqpB2XUA+7jJAPya/R 7kln2V3uiucGr+JGhp0iVAvkcmqs+eUTZsubbdSLrr0Lw1bIkfPShJW1R2dn9kRTLvat zcs612pSt8kBc/4VAyKAkoMGx1e1jU+NuKg+S7mK8ZJEugRWyjrBeNZ8eleO4J6nCATU MoWKkTRar33kck5NqooyGrqbsLm8meGGcr2AY3z9XtYaFufrmu9P4t3OdLJ/9zS+Z51r Ss/IVH0TIBpDvrRrhnk+x32DYUOo434mae+wGHr3OEc9kgtEv6ly3EYspeKQOY/mzwBE P2/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741080382; x=1741685182; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=HvhGmPYbdkTfK7RRSri3eLNkf3gftCvKGp0dWVvi1U4=; b=wCgzazaSI6SzrvcoH5nVcu/a46V4teOWWVBWbv+4wF4iutAlvlwwnzckqE8FxhMAZj 3chpXhMEbBulpDu4kjO/v84Yj8DubB2PBiWcegKuByPbSNcMFJcEA8Fi7z3Atws8lE0p KuDJlHxu0XRlo9LSOjHwLiX9zAOqwiy8rx5JRXlyvWBfMhMoMDP14UGdIl/KmVEtTdxV 6IefgtUUrJUHmCVXLnPEJfOOnEC6r00yukitF1insHqcJAHEmzKH0mUtyk8hALO742G0 qGxEVydyT3dhlq4UB1Ez8/VUSvfa0RJQuFk8Oi/Ob1on6FM5BDiE/2RAZAYgW4TfdW0p mR/A== X-Forwarded-Encrypted: i=1; AJvYcCVw2/+jkT4ynLzyyQZGka8wbqpTu65Y6cthxoNLE3MUseiyd4yjBiVtAgnYtiZ1z1RONdztZsFQWlfNaKs=@vger.kernel.org X-Gm-Message-State: AOJu0Ywdn/s2UHo8TV/OHpAmpfesbdLyExFmMG4GRdbPQF1tQ8Xg7wod zymwVg/A3M7MqPk6K8faL5agwCAuhMpD3VZYODhgiEyH2qpkBRWxk3WH1fMT7CWWzc0D1QefEA= = X-Google-Smtp-Source: AGHT+IFd0j0x2Xkpv3SqARh0/vHgS2OMhe4sdLCWdo0+vG9QAhquZGGKS42LkEsCECdA5ZxT20Xn6/E70w== X-Received: from wmbg5.prod.google.com ([2002:a05:600c:a405:b0:43b:bf84:7e47]) (user=elver job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6000:400e:b0:391:1473:2a08 with SMTP id ffacd0b85a97d-39114732a3cmr2623360f8f.7.1741080382297; Tue, 04 Mar 2025 01:26:22 -0800 (PST) Date: Tue, 4 Mar 2025 10:21:26 +0100 In-Reply-To: <20250304092417.2873893-1-elver@google.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250304092417.2873893-1-elver@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250304092417.2873893-28-elver@google.com> Subject: [PATCH v2 27/34] kcov: Enable capability analysis From: Marco Elver To: elver@google.com Cc: "David S. Miller" , Luc Van Oostenryck , "Paul E. McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ingo Molnar , Jann Horn , Jiri Slaby , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Peter Zijlstra , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org, linux-serial@vger.kernel.org Enable capability analysis for the KCOV subsystem. Signed-off-by: Marco Elver --- v2: * Remove disable/enable_capability_analysis() around headers. 
---
 kernel/Makefile |  2 ++
 kernel/kcov.c   | 36 +++++++++++++++++++++++++-----------
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/kernel/Makefile b/kernel/Makefile
index 87866b037fbe..7e399998532d 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -39,6 +39,8 @@ KASAN_SANITIZE_kcov.o := n
 KCSAN_SANITIZE_kcov.o := n
 UBSAN_SANITIZE_kcov.o := n
 KMSAN_SANITIZE_kcov.o := n
+
+CAPABILITY_ANALYSIS_kcov.o := y
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector
 
 obj-y += sched/
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 187ba1b80bda..9015f3b1e08a 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -55,13 +55,13 @@ struct kcov {
 	refcount_t		refcount;
 	/* The lock protects mode, size, area and t. */
 	spinlock_t		lock;
-	enum kcov_mode		mode;
+	enum kcov_mode		mode __guarded_by(&lock);
 	/* Size of arena (in long's). */
-	unsigned int		size;
+	unsigned int		size __guarded_by(&lock);
 	/* Coverage buffer shared with user space. */
-	void			*area;
+	void			*area __guarded_by(&lock);
 	/* Task for which we collect coverage, or NULL. */
-	struct task_struct	*t;
+	struct task_struct	*t __guarded_by(&lock);
 	/* Collecting coverage from remote (background) threads. */
 	bool			remote;
 	/* Size of remote area (in long's). */
@@ -391,6 +391,7 @@ void kcov_task_init(struct task_struct *t)
 }
 
 static void kcov_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov->t = NULL;
 	kcov->mode = KCOV_MODE_INIT;
@@ -400,6 +401,7 @@ static void kcov_reset(struct kcov *kcov)
 }
 
 static void kcov_remote_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	int bkt;
 	struct kcov_remote *remote;
@@ -419,6 +421,7 @@ static void kcov_remote_reset(struct kcov *kcov)
 }
 
 static void kcov_disable(struct task_struct *t, struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov_task_reset(t);
 	if (kcov->remote)
@@ -435,8 +438,11 @@ static void kcov_get(struct kcov *kcov)
 static void kcov_put(struct kcov *kcov)
 {
 	if (refcount_dec_and_test(&kcov->refcount)) {
-		kcov_remote_reset(kcov);
-		vfree(kcov->area);
+		/* Capability-safety: no references left, object being destroyed. */
+		capability_unsafe(
+			kcov_remote_reset(kcov);
+			vfree(kcov->area);
+		);
 		kfree(kcov);
 	}
 }
@@ -491,6 +497,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 	unsigned long size, off;
 	struct page *page;
 	unsigned long flags;
+	unsigned long *area;
 
 	spin_lock_irqsave(&kcov->lock, flags);
 	size = kcov->size * sizeof(unsigned long);
@@ -499,10 +506,11 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 		res = -EINVAL;
 		goto exit;
 	}
+	area = kcov->area;
 	spin_unlock_irqrestore(&kcov->lock, flags);
 	vm_flags_set(vma, VM_DONTEXPAND);
 	for (off = 0; off < size; off += PAGE_SIZE) {
-		page = vmalloc_to_page(kcov->area + off);
+		page = vmalloc_to_page(area + off);
 		res = vm_insert_page(vma, vma->vm_start + off, page);
 		if (res) {
 			pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -522,10 +530,10 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
+	spin_lock_init(&kcov->lock);
 	kcov->mode = KCOV_MODE_DISABLED;
 	kcov->sequence = 1;
 	refcount_set(&kcov->refcount, 1);
-	spin_lock_init(&kcov->lock);
 	filep->private_data = kcov;
 	return nonseekable_open(inode, filep);
 }
@@ -556,6 +564,7 @@ static int kcov_get_mode(unsigned long arg)
 * vmalloc fault handling path is instrumented.
 */
static void kcov_fault_in_area(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
 	unsigned long *area = kcov->area;
@@ -584,6 +593,7 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid,
 
 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
+	__must_hold(&kcov->lock)
 {
 	struct task_struct *t;
 	unsigned long flags, unused;
@@ -814,6 +824,7 @@ static inline bool kcov_mode_enabled(unsigned int mode)
 }
 
 static void kcov_remote_softirq_start(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 	unsigned int mode;
@@ -831,6 +842,7 @@ static void kcov_remote_softirq_start(struct task_struct *t)
 }
 
 static void kcov_remote_softirq_stop(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
@@ -896,10 +908,12 @@ void kcov_remote_start(u64 handle)
 	/* Put in kcov_remote_stop(). */
 	kcov_get(kcov);
 	/*
-	 * Read kcov fields before unlock to prevent races with
-	 * KCOV_DISABLE / kcov_remote_reset().
+	 * Read kcov fields before unlocking kcov_remote_lock to prevent races
+	 * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock
+	 * here, because it might lead to deadlock given kcov_remote_lock is
+	 * acquired _after_ kcov->lock elsewhere.
 	 */
-	mode = kcov->mode;
+	mode = capability_unsafe(kcov->mode);
 	sequence = kcov->sequence;
 	if (in_task()) {
 		size = kcov->remote_size;
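
The kcov_mmap() hunk above is the recurring conversion pattern for guarded fields that used to be read after the lock was dropped: snapshot the field into a local while the lock is still held, and let only the local escape. A reduced sketch (hypothetical names, reusing the mutex wrapper from the earlier counter sketch):

/* Hypothetical sketch -- not the actual kcov code. */
static struct mutex dev_lock;
static void *dev_area __attribute__((guarded_by(dev_lock)));

void *dev_area_snapshot(void)
{
	void *area;

	mutex_lock(&dev_lock);
	area = dev_area;	/* OK: read under dev_lock... */
	mutex_unlock(&dev_lock);
	return area;		/* ...and only the local escapes the critical section. */
}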
From patchwork Tue Mar 4 09:21:27 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870628
Date: Tue, 4 Mar 2025 10:21:27 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-29-elver@google.com>
Subject: [PATCH v2 28/34] stackdepot: Enable capability analysis
From: Marco Elver

Enable capability analysis for stackdepot.

Signed-off-by: Marco Elver
---
v2:
 * Remove disable/enable_capability_analysis() around headers.
---
 lib/Makefile     |  1 +
 lib/stackdepot.c | 20 ++++++++++++++------
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/lib/Makefile b/lib/Makefile
index 1dbb59175eb0..f40ba93c9a94 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -270,6 +270,7 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o
 # Prevent the compiler from calling builtins like memcmp() or bcmp() from this
 # file.
 CFLAGS_stackdepot.o += -fno-builtin
+CAPABILITY_ANALYSIS_stackdepot.o := y
 obj-$(CONFIG_STACKDEPOT) += stackdepot.o
 KASAN_SANITIZE_stackdepot.o := n
 # In particular, instrumenting stackdepot.c with KMSAN will result in infinite
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 245d5b416699..a8b6a49c9058 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -61,18 +61,18 @@ static unsigned int stack_bucket_number_order;
 /* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
+/* The lock must be held when performing pool or freelist modifications. */
+static DEFINE_RAW_SPINLOCK(pool_lock);
 /* Array of memory regions that store stack records. */
-static void *stack_pools[DEPOT_MAX_POOLS];
+static void *stack_pools[DEPOT_MAX_POOLS] __guarded_by(&pool_lock);
 /* Newly allocated pool that is not yet added to stack_pools. */
 static void *new_pool;
 /* Number of pools in stack_pools. */
 static int pools_num;
 /* Offset to the unused space in the currently used pool. */
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static size_t pool_offset __guarded_by(&pool_lock) = DEPOT_POOL_SIZE;
 /* Freelist of stack records within stack_pools. */
-static LIST_HEAD(free_stacks);
-/* The lock must be held when performing pool or freelist modifications. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
+static __guarded_by(&pool_lock) LIST_HEAD(free_stacks);
 
 /* Statistics counters for debugfs. */
 enum depot_counter_id {
@@ -242,6 +242,7 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
 * Initializes new stack pool, and updates the list of pools.
 */
 static bool depot_init_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -289,6 +290,7 @@ static bool depot_init_pool(void **prealloc)
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -308,6 +310,7 @@ static void depot_keep_new_pool(void **prealloc)
 * the current pre-allocation.
 */
 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 	void *current_pool;
@@ -342,6 +345,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 
 /* Try to find next free usable entry from the freelist. */
 static struct stack_record *depot_pop_free(void)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 
@@ -379,6 +383,7 @@ static inline size_t depot_stack_record_size(struct stack_record *s, unsigned in
 
 /* Allocates a new stack in a stack depot pool.
 */
static struct stack_record *
depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack = NULL;
 	size_t record_size;
@@ -437,6 +442,7 @@ depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, dep
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+	__must_not_hold(&pool_lock)
 {
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
@@ -453,7 +459,8 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 		return NULL;
 	}
 
-	pool = stack_pools[pool_index];
+	/* @pool_index either valid, or user passed in corrupted value. */
+	pool = capability_unsafe(stack_pools[pool_index]);
 	if (WARN_ON(!pool))
 		return NULL;
 
@@ -466,6 +473,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
+	__must_not_hold(&pool_lock)
 {
 	unsigned long flags;
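
As in stackdepot above, __must_hold() and __must_not_hold() encode caller contracts: the former corresponds to Clang's requires_capability attribute, the latter to the excluding variant, and both complement rather than replace the runtime lockdep_assert_held() calls. A reduced sketch with hypothetical names, again building on the earlier mutex wrapper:

/* Hypothetical sketch -- not the actual stack depot code. */
static struct mutex pool_lock_s;
static int pool_offset_s __attribute__((guarded_by(pool_lock_s)));

/* Caller must already hold pool_lock_s; no locking in here. */
static void refill_pool(void) __attribute__((requires_capability(pool_lock_s)))
{
	pool_offset_s = 0;
}

/* Caller must NOT hold pool_lock_s, since we take it ourselves. */
static void free_record(void) __attribute__((locks_excluded(pool_lock_s)))
{
	mutex_lock(&pool_lock_s);
	refill_pool();		/* OK: lock held on this path. */
	mutex_unlock(&pool_lock_s);
}

Calling refill_pool() without the lock, or free_record() while holding it, is diagnosed at compile time, whereas lockdep only catches the former at runtime when the path is actually exercised.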
From patchwork Tue Mar 4 09:21:28 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870216
Date: Tue, 4 Mar 2025 10:21:28 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-30-elver@google.com>
Subject: [PATCH v2 29/34] rhashtable: Enable capability analysis
From: Marco Elver

Enable capability analysis for rhashtable, which was used as an initial
test as it contains a combination of RCU, mutex, and bit_spinlock usage.

Users of rhashtable now also benefit from annotations on the API, which
will now warn if the RCU read lock is not held where required.

Signed-off-by: Marco Elver
---
v2:
 * Remove disable/enable_capability_analysis() around headers.
---
 include/linux/rhashtable.h | 14 +++++++++++---
 lib/Makefile               |  2 ++
 lib/rhashtable.c           |  5 +++--
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index 8463a128e2f4..c6374691ccc7 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -245,16 +245,17 @@ void *rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 void rhashtable_walk_enter(struct rhashtable *ht, struct rhashtable_iter *iter);
 void rhashtable_walk_exit(struct rhashtable_iter *iter);
-int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU);
+int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU);
 
 static inline void rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires_shared(RCU)
 {
 	(void)rhashtable_walk_start_check(iter);
 }
 
 void *rhashtable_walk_next(struct rhashtable_iter *iter);
 void *rhashtable_walk_peek(struct rhashtable_iter *iter);
-void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU);
+void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU);
 
 void rhashtable_free_and_destroy(struct rhashtable *ht,
 				 void (*free_fn)(void *ptr, void *arg),
@@ -325,6 +326,7 @@ static inline struct rhash_lock_head __rcu **rht_bucket_insert(
 
 static inline unsigned long rht_lock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt)
+	__acquires(__bitlock(0, bkt))
 {
 	unsigned long flags;
 
@@ -337,6 +339,7 @@ static inline unsigned long rht_lock(struct bucket_table *tbl,
 static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 					    struct rhash_lock_head __rcu **bucket,
 					    unsigned int subclass)
+	__acquires(__bitlock(0, bucket))
 {
 	unsigned long flags;
 
@@ -349,6 +352,7 @@ static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 static inline void rht_unlock(struct bucket_table *tbl,
 			      struct rhash_lock_head __rcu **bkt,
 			      unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	lock_map_release(&tbl->dep_map);
 	bit_spin_unlock(0, (unsigned long *)bkt);
@@ -402,13 +406,14 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt,
 				     struct rhash_head *obj,
 				     unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	if (rht_is_a_nulls(obj))
 		obj = NULL;
 	lock_map_release(&tbl->dep_map);
 	rcu_assign_pointer(*bkt, (void *)obj);
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(0, bkt));
 	local_irq_restore(flags);
 }
 
@@ -589,6 +594,7 @@ static inline int rhashtable_compare(struct rhashtable_compare_arg *arg,
 static inline struct rhash_head *__rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhashtable_compare_arg arg = {
 		.ht = ht,
@@ -642,6 +648,7 @@ static inline struct rhash_head *__rhashtable_lookup(
 static inline void *rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params);
 
@@ -692,6 +699,7 @@ static inline void *rhashtable_lookup_fast(
 static inline struct rhlist_head *rhltable_lookup(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params);
 
diff --git a/lib/Makefile b/lib/Makefile
index f40ba93c9a94..c7004270ad5f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -45,6 +45,8 @@ lib-$(CONFIG_MIN_HEAP) += min_heap.o
 lib-y += kobject.o klist.o
 obj-y += lockref.o
 
+CAPABILITY_ANALYSIS_rhashtable.o := y
+
 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
	 list_sort.o uuid.o iov_iter.o clz_ctz.o \
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 3e555d012ed6..fe8dd776837c 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -358,6 +358,7 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 static int rhashtable_rehash_alloc(struct rhashtable *ht,
				   struct bucket_table *old_tbl,
				   unsigned int size)
+	__must_hold(&ht->mutex)
 {
	struct bucket_table *new_tbl;
	int err;
@@ -392,6 +393,7 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
 * bucket locks or concurrent RCU protected lookups and traversals.
 */
 static int rhashtable_shrink(struct rhashtable *ht)
+	__must_hold(&ht->mutex)
 {
	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
	unsigned int nelems = atomic_read(&ht->nelems);
@@ -724,7 +726,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
 * resize events and always continue.
 */
 int rhashtable_walk_start_check(struct rhashtable_iter *iter)
-	__acquires(RCU)
+	__acquires_shared(RCU)
 {
	struct rhashtable *ht = iter->ht;
	bool rhlist = ht->rhlist;
@@ -940,7 +942,6 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_peek);
 * hash table.
 */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
-	__releases(RCU)
 {
	struct rhashtable *ht;
	struct bucket_table *tbl = iter->walker.tbl;
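
RCU is modeled in this series as a single global *shared* capability: rcu_read_lock() acquires it shared (many readers may hold it at once), and lookup helpers only assert it, as __must_hold_shared(RCU) does above. A standalone sketch of the same shape, with hypothetical stand-ins rather than the kernel's actual RCU definitions:

/* Hypothetical sketch -- not the kernel's RCU. */
struct __attribute__((capability("rcu"))) rcu_tag {
	int unused;
};
static struct rcu_tag RCU_s;

static void rcu_read_lock_s(void) __attribute__((acquire_shared_capability(RCU_s))) {}
static void rcu_read_unlock_s(void) __attribute__((release_shared_capability(RCU_s))) {}

/* Analogue of __must_hold_shared(RCU) on __rhashtable_lookup(). */
static void *lookup_s(void **slot) __attribute__((requires_shared_capability(RCU_s)))
{
	return *slot;	/* only safe to dereference inside the read section */
}

void *get_obj(void **slot)
{
	void *obj;

	rcu_read_lock_s();
	obj = lookup_s(slot);	/* OK: shared capability held. */
	rcu_read_unlock_s();
	return obj;		/* calling lookup_s() down here would warn */
}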
From patchwork Tue Mar 4 09:21:29 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870627
Date: Tue, 4 Mar 2025 10:21:29 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-31-elver@google.com>
Subject: [PATCH v2 30/34] printk: Move locking annotation to printk.c
From: Marco Elver

With Sparse support gone, Clang is a bit more strict and warns:

./include/linux/console.h:492:50: error: use of undeclared identifier 'console_mutex'
  492 | extern void console_list_unlock(void) __releases(console_mutex);

Since it does not make sense to make console_mutex itself global, move
the annotation to printk.c. Capability analysis remains disabled for
printk.c.

This is needed to enable capability analysis for modules that include
<linux/console.h>.
Signed-off-by: Marco Elver
---
v2:
 * New patch.
---
 include/linux/console.h | 4 ++--
 kernel/printk/printk.c  | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/console.h b/include/linux/console.h
index eba367bf605d..51d2be96514a 100644
--- a/include/linux/console.h
+++ b/include/linux/console.h
@@ -488,8 +488,8 @@ static inline bool console_srcu_read_lock_is_held(void)
 extern int console_srcu_read_lock(void);
 extern void console_srcu_read_unlock(int cookie);
 
-extern void console_list_lock(void) __acquires(console_mutex);
-extern void console_list_unlock(void) __releases(console_mutex);
+extern void console_list_lock(void);
+extern void console_list_unlock(void);
 
 extern struct hlist_head console_list;
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 07668433644b..377f21fd9bb4 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -244,6 +244,7 @@ int devkmsg_sysctl_set_loglvl(const struct ctl_table *table, int write,
 * For console list or console->flags updates
 */
 void console_list_lock(void)
+	__acquires(&console_mutex)
 {
	/*
	 * In unregister_console() and console_force_preferred_locked(),
@@ -268,6 +269,7 @@ EXPORT_SYMBOL(console_list_lock);
 * Counterpart to console_list_lock()
 */
 void console_list_unlock(void)
+	__releases(&console_mutex)
 {
	mutex_unlock(&console_mutex);
 }
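
The fix generalizes: when the lock a function takes is static to one translation unit, the __acquires()/__releases() annotations must sit on the definitions in that file, because a header declaration cannot name a symbol it cannot see. Clang accepts the attributes on definitions, which is one reason the analysis is Clang-only. A sketch with hypothetical names:

/* Hypothetical sketch -- not the printk code. */

/* foo.h -- the lock cannot be named here, so the declarations stay bare. */
void foo_list_lock(void);
void foo_list_unlock(void);

/* foo.c -- the lock is file-local; the annotations live on the definitions. */
static struct mutex foo_mutex;

void foo_list_lock(void) __attribute__((acquire_capability(foo_mutex)))
{
	mutex_lock(&foo_mutex);
}

void foo_list_unlock(void) __attribute__((release_capability(foo_mutex)))
{
	mutex_unlock(&foo_mutex);
}

Callers in other files still get balance checking (every foo_list_lock() must be matched by foo_list_unlock()), just without a visible lock name in the header.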
From patchwork Tue Mar 4 09:21:30 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870215
Date: Tue, 4 Mar 2025 10:21:30 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-32-elver@google.com>
Subject: [PATCH v2 31/34] drivers/tty: Enable capability analysis for core files
From: Marco Elver

Enable capability analysis for drivers/tty/*.

This demonstrates a larger conversion to use Clang's capability
analysis. The benefit is additional static checking of locking rules,
along with better documentation.

Signed-off-by: Marco Elver
Cc: Greg Kroah-Hartman
Cc: Jiri Slaby
---
v2:
 * New patch.
---
 drivers/tty/Makefile      |  3 +++
 drivers/tty/n_tty.c       | 16 ++++++++++++++++
 drivers/tty/pty.c         |  1 +
 drivers/tty/sysrq.c       |  1 +
 drivers/tty/tty.h         |  8 ++++----
 drivers/tty/tty_buffer.c  |  8 +++-----
 drivers/tty/tty_io.c      | 12 +++++++++---
 drivers/tty/tty_ioctl.c   |  2 +-
 drivers/tty/tty_ldisc.c   | 35 ++++++++++++++++++++++++++++++---
 drivers/tty/tty_ldsem.c   |  2 ++
 drivers/tty/tty_mutex.c   |  4 ++++
 drivers/tty/tty_port.c    |  2 ++
 include/linux/tty.h       | 14 +++++++-------
 include/linux/tty_flip.h  |  4 ++--
 include/linux/tty_ldisc.h | 19 ++++++++++---------
 15 files changed, 97 insertions(+), 34 deletions(-)

diff --git a/drivers/tty/Makefile b/drivers/tty/Makefile
index 07aca5184a55..35e1a62cbe16 100644
--- a/drivers/tty/Makefile
+++ b/drivers/tty/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
+
+CAPABILITY_ANALYSIS := y
+
 obj-$(CONFIG_TTY) += tty_io.o n_tty.o tty_ioctl.o tty_ldisc.o \
		     tty_buffer.o tty_port.o tty_mutex.o \
		     tty_ldsem.o tty_baudrate.o tty_jobctrl.o \
diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index 5e9ca4376d68..45925fc5a8fd 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -1088,6 +1088,7 @@ static void __isig(int sig, struct tty_struct *tty)
 * Locking: %ctrl.lock
 */
 static void isig(int sig, struct tty_struct *tty)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
 
@@ -1135,6 +1136,7 @@ static void isig(int sig, struct tty_struct *tty)
 * Note: may get exclusive %termios_rwsem if flushing input buffer
 */
 static void n_tty_receive_break(struct tty_struct *tty)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
 
@@ -1204,6 +1206,7 @@ static void n_tty_receive_parity_error(const struct tty_struct *tty,
 
 static void n_tty_receive_signal_char(struct tty_struct *tty, int signal, u8 c)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	isig(signal, tty);
	if (I_IXON(tty))
@@ -1353,6 +1356,7 @@ static bool n_tty_receive_char_canon(struct tty_struct *tty, u8 c)
 
 static void n_tty_receive_char_special(struct tty_struct *tty, u8 c,
				       bool lookahead_done)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
 
@@ -1463,6 +1467,7 @@ static void n_tty_receive_char_closing(struct tty_struct *tty, u8 c,
 
 static void
 n_tty_receive_char_flagged(struct tty_struct *tty, u8 c, u8 flag)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	switch (flag) {
	case TTY_BREAK:
@@ -1483,6 +1488,7 @@ n_tty_receive_char_flagged(struct tty_struct *tty, u8 c, u8 flag)
 
 static void
 n_tty_receive_char_lnext(struct tty_struct *tty, u8 c, u8 flag)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
 
@@ -1540,6 +1546,7 @@ n_tty_receive_buf_real_raw(const struct tty_struct *tty, const u8 *cp,
 
 static void
 n_tty_receive_buf_raw(struct tty_struct *tty, const u8 *cp, const u8 *fp,
		      size_t count)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
	u8 flag = TTY_NORMAL;
@@ -1571,6 +1578,7 @@ n_tty_receive_buf_closing(struct tty_struct *tty, const u8 *cp, const u8 *fp,
 
 static void n_tty_receive_buf_standard(struct tty_struct *tty, const u8 *cp,
		const u8 *fp, size_t count, bool lookahead_done)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
	u8 flag = TTY_NORMAL;
@@ -1609,6 +1617,7 @@ static void n_tty_receive_buf_standard(struct tty_struct *tty, const u8 *cp,
 
 static void __receive_buf(struct tty_struct *tty, const u8 *cp, const u8 *fp,
			  size_t count)
+	__must_hold_shared(&tty->termios_rwsem)
 {
	struct n_tty_data *ldata = tty->disc_data;
	bool preops = I_ISTRIP(tty) || (I_IUCLC(tty) && L_IEXTEN(tty));
@@ -2188,6 +2197,10 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, u8 *kbuf,
			return kb - kbuf;
		}
 
+		/* Adopted locks from prior call. */
+		__acquire(&ldata->atomic_read_lock);
+		__acquire_shared(&tty->termios_rwsem);
+
		/* No more data - release locks and stop retries */
		n_tty_kick_worker(tty);
		n_tty_check_unthrottle(tty);
@@ -2305,6 +2318,9 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, u8 *kbuf,
 more_to_be_read:
	remove_wait_queue(&tty->read_wait, &wait);
	*cookie = cookie;
+	/* Hand-off locks to retry with cookie set. */
+	__release_shared(&tty->termios_rwsem);
+	__release(&ldata->atomic_read_lock);
	return kb - kbuf;
 }
 }
diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
index 8bb1a01fef2a..8d4eb0f4c84c 100644
--- a/drivers/tty/pty.c
+++ b/drivers/tty/pty.c
@@ -824,6 +824,7 @@ static int ptmx_open(struct inode *inode, struct file *filp)
	tty = tty_init_dev(ptm_driver, index);
 
	/* The tty returned here is locked so we can safely drop the mutex */
+	lockdep_assert_held(&tty->legacy_mutex);
	mutex_unlock(&tty_mutex);
 
	retval = PTR_ERR(tty);
diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index f85ce02e4725..82dfa964c965 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -149,6 +149,7 @@ static const struct sysrq_key_op sysrq_unraw_op = {
 static void sysrq_handle_crash(u8 key)
 {
	/* release the RCU read lock before crashing */
+	lockdep_assert_in_rcu_read_lock();
	rcu_read_unlock();
 
	panic("sysrq triggered crash\n");
diff --git a/drivers/tty/tty.h b/drivers/tty/tty.h
index 93cf5ef1e857..1a3c2f663b28 100644
--- a/drivers/tty/tty.h
+++ b/drivers/tty/tty.h
@@ -60,15 +60,15 @@ static inline void tty_set_flow_change(struct tty_struct *tty,
	smp_mb();
 }
 
-int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
-void tty_ldisc_unlock(struct tty_struct *tty);
+int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout) __cond_acquires(0, &tty->ldisc_sem);
+void tty_ldisc_unlock(struct tty_struct *tty) __releases(&tty->ldisc_sem);
 
 int __tty_check_change(struct tty_struct *tty, int sig);
 int tty_check_change(struct tty_struct *tty);
 void __stop_tty(struct tty_struct *tty);
 void __start_tty(struct tty_struct *tty);
-void tty_write_unlock(struct tty_struct *tty);
-int tty_write_lock(struct tty_struct *tty, bool ndelay);
+void tty_write_unlock(struct tty_struct *tty) __releases(&tty->atomic_write_lock);
+int tty_write_lock(struct tty_struct *tty, bool ndelay) __cond_acquires(0, &tty->atomic_write_lock);
 void tty_vhangup_session(struct tty_struct *tty);
 void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
 int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 79f0ff94ce00..dcc56537290f 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -52,10 +52,8 @@
 */
 void tty_buffer_lock_exclusive(struct tty_port *port)
 {
-	struct tty_bufhead *buf = &port->buf;
-
-	atomic_inc(&buf->priority);
-	mutex_lock(&buf->lock);
+	atomic_inc(&port->buf.priority);
+	mutex_lock(&port->buf.lock);
 }
 EXPORT_SYMBOL_GPL(tty_buffer_lock_exclusive);
 
@@ -73,7 +71,7 @@ void tty_buffer_unlock_exclusive(struct tty_port *port)
	bool restart = buf->head->commit != buf->head->read;
 
	atomic_dec(&buf->priority);
-	mutex_unlock(&buf->lock);
+	mutex_unlock(&port->buf.lock);
 
	if (restart)
		queue_work(system_unbound_wq, &buf->work);
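
The __cond_acquires(0, ...) annotations on tty_write_lock() and tty_ldisc_lock() above describe trylock-style return conventions: the first argument names the return value on which the lock is actually held, which is also why callers later in this patch change from 'ret < 0' to 'ret' checks -- the annotated convention has to be matched exactly. The underlying Clang attribute, shown here in its classic boolean form and reusing the earlier counter sketch (hypothetical code):

/* Hypothetical sketch -- not the tty code. */
static _Bool counter_trylock(void) __attribute__((try_acquire_capability(1, counter_lock)))
{
	return pthread_mutex_trylock(&counter_lock.mu) == 0;
}

void counter_maybe_inc(void)
{
	if (counter_trylock()) {
		counter_value++;	/* OK: held on this branch only. */
		mutex_unlock(&counter_lock);
	}
	/* No unlock on the failure path -- the analysis checks that, too. */
}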
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 449dbd216460..1eb3794fde4b 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -167,6 +167,7 @@ static void release_tty(struct tty_struct *tty, int idx);
 * Locking: none. Must be called after tty is definitely unused
 */
 static void free_tty_struct(struct tty_struct *tty)
+	__capability_unsafe(/* destructor */)
 {
	tty_ldisc_deinit(tty);
	put_device(tty->dev);
@@ -965,7 +966,7 @@ static ssize_t iterate_tty_write(struct tty_ldisc *ld, struct tty_struct *tty,
	ssize_t ret, written = 0;
 
	ret = tty_write_lock(tty, file->f_flags & O_NDELAY);
-	if (ret < 0)
+	if (ret)
		return ret;
 
	/*
@@ -1154,7 +1155,7 @@ int tty_send_xchar(struct tty_struct *tty, u8 ch)
		return 0;
	}
 
-	if (tty_write_lock(tty, false) < 0)
+	if (tty_write_lock(tty, false))
		return -ERESTARTSYS;
 
	down_read(&tty->termios_rwsem);
@@ -1391,6 +1392,7 @@ static int tty_reopen(struct tty_struct *tty)
 * Return: new tty structure
 */
 struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx)
+	__capability_unsafe(/* returns with locked tty */)
 {
	struct tty_struct *tty;
	int retval;
@@ -1874,6 +1876,7 @@ int tty_release(struct inode *inode, struct file *filp)
 * will not work then. It expects inodes to be from devpts FS.
 */
 static struct tty_struct *tty_open_current_tty(dev_t device, struct file *filp)
+	__capability_unsafe(/* returns with locked tty */)
 {
	struct tty_struct *tty;
	int retval;
@@ -2037,6 +2040,7 @@ EXPORT_SYMBOL_GPL(tty_kopen_shared);
 */
 static struct tty_struct *tty_open_by_driver(dev_t device,
					     struct file *filp)
+	__capability_unsafe(/* returns with locked tty */)
 {
	struct tty_struct *tty;
	struct tty_driver *driver = NULL;
@@ -2137,6 +2141,8 @@ static int tty_open(struct inode *inode, struct file *filp)
		goto retry_open;
	}
 
+	lockdep_assert_held(&tty->legacy_mutex);
+
	tty_add_file(tty, filp);
 
	check_tty_count(tty, __func__);
@@ -2486,7 +2492,7 @@ static int send_break(struct tty_struct *tty, unsigned int duration)
		return tty->ops->break_ctl(tty, duration);
 
	/* Do the work ourselves */
-	if (tty_write_lock(tty, false) < 0)
+	if (tty_write_lock(tty, false))
		return -EINTR;
 
	retval = tty->ops->break_ctl(tty, -1);
diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
index 85de90eebc7b..a7ae6cbf3450 100644
--- a/drivers/tty/tty_ioctl.c
+++ b/drivers/tty/tty_ioctl.c
@@ -489,7 +489,7 @@ static int set_termios(struct tty_struct *tty, void __user *arg, int opt)
	if (retval < 0)
		return retval;
 
-	if (tty_write_lock(tty, false) < 0)
+	if (tty_write_lock(tty, false))
		goto retry_write_wait;
 
	/* Racing writer? */
diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
index d80e9d4c974b..e07a5980604e 100644
--- a/drivers/tty/tty_ldisc.c
+++ b/drivers/tty/tty_ldisc.c
@@ -237,6 +237,7 @@ const struct seq_operations tty_ldiscs_seq_ops = {
 * to wait for any ldisc lifetime events to finish.
 */
 struct tty_ldisc *tty_ldisc_ref_wait(struct tty_struct *tty)
+	__cond_acquires_shared(nonnull, &tty->ldisc_sem)
 {
	struct tty_ldisc *ld;
 
@@ -257,6 +258,7 @@ EXPORT_SYMBOL_GPL(tty_ldisc_ref_wait);
 * and timer functions.
 */
 struct tty_ldisc *tty_ldisc_ref(struct tty_struct *tty)
+	__cond_acquires_shared(nonnull, &tty->ldisc_sem)
 {
	struct tty_ldisc *ld = NULL;
 
@@ -277,26 +279,43 @@ EXPORT_SYMBOL_GPL(tty_ldisc_ref);
 * in IRQ context.
 */
 void tty_ldisc_deref(struct tty_ldisc *ld)
+	__releases_shared(&ld->tty->ldisc_sem)
 {
	ldsem_up_read(&ld->tty->ldisc_sem);
 }
 EXPORT_SYMBOL_GPL(tty_ldisc_deref);
 
+/*
+ * Note: Capability analysis does not like asymmetric interfaces (above types
+ * for ref and deref are tty_struct and tty_ldisc respectively -- which are
+ * dependent, but the compiler cannot figure that out); in this case, work
+ * around that with this helper which takes an unused @tty argument but tells
+ * the analysis which lock is released.
+ */
+static inline void __tty_ldisc_deref(struct tty_struct *tty, struct tty_ldisc *ld)
+	__releases_shared(&tty->ldisc_sem)
+	__capability_unsafe(/* matches released with tty_ldisc_ref() */)
+{
+	tty_ldisc_deref(ld);
+}
 
 static inline int
 __tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout)
+	__cond_acquires(true, &tty->ldisc_sem)
 {
	return ldsem_down_write(&tty->ldisc_sem, timeout);
 }
 
 static inline int
 __tty_ldisc_lock_nested(struct tty_struct *tty, unsigned long timeout)
+	__cond_acquires(true, &tty->ldisc_sem)
 {
	return ldsem_down_write_nested(&tty->ldisc_sem, LDISC_SEM_OTHER, timeout);
 }
 
 static inline void __tty_ldisc_unlock(struct tty_struct *tty)
+	__releases(&tty->ldisc_sem)
 {
	ldsem_up_write(&tty->ldisc_sem);
 }
@@ -328,6 +347,8 @@ void tty_ldisc_unlock(struct tty_struct *tty)
 static int
 tty_ldisc_lock_pair_timeout(struct tty_struct *tty, struct tty_struct *tty2,
			    unsigned long timeout)
+	__cond_acquires(0, &tty->ldisc_sem)
+	__cond_acquires(0, &tty2->ldisc_sem)
 {
	int ret;
 
@@ -362,16 +383,23 @@ tty_ldisc_lock_pair_timeout(struct tty_struct *tty, struct tty_struct *tty2,
 }
 
 static void tty_ldisc_lock_pair(struct tty_struct *tty, struct tty_struct *tty2)
+	__acquires(&tty->ldisc_sem)
+	__acquires(&tty2->ldisc_sem)
+	__capability_unsafe(/* MAX_SCHEDULE_TIMEOUT ensures acquisition */)
 {
	tty_ldisc_lock_pair_timeout(tty, tty2, MAX_SCHEDULE_TIMEOUT);
 }
 
 static void tty_ldisc_unlock_pair(struct tty_struct *tty, struct tty_struct *tty2)
+	__releases(&tty->ldisc_sem)
+	__releases(&tty2->ldisc_sem)
 {
	__tty_ldisc_unlock(tty);
	if (tty2)
		__tty_ldisc_unlock(tty2);
+	else
+		__release(&tty2->ldisc_sem);
 }
 
 /**
@@ -387,7 +415,7 @@ void tty_ldisc_flush(struct tty_struct *tty)
	tty_buffer_flush(tty, ld);
	if (ld)
-		tty_ldisc_deref(ld);
+		__tty_ldisc_deref(tty, ld);
 }
 EXPORT_SYMBOL_GPL(tty_ldisc_flush);
 
@@ -694,7 +722,7 @@ void tty_ldisc_hangup(struct tty_struct *tty, bool reinit)
	tty_ldisc_debug(tty, "%p: hangup\n", tty->ldisc);
 
	ld = tty_ldisc_ref(tty);
-	if (ld != NULL) {
+	if (ld) {
		if (ld->ops->flush_buffer)
			ld->ops->flush_buffer(tty);
		tty_driver_flush_buffer(tty);
@@ -703,7 +731,7 @@ void tty_ldisc_hangup(struct tty_struct *tty, bool reinit)
			ld->ops->write_wakeup(tty);
		if (ld->ops->hangup)
			ld->ops->hangup(tty);
-		tty_ldisc_deref(ld);
+		__tty_ldisc_deref(tty, ld);
	}
 
	wake_up_interruptible_poll(&tty->write_wait, EPOLLOUT);
@@ -716,6 +744,7 @@ void tty_ldisc_hangup(struct tty_struct *tty, bool reinit)
	 * Avoid racing set_ldisc or tty_ldisc_release
	 */
	tty_ldisc_lock(tty, MAX_SCHEDULE_TIMEOUT);
+	lockdep_assert_held_write(&tty->ldisc_sem);
 
	if (tty->driver->flags & TTY_DRIVER_RESET_TERMIOS)
		tty_reset_termios(tty);
diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c
index 3be428c16260..26d924bb5a46 100644
--- a/drivers/tty/tty_ldsem.c
+++ b/drivers/tty/tty_ldsem.c
@@ -390,6 +390,7 @@ void ldsem_up_read(struct ld_semaphore *sem)
 {
	long count;
 
+	__release_shared(sem);
	rwsem_release(&sem->dep_map, _RET_IP_);
	count = atomic_long_add_return(-LDSEM_READ_BIAS, &sem->count);
@@ -404,6 +405,7 @@ void ldsem_up_write(struct ld_semaphore *sem)
 {
	long count;
 
+	__release(sem);
	rwsem_release(&sem->dep_map, _RET_IP_);
	count = atomic_long_add_return(-LDSEM_WRITE_BIAS, &sem->count);
diff --git a/drivers/tty/tty_mutex.c b/drivers/tty/tty_mutex.c
index 784e46a0a3b1..e5576fd6f5a4 100644
--- a/drivers/tty/tty_mutex.c
+++ b/drivers/tty/tty_mutex.c
@@ -41,12 +41,16 @@ void tty_lock_slave(struct tty_struct *tty)
 {
	if (tty && tty != tty->link)
		tty_lock(tty);
+	else
+		__acquire(&tty->legacy_mutex);
 }
 
 void tty_unlock_slave(struct tty_struct *tty)
 {
	if (tty && tty != tty->link)
		tty_unlock(tty);
+	else
+		__release(&tty->legacy_mutex);
 }
 
 void tty_set_lock_subclass(struct tty_struct *tty)
diff --git a/drivers/tty/tty_port.c b/drivers/tty/tty_port.c
index 14cca33d2269..bcb65a26a6bf 100644
--- a/drivers/tty/tty_port.c
+++ b/drivers/tty/tty_port.c
@@ -509,6 +509,7 @@ EXPORT_SYMBOL(tty_port_lower_dtr_rts);
 */
 int tty_port_block_til_ready(struct tty_port *port,
			     struct tty_struct *tty, struct file *filp)
+	__must_hold(&tty->legacy_mutex)
 {
	int do_clocal = 0, retval;
	unsigned long flags;
@@ -764,6 +765,7 @@ EXPORT_SYMBOL_GPL(tty_port_install);
 */
 int tty_port_open(struct tty_port *port, struct tty_struct *tty,
		  struct file *filp)
+	__must_hold(&tty->legacy_mutex)
 {
	spin_lock_irq(&port->lock);
	++port->count;
diff --git a/include/linux/tty.h b/include/linux/tty.h
index 2372f9357240..ee1ba62fc398 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -234,8 +234,8 @@ struct tty_struct {
	void *disc_data;
	void *driver_data;
	spinlock_t files_lock;
-	int write_cnt;
-	u8 *write_buf;
+	int write_cnt __guarded_by(&atomic_write_lock);
+	u8 *write_buf __guarded_by(&atomic_write_lock);
 
	struct list_head tty_files;
 
@@ -500,11 +500,11 @@ long vt_compat_ioctl(struct tty_struct *tty, unsigned int cmd,
 
 /* tty_mutex.c */
 /* functions for preparation of BKL removal */
-void tty_lock(struct tty_struct *tty);
-int tty_lock_interruptible(struct tty_struct *tty);
-void tty_unlock(struct tty_struct *tty);
-void tty_lock_slave(struct tty_struct *tty);
-void tty_unlock_slave(struct tty_struct *tty);
+void tty_lock(struct tty_struct *tty) __acquires(&tty->legacy_mutex);
+int tty_lock_interruptible(struct tty_struct *tty) __cond_acquires(0, &tty->legacy_mutex);
+void tty_unlock(struct tty_struct *tty) __releases(&tty->legacy_mutex);
+void tty_lock_slave(struct tty_struct *tty) __acquires(&tty->legacy_mutex);
+void tty_unlock_slave(struct tty_struct *tty) __releases(&tty->legacy_mutex);
 void tty_set_lock_subclass(struct tty_struct *tty);
 
 #endif
diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h
index af4fce98f64e..2214714059f8 100644
--- a/include/linux/tty_flip.h
+++ b/include/linux/tty_flip.h
@@ -86,7 +86,7 @@ static inline size_t tty_insert_flip_string(struct tty_port *port,
 size_t tty_ldisc_receive_buf(struct tty_ldisc *ld, const u8 *p, const u8 *f,
			     size_t count);
 
-void tty_buffer_lock_exclusive(struct tty_port *port);
-void tty_buffer_unlock_exclusive(struct tty_port *port);
+void tty_buffer_lock_exclusive(struct tty_port *port) __acquires(&port->buf.lock);
+void tty_buffer_unlock_exclusive(struct tty_port *port) __releases(&port->buf.lock);
 
 #endif /* _LINUX_TTY_FLIP_H */
diff --git a/include/linux/tty_ldisc.h b/include/linux/tty_ldisc.h
index af01e89074b2..d834cf115d52 100644
--- a/include/linux/tty_ldisc.h
+++ b/include/linux/tty_ldisc.h
@@ -14,7 +14,7 @@ struct tty_struct;
 /*
 * the semaphore definition
 */
-struct ld_semaphore {
+struct_with_capability(ld_semaphore) {
	atomic_long_t count;
	raw_spinlock_t wait_lock;
wait_lock; unsigned int wait_readers; @@ -33,21 +33,22 @@ do { \ static struct lock_class_key __key; \ \ __init_ldsem((sem), #sem, &__key); \ + __assert_cap(sem); \ } while (0) -int ldsem_down_read(struct ld_semaphore *sem, long timeout); -int ldsem_down_read_trylock(struct ld_semaphore *sem); -int ldsem_down_write(struct ld_semaphore *sem, long timeout); -int ldsem_down_write_trylock(struct ld_semaphore *sem); -void ldsem_up_read(struct ld_semaphore *sem); -void ldsem_up_write(struct ld_semaphore *sem); +int ldsem_down_read(struct ld_semaphore *sem, long timeout) __cond_acquires_shared(true, sem); +int ldsem_down_read_trylock(struct ld_semaphore *sem) __cond_acquires_shared(true, sem); +int ldsem_down_write(struct ld_semaphore *sem, long timeout) __cond_acquires(true, sem); +int ldsem_down_write_trylock(struct ld_semaphore *sem) __cond_acquires(true, sem); +void ldsem_up_read(struct ld_semaphore *sem) __releases_shared(sem); +void ldsem_up_write(struct ld_semaphore *sem) __releases(sem); #ifdef CONFIG_DEBUG_LOCK_ALLOC int ldsem_down_read_nested(struct ld_semaphore *sem, int subclass, - long timeout); + long timeout) __cond_acquires_shared(true, sem); int ldsem_down_write_nested(struct ld_semaphore *sem, int subclass, - long timeout); + long timeout) __cond_acquires(true, sem); #else # define ldsem_down_read_nested(sem, subclass, timeout) \ ldsem_down_read(sem, timeout)

From patchwork Tue Mar 4 09:21:31 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870626
Date: Tue, 4 Mar 2025 10:21:31 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-33-elver@google.com>
Subject: [PATCH v2 32/34] security/tomoyo: Enable capability analysis
From: Marco Elver

Enable capability analysis for security/tomoyo. This demonstrates a larger conversion to use Clang's capability analysis. The benefit is additional static checking of locking rules, along with better documentation.

Tomoyo makes use of several synchronization primitives, yet its clear design made it relatively straightforward to enable capability analysis.
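To make the recurring pattern concrete before the diff: __guarded_by ties a field to the lock protecting it, and __must_hold states a locking precondition that the compiler then checks at every call site. A minimal sketch of the shape used throughout (the struct and function here are hypothetical, not taken from this patch):

  struct example_buf {
          struct mutex io_sem;
          char *buf __guarded_by(&io_sem);  /* only accessible with io_sem held */
  };

  static void example_clear(struct example_buf *b)
          __must_hold(&b->io_sem)           /* callers must already hold io_sem */
  {
          b->buf = NULL;                    /* ok: the contract guarantees io_sem */
  }

Accesses the analysis cannot see to be safe must be marked explicitly, e.g. with capability_unsafe() as in tomoyo_load_builtin_policy() and the GC paths below.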
One notable finding was: security/tomoyo/gc.c:664:20: error: reading variable 'write_buf' requires holding mutex '&tomoyo_io_buffer::io_sem' 664 | is_write = head->write_buf != NULL; For which Tetsuo writes: "Good catch. This should be data_race(), for tomoyo_write_control() might concurrently update head->write_buf from non-NULL to non-NULL with head->io_sem held." Signed-off-by: Marco Elver Cc: Kentaro Takeda Cc: Tetsuo Handa --- v2: * New patch. --- security/tomoyo/Makefile | 2 + security/tomoyo/common.c | 52 ++++++++++++++++++++++++-- security/tomoyo/common.h | 77 ++++++++++++++++++++------------------- security/tomoyo/domain.c | 1 + security/tomoyo/environ.c | 1 + security/tomoyo/file.c | 5 +++ security/tomoyo/gc.c | 28 ++++++++++---- security/tomoyo/mount.c | 2 + security/tomoyo/network.c | 3 ++ 9 files changed, 122 insertions(+), 49 deletions(-) diff --git a/security/tomoyo/Makefile b/security/tomoyo/Makefile index 55c67b9846a9..6b395ca4e3d2 100644 --- a/security/tomoyo/Makefile +++ b/security/tomoyo/Makefile @@ -1,4 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 +CAPABILITY_ANALYSIS := y + obj-y = audit.o common.o condition.o domain.o environ.o file.o gc.o group.o load_policy.o memory.o mount.o network.o realpath.o securityfs_if.o tomoyo.o util.o targets += builtin-policy.h diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c index 0f78898bce09..fa9fd134c9cc 100644 --- a/security/tomoyo/common.c +++ b/security/tomoyo/common.c @@ -268,6 +268,7 @@ static void tomoyo_io_printf(struct tomoyo_io_buffer *head, const char *fmt, */ static void tomoyo_io_printf(struct tomoyo_io_buffer *head, const char *fmt, ...) + __must_hold(&head->io_sem) { va_list args; size_t len; @@ -416,8 +417,9 @@ static void tomoyo_print_name_union_quoted(struct tomoyo_io_buffer *head, * * Returns nothing. */ -static void tomoyo_print_number_union_nospace -(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr) +static void +tomoyo_print_number_union_nospace(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr) + __must_hold(&head->io_sem) { if (ptr->group) { tomoyo_set_string(head, "@"); @@ -466,6 +468,7 @@ static void tomoyo_print_number_union_nospace */ static void tomoyo_print_number_union(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr) + __must_hold(&head->io_sem) { tomoyo_set_space(head); tomoyo_print_number_union_nospace(head, ptr); @@ -664,6 +667,7 @@ static int tomoyo_set_mode(char *name, const char *value, * Returns 0 on success, negative value otherwise. */ static int tomoyo_write_profile(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *data = head->write_buf; unsigned int i; @@ -719,6 +723,7 @@ static int tomoyo_write_profile(struct tomoyo_io_buffer *head) * Caller prints functionality's name. */ static void tomoyo_print_config(struct tomoyo_io_buffer *head, const u8 config) + __must_hold(&head->io_sem) { tomoyo_io_printf(head, "={ mode=%s grant_log=%s reject_log=%s }\n", tomoyo_mode[config & 3], @@ -734,6 +739,7 @@ static void tomoyo_print_config(struct tomoyo_io_buffer *head, const u8 config) * Returns nothing. 
*/ static void tomoyo_read_profile(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { u8 index; struct tomoyo_policy_namespace *ns = @@ -852,6 +858,7 @@ static bool tomoyo_same_manager(const struct tomoyo_acl_head *a, */ static int tomoyo_update_manager_entry(const char *manager, const bool is_delete) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_manager e = { }; struct tomoyo_acl_param param = { @@ -883,6 +890,8 @@ static int tomoyo_update_manager_entry(const char *manager, * Caller holds tomoyo_read_lock(). */ static int tomoyo_write_manager(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { char *data = head->write_buf; @@ -901,6 +910,7 @@ static int tomoyo_write_manager(struct tomoyo_io_buffer *head) * Caller holds tomoyo_read_lock(). */ static void tomoyo_read_manager(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) { if (head->r.eof) return; @@ -927,6 +937,7 @@ static void tomoyo_read_manager(struct tomoyo_io_buffer *head) * Caller holds tomoyo_read_lock(). */ static bool tomoyo_manager(void) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_manager *ptr; const char *exe; @@ -981,6 +992,8 @@ static struct tomoyo_domain_info *tomoyo_find_domain_by_qid */ static bool tomoyo_select_domain(struct tomoyo_io_buffer *head, const char *data) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { unsigned int pid; struct tomoyo_domain_info *domain = NULL; @@ -1051,6 +1064,7 @@ static bool tomoyo_same_task_acl(const struct tomoyo_acl_info *a, * Caller holds tomoyo_read_lock(). */ static int tomoyo_write_task(struct tomoyo_acl_param *param) + __must_hold_shared(&tomoyo_ss) { int error = -EINVAL; @@ -1079,6 +1093,7 @@ static int tomoyo_write_task(struct tomoyo_acl_param *param) * Caller holds tomoyo_read_lock(). */ static int tomoyo_delete_domain(char *domainname) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_domain_info *domain; struct tomoyo_path_info name; @@ -1118,6 +1133,7 @@ static int tomoyo_delete_domain(char *domainname) static int tomoyo_write_domain2(struct tomoyo_policy_namespace *ns, struct list_head *list, char *data, const bool is_delete) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_acl_param param = { .ns = ns, @@ -1162,6 +1178,8 @@ const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS] = { * Caller holds tomoyo_read_lock(). 
*/ static int tomoyo_write_domain(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { char *data = head->write_buf; struct tomoyo_policy_namespace *ns; @@ -1223,6 +1241,7 @@ static int tomoyo_write_domain(struct tomoyo_io_buffer *head) */ static bool tomoyo_print_condition(struct tomoyo_io_buffer *head, const struct tomoyo_condition *cond) + __must_hold(&head->io_sem) { switch (head->r.cond_step) { case 0: @@ -1364,6 +1383,7 @@ static bool tomoyo_print_condition(struct tomoyo_io_buffer *head, */ static void tomoyo_set_group(struct tomoyo_io_buffer *head, const char *category) + __must_hold(&head->io_sem) { if (head->type == TOMOYO_EXCEPTIONPOLICY) { tomoyo_print_namespace(head); @@ -1383,6 +1403,7 @@ static void tomoyo_set_group(struct tomoyo_io_buffer *head, */ static bool tomoyo_print_entry(struct tomoyo_io_buffer *head, struct tomoyo_acl_info *acl) + __must_hold(&head->io_sem) { const u8 acl_type = acl->type; bool first = true; @@ -1588,6 +1609,8 @@ static bool tomoyo_print_entry(struct tomoyo_io_buffer *head, */ static bool tomoyo_read_domain2(struct tomoyo_io_buffer *head, struct list_head *list) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { list_for_each_cookie(head->r.acl, list) { struct tomoyo_acl_info *ptr = @@ -1608,6 +1631,8 @@ static bool tomoyo_read_domain2(struct tomoyo_io_buffer *head, * Caller holds tomoyo_read_lock(). */ static void tomoyo_read_domain(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { if (head->r.eof) return; @@ -1686,6 +1711,7 @@ static int tomoyo_write_pid(struct tomoyo_io_buffer *head) * using read()/write() interface rather than sysctl() interface. */ static void tomoyo_read_pid(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *buf = head->write_buf; bool global_pid = false; @@ -1746,6 +1772,8 @@ static const char *tomoyo_group_name[TOMOYO_MAX_GROUP] = { * Caller holds tomoyo_read_lock(). */ static int tomoyo_write_exception(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { const bool is_delete = head->w.is_delete; struct tomoyo_acl_param param = { @@ -1787,6 +1815,8 @@ static int tomoyo_write_exception(struct tomoyo_io_buffer *head) * Caller holds tomoyo_read_lock(). */ static bool tomoyo_read_group(struct tomoyo_io_buffer *head, const int idx) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { struct tomoyo_policy_namespace *ns = container_of(head->r.ns, typeof(*ns), namespace_list); @@ -1846,6 +1876,7 @@ static bool tomoyo_read_group(struct tomoyo_io_buffer *head, const int idx) * Caller holds tomoyo_read_lock(). */ static bool tomoyo_read_policy(struct tomoyo_io_buffer *head, const int idx) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_policy_namespace *ns = container_of(head->r.ns, typeof(*ns), namespace_list); @@ -1906,6 +1937,8 @@ static bool tomoyo_read_policy(struct tomoyo_io_buffer *head, const int idx) * Caller holds tomoyo_read_lock(). */ static void tomoyo_read_exception(struct tomoyo_io_buffer *head) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { struct tomoyo_policy_namespace *ns = container_of(head->r.ns, typeof(*ns), namespace_list); @@ -2097,6 +2130,7 @@ static void tomoyo_patternize_path(char *buffer, const int len, char *entry) * Returns nothing. 
*/ static void tomoyo_add_entry(struct tomoyo_domain_info *domain, char *header) + __must_hold_shared(&tomoyo_ss) { char *buffer; char *realpath = NULL; @@ -2301,6 +2335,7 @@ static __poll_t tomoyo_poll_query(struct file *file, poll_table *wait) * @head: Pointer to "struct tomoyo_io_buffer". */ static void tomoyo_read_query(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { struct list_head *tmp; unsigned int pos = 0; @@ -2362,6 +2397,7 @@ static void tomoyo_read_query(struct tomoyo_io_buffer *head) * Returns 0 on success, -EINVAL otherwise. */ static int tomoyo_write_answer(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *data = head->write_buf; struct list_head *tmp; @@ -2401,6 +2437,7 @@ static int tomoyo_write_answer(struct tomoyo_io_buffer *head) * Returns version information. */ static void tomoyo_read_version(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { if (!head->r.eof) { tomoyo_io_printf(head, "2.6.0"); @@ -2449,6 +2486,7 @@ void tomoyo_update_stat(const u8 index) * Returns nothing. */ static void tomoyo_read_stat(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { u8 i; unsigned int total = 0; @@ -2493,6 +2531,7 @@ static void tomoyo_read_stat(struct tomoyo_io_buffer *head) * Returns 0. */ static int tomoyo_write_stat(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *data = head->write_buf; u8 i; @@ -2717,6 +2756,8 @@ ssize_t tomoyo_read_control(struct tomoyo_io_buffer *head, char __user *buffer, * Caller holds tomoyo_read_lock(). */ static int tomoyo_parse_policy(struct tomoyo_io_buffer *head, char *line) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { /* Delete request? */ head->w.is_delete = !strncmp(line, "delete ", 7); @@ -2969,8 +3010,11 @@ void __init tomoyo_load_builtin_policy(void) break; *end = '\0'; tomoyo_normalize_line(start); - head.write_buf = start; - tomoyo_parse_policy(&head, start); + /* head is stack-local and not shared. */ + capability_unsafe( + head.write_buf = start; + tomoyo_parse_policy(&head, start); + ); start = end + 1; } } diff --git a/security/tomoyo/common.h b/security/tomoyo/common.h index 0e8e2e959aef..2ff05653743c 100644 --- a/security/tomoyo/common.h +++ b/security/tomoyo/common.h @@ -827,13 +827,13 @@ struct tomoyo_io_buffer { bool is_delete; } w; /* Buffer for reading. */ - char *read_buf; + char *read_buf __guarded_by(&io_sem); /* Size of read buffer. */ - size_t readbuf_size; + size_t readbuf_size __guarded_by(&io_sem); /* Buffer for writing. */ - char *write_buf; + char *write_buf __guarded_by(&io_sem); /* Size of write buffer. */ - size_t writebuf_size; + size_t writebuf_size __guarded_by(&io_sem); /* Type of this interface. */ enum tomoyo_securityfs_interface_index type; /* Users counter protected by tomoyo_io_buffer_list_lock. */ @@ -922,6 +922,35 @@ struct tomoyo_task { struct tomoyo_domain_info *old_domain_info; }; +/********** External variable definitions. 
**********/ + +extern bool tomoyo_policy_loaded; +extern int tomoyo_enabled; +extern const char * const tomoyo_condition_keyword +[TOMOYO_MAX_CONDITION_KEYWORD]; +extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS]; +extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX + + TOMOYO_MAX_MAC_CATEGORY_INDEX]; +extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE]; +extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION]; +extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX]; +extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION]; +extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX]; +extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION]; +extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION]; +extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION]; +extern struct list_head tomoyo_condition_list; +extern struct list_head tomoyo_domain_list; +extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH]; +extern struct list_head tomoyo_namespace_list; +extern struct mutex tomoyo_policy_lock; +extern struct srcu_struct tomoyo_ss; +extern struct tomoyo_domain_info tomoyo_kernel_domain; +extern struct tomoyo_policy_namespace tomoyo_kernel_namespace; +extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT]; +extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT]; +extern struct lsm_blob_sizes tomoyo_blob_sizes; + /********** Function prototypes. **********/ bool tomoyo_address_matches_group(const bool is_ipv6, const __be32 *address, @@ -969,10 +998,10 @@ const struct tomoyo_path_info *tomoyo_path_matches_group int tomoyo_check_open_permission(struct tomoyo_domain_info *domain, const struct path *path, const int flag); void tomoyo_close_control(struct tomoyo_io_buffer *head); -int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env); +int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env) __must_hold_shared(&tomoyo_ss); int tomoyo_execute_permission(struct tomoyo_request_info *r, - const struct tomoyo_path_info *filename); -int tomoyo_find_next_domain(struct linux_binprm *bprm); + const struct tomoyo_path_info *filename) __must_hold_shared(&tomoyo_ss); +int tomoyo_find_next_domain(struct linux_binprm *bprm) __must_hold_shared(&tomoyo_ss); int tomoyo_get_mode(const struct tomoyo_policy_namespace *ns, const u8 profile, const u8 index); int tomoyo_init_request_info(struct tomoyo_request_info *r, @@ -1000,6 +1029,7 @@ int tomoyo_socket_listen_permission(struct socket *sock); int tomoyo_socket_sendmsg_permission(struct socket *sock, struct msghdr *msg, int size); int tomoyo_supervisor(struct tomoyo_request_info *r, const char *fmt, ...) + __must_hold_shared(&tomoyo_ss) __printf(2, 3); int tomoyo_update_domain(struct tomoyo_acl_info *new_entry, const int size, struct tomoyo_acl_param *param, @@ -1059,7 +1089,7 @@ void tomoyo_print_ulong(char *buffer, const int buffer_len, const unsigned long value, const u8 type); void tomoyo_put_name_union(struct tomoyo_name_union *ptr); void tomoyo_put_number_union(struct tomoyo_number_union *ptr); -void tomoyo_read_log(struct tomoyo_io_buffer *head); +void tomoyo_read_log(struct tomoyo_io_buffer *head) __must_hold(&head->io_sem); void tomoyo_update_stat(const u8 index); void tomoyo_warn_oom(const char *function); void tomoyo_write_log(struct tomoyo_request_info *r, const char *fmt, ...) @@ -1067,35 +1097,6 @@ void tomoyo_write_log(struct tomoyo_request_info *r, const char *fmt, ...) 
void tomoyo_write_log2(struct tomoyo_request_info *r, int len, const char *fmt, va_list args) __printf(3, 0); -/********** External variable definitions. **********/ - -extern bool tomoyo_policy_loaded; -extern int tomoyo_enabled; -extern const char * const tomoyo_condition_keyword -[TOMOYO_MAX_CONDITION_KEYWORD]; -extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS]; -extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX - + TOMOYO_MAX_MAC_CATEGORY_INDEX]; -extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE]; -extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION]; -extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX]; -extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION]; -extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX]; -extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION]; -extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION]; -extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION]; -extern struct list_head tomoyo_condition_list; -extern struct list_head tomoyo_domain_list; -extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH]; -extern struct list_head tomoyo_namespace_list; -extern struct mutex tomoyo_policy_lock; -extern struct srcu_struct tomoyo_ss; -extern struct tomoyo_domain_info tomoyo_kernel_domain; -extern struct tomoyo_policy_namespace tomoyo_kernel_namespace; -extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT]; -extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT]; -extern struct lsm_blob_sizes tomoyo_blob_sizes; - /********** Inlined functions. **********/ /** @@ -1104,6 +1105,7 @@ extern struct lsm_blob_sizes tomoyo_blob_sizes; * Returns index number for tomoyo_read_unlock(). */ static inline int tomoyo_read_lock(void) + __acquires_shared(&tomoyo_ss) { return srcu_read_lock(&tomoyo_ss); } @@ -1116,6 +1118,7 @@ static inline int tomoyo_read_lock(void) * Returns nothing. */ static inline void tomoyo_read_unlock(int idx) + __releases_shared(&tomoyo_ss) { srcu_read_unlock(&tomoyo_ss, idx); } diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c index 5f9ccab26e9a..5b7989ad85bf 100644 --- a/security/tomoyo/domain.c +++ b/security/tomoyo/domain.c @@ -611,6 +611,7 @@ struct tomoyo_domain_info *tomoyo_assign_domain(const char *domainname, * Returns 0 on success, negative value otherwise. */ static int tomoyo_environ(struct tomoyo_execve *ee) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_request_info *r = &ee->r; struct linux_binprm *bprm = ee->bprm; diff --git a/security/tomoyo/environ.c b/security/tomoyo/environ.c index 7f0a471f19b2..bcb05910facc 100644 --- a/security/tomoyo/environ.c +++ b/security/tomoyo/environ.c @@ -32,6 +32,7 @@ static bool tomoyo_check_env_acl(struct tomoyo_request_info *r, * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_env_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "misc env %s\n", r->param.environ.name->name); diff --git a/security/tomoyo/file.c b/security/tomoyo/file.c index 8f3b90b6e03d..e9b67dbb38e7 100644 --- a/security/tomoyo/file.c +++ b/security/tomoyo/file.c @@ -164,6 +164,7 @@ static bool tomoyo_get_realpath(struct tomoyo_path_info *buf, const struct path * Returns 0 on success, negative value otherwise. 
*/ static int tomoyo_audit_path_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "file %s %s\n", tomoyo_path_keyword [r->param.path.operation], @@ -178,6 +179,7 @@ static int tomoyo_audit_path_log(struct tomoyo_request_info *r) * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_path2_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "file %s %s %s\n", tomoyo_mac_keywords [tomoyo_pp2mac[r->param.path2.operation]], @@ -193,6 +195,7 @@ static int tomoyo_audit_path2_log(struct tomoyo_request_info *r) * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_mkdev_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "file %s %s 0%o %u %u\n", tomoyo_mac_keywords @@ -210,6 +213,7 @@ static int tomoyo_audit_mkdev_log(struct tomoyo_request_info *r) * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_path_number_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { const u8 type = r->param.path_number.operation; u8 radix; @@ -572,6 +576,7 @@ static int tomoyo_update_path2_acl(const u8 perm, */ static int tomoyo_path_permission(struct tomoyo_request_info *r, u8 operation, const struct tomoyo_path_info *filename) + __must_hold_shared(&tomoyo_ss) { int error; diff --git a/security/tomoyo/gc.c b/security/tomoyo/gc.c index 026e29ea3796..34912f120854 100644 --- a/security/tomoyo/gc.c +++ b/security/tomoyo/gc.c @@ -23,11 +23,10 @@ static inline void tomoyo_memory_free(void *ptr) tomoyo_memory_used[TOMOYO_MEMORY_POLICY] -= ksize(ptr); kfree(ptr); } - -/* The list for "struct tomoyo_io_buffer". */ -static LIST_HEAD(tomoyo_io_buffer_list); /* Lock for protecting tomoyo_io_buffer_list. */ static DEFINE_SPINLOCK(tomoyo_io_buffer_list_lock); +/* The list for "struct tomoyo_io_buffer". */ +static __guarded_by(&tomoyo_io_buffer_list_lock) LIST_HEAD(tomoyo_io_buffer_list); /** * tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not. @@ -385,6 +384,7 @@ static inline void tomoyo_del_number_group(struct list_head *element) */ static void tomoyo_try_to_gc(const enum tomoyo_policy_id type, struct list_head *element) + __must_hold(&tomoyo_policy_lock) { /* * __list_del_entry() guarantees that the list element became no longer @@ -484,6 +484,7 @@ static void tomoyo_try_to_gc(const enum tomoyo_policy_id type, */ static void tomoyo_collect_member(const enum tomoyo_policy_id id, struct list_head *member_list) + __must_hold(&tomoyo_policy_lock) { struct tomoyo_acl_head *member; struct tomoyo_acl_head *tmp; @@ -504,6 +505,7 @@ static void tomoyo_collect_member(const enum tomoyo_policy_id id, * Returns nothing. */ static void tomoyo_collect_acl(struct list_head *list) + __must_hold(&tomoyo_policy_lock) { struct tomoyo_acl_info *acl; struct tomoyo_acl_info *tmp; @@ -627,8 +629,11 @@ static int tomoyo_gc_thread(void *unused) if (head->users) continue; list_del(&head->list); - kfree(head->read_buf); - kfree(head->write_buf); + /* Safe destruction because no users are left. 
*/ + capability_unsafe( + kfree(head->read_buf); + kfree(head->write_buf); + ); kfree(head); } spin_unlock(&tomoyo_io_buffer_list_lock); @@ -656,11 +661,18 @@ void tomoyo_notify_gc(struct tomoyo_io_buffer *head, const bool is_register) head->users = 1; list_add(&head->list, &tomoyo_io_buffer_list); } else { - is_write = head->write_buf != NULL; + /* + * tomoyo_write_control() can concurrently update write_buf from + * a non-NULL to new non-NULL pointer with io_sem held. + */ + is_write = data_race(head->write_buf != NULL); if (!--head->users) { list_del(&head->list); - kfree(head->read_buf); - kfree(head->write_buf); + /* Safe destruction because no users are left. */ + capability_unsafe( + kfree(head->read_buf); + kfree(head->write_buf); + ); kfree(head); } } diff --git a/security/tomoyo/mount.c b/security/tomoyo/mount.c index 2755971f50df..322dfd188ada 100644 --- a/security/tomoyo/mount.c +++ b/security/tomoyo/mount.c @@ -28,6 +28,7 @@ static const char * const tomoyo_mounts[TOMOYO_MAX_SPECIAL_MOUNT] = { * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_mount_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "file mount %s %s %s 0x%lX\n", r->param.mount.dev->name, @@ -78,6 +79,7 @@ static int tomoyo_mount_acl(struct tomoyo_request_info *r, const char *dev_name, const struct path *dir, const char *type, unsigned long flags) + __must_hold_shared(&tomoyo_ss) { struct tomoyo_obj_info obj = { }; struct path path; diff --git a/security/tomoyo/network.c b/security/tomoyo/network.c index 8dc61335f65e..cfc2a019de1e 100644 --- a/security/tomoyo/network.c +++ b/security/tomoyo/network.c @@ -363,6 +363,7 @@ int tomoyo_write_unix_network(struct tomoyo_acl_param *param) static int tomoyo_audit_net_log(struct tomoyo_request_info *r, const char *family, const u8 protocol, const u8 operation, const char *address) + __must_hold_shared(&tomoyo_ss) { return tomoyo_supervisor(r, "network %s %s %s %s\n", family, tomoyo_proto_keyword[protocol], @@ -377,6 +378,7 @@ static int tomoyo_audit_net_log(struct tomoyo_request_info *r, * Returns 0 on success, negative value otherwise. */ static int tomoyo_audit_inet_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { char buf[128]; int len; @@ -402,6 +404,7 @@ static int tomoyo_audit_inet_log(struct tomoyo_request_info *r) * Returns 0 on success, negative value otherwise. 
*/ static int tomoyo_audit_unix_log(struct tomoyo_request_info *r) + __must_hold_shared(&tomoyo_ss) { return tomoyo_audit_net_log(r, "unix", r->param.unix_network.protocol, r->param.unix_network.operation,

From patchwork Tue Mar 4 09:21:32 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870214
Date: Tue, 4 Mar 2025 10:21:32 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-34-elver@google.com>
Subject: [PATCH v2 33/34] crypto: Enable capability analysis
From: Marco Elver

Enable capability analysis for the crypto subsystem. This demonstrates a larger conversion to use Clang's capability analysis. The benefit is additional static checking of locking rules, along with better documentation.

Signed-off-by: Marco Elver
Cc: Herbert Xu
Cc: "David S. Miller"
Cc: linux-crypto@vger.kernel.org
---
v2:
* New patch.
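Most conversions below annotate existing acquire/release pairs; for example, the seq_file iterator in crypto/proc.c takes crypto_alg_sem shared in c_start() and drops it in c_stop(), with the guarded list declared __guarded_by. A minimal sketch of that shape (hypothetical names, not part of this patch):

  static DECLARE_RWSEM(example_sem);
  static __guarded_by(&example_sem) LIST_HEAD(example_list);

  static void *example_start(void)
          __acquires_shared(&example_sem)   /* returns with the reader lock held */
  {
          down_read(&example_sem);
          return example_list.next;         /* guarded access: reader lock held */
  }

  static void example_stop(void *p)
          __releases_shared(&example_sem)   /* must be entered with the lock held */
  {
          up_read(&example_sem);
  }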
--- crypto/Makefile | 2 ++ crypto/algapi.c | 2 ++ crypto/api.c | 1 + crypto/crypto_engine.c | 2 +- crypto/drbg.c | 5 +++++ crypto/internal.h | 2 +- crypto/proc.c | 3 +++ crypto/scompress.c | 8 +++++--- include/crypto/internal/engine.h | 2 +- 9 files changed, 21 insertions(+), 6 deletions(-) diff --git a/crypto/Makefile b/crypto/Makefile index f67e853c4690..b7fa58ab8783 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -3,6 +3,8 @@ # Cryptographic API # +CAPABILITY_ANALYSIS := y + obj-$(CONFIG_CRYPTO) += crypto.o crypto-y := api.o cipher.o compress.o diff --git a/crypto/algapi.c b/crypto/algapi.c index 5318c214debb..c2bafcde6f64 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -230,6 +230,7 @@ EXPORT_SYMBOL_GPL(crypto_remove_spawns); static void crypto_alg_finish_registration(struct crypto_alg *alg, struct list_head *algs_to_put) + __must_hold(&crypto_alg_sem) { struct crypto_alg *q; @@ -286,6 +287,7 @@ static struct crypto_larval *crypto_alloc_test_larval(struct crypto_alg *alg) static struct crypto_larval * __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put) + __must_hold(&crypto_alg_sem) { struct crypto_alg *q; struct crypto_larval *larval; diff --git a/crypto/api.c b/crypto/api.c index bfd177a4313a..def3430ab332 100644 --- a/crypto/api.c +++ b/crypto/api.c @@ -57,6 +57,7 @@ EXPORT_SYMBOL_GPL(crypto_mod_put); static struct crypto_alg *__crypto_alg_lookup(const char *name, u32 type, u32 mask) + __must_hold_shared(&crypto_alg_sem) { struct crypto_alg *q, *alg = NULL; int best = -2; diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index c7c16da5e649..4ab0bbc4c7ce 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -514,8 +514,8 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev, snprintf(engine->name, sizeof(engine->name), "%s-engine", dev_name(dev)); - crypto_init_queue(&engine->queue, qlen); spin_lock_init(&engine->queue_lock); + crypto_init_queue(&engine->queue, qlen); engine->kworker = kthread_run_worker(0, "%s", engine->name); if (IS_ERR(engine->kworker)) { diff --git a/crypto/drbg.c b/crypto/drbg.c index f28dfc2511a2..881579afa160 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -231,6 +231,7 @@ static inline unsigned short drbg_sec_strength(drbg_flag_t flags) */ static int drbg_fips_continuous_test(struct drbg_state *drbg, const unsigned char *entropy) + __must_hold(&drbg->drbg_mutex) { unsigned short entropylen = drbg_sec_strength(drbg->core->flags); int ret = 0; @@ -1061,6 +1062,7 @@ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed, static inline int drbg_get_random_bytes(struct drbg_state *drbg, unsigned char *entropy, unsigned int entropylen) + __must_hold(&drbg->drbg_mutex) { int ret; @@ -1075,6 +1077,7 @@ static inline int drbg_get_random_bytes(struct drbg_state *drbg, } static int drbg_seed_from_random(struct drbg_state *drbg) + __must_hold(&drbg->drbg_mutex) { struct drbg_string data; LIST_HEAD(seedlist); @@ -1132,6 +1135,7 @@ static bool drbg_nopr_reseed_interval_elapsed(struct drbg_state *drbg) */ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers, bool reseed) + __must_hold(&drbg->drbg_mutex) { int ret; unsigned char entropy[((32 + 16) * 2)]; @@ -1368,6 +1372,7 @@ static inline int drbg_alloc_state(struct drbg_state *drbg) static int drbg_generate(struct drbg_state *drbg, unsigned char *buf, unsigned int buflen, struct drbg_string *addtl) + __must_hold(&drbg->drbg_mutex) { int len = 0; LIST_HEAD(addtllist); diff --git a/crypto/internal.h 
b/crypto/internal.h index 46b661be0f90..3ac76faf228b 100644 --- a/crypto/internal.h +++ b/crypto/internal.h @@ -45,8 +45,8 @@ enum { /* Maximum number of (rtattr) parameters for each template. */ #define CRYPTO_MAX_ATTRS 32 -extern struct list_head crypto_alg_list; extern struct rw_semaphore crypto_alg_sem; +extern struct list_head crypto_alg_list __guarded_by(&crypto_alg_sem); extern struct blocking_notifier_head crypto_chain; int alg_test(const char *driver, const char *alg, u32 type, u32 mask); diff --git a/crypto/proc.c b/crypto/proc.c index 522b27d90d29..4679eb6b81c9 100644 --- a/crypto/proc.c +++ b/crypto/proc.c @@ -19,17 +19,20 @@ #include "internal.h" static void *c_start(struct seq_file *m, loff_t *pos) + __acquires_shared(&crypto_alg_sem) { down_read(&crypto_alg_sem); return seq_list_start(&crypto_alg_list, *pos); } static void *c_next(struct seq_file *m, void *p, loff_t *pos) + __must_hold_shared(&crypto_alg_sem) { return seq_list_next(p, &crypto_alg_list, pos); } static void c_stop(struct seq_file *m, void *p) + __releases_shared(&crypto_alg_sem) { up_read(&crypto_alg_sem); } diff --git a/crypto/scompress.c b/crypto/scompress.c index 1cef6bb06a81..0f24c84cc550 100644 --- a/crypto/scompress.c +++ b/crypto/scompress.c @@ -25,8 +25,8 @@ struct scomp_scratch { spinlock_t lock; - void *src; - void *dst; + void *src __guarded_by(&lock); + void *dst __guarded_by(&lock); }; static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = { @@ -34,8 +34,8 @@ static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = { }; static const struct crypto_type crypto_scomp_type; -static int scomp_scratch_users; static DEFINE_MUTEX(scomp_lock); +static int scomp_scratch_users __guarded_by(&scomp_lock); static int __maybe_unused crypto_scomp_report( struct sk_buff *skb, struct crypto_alg *alg) @@ -59,6 +59,7 @@ static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg) } static void crypto_scomp_free_scratches(void) + __capability_unsafe(/* frees @scratch */) { struct scomp_scratch *scratch; int i; @@ -74,6 +75,7 @@ static void crypto_scomp_free_scratches(void) } static int crypto_scomp_alloc_scratches(void) + __capability_unsafe(/* allocates @scratch */) { struct scomp_scratch *scratch; int i; diff --git a/include/crypto/internal/engine.h b/include/crypto/internal/engine.h index fbf4be56cf12..10edbb451f1c 100644 --- a/include/crypto/internal/engine.h +++ b/include/crypto/internal/engine.h @@ -54,7 +54,7 @@ struct crypto_engine { struct list_head list; spinlock_t queue_lock; - struct crypto_queue queue; + struct crypto_queue queue __guarded_by(&queue_lock); struct device *dev; bool rt;

From patchwork Tue Mar 4 09:21:33 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 870625
Date: Tue, 4 Mar 2025 10:21:33 +0100
In-Reply-To: <20250304092417.2873893-1-elver@google.com>
References: <20250304092417.2873893-1-elver@google.com>
Message-ID: <20250304092417.2873893-35-elver@google.com>
Subject: [PATCH v2 34/34] MAINTAINERS: Add entry for Capability Analysis
From: Marco Elver

Add entry for all new files added for Clang's capability analysis.

Signed-off-by: Marco Elver
Cc: Bart Van Assche
---
 MAINTAINERS | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 8e0736dc2ee0..cf9bf14f99b9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5638,6 +5638,17 @@ M: Nelson Escobar S: Supported F: drivers/infiniband/hw/usnic/ +CLANG CAPABILITY ANALYSIS +M: Marco Elver +R: Bart Van Assche +L: llvm@lists.linux.dev +S: Maintained +F: Documentation/dev-tools/capability-analysis.rst +F: include/linux/compiler-capability-analysis.h +F: lib/test_capability-analysis.c +F: scripts/Makefile.capability-analysis +F: scripts/capability-analysis-suppression.txt + CLANG CONTROL FLOW INTEGRITY SUPPORT M: Sami Tolvanen M: Kees Cook