From patchwork Mon Oct 23 21:07:27 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 116878
From: "Paul E. McKenney"
To: mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    will.deacon@arm.com, mark.rutland@arm.com, snitzer@redhat.com,
    thor.thayer@linux.intel.com, viro@zeniv.linux.org.uk,
    davem@davemloft.net, shuah@kernel.org, mpe@ellerman.id.au,
    tj@kernel.org, torvalds@linux-foundation.org, "Paul E. McKenney"
Subject: [PATCH 17/19] locking/barriers: Kill lockless_dereference
Date: Mon, 23 Oct 2017 14:07:27 -0700
X-Mailer: git-send-email 2.5.2
In-Reply-To: <20171023210408.GA2930@linux.vnet.ibm.com>
References: <20171023210408.GA2930@linux.vnet.ibm.com>
Message-Id: <1508792849-3115-17-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Will Deacon

lockless_dereference() is a nice idea, but it has gained little
traction in kernel code since its introduction three years ago. This
is partly because it is a pain to type, but also because using
READ_ONCE() instead works correctly on all architectures apart from
Alpha, which is a fully supported but somewhat niche architecture
these days.

This patch moves smp_read_barrier_depends() (a NOP on all
architectures other than Alpha) from lockless_dereference() into
READ_ONCE(), converts the few actual users over to READ_ONCE(), and
then finally removes lockless_dereference() altogether.

Signed-off-by: Will Deacon
Signed-off-by: Paul E. McKenney
---
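For readers following along, here is a toy before/after sketch of the
conversion this patch applies at each call site. It is illustrative
only: "struct foo", "gp", and both reader functions below are made up
and do not appear anywhere in the diff.

struct foo {
	int a;
};

struct foo *gp;	/* published elsewhere via smp_store_release() */

/* Before: the dependency barrier comes from lockless_dereference(). */
int reader_old(void)
{
	struct foo *p = lockless_dereference(gp);

	return p ? p->a : -1;	/* p->a is ordered after the load of gp */
}

/* After: READ_ONCE() alone now enforces dependency ordering. */
int reader_new(void)
{
	struct foo *p = READ_ONCE(gp);

	return p ? p->a : -1;	/* same guarantee, even on Alpha */
}
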
 Documentation/memory-barriers.txt                  | 12 ------------
 .../translations/ko_KR/memory-barriers.txt         | 12 ------------
 arch/x86/events/core.c                             |  2 +-
 arch/x86/include/asm/mmu_context.h                 |  4 ++--
 arch/x86/kernel/ldt.c                              |  2 +-
 drivers/md/dm-mpath.c                              | 20 ++++++++++----------
 fs/dcache.c                                        |  4 ++--
 fs/overlayfs/ovl_entry.h                           |  2 +-
 fs/overlayfs/readdir.c                             |  2 +-
 include/linux/compiler.h                           | 21 +--------------------
 include/linux/rculist.h                            |  4 ++--
 include/linux/rcupdate.h                           |  4 ++--
 kernel/events/core.c                               |  4 ++--
 kernel/seccomp.c                                   |  2 +-
 kernel/task_work.c                                 |  2 +-
 mm/slab.h                                          |  2 +-
 16 files changed, 28 insertions(+), 71 deletions(-)

-- 
2.5.2

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b759a60624fd..470a682f3fa4 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1886,18 +1886,6 @@ There are some more advanced barrier functions:
 
      See Documentation/atomic_{t,bitops}.txt for more information.
 
- (*) lockless_dereference();
-
-     This can be thought of as a pointer-fetch wrapper around the
-     smp_read_barrier_depends() data-dependency barrier.
-
-     This is also similar to rcu_dereference(), but in cases where
-     object lifetime is handled by some mechanism other than RCU, for
-     example, when the objects removed only when the system goes down.
-     In addition, lockless_dereference() is used in some data structures
-     that can be used both with and without RCU.
-
-
 (*) dma_wmb();
 (*) dma_rmb();
diff --git a/Documentation/translations/ko_KR/memory-barriers.txt b/Documentation/translations/ko_KR/memory-barriers.txt
index a7a813258013..ec3b46e27b7a 100644
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -1858,18 +1858,6 @@ Mandatory 배리어들은 SMP 시스템에서도 UP 시스템에서도 SMP 효
 
      참고하세요.
 
- (*) lockless_dereference();
-
-     이 함수는 smp_read_barrier_depends() 데이터 의존성 배리어를 사용하는
-     포인터 읽어오기 래퍼(wrapper) 함수로 생각될 수 있습니다.
-
-     객체의 라이프타임이 RCU 외의 메커니즘으로 관리된다는 점을 제외하면
-     rcu_dereference() 와도 유사한데, 예를 들면 객체가 시스템이 꺼질 때에만
-     제거되는 경우 등입니다. 또한, lockless_dereference() 은 RCU 와 함께
-     사용될수도, RCU 없이 사용될 수도 있는 일부 데이터 구조에 사용되고
-     있습니다.
-
-
 (*) dma_wmb();
 (*) dma_rmb();
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 80534d3c2480..589af1eec7c1 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2371,7 +2371,7 @@ static unsigned long get_segment_base(unsigned int segment)
 		struct ldt_struct *ldt;
 
 		/* IRQs are off, so this synchronizes with smp_store_release */
-		ldt = lockless_dereference(current->active_mm->context.ldt);
+		ldt = READ_ONCE(current->active_mm->context.ldt);
 		if (!ldt || idx >= ldt->nr_entries)
 			return 0;
 
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index c120b5db178a..9037a4e546e8 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -72,8 +72,8 @@ static inline void load_mm_ldt(struct mm_struct *mm)
 #ifdef CONFIG_MODIFY_LDT_SYSCALL
 	struct ldt_struct *ldt;
 
-	/* lockless_dereference synchronizes with smp_store_release */
-	ldt = lockless_dereference(mm->context.ldt);
+	/* READ_ONCE synchronizes with smp_store_release */
+	ldt = READ_ONCE(mm->context.ldt);
 
 	/*
 	 * Any change to mm->context.ldt is followed by an IPI to all
diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index f0e64db18ac8..0a21390642c4 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -101,7 +101,7 @@ static void finalize_ldt_struct(struct ldt_struct *ldt)
 static void install_ldt(struct mm_struct *current_mm,
 			struct ldt_struct *ldt)
 {
-	/* Synchronizes with lockless_dereference in load_mm_ldt. */
+	/* Synchronizes with READ_ONCE in load_mm_ldt. */
 	smp_store_release(&current_mm->context.ldt, ldt);
 
 	/* Activate the LDT for all CPUs using current_mm. */
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 11f273d2f018..3f88c9d32f7e 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -366,7 +366,7 @@ static struct pgpath *choose_path_in_pg(struct multipath *m,
 
 	pgpath = path_to_pgpath(path);
 
-	if (unlikely(lockless_dereference(m->current_pg) != pg)) {
+	if (unlikely(READ_ONCE(m->current_pg) != pg)) {
 		/* Only update current_pgpath if pg changed */
 		spin_lock_irqsave(&m->lock, flags);
 		m->current_pgpath = pgpath;
@@ -390,7 +390,7 @@ static struct pgpath *choose_pgpath(struct multipath *m, size_t nr_bytes)
 	}
 
 	/* Were we instructed to switch PG? */
-	if (lockless_dereference(m->next_pg)) {
+	if (READ_ONCE(m->next_pg)) {
 		spin_lock_irqsave(&m->lock, flags);
 		pg = m->next_pg;
 		if (!pg) {
@@ -406,7 +406,7 @@ static struct pgpath *choose_pgpath(struct multipath *m, size_t nr_bytes)
 
 	/* Don't change PG until it has no remaining paths */
 check_current_pg:
-	pg = lockless_dereference(m->current_pg);
+	pg = READ_ONCE(m->current_pg);
 	if (pg) {
 		pgpath = choose_path_in_pg(m, pg, nr_bytes);
 		if (!IS_ERR_OR_NULL(pgpath))
@@ -473,7 +473,7 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
 	struct request *clone;
 
 	/* Do we need to select a new pgpath? */
-	pgpath = lockless_dereference(m->current_pgpath);
+	pgpath = READ_ONCE(m->current_pgpath);
 	if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
 		pgpath = choose_pgpath(m, nr_bytes);
 
@@ -535,7 +535,7 @@ static int __multipath_map_bio(struct multipath *m, struct bio *bio, struct dm_mpath_io *mpio)
 	bool queue_io;
 
 	/* Do we need to select a new pgpath? */
-	pgpath = lockless_dereference(m->current_pgpath);
+	pgpath = READ_ONCE(m->current_pgpath);
 	queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);
 	if (!pgpath || !queue_io)
 		pgpath = choose_pgpath(m, nr_bytes);
@@ -1804,7 +1804,7 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
 	struct pgpath *current_pgpath;
 	int r;
 
-	current_pgpath = lockless_dereference(m->current_pgpath);
+	current_pgpath = READ_ONCE(m->current_pgpath);
 	if (!current_pgpath)
 		current_pgpath = choose_pgpath(m, 0);
 
@@ -1826,7 +1826,7 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
 	}
 
 	if (r == -ENOTCONN) {
-		if (!lockless_dereference(m->current_pg)) {
+		if (!READ_ONCE(m->current_pg)) {
 			/* Path status changed, redo selection */
 			(void) choose_pgpath(m, 0);
 		}
@@ -1895,9 +1895,9 @@ static int multipath_busy(struct dm_target *ti)
 		return (m->queue_mode != DM_TYPE_MQ_REQUEST_BASED);
 
 	/* Guess which priority_group will be used at next mapping time */
-	pg = lockless_dereference(m->current_pg);
-	next_pg = lockless_dereference(m->next_pg);
-	if (unlikely(!lockless_dereference(m->current_pgpath) && next_pg))
+	pg = READ_ONCE(m->current_pg);
+	next_pg = READ_ONCE(m->next_pg);
+	if (unlikely(!READ_ONCE(m->current_pgpath) && next_pg))
 		pg = next_pg;
 
 	if (!pg) {
diff --git a/fs/dcache.c b/fs/dcache.c
index f90141387f01..34c852af215c 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -231,7 +231,7 @@ static inline int dentry_cmp(const struct dentry *dentry, const unsigned char *ct, unsigned tcount)
 {
 	/*
 	 * Be careful about RCU walk racing with rename:
-	 * use 'lockless_dereference' to fetch the name pointer.
+	 * use 'READ_ONCE' to fetch the name pointer.
 	 *
 	 * NOTE! Even if a rename will mean that the length
 	 * was not loaded atomically, we don't care. The
@@ -245,7 +245,7 @@ static inline int dentry_cmp(const struct dentry *dentry, const unsigned char *ct, unsigned tcount)
 	 * early because the data cannot match (there can
 	 * be no NUL in the ct/tcount data)
 	 */
-	const unsigned char *cs = lockless_dereference(dentry->d_name.name);
+	const unsigned char *cs = READ_ONCE(dentry->d_name.name);
 
 	return dentry_string_cmp(cs, ct, tcount);
 }
diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
index 25d9b5adcd42..36b49bd09264 100644
--- a/fs/overlayfs/ovl_entry.h
+++ b/fs/overlayfs/ovl_entry.h
@@ -77,5 +77,5 @@ static inline struct ovl_inode *OVL_I(struct inode *inode)
 
 static inline struct dentry *ovl_upperdentry_dereference(struct ovl_inode *oi)
 {
-	return lockless_dereference(oi->__upperdentry);
+	return READ_ONCE(oi->__upperdentry);
 }
diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
index 0f85ee9c3268..c67a7703296b 100644
--- a/fs/overlayfs/readdir.c
+++ b/fs/overlayfs/readdir.c
@@ -754,7 +754,7 @@ static int ovl_dir_fsync(struct file *file, loff_t start, loff_t end,
 	if (!od->is_upper && OVL_TYPE_UPPER(ovl_path_type(dentry))) {
 		struct inode *inode = file_inode(file);
 
-		realfile = lockless_dereference(od->upperfile);
+		realfile = READ_ONCE(od->upperfile);
 		if (!realfile) {
 			struct path upperpath;
 
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index c714239f453f..4517662f128c 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -242,6 +242,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int size)
 		__read_once_size(&(x), __u.__c, sizeof(x));		\
 	else								\
 		__read_once_size_nocheck(&(x), __u.__c, sizeof(x));	\
+	smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
 	__u.__val;							\
 })
 #define READ_ONCE(x) __READ_ONCE(x, 1)
@@ -345,24 +346,4 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int size)
 	(volatile typeof(x) *)&(x); })
 #define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
 
-/**
- * lockless_dereference() - safely load a pointer for later dereference
- * @p: The pointer to load
- *
- * Similar to rcu_dereference(), but for situations where the pointed-to
- * object's lifetime is managed by something other than RCU. That
- * "something other" might be reference counting or simple immortality.
- *
- * The seemingly unused variable ___typecheck_p validates that @p is
- * indeed a pointer type by using a pointer to typeof(*p) as the type.
- * Taking a pointer to typeof(*p) again is needed in case p is void *.
- */
-#define lockless_dereference(p) \
-({ \
-	typeof(p) _________p1 = READ_ONCE(p); \
-	typeof(*(p)) *___typecheck_p __maybe_unused; \
-	smp_read_barrier_depends(); /* Dependency order vs. p above. */ \
-	(_________p1); \
-})
-
 #endif /* __LINUX_COMPILER_H */
diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index b1fd8bf85fdc..3a2bb7d8ed4d 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -274,7 +274,7 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
 * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
 */
 #define list_entry_rcu(ptr, type, member) \
-	container_of(lockless_dereference(ptr), type, member)
+	container_of(READ_ONCE(ptr), type, member)
 
 /**
 * Where are list_empty_rcu() and list_first_entry_rcu()?
@@ -367,7 +367,7 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
 * example is when items are added to the list, but never deleted.
 */
 #define list_entry_lockless(ptr, type, member) \
-	container_of((typeof(ptr))lockless_dereference(ptr), type, member)
+	container_of((typeof(ptr))READ_ONCE(ptr), type, member)
 
 /**
 * list_for_each_entry_lockless - iterate over rcu list of given type
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index de50d8a4cf41..380a3aeb09d7 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -346,7 +346,7 @@ static inline void rcu_preempt_sleep_check(void) { }
 #define __rcu_dereference_check(p, c, space) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
+	typeof(*p) *________p1 = (typeof(*p) *__force)READ_ONCE(p); \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
 	rcu_dereference_sparse(p, space); \
 	((typeof(*p) __force __kernel *)(________p1)); \
@@ -360,7 +360,7 @@ static inline void rcu_preempt_sleep_check(void) { }
 #define rcu_dereference_raw(p) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(p) ________p1 = lockless_dereference(p); \
+	typeof(p) ________p1 = READ_ONCE(p); \
 	((typeof(*p) __force __kernel *)(________p1)); \
 })
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6bc21e202ae4..417812ce0099 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4231,7 +4231,7 @@ static void perf_remove_from_owner(struct perf_event *event)
 	 * indeed free this event, otherwise we need to serialize on
 	 * owner->perf_event_mutex.
 	 */
-	owner = lockless_dereference(event->owner);
+	owner = READ_ONCE(event->owner);
 	if (owner) {
 		/*
 		 * Since delayed_put_task_struct() also drops the last
@@ -4328,7 +4328,7 @@ int perf_event_release_kernel(struct perf_event *event)
 	 * Cannot change, child events are not migrated, see the
 	 * comment with perf_event_ctx_lock_nested().
 	 */
-	ctx = lockless_dereference(child->ctx);
+	ctx = READ_ONCE(child->ctx);
 	/*
 	 * Since child_mutex nests inside ctx::mutex, we must jump
 	 * through hoops. We start by grabbing a reference on the ctx.
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index bb3a38005b9c..1daa8b61a268 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -189,7 +189,7 @@ static u32 seccomp_run_filters(const struct seccomp_data *sd,
 	u32 ret = SECCOMP_RET_ALLOW;
 	/* Make sure cross-thread synced filter points somewhere sane. */
 	struct seccomp_filter *f =
-			lockless_dereference(current->seccomp.filter);
+			READ_ONCE(current->seccomp.filter);
 
 	/* Ensure unexpected behavior doesn't result in failing open. */
 	if (unlikely(WARN_ON(f == NULL)))
diff --git a/kernel/task_work.c b/kernel/task_work.c
index 836a72a66fba..9a9f262fc53d 100644
--- a/kernel/task_work.c
+++ b/kernel/task_work.c
@@ -67,7 +67,7 @@ task_work_cancel(struct task_struct *task, task_work_func_t func)
 	 * we raced with task_work_run(), *pprev == NULL/exited.
 	 */
 	raw_spin_lock_irqsave(&task->pi_lock, flags);
-	while ((work = lockless_dereference(*pprev))) {
+	while ((work = READ_ONCE(*pprev))) {
 		if (work->func != func)
 			pprev = &work->next;
 		else if (cmpxchg(pprev, work, work->next) == work)
diff --git a/mm/slab.h b/mm/slab.h
index 073362816acc..8894f811a89d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -258,7 +258,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
 	 * memcg_caches issues a write barrier to match this (see
 	 * memcg_create_kmem_cache()).
 	 */
-	cachep = lockless_dereference(arr->entries[idx]);
+	cachep = READ_ONCE(arr->entries[idx]);
 	rcu_read_unlock();
 
 	return cachep;
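
To make the pairing in the x86 LDT changes above concrete, here is a
minimal publish/subscribe sketch. Again, this is illustrative only:
"struct bar", "shared", and both functions are made-up names, not code
from the diff. The writer still publishes with smp_store_release();
after this patch the reader needs nothing more than READ_ONCE():

struct bar {
	int ready;
};

struct bar *shared;

void publisher(struct bar *b)
{
	b->ready = 1;
	/* Orders initialization of *b before publication of the pointer. */
	smp_store_release(&shared, b);
}

int subscriber(void)
{
	/*
	 * READ_ONCE() now includes smp_read_barrier_depends(), so the
	 * dependent load of s->ready cannot observe pre-publication
	 * state, even on Alpha.
	 */
	struct bar *s = READ_ONCE(shared);

	return s ? s->ready : 0;
}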