From patchwork Sat Apr 14 16:20:37 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 7818
From: "Paul E.
 McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
 akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
 josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
 peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
 dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
 fweisbec@gmail.com, patches@linaro.org, torvalds@linux-foundation.org,
 "Paul E. McKenney", "Paul E. McKenney"
Subject: [PATCH RFC 7/7] rcu: Inline preemptible RCU __rcu_read_unlock()
Date: Sat, 14 Apr 2012 09:20:37 -0700
Message-Id: <1334420437-19264-7-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.8
In-Reply-To: <1334420437-19264-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20120414161953.GA18140@linux.vnet.ibm.com>
 <1334420437-19264-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Move __rcu_read_unlock() from kernel/rcupdate.c to
include/linux/rcupdate.h, allowing the compiler to inline it.

Suggested-by: Linus Torvalds
Signed-off-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
---
 include/linux/rcupdate.h |   35 ++++++++++++++++++++++++++++++++++-
 kernel/rcu.h             |    4 ----
 kernel/rcupdate.c        |   33 ---------------------------------
 kernel/rcutiny_plugin.h  |    1 +
 kernel/rcutree_plugin.h  |    1 +
 5 files changed, 36 insertions(+), 38 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 9967b2b..8113505 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -162,7 +162,40 @@ static inline void __rcu_read_lock(void)
 	barrier(); /* Keep code within RCU read-side critical section. */
 }
 
-extern void __rcu_read_unlock(void);
+extern void rcu_read_unlock_do_special(void);
+
+/*
+ * Tree-preemptible RCU implementation for rcu_read_unlock().
+ * Decrement rcu_read_lock_nesting.  If the result is zero (outermost
+ * rcu_read_unlock()) and rcu_read_unlock_special is non-zero, then
+ * invoke rcu_read_unlock_do_special() to clean up after a context switch
+ * in an RCU read-side critical section and other special cases.
+ * Set rcu_read_lock_nesting to a large negative value during cleanup
+ * in order to ensure that if rcu_read_unlock_special is non-zero, then
+ * rcu_read_lock_nesting is also non-zero.
+ */
+static inline void __rcu_read_unlock(void)
+{
+	if (__this_cpu_read(rcu_read_lock_nesting) != 1)
+		__this_cpu_dec(rcu_read_lock_nesting);
+	else {
+		barrier(); /* critical section before exit code. */
+		__this_cpu_write(rcu_read_lock_nesting, INT_MIN);
+		barrier(); /* assign before ->rcu_read_unlock_special load */
+		if (unlikely(__this_cpu_read(rcu_read_unlock_special)))
+			rcu_read_unlock_do_special();
+		barrier(); /* ->rcu_read_unlock_special load before assign */
+		__this_cpu_write(rcu_read_lock_nesting, 0);
+	}
+#ifdef CONFIG_PROVE_LOCKING
+	{
+		int rln = __this_cpu_read(rcu_read_lock_nesting);
+
+		WARN_ON_ONCE(rln < 0 && rln > INT_MIN / 2);
+	}
+#endif /* #ifdef CONFIG_PROVE_LOCKING */
+}
+
 void synchronize_rcu(void);
 
 /*
diff --git a/kernel/rcu.h b/kernel/rcu.h
index 6243d8d..8ba99cd 100644
--- a/kernel/rcu.h
+++ b/kernel/rcu.h
@@ -109,8 +109,4 @@ static inline bool __rcu_reclaim(char *rn, struct rcu_head *head)
 	}
 }
 
-#ifdef CONFIG_PREEMPT_RCU
-extern void rcu_read_unlock_do_special(void);
-#endif /* #ifdef CONFIG_PREEMPT_RCU */
-
 #endif /* __LINUX_RCU_H */
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index d52c68e..f607cb5 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -59,39 +59,6 @@ DEFINE_PER_CPU(struct task_struct *, rcu_current_task);
 #endif /* #ifdef CONFIG_PROVE_RCU */
 
 /*
- * Tree-preemptible RCU implementation for rcu_read_unlock().
- * Decrement rcu_read_lock_nesting.  If the result is zero (outermost
- * rcu_read_unlock()) and rcu_read_unlock_special is non-zero, then
- * invoke rcu_read_unlock_do_special() to clean up after a context switch
- * in an RCU read-side critical section and other special cases.
- * Set rcu_read_lock_nesting to a large negative value during cleanup
- * in order to ensure that if rcu_read_unlock_special is non-zero, then
- * rcu_read_lock_nesting is also non-zero.
- */
-void __rcu_read_unlock(void)
-{
-	if (__this_cpu_read(rcu_read_lock_nesting) != 1)
-		__this_cpu_dec(rcu_read_lock_nesting);
-	else {
-		barrier(); /* critical section before exit code. */
-		__this_cpu_write(rcu_read_lock_nesting, INT_MIN);
-		barrier(); /* assign before ->rcu_read_unlock_special load */
-		if (unlikely(__this_cpu_read(rcu_read_unlock_special)))
-			rcu_read_unlock_do_special();
-		barrier(); /* ->rcu_read_unlock_special load before assign */
-		__this_cpu_write(rcu_read_lock_nesting, 0);
-	}
-#ifdef CONFIG_PROVE_LOCKING
-	{
-		int rln = __this_cpu_read(rcu_read_lock_nesting);
-
-		WARN_ON_ONCE(rln < 0 && rln > INT_MIN / 2);
-	}
-#endif /* #ifdef CONFIG_PROVE_LOCKING */
-}
-EXPORT_SYMBOL_GPL(__rcu_read_unlock);
-
-/*
  * Check for a task exiting while in a preemptible-RCU read-side
  * critical section, clean up if so.  No need to issue warnings,
  * as debug_check_no_locks_held() already does this if lockdep
diff --git a/kernel/rcutiny_plugin.h b/kernel/rcutiny_plugin.h
index 6b416af..49cb5b0 100644
--- a/kernel/rcutiny_plugin.h
+++ b/kernel/rcutiny_plugin.h
@@ -598,6 +598,7 @@ void rcu_read_unlock_do_special(void)
 	}
 	local_irq_restore(flags);
 }
+EXPORT_SYMBOL_GPL(rcu_read_unlock_do_special);
 
 /*
  * Check for a quiescent state from the current CPU.  When a task blocks,
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 20be289..7afde96 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -409,6 +409,7 @@ void rcu_read_unlock_do_special(void)
 		local_irq_restore(flags);
 	}
 }
+EXPORT_SYMBOL_GPL(rcu_read_unlock_do_special);
 
 #ifdef CONFIG_RCU_CPU_STALL_VERBOSE