From patchwork Sat Apr 14 16:20:32 2012
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
X-Patchwork-Id: 7817
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
 akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
 josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
 peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
 dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
 fweisbec@gmail.com, patches@linaro.org, torvalds@linux-foundation.org
Subject: [PATCH RFC 2/7] rcu: Create per-CPU variables and avoid name conflict
Date: Sat, 14 Apr 2012 09:20:32 -0700
Message-Id: <1334420437-19264-2-git-send-email-paulmck@linux.vnet.ibm.com>
In-Reply-To: <1334420437-19264-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20120414161953.GA18140@linux.vnet.ibm.com>
 <1334420437-19264-1-git-send-email-paulmck@linux.vnet.ibm.com>

This commit creates the rcu_read_lock_nesting and rcu_read_unlock_special
per-CPU variables, and renames the rcu_read_unlock_special() function to
rcu_read_unlock_do_special() to avoid a name conflict with the new per-CPU
variable.

Suggested-by: Linus Torvalds
Signed-off-by: Paul E. McKenney
---
 include/linux/rcupdate.h |    3 +++
 kernel/rcupdate.c        |    5 +++++
 kernel/rcutiny_plugin.h  |   10 +++++-----
 kernel/rcutree_plugin.h  |   12 ++++++------
 4 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index aca4ef0..1cf19ef 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -144,6 +144,9 @@ extern void synchronize_sched(void);
 
 #ifdef CONFIG_PREEMPT_RCU
 
+DECLARE_PER_CPU(int, rcu_read_lock_nesting);
+DECLARE_PER_CPU(int, rcu_read_unlock_special);
+
 extern void __rcu_read_lock(void);
 extern void __rcu_read_unlock(void);
 void synchronize_rcu(void);
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index a86f174..eb5d160 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -51,6 +51,11 @@
 
 #include "rcu.h"
 
+#ifdef CONFIG_PREEMPT_RCU
+DEFINE_PER_CPU(int, rcu_read_lock_nesting);
+DEFINE_PER_CPU(int, rcu_read_unlock_special);
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
+
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static struct lock_class_key rcu_lock_key;
 struct lockdep_map rcu_lock_map =
diff --git a/kernel/rcutiny_plugin.h b/kernel/rcutiny_plugin.h
index 22ecea0..ff7ec65 100644
--- a/kernel/rcutiny_plugin.h
+++ b/kernel/rcutiny_plugin.h
@@ -132,7 +132,7 @@ static struct rcu_preempt_ctrlblk rcu_preempt_ctrlblk = {
 	RCU_TRACE(.rcb.name = "rcu_preempt")
 };
 
-static void rcu_read_unlock_special(struct task_struct *t);
+static void rcu_read_unlock_do_special(struct task_struct *t);
 static int rcu_preempted_readers_exp(void);
 static void rcu_report_exp_done(void);
 
@@ -510,7 +510,7 @@ void rcu_preempt_note_context_switch(void)
 		 * Complete exit from RCU read-side critical section on
 		 * behalf of preempted instance of __rcu_read_unlock().
 		 */
-		rcu_read_unlock_special(t);
+		rcu_read_unlock_do_special(t);
 	}
 
 /*
@@ -543,7 +543,7 @@ EXPORT_SYMBOL_GPL(__rcu_read_lock);
  * notify RCU core processing or task having blocked during the RCU
  * read-side critical section.
  */
-static noinline void rcu_read_unlock_special(struct task_struct *t)
+static noinline void rcu_read_unlock_do_special(struct task_struct *t)
 {
 	int empty;
 	int empty_exp;
@@ -630,7 +630,7 @@ static noinline void rcu_read_unlock_special(struct task_struct *t)
  * Tiny-preemptible RCU implementation for rcu_read_unlock().
  * Decrement ->rcu_read_lock_nesting.  If the result is zero (outermost
  * rcu_read_unlock()) and ->rcu_read_unlock_special is non-zero, then
- * invoke rcu_read_unlock_special() to clean up after a context switch
+ * invoke rcu_read_unlock_do_special() to clean up after a context switch
  * in an RCU read-side critical section and other special cases.
  */
 void __rcu_read_unlock(void)
@@ -644,7 +644,7 @@ void __rcu_read_unlock(void)
 		t->rcu_read_lock_nesting = INT_MIN;
 		barrier();  /* assign before ->rcu_read_unlock_special load */
 		if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
-			rcu_read_unlock_special(t);
+			rcu_read_unlock_do_special(t);
 		barrier();  /* ->rcu_read_unlock_special load before assign */
 		t->rcu_read_lock_nesting = 0;
 	}
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index b1ac22e..f60b315 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -78,7 +78,7 @@ struct rcu_state rcu_preempt_state = RCU_STATE_INITIALIZER(rcu_preempt);
 DEFINE_PER_CPU(struct rcu_data, rcu_preempt_data);
 static struct rcu_state *rcu_state = &rcu_preempt_state;
 
-static void rcu_read_unlock_special(struct task_struct *t);
+static void rcu_read_unlock_do_special(struct task_struct *t);
 static int rcu_preempted_readers_exp(struct rcu_node *rnp);
 
 /*
@@ -215,7 +215,7 @@ void rcu_preempt_note_context_switch(void)
 		 * Complete exit from RCU read-side critical section on
 		 * behalf of preempted instance of __rcu_read_unlock().
 		 */
-		rcu_read_unlock_special(t);
+		rcu_read_unlock_do_special(t);
 	}
 
 /*
@@ -310,7 +310,7 @@ static struct list_head *rcu_next_node_entry(struct task_struct *t,
  * notify RCU core processing or task having blocked during the RCU
  * read-side critical section.
  */
-static noinline void rcu_read_unlock_special(struct task_struct *t)
+static noinline void rcu_read_unlock_do_special(struct task_struct *t)
 {
 	int empty;
 	int empty_exp;
@@ -422,7 +422,7 @@ static noinline void rcu_read_unlock_special(struct task_struct *t)
  * Tree-preemptible RCU implementation for rcu_read_unlock().
  * Decrement ->rcu_read_lock_nesting.  If the result is zero (outermost
  * rcu_read_unlock()) and ->rcu_read_unlock_special is non-zero, then
- * invoke rcu_read_unlock_special() to clean up after a context switch
+ * invoke rcu_read_unlock_do_special() to clean up after a context switch
  * in an RCU read-side critical section and other special cases.
  */
 void __rcu_read_unlock(void)
@@ -436,7 +436,7 @@ void __rcu_read_unlock(void)
 		t->rcu_read_lock_nesting = INT_MIN;
 		barrier();  /* assign before ->rcu_read_unlock_special load */
 		if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
-			rcu_read_unlock_special(t);
+			rcu_read_unlock_do_special(t);
 		barrier();  /* ->rcu_read_unlock_special load before assign */
 		t->rcu_read_lock_nesting = 0;
 	}
@@ -573,7 +573,7 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
  * Handle tasklist migration for case in which all CPUs covered by the
  * specified rcu_node have gone offline.  Move them up to the root
  * rcu_node.  The reason for not just moving them to the immediate
- * parent is to remove the need for rcu_read_unlock_special() to
+ * parent is to remove the need for rcu_read_unlock_do_special() to
  * make more than two attempts to acquire the target rcu_node's lock.
  * Returns true if there were tasks blocking the current RCU grace
  * period.