From patchwork Thu May 17 22:12:44 2012
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 8780
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	fweisbec@gmail.com, patches@linaro.org,
	"Paul E. McKenney", "Paul E. McKenney"
Subject: [PATCH RFC tip/core/rcu 1/2] rcu: Fix code-style issues involving "else"
Date: Thu, 17 May 2012 15:12:44 -0700
Message-Id: <1337292765-12221-1-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.8
In-Reply-To: <20120517221217.GA12196@linux.vnet.ibm.com>
References: <20120517221217.GA12196@linux.vnet.ibm.com>

From: "Paul E. McKenney"

The Linux kernel coding style says that single-statement blocks should
omit curly braces unless the other leg of the "if" statement has
multiple statements, in which case the curly braces should be included.
This commit fixes RCU's violations of this rule.

Signed-off-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
Reviewed-by: Josh Triplett
Acked-by: Lai Jiangshan
---
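As a side note for readers who do not have Documentation/CodingStyle handy,
the stand-alone sketch below (a hypothetical userspace example written for
this note, not code taken from the patch) shows the rule being enforced:
once one leg of an if/else needs braces because it contains more than one
statement, the single-statement leg is braced as well.

#include <stdio.h>

/*
 * Hypothetical illustration of the brace rule this patch enforces.
 * The function name and values are made up for the example.
 */
static void retry_or_give_up(int trycount, int limit)
{
	/*
	 * Discouraged form (what the patch removes): only the "else"
	 * leg is braced, because only it has multiple statements.
	 *
	 *	if (trycount < limit)
	 *		printf("retrying (%d/%d)\n", trycount, limit);
	 *	else {
	 *		printf("giving up\n");
	 *		return;
	 *	}
	 */

	/* Preferred form (what the patch adds): both legs are braced. */
	if (trycount < limit) {
		printf("retrying (%d/%d)\n", trycount, limit);
	} else {
		printf("giving up\n");
		return;
	}
}

int main(void)
{
	retry_or_give_up(3, 10);	/* prints "retrying (3/10)" */
	retry_or_give_up(12, 10);	/* prints "giving up" */
	return 0;
}

The discouraged form is left as a comment so the before/after shapes can be
compared directly; the braced form matches what each hunk below adds.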
 kernel/rcutiny_plugin.h |    3 ++-
 kernel/rcutorture.c     |    3 ++-
 kernel/rcutree.c        |    7 ++++---
 kernel/rcutree_plugin.h |   18 ++++++++++--------
 4 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/kernel/rcutiny_plugin.h b/kernel/rcutiny_plugin.h
index fc31a2d..cd9ae33 100644
--- a/kernel/rcutiny_plugin.h
+++ b/kernel/rcutiny_plugin.h
@@ -351,8 +351,9 @@ static int rcu_initiate_boost(void)
 			rcu_preempt_ctrlblk.boost_tasks =
 				rcu_preempt_ctrlblk.gp_tasks;
 		invoke_rcu_callbacks();
-	} else
+	} else {
 		RCU_TRACE(rcu_initiate_boost_trace());
+	}
 	return 1;
 }
 
diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index e66b34a..d7c3cb1 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcutorture.c
@@ -407,8 +407,9 @@ rcu_torture_cb(struct rcu_head *p)
 	if (++rp->rtort_pipe_count >= RCU_TORTURE_PIPE_LEN) {
 		rp->rtort_mbtest = 0;
 		rcu_torture_free(rp);
-	} else
+	} else {
 		cur_ops->deferred_free(rp);
+	}
 }
 
 static int rcu_no_completed(void)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 0da7b88..25874a3 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -893,8 +893,9 @@ static void __note_new_gpnum(struct rcu_state *rsp, struct rcu_node *rnp, struct
 		if (rnp->qsmask & rdp->grpmask) {
 			rdp->qs_pending = 1;
 			rdp->passed_quiesce = 0;
-		} else
+		} else {
 			rdp->qs_pending = 0;
+		}
 		zero_cpu_stall_ticks(rdp);
 	}
 }
@@ -2115,9 +2116,9 @@ void synchronize_sched_expedited(void)
 		put_online_cpus();
 
 		/* No joy, try again later.  Or just synchronize_sched(). */
-		if (trycount++ < 10)
+		if (trycount++ < 10) {
 			udelay(trycount * num_online_cpus());
-		else {
+		} else {
 			synchronize_sched();
 			return;
 		}
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 2411000..c9b173c 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -398,8 +398,9 @@ static noinline void rcu_read_unlock_special(struct task_struct *t)
 							 rnp->grphi,
 							 !!rnp->gp_tasks);
 			rcu_report_unblock_qs_rnp(rnp, flags);
-		} else
+		} else {
 			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		}
 
 #ifdef CONFIG_RCU_BOOST
 		/* Unboost if we were boosted. */
@@ -429,9 +430,9 @@ void __rcu_read_unlock(void)
 {
 	struct task_struct *t = current;
 
-	if (t->rcu_read_lock_nesting != 1)
+	if (t->rcu_read_lock_nesting != 1) {
 		--t->rcu_read_lock_nesting;
-	else {
+	} else {
 		barrier();  /* critical section before exit code. */
 		t->rcu_read_lock_nesting = INT_MIN;
 		barrier();  /* assign before ->rcu_read_unlock_special load */
@@ -824,9 +825,9 @@ sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
 	int must_wait = 0;
 
 	raw_spin_lock_irqsave(&rnp->lock, flags);
-	if (list_empty(&rnp->blkd_tasks))
+	if (list_empty(&rnp->blkd_tasks)) {
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
-	else {
+	} else {
 		rnp->exp_tasks = rnp->blkd_tasks.next;
 		rcu_initiate_boost(rnp, flags);  /* releases rnp->lock */
 		must_wait = 1;
@@ -870,9 +871,9 @@ void synchronize_rcu_expedited(void)
 	 * expedited grace period for us, just leave.
 	 */
 	while (!mutex_trylock(&sync_rcu_preempt_exp_mutex)) {
-		if (trycount++ < 10)
+		if (trycount++ < 10) {
 			udelay(trycount * num_online_cpus());
-		else {
+		} else {
 			synchronize_rcu();
 			return;
 		}
@@ -2213,8 +2214,9 @@ static void rcu_prepare_for_idle(int cpu)
 	if (rcu_cpu_has_callbacks(cpu)) {
 		trace_rcu_prep_idle("More callbacks");
 		invoke_rcu_core();
-	} else
+	} else {
 		trace_rcu_prep_idle("Callbacks drained");
+	}
 }
 
 /*