From patchwork Thu Sep 20 18:48:01 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 11593
From: "Paul E. McKenney"
McKenney" To: linux-kernel@vger.kernel.org Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca, josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu, dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com, fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org, "Paul E. McKenney" Subject: [PATCH tip/core/rcu 05/23] rcu: Allow RCU grace-period cleanup to be preempted Date: Thu, 20 Sep 2012 11:48:01 -0700 Message-Id: <1348166900-18716-5-git-send-email-paulmck@linux.vnet.ibm.com> X-Mailer: git-send-email 1.7.8 In-Reply-To: <1348166900-18716-1-git-send-email-paulmck@linux.vnet.ibm.com> References: <20120920184751.GA18657@linux.vnet.ibm.com> <1348166900-18716-1-git-send-email-paulmck@linux.vnet.ibm.com> x-cbid: 12092018-5806-0000-0000-000019D02D27 X-Gm-Message-State: ALoCoQlDOEPU8tPOVZCFRGBp5YxfRh2u6w4oXdoh5kxrdUlnI5erFE4Ll126RcA1nf2umzW6W+fN From: "Paul E. McKenney" RCU grace-period cleanup is currently carried out with interrupts disabled, which can result in excessive latency spikes on large systems (many hundreds or thousands of CPUs). This patch therefore makes the RCU grace-period cleanup be preemptible, including voluntary preemption points, which should eliminate those latency spikes. Similar spikes from forcing of quiescent states will be dealt with similarly by later patches. Updated to replace uses of spin_lock_irqsave() with spin_lock_irq(), as suggested by Peter Zijlstra. Reported-by: Mike Galbraith Reported-by: Dimitri Sivanich Signed-off-by: Paul E. McKenney Reviewed-by: Josh Triplett --- kernel/rcutree.c | 15 +++++++-------- 1 files changed, 7 insertions(+), 8 deletions(-) diff --git a/kernel/rcutree.c b/kernel/rcutree.c index 3cd18ea..fa11e54 100644 --- a/kernel/rcutree.c +++ b/kernel/rcutree.c @@ -1128,7 +1128,7 @@ static int __noreturn rcu_gp_kthread(void *arg) flush_signals(current); } - raw_spin_lock_irqsave(&rnp->lock, flags); + raw_spin_lock_irq(&rnp->lock); gp_duration = jiffies - rsp->gp_start; if (gp_duration > rsp->gp_max) rsp->gp_max = gp_duration; @@ -1149,7 +1149,7 @@ static int __noreturn rcu_gp_kthread(void *arg) * completed. */ if (*rdp->nxttail[RCU_WAIT_TAIL] == NULL) { - raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */ + raw_spin_unlock_irq(&rnp->lock); /* * Propagate new ->completed value to rcu_node @@ -1158,14 +1158,13 @@ static int __noreturn rcu_gp_kthread(void *arg) * to process their callbacks. */ rcu_for_each_node_breadth_first(rsp, rnp) { - /* irqs already disabled. */ - raw_spin_lock(&rnp->lock); + raw_spin_lock_irq(&rnp->lock); rnp->completed = rsp->gpnum; - /* irqs remain disabled. */ - raw_spin_unlock(&rnp->lock); + raw_spin_unlock_irq(&rnp->lock); + cond_resched(); } rnp = rcu_get_root(rsp); - raw_spin_lock(&rnp->lock); /* irqs already disabled. */ + raw_spin_lock_irq(&rnp->lock); } rsp->completed = rsp->gpnum; /* Declare grace period done. */ @@ -1173,7 +1172,7 @@ static int __noreturn rcu_gp_kthread(void *arg) rsp->fqs_state = RCU_GP_IDLE; if (cpu_needs_another_gp(rsp, rdp)) rsp->gp_flags = 1; - raw_spin_unlock_irqrestore(&rnp->lock, flags); + raw_spin_unlock_irq(&rnp->lock); } }