From patchwork Wed Apr  3 08:46:44 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 161686
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v7 1/7] sched/topology: Adding function partition_sched_domains_locked()
Date: Wed, 3 Apr 2019 10:46:44 +0200
Message-Id: <20190403084650.4414-2-juri.lelli@redhat.com>
In-Reply-To: <20190403084650.4414-1-juri.lelli@redhat.com>
References: <20190403084650.4414-1-juri.lelli@redhat.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Mathieu Poirier

Introduce partition_sched_domains_locked() by taking the mutex locking
code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

No change of functionality is introduced by this patch.
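The split described in the commit message is a common kernel refactoring: the core work moves into a `*_locked()` variant that expects the lock to be held, and the old entry point becomes a thin lock/unlock wrapper. A minimal userspace sketch of that shape, using a pthread mutex in place of sched_domains_mutex (all names here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for sched_domains_mutex. */
static pthread_mutex_t domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int ndoms_cur;

/*
 * Core logic: the caller must already hold domains_mutex.
 * In the kernel this precondition is checked with
 * lockdep_assert_held(&sched_domains_mutex).
 */
void partition_domains_locked(int ndoms_new)
{
    ndoms_cur = ndoms_new;
}

/*
 * The original entry point becomes a wrapper that takes and drops the
 * mutex itself, so existing callers are unaffected while callers that
 * already hold the lock can use the _locked() variant directly.
 */
void partition_domains(int ndoms_new)
{
    pthread_mutex_lock(&domains_mutex);
    partition_domains_locked(ndoms_new);
    pthread_mutex_unlock(&domains_mutex);
}
```

The point of the split is the later patches in this series: code that must hold sched_domains_mutex across more work than the rebuild itself can call the `_locked()` variant without a lock/unlock/relock dance.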
Signed-off-by: Mathieu Poirier
Acked-by: Tejun Heo
---
 include/linux/sched/topology.h | 10 ++++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

-- 
2.17.2

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 57c7ed3fe465..e7db602e898c 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -161,6 +161,10 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					   cpumask_var_t doms_new[],
+					   struct sched_domain_attr *dattr_new);
+
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -213,6 +217,12 @@ unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 
 struct sched_domain_attr;
 
+static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
 static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index ab7f371a3a17..b63ac11d0831 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2154,16 +2154,16 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	bool __maybe_unused has_eas = false;
 	int i, j, n;
 	int new_topology;
 
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
@@ -2246,6 +2246,15 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
+}
+
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 	mutex_unlock(&sched_domains_mutex);
 }

From patchwork Wed Apr  3 08:46:45 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 161688
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v7 2/7] sched/core: Streamlining calls to task_rq_unlock()
Date: Wed, 3 Apr 2019 10:46:45 +0200
Message-Id: <20190403084650.4414-3-juri.lelli@redhat.com>
In-Reply-To: <20190403084650.4414-1-juri.lelli@redhat.com>
References: <20190403084650.4414-1-juri.lelli@redhat.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Mathieu Poirier

task_rq_unlock() is called in several places in __sched_setscheduler().
This is fine when only the rq lock needs to be handled, but not so much
when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location needs to be modified when dealing with more than one lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier
Reviewed-by: Steven Rostedt (VMware)
Acked-by: Tejun Heo
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

-- 
2.17.2

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ead464a0f2e5..98e835de1e7b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4274,8 +4274,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4291,8 +4291,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
 
@@ -4305,8 +4305,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4321,8 +4321,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			 */
 			if (!cpumask_subset(span, &p->cpus_allowed) ||
 			    rq->rd->dl_bw.bw == 0) {
-				task_rq_unlock(rq, p, &rf);
-				return -EPERM;
+				retval = -EPERM;
+				goto unlock;
 			}
 		}
 #endif
@@ -4341,8 +4341,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) &&
 			sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4398,6 +4398,10 @@ static int __sched_setscheduler(struct task_struct *p,
 	preempt_enable();
 
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,
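The single-exit style this patch converts __sched_setscheduler() to can be sketched in plain C as follows. The function, lock, and condition names here are illustrative stand-ins (a pthread mutex in place of the rq lock), not the kernel code itself:

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for the task's rq lock. */
static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Every error path jumps to a single "unlock" label instead of calling
 * the unlock routine inline. If a second lock is later taken alongside
 * rq_lock, only the code at the label has to change, not every return.
 */
int setscheduler_sketch(int policy, int task_is_stop)
{
    int retval;

    pthread_mutex_lock(&rq_lock);        /* cf. task_rq_lock()        */

    if (task_is_stop) {                  /* cf. p == rq->stop         */
        retval = -22;                    /* -EINVAL                   */
        goto unlock;
    }
    if (policy < 0) {                    /* cf. the permission checks */
        retval = -1;                     /* -EPERM                    */
        goto unlock;
    }

    retval = 0;                          /* success                   */
unlock:
    pthread_mutex_unlock(&rq_lock);      /* cf. task_rq_unlock()      */
    return retval;
}
```

Note that in the kernel function the success path still unlocks and returns on its own before the label; this sketch funnels success through the same exit only to keep the example short.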