From patchwork Fri Jun 28 08:06:11 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 168018
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v8 1/8] sched/topology: Adding function partition_sched_domains_locked()
Date: Fri, 28 Jun 2019 10:06:11 +0200
Message-Id: <20190628080618.522-2-juri.lelli@redhat.com>
In-Reply-To: <20190628080618.522-1-juri.lelli@redhat.com>
References: <20190628080618.522-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Introduce function partition_sched_domains_locked() by taking the mutex
locking code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

No change of functionality is introduced by this patch.
Signed-off-by: Mathieu Poirier
Acked-by: Tejun Heo
---
 include/linux/sched/topology.h | 10 ++++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

-- 
2.17.2

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index cfc0a89a7159..d7166f8c0215 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -161,6 +161,10 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					   cpumask_var_t doms_new[],
+					   struct sched_domain_attr *dattr_new);
+
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -213,6 +217,12 @@ unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 
 struct sched_domain_attr;
 
+static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
 static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f53f89df837d..362c383ec4bd 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2159,16 +2159,16 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	bool __maybe_unused has_eas = false;
 	int i, j, n;
 	int new_topology;
 
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
 
@@ -2251,6 +2251,15 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
+}
+
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 	mutex_unlock(&sched_domains_mutex);
 }

From patchwork Fri Jun 28 08:06:12 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 168019
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v8 2/8] sched/core: Streamlining calls to task_rq_unlock()
Date: Fri, 28 Jun 2019 10:06:12 +0200
Message-Id: <20190628080618.522-3-juri.lelli@redhat.com>
In-Reply-To: <20190628080618.522-1-juri.lelli@redhat.com>
References: <20190628080618.522-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Calls to task_rq_unlock() are made several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled, but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location needs to be modified when dealing with more than one lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier
Reviewed-by: Steven Rostedt (VMware)
Acked-by: Tejun Heo
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

-- 
2.17.2

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 874c427742a9..acd6a9fe85bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4222,8 +4222,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4239,8 +4239,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
 
@@ -4253,8 +4253,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			 */
 			if (!cpumask_subset(span, &p->cpus_allowed) ||
 			    rq->rd->dl_bw.bw == 0) {
-				task_rq_unlock(rq, p, &rf);
-				return -EPERM;
+				retval = -EPERM;
+				goto unlock;
 			}
 		}
 #endif
@@ -4289,8 +4289,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) &&
 	    sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4346,6 +4346,10 @@ static int __sched_setscheduler(struct task_struct *p,
 
 	preempt_enable();
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,