From patchwork Wed Jun 13 12:17:07 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 138448
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com, cgroups@vger.kernel.org, Juri Lelli
Subject: [PATCH v4 1/5] sched/topology: Add check to backup comment about hotplug lock
Date: Wed, 13 Jun 2018 14:17:07 +0200
Message-Id: <20180613121711.5018-2-juri.lelli@redhat.com>
In-Reply-To: <20180613121711.5018-1-juri.lelli@redhat.com>
References: <20180613121711.5018-1-juri.lelli@redhat.com>

From: Mathieu Poirier

The comment above partition_sched_domains() clearly states that the
cpu_hotplug_lock should be held, but nothing enforces that requirement.
Add an explicit check backing the comment, so that it is impossible for
a caller to miss it.
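The idea behind the patch, turning a locking comment into an enforced assertion, can be sketched in plain user-space C. This is an illustrative toy, not kernel code: every `toy_*` name and the flag-based "lock" are invented stand-ins for `cpus_read_lock()` and `lockdep_assert_cpus_held()`:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a lock whose "held" state can be asserted. */
static bool cpus_lock_held;

static void toy_cpus_read_lock(void)   { cpus_lock_held = true; }
static void toy_cpus_read_unlock(void) { cpus_lock_held = false; }

/* Stand-in for lockdep_assert_cpus_held(): the comment becomes a check. */
static void toy_assert_cpus_held(void)
{
	assert(cpus_lock_held && "caller must hold the hotplug lock");
}

static int partitions_rebuilt;

static void toy_partition_domains(void)
{
	toy_assert_cpus_held();	/* enforced, not merely documented */
	partitions_rebuilt++;
}
```

With assertions enabled, a caller that forgets the lock now aborts immediately instead of racing silently, which is exactly the debugging value lockdep provides in the kernel.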
Suggested-by: Juri Lelli
Signed-off-by: Mathieu Poirier
[modified changelog]
Signed-off-by: Juri Lelli
---
 kernel/sched/topology.c | 1 +
 1 file changed, 1 insertion(+)

Reviewed-by: Steven Rostedt (VMware)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 61a1125c1ae4..96eee22fafe8 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1858,6 +1858,7 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	int i, j, n;
 	int new_topology;
 
+	lockdep_assert_cpus_held();
 	mutex_lock(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */

From patchwork Wed Jun 13 12:17:08 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 138444
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com, cgroups@vger.kernel.org
Subject: [PATCH v4 2/5] sched/topology: Adding function partition_sched_domains_locked()
Date: Wed, 13 Jun 2018 14:17:08 +0200
Message-Id: <20180613121711.5018-3-juri.lelli@redhat.com>
In-Reply-To: <20180613121711.5018-1-juri.lelli@redhat.com>
References: <20180613121711.5018-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Introduce partition_sched_domains_locked() by moving the mutex locking
code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

No functional change is introduced by this patch.
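The split this patch performs follows a common kernel convention: a `foo_locked()` variant holds the real logic and asserts its locking precondition, while `foo()` is a thin wrapper that takes and drops the lock. A minimal user-space sketch of the pattern (all `toy_*` names are invented for illustration, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

static bool mutex_held;
static int domains_built;

static void toy_mutex_lock(void)   { assert(!mutex_held); mutex_held = true; }
static void toy_mutex_unlock(void) { assert(mutex_held);  mutex_held = false; }

/* Does the real work; callers must already hold the mutex. */
static void toy_partition_domains_locked(void)
{
	assert(mutex_held);	/* stands in for lockdep_assert_held() */
	domains_built++;
}

/* Convenience wrapper that takes and drops the mutex itself. */
static void toy_partition_domains(void)
{
	toy_mutex_lock();
	toy_partition_domains_locked();
	toy_mutex_unlock();
}
```

A caller that already holds the mutex for a larger critical section calls the `_locked` variant directly; everyone else uses the wrapper, and neither path can double-acquire the lock.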
Signed-off-by: Mathieu Poirier
---
 include/linux/sched/topology.h | 10 ++++++++++
 kernel/sched/topology.c        | 18 ++++++++++++++----
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 26347741ba50..57997caf61b6 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -162,6 +162,10 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					   cpumask_var_t doms_new[],
+					   struct sched_domain_attr *dattr_new);
+
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -206,6 +210,12 @@ extern void set_sched_topology(struct sched_domain_topology_level *tl);
 
 struct sched_domain_attr;
 
+static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
 static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 96eee22fafe8..25a5727d3b48 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1850,16 +1850,16 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	int i, j, n;
 	int new_topology;
 
 	lockdep_assert_cpus_held();
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
@@ -1924,6 +1924,16 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
 
+}
+
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	lockdep_assert_cpus_held();
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 	mutex_unlock(&sched_domains_mutex);
 }

From patchwork Wed Jun 13 12:17:09 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 138447
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com, cgroups@vger.kernel.org
Subject: [PATCH v4 3/5] sched/core: Streamlining calls to task_rq_unlock()
Date: Wed, 13 Jun 2018 14:17:09 +0200
Message-Id: <20180613121711.5018-4-juri.lelli@redhat.com>
In-Reply-To: <20180613121711.5018-1-juri.lelli@redhat.com>
References: <20180613121711.5018-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled, but not so much when other locks come into play.

This patch streamlines the release of the rq lock so that only one
location needs to be modified when dealing with more than one lock.

No functional change is introduced by this patch.
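The refactoring described above is the classic single-exit `goto unlock` pattern: every error branch sets `retval` and jumps to one unlock site, so a later patch can add a second lock release in exactly one place. A small user-space sketch (the `toy_*` names and error values are invented stand-ins, not the kernel's implementation):

```c
#include <assert.h>
#include <stdbool.h>

static bool rq_locked;
static void toy_rq_lock(void)   { rq_locked = true; }
static void toy_rq_unlock(void) { rq_locked = false; }

/* Every error branch jumps to the single unlock site at the bottom. */
static int toy_setscheduler(int policy, int prio)
{
	int retval;

	toy_rq_lock();

	if (policy < 0) {
		retval = -22;	/* stand-in for -EINVAL */
		goto unlock;
	}
	if (prio > 99) {
		retval = -1;	/* stand-in for -EPERM */
		goto unlock;
	}
	retval = 0;
unlock:
	toy_rq_unlock();	/* the only place the lock is released */
	return retval;
}
```

If a second lock must later be dropped on every exit (as patch 4/5 does with cpuset_lock()), only the code after the `unlock:` label changes.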
Signed-off-by: Mathieu Poirier
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

Reviewed-by: Steven Rostedt (VMware)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1555185c054..ca788f74259d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4237,8 +4237,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4254,8 +4254,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
@@ -4268,8 +4268,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4284,8 +4284,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			 */
 			if (!cpumask_subset(span, &p->cpus_allowed) ||
 			    rq->rd->dl_bw.bw == 0) {
-				task_rq_unlock(rq, p, &rf);
-				return -EPERM;
+				retval = -EPERM;
+				goto unlock;
 			}
 		}
 #endif
@@ -4304,8 +4304,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) &&
	    sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4361,6 +4361,10 @@ static int __sched_setscheduler(struct task_struct *p,
 	preempt_enable();
 
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,

From patchwork Wed Jun 13 12:17:10 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 138445
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com, cgroups@vger.kernel.org, Juri Lelli
Subject: [PATCH v4 4/5] sched/core: Prevent race condition between cpuset and __sched_setscheduler()
Date: Wed, 13 Jun 2018 14:17:10 +0200
Message-Id: <20180613121711.5018-5-juri.lelli@redhat.com>
In-Reply-To: <20180613121711.5018-1-juri.lelli@redhat.com>
References: <20180613121711.5018-1-juri.lelli@redhat.com>

From: Mathieu Poirier

No synchronisation mechanism exists between the cpuset subsystem and
calls to function __sched_setscheduler(). As such, it is possible that
new root domains are created on the cpuset side while a deadline
acceptance test is carried out in __sched_setscheduler(), leading to a
potential oversell of CPU bandwidth.

By making the cpuset_mutex available to the core scheduler it is
possible to prevent situations such as the one described above from
happening.
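The patch guards the admission test with mutex_trylock() rather than a blocking lock, so a caller that cannot get the cpuset_mutex fails fast with -EBUSY instead of sleeping (or risking a lock-ordering deadlock). A user-space sketch of that shape, with invented `toy_*` names standing in for the kernel primitives:

```c
#include <assert.h>
#include <stdbool.h>

static bool cpuset_mutex_held;

/* Non-blocking acquire, as with mutex_trylock(): 1 on success, 0 if busy. */
static int toy_cpuset_lock(void)
{
	if (cpuset_mutex_held)
		return 0;
	cpuset_mutex_held = true;
	return 1;
}

static void toy_cpuset_unlock(void)
{
	cpuset_mutex_held = false;
}

/* Admission check bails out with an error instead of blocking. */
static int toy_admission_check(void)
{
	if (!toy_cpuset_lock())
		return -16;	/* stand-in for -EBUSY */
	/* ... deadline bandwidth checks would run here, race-free ... */
	toy_cpuset_unlock();
	return 0;
}
```

While the mutex is held, no root domain can be rebuilt underneath the check; when it is contended, the caller gets a clean retryable error.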
Signed-off-by: Mathieu Poirier
[fixed missing cpuset_unlock() and changed to use mutex_trylock()]
Signed-off-by: Juri Lelli
---
 include/linux/cpuset.h |  6 ++++++
 kernel/cgroup/cpuset.c | 16 ++++++++++++++++
 kernel/sched/core.c    | 14 ++++++++++++++
 3 files changed, 36 insertions(+)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 934633a05d20..a1970862ab8e 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -55,6 +55,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
+extern int cpuset_lock(void);
+extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -176,6 +178,10 @@ static inline void cpuset_update_active_cpus(void)
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
+static inline int cpuset_lock(void) { return 1; }
+
+static inline void cpuset_unlock(void) { }
+
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
 {
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index b42037e6e81d..d26fd4795aa3 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2409,6 +2409,22 @@ void __init cpuset_init_smp(void)
 	BUG_ON(!cpuset_migrate_mm_wq);
 }
 
+/**
+ * cpuset_lock - Grab the cpuset_mutex from another subsystem
+ */
+int cpuset_lock(void)
+{
+	return mutex_trylock(&cpuset_mutex);
+}
+
+/**
+ * cpuset_unlock - Release the cpuset_mutex from another subsystem
+ */
+void cpuset_unlock(void)
+{
+	mutex_unlock(&cpuset_mutex);
+}
+
 /**
  * cpuset_cpus_allowed - return cpus_allowed mask from a tasks cpuset.
  * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca788f74259d..a5b0c6c25b44 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4218,6 +4218,14 @@ static int __sched_setscheduler(struct task_struct *p,
 	if (attr->sched_flags & SCHED_FLAG_SUGOV)
 		return -EINVAL;
 
+	/*
+	 * Make sure we don't race with the cpuset subsystem where root
+	 * domains can be rebuilt or modified while operations like DL
+	 * admission checks are carried out.
+	 */
+	if (!cpuset_lock())
+		return -EBUSY;
+
 	retval = security_task_setscheduler(p);
 	if (retval)
 		return retval;
@@ -4295,6 +4303,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
 		policy = oldpolicy = -1;
 		task_rq_unlock(rq, p, &rf);
+		if (user)
+			cpuset_unlock();
 		goto recheck;
 	}
 
@@ -4352,6 +4362,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	/* Avoid rq from going away on us: */
 	preempt_disable();
 	task_rq_unlock(rq, p, &rf);
+	if (user)
+		cpuset_unlock();
 
 	if (pi)
 		rt_mutex_adjust_pi(p);
@@ -4364,6 +4376,8 @@ static int __sched_setscheduler(struct task_struct *p,
 
 unlock:
 	task_rq_unlock(rq, p, &rf);
+	if (user)
+		cpuset_unlock();
 	return retval;
 }

From patchwork Wed Jun 13 12:17:11 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 138446
From: Juri Lelli <juri.lelli@redhat.com>
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v4 5/5] cpuset: Rebuild root domain deadline accounting information
Date: Wed, 13 Jun 2018 14:17:11 +0200
Message-Id: <20180613121711.5018-6-juri.lelli@redhat.com>
In-Reply-To: <20180613121711.5018-1-juri.lelli@redhat.com>
References: <20180613121711.5018-1-juri.lelli@redhat.com>

From: Mathieu Poirier <mathieu.poirier@linaro.org>

When the topology of root domains is modified by CPUset or CPUhotplug
operations, the deadline bandwidth information held in those root domains
is lost.

This patch addresses the issue by recalculating the lost deadline
bandwidth information: it cycles through the deadline tasks held in
CPUsets and adds their current load to the root domain they are
associated with.
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
---
 include/linux/sched.h          |  5 ++++
 include/linux/sched/deadline.h |  8 ++++++
 kernel/cgroup/cpuset.c         | 63 +++++++++++++++++++++++++++++++++++++++++-
 kernel/sched/deadline.c        | 31 +++++++++++++++++++++
 kernel/sched/sched.h           |  3 --
 kernel/sched/topology.c        | 13 ++++++++-
 6 files changed, 118 insertions(+), 5 deletions(-)

-- 
2.14.3

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 28ff3ca9f752..f7fcc903e1a1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -278,6 +278,11 @@ struct vtime {
 	u64			gtime;
 };
 
+#ifdef CONFIG_SMP
+extern struct root_domain def_root_domain;
+extern struct mutex sched_domains_mutex;
+#endif
+
 struct sched_info {
 #ifdef CONFIG_SCHED_INFO
 	/* Cumulative counters: */
diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 0cb034331cbb..1aff00b65f3c 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -24,3 +24,11 @@ static inline bool dl_time_before(u64 a, u64 b)
 {
 	return (s64)(a - b) < 0;
 }
+
+#ifdef CONFIG_SMP
+
+struct root_domain;
+extern void dl_add_task_root_domain(struct task_struct *p);
+extern void dl_clear_root_domain(struct root_domain *rd);
+
+#endif /* CONFIG_SMP */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d26fd4795aa3..0ca10418ddf6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -44,6 +44,7 @@
 #include <linux/proc_fs.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
+#include <linux/sched/deadline.h>
 #include <linux/sched/mm.h>
 #include <linux/sched/task.h>
 #include <linux/seq_file.h>
@@ -812,6 +813,66 @@ static int generate_sched_domains(cpumask_var_t **domains,
 	return ndoms;
 }
 
+static void update_tasks_root_domain(struct cpuset *cs)
+{
+	struct css_task_iter it;
+	struct task_struct *task;
+
+	css_task_iter_start(&cs->css, 0, &it);
+
+	while ((task = css_task_iter_next(&it)))
+		dl_add_task_root_domain(task);
+
+	css_task_iter_end(&it);
+}
+
+/*
+ * Called with cpuset_mutex held (rebuild_sched_domains())
+ * Called with hotplug lock held (rebuild_sched_domains_locked())
+ * Called with sched_domains_mutex held (partition_and_rebuild_domains())
+ */
+static void rebuild_root_domains(void)
+{
+	struct cpuset *cs = NULL;
+	struct cgroup_subsys_state *pos_css;
+
+	rcu_read_lock();
+
+	/*
+	 * Clear default root domain DL accounting, it will be computed again
+	 * if a task belongs to it.
+	 */
+	dl_clear_root_domain(&def_root_domain);
+
+	cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
+
+		if (cpumask_empty(cs->effective_cpus)) {
+			pos_css = css_rightmost_descendant(pos_css);
+			continue;
+		}
+
+		css_get(&cs->css);
+
+		rcu_read_unlock();
+
+		update_tasks_root_domain(cs);
+
+		rcu_read_lock();
+		css_put(&cs->css);
+	}
+	rcu_read_unlock();
+}
+
+static void
+partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+	rebuild_root_domains();
+	mutex_unlock(&sched_domains_mutex);
+}
+
 /*
  * Rebuild scheduler domains.
  *
@@ -844,7 +905,7 @@ static void rebuild_sched_domains_locked(void)
 	ndoms = generate_sched_domains(&doms, &attr);
 
 	/* Have scheduler rebuild the domains */
-	partition_sched_domains(ndoms, doms, attr);
+	partition_and_rebuild_sched_domains(ndoms, doms, attr);
 out:
 	put_online_cpus();
 }
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1356afd1eeb6..7be2c89b0645 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2275,6 +2275,37 @@ void __init init_sched_dl_class(void)
 					GFP_KERNEL, cpu_to_node(i));
 }
 
+void dl_add_task_root_domain(struct task_struct *p)
+{
+	unsigned long flags;
+	struct rq_flags rf;
+	struct rq *rq;
+	struct dl_bw *dl_b;
+
+	rq = task_rq_lock(p, &rf);
+	if (!dl_task(p))
+		goto unlock;
+
+	dl_b = &rq->rd->dl_bw;
+	raw_spin_lock_irqsave(&dl_b->lock, flags);
+
+	dl_b->total_bw += p->dl.dl_bw;
+
+	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+}
+
+void dl_clear_root_domain(struct root_domain *rd)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
+	rd->dl_bw.total_bw = 0;
+	raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);
+}
+
 #endif /* CONFIG_SMP */
 
 static void switched_from_dl(struct rq *rq, struct task_struct *p)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 67702b4d9ac7..83bfacbd13d4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -723,9 +723,6 @@ struct root_domain {
 	unsigned long		max_cpu_capacity;
 };
 
-extern struct root_domain def_root_domain;
-extern struct mutex sched_domains_mutex;
-
 extern void init_defrootdomain(void);
 extern int sched_init_domains(const struct cpumask *cpu_map);
 extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 25a5727d3b48..fb39cee51b14 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1884,8 +1884,19 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
 	for (i = 0; i < ndoms_cur; i++) {
 		for (j = 0; j < n && !new_topology; j++) {
 			if (cpumask_equal(doms_cur[i], doms_new[j])
-			    && dattrs_equal(dattr_cur, i, dattr_new, j))
+			    && dattrs_equal(dattr_cur, i, dattr_new, j)) {
+				struct root_domain *rd;
+
+				/*
+				 * This domain won't be destroyed and as such
+				 * its dl_bw->total_bw needs to be cleared. It
+				 * will be recomputed in function
+				 * update_tasks_root_domain().
+				 */
+				rd = cpu_rq(cpumask_any(doms_cur[i]))->rd;
+				dl_clear_root_domain(rd);
 				goto match1;
+			}
 		}
 		/* No match - a current sched domain not in new doms_new[] */
 		detach_destroy_domains(doms_cur[i]);