From patchwork Tue Jun 27 14:35:00 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 697054
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v4 1/9] cgroup/cpuset: Inherit parent's load balance state in v2
Date: Tue, 27 Jun 2023 10:35:00 -0400
Message-Id: <20230627143508.1576882-2-longman@redhat.com>
In-Reply-To: <20230627143508.1576882-1-longman@redhat.com>
References: <20230627143508.1576882-1-longman@redhat.com>

Since commit f28e22441f35 ("cgroup/cpuset: Add a new isolated cpus.partition type"), the CS_SCHED_LOAD_BALANCE bit of a v2 cpuset can be on or off. The child cpusets of a partition root must have the same setting as their parent, or the rebuilding of sched domains may be messed up. Fix this problem by making sure that a child v2 cpuset follows its parent cpuset's load balance state unless the child cpuset is itself a new partition root.
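For illustration, a minimal sketch of the scenario this patch addresses on a cgroup v2 mount (the mount point, cgroup names and CPU numbers below are only an example, not part of the patch):

    # cd /sys/fs/cgroup
    # echo +cpuset > cgroup.subtree_control
    # mkdir A
    # echo 2-3 > A/cpuset.cpus
    # echo isolated > A/cpuset.cpus.partition   # A becomes an isolated partition
    #                                            # root with load balancing off
    # echo +cpuset > A/cgroup.subtree_control
    # mkdir A/B                                  # B is not a partition root; with this
    #                                            # patch it inherits A's load balance
    #                                            # state instead of defaulting to on
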
Fixes: f28e22441f35 ("cgroup/cpuset: Add a new isolated cpus.partition type") Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 33 ++++++++++++++++++++++++++++++--- 1 file changed, 30 insertions(+), 3 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 58e6f18f01c1..170e342b07e3 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1588,11 +1588,16 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, } /* - * Skip the whole subtree if the cpumask remains the same - * and has no partition root state and force flag not set. + * Skip the whole subtree if + * 1) the cpumask remains the same, + * 2) has no partition root state, + * 3) force flag not set, and + * 4) for v2 load balance state same as its parent. */ if (!cp->partition_root_state && !force && - cpumask_equal(tmp->new_cpus, cp->effective_cpus)) { + cpumask_equal(tmp->new_cpus, cp->effective_cpus) && + (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) || + (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) { pos_css = css_rightmost_descendant(pos_css); continue; } @@ -1675,6 +1680,20 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, update_tasks_cpumask(cp, tmp->new_cpus); + /* + * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE + * from parent if current cpuset isn't a valid partition root + * and their load balance states differ. + */ + if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) && + !is_partition_valid(cp) && + (is_sched_load_balance(parent) != is_sched_load_balance(cp))) { + if (is_sched_load_balance(parent)) + set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); + else + clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); + } + /* * On legacy hierarchy, if the effective cpumask of any non- * empty cpuset is changed, we need to rebuild sched domains. 
@@ -3222,6 +3241,14 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 		cs->use_parent_ecpus = true;
 		parent->child_ecpus_count++;
 	}
+
+	/*
+	 * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated
+	 */
+	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+	    !is_sched_load_balance(parent))
+		clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+
 	spin_unlock_irq(&callback_lock);
 
 	if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))

From patchwork Tue Jun 27 14:35:01 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 697057
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v4 2/9] cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE handling
Date: Tue, 27 Jun 2023 10:35:01 -0400
Message-Id: <20230627143508.1576882-3-longman@redhat.com>
In-Reply-To: <20230627143508.1576882-1-longman@redhat.com>
References: <20230627143508.1576882-1-longman@redhat.com>
X-Mailing-List:
linux-kselftest@vger.kernel.org Extract out the setting of CS_CPU_EXCLUSIVE and CS_SCHED_LOAD_BALANCE flags as well as the rebuilding of scheduling domains into the new update_partition_exclusive() and update_partition_sd_lb() helper functions to simplify the logic. The update_partition_exclusive() helper is called mainly at the beginning of the caller, but it may be called at the end too. The update_partition_sd_lb() helper is called at the end of the caller. This patch should reduce the chance that cpuset partition will end up in an incorrect state. Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 141 +++++++++++++++++++++++++---------------- 1 file changed, 86 insertions(+), 55 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 170e342b07e3..ade33e50ffe2 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1255,7 +1255,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) static void compute_effective_cpumask(struct cpumask *new_cpus, struct cpuset *cs, struct cpuset *parent) { - if (parent->nr_subparts_cpus) { + if (parent->nr_subparts_cpus && is_partition_valid(cs)) { cpumask_or(new_cpus, parent->effective_cpus, parent->subparts_cpus); cpumask_and(new_cpus, new_cpus, cs->cpus_allowed); @@ -1277,6 +1277,50 @@ enum subparts_cmd { static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on); + +/* + * Update partition exclusive flag + * + * Return: 0 if successful, an error code otherwise + */ +static int update_partition_exclusive(struct cpuset *cs, int new_prs) +{ + bool exclusive = (new_prs > 0); + + if (exclusive && !is_cpu_exclusive(cs)) { + if (update_flag(CS_CPU_EXCLUSIVE, cs, 1)) + return PERR_NOTEXCL; + } else if (!exclusive && is_cpu_exclusive(cs)) { + /* Turning off CS_CPU_EXCLUSIVE will not return error */ + update_flag(CS_CPU_EXCLUSIVE, cs, 0); + } + return 0; +} + +/* + * Update partition load balance flag and/or rebuild sched domain + * + * Changing load balance flag will automatically call + * rebuild_sched_domains_locked(). + */ +static void update_partition_sd_lb(struct cpuset *cs, int old_prs) +{ + int new_prs = cs->partition_root_state; + bool new_lb = (new_prs != PRS_ISOLATED); + bool rebuild_domains = (new_prs > 0) || (old_prs > 0); + + if (new_lb != !!is_sched_load_balance(cs)) { + rebuild_domains = true; + if (new_lb) + set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); + else + clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); + } + + if (rebuild_domains) + rebuild_sched_domains_locked(); +} + /** * update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset * @cs: The cpuset that requests change in partition root state @@ -1336,8 +1380,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, return is_partition_invalid(parent) ? PERR_INVPARENT : PERR_NOTPART; } - if ((newmask && cpumask_empty(newmask)) || - (!newmask && cpumask_empty(cs->cpus_allowed))) + if (!newmask && cpumask_empty(cs->cpus_allowed)) return PERR_CPUSEMPTY; /* @@ -1403,11 +1446,16 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, cpumask_and(tmp->addmask, newmask, parent->cpus_allowed); adding = cpumask_andnot(tmp->addmask, tmp->addmask, parent->subparts_cpus); + /* + * Empty cpumask is not allewed + */ + if (cpumask_empty(newmask)) { + part_error = PERR_CPUSEMPTY; /* * Make partition invalid if parent's effective_cpus could * become empty and there are tasks in the parent. 
*/ - if (adding && + } else if (adding && cpumask_subset(parent->effective_cpus, tmp->addmask) && !cpumask_intersects(tmp->delmask, cpu_active_mask) && partition_is_populated(parent, cs)) { @@ -1480,14 +1528,13 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, /* * Transitioning between invalid to valid or vice versa may require - * changing CS_CPU_EXCLUSIVE and CS_SCHED_LOAD_BALANCE. + * changing CS_CPU_EXCLUSIVE. */ if (old_prs != new_prs) { - if (is_prs_invalid(old_prs) && !is_cpu_exclusive(cs) && - (update_flag(CS_CPU_EXCLUSIVE, cs, 1) < 0)) - return PERR_NOTEXCL; - if (is_prs_invalid(new_prs) && is_cpu_exclusive(cs)) - update_flag(CS_CPU_EXCLUSIVE, cs, 0); + int err = update_partition_exclusive(cs, new_prs); + + if (err) + return err; } /* @@ -1524,15 +1571,16 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, update_tasks_cpumask(parent, tmp->addmask); /* - * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary. - * rebuild_sched_domains_locked() may be called. + * For partcmd_update without newmask, it is being called from + * cpuset_hotplug_workfn() where cpus_read_lock() wasn't taken. + * Update the load balance flag and scheduling domain if + * cpus_read_trylock() is successful. */ - if (old_prs != new_prs) { - if (old_prs == PRS_ISOLATED) - update_flag(CS_SCHED_LOAD_BALANCE, cs, 1); - else if (new_prs == PRS_ISOLATED) - update_flag(CS_SCHED_LOAD_BALANCE, cs, 0); + if ((cmd == partcmd_update) && !newmask && cpus_read_trylock()) { + update_partition_sd_lb(cs, old_prs); + cpus_read_unlock(); } + notify_partition_change(cs, old_prs); return 0; } @@ -1766,6 +1814,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, int retval; struct tmpmasks tmp; bool invalidate = false; + int old_prs = cs->partition_root_state; /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */ if (cs == &top_cpuset) @@ -1885,6 +1934,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, */ if (parent->child_ecpus_count) update_sibling_cpumasks(parent, cs, &tmp); + + /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains */ + update_partition_sd_lb(cs, old_prs); } return 0; } @@ -2261,7 +2313,6 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, static int update_prstate(struct cpuset *cs, int new_prs) { int err = PERR_NONE, old_prs = cs->partition_root_state; - bool sched_domain_rebuilt = false; struct cpuset *parent = parent_cs(cs); struct tmpmasks tmpmask; @@ -2280,45 +2331,28 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (alloc_cpumasks(NULL, &tmpmask)) return -ENOMEM; + err = update_partition_exclusive(cs, new_prs); + if (err) + goto out; + if (!old_prs) { /* - * Turning on partition root requires setting the - * CS_CPU_EXCLUSIVE bit implicitly as well and cpus_allowed - * cannot be empty. + * cpus_allowed cannot be empty. */ if (cpumask_empty(cs->cpus_allowed)) { err = PERR_CPUSEMPTY; goto out; } - err = update_flag(CS_CPU_EXCLUSIVE, cs, 1); - if (err) { - err = PERR_NOTEXCL; - goto out; - } - err = update_parent_subparts_cpumask(cs, partcmd_enable, NULL, &tmpmask); - if (err) { - update_flag(CS_CPU_EXCLUSIVE, cs, 0); + if (err) goto out; - } - - if (new_prs == PRS_ISOLATED) { - /* - * Disable the load balance flag should not return an - * error unless the system is running out of memory. 
- */ - update_flag(CS_SCHED_LOAD_BALANCE, cs, 0); - sched_domain_rebuilt = true; - } } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. */ - update_flag(CS_SCHED_LOAD_BALANCE, cs, (new_prs != PRS_ISOLATED)); - sched_domain_rebuilt = true; - goto out; /* Sched domain is rebuilt in update_flag() */ + goto out; } else { /* * Switching back to member is always allowed even if it @@ -2337,15 +2371,6 @@ static int update_prstate(struct cpuset *cs, int new_prs) compute_effective_cpumask(cs->effective_cpus, cs, parent); spin_unlock_irq(&callback_lock); } - - /* Turning off CS_CPU_EXCLUSIVE will not return error */ - update_flag(CS_CPU_EXCLUSIVE, cs, 0); - - if (!is_sched_load_balance(cs)) { - /* Make sure load balance is on */ - update_flag(CS_SCHED_LOAD_BALANCE, cs, 1); - sched_domain_rebuilt = true; - } } update_tasks_cpumask(parent, tmpmask.new_cpus); @@ -2353,18 +2378,21 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (parent->child_ecpus_count) update_sibling_cpumasks(parent, cs, &tmpmask); - if (!sched_domain_rebuilt) - rebuild_sched_domains_locked(); out: /* - * Make partition invalid if an error happen + * Make partition invalid & disable CS_CPU_EXCLUSIVE if an error + * happens. */ - if (err) + if (err) { new_prs = -new_prs; + update_partition_exclusive(cs, new_prs); + } + spin_lock_irq(&callback_lock); cs->partition_root_state = new_prs; WRITE_ONCE(cs->prs_err, err); spin_unlock_irq(&callback_lock); + /* * Update child cpusets, if present. * Force update if switching back to member. @@ -2372,6 +2400,9 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (!list_empty(&cs->css.children)) update_cpumasks_hier(cs, &tmpmask, !new_prs); + /* Update sched domains and load balance flag */ + update_partition_sd_lb(cs, old_prs); + notify_partition_change(cs, old_prs); free_cpumasks(NULL, &tmpmask); return 0; From patchwork Tue Jun 27 14:35:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 697059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81B01EB64DD for ; Tue, 27 Jun 2023 14:36:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231345AbjF0Og4 (ORCPT ); Tue, 27 Jun 2023 10:36:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38008 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230489AbjF0Ogz (ORCPT ); Tue, 27 Jun 2023 10:36:55 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CEA530EE for ; Tue, 27 Jun 2023 07:36:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1687876565; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=T1/GI0DN/l36YNRdQUEP56Qh0Q1HrifFn5e4yBEgTec=; b=g7NoqCvev+lxIChha9NYqvCm1GaY624/0YNi4nsNO1My3NoVbhzNQJFgF8KjpgIhY2RYI9 C7hzRa4APMgrDgHuTyRIQyI4Ssz2j7+a5BR7KFDmBYVR2/jsF1tmYrN73NsQKGKG4rOL9B hIjhnNFYuPOWIHpEPJIZ+XITjSEVnxA= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com 
[66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-492-Wp4Ew9xSP4CUljkjzPLvUg-1; Tue, 27 Jun 2023 10:36:00 -0400 X-MC-Unique: Wp4Ew9xSP4CUljkjzPLvUg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3B52D8E2D58; Tue, 27 Jun 2023 14:35:35 +0000 (UTC) Received: from llong.com (unknown [10.22.10.32]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6B80540C2063; Tue, 27 Jun 2023 14:35:34 +0000 (UTC) From: Waiman Long To: Tejun Heo , Zefan Li , Johannes Weiner , Jonathan Corbet , Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli , Valentin Schneider , Frederic Weisbecker , Mrunal Patel , Ryan Phillips , Brent Rowsell , Peter Hunt , Phil Auld , Waiman Long Subject: [PATCH v4 4/9] cgroup/cpuset: Allow suppression of sched domain rebuild in update_cpumasks_hier() Date: Tue, 27 Jun 2023 10:35:03 -0400 Message-Id: <20230627143508.1576882-5-longman@redhat.com> In-Reply-To: <20230627143508.1576882-1-longman@redhat.com> References: <20230627143508.1576882-1-longman@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org A single partition setup and tear-down operation can lead to multiple rebuild_sched_domains_locked() calls which is a waste of effort. This can partly be mitigated by adding a flag to suppress the rebuild_sched_domains_locked() call in update_cpumasks_hier(). Since a Boolean flag has already been passed as the 3rd argument to update_cpumasks_hier(), we can extend that to a full flag word. The sched domain rebuild suppression is now enabled in update_sibling_cpumasks() as all it callers will do the sched domain rebuild after its return later on anyway. Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 24 ++++++++++++++++-------- 1 file changed, 16 insertions(+), 8 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index b8ccc1be7bde..64f9e305b3ab 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1590,6 +1590,12 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, return 0; } +/* + * update_cpumasks_hier() flags + */ +#define HIER_CHECKALL 0x01 /* Check all cpusets with no skipping */ +#define HIER_NO_SD_REBUILD 0x02 /* Don't rebuild sched domains */ + /* * update_cpumasks_hier - Update effective cpumasks and tasks in the subtree * @cs: the cpuset to consider @@ -1604,7 +1610,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, * Called with cpuset_mutex held */ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, - bool force) + int flags) { struct cpuset *cp; struct cgroup_subsys_state *pos_css; @@ -1644,10 +1650,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, * Skip the whole subtree if * 1) the cpumask remains the same, * 2) has no partition root state, - * 3) force flag not set, and + * 3) HIER_CHECKALL flag not set, and * 4) for v2 load balance state same as its parent. 
*/ - if (!cp->partition_root_state && !force && + if (!cp->partition_root_state && !(flags & HIER_CHECKALL) && cpumask_equal(tmp->new_cpus, cp->effective_cpus) && (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) || (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) { @@ -1764,7 +1770,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, } rcu_read_unlock(); - if (need_rebuild_sched_domains) + if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD)) rebuild_sched_domains_locked(); } @@ -1788,7 +1794,9 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs, * to use the right effective_cpus value. * * The update_cpumasks_hier() function may sleep. So we have to - * release the RCU read lock before calling it. + * release the RCU read lock before calling it. HIER_NO_SD_REBUILD + * flag is used to suppress rebuild of sched domains as the callers + * will take care of that. */ rcu_read_lock(); cpuset_for_each_child(sibling, pos_css, parent) { @@ -1800,7 +1808,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs, continue; rcu_read_unlock(); - update_cpumasks_hier(sibling, tmp, false); + update_cpumasks_hier(sibling, tmp, HIER_NO_SD_REBUILD); rcu_read_lock(); css_put(&sibling->css); } @@ -1913,7 +1921,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, spin_unlock_irq(&callback_lock); /* effective_cpus will be updated here */ - update_cpumasks_hier(cs, &tmp, false); + update_cpumasks_hier(cs, &tmp, 0); if (cs->partition_root_state) { struct cpuset *parent = parent_cs(cs); @@ -2382,7 +2390,7 @@ static int update_prstate(struct cpuset *cs, int new_prs) * Force update if switching back to member. */ if (!list_empty(&cs->css.children)) - update_cpumasks_hier(cs, &tmpmask, !new_prs); + update_cpumasks_hier(cs, &tmpmask, !new_prs ? 
HIER_CHECKALL : 0);

 	/* Update sched domains and load balance flag */
 	update_partition_sd_lb(cs, old_prs);

From patchwork Tue Jun 27 14:35:04 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 697056
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v4 5/9] cgroup/cpuset: Add cpuset.cpus.exclusive for v2
Date: Tue, 27 Jun 2023 10:35:04 -0400
Message-Id: <20230627143508.1576882-6-longman@redhat.com>
In-Reply-To: <20230627143508.1576882-1-longman@redhat.com>
References: <20230627143508.1576882-1-longman@redhat.com>

The creation of a cpuset partition means dedicating a set of exclusive CPUs to be used by a particular partition only. These exclusive CPUs will not be used by any cpusets outside of that partition. To enable more flexibility in creating partitions, we need a way to distribute exclusive CPUs that can be used in new partitions.
Currently, we have a subparts_cpus cpumask in struct cpuset that tracks only the exclusive CPUs used by all the sub-partitions underneath a given cpuset. This patch reworks the way we do exclusive CPUs tracking. The subparts_cpus is now renamed to exclusive_cpus which tracks the exclusive CPUs allocated to a partition root including those that are further distributed down to sub-partitions underneath it. IOW, it also includes the exclusive CPUs used by the current partition root. The renamed exclusive_cpus is now exposed via a new read-only "cpuset.cpus.exclusive" control file. The new exclusive_cpus cpumask will be set to cpus_allowed when a cpuset becomes a partition root and cleared if it is not a valid partition root. In the next patch, we will enable write to this new control file and allow it to differ from cpus_allowed. However, it must remains a subset of cpus_allowed. A parent cpuset can distribute an exclusive CPU to at most one of its children only. Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 733 ++++++++++++++++++++++++----------------- 1 file changed, 428 insertions(+), 305 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 64f9e305b3ab..9f2ec8394736 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -78,7 +78,7 @@ enum prs_errcode { }; static const char * const perr_strings[] = { - [PERR_INVCPUS] = "Invalid cpu list in cpuset.cpus", + [PERR_INVCPUS] = "Invalid cpu list in cpuset.cpus.exclusive", [PERR_INVPARENT] = "Parent is an invalid partition root", [PERR_NOTPART] = "Parent is not a partition root", [PERR_NOTEXCL] = "Cpu list in cpuset.cpus not exclusive", @@ -121,14 +121,18 @@ struct cpuset { nodemask_t effective_mems; /* - * CPUs allocated to child sub-partitions (default hierarchy only) - * - CPUs granted by the parent = effective_cpus U subparts_cpus - * - effective_cpus and subparts_cpus are mutually exclusive. + * Exclusive CPUs dedicated to current cgroup (default hierarchy only) * - * effective_cpus contains only onlined CPUs, but subparts_cpus - * may have offlined ones. + * This exclusive CPUs must be a subset of cpus_allowed. A parent + * cgroup can only grant exclusive CPUs to one of its children. + * + * When the cgroup becomes a valid partition root, exclusive_cpus + * defaults to cpus_allowed if not set. The effective_cpus of a valid + * partition root comes solely from its exclusive_cpus and some of the + * exclusive_cpus may be distributed to sub-partitions below & hence + * excluded from its effective_cpus. */ - cpumask_var_t subparts_cpus; + cpumask_var_t exclusive_cpus; /* * This is old Memory Nodes tasks took on. 
@@ -156,8 +160,8 @@ struct cpuset { /* for custom sched domain */ int relax_domain_level; - /* number of CPUs in subparts_cpus */ - int nr_subparts_cpus; + /* number of valid sub-partitions */ + int nr_subparts; /* partition root state */ int partition_root_state; @@ -185,6 +189,11 @@ struct cpuset { struct cgroup_file partition_file; }; +/* + * Exclusive CPUs distributed out to sub-partitions of top_cpuset + */ +static cpumask_var_t subpartitions_cpus; + /* * Partition root states: * @@ -312,7 +321,7 @@ static inline int is_partition_invalid(const struct cpuset *cs) */ static inline void make_partition_invalid(struct cpuset *cs) { - if (is_partition_valid(cs)) + if (cs->partition_root_state > 0) cs->partition_root_state = -cs->partition_root_state; } @@ -469,7 +478,7 @@ static inline bool partition_is_populated(struct cpuset *cs, if (cs->css.cgroup->nr_populated_csets) return true; - if (!excluded_child && !cs->nr_subparts_cpus) + if (!excluded_child && !cs->nr_subparts) return cgroup_is_populated(cs->css.cgroup); rcu_read_lock(); @@ -601,7 +610,7 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp) if (cs) { pmask1 = &cs->cpus_allowed; pmask2 = &cs->effective_cpus; - pmask3 = &cs->subparts_cpus; + pmask3 = &cs->exclusive_cpus; } else { pmask1 = &tmp->new_cpus; pmask2 = &tmp->addmask; @@ -636,7 +645,7 @@ static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp) if (cs) { free_cpumask_var(cs->cpus_allowed); free_cpumask_var(cs->effective_cpus); - free_cpumask_var(cs->subparts_cpus); + free_cpumask_var(cs->exclusive_cpus); } if (tmp) { free_cpumask_var(tmp->new_cpus); @@ -664,6 +673,7 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs) cpumask_copy(trial->cpus_allowed, cs->cpus_allowed); cpumask_copy(trial->effective_cpus, cs->effective_cpus); + cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus); return trial; } @@ -677,6 +687,25 @@ static inline void free_cpuset(struct cpuset *cs) kfree(cs); } +/* + * cpu_exclusive_check() - check if two cpusets are exclusive + * + * Return 0 if exclusive, -EINVAL if not + */ +static inline bool cpu_exclusive_check(struct cpuset *cs1, struct cpuset *cs2) +{ + struct cpumask *cpus1, *cpus2; + + cpus1 = cpumask_empty(cs1->exclusive_cpus) + ? cs1->cpus_allowed : cs1->exclusive_cpus; + cpus2 = cpumask_empty(cs2->exclusive_cpus) + ? cs2->cpus_allowed : cs2->exclusive_cpus; + + if (cpumask_intersects(cpus1, cpus2)) + return -EINVAL; + return 0; +} + /* * validate_change_legacy() - Validate conditions specific to legacy (v1) * behavior. @@ -776,9 +805,10 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial) ret = -EINVAL; cpuset_for_each_child(c, css, par) { if ((is_cpu_exclusive(trial) || is_cpu_exclusive(c)) && - c != cur && - cpumask_intersects(trial->cpus_allowed, c->cpus_allowed)) - goto out; + c != cur) { + if (cpu_exclusive_check(trial, c)) + goto out; + } if ((is_mem_exclusive(trial) || is_mem_exclusive(c)) && c != cur && nodes_intersects(trial->mems_allowed, c->mems_allowed)) @@ -908,7 +938,7 @@ static int generate_sched_domains(cpumask_var_t **domains, csa = NULL; /* Special case for the 99% of systems with one, full, sched domain */ - if (root_load_balance && !top_cpuset.nr_subparts_cpus) { + if (root_load_balance && !top_cpuset.nr_subparts) { ndoms = 1; doms = alloc_sched_domains(ndoms); if (!doms) @@ -1159,7 +1189,7 @@ static void rebuild_sched_domains_locked(void) * should be the same as the active CPUs, so checking only top_cpuset * is enough to detect racing CPU offlines. 
*/ - if (!top_cpuset.nr_subparts_cpus && + if (cpumask_empty(subpartitions_cpus) && !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask)) return; @@ -1168,7 +1198,7 @@ static void rebuild_sched_domains_locked(void) * root should be only a subset of the active CPUs. Since a CPU in any * partition root could be offlined, all must be checked. */ - if (top_cpuset.nr_subparts_cpus) { + if (top_cpuset.nr_subparts) { rcu_read_lock(); cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) { if (!is_partition_valid(cs)) { @@ -1232,7 +1262,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) */ if ((task->flags & PF_KTHREAD) && kthread_is_per_cpu(task)) continue; - cpumask_andnot(new_cpus, possible_mask, cs->subparts_cpus); + cpumask_andnot(new_cpus, possible_mask, cs->exclusive_cpus); } else { cpumask_and(new_cpus, possible_mask, cs->effective_cpus); } @@ -1247,32 +1277,22 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) * @cs: the cpuset the need to recompute the new effective_cpus mask * @parent: the parent cpuset * - * If the parent has subpartition CPUs, include them in the list of - * allowable CPUs in computing the new effective_cpus mask. Since offlined - * CPUs are not removed from subparts_cpus, we have to use cpu_active_mask - * to mask those out. + * The result is valid only if the given cpuset isn't a partition root. */ static void compute_effective_cpumask(struct cpumask *new_cpus, struct cpuset *cs, struct cpuset *parent) { - if (parent->nr_subparts_cpus && is_partition_valid(cs)) { - cpumask_or(new_cpus, parent->effective_cpus, - parent->subparts_cpus); - cpumask_and(new_cpus, new_cpus, cs->cpus_allowed); - cpumask_and(new_cpus, new_cpus, cpu_active_mask); - } else { - cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus); - } + cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus); } /* - * Commands for update_parent_subparts_cpumask + * Commands for update_parent_effective_cpumask */ -enum subparts_cmd { - partcmd_enable, /* Enable partition root */ - partcmd_disable, /* Disable partition root */ - partcmd_update, /* Update parent's subparts_cpus */ - partcmd_invalidate, /* Make partition invalid */ +enum partition_cmd { + partcmd_enable, /* Enable partition root */ + partcmd_disable, /* Disable partition root */ + partcmd_update, /* Update parent's effective_cpus */ + partcmd_invalidate, /* Make partition invalid */ }; static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, @@ -1323,8 +1343,39 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs) rebuild_sched_domains_locked(); } +/* + * tasks_nocpu_error - Return true if tasks will have no effective_cpus + */ +static bool tasks_nocpu_error(struct cpuset *parent, struct cpuset *cs, + struct cpumask *xcpus) +{ + /* + * A populated partition (cs or parent) can't have empty effective_cpus + */ + return (cpumask_subset(parent->effective_cpus, xcpus) && + partition_is_populated(parent, cs)) || + (!cpumask_intersects(xcpus, cpu_active_mask) && + partition_is_populated(cs, NULL)); +} + +/* + * setup_exclusive_cpus - setup exclusive_cpus if not set yet + */ +static void setup_exclusive_cpus(struct cpuset *cs, struct cpuset *parent) +{ + if (!cpumask_empty(cs->exclusive_cpus)) + return; + + if (!parent) + parent = parent_cs(cs); + spin_lock_irq(&callback_lock); + cpumask_and(cs->exclusive_cpus, + cs->cpus_allowed, parent->exclusive_cpus); + spin_unlock_irq(&callback_lock); +} + /** - * update_parent_subparts_cpumask - update 
subparts_cpus mask of parent cpuset + * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset * @cs: The cpuset that requests change in partition root state * @cmd: Partition root state change command * @newmask: Optional new cpumask for partcmd_update @@ -1332,21 +1383,20 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs) * Return: 0 or a partition root state error code * * For partcmd_enable, the cpuset is being transformed from a non-partition - * root to a partition root. The cpus_allowed mask of the given cpuset will - * be put into parent's subparts_cpus and taken away from parent's + * root to a partition root. The exclusive_cpus (cpus_allowed if exclusive_cpus + * not set) mask of the given cpuset will be taken away from parent's * effective_cpus. The function will return 0 if all the CPUs listed in - * cpus_allowed can be granted or an error code will be returned. + * exclusive_cpus can be granted or an error code will be returned. * * For partcmd_disable, the cpuset is being transformed from a partition - * root back to a non-partition root. Any CPUs in cpus_allowed that are in - * parent's subparts_cpus will be taken away from that cpumask and put back - * into parent's effective_cpus. 0 will always be returned. + * root back to a non-partition root. Any CPUs in exclusive_cpus will be + * given back to parent's effective_cpus. 0 will always be returned. * * For partcmd_update, if the optional newmask is specified, the cpu list is - * to be changed from cpus_allowed to newmask. Otherwise, cpus_allowed is + * to be changed from exclusive_cpus to newmask. Otherwise, exclusive_cpus is * assumed to remain the same. The cpuset should either be a valid or invalid * partition root. The partition root state may change from valid to invalid - * or vice versa. An error code will only be returned if transitioning from + * or vice versa. An error code will be returned if transitioning from * invalid to valid violates the exclusivity rule. * * For partcmd_invalidate, the current partition will be made invalid. @@ -1361,18 +1411,47 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs) * check for error and so partition_root_state and prs_error will be updated * directly. */ -static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, - struct cpumask *newmask, - struct tmpmasks *tmp) +static int update_parent_effective_cpumask(struct cpuset *cs, int cmd, + struct cpumask *newmask, + struct tmpmasks *tmp) { struct cpuset *parent = parent_cs(cs); - int adding; /* Moving cpus from effective_cpus to subparts_cpus */ - int deleting; /* Moving cpus from subparts_cpus to effective_cpus */ + int adding; /* Adding cpus to parent's effective_cpus */ + int deleting; /* Deleting cpus from parent's effective_cpus */ int old_prs, new_prs; int part_error = PERR_NONE; /* Partition error? */ + int subparts_delta = 0; + struct cpumask *xcpus; /* cs exclusive_cpus */ + bool nocpu; lockdep_assert_held(&cpuset_mutex); + /* + * new_prs will only be changed for the partcmd_update and + * partcmd_invalidate commands. + */ + adding = deleting = false; + old_prs = new_prs = cs->partition_root_state; + xcpus = !cpumask_empty(cs->exclusive_cpus) + ? cs->exclusive_cpus : cs->cpus_allowed; + + if (cmd == partcmd_invalidate) { + if (is_prs_invalid(old_prs)) + return 0; + + /* + * Make the current partition invalid. 
+ */ + if (is_partition_valid(parent)) + adding = cpumask_and(tmp->addmask, + xcpus, parent->exclusive_cpus); + if (old_prs > 0) { + new_prs = -old_prs; + subparts_delta--; + } + goto write_error; + } + /* * The parent must be a partition root. * The new cpumask, if present, or the current cpus_allowed must @@ -1385,124 +1464,122 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, if (!newmask && cpumask_empty(cs->cpus_allowed)) return PERR_CPUSEMPTY; - /* - * new_prs will only be changed for the partcmd_update and - * partcmd_invalidate commands. - */ - adding = deleting = false; - old_prs = new_prs = cs->partition_root_state; + nocpu = tasks_nocpu_error(parent, cs, xcpus); + if (cmd == partcmd_enable) { /* * Enabling partition root is not allowed if cpus_allowed * doesn't overlap parent's cpus_allowed. */ - if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed)) + if (!cpumask_intersects(xcpus, parent->exclusive_cpus)) return PERR_INVCPUS; /* * A parent can be left with no CPU as long as there is no * task directly associated with the parent partition. */ - if (cpumask_subset(parent->effective_cpus, cs->cpus_allowed) && - partition_is_populated(parent, cs)) + if (nocpu) return PERR_NOCPUS; - cpumask_copy(tmp->addmask, cs->cpus_allowed); - adding = true; + cpumask_copy(tmp->delmask, xcpus); + deleting = true; + subparts_delta++; } else if (cmd == partcmd_disable) { /* - * Need to remove cpus from parent's subparts_cpus for valid - * partition root. + n* May need to add cpus to parent's effective_cpus for + * valid partition root. */ - deleting = !is_prs_invalid(old_prs) && - cpumask_and(tmp->delmask, cs->cpus_allowed, - parent->subparts_cpus); - } else if (cmd == partcmd_invalidate) { - if (is_prs_invalid(old_prs)) - return 0; - + adding = !is_prs_invalid(old_prs) && + cpumask_and(tmp->addmask, xcpus, parent->exclusive_cpus); + if (adding) + subparts_delta--; + } else if (newmask) { /* - * Make the current partition invalid. It is assumed that - * invalidation is caused by violating cpu exclusivity rule. + * Empty cpumask is not allowed */ - deleting = cpumask_and(tmp->delmask, cs->cpus_allowed, - parent->subparts_cpus); - if (old_prs > 0) { - new_prs = -old_prs; - part_error = PERR_NOTEXCL; + if (cpumask_empty(newmask)) { + part_error = PERR_CPUSEMPTY; + goto write_error; } - } else if (newmask) { + /* * partcmd_update with newmask: * - * Compute add/delete mask to/from subparts_cpus + * Compute add/delete mask to/from effective_cpus * - * delmask = cpus_allowed & ~newmask & parent->subparts_cpus - * addmask = newmask & parent->cpus_allowed - * & ~parent->subparts_cpus + * addmask = exclusive_cpus & ~newmask & parent->exclusive_cpus + * delmask = newmask & ~cs->exclusive_cpus + * & parent->exclusive_cpus */ - cpumask_andnot(tmp->delmask, cs->cpus_allowed, newmask); - deleting = cpumask_and(tmp->delmask, tmp->delmask, - parent->subparts_cpus); + cpumask_andnot(tmp->addmask, xcpus, newmask); + adding = cpumask_and(tmp->addmask, tmp->addmask, + parent->exclusive_cpus); - cpumask_and(tmp->addmask, newmask, parent->cpus_allowed); - adding = cpumask_andnot(tmp->addmask, tmp->addmask, - parent->subparts_cpus); - /* - * Empty cpumask is not allowed - */ - if (cpumask_empty(newmask)) { - part_error = PERR_CPUSEMPTY; + cpumask_andnot(tmp->delmask, newmask, xcpus); + deleting = cpumask_and(tmp->delmask, tmp->delmask, + parent->exclusive_cpus); /* * Make partition invalid if parent's effective_cpus could * become empty and there are tasks in the parent. 
*/ - } else if (adding && - cpumask_subset(parent->effective_cpus, tmp->addmask) && - !cpumask_intersects(tmp->delmask, cpu_active_mask) && - partition_is_populated(parent, cs)) { + if (nocpu && (!adding || + !cpumask_intersects(tmp->addmask, cpu_active_mask))) { part_error = PERR_NOCPUS; - adding = false; - deleting = cpumask_and(tmp->delmask, cs->cpus_allowed, - parent->subparts_cpus); + deleting = false; + adding = cpumask_and(tmp->addmask, + xcpus, parent->exclusive_cpus); } } else { /* - * partcmd_update w/o newmask: + * partcmd_update w/o newmask + * + * delmask = exclusive_cpus & parent->effective_cpus + * + * This can be called from: + * 1) update_cpumasks_hier() + * 2) cpuset_hotplug_update_tasks() * - * delmask = cpus_allowed & parent->subparts_cpus - * addmask = cpus_allowed & parent->cpus_allowed - * & ~parent->subparts_cpus + * Check to see if it can be transitioned from valid to + * invalid partition or vice versa. * - * This gets invoked either due to a hotplug event or from - * update_cpumasks_hier(). This can cause the state of a - * partition root to transition from valid to invalid or vice - * versa. So we still need to compute the addmask and delmask. - - * A partition error happens when: - * 1) Cpuset is valid partition, but parent does not distribute - * out any CPUs. - * 2) Parent has tasks and all its effective CPUs will have - * to be distributed out. + * A partition error happens when parent has tasks and all + * its effective CPUs will have to be distributed out. */ - cpumask_and(tmp->addmask, cs->cpus_allowed, - parent->cpus_allowed); - adding = cpumask_andnot(tmp->addmask, tmp->addmask, - parent->subparts_cpus); - - if ((is_partition_valid(cs) && !parent->nr_subparts_cpus) || - (adding && - cpumask_subset(parent->effective_cpus, tmp->addmask) && - partition_is_populated(parent, cs))) { + WARN_ON_ONCE(!is_partition_valid(parent)); + if (nocpu) { part_error = PERR_NOCPUS; - adding = false; - } + if (is_partition_valid(cs)) + adding = cpumask_and(tmp->addmask, + xcpus, parent->exclusive_cpus); + } else if (is_partition_invalid(cs) && + cpumask_subset(xcpus, parent->exclusive_cpus)) { + struct cgroup_subsys_state *css; + struct cpuset *child; + bool exclusive = true; - if (part_error && is_partition_valid(cs) && - parent->nr_subparts_cpus) - deleting = cpumask_and(tmp->delmask, cs->cpus_allowed, - parent->subparts_cpus); + /* + * Convert invalid partition to valid has to + * pass the cpu exclusivity test. + */ + rcu_read_lock(); + cpuset_for_each_child(child, css, parent) { + if (child == cs) + continue; + if (cpu_exclusive_check(cs, child)) { + exclusive = false; + break; + } + } + rcu_read_unlock(); + if (exclusive) + deleting = cpumask_and(tmp->delmask, + xcpus, parent->effective_cpus); + else + part_error = PERR_NOTEXCL; + } } + +write_error: if (part_error) WRITE_ONCE(cs->prs_err, part_error); @@ -1514,13 +1591,17 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, switch (cs->partition_root_state) { case PRS_ROOT: case PRS_ISOLATED: - if (part_error) + if (part_error) { new_prs = -old_prs; + subparts_delta--; + } break; case PRS_INVALID_ROOT: case PRS_INVALID_ISOLATED: - if (!part_error) + if (!part_error) { new_prs = -old_prs; + subparts_delta++; + } break; } } @@ -1540,32 +1621,43 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, } /* - * Change the parent's subparts_cpus. + * Change the parent's effective_cpus & exclusive_cpus (top cpuset + * only). 
+ * * Newly added CPUs will be removed from effective_cpus and * newly deleted ones will be added back to effective_cpus. */ spin_lock_irq(&callback_lock); if (adding) { - cpumask_or(parent->subparts_cpus, - parent->subparts_cpus, tmp->addmask); - cpumask_andnot(parent->effective_cpus, - parent->effective_cpus, tmp->addmask); - } - if (deleting) { - cpumask_andnot(parent->subparts_cpus, - parent->subparts_cpus, tmp->delmask); + if (parent == &top_cpuset) + cpumask_andnot(subpartitions_cpus, + subpartitions_cpus, tmp->addmask); /* - * Some of the CPUs in subparts_cpus might have been offlined. + * Some of the CPUs in exclusive_cpus might have been offlined. */ - cpumask_and(tmp->delmask, tmp->delmask, cpu_active_mask); cpumask_or(parent->effective_cpus, - parent->effective_cpus, tmp->delmask); + parent->effective_cpus, tmp->addmask); + cpumask_and(parent->effective_cpus, + parent->effective_cpus, cpu_active_mask); + } + if (deleting) { + if (parent == &top_cpuset) + cpumask_or(subpartitions_cpus, + subpartitions_cpus, tmp->delmask); + cpumask_andnot(parent->effective_cpus, + parent->effective_cpus, tmp->delmask); } - parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus); + if (is_partition_valid(parent)) { + parent->nr_subparts += subparts_delta; + WARN_ON_ONCE(parent->nr_subparts < 0); + } - if (old_prs != new_prs) + if (old_prs != new_prs) { cs->partition_root_state = new_prs; + if (new_prs <= 0) + cs->nr_subparts = 0; + } spin_unlock_irq(&callback_lock); @@ -1590,6 +1682,71 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, return 0; } +/** + * compute_partition_effective_cpumask - compute effective_cpus for partition + * @cs: partition root cpuset + * @new_ecpus: previously computed effective_cpus to be updated + * + * Compute the effective_cpus of a partition root by scanning exclusive_cpus + * of child partition roots and exclusing their exclusive_cpus. + * + * This has the side effect of invalidating valid child partition roots, + * if necessary. Since it is called from either cpuset_hotplug_update_tasks() + * or update_cpumasks_hier() where parent and children are modified + * successively, we don't need to call update_parent_effective_cpumask() + * and the child's effective_cpus will be updated in later iterations. + * + * Note that rcu_read_lock() is assumed to be held. 
+ */ +static void compute_partition_effective_cpumask(struct cpuset *cs, + struct cpumask *new_ecpus) +{ + struct cgroup_subsys_state *css; + struct cpuset *child; + bool populated = partition_is_populated(cs, NULL); + + /* + * Check child partition roots to see if they should be + * invalidated when + * 1) child exclusive_cpus not a subset of new + * excluisve_cpus + * 2) All the effective_cpus will be used up and cp + * has tasks + */ + cpumask_and(new_ecpus, cs->exclusive_cpus, cpu_active_mask); + rcu_read_lock(); + cpuset_for_each_child(child, css, cs) { + if (!is_partition_valid(child)) + continue; + + child->prs_err = 0; + if (!cpumask_subset(child->exclusive_cpus, + cs->exclusive_cpus)) + child->prs_err = PERR_INVCPUS; + else if (populated && + cpumask_subset(new_ecpus, child->exclusive_cpus)) + child->prs_err = PERR_NOCPUS; + + if (child->prs_err) { + int old_prs = child->partition_root_state; + + /* + * Invalidate child partition + */ + spin_lock_irq(&callback_lock); + make_partition_invalid(child); + cs->nr_subparts--; + child->nr_subparts = 0; + spin_unlock_irq(&callback_lock); + notify_partition_change(child, old_prs); + continue; + } + cpumask_andnot(new_ecpus, new_ecpus, + child->exclusive_cpus); + } + rcu_read_unlock(); +} + /* * update_cpumasks_hier() flags */ @@ -1624,6 +1781,19 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, compute_effective_cpumask(tmp->new_cpus, cp, parent); + if (is_partition_valid(parent) && is_partition_valid(cp)) + compute_partition_effective_cpumask(cp, tmp->new_cpus); + + /* + * A partition with no effective_cpus is allowed as long as + * there is no task associated with it. Call + * update_parent_effective_cpumask() to check it. + */ + if (is_partition_valid(cp) && cpumask_empty(tmp->new_cpus)) { + update_parent = true; + goto update_parent_effective; + } + /* * If it becomes empty, inherit the effective mask of the * parent, which is guaranteed to have some CPUs unless @@ -1631,10 +1801,6 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, * out all its CPUs. */ if (is_in_v2_mode() && cpumask_empty(tmp->new_cpus)) { - if (is_partition_valid(cp) && - cpumask_equal(cp->cpus_allowed, cp->subparts_cpus)) - goto update_parent_subparts; - cpumask_copy(tmp->new_cpus, parent->effective_cpus); if (!cp->use_parent_ecpus) { cp->use_parent_ecpus = true; @@ -1661,12 +1827,12 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, continue; } -update_parent_subparts: +update_parent_effective: /* - * update_parent_subparts_cpumask() should have been called + * update_parent_effective_cpumask() should have been called * for cs already in update_cpumask(). We should also call * update_tasks_cpumask() again for tasks in the parent - * cpuset if the parent's subparts_cpus changes. + * cpuset if the parent's effective_cpus changes. */ old_prs = new_prs = cp->partition_root_state; if ((cp != cs) && old_prs) { @@ -1696,8 +1862,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, rcu_read_unlock(); if (update_parent) { - update_parent_subparts_cpumask(cp, partcmd_update, NULL, - tmp); + update_parent_effective_cpumask(cp, partcmd_update, NULL, tmp); /* * The cpuset partition_root_state may become * invalid. Capture it. @@ -1706,30 +1871,18 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, } spin_lock_irq(&callback_lock); - - if (cp->nr_subparts_cpus && !is_partition_valid(cp)) { - /* - * Put all active subparts_cpus back to effective_cpus. 
- */ - cpumask_or(tmp->new_cpus, tmp->new_cpus, - cp->subparts_cpus); - cpumask_and(tmp->new_cpus, tmp->new_cpus, - cpu_active_mask); - cp->nr_subparts_cpus = 0; - cpumask_clear(cp->subparts_cpus); - } - cpumask_copy(cp->effective_cpus, tmp->new_cpus); - if (cp->nr_subparts_cpus) { - /* - * Make sure that effective_cpus & subparts_cpus - * are mutually exclusive. - */ - cpumask_andnot(cp->effective_cpus, cp->effective_cpus, - cp->subparts_cpus); - } - cp->partition_root_state = new_prs; + if ((new_prs > 0) && cpumask_empty(cp->exclusive_cpus)) + cpumask_and(cp->exclusive_cpus, + cp->cpus_allowed, parent->exclusive_cpus); + if (new_prs < 0) { + /* Reset partition data */ + cp->nr_subparts = 0; + cpumask_clear(cp->exclusive_cpus); + if (is_cpu_exclusive(cp)) + clear_bit(CS_CPU_EXCLUSIVE, &cp->flags); + } spin_unlock_irq(&callback_lock); notify_partition_change(cp, old_prs); @@ -1826,6 +1979,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, { int retval; struct tmpmasks tmp; + struct cpuset *parent = parent_cs(cs); bool invalidate = false; int old_prs = cs->partition_root_state; @@ -1841,6 +1995,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, */ if (!*buf) { cpumask_clear(trialcs->cpus_allowed); + cpumask_clear(trialcs->exclusive_cpus); } else { retval = cpulist_parse(buf, trialcs->cpus_allowed); if (retval < 0) @@ -1849,6 +2004,13 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, if (!cpumask_subset(trialcs->cpus_allowed, top_cpuset.cpus_allowed)) return -EINVAL; + + /* + * When exclusive_cpus is set, make sure it is a subset of + * cpus_allowed and parent's exclusive_cpus. + */ + cpumask_and(trialcs->exclusive_cpus, + parent->exclusive_cpus, trialcs->cpus_allowed); } /* Nothing to do if the cpus didn't change */ @@ -1858,11 +2020,21 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, if (alloc_cpumasks(NULL, &tmp)) return -ENOMEM; + if (is_partition_valid(cs)) { + if (cpumask_empty(trialcs->exclusive_cpus)) { + invalidate = true; + cs->prs_err = PERR_INVCPUS; + } else if (tasks_nocpu_error(parent, cs, trialcs->exclusive_cpus)) { + invalidate = true; + cs->prs_err = PERR_NOCPUS; + } + } + retval = validate_change(cs, trialcs); if ((retval == -EINVAL) && cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) { - struct cpuset *cp, *parent; struct cgroup_subsys_state *css; + struct cpuset *cp; /* * The -EINVAL error code indicates that partition sibling @@ -1873,69 +2045,44 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, */ invalidate = true; rcu_read_lock(); - parent = parent_cs(cs); cpuset_for_each_child(cp, css, parent) if (is_partition_valid(cp) && - cpumask_intersects(trialcs->cpus_allowed, cp->cpus_allowed)) { + cpumask_intersects(trialcs->exclusive_cpus, cp->exclusive_cpus)) { rcu_read_unlock(); - update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp); + update_parent_effective_cpumask(cp, partcmd_invalidate, NULL, &tmp); rcu_read_lock(); } rcu_read_unlock(); retval = 0; } + if (retval < 0) goto out_free; if (cs->partition_root_state) { if (invalidate) - update_parent_subparts_cpumask(cs, partcmd_invalidate, - NULL, &tmp); + update_parent_effective_cpumask(cs, partcmd_invalidate, + NULL, &tmp); else - update_parent_subparts_cpumask(cs, partcmd_update, - trialcs->cpus_allowed, &tmp); + update_parent_effective_cpumask(cs, partcmd_update, + trialcs->exclusive_cpus, &tmp); } - compute_effective_cpumask(trialcs->effective_cpus, trialcs, - parent_cs(cs)); 
spin_lock_irq(&callback_lock); cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed); + if (!is_partition_valid(cs)) + cpumask_clear(cs->exclusive_cpus); + else + cpumask_copy(cs->exclusive_cpus, trialcs->exclusive_cpus); - /* - * Make sure that subparts_cpus, if not empty, is a subset of - * cpus_allowed. Clear subparts_cpus if partition not valid or - * empty effective cpus with tasks. - */ - if (cs->nr_subparts_cpus) { - if (!is_partition_valid(cs) || - (cpumask_subset(trialcs->effective_cpus, cs->subparts_cpus) && - partition_is_populated(cs, NULL))) { - cs->nr_subparts_cpus = 0; - cpumask_clear(cs->subparts_cpus); - } else { - cpumask_and(cs->subparts_cpus, cs->subparts_cpus, - cs->cpus_allowed); - cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus); - } - } spin_unlock_irq(&callback_lock); /* effective_cpus will be updated here */ update_cpumasks_hier(cs, &tmp, 0); - if (cs->partition_root_state) { - struct cpuset *parent = parent_cs(cs); - - /* - * For partition root, update the cpumasks of sibling - * cpusets if they use parent's effective_cpus. - */ - if (parent->child_ecpus_count) - update_sibling_cpumasks(parent, cs, &tmp); - - /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains */ + /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */ + if (cs->partition_root_state) update_partition_sd_lb(cs, old_prs); - } out_free: free_cpumasks(NULL, &tmp); return 0; @@ -2313,7 +2460,6 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, static int update_prstate(struct cpuset *cs, int new_prs) { int err = PERR_NONE, old_prs = cs->partition_root_state; - struct cpuset *parent = parent_cs(cs); struct tmpmasks tmpmask; if (old_prs == new_prs) @@ -2331,6 +2477,13 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (alloc_cpumasks(NULL, &tmpmask)) return -ENOMEM; + /* + * Setup exclusive_cpus if not set yet, it will be cleared later + * if partition becomes invalid. + */ + if (new_prs > 0) + setup_exclusive_cpus(cs, NULL); + err = update_partition_exclusive(cs, new_prs); if (err) goto out; @@ -2344,8 +2497,8 @@ static int update_prstate(struct cpuset *cs, int new_prs) goto out; } - err = update_parent_subparts_cpumask(cs, partcmd_enable, - NULL, &tmpmask); + err = update_parent_effective_cpumask(cs, partcmd_enable, + NULL, &tmpmask); } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. @@ -2356,19 +2509,13 @@ static int update_prstate(struct cpuset *cs, int new_prs) * Switching back to member is always allowed even if it * disables child partitions. */ - update_parent_subparts_cpumask(cs, partcmd_disable, NULL, - &tmpmask); + update_parent_effective_cpumask(cs, partcmd_disable, NULL, + &tmpmask); /* - * If there are child partitions, they will all become invalid. + * Invalidation of child partitions will be done in + * update_cpumasks_hier(). */ - if (unlikely(cs->nr_subparts_cpus)) { - spin_lock_irq(&callback_lock); - cs->nr_subparts_cpus = 0; - cpumask_clear(cs->subparts_cpus); - compute_effective_cpumask(cs->effective_cpus, cs, parent); - spin_unlock_irq(&callback_lock); - } } out: /* @@ -2383,14 +2530,12 @@ static int update_prstate(struct cpuset *cs, int new_prs) spin_lock_irq(&callback_lock); cs->partition_root_state = new_prs; WRITE_ONCE(cs->prs_err, err); + if (!is_partition_valid(cs)) + cpumask_clear(cs->exclusive_cpus); spin_unlock_irq(&callback_lock); - /* - * Update child cpusets, if present. - * Force update if switching back to member. 
- */ - if (!list_empty(&cs->css.children)) - update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0); + /* Force update if switching back to member */ + update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0); /* Update sched domains and load balance flag */ update_partition_sd_lb(cs, old_prs); @@ -2626,7 +2771,7 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task) guarantee_online_cpus(task, cpus_attach); else cpumask_andnot(cpus_attach, task_cpu_possible_mask(task), - cs->subparts_cpus); + subpartitions_cpus); /* * can_attach beforehand should guarantee that this doesn't * fail. TODO: have a better way to handle failure here @@ -2729,6 +2874,7 @@ typedef enum { FILE_EFFECTIVE_CPULIST, FILE_EFFECTIVE_MEMLIST, FILE_SUBPARTS_CPULIST, + FILE_EXCLUSIVE_CPULIST, FILE_CPU_EXCLUSIVE, FILE_MEM_EXCLUSIVE, FILE_MEM_HARDWALL, @@ -2913,8 +3059,11 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v) case FILE_EFFECTIVE_MEMLIST: seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems)); break; + case FILE_EXCLUSIVE_CPULIST: + seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->exclusive_cpus)); + break; case FILE_SUBPARTS_CPULIST: - seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->subparts_cpus)); + seq_printf(sf, "%*pbl\n", cpumask_pr_args(subpartitions_cpus)); break; default: ret = -EINVAL; @@ -3186,11 +3335,18 @@ static struct cftype dfl_files[] = { .file_offset = offsetof(struct cpuset, partition_file), }, + { + .name = "cpus.exclusive", + .seq_show = cpuset_common_seq_show, + .private = FILE_EXCLUSIVE_CPULIST, + .flags = CFTYPE_NOT_ON_ROOT, + }, + { .name = "cpus.subpartitions", .seq_show = cpuset_common_seq_show, .private = FILE_SUBPARTS_CPULIST, - .flags = CFTYPE_DEBUG, + .flags = CFTYPE_ONLY_ON_ROOT | CFTYPE_DEBUG, }, { } /* terminate */ @@ -3364,6 +3520,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css) if (is_in_v2_mode()) { cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask); + cpumask_copy(top_cpuset.exclusive_cpus, cpu_possible_mask); top_cpuset.mems_allowed = node_possible_map; } else { cpumask_copy(top_cpuset.cpus_allowed, @@ -3502,11 +3659,13 @@ int __init cpuset_init(void) { BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL)); BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL)); - BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL)); + BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL)); + BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL)); cpumask_setall(top_cpuset.cpus_allowed); nodes_setall(top_cpuset.mems_allowed); cpumask_setall(top_cpuset.effective_cpus); + cpumask_setall(top_cpuset.exclusive_cpus); nodes_setall(top_cpuset.effective_mems); fmeter_init(&top_cpuset.fmeter); @@ -3647,30 +3806,15 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) compute_effective_cpumask(&new_cpus, cs, parent); nodes_and(new_mems, cs->mems_allowed, parent->effective_mems); - if (cs->nr_subparts_cpus) - /* - * Make sure that CPUs allocated to child partitions - * do not show up in effective_cpus. - */ - cpumask_andnot(&new_cpus, &new_cpus, cs->subparts_cpus); - if (!tmp || !cs->partition_root_state) goto update_tasks; /* - * In the unlikely event that a partition root has empty - * effective_cpus with tasks, we will have to invalidate child - * partitions, if present, by setting nr_subparts_cpus to 0 to - * reclaim their cpus. 
+ * Compute effective_cpus for valid partition root, may invalidate + * child partition roots if necessary. */ - if (cs->nr_subparts_cpus && is_partition_valid(cs) && - cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)) { - spin_lock_irq(&callback_lock); - cs->nr_subparts_cpus = 0; - cpumask_clear(cs->subparts_cpus); - spin_unlock_irq(&callback_lock); - compute_effective_cpumask(&new_cpus, cs, parent); - } + if (is_partition_valid(cs) && is_partition_valid(parent)) + compute_partition_effective_cpumask(cs, &new_cpus); /* * Force the partition to become invalid if either one of @@ -3679,44 +3823,23 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) * 2) parent is invalid or doesn't grant any cpus to child * partitions. */ - if (is_partition_valid(cs) && (!parent->nr_subparts_cpus || - (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) { - int old_prs, parent_prs; - - update_parent_subparts_cpumask(cs, partcmd_disable, NULL, tmp); - if (cs->nr_subparts_cpus) { - spin_lock_irq(&callback_lock); - cs->nr_subparts_cpus = 0; - cpumask_clear(cs->subparts_cpus); - spin_unlock_irq(&callback_lock); - compute_effective_cpumask(&new_cpus, cs, parent); - } - - old_prs = cs->partition_root_state; - parent_prs = parent->partition_root_state; - if (is_partition_valid(cs)) { - spin_lock_irq(&callback_lock); - make_partition_invalid(cs); - spin_unlock_irq(&callback_lock); - if (is_prs_invalid(parent_prs)) - WRITE_ONCE(cs->prs_err, PERR_INVPARENT); - else if (!parent_prs) - WRITE_ONCE(cs->prs_err, PERR_NOTPART); - else - WRITE_ONCE(cs->prs_err, PERR_HOTPLUG); - notify_partition_change(cs, old_prs); - } + if (is_partition_valid(cs) && (!is_partition_valid(parent) || + tasks_nocpu_error(parent, cs, &new_cpus))) { + update_parent_effective_cpumask(cs, partcmd_invalidate, NULL, tmp); + compute_effective_cpumask(&new_cpus, cs, parent); cpuset_force_rebuild(); } - /* * On the other hand, an invalid partition root may be transitioned * back to a regular one. */ else if (is_partition_valid(parent) && is_partition_invalid(cs)) { - update_parent_subparts_cpumask(cs, partcmd_update, NULL, tmp); - if (is_partition_valid(cs)) + update_parent_effective_cpumask(cs, partcmd_update, NULL, tmp); + if (is_partition_valid(cs)) { + setup_exclusive_cpus(cs, parent); + compute_partition_effective_cpumask(cs, &new_cpus); cpuset_force_rebuild(); + } } update_tasks: @@ -3773,21 +3896,22 @@ static void cpuset_hotplug_workfn(struct work_struct *work) new_mems = node_states[N_MEMORY]; /* - * If subparts_cpus is populated, it is likely that the check below - * will produce a false positive on cpus_updated when the cpu list - * isn't changed. It is extra work, but it is better to be safe. + * If subpartitions_cpus is populated, it is likely that the check + * below will produce a false positive on cpus_updated when the cpu + * list isn't changed. It is extra work, but it is better to be safe. */ - cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus); + cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus) || + !cpumask_empty(subpartitions_cpus); mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems); /* - * In the rare case that hotplug removes all the cpus in subparts_cpus, - * we assumed that cpus are updated. + * In the rare case that hotplug removes all the cpus in + * subpartitions_cpus, we assumed that cpus are updated. 
*/ - if (!cpus_updated && top_cpuset.nr_subparts_cpus) + if (!cpus_updated && top_cpuset.nr_subparts) cpus_updated = true; - /* synchronize cpus_allowed to cpu_active_mask */ + /* For v1, synchronize cpus_allowed to cpu_active_mask */ if (cpus_updated) { spin_lock_irq(&callback_lock); if (!on_dfl) @@ -3795,17 +3919,16 @@ static void cpuset_hotplug_workfn(struct work_struct *work) /* * Make sure that CPUs allocated to child partitions * do not show up in effective_cpus. If no CPU is left, - * we clear the subparts_cpus & let the child partitions + * we clear the subpartitions_cpus & let the child partitions * fight for the CPUs again. */ - if (top_cpuset.nr_subparts_cpus) { - if (cpumask_subset(&new_cpus, - top_cpuset.subparts_cpus)) { - top_cpuset.nr_subparts_cpus = 0; - cpumask_clear(top_cpuset.subparts_cpus); + if (!cpumask_empty(subpartitions_cpus)) { + if (cpumask_subset(&new_cpus, subpartitions_cpus)) { + top_cpuset.nr_subparts = 0; + cpumask_clear(subpartitions_cpus); } else { cpumask_andnot(&new_cpus, &new_cpus, - top_cpuset.subparts_cpus); + subpartitions_cpus); } } cpumask_copy(top_cpuset.effective_cpus, &new_cpus); @@ -3937,7 +4060,7 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) * We first exclude cpus allocated to partitions. If there is no * allowable online cpu left, we fall back to all possible cpus. */ - cpumask_andnot(pmask, possible_mask, top_cpuset.subparts_cpus); + cpumask_andnot(pmask, possible_mask, subpartitions_cpus); if (!cpumask_intersects(pmask, cpu_online_mask)) cpumask_copy(pmask, possible_mask); } From patchwork Tue Jun 27 14:35:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 697058 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4B21EB64D9 for ; Tue, 27 Jun 2023 14:37:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231168AbjF0Og7 (ORCPT ); Tue, 27 Jun 2023 10:36:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231481AbjF0Og6 (ORCPT ); Tue, 27 Jun 2023 10:36:58 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A33330F4 for ; Tue, 27 Jun 2023 07:36:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1687876566; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=UX2Fp+D0urB/VW9aCOtFzkSRR1R8cgucSvMdsOv+zvU=; b=JBHOwB5HFKV4O3YystdkivUqgNXao0Gjwt/zACi05jLNYO14qn3sX7BTq+FRWPOjWHCsQ1 +jYMfcWPCLNgOLIeihdvFQv6Flohggxa3YknwzW/WqvbzvIS+2WpGP8Qp8mOx+F0dVHaFF QldjMiCjSLKerx7vXobfI4K+B0/3HJA= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-151-k73gLARBNf-DecFJV5ZQHA-1; Tue, 27 Jun 2023 10:36:02 -0400 X-MC-Unique: k73gLARBNf-DecFJV5ZQHA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using 
TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0421D3C11CE4; Tue, 27 Jun 2023 14:35:37 +0000 (UTC) Received: from llong.com (unknown [10.22.10.32]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3BDAF40C2063; Tue, 27 Jun 2023 14:35:36 +0000 (UTC) From: Waiman Long To: Tejun Heo , Zefan Li , Johannes Weiner , Jonathan Corbet , Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli , Valentin Schneider , Frederic Weisbecker , Mrunal Patel , Ryan Phillips , Brent Rowsell , Peter Hunt , Phil Auld , Waiman Long Subject: [PATCH v4 6/9] cgroup/cpuset: Introduce remote partition Date: Tue, 27 Jun 2023 10:35:05 -0400 Message-Id: <20230627143508.1576882-7-longman@redhat.com> In-Reply-To: <20230627143508.1576882-1-longman@redhat.com> References: <20230627143508.1576882-1-longman@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org One can use "cpuset.cpus.partition" to create multiple scheduling domains or to produce a set of isolated CPUs where load balancing is disabled. The former use case is less common, but the latter is used frequently, especially for Telco workloads such as DPDK. The existing "isolated" partition can produce isolated CPUs if the applications have full control of the system. However, in a containerized environment where all the apps run in containers, it is hard to distribute isolated CPUs from the root down, given the unified hierarchy nature of cgroup v2: the container running on isolated CPUs can be several layers below the root, and the current partition feature requires that all the ancestors of a leaf partition root be partition roots themselves, which is hard to configure. This patch introduces a new type of partition called a remote partition. A remote partition is a partition whose parent is not a partition root itself; its CPUs are acquired directly from the available CPUs in the top cpuset through a hierarchical distribution of exclusive_cpus down from the root. By contrast, the existing partitions, whose parents have to be valid partition roots, are referred to as local partitions because they have to be clustered around the cgroup root. Child local partitions can be created under a remote partition, but a remote partition cannot be created under a local partition. We may relax this limitation in the future if there are use cases for such a configuration.
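For illustration only (not part of the patch): a minimal sketch of how a remote partition could be set up through the new "cpuset.cpus.exclusive" file. It assumes a cgroup v2 mount at /sys/fs/cgroup on an 8-CPU system, a root shell, and no sibling cgroup already claiming CPUs 2-3; the "containers/pod1" names are made up.

  # Distribute CPUs 2-3 down the hierarchy as exclusive CPUs.  None of the
  # intermediate cgroups needs to be a partition root.
  cd /sys/fs/cgroup
  echo +cpuset > cgroup.subtree_control
  mkdir containers
  echo +cpuset > containers/cgroup.subtree_control
  mkdir containers/pod1

  echo 2-3 > containers/cpuset.cpus
  echo 2-3 > containers/cpuset.cpus.exclusive        # hand down, but stay a member
  echo 2-3 > containers/pod1/cpuset.cpus
  echo 2-3 > containers/pod1/cpuset.cpus.exclusive   # subset of parent's exclusive CPUs

  # The parent is not a partition root, so this becomes a remote partition
  # taking CPUs 2-3 directly from the top cpuset.
  echo isolated > containers/pod1/cpuset.cpus.partition
  cat containers/pod1/cpuset.cpus.partition          # expected: isolated

As with local partitions, if one of the constraints is not met the write itself still succeeds, but the cgroup is reported as an invalid partition with the reason shown when reading "cpuset.cpus.partition".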
Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 429 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 397 insertions(+), 32 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 9f2ec8394736..56aa7b4f213c 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -166,6 +166,9 @@ struct cpuset { /* partition root state */ int partition_root_state; + /* Set to true if exclusive_cpus manually set */ + int exclusive_cpus_set; + /* * Default hierarchy only: * use_parent_ecpus - set if using parent's effective_cpus @@ -187,12 +190,19 @@ struct cpuset { /* Handle for cpuset.cpus.partition */ struct cgroup_file partition_file; + + /* Remote partition silbling list anchored at remote_children */ + struct list_head remote_sibling; }; /* * Exclusive CPUs distributed out to sub-partitions of top_cpuset */ static cpumask_var_t subpartitions_cpus; +static cpumask_var_t cs_tmp_cpus; + +/* List of remote partition root children */ +static struct list_head remote_children; /* * Partition root states: @@ -343,6 +353,8 @@ static struct cpuset top_cpuset = { .flags = ((1 << CS_ONLINE) | (1 << CS_CPU_EXCLUSIVE) | (1 << CS_MEM_EXCLUSIVE)), .partition_root_state = PRS_ROOT, + .remote_sibling = LIST_HEAD_INIT(top_cpuset.remote_sibling), + }; /** @@ -1352,7 +1364,7 @@ static bool tasks_nocpu_error(struct cpuset *parent, struct cpuset *cs, /* * A populated partition (cs or parent) can't have empty effective_cpus */ - return (cpumask_subset(parent->effective_cpus, xcpus) && + return (parent && cpumask_subset(parent->effective_cpus, xcpus) && partition_is_populated(parent, cs)) || (!cpumask_intersects(xcpus, cpu_active_mask) && partition_is_populated(cs, NULL)); @@ -1366,6 +1378,8 @@ static void setup_exclusive_cpus(struct cpuset *cs, struct cpuset *parent) if (!cpumask_empty(cs->exclusive_cpus)) return; + WARN_ON_ONCE(cs->exclusive_cpus_set); + if (!parent) parent = parent_cs(cs); spin_lock_irq(&callback_lock); @@ -1374,6 +1388,192 @@ static void setup_exclusive_cpus(struct cpuset *cs, struct cpuset *parent) spin_unlock_irq(&callback_lock); } +static inline bool is_remote_partition(struct cpuset *cs) +{ + return !list_empty(&cs->remote_sibling); +} + +static inline bool is_local_partition(struct cpuset *cs) +{ + return is_partition_valid(cs) && !is_remote_partition(cs); +} + +static void reset_partition_data(struct cpuset *cs) +{ + struct cpuset *parent = parent_cs(cs); + + if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) + return; + + lockdep_assert_held(&callback_lock); + + cs->nr_subparts = 0; + if (!cs->exclusive_cpus_set) { + cpumask_clear(cs->exclusive_cpus); + if (is_cpu_exclusive(cs)) + clear_bit(CS_CPU_EXCLUSIVE, &cs->flags); + } + if (!cpumask_and(cs->effective_cpus, + parent->effective_cpus, cs->cpus_allowed)) { + cs->use_parent_ecpus = true; + parent->child_ecpus_count++; + cpumask_copy(cs->effective_cpus, parent->effective_cpus); + } +} + +/* + * remote_partition_enable - Enable current cpuset as a remote partition root + * @cs: the cpuset to update + * @tmp: temparary masks + * Return: 1 if successful, 0 if error + * + * Enable the current cpuset to become a remote partition root taking CPUs + * directly from the top cpuset. cpuset_mutex must be held by the caller. + */ +static int remote_partition_enable(struct cpuset *cs, struct tmpmasks *tmp) +{ + /* + * The user must have sysadmin privilege. 
+ */ + if (!capable(CAP_SYS_ADMIN)) + return 0; + + /* + * The requested exclusive_cpus must not be allocated to other + * partitions and it can't use up all the root's effective_cpus. + * + * Note that if there is any local partition root above it or + * remote partition root underneath it, its exclusive_cpus must + * have overlapped with subpartitions_cpus. + */ + if (cpumask_empty(cs->exclusive_cpus) || + cpumask_intersects(cs->exclusive_cpus, subpartitions_cpus) || + cpumask_subset(top_cpuset.effective_cpus, cs->exclusive_cpus)) + return 0; + + spin_lock_irq(&callback_lock); + cpumask_andnot(top_cpuset.effective_cpus, + top_cpuset.effective_cpus, cs->exclusive_cpus); + cpumask_or(subpartitions_cpus, + subpartitions_cpus, cs->exclusive_cpus); + + if (cs->use_parent_ecpus) { + struct cpuset *parent = parent_cs(cs); + + cs->use_parent_ecpus = false; + parent->child_ecpus_count--; + } + list_add(&cs->remote_sibling, &remote_children); + spin_unlock_irq(&callback_lock); + + /* + * Proprogate changes in top_cpuset's effective_cpus down the hierarchy. + */ + update_tasks_cpumask(&top_cpuset, tmp->new_cpus); + update_sibling_cpumasks(&top_cpuset, NULL, tmp); + + return 1; +} + +/* + * remote_partition_disable - Remove current cpuset from remote partition list + * @cs: the cpuset to update + * @tmp: temparary masks + * + * The effective_cpus is also updated. + * + * cpuset_mutex must be held by the caller. + */ +static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp) +{ + WARN_ON_ONCE(!is_remote_partition(cs)); + WARN_ON_ONCE(!cpumask_subset(cs->exclusive_cpus, subpartitions_cpus)); + spin_lock_irq(&callback_lock); + cpumask_andnot(subpartitions_cpus, + subpartitions_cpus, cs->exclusive_cpus); + cpumask_and(tmp->new_cpus, + cs->exclusive_cpus, cpu_active_mask); + cpumask_or(top_cpuset.effective_cpus, + top_cpuset.effective_cpus, tmp->new_cpus); + list_del_init(&cs->remote_sibling); + cs->partition_root_state = -cs->partition_root_state; + if (!cs->prs_err) + cs->prs_err = PERR_INVCPUS; + reset_partition_data(cs); + spin_unlock_irq(&callback_lock); + + /* + * Proprogate changes in top_cpuset's effective_cpus down the hierarchy. + */ + update_tasks_cpumask(&top_cpuset, tmp->new_cpus); + update_sibling_cpumasks(&top_cpuset, NULL, tmp); +} + +/* + * remote_cpus_update - cpus_exclusive change of remote partition + * @cs: the cpuset to update + * @newmask: the new exclusive_cpus mask + * @tmp: temparary masks + * + * top_cpuset and subpartitions_cpus will be updated. + * + * Return: 1 if change is allowed, 0 if it needs to become invalid. + */ +static int remote_cpus_update(struct cpuset *cs, struct cpumask *newmask, + struct tmpmasks *tmp) +{ + bool adding, deleting; + + if (WARN_ON_ONCE(!is_remote_partition(cs))) + return 0; + + WARN_ON_ONCE(!cpumask_subset(cs->exclusive_cpus, subpartitions_cpus)); + + if (cpumask_empty(newmask)) + goto invalidate; + + adding = cpumask_andnot(tmp->addmask, newmask, cs->exclusive_cpus); + deleting = cpumask_andnot(tmp->delmask, cs->exclusive_cpus, newmask); + + /* + * Additions of remote CPUs is only allowed if those CPUs are + * not allocated to other partitions and there are effective_cpus + * left in the top cpuset. 
+ */ + if (adding && (!capable(CAP_SYS_ADMIN) || + cpumask_intersects(tmp->addmask, subpartitions_cpus) || + cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))) + goto invalidate; + + spin_lock_irq(&callback_lock); + if (adding) { + cpumask_or(subpartitions_cpus, + subpartitions_cpus, tmp->addmask); + cpumask_andnot(top_cpuset.effective_cpus, + top_cpuset.effective_cpus, tmp->addmask); + } + if (deleting) { + cpumask_andnot(subpartitions_cpus, + subpartitions_cpus, tmp->delmask); + cpumask_and(tmp->delmask, + tmp->delmask, cpu_active_mask); + cpumask_or(top_cpuset.effective_cpus, + top_cpuset.effective_cpus, tmp->delmask); + } + spin_unlock_irq(&callback_lock); + + /* + * Proprogate changes in top_cpuset's effective_cpus down the hierarchy. + */ + update_tasks_cpumask(&top_cpuset, tmp->new_cpus); + update_sibling_cpumasks(&top_cpuset, NULL, tmp); + return 1; + +invalidate: + remote_partition_disable(cs, tmp); + return 0; +} + /** * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset * @cs: The cpuset that requests change in partition root state @@ -1663,8 +1863,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd, if (adding || deleting) { update_tasks_cpumask(parent, tmp->addmask); - if (parent->child_ecpus_count) - update_sibling_cpumasks(parent, cs, tmp); + update_sibling_cpumasks(parent, cs, tmp); } /* @@ -1777,12 +1976,24 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, rcu_read_lock(); cpuset_for_each_descendant_pre(cp, pos_css, cs) { struct cpuset *parent = parent_cs(cp); + bool remote = is_remote_partition(cp); bool update_parent = false; - compute_effective_cpumask(tmp->new_cpus, cp, parent); + /* + * Skip remote partition that acquires CPUs directly from + * top_cpuset unless it is cs. + */ + if (remote && (cp != cs)) { + pos_css = css_rightmost_descendant(pos_css); + continue; + } - if (is_partition_valid(parent) && is_partition_valid(cp)) + old_prs = new_prs = cp->partition_root_state; + if (remote || (is_partition_valid(parent) && + is_partition_valid(cp))) compute_partition_effective_cpumask(cp, tmp->new_cpus); + else + compute_effective_cpumask(tmp->new_cpus, cp, parent); /* * A partition with no effective_cpus is allowed as long as @@ -1790,6 +2001,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, * update_parent_effective_cpumask() to check it. */ if (is_partition_valid(cp) && cpumask_empty(tmp->new_cpus)) { + WARN_ON_ONCE(remote); update_parent = true; goto update_parent_effective; } @@ -1800,7 +2012,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, * it is a partition root that has explicitly distributed * out all its CPUs. */ - if (is_in_v2_mode() && cpumask_empty(tmp->new_cpus)) { + if (is_in_v2_mode() && !remote && cpumask_empty(tmp->new_cpus)) { cpumask_copy(tmp->new_cpus, parent->effective_cpus); if (!cp->use_parent_ecpus) { cp->use_parent_ecpus = true; @@ -1812,6 +2024,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, parent->child_ecpus_count--; } + if (remote) + goto get_css; + /* * Skip the whole subtree if * 1) the cpumask remains the same, @@ -1834,7 +2049,6 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, * update_tasks_cpumask() again for tasks in the parent * cpuset if the parent's effective_cpus changes. 
*/ - old_prs = new_prs = cp->partition_root_state; if ((cp != cs) && old_prs) { switch (parent->partition_root_state) { case PRS_ROOT: @@ -1857,6 +2071,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, } } +get_css: if (!css_tryget_online(&cp->css)) continue; rcu_read_unlock(); @@ -1876,13 +2091,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, if ((new_prs > 0) && cpumask_empty(cp->exclusive_cpus)) cpumask_and(cp->exclusive_cpus, cp->cpus_allowed, parent->exclusive_cpus); - if (new_prs < 0) { - /* Reset partition data */ - cp->nr_subparts = 0; - cpumask_clear(cp->exclusive_cpus); - if (is_cpu_exclusive(cp)) - clear_bit(CS_CPU_EXCLUSIVE, &cp->flags); - } + if (new_prs < 0) + reset_partition_data(cp); spin_unlock_irq(&callback_lock); notify_partition_change(cp, old_prs); @@ -1890,7 +2100,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, WARN_ON(!is_in_v2_mode() && !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); - update_tasks_cpumask(cp, tmp->new_cpus); + update_tasks_cpumask(cp, cp->effective_cpus); /* * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE @@ -1943,8 +2153,13 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs, /* * Check all its siblings and call update_cpumasks_hier() - * if their use_parent_ecpus flag is set in order for them - * to use the right effective_cpus value. + * if their effective_cpus will need to be changed. + * + * With the addition of exclusive_cpus which is a subset of + * cpus_allowed. It is possible a change in parent's effective_cpus + * due to a change in a child partition's exclusive_cpus will impact + * its siblings even if they do not inherit parent's effective_cpus + * directly. * * The update_cpumasks_hier() function may sleep. So we have to * release the RCU read lock before calling it. HIER_NO_SD_REBUILD @@ -1955,8 +2170,13 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs, cpuset_for_each_child(sibling, pos_css, parent) { if (sibling == cs) continue; - if (!sibling->use_parent_ecpus) - continue; + if (!sibling->use_parent_ecpus && + !is_partition_valid(sibling)) { + compute_effective_cpumask(tmp->new_cpus, sibling, + parent); + if (cpumask_equal(tmp->new_cpus, sibling->effective_cpus)) + continue; + } if (!css_tryget_online(&sibling->css)) continue; @@ -2006,11 +2226,16 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, return -EINVAL; /* - * When exclusive_cpus is set, make sure it is a subset of - * cpus_allowed and parent's exclusive_cpus. + * When exclusive_cpus has previously been set, CPUs no longer + * in cpus_allowed are removed. Otherwise, it is constrainted + * by cpus_allowed and parent's exclusive_cpus. 
*/ - cpumask_and(trialcs->exclusive_cpus, - parent->exclusive_cpus, trialcs->cpus_allowed); + if (cs->exclusive_cpus_set) + cpumask_and(trialcs->exclusive_cpus, + trialcs->exclusive_cpus, trialcs->cpus_allowed); + else if (is_partition_valid(cs)) + cpumask_and(trialcs->exclusive_cpus, + parent->exclusive_cpus, trialcs->cpus_allowed); } /* Nothing to do if the cpus didn't change */ @@ -2059,7 +2284,15 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, if (retval < 0) goto out_free; - if (cs->partition_root_state) { + if (is_partition_valid(cs)) { + /* + * Call remote_cpus_update() to handle valid remote partition + */ + if (is_remote_partition(cs)) { + remote_cpus_update(cs, trialcs->exclusive_cpus, &tmp); + goto update_cpus; + } + if (invalidate) update_parent_effective_cpumask(cs, partcmd_invalidate, NULL, &tmp); @@ -2068,13 +2301,16 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, trialcs->exclusive_cpus, &tmp); } +update_cpus: spin_lock_irq(&callback_lock); cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed); - if (!is_partition_valid(cs)) - cpumask_clear(cs->exclusive_cpus); - else + if (cpumask_empty(trialcs->exclusive_cpus)) + cs->exclusive_cpus_set = false; + else if (is_partition_valid(cs)) cpumask_copy(cs->exclusive_cpus, trialcs->exclusive_cpus); + if ((old_prs > 0) && !is_partition_valid(cs)) + reset_partition_data(cs); spin_unlock_irq(&callback_lock); /* effective_cpus will be updated here */ @@ -2088,6 +2324,104 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, return 0; } +/** + * update_exclusive_cpumask - update the exclusive_cpus mask of a cpuset + * @cs: the cpuset to consider + * @trialcs: trial cpuset + * @buf: buffer of cpu numbers written to this cpuset + * + * The tasks' cpumask will be updated if cs is a valid partition root. + */ +static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs, + const char *buf) +{ + int retval; + struct tmpmasks tmp; + struct cpuset *parent = parent_cs(cs); + bool invalidate = false; + bool freemasks = false; + int old_prs = cs->partition_root_state; + + if (!*buf) { + cpumask_clear(trialcs->exclusive_cpus); + trialcs->exclusive_cpus_set = false; + } else { + retval = cpulist_parse(buf, trialcs->exclusive_cpus); + if (retval < 0) + return retval; + + /* + * exclusive_cpus must be a subset of its cpus_allowed and + * parent's exclusive_cpus or the write will fail. 
+ */ + if (!cpumask_subset(trialcs->exclusive_cpus, trialcs->cpus_allowed) || + !cpumask_subset(trialcs->exclusive_cpus, parent->exclusive_cpus)) + return -EINVAL; + + trialcs->exclusive_cpus_set = true; + if (!is_cpu_exclusive(cs)) + set_bit(CS_CPU_EXCLUSIVE, &trialcs->flags); + } + + /* Nothing to do if the cpus didn't change */ + if (cpumask_equal(cs->exclusive_cpus, trialcs->exclusive_cpus)) { + WRITE_ONCE(cs->exclusive_cpus_set, trialcs->exclusive_cpus_set); + return 0; + } + + retval = validate_change(cs, trialcs); + if (retval) + return retval; + + if (is_partition_valid(cs)) { + freemasks = true; + if (alloc_cpumasks(NULL, &tmp)) + return -ENOMEM; + + if (cpumask_empty(trialcs->exclusive_cpus)) { + invalidate = true; + cs->prs_err = PERR_INVCPUS; + } else if (tasks_nocpu_error(parent, cs, trialcs->exclusive_cpus)) { + invalidate = true; + cs->prs_err = PERR_NOCPUS; + } + + if (is_remote_partition(cs)) { + if (invalidate) + remote_partition_disable(cs, &tmp); + else + remote_cpus_update(cs, trialcs->exclusive_cpus, &tmp); + goto update_xcpus; + } + + if (invalidate) + update_parent_effective_cpumask(cs, partcmd_invalidate, + NULL, &tmp); + else + update_parent_effective_cpumask(cs, partcmd_update, + trialcs->exclusive_cpus, &tmp); + } + +update_xcpus: + spin_lock_irq(&callback_lock); + cpumask_copy(cs->exclusive_cpus, trialcs->exclusive_cpus); + cs->exclusive_cpus_set = trialcs->exclusive_cpus_set; + if ((old_prs > 0) && !is_partition_valid(cs)) + reset_partition_data(cs); + spin_unlock_irq(&callback_lock); + + /* effective_cpus will be updated here */ + update_cpumasks_hier(cs, &tmp, 0); + + /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */ + if (cs->partition_root_state) + update_partition_sd_lb(cs, old_prs); + + if (freemasks) + free_cpumasks(NULL, &tmp); + return 0; +} + /* * Migrate memory region from one set of nodes to another. This is * performed asynchronously as it can be called from process migration path @@ -2499,6 +2833,13 @@ static int update_prstate(struct cpuset *cs, int new_prs) err = update_parent_effective_cpumask(cs, partcmd_enable, NULL, &tmpmask); + + /* + * If an attempt to become local partition root fails, + * try to become a remote partition root instead. + */ + if (err && remote_partition_enable(cs, &tmpmask)) + err = 0; } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. @@ -2509,8 +2850,11 @@ static int update_prstate(struct cpuset *cs, int new_prs) * Switching back to member is always allowed even if it * disables child partitions. 
*/ - update_parent_effective_cpumask(cs, partcmd_disable, NULL, - &tmpmask); + if (is_remote_partition(cs)) + remote_partition_disable(cs, &tmpmask); + else + update_parent_effective_cpumask(cs, partcmd_disable, + NULL, &tmpmask); /* * Invalidation of child partitions will be done in @@ -2531,7 +2875,7 @@ static int update_prstate(struct cpuset *cs, int new_prs) cs->partition_root_state = new_prs; WRITE_ONCE(cs->prs_err, err); if (!is_partition_valid(cs)) - cpumask_clear(cs->exclusive_cpus); + reset_partition_data(cs); spin_unlock_irq(&callback_lock); /* Force update if switching back to member */ @@ -3012,6 +3356,9 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, case FILE_CPULIST: retval = update_cpumask(cs, trialcs, buf); break; + case FILE_EXCLUSIVE_CPULIST: + retval = update_exclusive_cpumask(cs, trialcs, buf); + break; case FILE_MEMLIST: retval = update_nodemask(cs, trialcs, buf); break; @@ -3338,6 +3685,7 @@ static struct cftype dfl_files[] = { { .name = "cpus.exclusive", .seq_show = cpuset_common_seq_show, + .write = cpuset_write_resmask, .private = FILE_EXCLUSIVE_CPULIST, .flags = CFTYPE_NOT_ON_ROOT, }, @@ -3384,6 +3732,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css) nodes_clear(cs->effective_mems); fmeter_init(&cs->fmeter); cs->relax_domain_level = -1; + INIT_LIST_HEAD(&cs->remote_sibling); /* Set CS_MEMORY_MIGRATE for default hierarchy */ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) @@ -3419,6 +3768,11 @@ static int cpuset_css_online(struct cgroup_subsys_state *css) cs->effective_mems = parent->effective_mems; cs->use_parent_ecpus = true; parent->child_ecpus_count++; + /* + * Clear CS_SCHED_LOAD_BALANCE if parent is isolated + */ + if (!is_sched_load_balance(parent)) + clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); } /* @@ -3661,6 +4015,7 @@ int __init cpuset_init(void) BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL)); BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL)); BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL)); + BUG_ON(!alloc_cpumask_var(&cs_tmp_cpus, GFP_KERNEL)); cpumask_setall(top_cpuset.cpus_allowed); nodes_setall(top_cpuset.mems_allowed); @@ -3671,6 +4026,7 @@ int __init cpuset_init(void) fmeter_init(&top_cpuset.fmeter); set_bit(CS_SCHED_LOAD_BALANCE, &top_cpuset.flags); top_cpuset.relax_domain_level = -1; + INIT_LIST_HEAD(&remote_children); BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL)); @@ -3787,6 +4143,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) static nodemask_t new_mems; bool cpus_updated; bool mems_updated; + bool remote; struct cpuset *parent; retry: wait_event(cpuset_attach_wq, cs->attach_in_progress == 0); @@ -3813,9 +4170,17 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) * Compute effective_cpus for valid partition root, may invalidate * child partition roots if necessary. 
*/ - if (is_partition_valid(cs) && is_partition_valid(parent)) + remote = is_remote_partition(cs); + if (remote || (is_partition_valid(cs) && is_partition_valid(parent))) compute_partition_effective_cpumask(cs, &new_cpus); + if (remote && cpumask_empty(&new_cpus) && + partition_is_populated(cs, NULL)) { + remote_partition_disable(cs, tmp); + compute_effective_cpumask(&new_cpus, cs, parent); + remote = false; + cpuset_force_rebuild(); + } /* * Force the partition to become invalid if either one of * the following conditions hold: @@ -3823,7 +4188,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) * 2) parent is invalid or doesn't grant any cpus to child * partitions. */ - if (is_partition_valid(cs) && (!is_partition_valid(parent) || + if (is_local_partition(cs) && (!is_partition_valid(parent) || tasks_nocpu_error(parent, cs, &new_cpus))) { update_parent_effective_cpumask(cs, partcmd_invalidate, NULL, tmp); compute_effective_cpumask(&new_cpus, cs, parent); From patchwork Tue Jun 27 14:35:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 697055 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92B63C001B1 for ; Tue, 27 Jun 2023 14:37:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231614AbjF0Oh4 (ORCPT ); Tue, 27 Jun 2023 10:37:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231630AbjF0Oh2 (ORCPT ); Tue, 27 Jun 2023 10:37:28 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C8D37359B for ; Tue, 27 Jun 2023 07:36:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1687876576; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E6YHVyNv9jllDW5EgfHaZ/VxsLpfT00bCuAF1yM8w0Q=; b=fyOZxavP7mQrOIIYQsZ5sgIYNI/72P94zpLfmDic21xvXuX418XVwF+J7bsC5zUMyhP923 TpcyURzqwsY/eEMZhW2vYlyqTUbkdbQuKeZB1H9xbz1wcHzFzvYGIs3/un+yZpOjHiwp77 ohotveuRA3ykI/7oPQJKWKEijBb3kgI= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-303-DkbVTISrMwWEICP6AkL1oQ-1; Tue, 27 Jun 2023 10:36:13 -0400 X-MC-Unique: DkbVTISrMwWEICP6AkL1oQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8E8BF3819AE1; Tue, 27 Jun 2023 14:35:39 +0000 (UTC) Received: from llong.com (unknown [10.22.10.32]) by smtp.corp.redhat.com (Postfix) with ESMTP id C234A40C2063; Tue, 27 Jun 2023 14:35:38 +0000 (UTC) From: Waiman Long To: Tejun Heo , Zefan Li , Johannes Weiner , Jonathan Corbet , Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli , Valentin Schneider 
, Frederic Weisbecker , Mrunal Patel , Ryan Phillips , Brent Rowsell , Peter Hunt , Phil Auld , Waiman Long Subject: [PATCH v4 9/9] cgroup/cpuset: Extend test_cpuset_prs.sh to test remote partition Date: Tue, 27 Jun 2023 10:35:08 -0400 Message-Id: <20230627143508.1576882-10-longman@redhat.com> In-Reply-To: <20230627143508.1576882-1-longman@redhat.com> References: <20230627143508.1576882-1-longman@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org This patch extends the test_cpuset_prs.sh test script to support testing the new remote partition and use the new "cpuset.cpus.exclusive" file by adding new tests for them. In addition, the following changes are also made: 1) Run the state transition tests directly under root to ease testing of remote partition and remove the unneeded test column. 2) Add a column to for the list of expected isolated CPUs and compare it with the actual value by looking at the state of /sys/kernel/debug/sched/domains which will be available if the verbose flag is set. Signed-off-by: Waiman Long --- .../selftests/cgroup/test_cpuset_prs.sh | 398 ++++++++++++------ 1 file changed, 259 insertions(+), 139 deletions(-) diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh index 2b5215cc599f..ace992d1ed9e 100755 --- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh +++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh @@ -3,7 +3,7 @@ # # Test for cpuset v2 partition root state (PRS) # -# The sched verbose flag is set, if available, so that the console log +# The sched verbose flag can be optionally set so that the console log # can be examined for the correct setting of scheduling domain. # @@ -22,27 +22,27 @@ WAIT_INOTIFY=$(cd $(dirname $0); pwd)/wait_inotify # Find cgroup v2 mount point CGROUP2=$(mount -t cgroup2 | head -1 | awk -e '{print $3}') [[ -n "$CGROUP2" ]] || skip_test "Cgroup v2 mount point not found!" +SUBPARTS_CPUS=$CGROUP2/.__DEBUG__.cpuset.cpus.subpartitions +CPULIST=$(cat $CGROUP2/cpuset.cpus.effective) -CPUS=$(lscpu | grep "^CPU(s):" | sed -e "s/.*:[[:space:]]*//") -[[ $CPUS -lt 8 ]] && skip_test "Test needs at least 8 cpus available!" +NR_CPUS=$(lscpu | grep "^CPU(s):" | sed -e "s/.*:[[:space:]]*//") +[[ $NR_CPUS -lt 8 ]] && skip_test "Test needs at least 8 cpus available!" # Set verbose flag and delay factor PROG=$1 -VERBOSE= +VERBOSE=0 DELAY_FACTOR=1 SCHED_DEBUG= while [[ "$1" = -* ]] do case "$1" in - -v) VERBOSE=1 + -v) ((VERBOSE++)) # Enable sched/verbose can slow thing down [[ $DELAY_FACTOR -eq 1 ]] && DELAY_FACTOR=2 - break ;; -d) DELAY_FACTOR=$2 shift - break ;; *) echo "Usage: $PROG [-v] [-d " exit @@ -52,7 +52,7 @@ do done # Set sched verbose flag if available when "-v" option is specified -if [[ -n "$VERBOSE" && -d /sys/kernel/debug/sched ]] +if [[ $VERBOSE -gt 0 && -d /sys/kernel/debug/sched ]] then # Used to restore the original setting during cleanup SCHED_DEBUG=$(cat /sys/kernel/debug/sched/verbose) @@ -61,14 +61,26 @@ fi cd $CGROUP2 echo +cpuset > cgroup.subtree_control + +# +# If cpuset has been set up and used in child cgroups, we may not be able to +# create partition under root cgroup because of the CPU exclusivity rule. +# So we are going to skip the test if this is the case. +# [[ -d test ]] || mkdir test -cd test +echo 0-6 > test/cpuset.cpus +echo root > test/cpuset.cpus.partition +cat test/cpuset.cpus.partition | grep -q invalid +RESULT=$? 
+echo member > test/cpuset.cpus.partition +echo "" > test/cpuset.cpus +[[ $RESULT -eq 0 ]] && skip_test "Child cgroups are using cpuset!" cleanup() { online_cpus + cd $CGROUP2 rmdir A1/A2/A3 A1/A2 A1 B1 > /dev/null 2>&1 - cd .. rmdir test > /dev/null 2>&1 [[ -n "$SCHED_DEBUG" ]] && echo "$SCHED_DEBUG" > /sys/kernel/debug/sched/verbose @@ -103,7 +115,7 @@ test_partition() [[ $? -eq 0 ]] || exit 1 ACTUAL_VAL=$(cat cpuset.cpus.partition) [[ $ACTUAL_VAL != $EXPECTED_VAL ]] && { - echo "cpuset.cpus.partition: expect $EXPECTED_VAL, found $EXPECTED_VAL" + echo "cpuset.cpus.partition: expect $EXPECTED_VAL, found $ACTUAL_VAL" echo "Test FAILED" exit 1 } @@ -114,7 +126,7 @@ test_effective_cpus() EXPECTED_VAL=$1 ACTUAL_VAL=$(cat cpuset.cpus.effective) [[ "$ACTUAL_VAL" != "$EXPECTED_VAL" ]] && { - echo "cpuset.cpus.effective: expect '$EXPECTED_VAL', found '$EXPECTED_VAL'" + echo "cpuset.cpus.effective: expect '$EXPECTED_VAL', found '$ACTUAL_VAL'" echo "Test FAILED" exit 1 } @@ -139,6 +151,7 @@ test_add_proc() # test_isolated() { + cd $CGROUP2/test echo 2-3 > cpuset.cpus TYPE=$(cat cpuset.cpus.partition) [[ $TYPE = member ]] || echo member > cpuset.cpus.partition @@ -203,125 +216,163 @@ test_isolated() # # Cgroup test hierarchy # -# test -- A1 -- A2 -- A3 -# \- B1 +# root -- A1 -- A2 -- A3 +# +- B1 # -# P = set cpus.partition (0:member, 1:root, 2:isolated, -1:root invalid) +# P = set cpus.partition (0:member, 1:root, 2:isolated) # C = add cpu-list # S

= use prefix in subtree_control # T = put a task into cgroup -# O- = Write to CPU online file of +# O= = Write to CPU online file of # SETUP_A123_PARTITIONS="C1-3:P1:S+ C2-3:P1:S+ C3:P1" TEST_MATRIX=( - # test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate - # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ - " S+ C0-1 . . C2-3 S+ C4-5 . . 0 A2:0-1" - " S+ C0-1 . . C2-3 P1 . . . 0 " - " S+ C0-1 . . C2-3 P1:S+ C0-1:P1 . . 0 " - " S+ C0-1 . . C2-3 P1:S+ C1:P1 . . 0 " - " S+ C0-1:S+ . . C2-3 . . . P1 0 " - " S+ C0-1:P1 . . C2-3 S+ C1 . . 0 " - " S+ C0-1:P1 . . C2-3 S+ C1:P1 . . 0 " - " S+ C0-1:P1 . . C2-3 S+ C1:P1 . P1 0 " - " S+ C0-1:P1 . . C2-3 C4-5 . . . 0 A1:4-5" - " S+ C0-1:P1 . . C2-3 S+:C4-5 . . . 0 A1:4-5" - " S+ C0-1 . . C2-3:P1 . . . C2 0 " - " S+ C0-1 . . C2-3:P1 . . . C4-5 0 B1:4-5" - " S+ C0-3:P1:S+ C2-3:P1 . . . . . . 0 A1:0-1,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . C1-3 . . . 0 A1:1,A2:2-3" - " S+ C2-3:P1:S+ C3:P1 . . C3 . . . 0 A1:,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . C3 P0 . . 0 A1:3,A2:3 A1:P1,A2:P0" - " S+ C2-3:P1:S+ C2:P1 . . C2-4 . . . 0 A1:3-4,A2:2" - " S+ C2-3:P1:S+ C3:P1 . . C3 . . C0-2 0 A1:,B1:0-2 A1:P1,A2:P1" - " S+ $SETUP_A123_PARTITIONS . C2-3 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS + # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ -------- + " C0-1 . . C2-3 S+ C4-5 . . 0 A2:0-1" + " C0-1 . . C2-3 P1 . . . 0 " + " C0-1 . . C2-3 P1:S+ C0-1:P1 . . 0 " + " C0-1 . . C2-3 P1:S+ C1:P1 . . 0 " + " C0-1:S+ . . C2-3 . . . P1 0 " + " C0-1:P1 . . C2-3 S+ C1 . . 0 " + " C0-1:P1 . . C2-3 S+ C1:P1 . . 0 " + " C0-1:P1 . . C2-3 S+ C1:P1 . P1 0 " + " C0-1:P1 . . C2-3 C4-5 . . . 0 A1:4-5" + " C0-1:P1 . . C2-3 S+:C4-5 . . . 0 A1:4-5" + " C0-1 . . C2-3:P1 . . . C2 0 " + " C0-1 . . C2-3:P1 . . . C4-5 0 B1:4-5" + "C0-3:P1:S+ C2-3:P1 . . . . . . 0 A1:0-1,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . C1-3 . . . 0 A1:1,A2:2-3" + "C2-3:P1:S+ C3:P1 . . C3 . . . 0 A1:,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . C3 P0 . . 0 A1:3,A2:3 A1:P1,A2:P0" + "C2-3:P1:S+ C2:P1 . . C2-4 . . . 0 A1:3-4,A2:2" + "C2-3:P1:S+ C3:P1 . . C3 . . C0-2 0 A1:,B1:0-2 A1:P1,A2:P1" + "$SETUP_A123_PARTITIONS . C2-3 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" # CPU offlining cases: - " S+ C0-1 . . C2-3 S+ C4-5 . O2-0 0 A1:0-1,B1:3" - " S+ C0-3:P1:S+ C2-3:P1 . . O2-0 . . . 0 A1:0-1,A2:3" - " S+ C0-3:P1:S+ C2-3:P1 . . O2-0 O2-1 . . 0 A1:0-1,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . O1-0 . . . 0 A1:0,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . O1-0 O1-1 . . 0 A1:0-1,A2:2-3" - " S+ C2-3:P1:S+ C3:P1 . . O3-0 O3-1 . . 0 A1:2,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P2 . . O3-0 O3-1 . . 0 A1:2,A2:3 A1:P1,A2:P2" - " S+ C2-3:P1:S+ C3:P1 . . O2-0 O2-1 . . 0 A1:2,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P2 . . O2-0 O2-1 . . 0 A1:2,A2:3 A1:P1,A2:P2" - " S+ C2-3:P1:S+ C3:P1 . . O2-0 . . . 0 A1:,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . O3-0 . . . 0 A1:2,A2: A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . T:O2-0 . . . 0 A1:3,A2:3 A1:P1,A2:P-1" - " S+ C2-3:P1:S+ C3:P1 . . . T:O3-0 . . 0 A1:2,A2:2 A1:P1,A2:P-1" - " S+ $SETUP_A123_PARTITIONS . O1-0 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . O2-0 . . . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . O3-0 . . . 0 A1:1,A2:2,A3: A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . . T:O2-0 . . 
0 A1:1,A2:3,A3:3 A1:P1,A2:P1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . . . T:O3-0 . 0 A1:1,A2:2,A3:2 A1:P1,A2:P1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O1-1 . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . . T:O2-0 O2-1 . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . . . T:O3-0 O3-1 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O2-0 O1-1 . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O2-0 O2-1 . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" - - # test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate - # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ + " C0-1 . . C2-3 S+ C4-5 . O2=0 0 A1:0-1,B1:3" + "C0-3:P1:S+ C2-3:P1 . . O2=0 . . . 0 A1:0-1,A2:3" + "C0-3:P1:S+ C2-3:P1 . . O2=0 O2=1 . . 0 A1:0-1,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . O1=0 . . . 0 A1:0,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . O1=0 O1=1 . . 0 A1:0-1,A2:2-3" + "C2-3:P1:S+ C3:P1 . . O3=0 O3=1 . . 0 A1:2,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P2 . . O3=0 O3=1 . . 0 A1:2,A2:3 A1:P1,A2:P2" + "C2-3:P1:S+ C3:P1 . . O2=0 O2=1 . . 0 A1:2,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P2 . . O2=0 O2=1 . . 0 A1:2,A2:3 A1:P1,A2:P2" + "C2-3:P1:S+ C3:P1 . . O2=0 . . . 0 A1:,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . O3=0 . . . 0 A1:2,A2: A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . T:O2=0 . . . 0 A1:3,A2:3 A1:P1,A2:P-1" + "C2-3:P1:S+ C3:P1 . . . T:O3=0 . . 0 A1:2,A2:2 A1:P1,A2:P-1" + "$SETUP_A123_PARTITIONS . O1=0 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . O2=0 . . . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . O3=0 . . . 0 A1:1,A2:2,A3: A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" + "$SETUP_A123_PARTITIONS . . T:O2=0 . . 0 A1:1,A2:3,A3:3 A1:P1,A2:P1,A3:P-1" + "$SETUP_A123_PARTITIONS . . . T:O3=0 . 0 A1:1,A2:2,A3:2 A1:P1,A2:P1,A3:P-1" + "$SETUP_A123_PARTITIONS . T:O1=0 O1=1 . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . . T:O2=0 O2=1 . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . . . T:O3=0 O3=1 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 O2=0 O1=1 . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 O2=0 O2=1 . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" + + # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS + # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ -------- + # + # Remote partition and cpuset.cpus.exclusive tests + # + " C0-3:S+ C1-3:S+ C2-3 . X2-3 . . . 0 A1:0-3,A2:1-3,A3:2-3,XA1:2-3" + " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3:P2 . . 0 A1:0-1,A2:2-3,A3:2-3 A1:P0,A2:P2 2-3" + " C0-3:S+ C1-3:S+ C2-3 . X2-3 X3:P2 . . 0 A1:0-2,A2:3,A3:3 A1:P0,A2:P2 3" + " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2 . 0 A1:0-1,A2:1,A3:2-3 A1:P0,A3:P2 2-3" + " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:C3 . 0 A1:0-2,A2:1-2,A3:3 A1:P0,A3:P2 3" + " C0-3:S+ C1-3:S+ C2-3 C2-3 . . . P2 0 A1:0-3,A2:1-3,A3:2-3,B1:2-3 A1:P0,A3:P0,B1:P-2" + " C0-3:S+ C1-3:S+ C2-3 C4-5 . . . P2 0 B1:4-5 B1:P2 4-5" + " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2 P2 0 A3:2-3,B1:4 A3:P2,B1:P2 2-4" + " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2:C1-3 P2 0 A3:2-3,B1:4 A3:P2,B1:P2 2-4" + " C0-3:S+ C1-3:S+ C2-3 C4 X1-3 X1-3:P2 P2 . 
0 A2:1,A3:2-3 A2:P2,A3:P2 1-3"
+	" C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2 P2:C4-5 0 A3:2-3,B1:4-5 A3:P2,B1:P2 2-5"
+
+	# Nested remote/local partition tests
+	" C0-3:S+ C1-3:S+ C2-3 C4-5 X2-3 X2-3:P1 P2 P1 0 A1:0-1,A2:,A3:2-3,B1:4-5 \
+	  A1:P0,A2:P1,A3:P2,B1:P1 2-3"
+	" C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1,A2:,A3:2-3,B1:4 \
+	  A1:P0,A2:P1,A3:P2,B1:P1 2-4"
+	" C0-3:S+ C1-3:S+ C3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1,A2:2,A3:3,B1:4 \
+	  A1:P0,A2:P1,A3:P2,B1:P1 2-4"
+
+	# Remote partition offline tests
+	" C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:O2=0 . 0 A1:0-1,A2:1,A3:3 A1:P0,A3:P2 2-3"
+	" C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:O2=0 O2=1 0 A1:0-1,A2:1,A3:2-3 A1:P0,A3:P2 2-3"
+	" C0-3:S+ C1-3:S+ C3 . X2-3 X2-3 P2:O3=0 . 0 A1:0-2,A2:1-2,A3: A1:P0,A3:P2 3"
+	" C0-3:S+ C1-3:S+ C3 . X2-3 X2-3 T:P2:O3=0 . 0 A1:0-2,A2:1-2,A3:1-2 A1:P0,A3:P-2 3"
+
+	# An invalidated remote partition cannot self-recover from hotplug
+	" C0-3:S+ C1-3:S+ C2 . X2-3 X2-3 T:P2:O2=0 O2=1 0 A1:0-3,A2:1-3,A3:2 A1:P0,A3:P-2"
+
+	#  old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS
+	#  ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ --------
 	#
 	# Incorrect change to cpuset.cpus invalidates partition root
 	#
 	# Adding CPUs to partition root that are not in parent's
 	# cpuset.cpus is allowed, but those extra CPUs are ignored.
-	" S+ C2-3:P1:S+ C3:P1 . . . C2-4 . . 0 A1:,A2:2-3 A1:P1,A2:P1"
+	"C2-3:P1:S+ C3:P1 . . . C2-4 . . 0 A1:,A2:2-3 A1:P1,A2:P1"

 	# Taking away all CPUs from parent or itself if there are tasks
 	# will make the partition invalid.
-	" S+ C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3,A2:2-3 A1:P1,A2:P-1"
-	" S+ C3:P1:S+ C3 . . T P1 . . 0 A1:3,A2:3 A1:P1,A2:P-1"
-	" S+ $SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1"
-	" S+ $SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1"
+	"C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3,A2:2-3 A1:P1,A2:P-1"
+	" C3:P1:S+ C3 . . T P1 . . 0 A1:3,A2:3 A1:P1,A2:P-1"
+	"$SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1"
+	"$SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1"

 	# Changing a partition root to member makes child partitions invalid
-	" S+ C2-3:P1:S+ C3:P1 . . P0 . . . 0 A1:2-3,A2:3 A1:P0,A2:P-1"
-	" S+ $SETUP_A123_PARTITIONS . C2-3 P0 . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P0,A3:P-1"
+	"C2-3:P1:S+ C3:P1 . . P0 . . . 0 A1:2-3,A2:3 A1:P0,A2:P-1"
+	"$SETUP_A123_PARTITIONS . C2-3 P0 . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P0,A3:P-1"

 	# cpuset.cpus can contains cpus not in parent's cpuset.cpus as long
 	# as they overlap.
-	" S+ C2-3:P1:S+ . . . . C3-4:P1 . . 0 A1:2,A2:3 A1:P1,A2:P1"
+	"C2-3:P1:S+ . . . . C3-4:P1 . . 0 A1:2,A2:3 A1:P1,A2:P1"

 	# Deletion of CPUs distributed to child cgroup is allowed.
-	" S+ C0-1:P1:S+ C1 . C2-3 C4-5 . . . 0 A1:4-5,A2:4-5"
+	"C0-1:P1:S+ C1 . C2-3 C4-5 . . . 0 A1:4-5,A2:4-5"

 	# To become a valid partition root, cpuset.cpus must overlap parent's
 	# cpuset.cpus.
-	" S+ C0-1:P1 . . C2-3 S+ C4-5:P1 . . 0 A1:0-1,A2:0-1 A1:P1,A2:P-1"
+	" C0-1:P1 . . C2-3 S+ C4-5:P1 . . 0 A1:0-1,A2:0-1 A1:P1,A2:P-1"

 	# Enabling partition with child cpusets is allowed
-	" S+ C0-1:S+ C1 . C2-3 P1 . . . 0 A1:0-1,A2:1 A1:P1"
+	" C0-1:S+ C1 . C2-3 P1 . . . 0 A1:0-1,A2:1 A1:P1"

 	# A partition root with non-partition root parent is invalid, but it
 	# can be made valid if its parent becomes a partition root too.
-	" S+ C0-1:S+ C1 . C2-3 . P2 . . 0 A1:0-1,A2:1 A1:P0,A2:P-2"
-	" S+ C0-1:S+ C1:P2 . C2-3 P1 . . . 0 A1:0,A2:1 A1:P1,A2:P2"
+	" C0-1:S+ C1 . C2-3 . P2 . . 0 A1:0-1,A2:1 A1:P0,A2:P-2"
+	" C0-1:S+ C1:P2 . C2-3 P1 . . . 0 A1:0,A2:1 A1:P1,A2:P2"

 	# A non-exclusive cpuset.cpus change will invalidate partition and its siblings
-	" S+ C0-1:P1 . . C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P0"
-	" S+ C0-1:P1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P-1"
-	" S+ C0-1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P0,B1:P-1"
+	" C0-1:P1 . . C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P0"
+	" C0-1:P1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P-1"
+	" C0-1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P0,B1:P-1"

-	# test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate
-	# ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------
+	#  old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS
+	#  ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ --------
 	# Failure cases:

 	# A task cannot be added to a partition with no cpu
-	" S+ C2-3:P1:S+ C3:P1 . . O2-0:T . . . 1 A1:,A2:3 A1:P1,A2:P1"
+	"C2-3:P1:S+ C3:P1 . . O2=0:T . . . 1 A1:,A2:3 A1:P1,A2:P1"
+
+	# cpuset.cpus.exclusive must be a subset of cpuset.cpus & parent's cpuset.cpus.exclusive
+	" C0-3:S+ C1-3:S+ C2-3 . X2-4 . . . 1"
+	" C0-3:S+ C1-3:S+ C2-3 . X1-2 X2-3 . . 1"
 )

 #
 # Write to the cpu online file
-#  $1 - <cpu>-<val> where <cpu> = cpu number, <val> value to be written
+#  $1 - <cpu>=<val> where <cpu> = cpu number, <val> value to be written
 #
 write_cpu_online()
 {
-	CPU=${1%-*}
-	VAL=${1#*-}
+	CPU=${1%=*}
+	VAL=${1#*=}
 	CPUFILE=//sys/devices/system/cpu/cpu${CPU}/online
 	if [[ $VAL -eq 0 ]]
 	then
@@ -349,11 +400,12 @@ set_ctrl_state()
 	TMPMSG=/tmp/.msg_$$
 	CGRP=$1
 	STATE=$2
-	SHOWERR=${3}${VERBOSE}
+	SHOWERR=${3}
 	CTRL=${CTRL:=$CONTROLLER}
 	HASERR=0
 	REDIRECT="2> $TMPMSG"
 	[[ -z "$STATE" || "$STATE" = '.' ]] && return 0
+	[[ $VERBOSE -gt 0 ]] && SHOWERR=1

 	rm -f $TMPMSG
 	for CMD in $(echo $STATE | sed -e "s/:/ /g")
@@ -362,12 +414,18 @@ set_ctrl_state()
 		SFILE=$CGRP/cgroup.subtree_control
 		PFILE=$CGRP/cpuset.cpus.partition
 		CFILE=$CGRP/cpuset.cpus
+		XFILE=$CGRP/cpuset.cpus.exclusive
 		S=$(expr substr $CMD 1 1)
 		if [[ $S = S ]]
 		then
 			PREFIX=${CMD#?}
 			COMM="echo ${PREFIX}${CTRL} > $SFILE"
 			eval $COMM $REDIRECT
+		elif [[ $S = X ]]
+		then
+			CPUS=${CMD#?}
+			COMM="echo $CPUS > $XFILE"
+			eval $COMM $REDIRECT
 		elif [[ $S = C ]]
 		then
 			CPUS=${CMD#?}
@@ -430,7 +488,7 @@ online_cpus()
 	[[ -n "OFFLINE_CPUS" ]] && {
 		for C in $OFFLINE_CPUS
 		do
-			write_cpu_online ${C}-1
+			write_cpu_online ${C}=1
 		done
 	}
 }
@@ -443,18 +501,25 @@ reset_cgroup_states()
 	echo 0 > $CGROUP2/cgroup.procs
 	online_cpus
 	rmdir A1/A2/A3 A1/A2 A1 B1 > /dev/null 2>&1
-	set_ctrl_state . S-
+	pause 0.02
+	set_ctrl_state . R-
 	pause 0.01
 }

 dump_states()
 {
-	for DIR in A1 A1/A2 A1/A2/A3 B1
+	for DIR in . A1 A1/A2 A1/A2/A3 B1
 	do
+		CPUS=$DIR/cpuset.cpus
 		ECPUS=$DIR/cpuset.cpus.effective
+		XCPUS=$DIR/cpuset.cpus.exclusive
 		PRS=$DIR/cpuset.cpus.partition
+		PCPUS=$DIR/.__DEBUG__.cpuset.cpus.subpartitions
+		[[ -e $CPUS ]] && echo "$CPUS: $(cat $CPUS)"
+		[[ -e $XCPUS ]] && echo "$XCPUS: $(cat $XCPUS)"
 		[[ -e $ECPUS ]] && echo "$ECPUS: $(cat $ECPUS)"
 		[[ -e $PRS ]] && echo "$PRS: $(cat $PRS)"
+		[[ -e $PCPUS ]] && echo "$PCPUS: $(cat $PCPUS)"
 	done
 }

@@ -470,11 +535,17 @@ check_effective_cpus()
 		set -- $(echo $CHK | sed -e "s/:/ /g")
 		CGRP=$1
 		CPUS=$2
+		if [[ $CGRP = X* ]]
+		then
+			CGRP=${CGRP#X}
+			FILE=cpuset.cpus.exclusive
+		else
+			FILE=cpuset.cpus.effective
+		fi
 		[[ $CGRP = A2 ]] && CGRP=A1/A2
 		[[ $CGRP = A3 ]] && CGRP=A1/A2/A3
-		FILE=$CGRP/cpuset.cpus.effective
-		[[ -e $FILE ]] || return 1
-		[[ $CPUS = $(cat $FILE) ]] || return 1
+		[[ -e $CGRP/$FILE ]] || return 1
+		[[ $CPUS = $(cat $CGRP/$FILE) ]] || return 1
 	done
 }

@@ -524,6 +595,64 @@ check_cgroup_states()
 	return 0
 }

+#
+# Get isolated (including offline) CPUs by looking at
+# /sys/kernel/debug/sched/domains and compare that with the expected value.
+#
+# Note that a sched domain of just 1 CPU will be considered isolated.
+#
+# $1 - expected isolated cpu list
+#
+check_isolcpus()
+{
+	EXPECT_VAL=$1
+	ISOLCPUS=
+	LASTISOLCPU=
+	SCHED_DOMAINS=/sys/kernel/debug/sched/domains
+	[[ -d $SCHED_DOMAINS ]] || return
+
+	for ((CPU=0; CPU < $NR_CPUS; CPU++))
+	do
+		[[ -n "$(ls ${SCHED_DOMAINS}/cpu$CPU)" ]] && continue
+
+		if [[ -z "$LASTISOLCPU" ]]
+		then
+			ISOLCPUS=$CPU
+			LASTISOLCPU=$CPU
+		elif [[ "$LASTISOLCPU" -eq $((CPU - 1)) ]]
+		then
+			echo $ISOLCPUS | grep -q "\<$LASTISOLCPU\$"
+			if [[ $? -eq 0 ]]
+			then
+				ISOLCPUS=${ISOLCPUS}-
+			fi
+			LASTISOLCPU=$CPU
+		else
+			if [[ $ISOLCPUS = *- ]]
+			then
+				ISOLCPUS=${ISOLCPUS}$LASTISOLCPU
+			fi
+			ISOLCPUS=${ISOLCPUS},$CPU
+			LASTISOLCPU=$CPU
+		fi
+	done
+	[[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU
+	[[ $EXPECT_VAL = $ISOLCPUS ]]
+}
+
+test_fail()
+{
+	TESTNUM=$1
+	TESTTYPE=$2
+	ADDINFO=$3
+	echo "Test $TEST[$TESTNUM] failed $TESTTYPE check!"
+	[[ -n "$ADDINFO" ]] && echo "*** $ADDINFO ***"
+	eval echo \"\${$TEST[$I]}\"
+	echo
+	dump_states
+	exit 1
+}
+
 #
 # Run cpuset state transition test
 #  $1 - test matrix name
@@ -536,88 +665,80 @@ run_state_test()
 {
 	TEST=$1
 	CONTROLLER=cpuset
-	CPULIST=0-6
 	I=0
 	eval CNT="\${#$TEST[@]}"

 	reset_cgroup_states
-	echo $CPULIST > cpuset.cpus
-	echo root > cpuset.cpus.partition
 	console_msg "Running state transition test ..."

 	while [[ $I -lt $CNT ]]
 	do
 		echo "Running test $I ..." > /dev/console
+		[[ $VERBOSE -gt 1 ]] && {
+			echo ""
+			eval echo \"\${$TEST[$I]}\"
+		}
 		eval set -- "\${$TEST[$I]}"
-		ROOT=$1
-		OLD_A1=$2
-		OLD_A2=$3
-		OLD_A3=$4
-		OLD_B1=$5
-		NEW_A1=$6
-		NEW_A2=$7
-		NEW_A3=$8
-		NEW_B1=$9
-		RESULT=${10}
-		ECPUS=${11}
-		STATES=${12}
-
-		set_ctrl_state_noerr . $ROOT
+		OLD_A1=$1
+		OLD_A2=$2
+		OLD_A3=$3
+		OLD_B1=$4
+		NEW_A1=$5
+		NEW_A2=$6
+		NEW_A3=$7
+		NEW_B1=$8
+		RESULT=$9
+		ECPUS=${10}
+		STATES=${11}
+		ICPUS=${12}
+
+		set_ctrl_state_noerr B1 $OLD_B1
 		set_ctrl_state_noerr A1 $OLD_A1
 		set_ctrl_state_noerr A1/A2 $OLD_A2
 		set_ctrl_state_noerr A1/A2/A3 $OLD_A3
-		set_ctrl_state_noerr B1 $OLD_B1
 		RETVAL=0
 		set_ctrl_state A1 $NEW_A1; ((RETVAL += $?))
 		set_ctrl_state A1/A2 $NEW_A2; ((RETVAL += $?))
 		set_ctrl_state A1/A2/A3 $NEW_A3; ((RETVAL += $?))
 		set_ctrl_state B1 $NEW_B1; ((RETVAL += $?))

-		[[ $RETVAL -ne $RESULT ]] && {
-			echo "Test $TEST[$I] failed result check!"
-			eval echo \"\${$TEST[$I]}\"
-			dump_states
-			exit 1
-		}
+		[[ $RETVAL -ne $RESULT ]] && test_fail $I result

 		[[ -n "$ECPUS" && "$ECPUS" != . ]] && {
 			check_effective_cpus $ECPUS
-			[[ $? -ne 0 ]] && {
-				echo "Test $TEST[$I] failed effective CPU check!"
-				eval echo \"\${$TEST[$I]}\"
-				echo
-				dump_states
-				exit 1
-			}
+			[[ $? -ne 0 ]] && test_fail $I "effective CPU"
 		}

-		[[ -n "$STATES" ]] && {
+		[[ -n "$STATES" && "$STATES" != . ]] && {
 			check_cgroup_states $STATES
-			[[ $? -ne 0 ]] && {
-				echo "FAILED: Test $TEST[$I] failed states check!"
-				eval echo \"\${$TEST[$I]}\"
-				echo
-				dump_states
-				exit 1
-			}
+			[[ $? -ne 0 ]] && test_fail $I states
 		}

+		# Compare the expected isolated CPUs with the actual ones,
+		# if available
+		[[ -n "$ICPUS" ]] && {
+			check_isolcpus $ICPUS
+			[[ $? -ne 0 ]] && test_fail $I "isolated CPU" \
+				"Expect $ICPUS, get $ISOLCPUS instead"
+		}
 		reset_cgroup_states
 		#
 		# Check to see if effective cpu list changes
 		#
-		pause 0.05
 		NEWLIST=$(cat cpuset.cpus.effective)
+		[[ $NEWLIST != $CPULIST ]] && {
+			# Wait a bit longer & recheck
+			pause 0.05
+			NEWLIST=$(cat cpuset.cpus.effective)
+		}
 		[[ $NEWLIST != $CPULIST ]] && {
 			echo "Effective cpus changed to $NEWLIST after test $I!"
 			exit 1
 		}
-		[[ -n "$VERBOSE" ]] && echo "Test $I done."
+		[[ $VERBOSE -gt 0 ]] && echo "Test $I done."
 		((I++))
 	done
 	echo "All $I tests of $TEST PASSED."
-
-	echo member > cpuset.cpus.partition
 }

 #
@@ -642,6 +763,7 @@ test_inotify()
 {
 	ERR=0
 	PRS=/tmp/.prs_$$
+	cd $CGROUP2/test
 	[[ -f $WAIT_INOTIFY ]] || {
 		echo "wait_inotify not found, inotify test SKIPPED."
 		return
@@ -655,7 +777,7 @@ test_inotify()
 	rm -f $PRS
 	wait_inotify $PWD/cpuset.cpus.partition $PRS &
 	pause 0.01
-	set_ctrl_state . "O1-0"
+	set_ctrl_state . "O1=0"
 	pause 0.01
 	check_cgroup_states ".:P-1"
 	if [[ $? -ne 0 ]]
@@ -689,5 +811,3 @@ run_state_test TEST_MATRIX
 test_isolated
 test_inotify
 echo "All tests PASSED."
-cd ..
-rmdir test
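A note on the test matrix syntax, for reviewers who have not run the script: every old-*/new-* cell is a colon-separated list of commands, each a one-letter prefix plus an argument, which set_ctrl_state() applies to the corresponding cgroup directory. The standalone sketch below only prints the write each prefix would roughly perform; decode_token, the example paths, and the abstract handling of the P and T prefixes are illustrative assumptions, not part of the patch or of test_cpuset_prs.sh.

decode_token()
{
	# $1 = cgroup directory, $2 = one command token, e.g. "X2-3" or "O2=0"
	local CGRP=$1 CMD=$2 ARG=${2#?}
	case ${CMD:0:1} in
	S)	echo "echo ${ARG}cpuset > $CGRP/cgroup.subtree_control" ;;
	C)	echo "echo $ARG > $CGRP/cpuset.cpus" ;;
	X)	echo "echo $ARG > $CGRP/cpuset.cpus.exclusive" ;;
	P)	echo "write partition state $ARG to $CGRP/cpuset.cpus.partition" ;;
	T)	echo "echo \$\$ > $CGRP/cgroup.procs" ;;
	O)	echo "echo ${ARG#*=} > /sys/devices/system/cpu/cpu${ARG%=*}/online" ;;
	*)	echo "unknown command: $CMD" >&2; return 1 ;;
	esac
}
decode_token A1/A2/A3 X2-3	# echo 2-3 > A1/A2/A3/cpuset.cpus.exclusive
decode_token . O2=0		# echo 0 > /sys/devices/system/cpu/cpu2/online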
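The only non-obvious part of the new check_isolcpus() helper is how it compresses the isolated CPUs it finds under /sys/kernel/debug/sched/domains into a kernel-style cpu-list string such as "2-4,6" before comparing it with the expected ISOLCPUS column. The sketch below reimplements just that compression step in isolation; compress_cpulist is a hypothetical name used only for illustration and assumes its arguments are CPU numbers in ascending order.

compress_cpulist()
{
	# Turn a sorted list of CPU numbers into a cpu-list string, e.g.
	# "2 3 4 6" -> "2-4,6", mirroring what check_isolcpus() builds up.
	local OUT= START= LAST=
	for CPU in "$@"
	do
		if [[ -z "$LAST" ]]
		then
			OUT=$CPU START=$CPU		# first CPU seen
		elif [[ $CPU -eq $((LAST + 1)) ]]
		then
			:				# still inside a consecutive run
		else
			[[ $LAST -ne $START ]] && OUT=${OUT}-${LAST}
			OUT=${OUT},$CPU START=$CPU	# start a new run
		fi
		LAST=$CPU
	done
	[[ -n "$LAST" && $LAST -ne $START ]] && OUT=${OUT}-${LAST}
	echo "$OUT"
}
compress_cpulist 2 3 4 6	# prints 2-4,6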