From patchwork Thu Dec 6 18:18:18 2018
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 153051
From: Paolo Valente <paolo.valente@linaro.org>
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    ulf.hansson@linaro.org, linus.walleij@linaro.org, broonie@kernel.org,
    bfq-iosched@googlegroups.com, oleksandr@natalenko.name,
    federico@willer.it, Paolo Valente <paolo.valente@linaro.org>
Subject: [PATCH BUGFIX 1/2] block, bfq: fix decrement of num_active_groups
Date: Thu, 6 Dec 2018 19:18:18 +0100
Message-Id: <20181206181819.11148-2-paolo.valente@linaro.org>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20181206181819.11148-1-paolo.valente@linaro.org>
References: <20181206181819.11148-1-paolo.valente@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Since commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios
detection")', if there are process groups with I/O requests waiting for
completion, then BFQ tags the scenario as 'asymmetric'.
This detection is needed for preserving service guarantees (for
details, see comments on the computation of the variable
asymmetric_scenario in the function bfq_better_to_idle).

Unfortunately, commit '2d29c9f89fcd ("block, bfq: improve asymmetric
scenarios detection")' contains an error exactly in the updating of
the number of groups with I/O requests waiting for completion: if a
group has more than one descendant process, then the above number of
groups, which is renamed from num_active_groups to a more appropriate
num_groups_with_pending_reqs by this commit, may happen to be wrongly
decremented multiple times, namely every time one of the descendant
processes gets all its pending I/O requests completed.

A correct, complete solution should work as follows. Consider a group
that is inactive, i.e., that has no descendant process with pending
I/O inside BFQ queues. Then suppose that num_groups_with_pending_reqs
is still accounting for this group, because the group still has some
descendant process with some I/O request still in flight.
num_groups_with_pending_reqs should be decremented when the in-flight
request of the last descendant process is finally completed (assuming
that nothing else has changed for the group in the meantime, in terms
of composition of the group and active/inactive state of child groups
and processes). To accomplish this, an additional pending-request
counter must be added to entities, and must be updated correctly.

To avoid this additional field and operations, this commit resorts to
the following tradeoff between simplicity and accuracy: for an
inactive group that is still counted in num_groups_with_pending_reqs,
this commit decrements num_groups_with_pending_reqs when the first
descendant process of the group remains with no request waiting for
completion.

This simplified scheme provides a fix to the unbalanced decrements
introduced by 2d29c9f89fcd. Since this error was also caused by lack
of comments on this non-trivial issue, this commit also adds related
comments.

Fixes: 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")
Reported-by: Steven Barrett
Tested-by: Steven Barrett
Tested-by: Lucjan Lucjanov
Reviewed-by: Federico Motta
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 76 +++++++++++++++++++++++++++++++++++++----------------
 block/bfq-iosched.h | 51 +++++++++++++++++++++++++++++++++--
 block/bfq-wf2q.c    |  5 +++-
 3 files changed, 107 insertions(+), 25 deletions(-)

-- 
2.16.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 3a27d31fcda6..97337214bec4 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -638,7 +638,7 @@ static bool bfq_varied_queue_weights_or_active_groups(struct bfq_data *bfqd)
 		 bfqd->queue_weights_tree.rb_node->rb_right)
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 	       ) ||
-	       (bfqd->num_active_groups > 0
+	       (bfqd->num_groups_with_pending_reqs > 0
 #endif
 	       );
 }
@@ -802,7 +802,21 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			 */
 			break;
 		}
-		bfqd->num_active_groups--;
+
+		/*
+		 * The decrement of num_groups_with_pending_reqs is
+		 * not performed immediately upon the deactivation of
+		 * entity, but it is delayed to when it also happens
+		 * that the first leaf descendant bfqq of entity gets
+		 * all its pending requests completed. The following
+		 * instructions perform this delayed decrement, if
+		 * needed. See the comments on
+		 * num_groups_with_pending_reqs for details.
+		 */
+		if (entity->in_groups_with_pending_reqs) {
+			entity->in_groups_with_pending_reqs = false;
+			bfqd->num_groups_with_pending_reqs--;
+		}
 	}
 }
 
@@ -3529,27 +3543,44 @@ static bool bfq_better_to_idle(struct bfq_queue *bfqq)
 	 * fact, if there are active groups, then, for condition (i)
 	 * to become false, it is enough that an active group contains
 	 * more active processes or sub-groups than some other active
-	 * group. We address this issue with the following bi-modal
-	 * behavior, implemented in the function
+	 * group. More precisely, for condition (i) to hold because of
+	 * such a group, it is not even necessary that the group is
+	 * (still) active: it is sufficient that, even if the group
+	 * has become inactive, some of its descendant processes still
+	 * have some request already dispatched but still waiting for
+	 * completion. In fact, requests have still to be guaranteed
+	 * their share of the throughput even after being
+	 * dispatched. In this respect, it is easy to show that, if a
+	 * group frequently becomes inactive while still having
+	 * in-flight requests, and if, when this happens, the group is
+	 * not considered in the calculation of whether the scenario
+	 * is asymmetric, then the group may fail to be guaranteed its
+	 * fair share of the throughput (basically because idling may
+	 * not be performed for the descendant processes of the group,
+	 * but it had to be). We address this issue with the
+	 * following bi-modal behavior, implemented in the function
 	 * bfq_symmetric_scenario().
 	 *
-	 * If there are active groups, then the scenario is tagged as
+	 * If there are groups with requests waiting for completion
+	 * (as commented above, some of these groups may even be
+	 * already inactive), then the scenario is tagged as
 	 * asymmetric, conservatively, without checking any of the
 	 * conditions (i) and (ii). So the device is idled for bfqq.
 	 * This behavior matches also the fact that groups are created
-	 * exactly if controlling I/O (to preserve bandwidth and
-	 * latency guarantees) is a primary concern.
+	 * exactly if controlling I/O is a primary concern (to
+	 * preserve bandwidth and latency guarantees).
 	 *
-	 * On the opposite end, if there are no active groups, then
-	 * only condition (i) is actually controlled, i.e., provided
-	 * that condition (i) holds, idling is not performed,
-	 * regardless of whether condition (ii) holds. In other words,
-	 * only if condition (i) does not hold, then idling is
-	 * allowed, and the device tends to be prevented from queueing
-	 * many requests, possibly of several processes. Since there
-	 * are no active groups, then, to control condition (i) it is
-	 * enough to check whether all active queues have the same
-	 * weight.
+	 * On the opposite end, if there are no groups with requests
+	 * waiting for completion, then only condition (i) is actually
+	 * controlled, i.e., provided that condition (i) holds, idling
+	 * is not performed, regardless of whether condition (ii)
+	 * holds. In other words, only if condition (i) does not hold,
+	 * then idling is allowed, and the device tends to be
+	 * prevented from queueing many requests, possibly of several
+	 * processes. Since there are no groups with requests waiting
+	 * for completion, then, to control condition (i) it is enough
+	 * to check just whether all the queues with requests waiting
+	 * for completion also have the same weight.
 	 *
 	 * Not checking condition (ii) evidently exposes bfqq to the
 	 * risk of getting less throughput than its fair share.
@@ -3607,10 +3638,11 @@ static bool bfq_better_to_idle(struct bfq_queue *bfqq)
 	 * bfqq is weight-raised is checked explicitly here. More
 	 * precisely, the compound condition below takes into account
 	 * also the fact that, even if bfqq is being weight-raised,
-	 * the scenario is still symmetric if all active queues happen
-	 * to be weight-raised. Actually, we should be even more
-	 * precise here, and differentiate between interactive weight
-	 * raising and soft real-time weight raising.
+	 * the scenario is still symmetric if all queues with requests
+	 * waiting for completion happen to be
+	 * weight-raised. Actually, we should be even more precise
+	 * here, and differentiate between interactive weight raising
+	 * and soft real-time weight raising.
 	 *
 	 * As a side note, it is worth considering that the above
 	 * device-idling countermeasures may however fail in the
@@ -5417,7 +5449,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
 
 	bfqd->queue_weights_tree = RB_ROOT;
-	bfqd->num_active_groups = 0;
+	bfqd->num_groups_with_pending_reqs = 0;
 
 	INIT_LIST_HEAD(&bfqd->active_list);
 	INIT_LIST_HEAD(&bfqd->idle_list);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 77651d817ecd..0b02bf302de0 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -196,6 +196,9 @@ struct bfq_entity {
 
 	/* flag, set to request a weight, ioprio or ioprio_class change */
 	int prio_changed;
+
+	/* flag, set if the entity is counted in groups_with_pending_reqs */
+	bool in_groups_with_pending_reqs;
 };
 
 struct bfq_group;
@@ -448,10 +451,54 @@ struct bfq_data {
 	 * bfq_weights_tree_[add|remove] for further details).
 	 */
 	struct rb_root queue_weights_tree;
+
 	/*
-	 * number of groups with requests still waiting for completion
+	 * Number of groups with at least one descendant process that
+	 * has at least one request waiting for completion. Note that
+	 * this accounts for also requests already dispatched, but not
+	 * yet completed. Therefore this number of groups may differ
+	 * (be larger) than the number of active groups, as a group is
+	 * considered active only if its corresponding entity has
+	 * descendant queues with at least one request queued. This
+	 * number is used to decide whether a scenario is symmetric.
+	 * For a detailed explanation see comments on the computation
+	 * of the variable asymmetric_scenario in the function
+	 * bfq_better_to_idle().
+	 *
+	 * However, it is hard to compute this number exactly, for
+	 * groups with multiple descendant processes. Consider a group
+	 * that is inactive, i.e., that has no descendant process with
+	 * pending I/O inside BFQ queues. Then suppose that
+	 * num_groups_with_pending_reqs is still accounting for this
+	 * group, because the group has descendant processes with some
+	 * I/O request still in flight. num_groups_with_pending_reqs
+	 * should be decremented when the in-flight request of the
+	 * last descendant process is finally completed (assuming that
+	 * nothing else has changed for the group in the meantime, in
+	 * terms of composition of the group and active/inactive state of child
+	 * groups and processes). To accomplish this, an additional
+	 * pending-request counter must be added to entities, and must
+	 * be updated correctly. To avoid this additional field and operations,
+	 * we resort to the following tradeoff between simplicity and
+	 * accuracy: for an inactive group that is still counted in
+	 * num_groups_with_pending_reqs, we decrement
+	 * num_groups_with_pending_reqs when the first descendant
+	 * process of the group remains with no request waiting for
+	 * completion.
+	 *
+	 * Even this simpler decrement strategy requires a little
+	 * carefulness: to avoid multiple decrements, we flag a group,
+	 * more precisely an entity representing a group, as still
+	 * counted in num_groups_with_pending_reqs when it becomes
+	 * inactive. Then, when the first descendant queue of the
+	 * entity remains with no request waiting for completion,
+	 * num_groups_with_pending_reqs is decremented, and this flag
+	 * is reset. After this flag is reset for the entity,
+	 * num_groups_with_pending_reqs won't be decremented any
+	 * longer in case a new descendant queue of the entity remains
+	 * with no request waiting for completion.
 	 */
-	unsigned int num_active_groups;
+	unsigned int num_groups_with_pending_reqs;
 
 	/*
 	 * Number of bfq_queues containing requests (including the
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 4b0d5fb69160..63e0f12be7c9 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1012,7 +1012,10 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 			container_of(entity, struct bfq_group, entity);
 		struct bfq_data *bfqd = bfqg->bfqd;
 
-		bfqd->num_active_groups++;
+		if (!entity->in_groups_with_pending_reqs) {
+			entity->in_groups_with_pending_reqs = true;
+			bfqd->num_groups_with_pending_reqs++;
+		}
 	}
 #endif
 
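As a reading aid between the two patches: the following minimal user-space
sketch models the flag-based accounting that the patch above introduces. It
is not part of the patch and not BFQ code; toy_bfqd, toy_entity,
toy_activate_group() and toy_queue_has_no_pending_reqs() are invented
stand-ins for struct bfq_data, struct bfq_entity, __bfq_activate_entity()
and bfq_weights_tree_remove(). It only illustrates why the per-entity flag
makes the decrement of num_groups_with_pending_reqs happen at most once per
group, no matter how many descendant processes drain their pending I/O.

#include <stdbool.h>
#include <stdio.h>

struct toy_entity {                     /* models a group entity */
	bool in_groups_with_pending_reqs;
};

struct toy_bfqd {                       /* models struct bfq_data */
	unsigned int num_groups_with_pending_reqs;
};

/* A queue of the group gets pending I/O (models the activation path). */
static void toy_activate_group(struct toy_bfqd *bfqd, struct toy_entity *group)
{
	if (!group->in_groups_with_pending_reqs) {
		group->in_groups_with_pending_reqs = true;
		bfqd->num_groups_with_pending_reqs++;
	}
}

/*
 * A descendant queue of the group remains with no request waiting for
 * completion (models the removal path). The flag guarantees at most one
 * decrement per group; without it, a group with several descendant
 * processes would be decremented once per process.
 */
static void toy_queue_has_no_pending_reqs(struct toy_bfqd *bfqd,
					  struct toy_entity *group)
{
	if (group->in_groups_with_pending_reqs) {
		group->in_groups_with_pending_reqs = false;
		bfqd->num_groups_with_pending_reqs--;
	}
}

int main(void)
{
	struct toy_bfqd bfqd = { .num_groups_with_pending_reqs = 0 };
	struct toy_entity group = { .in_groups_with_pending_reqs = false };

	toy_activate_group(&bfqd, &group);      /* counter: 0 -> 1 */

	/* Three descendant processes complete all their I/O, one by one. */
	for (int i = 0; i < 3; i++)
		toy_queue_has_no_pending_reqs(&bfqd, &group);

	/* Prints 0; the old scheme would have decremented three times. */
	printf("num_groups_with_pending_reqs = %u\n",
	       bfqd.num_groups_with_pending_reqs);
	return 0;
}

The same flag discipline is what the hunks in __bfq_activate_entity and
bfq_weights_tree_remove above implement.
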
From patchwork Thu Dec 6 18:18:19 2018
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 153052
From: Paolo Valente <paolo.valente@linaro.org>
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    ulf.hansson@linaro.org, linus.walleij@linaro.org, broonie@kernel.org,
    bfq-iosched@googlegroups.com, oleksandr@natalenko.name,
    federico@willer.it, Paolo Valente <paolo.valente@linaro.org>
Subject: [PATCH BUGFIX 2/2] block, bfq: fix comments on __bfq_deactivate_entity
Date: Thu, 6 Dec 2018 19:18:19 +0100
Message-Id: <20181206181819.11148-3-paolo.valente@linaro.org>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20181206181819.11148-1-paolo.valente@linaro.org>
References: <20181206181819.11148-1-paolo.valente@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The comments on the function __bfq_deactivate_entity contain two
imprecise or wrong statements:
1) The function performs the deactivation of the entity.
2) The function must be invoked only if the entity is on a service
   tree.
This commit replaces both statements with the correct ones:
1) The function updates sched_data and service trees for the entity,
   so as to represent entity as inactive (which is only part of the
   steps needed for the deactivation of the entity).
2) The function must be invoked on every entity being deactivated.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-wf2q.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

-- 
2.16.1

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 63e0f12be7c9..72adbbe975d5 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1154,15 +1154,14 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 }
 
 /**
- * __bfq_deactivate_entity - deactivate an entity from its service tree.
- * @entity: the entity to deactivate.
+ * __bfq_deactivate_entity - update sched_data and service trees for
+ * entity, so as to represent entity as inactive
+ * @entity: the entity being deactivated.
 * @ins_into_idle_tree: if false, the entity will not be put into the
 * idle tree.
 *
- * Deactivates an entity, independently of its previous state. Must
- * be invoked only if entity is on a service tree. Extracts the entity
- * from that tree, and if necessary and allowed, puts it into the idle
- * tree.
+ * If necessary and allowed, puts entity into the idle tree. NOTE:
+ * entity may be on no tree if in service.
 */
 bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
 {
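For completeness, a small user-space sketch of the contract that the
corrected comments describe: the deactivation helper is called for every
entity being deactivated, including an in-service entity that sits on no
tree, and it only updates the (toy) tree state, optionally parking the
entity on the idle tree. All names here (enum toy_tree, struct toy_entity,
toy_deactivate_entity) are invented for illustration and are not the BFQ
implementation; the boolean return value is part of the toy model only.

#include <stdbool.h>
#include <stdio.h>

enum toy_tree { TOY_NO_TREE, TOY_ACTIVE_TREE, TOY_IDLE_TREE };

struct toy_entity {
	enum toy_tree tree;     /* which service tree, if any, holds the entity */
	bool in_service;        /* an in-service entity sits on no tree */
};

/*
 * Toy deactivation: callable for every entity being deactivated, even
 * one that is in service and therefore on no tree. It only updates the
 * tree state; the rest of a real deactivation is out of scope here.
 */
static bool toy_deactivate_entity(struct toy_entity *e, bool ins_into_idle_tree)
{
	bool was_active = e->in_service || e->tree == TOY_ACTIVE_TREE;

	e->in_service = false;
	e->tree = ins_into_idle_tree ? TOY_IDLE_TREE : TOY_NO_TREE;
	return was_active;
}

int main(void)
{
	/* An in-service entity: being deactivated although on no tree. */
	struct toy_entity e = { .tree = TOY_NO_TREE, .in_service = true };

	printf("was active: %d, now on tree %d\n",
	       toy_deactivate_entity(&e, true), (int)e.tree);
	return 0;
}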