From patchwork Fri Jun 19 14:32:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 224138
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Hocko, Coly Li,
	Song Liu, Sasha Levin
Subject: [PATCH 5.4 113/261] raid5: remove gfp flags from scribble_alloc()
Date: Fri, 19 Jun 2020 16:32:04 +0200
Message-Id: <20200619141655.282920713@linuxfoundation.org>
In-Reply-To: <20200619141649.878808811@linuxfoundation.org>
References: <20200619141649.878808811@linuxfoundation.org>
X-Mailer: git-send-email 2.27.0
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Coly Li

[ Upstream commit ba54d4d4d2844c234f1b4692bd8c9e0f833c8a54 ]

Calling scribble_alloc() with the GFP_NOIO flag from resize_chunks() does
not have the intended effect.  kvmalloc_array() inside scribble_alloc(),
which receives the GFP_NOIO flag, eventually calls kmalloc_node() to
allocate physically contiguous pages.  Now that mddev_suspend()/
mddev_resume() use the memalloc scope APIs to prevent memory-reclaim I/O
while the raid array is suspended, calling kvmalloc_array() with the
GFP_KERNEL flag avoids the recursive-I/O deadlock as intended.
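
To make the scope-API reasoning above concrete, here is a minimal sketch of
the pattern (the example_* helpers and the static saved_flags are
illustrative placeholders, not the actual mddev_suspend()/mddev_resume()
code):

#include <linux/sched/mm.h>	/* memalloc_noio_save(), memalloc_noio_restore() */
#include <linux/mm.h>		/* kvmalloc_array() */

static unsigned int saved_flags;	/* illustrative only; real code keeps this per array */

static void example_suspend(void)
{
	/* From here on, reclaim performed by this task must not issue I/O. */
	saved_flags = memalloc_noio_save();
}

static void example_resume(void)
{
	memalloc_noio_restore(saved_flags);
}

static void *example_alloc(size_t cnt, size_t obj_size)
{
	/*
	 * GFP_KERNEL is safe even when called inside the suspend scope
	 * above: the noio scope makes the allocator drop __GFP_IO, so this
	 * behaves like a GFP_NOIO allocation without hard-coding the flag.
	 */
	return kvmalloc_array(cnt, obj_size, GFP_KERNEL);
}

In the real md code the saved flag word is of course kept with the array
being suspended rather than in a file-scope variable.
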
This patch removes the now-useless gfp_t argument from scribble_alloc()'s
parameter list and calls kvmalloc_array() with the GFP_KERNEL flag.  The
incorrect GFP_NOIO usage is gone.

Fixes: b330e6a49dc3 ("md: convert to kvmalloc")
Suggested-by: Michal Hocko
Signed-off-by: Coly Li
Signed-off-by: Song Liu
Signed-off-by: Sasha Levin
---
 drivers/md/raid5.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 36cd7c2fbf40..a3cbc9f4fec1 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2228,14 +2228,19 @@ static int grow_stripes(struct r5conf *conf, int num)
  * of the P and Q blocks.
  */
 static int scribble_alloc(struct raid5_percpu *percpu,
-			  int num, int cnt, gfp_t flags)
+			  int num, int cnt)
 {
 	size_t obj_size =
 		sizeof(struct page *) * (num+2) +
 		sizeof(addr_conv_t) * (num+2);
 	void *scribble;
 
-	scribble = kvmalloc_array(cnt, obj_size, flags);
+	/*
+	 * If here is in raid array suspend context, it is in memalloc noio
+	 * context as well, there is no potential recursive memory reclaim
+	 * I/Os with the GFP_KERNEL flag.
+	 */
+	scribble = kvmalloc_array(cnt, obj_size, GFP_KERNEL);
 	if (!scribble)
 		return -ENOMEM;
 
@@ -2267,8 +2272,7 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
 
 		percpu = per_cpu_ptr(conf->percpu, cpu);
 		err = scribble_alloc(percpu, new_disks,
-				     new_sectors / STRIPE_SECTORS,
-				     GFP_NOIO);
+				     new_sectors / STRIPE_SECTORS);
 		if (err)
 			break;
 	}
@@ -6765,8 +6769,7 @@ static int alloc_scratch_buffer(struct r5conf *conf, struct raid5_percpu *percpu
 			       conf->previous_raid_disks),
 			   max(conf->chunk_sectors,
 			       conf->prev_chunk_sectors)
-			   / STRIPE_SECTORS,
-			   GFP_KERNEL)) {
+			   / STRIPE_SECTORS)) {
 		free_scratch_buffer(conf, percpu);
 		return -ENOMEM;
 	}