From patchwork Mon Jan 4 15:56:57 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356751
Subject: [PATCH 5.10 04/63] ethtool: fix string set id check
Date: Mon, 4 Jan 2021 16:56:57 +0100
Message-Id: <20210104155709.020311198@linuxfoundation.org>

From: Michal Kubecek

[ Upstream commit efb796f5571f030743e1d9c662cdebdad724f8c5 ]

Syzbot reported a shift of a u32 by more than 31 in strset_parse_request(),
which is undefined behavior. This is caused by the range check of the string
set id using the variable ret (which is always 0 at this point) instead of id
(the string set id from the request).
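To make the undefined behaviour concrete: the string set id from the request
is later used as a shift count for a 32-bit bitmap, so an unchecked id of 32
or more shifts a u32 out of range. Below is a small stand-alone C sketch of
the broken check, not the kernel code; the ETH_SS_COUNT value and the bitmap
use are simplified stand-ins for what strset_parse_request() does.

#include <stdint.h>
#include <stdio.h>

#define ETH_SS_COUNT 16			/* stand-in for the real enum size */

static int parse_set_id_buggy(uint32_t id, uint32_t *req_ids)
{
	int ret = 0;			/* previous helper returned success */

	if (ret >= ETH_SS_COUNT)	/* bug: tests ret, never true, id unchecked */
		return -1;
	*req_ids |= 1U << id;		/* undefined behaviour once id >= 32 */
	return 0;
}

int main(void)
{
	uint32_t req_ids = 0;

	parse_set_id_buggy(40, &req_ids);	/* syzkaller-style out-of-range id */
	printf("req_ids = %#x\n", req_ids);
	return 0;
}

Checking id >= ETH_SS_COUNT, as the hunk below does, rejects the request
before any shift can happen.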
Fixes: 71921690f974 ("ethtool: provide string sets with STRSET_GET request")
Reported-by: syzbot+96523fb438937cd01220@syzkaller.appspotmail.com
Signed-off-by: Michal Kubecek
Link: https://lore.kernel.org/r/b54ed5c5fd972a59afea3e1badfb36d86df68799.1607952208.git.mkubecek@suse.cz
Signed-off-by: Jakub Kicinski
Signed-off-by: Greg Kroah-Hartman
---
 net/ethtool/strset.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/ethtool/strset.c
+++ b/net/ethtool/strset.c
@@ -182,7 +182,7 @@ static int strset_parse_request(struct e
 		ret = strset_get_id(attr, &id, extack);
 		if (ret < 0)
 			return ret;
-		if (ret >= ETH_SS_COUNT) {
+		if (id >= ETH_SS_COUNT) {
 			NL_SET_ERR_MSG_ATTR(extack, attr, "unknown string set id");
 			return -EOPNOTSUPP;

From patchwork Mon Jan 4 15:57:01 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356754
Subject: [PATCH 5.10 08/63] jffs2: Allow setting rp_size to zero during remounting
Date: Mon, 4 Jan 2021 16:57:01 +0100
Message-Id: <20210104155709.212682044@linuxfoundation.org>

From: lizhe

[ Upstream commit cd3ed3c73ac671ff6b0230ccb72b8300292d3643 ]

Setting rp_size to zero is ignored during remounting. The only way the code
identifies whether an rp_size remount option was given is to check whether
the parsed rp_size value is non-zero, which cannot work when "rp_size=0" is
passed. Add a bool variable "set_rp_size" that records whether the option
was present to fix this problem.
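A stand-alone sketch of why a presence flag is needed (not the jffs2 code;
the struct and values are made up): when "was the option given?" is inferred
from "is the value non-zero?", an explicit rp_size=0 on remount is
indistinguishable from not passing rp_size at all, so the old reserve pool
size is silently kept.

#include <stdbool.h>
#include <stdio.h>

struct mount_opts {
	bool set_rp_size;	/* was rp_size= present in the options? */
	unsigned int rp_size;
};

static void update_mount_opts(struct mount_opts *cur, const struct mount_opts *new)
{
	/* old check was "if (new->rp_size)": an explicit rp_size=0 was dropped here */
	if (new->set_rp_size) {
		cur->set_rp_size = new->set_rp_size;
		cur->rp_size = new->rp_size;
	}
}

int main(void)
{
	struct mount_opts cur = { .set_rp_size = true, .rp_size = 4096 };
	struct mount_opts remount = { .set_rp_size = true, .rp_size = 0 };

	update_mount_opts(&cur, &remount);
	printf("rp_size after remount: %u\n", cur.rp_size);	/* 0, as requested */
	return 0;
}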
Reported-by: Jubin Zhong
Signed-off-by: lizhe
Signed-off-by: Richard Weinberger
Signed-off-by: Sasha Levin
---
 fs/jffs2/jffs2_fs_sb.h | 1 +
 fs/jffs2/super.c       | 7 +++++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h
index 778275f48a879..5a7091746f68b 100644
--- a/fs/jffs2/jffs2_fs_sb.h
+++ b/fs/jffs2/jffs2_fs_sb.h
@@ -38,6 +38,7 @@ struct jffs2_mount_opts {
 	 * users. This is implemented simply by means of not allowing the
 	 * latter users to write to the file system if the amount if the
 	 * available space is less then 'rp_size'. */
+	bool set_rp_size;
 	unsigned int rp_size;
 };
 
diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
index 4fd297bdf0f3f..c523adaca79f3 100644
--- a/fs/jffs2/super.c
+++ b/fs/jffs2/super.c
@@ -88,7 +88,7 @@ static int jffs2_show_options(struct seq_file *s, struct dentry *root)
 	if (opts->override_compr)
 		seq_printf(s, ",compr=%s", jffs2_compr_name(opts->compr));
-	if (opts->rp_size)
+	if (opts->set_rp_size)
 		seq_printf(s, ",rp_size=%u", opts->rp_size / 1024);
 
 	return 0;
@@ -206,6 +206,7 @@ static int jffs2_parse_param(struct fs_context *fc, struct fs_parameter *param)
 		if (opt > c->mtd->size)
 			return invalf(fc, "jffs2: Too large reserve pool specified, max is %llu KB",
 				      c->mtd->size / 1024);
+		c->mount_opts.set_rp_size = true;
 		c->mount_opts.rp_size = opt;
 		break;
 	default:
@@ -225,8 +226,10 @@ static inline void jffs2_update_mount_opts(struct fs_context *fc)
 		c->mount_opts.override_compr = new_c->mount_opts.override_compr;
 		c->mount_opts.compr = new_c->mount_opts.compr;
 	}
-	if (new_c->mount_opts.rp_size)
+	if (new_c->mount_opts.set_rp_size) {
+		c->mount_opts.set_rp_size = new_c->mount_opts.set_rp_size;
 		c->mount_opts.rp_size = new_c->mount_opts.rp_size;
+	}
 	mutex_unlock(&c->alloc_sem);
 }

From patchwork Mon Jan 4 15:57:03 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356741
Subject: [PATCH 5.10 10/63] spi: dw-bt1: Fix undefined devm_mux_control_get symbol
Date: Mon, 4 Jan 2021 16:57:03 +0100
Message-Id: <20210104155709.309245084@linuxfoundation.org>

From: Serge Semin

commit 7218838109fef61cdec988ff728e902d434c9cc5 upstream.

I mistakenly added the select attributes to the SPI_DW_BT1_DIRMAP config
instead of having them defined in SPI_DW_BT1. If the kernel doesn't have
the MULTIPLEXER and MUX_MMIO configs manually enabled and the
SPI_DW_BT1_DIRMAP config hasn't been selected, the Baikal-T1 SPI device
will always fail to be probed by the driver. Fix that and the error
reported by the test robot:

   >> ld.lld: error: undefined symbol: devm_mux_control_get
   >>> referenced by spi-dw-bt1.c
   >>> spi/spi-dw-bt1.o:(dw_spi_bt1_sys_init) in archive drivers/built-in.a

by moving the MULTIPLEXER/MUX_MMIO configs selection to the SPI_DW_BT1
config.

Link: https://lore.kernel.org/lkml/202011161745.uYRlekse-lkp@intel.com/
Link: https://lore.kernel.org/linux-spi/20201116040721.8001-1-rdunlap@infradead.org/
Fixes: abf00907538e ("spi: dw: Add Baikal-T1 SPI Controller glue driver")
Reported-by: kernel test robot
Signed-off-by: Serge Semin
Cc: Randy Dunlap
Cc: Ramil Zaripov
Link: https://lore.kernel.org/r/20201127144612.4204-1-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Mark Brown
Signed-off-by: Greg Kroah-Hartman
---
 drivers/spi/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/spi/Kconfig
+++ b/drivers/spi/Kconfig
@@ -256,6 +256,7 @@ config SPI_DW_BT1
 	tristate "Baikal-T1 SPI driver for DW SPI core"
 	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
 	select MULTIPLEXER
+	select MUX_MMIO
 	help
 	  Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
 	  controllers. Two of them are pretty much normal: with IRQ, DMA,
@@ -269,8 +270,6 @@ config SPI_DW_BT1
 config SPI_DW_BT1_DIRMAP
 	bool "Directly mapped Baikal-T1 Boot SPI flash support"
 	depends on SPI_DW_BT1
-	select MULTIPLEXER
-	select MUX_MMIO
 	help
 	  Directly mapped SPI flash memory is an interface specific to the
 	  Baikal-T1 System Boot Controller. It is a 16MB MMIO region, which
From patchwork Mon Jan 4 15:57:04 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356774
Subject: [PATCH 5.10 11/63] opp: fix memory leak in _allocate_opp_table
Date: Mon, 4 Jan 2021 16:57:04 +0100
Message-Id: <20210104155709.357491870@linuxfoundation.org>

From: Quanyang Wang

commit 976509bb310b913d30577f15b58bdd30effb0542 upstream.

In function _allocate_opp_table, opp_dev is allocated and referenced by
opp_table via _add_opp_dev. But in the case that the subsequent calls return
-EPROBE_DEFER, the code jumps to the err label and opp_table is freed. Then
opp_dev becomes an unreferenced object, causing a memory leak. So let's call
_remove_opp_dev to do the cleanup.

This fixes the following kmemleak report:

unreferenced object 0xffff000801524a00 (size 128):
  comm "swapper/0", pid 1, jiffies 4294892465 (age 84.616s)
  hex dump (first 32 bytes):
    40 00 56 01 08 00 ff ff 40 00 56 01 08 00 ff ff  @.V.....@.V.....
    b8 52 77 7f 08 00 ff ff 00 3c 4c 00 08 00 ff ff  .Rw......<L.....
  backtrace:
    kmemleak_alloc+0x30/0x40
    [<0000000056da48f0>] kmem_cache_alloc+0x3d4/0x588
    [<00000000a84b3b0e>] _add_opp_dev+0x2c/0x88
    [<0000000062a380cd>] _add_opp_table_indexed+0x124/0x268
    [<000000008b4c8f1f>] dev_pm_opp_of_add_table+0x20/0x1d8
    [<00000000e5316798>] dev_pm_opp_of_cpumask_add_table+0x48/0xf0
    [<00000000db0a8ec2>] dt_cpufreq_probe+0x20c/0x448
    [<0000000030a3a26c>] platform_probe+0x68/0xd8
    [<00000000c618e78d>] really_probe+0xd0/0x3a0
    [<00000000642e856f>] driver_probe_device+0x58/0xb8
    [<00000000f10f5307>] device_driver_attach+0x74/0x80
    [<0000000004f254b8>] __driver_attach+0x58/0xe0
    [<0000000009d5d19e>] bus_for_each_dev+0x70/0xc8
    [<0000000000d22e1c>] driver_attach+0x24/0x30
    [<0000000001d4e952>] bus_add_driver+0x14c/0x1f0
    [<0000000089928aaa>] driver_register+0x64/0x120

Cc: v5.10 # v5.10
Fixes: dd461cd9183f ("opp: Allow dev_pm_opp_get_opp_table() to return -EPROBE_DEFER")
Signed-off-by: Quanyang Wang
[ Viresh: Added the stable tag ]
Signed-off-by: Viresh Kumar
Signed-off-by: Greg Kroah-Hartman
---
 drivers/opp/core.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -1102,7 +1102,7 @@ static struct opp_table *_allocate_opp_t
 	if (IS_ERR(opp_table->clk)) {
 		ret = PTR_ERR(opp_table->clk);
 		if (ret == -EPROBE_DEFER)
-			goto err;
+			goto remove_opp_dev;
 
 		dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
 	}
@@ -1111,7 +1111,7 @@ static struct opp_table *_allocate_opp_t
 	ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
 	if (ret) {
 		if (ret == -EPROBE_DEFER)
-			goto err;
+			goto remove_opp_dev;
 
 		dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
 			 __func__, ret);
@@ -1125,6 +1125,8 @@ static struct opp_table *_allocate_opp_t
 	list_add(&opp_table->node, &opp_tables);
 
 	return opp_table;
 
+remove_opp_dev:
+	_remove_opp_dev(opp_dev, opp_table);
 err:
 	kfree(opp_table);
 	return ERR_PTR(ret);

From patchwork Mon Jan 4 15:57:05 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356399
Subject: [PATCH 5.10 12/63] opp: Call the missing clk_put() on error
Date: Mon, 4 Jan 2021 16:57:05 +0100
Message-Id: <20210104155709.406778571@linuxfoundation.org>

From: Viresh Kumar

commit 0e1d9ca1766f5d95fb881f57b6c4a1ffa63d4648 upstream.

Fix the clock reference counting by calling the missing clk_put() in the
error path.
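This fix and the previous one (patch 11/63) repair the same unwind ladder in
_allocate_opp_table(): resources are released in the reverse order of
acquisition, and each failure jumps to the label that undoes everything
acquired so far. Before the two fixes, the early -EPROBE_DEFER failures
jumped straight to err, leaking the opp_dev reference and the clock. A
self-contained sketch of the idiom follows; the names, failure points and
the resources themselves are made up, and the real function's
-EPROBE_DEFER-only handling is omitted.

#include <stdio.h>
#include <stdlib.h>

struct table { void *opp_dev; void *clk; };

static void *acquire(const char *what, int fail)
{
	if (fail)
		return NULL;
	printf("acquired %s\n", what);
	return malloc(1);
}

static void release(const char *what, void *p)
{
	printf("released %s\n", what);
	free(p);
}

static struct table *allocate_table(int fail_at)
{
	struct table *t = calloc(1, sizeof(*t));

	if (!t)
		return NULL;
	t->opp_dev = acquire("opp_dev", fail_at == 1);	/* cf. _add_opp_dev() */
	if (!t->opp_dev)
		goto err;
	t->clk = acquire("clk", fail_at == 2);		/* cf. clk_get() */
	if (!t->clk)
		goto remove_opp_dev;
	if (fail_at == 3)				/* cf. icc path lookup */
		goto put_clk;
	return t;

put_clk:			/* undo the clock first ...       */
	release("clk", t->clk);
remove_opp_dev:			/* ... then the opp_dev reference */
	release("opp_dev", t->opp_dev);
err:				/* the table itself is freed last */
	free(t);
	return NULL;
}

int main(void)
{
	free(allocate_table(3));	/* fail late: everything is unwound */
	return 0;
}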
Cc: v5.10 # v5.10
Fixes: dd461cd9183f ("opp: Allow dev_pm_opp_get_opp_table() to return -EPROBE_DEFER")
Signed-off-by: Viresh Kumar
Signed-off-by: Greg Kroah-Hartman
---
 drivers/opp/core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -1111,7 +1111,7 @@ static struct opp_table *_allocate_opp_t
 	ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
 	if (ret) {
 		if (ret == -EPROBE_DEFER)
-			goto remove_opp_dev;
+			goto put_clk;
 
 		dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
 			 __func__, ret);
@@ -1125,6 +1125,9 @@ static struct opp_table *_allocate_opp_t
 	list_add(&opp_table->node, &opp_tables);
 
 	return opp_table;
 
+put_clk:
+	if (!IS_ERR(opp_table->clk))
+		clk_put(opp_table->clk);
 remove_opp_dev:
 	_remove_opp_dev(opp_dev, opp_table);
 err:

From patchwork Mon Jan 4 15:57:06 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356743
Subject: [PATCH 5.10 13/63] scsi: block: Fix a race in the runtime power management code
Date: Mon, 4 Jan 2021 16:57:06 +0100
Message-Id: <20210104155709.457411783@linuxfoundation.org>

From: Bart Van Assche

commit fa4d0f1992a96f6d7c988ef423e3127e613f6ac9 upstream.

With the current implementation the following race can happen:

 * blk_pre_runtime_suspend() calls blk_freeze_queue_start() and
   blk_mq_unfreeze_queue().
 * blk_queue_enter() calls blk_queue_pm_only() and that function returns
   true.
 * blk_queue_enter() calls blk_pm_request_resume() and that function does
   not call pm_request_resume() because the queue runtime status is
   RPM_ACTIVE.
 * blk_pre_runtime_suspend() changes the queue status into RPM_SUSPENDING.

Fix this race by changing the queue runtime status into RPM_SUSPENDING
before switching q_usage_counter to atomic mode.

Link: https://lore.kernel.org/r/20201209052951.16136-2-bvanassche@acm.org
Fixes: 986d413b7c15 ("blk-mq: Enable support for runtime power management")
Cc: Ming Lei
Cc: Rafael J. Wysocki
Cc: stable
Reviewed-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
Reviewed-by: Jens Axboe
Acked-by: Alan Stern
Acked-by: Stanley Chu
Co-developed-by: Can Guo
Signed-off-by: Can Guo
Signed-off-by: Bart Van Assche
Signed-off-by: Martin K. Petersen
Signed-off-by: Greg Kroah-Hartman
---
 block/blk-pm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -67,6 +67,10 @@ int blk_pre_runtime_suspend(struct reque
 
 	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
 
+	spin_lock_irq(&q->queue_lock);
+	q->rpm_status = RPM_SUSPENDING;
+	spin_unlock_irq(&q->queue_lock);
+
 	/*
 	 * Increase the pm_only counter before checking whether any
 	 * non-PM blk_queue_enter() calls are in progress to avoid that any
@@ -89,15 +93,14 @@ int blk_pre_runtime_suspend(struct reque
 	/* Switch q_usage_counter back to per-cpu mode. */
 	blk_mq_unfreeze_queue(q);
 
-	spin_lock_irq(&q->queue_lock);
-	if (ret < 0)
+	if (ret < 0) {
+		spin_lock_irq(&q->queue_lock);
+		q->rpm_status = RPM_ACTIVE;
 		pm_runtime_mark_last_busy(q->dev);
-	else
-		q->rpm_status = RPM_SUSPENDING;
-	spin_unlock_irq(&q->queue_lock);
+		spin_unlock_irq(&q->queue_lock);
 
-	if (ret)
 		blk_clear_pm_only(q);
+	}
 
 	return ret;
 }
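A rough sequencing sketch of the fixed ordering (not the block layer code;
the atomics stand in for q->rpm_status, the pm-only counter and
blk_queue_enter()'s resume decision): publishing RPM_SUSPENDING before the
queue becomes pm-only means any submitter that observes pm-only also
observes a non-active status and asks for a resume, instead of falling into
the window described in the bullets above.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum rpm_status { RPM_ACTIVE, RPM_SUSPENDING };

static _Atomic int rpm_status = RPM_ACTIVE;
static _Atomic int pm_only;
static _Atomic int resume_requested;

/* roughly what blk_queue_enter() decides for a non-PM request */
static void *submitter(void *arg)
{
	(void)arg;
	if (atomic_load(&pm_only) &&
	    atomic_load(&rpm_status) != RPM_ACTIVE)
		atomic_store(&resume_requested, 1);	/* pm_request_resume() */
	return NULL;
}

int main(void)
{
	pthread_t t;

	/* fixed order: publish the status first, then make the queue pm-only */
	atomic_store(&rpm_status, RPM_SUSPENDING);
	atomic_store(&pm_only, 1);

	pthread_create(&t, NULL, submitter, NULL);
	pthread_join(t, NULL);
	printf("resume requested: %d\n", atomic_load(&resume_requested));
	return 0;
}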
From patchwork Mon Jan 4 15:57:08 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356748
Subject: [PATCH 5.10 15/63] mm: memmap defer init doesn't work as expected
Date: Mon, 4 Jan 2021 16:57:08 +0100
Message-Id: <20210104155709.555108034@linuxfoundation.org>

From: Baoquan He

commit dc2da7b45ffe954a0090f5d0310ed7b0b37d2bd2 upstream.

VMware observed a performance regression during memmap init on their
platform, and bisected it to commit 73a6e474cb376 ("mm: memmap_init:
iterate over memblock regions rather that check each PFN").

Before the commit:

  [0.033176] Normal zone: 1445888 pages used for memmap
  [0.033176] Normal zone: 89391104 pages, LIFO batch:63
  [0.035851] ACPI: PM-Timer IO Port: 0x448

With the commit:

  [0.026874] Normal zone: 1445888 pages used for memmap
  [0.026875] Normal zone: 89391104 pages, LIFO batch:63
  [2.028450] ACPI: PM-Timer IO Port: 0x448

The root cause is that the current memmap defer init doesn't work as
expected.

Before, memmap_init_zone() was used to do memmap init of one whole zone, to
initialize all low zones of one numa node, but defer memmap init of the
last zone in that numa node. However, since commit 73a6e474cb376, function
memmap_init() is adapted to iterate over memblock regions inside one zone,
then call memmap_init_zone() to do memmap init for each region.

E.g. on VMware's system the memory layout is as below; there are two memory
regions in node 2. The current code will mistakenly initialize the whole
1st region [mem 0xab00000000-0xfcffffffff], then defer memmap init so that
only one memory section is initialized on the 2nd region
[mem 0x10000000000-0x1033fffffff]. In fact, we only expect to see one
memory section's memmap initialized in total. That is why so much more time
is spent there.

[    0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
[    0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
[    0.008843] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x55ffffffff]
[    0.008844] ACPI: SRAT: Node 1 PXM 1 [mem 0x5600000000-0xaaffffffff]
[    0.008844] ACPI: SRAT: Node 2 PXM 2 [mem 0xab00000000-0xfcffffffff]
[    0.008845] ACPI: SRAT: Node 2 PXM 2 [mem 0x10000000000-0x1033fffffff]

Now, let's add a parameter 'zone_end_pfn' to memmap_init_zone() to pass
down the real zone end pfn, so that defer_init() can use it to judge
whether deferred init should be applied zone-wide.
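A condensed sketch of the decision that regressed (not the kernel code;
numbers are scaled down and the per-node section bookkeeping is reduced to a
counter): deferred struct-page init is only considered for the span that
reaches the end of the node/zone, so passing an individual memblock region's
end makes every region except the last one ineligible, and those regions get
initialized in full at boot.

#include <stdbool.h>
#include <stdio.h>

/* one "section" of pages is initialized eagerly, the rest is deferred */
#define SECTION_PAGES 8UL

/*
 * Roughly defer_init(): deferral is only allowed for the span that reaches
 * the end of the node, and then only after one section has been set up.
 */
static bool defer_init(unsigned long nr_done, unsigned long end_pfn,
		       unsigned long node_end_pfn)
{
	if (end_pfn < node_end_pfn)
		return false;
	return nr_done > SECTION_PAGES;
}

static unsigned long init_region(unsigned long start, unsigned long end,
				 unsigned long passed_end, unsigned long node_end)
{
	unsigned long pfn, done = 0;

	for (pfn = start; pfn < end; pfn++) {
		if (defer_init(done, passed_end, node_end))
			break;		/* leave the rest for deferred init */
		done++;			/* "initialize" this struct page */
	}
	return done;
}

int main(void)
{
	/* node 2 analogue: one big region followed by a small one */
	unsigned long node_end = 2000, r1_end = 1500;

	/* old: each region's own end is passed down, region 1 is fully done */
	printf("buggy: %lu + %lu pages initialized at boot\n",
	       init_region(0, r1_end, r1_end, node_end),
	       init_region(r1_end, node_end, node_end, node_end));

	/* fixed: the zone end is passed down, only ~one section per span */
	printf("fixed: %lu + %lu pages initialized at boot\n",
	       init_region(0, r1_end, node_end, node_end),
	       init_region(r1_end, node_end, node_end, node_end));
	return 0;
}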
Link: https://lkml.kernel.org/r/20201223080811.16211-1-bhe@redhat.com
Link: https://lkml.kernel.org/r/20201223080811.16211-2-bhe@redhat.com
Fixes: commit 73a6e474cb376 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
Signed-off-by: Baoquan He
Reported-by: Rahul Gopakumar
Reviewed-by: Mike Rapoport
Cc: David Hildenbrand
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/ia64/mm/init.c | 4 ++--
 include/linux/mm.h  | 5 +++--
 mm/memory_hotplug.c | 2 +-
 mm/page_alloc.c     | 8 +++++---
 4 files changed, 11 insertions(+), 8 deletions(-)

--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -536,7 +536,7 @@ virtual_memmap_init(u64 start, u64 end,
 
 	if (map_start < map_end)
 		memmap_init_zone((unsigned long)(map_end - map_start),
-				 args->nid, args->zone, page_to_pfn(map_start),
+				 args->nid, args->zone, page_to_pfn(map_start), page_to_pfn(map_end),
 				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 	return 0;
 }
@@ -546,7 +546,7 @@ memmap_init (unsigned long size, int nid
 	unsigned long start_pfn)
 {
 	if (!vmem_map) {
-		memmap_init_zone(size, nid, zone, start_pfn,
+		memmap_init_zone(size, nid, zone, start_pfn, start_pfn + size,
 				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 	} else {
 		struct page *start;
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2439,8 +2439,9 @@ extern int __meminit __early_pfn_to_nid(
 #endif
 
 extern void set_dma_reserve(unsigned long new_dma_reserve);
-extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
-		enum meminit_context, struct vmem_altmap *, int migratetype);
+extern void memmap_init_zone(unsigned long, int, unsigned long,
+		unsigned long, unsigned long, enum meminit_context,
+		struct vmem_altmap *, int migratetype);
 extern void setup_per_zone_wmarks(void);
 extern int __meminit init_per_zone_wmark_min(void);
 extern void mem_init(void);
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -714,7 +714,7 @@ void __ref move_pfn_range_to_zone(struct
 	 * expects the zone spans the pfn range. All the pages in the range
 	 * are reserved so nobody should be touching them so we should be safe
 	 */
-	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
+	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn, 0,
 			 MEMINIT_HOTPLUG, altmap, migratetype);
 
 	set_zone_contiguous(zone);
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -448,6 +448,8 @@ defer_init(int nid, unsigned long pfn, u
 	if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
 		return false;
 
+	if (NODE_DATA(nid)->first_deferred_pfn != ULONG_MAX)
+		return true;
 	/*
 	 * We start only with one section of pages, more pages are added as
 	 * needed until the rest of deferred pages are initialized.
@@ -6050,7 +6052,7 @@ overlap_memmap_init(unsigned long zone,
 * zone stats (e.g., nr_isolate_pageblock) are touched.
 */
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
-		unsigned long start_pfn,
+		unsigned long start_pfn, unsigned long zone_end_pfn,
 		enum meminit_context context,
 		struct vmem_altmap *altmap, int migratetype)
 {
@@ -6086,7 +6088,7 @@ void __meminit memmap_init_zone(unsigned
 		if (context == MEMINIT_EARLY) {
 			if (overlap_memmap_init(zone, &pfn))
 				continue;
-			if (defer_init(nid, pfn, end_pfn))
+			if (defer_init(nid, pfn, zone_end_pfn))
 				break;
 		}
 
@@ -6200,7 +6202,7 @@ void __meminit __weak memmap_init(unsign
 
 		if (end_pfn > start_pfn) {
 			size = end_pfn - start_pfn;
-			memmap_init_zone(size, nid, zone, start_pfn,
+			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 		}
 	}

From patchwork Mon Jan 4 15:57:09 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356745
Subject: [PATCH 5.10 16/63] lib/zlib: fix inflating zlib streams on s390
Date: Mon, 4 Jan 2021 16:57:09 +0100
Message-Id: <20210104155709.601396478@linuxfoundation.org>

From: Ilya Leoshkevich

commit f0bb29e8c4076444d32df00c8d32e169ceecf283 upstream.

Decompressing zlib streams on s390 fails with an "incorrect data check"
error. Userspace zlib checks inflate_state.flags in order to byteswap
checksums only for zlib streams, and the s390 hardware inflate code, which
was ported from there, tries to match this behavior.
At the same time, kernel zlib does not use inflate_state.flags, so it
contains essentially random values. For many use cases either the zlib
stream is zeroed out or the checksum is not used, so this problem is
masked, but at least SquashFS is still affected.

Fix by always passing the checksum to and from the hardware as is, which
matches zlib_inflate()'s expectations.

Link: https://lkml.kernel.org/r/20201215155551.894884-1-iii@linux.ibm.com
Fixes: 126196100063 ("lib/zlib: add s390 hardware support for kernel zlib_inflate")
Signed-off-by: Ilya Leoshkevich
Tested-by: Christian Borntraeger
Acked-by: Mikhail Zaslonko
Acked-by: Christian Borntraeger
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Mikhail Zaslonko
Cc: [5.6+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 lib/zlib_dfltcc/dfltcc_inflate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/lib/zlib_dfltcc/dfltcc_inflate.c
+++ b/lib/zlib_dfltcc/dfltcc_inflate.c
@@ -125,7 +125,7 @@ dfltcc_inflate_action dfltcc_inflate(
     param->ho = (state->write - state->whave) & ((1 << HB_BITS) - 1);
     if (param->hl)
         param->nt = 0; /* Honor history for the first block */
-    param->cv = state->flags ? REVERSE(state->check) : state->check;
+    param->cv = state->check;
 
     /* Inflate */
     do {
@@ -138,7 +138,7 @@ dfltcc_inflate_action dfltcc_inflate(
     state->bits = param->sbb;
     state->whave = param->hl;
     state->write = (param->ho + param->hl) & ((1 << HB_BITS) - 1);
-    state->check = state->flags ? REVERSE(param->cv) : param->cv;
+    state->check = param->cv;
     if (cc == DFLTCC_CC_OP2_CORRUPT && param->oesc != 0) {
         /* Report an error if stream is corrupted */
         state->mode = BAD;

From patchwork Mon Jan 4 15:57:10 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356773
Subject: [PATCH 5.10 17/63] io_uring: don't assume mm is constant across submits
Date: Mon, 4 Jan 2021 16:57:10 +0100
Message-Id: <20210104155709.650714386@linuxfoundation.org>

From: Jens Axboe

commit 77788775c7132a8d93c6930ab1bd84fc743c7cb7 upstream.

If we COW the identity, we assume that ->mm never changes. But this isn't
true if multiple processes end up sharing the ring. Hence treat id->mm like
any other process component when it comes to the identity mapping. This is
pretty trivial, just moving the existing grab into io_grab_identity(), and
including a check for the match.

Cc: stable@vger.kernel.org # 5.10
Fixes: 1e6fa5216a0e ("io_uring: COW io_identity on mismatch")
Reported-by: Christian Brauner
Tested-by: Christian Brauner
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/io_uring.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1369,6 +1369,13 @@ static bool io_grab_identity(struct io_k
 		spin_unlock_irq(&ctx->inflight_lock);
 		req->work.flags |= IO_WQ_WORK_FILES;
 	}
+	if (!(req->work.flags & IO_WQ_WORK_MM) &&
+	    (def->work_flags & IO_WQ_WORK_MM)) {
+		if (id->mm != current->mm)
+			return false;
+		mmgrab(id->mm);
+		req->work.flags |= IO_WQ_WORK_MM;
+	}
 
 	return true;
 }
@@ -1393,13 +1400,6 @@ static void io_prep_async_work(struct io
 		req->work.flags |= IO_WQ_WORK_UNBOUND;
 	}
 
-	/* ->mm can never change on us */
-	if (!(req->work.flags & IO_WQ_WORK_MM) &&
-	    (def->work_flags & IO_WQ_WORK_MM)) {
-		mmgrab(id->mm);
-		req->work.flags |= IO_WQ_WORK_MM;
-	}
-
 	/* if we fail grabbing identity, we must COW, regrab, and retry */
 	if (io_grab_identity(req))
 		return;
From patchwork Mon Jan 4 15:57:14 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356750
Subject: [PATCH 5.10 21/63] kernel/io_uring: cancel io_uring before task works
Date: Mon, 4 Jan 2021 16:57:14 +0100
Message-Id: <20210104155709.846535201@linuxfoundation.org>

From: Pavel Begunkov

commit b1b6b5a30dce872f500dc43f067cba8e7f86fc7d upstream.

Cancelling io_uring requests needs either to be able to run the currently
enqueued task_works, or to have task_work shut down by that moment.
Otherwise io_uring_cancel_files() may be waiting for requests that won't
ever complete.

Go with the first way and do cancellations before setting PF_EXITING, and
so before putting the task_work infrastructure into a transition state
where task_work_run() would better not be called.

Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: Pavel Begunkov
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/file.c     | 2 --
 kernel/exit.c | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/fs/file.c
+++ b/fs/file.c
@@ -21,7 +21,6 @@
 #include 
 #include 
 #include 
-#include <linux/io_uring.h>
 
 unsigned int sysctl_nr_open __read_mostly = 1024*1024;
 unsigned int sysctl_nr_open_min = BITS_PER_LONG;
@@ -453,7 +452,6 @@ void exit_files(struct task_struct *tsk)
 	struct files_struct * files = tsk->files;
 
 	if (files) {
-		io_uring_files_cancel(files);
 		task_lock(tsk);
 		tsk->files = NULL;
 		task_unlock(tsk);
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -63,6 +63,7 @@
 #include 
 #include 
 #include 
+#include <linux/io_uring.h>
 #include 
 #include 
 
@@ -762,6 +763,7 @@ void __noreturn do_exit(long code)
 		schedule();
 	}
 
+	io_uring_files_cancel(tsk->files);
 	exit_signals(tsk);	/* sets PF_EXITING */
 
 	/* sync mm's RSS info before statistics gathering */
From patchwork Mon Jan 4 15:57:15 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356747
Subject: [PATCH 5.10 22/63] uapi: move constants from <linux/kernel.h> to <linux/const.h>
Date: Mon, 4 Jan 2021 16:57:15 +0100
Message-Id: <20210104155709.897542552@linuxfoundation.org>

From: Petr Vorel

commit a85cbe6159ffc973e5702f70a3bd5185f8f3c38d upstream.

and include <linux/const.h> in UAPI headers instead of <linux/kernel.h>.

The reason is to avoid an indirect sysinfo include when using some network
headers: <linux/netlink.h> or others -> <linux/kernel.h> -> sysinfo header.

This indirect include causes, on MUSL, a redefinition of struct sysinfo
when both <sys/sysinfo.h> and some of the UAPI headers are included:

  In file included from x86_64-buildroot-linux-musl/sysroot/usr/include/linux/kernel.h:5,
                   from x86_64-buildroot-linux-musl/sysroot/usr/include/linux/netlink.h:5,
                   from ../include/tst_netlink.h:14,
                   from tst_crypto.c:13:
  x86_64-buildroot-linux-musl/sysroot/usr/include/linux/sysinfo.h:8:8: error: redefinition of `struct sysinfo'
   struct sysinfo {
          ^~~~~~~
  In file included from ../include/tst_safe_macros.h:15,
                   from ../include/tst_test.h:93,
                   from tst_crypto.c:11:
  x86_64-buildroot-linux-musl/sysroot/usr/include/sys/sysinfo.h:10:8: note: originally defined here

Link: https://lkml.kernel.org/r/20201015190013.8901-1-petr.vorel@gmail.com
Signed-off-by: Petr Vorel
Suggested-by: Rich Felker
Acked-by: Rich Felker
Cc: Peter Korsgaard
Cc: Baruch Siach
Cc: Florian Weimer
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/uapi/linux/const.h              | 5 +++++
 include/uapi/linux/ethtool.h            | 2 +-
 include/uapi/linux/kernel.h             | 9 +--------
 include/uapi/linux/lightnvm.h           | 2 +-
 include/uapi/linux/mroute6.h            | 2 +-
 include/uapi/linux/netfilter/x_tables.h | 2 +-
 include/uapi/linux/netlink.h            | 2 +-
 include/uapi/linux/sysctl.h             | 2 +-
 8 files changed, 12 insertions(+), 14 deletions(-)

--- a/include/uapi/linux/const.h
+++ b/include/uapi/linux/const.h
@@ -28,4 +28,9 @@
 #define _BITUL(x)	(_UL(1) << (x))
 #define _BITULL(x)	(_ULL(1) << (x))
 
+#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
+#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+
+#define __KERNEL_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
+
 #endif /* _UAPI_LINUX_CONST_H */
--- a/include/uapi/linux/ethtool.h
+++ b/include/uapi/linux/ethtool.h
@@ -14,7 +14,7 @@
 #ifndef _UAPI_LINUX_ETHTOOL_H
 #define _UAPI_LINUX_ETHTOOL_H
 
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 
 #include 
 
--- a/include/uapi/linux/kernel.h
+++ b/include/uapi/linux/kernel.h
@@ -3,13 +3,6 @@
 #define _UAPI_LINUX_KERNEL_H
 
 #include 
-
-/*
- * 'kernel.h' contains some often-used function prototypes etc
- */
-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
-#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
-
-#define __KERNEL_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
+#include <linux/const.h>
 
 #endif /* _UAPI_LINUX_KERNEL_H */
--- a/include/uapi/linux/lightnvm.h
+++ b/include/uapi/linux/lightnvm.h
@@ -21,7 +21,7 @@
 #define _UAPI_LINUX_LIGHTNVM_H
 
 #ifdef __KERNEL__
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 
 #else /* __KERNEL__ */
 #include 
--- a/include/uapi/linux/mroute6.h
+++ b/include/uapi/linux/mroute6.h
@@ -2,7 +2,7 @@
 #ifndef _UAPI__LINUX_MROUTE6_H
 #define _UAPI__LINUX_MROUTE6_H
 
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 
 #include 
 #include 	/* For struct sockaddr_in6. */
--- a/include/uapi/linux/netfilter/x_tables.h
+++ b/include/uapi/linux/netfilter/x_tables.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 #ifndef _UAPI_X_TABLES_H
 #define _UAPI_X_TABLES_H
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 
 
 #define XT_FUNCTION_MAXNAMELEN 30
--- a/include/uapi/linux/netlink.h
+++ b/include/uapi/linux/netlink.h
@@ -2,7 +2,7 @@
 #ifndef _UAPI__LINUX_NETLINK_H
 #define _UAPI__LINUX_NETLINK_H
 
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 	/* for __kernel_sa_family_t */
 #include 
--- a/include/uapi/linux/sysctl.h
+++ b/include/uapi/linux/sysctl.h
@@ -23,7 +23,7 @@
 #ifndef _UAPI_LINUX_SYSCTL_H
 #define _UAPI_LINUX_SYSCTL_H
 
-#include <linux/kernel.h>
+#include <linux/const.h>
 #include 
 #include 
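The helpers being moved are plain arithmetic macros with no other
dependencies, which is what makes <linux/const.h> a safe home for them. As a
quick user-space check of what they compute (macro bodies copied from the
const.h hunk above; typeof is a GCC/clang extension):

#include <stdio.h>

#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
#define __KERNEL_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long len = 1000;

	printf("1000 aligned up to 64: %lu\n", __ALIGN_KERNEL(len, 64));	/* 1024 */
	printf("1000 / 64, rounded up: %lu\n", __KERNEL_DIV_ROUND_UP(len, 64));	/* 16 */
	return 0;
}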
From patchwork Mon Jan 4 15:57:18 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 356749
Subject: [PATCH 5.10 25/63] zlib: move EXPORT_SYMBOL() and MODULE_LICENSE() out of dfltcc_syms.c
Date: Mon, 4 Jan 2021 16:57:18 +0100
Message-Id: <20210104155710.042752544@linuxfoundation.org>

From: Randy Dunlap

commit 605cc30dea249edf1b659e7d0146a2cf13cbbf71 upstream.

In commit 11fb479ff5d9 ("zlib: export S390 symbols for zlib modules"), I
added EXPORT_SYMBOL()s to dfltcc_inflate.c, but then Mikhail said that these
should probably be in dfltcc_syms.c with the other EXPORT_SYMBOL()s.

However, that is contrary to the current kernel style, which places
EXPORT_SYMBOL() immediately after the function that it applies to, so move
all EXPORT_SYMBOL()s to their respective function locations and drop the
dfltcc_syms.c file. Also move MODULE_LICENSE() from the deleted file to
dfltcc.c.

[rdunlap@infradead.org: remove dfltcc_syms.o from Makefile]
Link: https://lkml.kernel.org/r/20201227171837.15492-1-rdunlap@infradead.org
Link: https://lkml.kernel.org/r/20201219052530.28461-1-rdunlap@infradead.org
Fixes: 11fb479ff5d9 ("zlib: export S390 symbols for zlib modules")
Signed-off-by: Randy Dunlap
Cc:
Acked-by: Ilya Leoshkevich
Acked-by: Christian Borntraeger
Cc: Zaslonko Mikhail
Cc: Heiko Carstens
Cc: Vasily Gorbik
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 lib/zlib_dfltcc/Makefile         |  2 +-
 lib/zlib_dfltcc/dfltcc.c         |  6 +++++-
 lib/zlib_dfltcc/dfltcc_deflate.c |  3 +++
 lib/zlib_dfltcc/dfltcc_syms.c    | 17 -----------------
 4 files changed, 9 insertions(+), 19 deletions(-)

--- a/lib/zlib_dfltcc/Makefile
+++ b/lib/zlib_dfltcc/Makefile
@@ -8,4 +8,4 @@
 
 obj-$(CONFIG_ZLIB_DFLTCC) += zlib_dfltcc.o
 
-zlib_dfltcc-objs := dfltcc.o dfltcc_deflate.o dfltcc_inflate.o dfltcc_syms.o
+zlib_dfltcc-objs := dfltcc.o dfltcc_deflate.o dfltcc_inflate.o
--- a/lib/zlib_dfltcc/dfltcc.c
+++ b/lib/zlib_dfltcc/dfltcc.c
@@ -1,7 +1,8 @@
 // SPDX-License-Identifier: Zlib
 /* dfltcc.c - SystemZ DEFLATE CONVERSION CALL support. */
 
-#include 
+#include <linux/export.h>
+#include <linux/module.h>
 #include "dfltcc_util.h"
 #include "dfltcc.h"
 
@@ -53,3 +54,6 @@ void dfltcc_reset(
   dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE;
   dfltcc_state->param.ribm = DFLTCC_RIBM;
 }
+EXPORT_SYMBOL(dfltcc_reset);
+
+MODULE_LICENSE("GPL");
--- a/lib/zlib_dfltcc/dfltcc_deflate.c
+++ b/lib/zlib_dfltcc/dfltcc_deflate.c
@@ -4,6 +4,7 @@
 #include "dfltcc_util.h"
 #include "dfltcc.h"
 #include 
+#include <linux/export.h>
 #include 
 
 /*
@@ -34,6 +35,7 @@ int dfltcc_can_deflate(
 
   return 1;
 }
+EXPORT_SYMBOL(dfltcc_can_deflate);
 
 static void dfltcc_gdht(
   z_streamp strm
@@ -277,3 +279,4 @@ again:
     goto again; /* deflate() must use all input or all output */
   return 1;
 }
+EXPORT_SYMBOL(dfltcc_deflate);
--- a/lib/zlib_dfltcc/dfltcc_syms.c
+++ /dev/null
@@ -1,17 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * linux/lib/zlib_dfltcc/dfltcc_syms.c
- *
- * Exported symbols for the s390 zlib dfltcc support.
- * - */ - -#include -#include -#include -#include "dfltcc.h" - -EXPORT_SYMBOL(dfltcc_can_deflate); -EXPORT_SYMBOL(dfltcc_deflate); -EXPORT_SYMBOL(dfltcc_reset); -MODULE_LICENSE("GPL"); From patchwork Mon Jan 4 15:57:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356752 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F01CEC43331 for ; Mon, 4 Jan 2021 16:07:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BFBD1224D2 for ; Mon, 4 Jan 2021 16:07:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728844AbhADQHE (ORCPT ); Mon, 4 Jan 2021 11:07:04 -0500 Received: from mail.kernel.org ([198.145.29.99]:39930 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728868AbhADQCh (ORCPT ); Mon, 4 Jan 2021 11:02:37 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 9A9B9224D2; Mon, 4 Jan 2021 16:01:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776116; bh=kbXVDURpVO4n7IDAFIoO6qDrcqygc9gNZYFaaVWklug=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lFtL3uPmD4C3SQbnOdYUnfyX0j9Uj1mCQG2kUVx52i1fLst7NvanZ6bTaYaP5s6lI Y72UB7CFt3DQVf+ZsDe5tWeqZaM349hJoahlxd70rax7HC9yIZdx6lIz+2Rg3MuXkH 84vwWmI4FdIK9feDlx/C38m2dyrRdbgbGnyzIEHc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Tomi Valkeinen , Thomas Zimmermann , Sam Ravnborg , Jiri Slaby , Bartlomiej Zolnierkiewicz , Linus Torvalds , Ben Skeggs , nouveau@lists.freedesktop.org, Daniel Vetter , "Gustavo A. R. Silva" , Tetsuo Handa , Peilin Ye , George Kennedy , Nathan Chancellor , Peter Rosin , Daniel Vetter Subject: [PATCH 5.10 28/63] fbcon: Disable accelerated scrolling Date: Mon, 4 Jan 2021 16:57:21 +0100 Message-Id: <20210104155710.187945647@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Daniel Vetter commit 39aead8373b3c20bb5965c024dfb51a94e526151 upstream. So ever since syzbot discovered fbcon, we have solid proof that it's full of bugs. And often the solution is to just delete code and remove features, e.g. 50145474f6ef ("fbcon: remove soft scrollback code"). Now the problem is that most modern-ish drivers really only treat fbcon as an dumb kernel console until userspace takes over, and Oops printer for some emergencies. 
Looking at drm drivers and the basic vesa/efi fbdev drivers shows that only 3 drivers support any kind of acceleration: - nouveau, seems to be enabled by default - omapdrm, when a DMM remapper exists using remapper rewriting for y/xpanning - gma500, but that is getting deleted now for the GTT remapper trick, and the accelerated copyarea never set the FBINFO_HWACCEL_COPYAREA flag, so unused (and could be deleted already I think). No other driver supportes accelerated fbcon. And fbcon is the only user of this accel code (it's not exposed as uapi through ioctls), which means we could garbage collect fairly enormous amounts of code if we kill this. Plus because syzbot only runs on virtual hardware, and none of the drivers for that have acceleration, we'd remove a huge gap in testing. And there's no other even remotely comprehensive testing aside from syzbot. This patch here just disables the acceleration code by always redrawing when scrolling. The plan is that once this has been merged for well over a year in released kernels, we can start to go around and delete a lot of code. v2: - Drop a few more unused local variables, somehow I missed the compiler warnings (Sam) - Fix typo in comment (Jiri) - add a todo entry for the cleanup (Thomas) v3: Remove more unused variables (0day) Reviewed-by: Tomi Valkeinen Reviewed-by: Thomas Zimmermann Reviewed-by: Greg Kroah-Hartman Acked-by: Sam Ravnborg Cc: Jiri Slaby Cc: Bartlomiej Zolnierkiewicz Cc: Greg Kroah-Hartman Cc: Linus Torvalds Cc: Ben Skeggs Cc: nouveau@lists.freedesktop.org Cc: Tomi Valkeinen Cc: Daniel Vetter Cc: Jiri Slaby Cc: "Gustavo A. R. Silva" Cc: Tetsuo Handa Cc: Peilin Ye Cc: George Kennedy Cc: Nathan Chancellor Cc: Peter Rosin Signed-off-by: Daniel Vetter Link: https://patchwork.freedesktop.org/patch/msgid/20201029132229.4068359-1-daniel.vetter@ffwll.ch Signed-off-by: Greg Kroah-Hartman --- Documentation/gpu/todo.rst | 18 +++++++++++++++ drivers/video/fbdev/core/fbcon.c | 45 ++++++--------------------------------- 2 files changed, 26 insertions(+), 37 deletions(-) --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -273,6 +273,24 @@ Contact: Daniel Vetter, Noralf Tronnes Level: Advanced +Garbage collect fbdev scrolling acceleration +-------------------------------------------- + +Scroll acceleration is disabled in fbcon by hard-wiring p->scrollmode = +SCROLL_REDRAW. There's a ton of code this will allow us to remove: +- lots of code in fbcon.c +- a bunch of the hooks in fbcon_ops, maybe the remaining hooks could be called + directly instead of the function table (with a switch on p->rotate) +- fb_copyarea is unused after this, and can be deleted from all drivers + +Note that not all acceleration code can be deleted, since clearing and cursor +support is still accelerated, which might be good candidates for further +deletion projects. 
+ +Contact: Daniel Vetter + +Level: Intermediate + idr_init_base() --------------- --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -1033,7 +1033,7 @@ static void fbcon_init(struct vc_data *v struct vc_data *svc = *default_mode; struct fbcon_display *t, *p = &fb_display[vc->vc_num]; int logo = 1, new_rows, new_cols, rows, cols, charcnt = 256; - int cap, ret; + int ret; if (WARN_ON(info_idx == -1)) return; @@ -1042,7 +1042,6 @@ static void fbcon_init(struct vc_data *v con2fb_map[vc->vc_num] = info_idx; info = registered_fb[con2fb_map[vc->vc_num]]; - cap = info->flags; if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET) logo_shown = FBCON_LOGO_DONTSHOW; @@ -1147,11 +1146,13 @@ static void fbcon_init(struct vc_data *v ops->graphics = 0; - if ((cap & FBINFO_HWACCEL_COPYAREA) && - !(cap & FBINFO_HWACCEL_DISABLED)) - p->scrollmode = SCROLL_MOVE; - else /* default to something safe */ - p->scrollmode = SCROLL_REDRAW; + /* + * No more hw acceleration for fbcon. + * + * FIXME: Garbage collect all the now dead code after sufficient time + * has passed. + */ + p->scrollmode = SCROLL_REDRAW; /* * ++guenther: console.c:vc_allocate() relies on initializing @@ -1961,45 +1962,15 @@ static void updatescrollmode(struct fbco { struct fbcon_ops *ops = info->fbcon_par; int fh = vc->vc_font.height; - int cap = info->flags; - u16 t = 0; - int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep, - info->fix.xpanstep); - int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t); int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres); int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual, info->var.xres_virtual); - int good_pan = (cap & FBINFO_HWACCEL_YPAN) && - divides(ypan, vc->vc_font.height) && vyres > yres; - int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) && - divides(ywrap, vc->vc_font.height) && - divides(vc->vc_font.height, vyres) && - divides(vc->vc_font.height, yres); - int reading_fast = cap & FBINFO_READS_FAST; - int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) && - !(cap & FBINFO_HWACCEL_DISABLED); - int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) && - !(cap & FBINFO_HWACCEL_DISABLED); p->vrows = vyres/fh; if (yres > (fh * (vc->vc_rows + 1))) p->vrows -= (yres - (fh * vc->vc_rows)) / fh; if ((yres % fh) && (vyres % fh < yres % fh)) p->vrows--; - - if (good_wrap || good_pan) { - if (reading_fast || fast_copyarea) - p->scrollmode = good_wrap ? - SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE; - else - p->scrollmode = good_wrap ? 
SCROLL_REDRAW : - SCROLL_PAN_REDRAW; - } else { - if (reading_fast || (fast_copyarea && !fast_imageblit)) - p->scrollmode = SCROLL_MOVE; - else - p->scrollmode = SCROLL_REDRAW; - } } #define PITCH(w) (((w) + 7) >> 3) From patchwork Mon Jan 4 15:57:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356772 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4D905C433E9 for ; Mon, 4 Jan 2021 16:02:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1EEA62251F for ; Mon, 4 Jan 2021 16:02:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728777AbhADQCO (ORCPT ); Mon, 4 Jan 2021 11:02:14 -0500 Received: from mail.kernel.org ([198.145.29.99]:39260 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728773AbhADQCN (ORCPT ); Mon, 4 Jan 2021 11:02:13 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0290822515; Mon, 4 Jan 2021 16:01:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776118; bh=EjAqTvqR5xGB0ysuOSu9sqTv7/TUJ9YORzCYjtD5m8s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Tm645glq3mEH1Ig948As/8evmaljT7Sb+Xd6aVeqTlBOD7KIdCsmAIRZhPwsGyfZ6 6O38zEyezpzAMAJrsgLYgs10LMmTE427xJ3R7Fpx8vR8tCBy5PlfEDjzy0GQEUR51v j4xqsRzSyaKhdun5cQa0IFDDdKfKSo4odgZFiWr0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Rustam Kovhaev , Jan Kara , syzbot+83b6f7cf9922cae5c4d7@syzkaller.appspotmail.com Subject: [PATCH 5.10 29/63] reiserfs: add check for an invalid ih_entry_count Date: Mon, 4 Jan 2021 16:57:22 +0100 Message-Id: <20210104155710.234806001@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Rustam Kovhaev commit d24396c5290ba8ab04ba505176874c4e04a2d53c upstream. 
when directory item has an invalid value set for ih_entry_count it might trigger use-after-free or out-of-bounds read in bin_search_in_dir_item() ih_entry_count * IH_SIZE for directory item should not be larger than ih_item_len Link: https://lore.kernel.org/r/20201101140958.3650143-1-rkovhaev@gmail.com Reported-and-tested-by: syzbot+83b6f7cf9922cae5c4d7@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=83b6f7cf9922cae5c4d7 Signed-off-by: Rustam Kovhaev Signed-off-by: Jan Kara Signed-off-by: Greg Kroah-Hartman --- fs/reiserfs/stree.c | 6 ++++++ 1 file changed, 6 insertions(+) --- a/fs/reiserfs/stree.c +++ b/fs/reiserfs/stree.c @@ -454,6 +454,12 @@ static int is_leaf(char *buf, int blocks "(second one): %h", ih); return 0; } + if (is_direntry_le_ih(ih) && (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE))) { + reiserfs_warning(NULL, "reiserfs-5093", + "item entry count seems wrong %h", + ih); + return 0; + } prev_location = ih_location(ih); } From patchwork Mon Jan 4 15:57:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356753 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CC3EC433DB for ; Mon, 4 Jan 2021 16:07:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F0E5122519 for ; Mon, 4 Jan 2021 16:07:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728504AbhADQGs (ORCPT ); Mon, 4 Jan 2021 11:06:48 -0500 Received: from mail.kernel.org ([198.145.29.99]:40012 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728898AbhADQCn (ORCPT ); Mon, 4 Jan 2021 11:02:43 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 6CC4A22516; Mon, 4 Jan 2021 16:02:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776122; bh=xG67Zm/Pub3LnzeGF+mtGQjupBLMb7U8Wk+xH/2uKM4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BK5TzQFjrTcO0ttslxfRtZnIlUMKmo2it7YxkdFZeprVVrZaifTFB5M+K9hE0XvSd cfoPu439iNBo7QvfgLQilb3SMqhAYN8/VRyzIsluvDEuq9IgdiJGO2QuTwwCMQuW0p r9ymzDSCdJsw3kFPHzkF+WcFruKV0ZtJ71c2SId8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, syzbot+a79e17c39564bedf0930@syzkaller.appspotmail.com, Anant Thazhemadam Subject: [PATCH 5.10 30/63] misc: vmw_vmci: fix kernel info-leak by initializing dbells in vmci_ctx_get_chkpt_doorbells() Date: Mon, 4 Jan 2021 16:57:23 +0100 Message-Id: <20210104155710.281866429@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Anant Thazhemadam commit 31dcb6c30a26d32650ce134820f27de3c675a45a upstream. 
A kernel-infoleak was reported by syzbot, which was caused because dbells was left uninitialized. Using kzalloc() instead of kmalloc() fixes this issue. Reported-by: syzbot+a79e17c39564bedf0930@syzkaller.appspotmail.com Tested-by: syzbot+a79e17c39564bedf0930@syzkaller.appspotmail.com Signed-off-by: Anant Thazhemadam Link: https://lore.kernel.org/r/20201122224534.333471-1-anant.thazhemadam@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/misc/vmw_vmci/vmci_context.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/drivers/misc/vmw_vmci/vmci_context.c +++ b/drivers/misc/vmw_vmci/vmci_context.c @@ -743,7 +743,7 @@ static int vmci_ctx_get_chkpt_doorbells( return VMCI_ERROR_MORE_DATA; } - dbells = kmalloc(data_size, GFP_ATOMIC); + dbells = kzalloc(data_size, GFP_ATOMIC); if (!dbells) return VMCI_ERROR_NO_MEM; From patchwork Mon Jan 4 15:57:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3167C433E0 for ; Mon, 4 Jan 2021 16:03:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C50A0224DF for ; Mon, 4 Jan 2021 16:03:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729002AbhADQDG (ORCPT ); Mon, 4 Jan 2021 11:03:06 -0500 Received: from mail.kernel.org ([198.145.29.99]:40456 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728966AbhADQDG (ORCPT ); Mon, 4 Jan 2021 11:03:06 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 8A56622512; Mon, 4 Jan 2021 16:02:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776145; bh=25NtS8XB5LccoyFX+CjGFcC1swS244ghNyIoz1QgFB8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=M7pNqp4m5eRU0qx6rtrX/UJE/OCmAmsoScBoRkevJKnP/Oq3wZ+WMRqpTiqbMNI/E kd3HAURCmBfPefS3Py4BxU0dD954mkDgbxFz6Gn46ih7WVOceF/b4h4LGoTu8JiTE2 F1oIjsZ5EIburGMTDTEnmNh21OO3YIDz62dPu6KU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, syzbot+a23a6f1215c84756577c@syzkaller.appspotmail.com, syzbot+3d367d1df1d2b67f5c19@syzkaller.appspotmail.com, Takashi Iwai Subject: [PATCH 5.10 34/63] ALSA: rawmidi: Access runtime->avail always in spinlock Date: Mon, 4 Jan 2021 16:57:27 +0100 Message-Id: <20210104155710.476222345@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Takashi Iwai commit 88a06d6fd6b369d88cec46c62db3e2604a2f50d5 upstream. The runtime->avail field may be accessed concurrently while some places refer to it without taking the runtime->lock spinlock, as detected by KCSAN. 
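[Editorial sketch] The change below keeps a lock-free __snd_rawmidi_ready() for callers that already hold runtime->lock and turns snd_rawmidi_ready() into a wrapper that takes the lock around the read. A minimal sketch of that locked-accessor pattern, with invented placeholder names rather than the real ALSA structures:

/* sketch only: simplified stand-in for snd_rawmidi_runtime */
struct runtime {
	spinlock_t lock;		/* protects avail and avail_min */
	unsigned long avail;
	unsigned long avail_min;
};

/* sketch only: caller must already hold rt->lock */
static inline bool __ready(struct runtime *rt)
{
	return rt->avail >= rt->avail_min;
}

/* sketch only: safe from any context, takes the lock around the read */
static bool ready(struct runtime *rt)
{
	unsigned long flags;
	bool r;

	spin_lock_irqsave(&rt->lock, flags);
	r = __ready(rt);
	spin_unlock_irqrestore(&rt->lock, flags);
	return r;
}

The _irqsave variant is used here on the assumption that the field can also be touched from interrupt context; the actual diff below follows the same shape.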
Usually this isn't a big problem, but for consistency and safety, we should take the spinlock at each place referencing this field. Reported-by: syzbot+a23a6f1215c84756577c@syzkaller.appspotmail.com Reported-by: syzbot+3d367d1df1d2b67f5c19@syzkaller.appspotmail.com Link: https://lore.kernel.org/r/20201206083527.21163-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/core/rawmidi.c | 49 +++++++++++++++++++++++++++++++++++-------------- 1 file changed, 35 insertions(+), 14 deletions(-) --- a/sound/core/rawmidi.c +++ b/sound/core/rawmidi.c @@ -95,11 +95,21 @@ static inline unsigned short snd_rawmidi } } -static inline int snd_rawmidi_ready(struct snd_rawmidi_substream *substream) +static inline bool __snd_rawmidi_ready(struct snd_rawmidi_runtime *runtime) +{ + return runtime->avail >= runtime->avail_min; +} + +static bool snd_rawmidi_ready(struct snd_rawmidi_substream *substream) { struct snd_rawmidi_runtime *runtime = substream->runtime; + unsigned long flags; + bool ready; - return runtime->avail >= runtime->avail_min; + spin_lock_irqsave(&runtime->lock, flags); + ready = __snd_rawmidi_ready(runtime); + spin_unlock_irqrestore(&runtime->lock, flags); + return ready; } static inline int snd_rawmidi_ready_append(struct snd_rawmidi_substream *substream, @@ -1019,7 +1029,7 @@ int snd_rawmidi_receive(struct snd_rawmi if (result > 0) { if (runtime->event) schedule_work(&runtime->event_work); - else if (snd_rawmidi_ready(substream)) + else if (__snd_rawmidi_ready(runtime)) wake_up(&runtime->sleep); } spin_unlock_irqrestore(&runtime->lock, flags); @@ -1098,7 +1108,7 @@ static ssize_t snd_rawmidi_read(struct f result = 0; while (count > 0) { spin_lock_irq(&runtime->lock); - while (!snd_rawmidi_ready(substream)) { + while (!__snd_rawmidi_ready(runtime)) { wait_queue_entry_t wait; if ((file->f_flags & O_NONBLOCK) != 0 || result > 0) { @@ -1115,9 +1125,11 @@ static ssize_t snd_rawmidi_read(struct f return -ENODEV; if (signal_pending(current)) return result > 0 ? result : -ERESTARTSYS; - if (!runtime->avail) - return result > 0 ? result : -EIO; spin_lock_irq(&runtime->lock); + if (!runtime->avail) { + spin_unlock_irq(&runtime->lock); + return result > 0 ? result : -EIO; + } } spin_unlock_irq(&runtime->lock); count1 = snd_rawmidi_kernel_read1(substream, @@ -1255,7 +1267,7 @@ int __snd_rawmidi_transmit_ack(struct sn runtime->avail += count; substream->bytes += count; if (count > 0) { - if (runtime->drain || snd_rawmidi_ready(substream)) + if (runtime->drain || __snd_rawmidi_ready(runtime)) wake_up(&runtime->sleep); } return count; @@ -1444,9 +1456,11 @@ static ssize_t snd_rawmidi_write(struct return -ENODEV; if (signal_pending(current)) return result > 0 ? result : -ERESTARTSYS; - if (!runtime->avail && !timeout) - return result > 0 ? result : -EIO; spin_lock_irq(&runtime->lock); + if (!runtime->avail && !timeout) { + spin_unlock_irq(&runtime->lock); + return result > 0 ? 
result : -EIO; + } } spin_unlock_irq(&runtime->lock); count1 = snd_rawmidi_kernel_write1(substream, buf, NULL, count); @@ -1526,6 +1540,7 @@ static void snd_rawmidi_proc_info_read(s struct snd_rawmidi *rmidi; struct snd_rawmidi_substream *substream; struct snd_rawmidi_runtime *runtime; + unsigned long buffer_size, avail, xruns; rmidi = entry->private_data; snd_iprintf(buffer, "%s\n\n", rmidi->name); @@ -1544,13 +1559,16 @@ static void snd_rawmidi_proc_info_read(s " Owner PID : %d\n", pid_vnr(substream->pid)); runtime = substream->runtime; + spin_lock_irq(&runtime->lock); + buffer_size = runtime->buffer_size; + avail = runtime->avail; + spin_unlock_irq(&runtime->lock); snd_iprintf(buffer, " Mode : %s\n" " Buffer size : %lu\n" " Avail : %lu\n", runtime->oss ? "OSS compatible" : "native", - (unsigned long) runtime->buffer_size, - (unsigned long) runtime->avail); + buffer_size, avail); } } } @@ -1568,13 +1586,16 @@ static void snd_rawmidi_proc_info_read(s " Owner PID : %d\n", pid_vnr(substream->pid)); runtime = substream->runtime; + spin_lock_irq(&runtime->lock); + buffer_size = runtime->buffer_size; + avail = runtime->avail; + xruns = runtime->xruns; + spin_unlock_irq(&runtime->lock); snd_iprintf(buffer, " Buffer size : %lu\n" " Avail : %lu\n" " Overruns : %lu\n", - (unsigned long) runtime->buffer_size, - (unsigned long) runtime->avail, - (unsigned long) runtime->xruns); + buffer_size, avail, xruns); } } } From patchwork Mon Jan 4 15:57:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356756 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 517D8C4332B for ; Mon, 4 Jan 2021 16:06:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 262E72253D for ; Mon, 4 Jan 2021 16:06:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728992AbhADQDF (ORCPT ); Mon, 4 Jan 2021 11:03:05 -0500 Received: from mail.kernel.org ([198.145.29.99]:40050 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728175AbhADQDE (ORCPT ); Mon, 4 Jan 2021 11:03:04 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A6C7C22507; Mon, 4 Jan 2021 16:02:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776169; bh=bgM8K8mfIxwMypr9yt9d+vSr1q/tvc+VRvHeeSHqAYU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=utMVXjaw055rXC1wVi+CHkvDsXdVYwcPSVO1RF6j/3fLn/0sJlORC+hcL+0l314q+ Ny4tkWUwDDxi9dORSWKHaQZ6gQyvdkP38xnKQpsSZoZeXBXAjoAGyiwMCijZZXp41p 3ywum51xntoRSiH70zCIzr/1ZG3EX6fyCLQLndVQ= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, syzbot+3fd34060f26e766536ff@syzkaller.appspotmail.com, Randy Dunlap , Dmitry Vyukov , Al Viro , "Tigran A. Aivazian" , Andrew Morton , Linus Torvalds Subject: [PATCH 5.10 35/63] bfs: dont use WARNING: string when its just info. 
Date: Mon, 4 Jan 2021 16:57:28 +0100 Message-Id: <20210104155710.524201550@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Randy Dunlap commit dc889b8d4a8122549feabe99eead04e6b23b6513 upstream. Make the printk() [bfs "printf" macro] seem less severe by changing "WARNING:" to "NOTE:". warns us about using WARNING or BUG in a format string other than in WARN() or BUG() family macros. bfs/inode.c is doing just that in a normal printk() call, so change the "WARNING" string to be "NOTE". Link: https://lkml.kernel.org/r/20201203212634.17278-1-rdunlap@infradead.org Reported-by: syzbot+3fd34060f26e766536ff@syzkaller.appspotmail.com Signed-off-by: Randy Dunlap Cc: Dmitry Vyukov Cc: Al Viro Cc: "Tigran A. Aivazian" Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- fs/bfs/inode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/fs/bfs/inode.c +++ b/fs/bfs/inode.c @@ -350,7 +350,7 @@ static int bfs_fill_super(struct super_b info->si_lasti = (le32_to_cpu(bfs_sb->s_start) - BFS_BSIZE) / sizeof(struct bfs_inode) + BFS_ROOT_INO - 1; if (info->si_lasti == BFS_MAX_LASTI) - printf("WARNING: filesystem %s was created with 512 inodes, the real maximum is 511, mounting anyway\n", s->s_id); + printf("NOTE: filesystem %s was created with 512 inodes, the real maximum is 511, mounting anyway\n", s->s_id); else if (info->si_lasti > BFS_MAX_LASTI) { printf("Impossible last inode number %lu > %d on %s\n", info->si_lasti, BFS_MAX_LASTI, s->s_id); goto out1; From patchwork Mon Jan 4 15:57:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356765 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C66CC43381 for ; Mon, 4 Jan 2021 16:04:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 480FB20769 for ; Mon, 4 Jan 2021 16:04:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728453AbhADQEF (ORCPT ); Mon, 4 Jan 2021 11:04:05 -0500 Received: from mail.kernel.org ([198.145.29.99]:41540 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728036AbhADQEE (ORCPT ); Mon, 4 Jan 2021 11:04:04 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id DFB48224D2; Mon, 4 Jan 2021 16:03:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776204; bh=1CGsBUwgP92kK0hfAOxG13/cx7bwhM8eKe/2rxmiqdM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cxBQgPoNFMZaD5vVXdVLB7LmAGusg2zWaEHlMxPS/y4PmsQ63Hpki5YKuaCwmhiqy 9ZYYX/GNTNIvI6kqy23y/jt8hdwWeQuRFqMKzGPZOg4646FkHNht/VS/Nm9LgsOqKS 1R00IwdSJSTvaYSdi1ZHjALF3bgoLAhKn09omyO4= From: Greg Kroah-Hartman To: 
linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Miroslav Benes , Jessica Yu , Sasha Levin Subject: [PATCH 5.10 40/63] module: set MODULE_STATE_GOING state when a module fails to load Date: Mon, 4 Jan 2021 16:57:33 +0100 Message-Id: <20210104155710.765782069@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Miroslav Benes [ Upstream commit 5e8ed280dab9eeabc1ba0b2db5dbe9fe6debb6b5 ] If a module fails to load due to an error in prepare_coming_module(), the following error handling in load_module() runs with MODULE_STATE_COMING in module's state. Fix it by correctly setting MODULE_STATE_GOING under "bug_cleanup" label. Signed-off-by: Miroslav Benes Signed-off-by: Jessica Yu Signed-off-by: Sasha Levin --- kernel/module.c | 1 + 1 file changed, 1 insertion(+) diff --git a/kernel/module.c b/kernel/module.c index a4fa44a652a75..b34235082394b 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -3991,6 +3991,7 @@ static int load_module(struct load_info *info, const char __user *uargs, MODULE_STATE_GOING, mod); klp_module_going(mod); bug_cleanup: + mod->state = MODULE_STATE_GOING; /* module_bug_cleanup needs module_mutex protection */ mutex_lock(&module_mutex); module_bug_cleanup(mod); From patchwork Mon Jan 4 15:57:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356766 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0013EC433E0 for ; Mon, 4 Jan 2021 16:04:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B86A22072D for ; Mon, 4 Jan 2021 16:04:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729177AbhADQEL (ORCPT ); Mon, 4 Jan 2021 11:04:11 -0500 Received: from mail.kernel.org ([198.145.29.99]:41582 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729172AbhADQEJ (ORCPT ); Mon, 4 Jan 2021 11:04:09 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 4A64B2250E; Mon, 4 Jan 2021 16:03:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776208; bh=0zJjSQYT2Lmkmhjj3YMXNHHmQ66jWl5KgvHAY3EgdHk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QM04V7F9CrP/S1WKVsRUxSEqrOSd++QjqTUAVbP4/lNvEZkqhqRpjwsPCjrqNenPv keYHBEEmXUYvUJ6b2C+p5NAOE2/33gqSt2UXgaeHQPFqSAJ9c+E9eNGK8mYtlU6Yf2 Dv6dIqTVmQ6EbzaQ7Z+CAADNy9gM52O/2OhpM0mA= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Hulk Robot , Zheng Liang , Alexandre Belloni , Linus Walleij , Sasha Levin Subject: [PATCH 5.10 42/63] rtc: pl031: fix resource leak in pl031_probe Date: Mon, 4 Jan 2021 16:57:35 +0100 Message-Id: 
<20210104155710.863667274@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Zheng Liang [ Upstream commit 1eab0fea2514b269e384c117f5b5772b882761f0 ] When devm_rtc_allocate_device is failed in pl031_probe, it should release mem regions with device. Reported-by: Hulk Robot Signed-off-by: Zheng Liang Signed-off-by: Alexandre Belloni Acked-by: Linus Walleij Link: https://lore.kernel.org/r/20201112093139.32566-1-zhengliang6@huawei.com Signed-off-by: Sasha Levin --- drivers/rtc/rtc-pl031.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c index c6b89273feba8..d4b2ab7861266 100644 --- a/drivers/rtc/rtc-pl031.c +++ b/drivers/rtc/rtc-pl031.c @@ -361,8 +361,10 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id) device_init_wakeup(&adev->dev, true); ldata->rtc = devm_rtc_allocate_device(&adev->dev); - if (IS_ERR(ldata->rtc)) - return PTR_ERR(ldata->rtc); + if (IS_ERR(ldata->rtc)) { + ret = PTR_ERR(ldata->rtc); + goto out; + } ldata->rtc->ops = ops; ldata->rtc->range_min = vendor->range_min; From patchwork Mon Jan 4 15:57:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356771 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28D17C43381 for ; Mon, 4 Jan 2021 16:03:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E155322507 for ; Mon, 4 Jan 2021 16:03:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728900AbhADQCn (ORCPT ); Mon, 4 Jan 2021 11:02:43 -0500 Received: from mail.kernel.org ([198.145.29.99]:39414 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728867AbhADQCm (ORCPT ); Mon, 4 Jan 2021 11:02:42 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id B7E1522515; Mon, 4 Jan 2021 16:02:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776147; bh=xCZw4h1gUVy5W+RXm5Qkfjq27sAVjQQ4+42yUPxqyMY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rIFlZG1gIOSKmt0ngS2L+dv9U1mgCrKFMQzJrypYAUk2e0twZT4LoJRhiRdn1oN16 /uSMQhV9z/NGWhPlCpUp4kKePQJ0WbWnoOYirsjwH775XorcTT3UcTR4xVvgsTwF2e QKNvDic3YSfh4pWGJ8+DqL8Pxltog6vIjH3pn3jQ= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Qinglang Miao , Michael Ellerman , Sasha Levin Subject: [PATCH 5.10 43/63] powerpc: sysdev: add missing iounmap() on error in mpic_msgr_probe() Date: Mon, 4 Jan 2021 16:57:36 +0100 Message-Id: <20210104155710.911461055@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: 
<20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Qinglang Miao [ Upstream commit ffa1797040c5da391859a9556be7b735acbe1242 ] I noticed that iounmap() of msgr_block_addr before return from mpic_msgr_probe() in the error handling case is missing. So use devm_ioremap() instead of just ioremap() when remapping the message register block, so the mapping will be automatically released on probe failure. Signed-off-by: Qinglang Miao Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20201028091551.136400-1-miaoqinglang@huawei.com Signed-off-by: Sasha Levin --- arch/powerpc/sysdev/mpic_msgr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c index f6b253e2be409..36ec0bdd8b63c 100644 --- a/arch/powerpc/sysdev/mpic_msgr.c +++ b/arch/powerpc/sysdev/mpic_msgr.c @@ -191,7 +191,7 @@ static int mpic_msgr_probe(struct platform_device *dev) /* IO map the message register block. */ of_address_to_resource(np, 0, &rsrc); - msgr_block_addr = ioremap(rsrc.start, resource_size(&rsrc)); + msgr_block_addr = devm_ioremap(&dev->dev, rsrc.start, resource_size(&rsrc)); if (!msgr_block_addr) { dev_err(&dev->dev, "Failed to iomap MPIC message registers"); return -EFAULT; From patchwork Mon Jan 4 15:57:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BD57C433E0 for ; Mon, 4 Jan 2021 16:06:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 776C221D93 for ; Mon, 4 Jan 2021 16:06:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728447AbhADQF7 (ORCPT ); Mon, 4 Jan 2021 11:05:59 -0500 Received: from mail.kernel.org ([198.145.29.99]:40524 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729047AbhADQDM (ORCPT ); Mon, 4 Jan 2021 11:03:12 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 234BB22517; Mon, 4 Jan 2021 16:02:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776151; bh=CzRYXEk9ZbdYqqv0mBK0IJg95yy1SokTxylsOXRqXIw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NfMCvvvt1yIKXkVHh2XsTSbVJTvPAH6vbjIvG63LfjVNfGMPuSzd4wrp8o8awjxbP NY4fxI9O9JBIpu3ydlAR0ncYgELLqGcE8ii0UXDWbSriQo5EtO6mD4MuF6y8JQG7P4 UI/i2jzzYJREh2L1C9t6txBRgWikqRYve5PWUX38= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Trond Myklebust , Sasha Levin Subject: [PATCH 5.10 45/63] NFSv4: Fix a pNFS layout related use-after-free race when freeing the inode Date: Mon, 4 Jan 2021 16:57:38 +0100 Message-Id: <20210104155711.007242412@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> 
References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Trond Myklebust [ Upstream commit b6d49ecd1081740b6e632366428b960461f8158b ] When returning the layout in nfs4_evict_inode(), we need to ensure that the layout is actually done being freed before we can proceed to free the inode itself. Signed-off-by: Trond Myklebust Signed-off-by: Sasha Levin --- fs/nfs/nfs4super.c | 2 +- fs/nfs/pnfs.c | 33 +++++++++++++++++++++++++++++++-- fs/nfs/pnfs.h | 5 +++++ 3 files changed, 37 insertions(+), 3 deletions(-) diff --git a/fs/nfs/nfs4super.c b/fs/nfs/nfs4super.c index 93f5c1678ec29..984cc42ee54d8 100644 --- a/fs/nfs/nfs4super.c +++ b/fs/nfs/nfs4super.c @@ -67,7 +67,7 @@ static void nfs4_evict_inode(struct inode *inode) nfs_inode_evict_delegation(inode); /* Note that above delegreturn would trigger pnfs return-on-close */ pnfs_return_layout(inode); - pnfs_destroy_layout(NFS_I(inode)); + pnfs_destroy_layout_final(NFS_I(inode)); /* First call standard NFS clear_inode() code */ nfs_clear_inode(inode); nfs4_xattr_cache_zap(inode); diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index 0e50b9d45c320..07f59dc8cb2e7 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -294,6 +294,7 @@ void pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo) { struct inode *inode; + unsigned long i_state; if (!lo) return; @@ -304,8 +305,12 @@ pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo) if (!list_empty(&lo->plh_segs)) WARN_ONCE(1, "NFS: BUG unfreed layout segments.\n"); pnfs_detach_layout_hdr(lo); + i_state = inode->i_state; spin_unlock(&inode->i_lock); pnfs_free_layout_hdr(lo); + /* Notify pnfs_destroy_layout_final() that we're done */ + if (i_state & (I_FREEING | I_CLEAR)) + wake_up_var(lo); } } @@ -734,8 +739,7 @@ pnfs_free_lseg_list(struct list_head *free_me) } } -void -pnfs_destroy_layout(struct nfs_inode *nfsi) +static struct pnfs_layout_hdr *__pnfs_destroy_layout(struct nfs_inode *nfsi) { struct pnfs_layout_hdr *lo; LIST_HEAD(tmp_list); @@ -753,9 +757,34 @@ pnfs_destroy_layout(struct nfs_inode *nfsi) pnfs_put_layout_hdr(lo); } else spin_unlock(&nfsi->vfs_inode.i_lock); + return lo; +} + +void pnfs_destroy_layout(struct nfs_inode *nfsi) +{ + __pnfs_destroy_layout(nfsi); } EXPORT_SYMBOL_GPL(pnfs_destroy_layout); +static bool pnfs_layout_removed(struct nfs_inode *nfsi, + struct pnfs_layout_hdr *lo) +{ + bool ret; + + spin_lock(&nfsi->vfs_inode.i_lock); + ret = nfsi->layout != lo; + spin_unlock(&nfsi->vfs_inode.i_lock); + return ret; +} + +void pnfs_destroy_layout_final(struct nfs_inode *nfsi) +{ + struct pnfs_layout_hdr *lo = __pnfs_destroy_layout(nfsi); + + if (lo) + wait_var_event(lo, pnfs_layout_removed(nfsi, lo)); +} + static bool pnfs_layout_add_bulk_destroy_list(struct inode *inode, struct list_head *layout_list) diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h index 2661c44c62db4..78c3893918486 100644 --- a/fs/nfs/pnfs.h +++ b/fs/nfs/pnfs.h @@ -266,6 +266,7 @@ struct pnfs_layout_segment *pnfs_layout_process(struct nfs4_layoutget *lgp); void pnfs_layoutget_free(struct nfs4_layoutget *lgp); void pnfs_free_lseg_list(struct list_head *tmp_list); void pnfs_destroy_layout(struct nfs_inode *); +void pnfs_destroy_layout_final(struct nfs_inode *); void pnfs_destroy_all_layouts(struct nfs_client *); int pnfs_destroy_layouts_byfsid(struct nfs_client *clp, struct nfs_fsid *fsid, @@ -710,6 +711,10 @@ static inline void pnfs_destroy_layout(struct nfs_inode *nfsi) { } +static inline void pnfs_destroy_layout_final(struct nfs_inode 
*nfsi) +{ +} + static inline struct pnfs_layout_segment * pnfs_get_lseg(struct pnfs_layout_segment *lseg) { From patchwork Mon Jan 4 15:57:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356768 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F0C5C4332D for ; Mon, 4 Jan 2021 16:03:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DB39A22517 for ; Mon, 4 Jan 2021 16:03:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729074AbhADQDQ (ORCPT ); Mon, 4 Jan 2021 11:03:16 -0500 Received: from mail.kernel.org ([198.145.29.99]:40716 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729060AbhADQDP (ORCPT ); Mon, 4 Jan 2021 11:03:15 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 54DD322525; Mon, 4 Jan 2021 16:02:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776153; bh=WfuIME2z6Gp2ljOyk9vHEyeNlTivkrOPmfy+3WR/rwM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hOlgx+3iCGiOnOBa11bWF/mrfURRrCPBWvre5xOT2TaX3L1dOx4qUZpUb8/iOFNMX FRCtuC6t+g6/q0bEvWi56PvYRuudCKSzW66vj3sYuLpHaE3NqOUzDcQLo6ZIpdMmNv 09DscFNI4l5HJN/FUL5YX0by/TDS+412/x5YWCWw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Light Hsieh , Chao Yu , Jaegeuk Kim , Sasha Levin Subject: [PATCH 5.10 46/63] f2fs: avoid race condition for shrinker count Date: Mon, 4 Jan 2021 16:57:39 +0100 Message-Id: <20210104155711.049210727@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jaegeuk Kim [ Upstream commit a95ba66ac1457b76fe472c8e092ab1006271f16c ] Light reported sometimes shinker gets nat_cnt < dirty_nat_cnt resulting in wrong do_shinker work. Let's avoid to return insane overflowed value by adding single tracking value. 
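[Editorial sketch] In other words, the old shrinker derived the reclaimable count from two counters that are not updated together, so a reader could momentarily observe dirty > total and the unsigned subtraction wrapped to a huge value; the diff below maintains a per-state counter array so RECLAIMABLE_NAT is tracked directly. A hedged sketch of the difference, with simplified names rather than the real f2fs_nm_info layout:

/* sketch only: the two counters are not updated atomically together */
struct nm_counts {
	unsigned int total;	/* all cached NAT entries */
	unsigned int dirty;	/* entries currently dirty */
};

static unsigned long reclaimable_racy(struct nm_counts *c)
{
	/* if a reader sees dirty > total, the unsigned subtraction wraps
	 * to a huge value, which the shrinker then acts on */
	return c->total - c->dirty;
}

/* sketch only: keep one counter per state and adjust RECLAIMABLE_NAT in
 * the same places (under the same lock) where an entry changes state */
enum nat_state { TOTAL_NAT, DIRTY_NAT, RECLAIMABLE_NAT, MAX_NAT_STATE };

static unsigned long reclaimable(const unsigned int cnt[MAX_NAT_STATE])
{
	return cnt[RECLAIMABLE_NAT];
}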
Reported-by: Light Hsieh Reviewed-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Sasha Levin --- fs/f2fs/checkpoint.c | 2 +- fs/f2fs/debug.c | 11 ++++++----- fs/f2fs/f2fs.h | 10 ++++++++-- fs/f2fs/node.c | 29 ++++++++++++++++++----------- fs/f2fs/node.h | 4 ++-- fs/f2fs/shrinker.c | 4 +--- 6 files changed, 36 insertions(+), 24 deletions(-) diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 023462e80e58d..b39bf416d5114 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -1600,7 +1600,7 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) goto out; } - if (NM_I(sbi)->dirty_nat_cnt == 0 && + if (NM_I(sbi)->nat_cnt[DIRTY_NAT] == 0 && SIT_I(sbi)->dirty_sentries == 0 && prefree_segments(sbi) == 0) { f2fs_flush_sit_entries(sbi, cpc); diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c index a8357fd4f5fab..197c914119da8 100644 --- a/fs/f2fs/debug.c +++ b/fs/f2fs/debug.c @@ -145,8 +145,8 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->node_pages = NODE_MAPPING(sbi)->nrpages; if (sbi->meta_inode) si->meta_pages = META_MAPPING(sbi)->nrpages; - si->nats = NM_I(sbi)->nat_cnt; - si->dirty_nats = NM_I(sbi)->dirty_nat_cnt; + si->nats = NM_I(sbi)->nat_cnt[TOTAL_NAT]; + si->dirty_nats = NM_I(sbi)->nat_cnt[DIRTY_NAT]; si->sits = MAIN_SEGS(sbi); si->dirty_sits = SIT_I(sbi)->dirty_sentries; si->free_nids = NM_I(sbi)->nid_cnt[FREE_NID]; @@ -278,9 +278,10 @@ static void update_mem_info(struct f2fs_sb_info *sbi) si->cache_mem += (NM_I(sbi)->nid_cnt[FREE_NID] + NM_I(sbi)->nid_cnt[PREALLOC_NID]) * sizeof(struct free_nid); - si->cache_mem += NM_I(sbi)->nat_cnt * sizeof(struct nat_entry); - si->cache_mem += NM_I(sbi)->dirty_nat_cnt * - sizeof(struct nat_entry_set); + si->cache_mem += NM_I(sbi)->nat_cnt[TOTAL_NAT] * + sizeof(struct nat_entry); + si->cache_mem += NM_I(sbi)->nat_cnt[DIRTY_NAT] * + sizeof(struct nat_entry_set); si->cache_mem += si->inmem_pages * sizeof(struct inmem_pages); for (i = 0; i < MAX_INO_ENTRY; i++) si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry); diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 9a321c52facec..e4344d98a780c 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -894,6 +894,13 @@ enum nid_state { MAX_NID_STATE, }; +enum nat_state { + TOTAL_NAT, + DIRTY_NAT, + RECLAIMABLE_NAT, + MAX_NAT_STATE, +}; + struct f2fs_nm_info { block_t nat_blkaddr; /* base disk address of NAT */ nid_t max_nid; /* maximum possible node ids */ @@ -909,8 +916,7 @@ struct f2fs_nm_info { struct rw_semaphore nat_tree_lock; /* protect nat_tree_lock */ struct list_head nat_entries; /* cached nat entry list (clean) */ spinlock_t nat_list_lock; /* protect clean nat entry list */ - unsigned int nat_cnt; /* the # of cached nat entries */ - unsigned int dirty_nat_cnt; /* total num of nat entries in set */ + unsigned int nat_cnt[MAX_NAT_STATE]; /* the # of cached nat entries */ unsigned int nat_blocks; /* # of nat blocks */ /* free node ids management */ diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index 42394de6c7eb1..e65d73293a3f6 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -62,8 +62,8 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type) sizeof(struct free_nid)) >> PAGE_SHIFT; res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2); } else if (type == NAT_ENTRIES) { - mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> - PAGE_SHIFT; + mem_size = (nm_i->nat_cnt[TOTAL_NAT] * + sizeof(struct nat_entry)) >> PAGE_SHIFT; res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2); if 
(excess_cached_nats(sbi)) res = false; @@ -177,7 +177,8 @@ static struct nat_entry *__init_nat_entry(struct f2fs_nm_info *nm_i, list_add_tail(&ne->list, &nm_i->nat_entries); spin_unlock(&nm_i->nat_list_lock); - nm_i->nat_cnt++; + nm_i->nat_cnt[TOTAL_NAT]++; + nm_i->nat_cnt[RECLAIMABLE_NAT]++; return ne; } @@ -207,7 +208,8 @@ static unsigned int __gang_lookup_nat_cache(struct f2fs_nm_info *nm_i, static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e) { radix_tree_delete(&nm_i->nat_root, nat_get_nid(e)); - nm_i->nat_cnt--; + nm_i->nat_cnt[TOTAL_NAT]--; + nm_i->nat_cnt[RECLAIMABLE_NAT]--; __free_nat_entry(e); } @@ -253,7 +255,8 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i, if (get_nat_flag(ne, IS_DIRTY)) goto refresh_list; - nm_i->dirty_nat_cnt++; + nm_i->nat_cnt[DIRTY_NAT]++; + nm_i->nat_cnt[RECLAIMABLE_NAT]--; set_nat_flag(ne, IS_DIRTY, true); refresh_list: spin_lock(&nm_i->nat_list_lock); @@ -273,7 +276,8 @@ static void __clear_nat_cache_dirty(struct f2fs_nm_info *nm_i, set_nat_flag(ne, IS_DIRTY, false); set->entry_cnt--; - nm_i->dirty_nat_cnt--; + nm_i->nat_cnt[DIRTY_NAT]--; + nm_i->nat_cnt[RECLAIMABLE_NAT]++; } static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i, @@ -2944,14 +2948,17 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc) LIST_HEAD(sets); int err = 0; - /* during unmount, let's flush nat_bits before checking dirty_nat_cnt */ + /* + * during unmount, let's flush nat_bits before checking + * nat_cnt[DIRTY_NAT]. + */ if (enabled_nat_bits(sbi, cpc)) { down_write(&nm_i->nat_tree_lock); remove_nats_in_journal(sbi); up_write(&nm_i->nat_tree_lock); } - if (!nm_i->dirty_nat_cnt) + if (!nm_i->nat_cnt[DIRTY_NAT]) return 0; down_write(&nm_i->nat_tree_lock); @@ -2962,7 +2969,8 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc) * into nat entry set. 
*/ if (enabled_nat_bits(sbi, cpc) || - !__has_cursum_space(journal, nm_i->dirty_nat_cnt, NAT_JOURNAL)) + !__has_cursum_space(journal, + nm_i->nat_cnt[DIRTY_NAT], NAT_JOURNAL)) remove_nats_in_journal(sbi); while ((found = __gang_lookup_nat_set(nm_i, @@ -3086,7 +3094,6 @@ static int init_node_manager(struct f2fs_sb_info *sbi) F2FS_RESERVED_NODE_NUM; nm_i->nid_cnt[FREE_NID] = 0; nm_i->nid_cnt[PREALLOC_NID] = 0; - nm_i->nat_cnt = 0; nm_i->ram_thresh = DEF_RAM_THRESHOLD; nm_i->ra_nid_pages = DEF_RA_NID_PAGES; nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD; @@ -3220,7 +3227,7 @@ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi) __del_from_nat_cache(nm_i, natvec[idx]); } } - f2fs_bug_on(sbi, nm_i->nat_cnt); + f2fs_bug_on(sbi, nm_i->nat_cnt[TOTAL_NAT]); /* destroy nat set cache */ nid = 0; diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h index 69e5859e993cf..f84541b57acbb 100644 --- a/fs/f2fs/node.h +++ b/fs/f2fs/node.h @@ -126,13 +126,13 @@ static inline void raw_nat_from_node_info(struct f2fs_nat_entry *raw_ne, static inline bool excess_dirty_nats(struct f2fs_sb_info *sbi) { - return NM_I(sbi)->dirty_nat_cnt >= NM_I(sbi)->max_nid * + return NM_I(sbi)->nat_cnt[DIRTY_NAT] >= NM_I(sbi)->max_nid * NM_I(sbi)->dirty_nats_ratio / 100; } static inline bool excess_cached_nats(struct f2fs_sb_info *sbi) { - return NM_I(sbi)->nat_cnt >= DEF_NAT_CACHE_THRESHOLD; + return NM_I(sbi)->nat_cnt[TOTAL_NAT] >= DEF_NAT_CACHE_THRESHOLD; } static inline bool excess_dirty_nodes(struct f2fs_sb_info *sbi) diff --git a/fs/f2fs/shrinker.c b/fs/f2fs/shrinker.c index d66de5999a26d..dd3c3c7a90ec8 100644 --- a/fs/f2fs/shrinker.c +++ b/fs/f2fs/shrinker.c @@ -18,9 +18,7 @@ static unsigned int shrinker_run_no; static unsigned long __count_nat_entries(struct f2fs_sb_info *sbi) { - long count = NM_I(sbi)->nat_cnt - NM_I(sbi)->dirty_nat_cnt; - - return count > 0 ? 
count : 0; + return NM_I(sbi)->nat_cnt[RECLAIMABLE_NAT]; } static unsigned long __count_free_nids(struct f2fs_sb_info *sbi) From patchwork Mon Jan 4 15:57:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AE8DC43333 for ; Mon, 4 Jan 2021 16:05:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 22A53207AE for ; Mon, 4 Jan 2021 16:05:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728027AbhADQDR (ORCPT ); Mon, 4 Jan 2021 11:03:17 -0500 Received: from mail.kernel.org ([198.145.29.99]:40752 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728004AbhADQDR (ORCPT ); Mon, 4 Jan 2021 11:03:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 865D42250F; Mon, 4 Jan 2021 16:02:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776156; bh=Fvl5jkA1Td66HC/9Cec2dG6iQKwzilt9B/S3Cx7Ll70=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bxZJAbaOxGPh/unT/6xE0m9tPJPxl2z0PN0KgsghPeWm8CkACcVpY5FbzakfAZ9oL SrHRQTInTG7/XHDBHC2hLOmISth99CGiDlY+BlR9/nTqw442dRrV8ivVEZYa+gwPGh F+BbFgZLm3tEiSuP1OEkHkBtsgX6l2GLHzzmpSAE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Daeho Jeong , Jaegeuk Kim , Sasha Levin Subject: [PATCH 5.10 47/63] f2fs: fix race of pending_pages in decompression Date: Mon, 4 Jan 2021 16:57:40 +0100 Message-Id: <20210104155711.097185632@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Daeho Jeong [ Upstream commit 6422a71ef40e4751d59b8c9412e7e2dafe085878 ] I found out f2fs_free_dic() is invoked in a wrong timing, but f2fs_verify_bio() still needed the dic info and it triggered the below kernel panic. It has been caused by the race condition of pending_pages value between decompression and verity logic, when the same compression cluster had been split in different bios. By split bios, f2fs_verify_bio() ended up with decreasing pending_pages value before it is reset to nr_cpages by f2fs_decompress_pages() and caused the kernel panic. [ 4416.564763] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 ... 
[ 4416.896016] Workqueue: fsverity_read_queue f2fs_verity_work [ 4416.908515] pc : fsverity_verify_page+0x20/0x78 [ 4416.913721] lr : f2fs_verify_bio+0x11c/0x29c [ 4416.913722] sp : ffffffc019533cd0 [ 4416.913723] x29: ffffffc019533cd0 x28: 0000000000000402 [ 4416.913724] x27: 0000000000000001 x26: 0000000000000100 [ 4416.913726] x25: 0000000000000001 x24: 0000000000000004 [ 4416.913727] x23: 0000000000001000 x22: 0000000000000000 [ 4416.913728] x21: 0000000000000000 x20: ffffffff2076f9c0 [ 4416.913729] x19: ffffffff2076f9c0 x18: ffffff8a32380c30 [ 4416.913731] x17: ffffffc01f966d97 x16: 0000000000000298 [ 4416.913732] x15: 0000000000000000 x14: 0000000000000000 [ 4416.913733] x13: f074faec89ffffff x12: 0000000000000000 [ 4416.913734] x11: 0000000000001000 x10: 0000000000001000 [ 4416.929176] x9 : ffffffff20d1f5c7 x8 : 0000000000000000 [ 4416.929178] x7 : 626d7464ff286b6b x6 : ffffffc019533ade [ 4416.929179] x5 : 000000008049000e x4 : ffffffff2793e9e0 [ 4416.929180] x3 : 000000008049000e x2 : ffffff89ecfa74d0 [ 4416.929181] x1 : 0000000000000c40 x0 : ffffffff2076f9c0 [ 4416.929184] Call trace: [ 4416.929187] fsverity_verify_page+0x20/0x78 [ 4416.929189] f2fs_verify_bio+0x11c/0x29c [ 4416.929192] f2fs_verity_work+0x58/0x84 [ 4417.050667] process_one_work+0x270/0x47c [ 4417.055354] worker_thread+0x27c/0x4d8 [ 4417.059784] kthread+0x13c/0x320 [ 4417.063693] ret_from_fork+0x10/0x18 Chao pointed this can happen by the below race condition. Thread A f2fs_post_read_wq fsverity_wq - f2fs_read_multi_pages() - f2fs_alloc_dic - dic->pending_pages = 2 - submit_bio() - submit_bio() - f2fs_post_read_work() handle first bio - f2fs_decompress_work() - __read_end_io() - f2fs_decompress_pages() - dic->pending_pages-- - enqueue f2fs_verity_work() - f2fs_verity_work() handle first bio - f2fs_verify_bio() - dic->pending_pages-- - f2fs_post_read_work() handle second bio - f2fs_decompress_work() - enqueue f2fs_verity_work() - f2fs_verify_pages() - f2fs_free_dic() - f2fs_verity_work() handle second bio - f2fs_verfy_bio() - use-after-free on dic Signed-off-by: Daeho Jeong Signed-off-by: Jaegeuk Kim Signed-off-by: Sasha Levin --- fs/f2fs/compress.c | 2 -- fs/f2fs/data.c | 58 +++++++++++++++++++++++++++++++++++++--------- fs/f2fs/f2fs.h | 1 + 3 files changed, 48 insertions(+), 13 deletions(-) diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c index 14262e0f1cd60..c5fee4d7ea72f 100644 --- a/fs/f2fs/compress.c +++ b/fs/f2fs/compress.c @@ -798,8 +798,6 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity) if (cops->destroy_decompress_ctx) cops->destroy_decompress_ctx(dic); out_free_dic: - if (verity) - atomic_set(&dic->pending_pages, dic->nr_cpages); if (!verity) f2fs_decompress_end_io(dic->rpages, dic->cluster_size, ret, false); diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index be4da52604edc..b29243ee1c3e5 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -202,7 +202,7 @@ static void f2fs_verify_bio(struct bio *bio) dic = (struct decompress_io_ctx *)page_private(page); if (dic) { - if (atomic_dec_return(&dic->pending_pages)) + if (atomic_dec_return(&dic->verity_pages)) continue; f2fs_verify_pages(dic->rpages, dic->cluster_size); @@ -1027,7 +1027,8 @@ static inline bool f2fs_need_verity(const struct inode *inode, pgoff_t idx) static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr, unsigned nr_pages, unsigned op_flag, - pgoff_t first_idx, bool for_write) + pgoff_t first_idx, bool for_write, + bool for_verity) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct bio 
*bio; @@ -1049,7 +1050,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr, post_read_steps |= 1 << STEP_DECRYPT; if (f2fs_compressed_file(inode)) post_read_steps |= 1 << STEP_DECOMPRESS_NOWQ; - if (f2fs_need_verity(inode, first_idx)) + if (for_verity && f2fs_need_verity(inode, first_idx)) post_read_steps |= 1 << STEP_VERITY; if (post_read_steps) { @@ -1079,7 +1080,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page, struct bio *bio; bio = f2fs_grab_read_bio(inode, blkaddr, 1, op_flags, - page->index, for_write); + page->index, for_write, true); if (IS_ERR(bio)) return PTR_ERR(bio); @@ -2133,7 +2134,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, if (bio == NULL) { bio = f2fs_grab_read_bio(inode, block_nr, nr_pages, is_readahead ? REQ_RAHEAD : 0, page->index, - false); + false, true); if (IS_ERR(bio)) { ret = PTR_ERR(bio); bio = NULL; @@ -2180,6 +2181,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, const unsigned blkbits = inode->i_blkbits; const unsigned blocksize = 1 << blkbits; struct decompress_io_ctx *dic = NULL; + struct bio_post_read_ctx *ctx; + bool for_verity = false; int i; int ret = 0; @@ -2245,10 +2248,29 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, goto out_put_dnode; } + /* + * It's possible to enable fsverity on the fly when handling a cluster, + * which requires complicated error handling. Instead of adding more + * complexity, let's give a rule where end_io post-processes fsverity + * per cluster. In order to do that, we need to submit bio, if previous + * bio sets a different post-process policy. + */ + if (fsverity_active(cc->inode)) { + atomic_set(&dic->verity_pages, cc->nr_cpages); + for_verity = true; + + if (bio) { + ctx = bio->bi_private; + if (!(ctx->enabled_steps & (1 << STEP_VERITY))) { + __submit_bio(sbi, bio, DATA); + bio = NULL; + } + } + } + for (i = 0; i < dic->nr_cpages; i++) { struct page *page = dic->cpages[i]; block_t blkaddr; - struct bio_post_read_ctx *ctx; blkaddr = data_blkaddr(dn.inode, dn.node_page, dn.ofs_in_node + i + 1); @@ -2264,17 +2286,31 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, if (!bio) { bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages, is_readahead ? 
REQ_RAHEAD : 0, - page->index, for_write); + page->index, for_write, for_verity); if (IS_ERR(bio)) { + unsigned int remained = dic->nr_cpages - i; + bool release = false; + ret = PTR_ERR(bio); dic->failed = true; - if (!atomic_sub_return(dic->nr_cpages - i, - &dic->pending_pages)) { + + if (for_verity) { + if (!atomic_sub_return(remained, + &dic->verity_pages)) + release = true; + } else { + if (!atomic_sub_return(remained, + &dic->pending_pages)) + release = true; + } + + if (release) { f2fs_decompress_end_io(dic->rpages, - cc->cluster_size, true, - false); + cc->cluster_size, true, + false); f2fs_free_dic(dic); } + f2fs_put_dnode(&dn); *bio_ret = NULL; return ret; diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index e4344d98a780c..06e5a6053f3f9 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -1410,6 +1410,7 @@ struct decompress_io_ctx { size_t rlen; /* valid data length in rbuf */ size_t clen; /* valid data length in cbuf */ atomic_t pending_pages; /* in-flight compressed page count */ + atomic_t verity_pages; /* in-flight page count for verity */ bool failed; /* indicate IO error during decompression */ void *private; /* payload buffer for specified decompression algorithm */ void *private2; /* extra payload buffer */ From patchwork Mon Jan 4 15:57:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356770 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0DE77C4332E for ; Mon, 4 Jan 2021 16:03:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BFDC62253D for ; Mon, 4 Jan 2021 16:03:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727899AbhADQCy (ORCPT ); Mon, 4 Jan 2021 11:02:54 -0500 Received: from mail.kernel.org ([198.145.29.99]:39930 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727755AbhADQCx (ORCPT ); Mon, 4 Jan 2021 11:02:53 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id B6D0D22518; Mon, 4 Jan 2021 16:02:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776158; bh=NGCnC4Uo+ziZasP2zp0+4kFBqpp5NTrJOJ2MBZ1cGOk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KAi8wBqUxu3D9jXok9hI7Dl4Y+cFmjOztyG4h2AcCMRDHtCfA2/qv3Z+jU6sXX/Ra RG/3osRB87EuG794yu9yYBIwnRQHkE8mOnR7TuBW0Nv5uP3uenjKrlNfapCv+1TMPr 0vhtKSaUb28o8BwiwN8vkGoqFj8NRUkQ+KGnwVtE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Nicolas Morey-Chaisemartin , Jessica Yu , Sasha Levin Subject: [PATCH 5.10 48/63] module: delay kobject uevent until after module init call Date: Mon, 4 Jan 2021 16:57:41 +0100 Message-Id: <20210104155711.145536166@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: 
bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jessica Yu [ Upstream commit 38dc717e97153e46375ee21797aa54777e5498f3 ] Apparently there has been a longstanding race between udev/systemd and the module loader. Currently, the module loader sends a uevent right after sysfs initialization, but before the module calls its init function. However, some udev rules expect that the module has initialized already upon receiving the uevent. This race has been triggered recently (see link in references) in some systemd mount unit files. For instance, the configfs module creates the /sys/kernel/config mount point in its init function, however the module loader issues the uevent before this happens. sys-kernel-config.mount expects to be able to mount /sys/kernel/config upon receipt of the module loading uevent, but if the configfs module has not called its init function yet, then this directory will not exist and the mount unit fails. A similar situation exists for sys-fs-fuse-connections.mount, as the fuse sysfs mount point is created during the fuse module's init function. If udev is faster than module initialization then the mount unit would fail in a similar fashion. To fix this race, delay the module KOBJ_ADD uevent until after the module has finished calling its init routine. Reviewed-by: Greg Kroah-Hartman Tested-By: Nicolas Morey-Chaisemartin Signed-off-by: Jessica Yu Signed-off-by: Sasha Levin --- kernel/module.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/kernel/module.c b/kernel/module.c index b34235082394b..e20499309b2af 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -1895,7 +1895,6 @@ static int mod_sysfs_init(struct module *mod) if (err) mod_kobject_put(mod); - /* delay uevent until full sysfs population */ out: return err; } @@ -1932,7 +1931,6 @@ static int mod_sysfs_setup(struct module *mod, add_sect_attrs(mod, info); add_notes_attrs(mod, info); - kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD); return 0; out_unreg_modinfo_attrs: @@ -3639,6 +3637,9 @@ static noinline int do_init_module(struct module *mod) blocking_notifier_call_chain(&module_notify_list, MODULE_STATE_LIVE, mod); + /* Delay uevent until module has finished its init routine */ + kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD); + /* * We need to finish all async code before the module init sequence * is done. This has potential to deadlock. 
For example, a newly From patchwork Mon Jan 4 15:57:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356767 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4E3BDC4332E for ; Mon, 4 Jan 2021 16:03:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 292DB22517 for ; Mon, 4 Jan 2021 16:03:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729102AbhADQDW (ORCPT ); Mon, 4 Jan 2021 11:03:22 -0500 Received: from mail.kernel.org ([198.145.29.99]:40796 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729093AbhADQDV (ORCPT ); Mon, 4 Jan 2021 11:03:21 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E19DE224D2; Mon, 4 Jan 2021 16:02:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776160; bh=g2Fon07Qd3sTnPTV8LLOjNJ5NIrMxxWNxKGJRrGT4Vk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=g9Z4jDywOTIocZvRlht3vKolAV2BQXLKaT1petRYkRhdKJm7WdsVNKokwW0blY8uT XlfYlydUDgEN017ABDYTaRjAKFyhtZWoWDajVOMcurgluz0yq//z7SQjcPx/SEvh3B cQBg2uKmqDx7/SIadYfrALRhrKRFiaa5Ysf/beH0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Nicholas Piggin , Michael Ellerman , Sasha Levin Subject: [PATCH 5.10 49/63] powerpc/64: irq replay remove decrementer overflow check Date: Mon, 4 Jan 2021 16:57:42 +0100 Message-Id: <20210104155711.190888663@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Nicholas Piggin [ Upstream commit 59d512e4374b2d8a6ad341475dc94c4a4bdec7d3 ] This is a way to catch some cases of decrementer overflow, when the decrementer has underflowed an odd number of times while MSR[EE] was disabled. With a typical small decrementer, a timer that fires when MSR[EE] is disabled will be "lost" if MSR[EE] remains disabled for between 4.3 and 8.6 seconds after the timer expires. In any case, the decrementer interrupt would be taken at 8.6 seconds and the timer would be found at that point. So this check is for catching extreme latency events, and it prevents those latencies from being a further few seconds long. It's not obvious this is a good tradeoff. This is already a watchdog-magnitude event, and that situation is not improved significantly by this check. For large decrementers, it's useless. Therefore remove this check, which avoids an mftb when enabling hard-disabled interrupts (e.g., when enabling after coming from hardware interrupt handlers). Perhaps more importantly, it also removes the clunky MSR[EE] vs PACA_IRQ_HARD_DIS incoherency in soft-interrupt replay, which simplifies the code.
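For reference, the 4.3 and 8.6 second figures above follow from the decrementer width and the timebase frequency. A rough user-space sketch of that arithmetic, assuming a 32-bit decrementer and a 500 MHz timebase (the real frequency is platform-dependent and not stated in the patch):

#include <stdio.h>

int main(void)
{
        /* Assumed values, for illustration only; not taken from the patch. */
        const double timebase_hz = 500e6;  /* assumed timebase frequency */
        const double half_wrap_s = (double)(1ULL << 31) / timebase_hz;
        const double full_wrap_s = (double)(1ULL << 32) / timebase_hz;

        /*
         * A timer that fires while MSR[EE] is disabled is only "lost" if EE
         * stays disabled for more than half a decrementer wrap but less than
         * a full wrap after the timer expires; at a full wrap the decrementer
         * interrupt is taken anyway.
         */
        printf("lost-timer window: %.1f .. %.1f seconds\n",
               half_wrap_s, full_wrap_s);
        return 0;
}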
Signed-off-by: Nicholas Piggin Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20201107014336.2337337-1-npiggin@gmail.com Signed-off-by: Sasha Levin --- arch/powerpc/kernel/irq.c | 53 ++------------------------- arch/powerpc/kernel/time.c | 9 ++--- arch/powerpc/platforms/powernv/opal.c | 2 +- 3 files changed, 8 insertions(+), 56 deletions(-) diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c index 7d0f7682d01df..6b1eca53e36cc 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -102,14 +102,6 @@ static inline notrace unsigned long get_irq_happened(void) return happened; } -static inline notrace int decrementer_check_overflow(void) -{ - u64 now = get_tb(); - u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); - - return now >= *next_tb; -} - #ifdef CONFIG_PPC_BOOK3E /* This is called whenever we are re-enabling interrupts @@ -142,35 +134,6 @@ notrace unsigned int __check_irq_replay(void) trace_hardirqs_on(); trace_hardirqs_off(); - /* - * We are always hard disabled here, but PACA_IRQ_HARD_DIS may - * not be set, which means interrupts have only just been hard - * disabled as part of the local_irq_restore or interrupt return - * code. In that case, skip the decrementr check becaus it's - * expensive to read the TB. - * - * HARD_DIS then gets cleared here, but it's reconciled later. - * Either local_irq_disable will replay the interrupt and that - * will reconcile state like other hard interrupts. Or interrupt - * retur will replay the interrupt and in that case it sets - * PACA_IRQ_HARD_DIS by hand (see comments in entry_64.S). - */ - if (happened & PACA_IRQ_HARD_DIS) { - local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; - - /* - * We may have missed a decrementer interrupt if hard disabled. - * Check the decrementer register in case we had a rollover - * while hard disabled. - */ - if (!(happened & PACA_IRQ_DEC)) { - if (decrementer_check_overflow()) { - local_paca->irq_happened |= PACA_IRQ_DEC; - happened |= PACA_IRQ_DEC; - } - } - } - if (happened & PACA_IRQ_DEC) { local_paca->irq_happened &= ~PACA_IRQ_DEC; return 0x900; @@ -186,6 +149,9 @@ notrace unsigned int __check_irq_replay(void) return 0x280; } + if (happened & PACA_IRQ_HARD_DIS) + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; + /* There should be nothing left ! */ BUG_ON(local_paca->irq_happened != 0); @@ -229,18 +195,6 @@ void replay_soft_interrupts(void) if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) WARN_ON_ONCE(mfmsr() & MSR_EE); - if (happened & PACA_IRQ_HARD_DIS) { - /* - * We may have missed a decrementer interrupt if hard disabled. - * Check the decrementer register in case we had a rollover - * while hard disabled. - */ - if (!(happened & PACA_IRQ_DEC)) { - if (decrementer_check_overflow()) - happened |= PACA_IRQ_DEC; - } - } - /* * Force the delivery of pending soft-disabled interrupts on PS3. * Any HV call will have this side effect. @@ -345,6 +299,7 @@ notrace void arch_local_irq_restore(unsigned long mask) if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) WARN_ON_ONCE(!(mfmsr() & MSR_EE)); __hard_irq_disable(); + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; } else { /* * We should already be hard disabled here. 
We had bugs diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 74efe46f55327..7d372ff3504b2 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -552,14 +552,11 @@ void timer_interrupt(struct pt_regs *regs) struct pt_regs *old_regs; u64 now; - /* Some implementations of hotplug will get timer interrupts while - * offline, just ignore these and we also need to set - * decrementers_next_tb as MAX to make sure __check_irq_replay - * don't replay timer interrupt when return, otherwise we'll trap - * here infinitely :( + /* + * Some implementations of hotplug will get timer interrupts while + * offline, just ignore these. */ if (unlikely(!cpu_online(smp_processor_id()))) { - *next_tb = ~(u64)0; set_dec(decrementer_max); return; } diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c index d95954ad4c0af..c61c3b62c8c62 100644 --- a/arch/powerpc/platforms/powernv/opal.c +++ b/arch/powerpc/platforms/powernv/opal.c @@ -731,7 +731,7 @@ int opal_hmi_exception_early2(struct pt_regs *regs) return 1; } -/* HMI exception handler called in virtual mode during check_irq_replay. */ +/* HMI exception handler called in virtual mode when irqs are next enabled. */ int opal_handle_hmi_exception(struct pt_regs *regs) { /* From patchwork Mon Jan 4 15:57:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF4D0C433DB for ; Mon, 4 Jan 2021 16:06:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AB0A1207AE for ; Mon, 4 Jan 2021 16:06:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728962AbhADQC6 (ORCPT ); Mon, 4 Jan 2021 11:02:58 -0500 Received: from mail.kernel.org ([198.145.29.99]:39992 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728954AbhADQC6 (ORCPT ); Mon, 4 Jan 2021 11:02:58 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 17B6F20769; Mon, 4 Jan 2021 16:02:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776162; bh=NqEGqp1novZlMhQPthLuK9lzcA9lKdPOjw64y0wArsg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nUJ3zA2P3GwrrGuBkJCdQj5UtGARi/iwnd547bSN9yv/emb4341UsmAEUG5x4DP3p 4UOztkuwLuB0zoVyGAnrmMrXTLiRuv+Qx3r22q0PqLjYZo1J88sGQtXwl0Jd9dNGrz kJGkbRt2qTPS1QqlCz8NxhPXsctYjYlTnN37rAMk= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Miklos Szeredi , Eric Biggers , Al Viro , Sasha Levin Subject: [PATCH 5.10 50/63] fs/namespace.c: WARN if mnt_count has become negative Date: Mon, 4 Jan 2021 16:57:43 +0100 Message-Id: <20210104155711.239399679@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> 
User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Eric Biggers [ Upstream commit edf7ddbf1c5eb98b720b063b73e20e8a4a1ce673 ] Missing calls to mntget() (or equivalently, too many calls to mntput()) are hard to detect because mntput() delays freeing mounts using task_work_add(), then again using call_rcu(). As a result, mnt_count can often be decremented to -1 without getting a KASAN use-after-free report. Such cases are still bugs though, and they point to real use-after-frees being possible. For an example of this, see the bug fixed by commit 1b0b9cc8d379 ("vfs: fsmount: add missing mntget()"), discussed at https://lkml.kernel.org/linux-fsdevel/20190605135401.GB30925@xxxxxxxxxxxxxxxxxxxxxxxxx/T/#u. This bug *should* have been trivial to find. But actually, it wasn't found until syzkaller happened to use fchdir() to manipulate the reference count just right for the bug to be noticeable. Address this by making mntput_no_expire() issue a WARN if mnt_count has become negative. Suggested-by: Miklos Szeredi Signed-off-by: Eric Biggers Signed-off-by: Al Viro Signed-off-by: Sasha Levin --- fs/namespace.c | 9 ++++++--- fs/pnode.h | 2 +- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/fs/namespace.c b/fs/namespace.c index cebaa3e817940..93006abe7946a 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -156,10 +156,10 @@ static inline void mnt_add_count(struct mount *mnt, int n) /* * vfsmount lock must be held for write */ -unsigned int mnt_get_count(struct mount *mnt) +int mnt_get_count(struct mount *mnt) { #ifdef CONFIG_SMP - unsigned int count = 0; + int count = 0; int cpu; for_each_possible_cpu(cpu) { @@ -1139,6 +1139,7 @@ static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput); static void mntput_no_expire(struct mount *mnt) { LIST_HEAD(list); + int count; rcu_read_lock(); if (likely(READ_ONCE(mnt->mnt_ns))) { @@ -1162,7 +1163,9 @@ static void mntput_no_expire(struct mount *mnt) */ smp_mb(); mnt_add_count(mnt, -1); - if (mnt_get_count(mnt)) { + count = mnt_get_count(mnt); + if (count != 0) { + WARN_ON(count < 0); rcu_read_unlock(); unlock_mount_hash(); return; diff --git a/fs/pnode.h b/fs/pnode.h index 49a058c73e4c7..26f74e092bd98 100644 --- a/fs/pnode.h +++ b/fs/pnode.h @@ -44,7 +44,7 @@ int propagate_mount_busy(struct mount *, int); void propagate_mount_unlock(struct mount *); void mnt_release_group_id(struct mount *); int get_dominating_id(struct mount *mnt, const struct path *root); -unsigned int mnt_get_count(struct mount *mnt); +int mnt_get_count(struct mount *mnt); void mnt_set_mountpoint(struct mount *, struct mountpoint *, struct mount *); void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp, From patchwork Mon Jan 4 15:57:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356760 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35106C433DB for ; Mon, 4 Jan 2021 16:05:56 +0000 (UTC) Received: from 
vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F253421D93 for ; Mon, 4 Jan 2021 16:05:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728326AbhADQDZ (ORCPT ); Mon, 4 Jan 2021 11:03:25 -0500 Received: from mail.kernel.org ([198.145.29.99]:40842 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728320AbhADQDZ (ORCPT ); Mon, 4 Jan 2021 11:03:25 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 4688C2250E; Mon, 4 Jan 2021 16:02:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776164; bh=TYS9CyxV609M0wdg5tRs0/P/rV4X+CfCd2orivUlb5Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CjngF34IKOpbM9R7ifqKX7/ZvroE1fM2eO9hJK6J0CpZxZuljy9Bgs5/AnFvYiVdP Fgc5LTm5iWLtoSlG3rBw5sxpPn+RVg+bmFU6O+0J/UWreJI4V41+wprbySMUv6fbNb jwpwp/OpQZNMItjcRGfVJLOEW0aqfaccx9BLTyq0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Zhang Qilong , Guenter Roeck , Wim Van Sebroeck , Sasha Levin Subject: [PATCH 5.10 51/63] watchdog: rti-wdt: fix reference leak in rti_wdt_probe Date: Mon, 4 Jan 2021 16:57:44 +0100 Message-Id: <20210104155711.287734172@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Zhang Qilong [ Upstream commit 8711071e9700b67045fe5518161d63f7a03e3c9e ] pm_runtime_get_sync() will increment the PM usage counter even if it fails. Forgetting to call pm_runtime_put_noidle() will result in a reference leak in rti_wdt_probe, so we should fix it.
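The bug above is an instance of a generic runtime-PM pattern: pm_runtime_get_sync() increments the usage counter even when the resume fails, so the failure path has to drop that reference itself. A minimal sketch of the pattern, using a made-up helper name rather than the actual rti_wdt code:

#include <linux/pm_runtime.h>

/* Hypothetical helper, shown only to illustrate the error-path balancing. */
static int example_power_up(struct device *dev)
{
        int ret;

        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
                /*
                 * The usage counter was bumped even though the resume failed,
                 * so balance it before bailing out.
                 */
                pm_runtime_put_noidle(dev);
                return ret;
        }

        /* ... device is powered up, do the real work here ... */

        pm_runtime_put(dev);
        return 0;
}

Newer kernels also provide pm_runtime_resume_and_get(), which folds the put_noidle() into the failure path so callers cannot forget it.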
Signed-off-by: Zhang Qilong Reviewed-by: Guenter Roeck Link: https://lore.kernel.org/r/20201030154909.100023-1-zhangqilong3@huawei.com Signed-off-by: Guenter Roeck Signed-off-by: Wim Van Sebroeck Signed-off-by: Sasha Levin --- drivers/watchdog/rti_wdt.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c index 836319cbaca9d..359302f71f7ef 100644 --- a/drivers/watchdog/rti_wdt.c +++ b/drivers/watchdog/rti_wdt.c @@ -227,8 +227,10 @@ static int rti_wdt_probe(struct platform_device *pdev) pm_runtime_enable(dev); ret = pm_runtime_get_sync(dev); - if (ret) + if (ret) { + pm_runtime_put_noidle(dev); return dev_err_probe(dev, ret, "runtime pm failed\n"); + } platform_set_drvdata(pdev, wdt); From patchwork Mon Jan 4 15:57:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 415DFC433E6 for ; Mon, 4 Jan 2021 16:05:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 12A5922516 for ; Mon, 4 Jan 2021 16:05:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728410AbhADQE6 (ORCPT ); Mon, 4 Jan 2021 11:04:58 -0500 Received: from mail.kernel.org ([198.145.29.99]:40938 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729147AbhADQDd (ORCPT ); Mon, 4 Jan 2021 11:03:33 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id CF41F22515; Mon, 4 Jan 2021 16:02:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776171; bh=JSA62VsnU96+Dty+AHhPpa68tj5xroqoUHABd19Pqvw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OPXnwJRzSdJn0+Z3Esr136IYL9QnypUs1b5bg2XkiMSWLit+icPU4w/xWRcC27XwV HIhIqtDSDMAIplEoa8/zmcB8fGZW9UO9UIpMfuD2wLQqwdMaTORUDAoda7EuAF0MgV SSsqtGZWoIpOHVSkQiO5sNqjieXwzskTXHw0pSnM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Christopher Obbard , Martyn Welch , Gabriel Krisman Bertazi , Anton Ivanov , Richard Weinberger , Sasha Levin Subject: [PATCH 5.10 53/63] um: ubd: Submit all data segments atomically Date: Mon, 4 Jan 2021 16:57:46 +0100 Message-Id: <20210104155711.384092400@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Gabriel Krisman Bertazi [ Upstream commit fc6b6a872dcd48c6f39c7975836d75113db67d37 ] Internally, UBD treats each physical IO segment as a separate command to be submitted in the execution pipe. 
If the pipe returns a transient error after a few segments have already been written, UBD will tell the block layer to requeue the request, but there is no way to reclaim the segments already submitted. When a new attempt to dispatch the request is done, those segments already submitted will get duplicated, causing the WARN_ON below in the best case, and potentially data corruption. In my system, running a UML instance with 2GB of RAM and a 50M UBD disk, I can reproduce the WARN_ON by simply running mkfs.fvat against the disk on a freshly booted system. There are a few ways to around this, like reducing the pressure on the pipe by reducing the queue depth, which almost eliminates the occurrence of the problem, increasing the pipe buffer size on the host system, or by limiting the request to one physical segment, which causes the block layer to submit way more requests to resolve a single operation. Instead, this patch modifies the format of a UBD command, such that all segments are sent through a single element in the communication pipe, turning the command submission atomic from the point of view of the block layer. The new format has a variable size, depending on the number of elements, and looks like this: +------------+-----------+-----------+------------ | cmd_header | segment 0 | segment 1 | segment ... +------------+-----------+-----------+------------ With this format, we push a pointer to cmd_header in the submission pipe. This has the advantage of reducing the memory footprint of executing a single request, since it allow us to merge some fields in the header. It is possible to reduce even further each segment memory footprint, by merging bitmap_words and cow_offset, for instance, but this is not the focus of this patch and is left as future work. One issue with the patch is that for a big number of segments, we now perform one big memory allocation instead of multiple small ones, but I wasn't able to trigger any real issues or -ENOMEM because of this change, that wouldn't be reproduced otherwise. This was tested using fio with the verify-crc32 option, and by running an ext4 filesystem over this UBD device. The original WARN_ON was: ------------[ cut here ]------------ WARNING: CPU: 0 PID: 0 at lib/refcount.c:28 refcount_warn_saturate+0x13f/0x141 refcount_t: underflow; use-after-free. Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 5.5.0-rc6-00002-g2a5bb2cf75c8 #346 Stack: 6084eed0 6063dc77 00000009 6084ef60 00000000 604b8d9f 6084eee0 6063dcbc 6084ef40 6006ab8d e013d780 1c00000000 Call Trace: [<600a0c1c>] ? printk+0x0/0x94 [<6004a888>] show_stack+0x13b/0x155 [<6063dc77>] ? dump_stack_print_info+0xdf/0xe8 [<604b8d9f>] ? refcount_warn_saturate+0x13f/0x141 [<6063dcbc>] dump_stack+0x2a/0x2c [<6006ab8d>] __warn+0x107/0x134 [<6008da6c>] ? wake_up_process+0x17/0x19 [<60487628>] ? blk_queue_max_discard_sectors+0x0/0xd [<6006b05f>] warn_slowpath_fmt+0xd1/0xdf [<6006af8e>] ? warn_slowpath_fmt+0x0/0xdf [<600acc14>] ? raw_read_seqcount_begin.constprop.0+0x0/0x15 [<600619ae>] ? os_nsecs+0x1d/0x2b [<604b8d9f>] refcount_warn_saturate+0x13f/0x141 [<6048bc8f>] refcount_sub_and_test.constprop.0+0x2f/0x37 [<6048c8de>] blk_mq_free_request+0xf1/0x10d [<6048ca06>] __blk_mq_end_request+0x10c/0x114 [<6005ac0f>] ubd_intr+0xb5/0x169 [<600a1a37>] __handle_irq_event_percpu+0x6b/0x17e [<600a1b70>] handle_irq_event_percpu+0x26/0x69 [<600a1bd9>] handle_irq_event+0x26/0x34 [<600a1bb3>] ? handle_irq_event+0x0/0x34 [<600a5186>] ? 
unmask_irq+0x0/0x37 [<600a57e6>] handle_edge_irq+0xbc/0xd6 [<600a131a>] generic_handle_irq+0x21/0x29 [<60048f6e>] do_IRQ+0x39/0x54 [...] ---[ end trace c6e7444e55386c0f ]--- Cc: Christopher Obbard Reported-by: Martyn Welch Signed-off-by: Gabriel Krisman Bertazi Tested-by: Christopher Obbard Acked-by: Anton Ivanov Signed-off-by: Richard Weinberger Signed-off-by: Sasha Levin --- arch/um/drivers/ubd_kern.c | 191 ++++++++++++++++++++++--------------- 1 file changed, 115 insertions(+), 76 deletions(-) diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c index eae8c83364f71..b12c1b0d3e1d0 100644 --- a/arch/um/drivers/ubd_kern.c +++ b/arch/um/drivers/ubd_kern.c @@ -47,18 +47,25 @@ /* Max request size is determined by sector mask - 32K */ #define UBD_MAX_REQUEST (8 * sizeof(long)) +struct io_desc { + char *buffer; + unsigned long length; + unsigned long sector_mask; + unsigned long long cow_offset; + unsigned long bitmap_words[2]; +}; + struct io_thread_req { struct request *req; int fds[2]; unsigned long offsets[2]; unsigned long long offset; - unsigned long length; - char *buffer; int sectorsize; - unsigned long sector_mask; - unsigned long long cow_offset; - unsigned long bitmap_words[2]; int error; + + int desc_cnt; + /* io_desc has to be the last element of the struct */ + struct io_desc io_desc[]; }; @@ -525,12 +532,7 @@ static void ubd_handler(void) blk_queue_max_write_zeroes_sectors(io_req->req->q, 0); blk_queue_flag_clear(QUEUE_FLAG_DISCARD, io_req->req->q); } - if ((io_req->error) || (io_req->buffer == NULL)) - blk_mq_end_request(io_req->req, io_req->error); - else { - if (!blk_update_request(io_req->req, io_req->error, io_req->length)) - __blk_mq_end_request(io_req->req, io_req->error); - } + blk_mq_end_request(io_req->req, io_req->error); kfree(io_req); } } @@ -946,6 +948,7 @@ static int ubd_add(int n, char **error_out) blk_queue_write_cache(ubd_dev->queue, true, false); blk_queue_max_segments(ubd_dev->queue, MAX_SG); + blk_queue_segment_boundary(ubd_dev->queue, PAGE_SIZE - 1); err = ubd_disk_register(UBD_MAJOR, ubd_dev->size, n, &ubd_gendisk[n]); if(err){ *error_out = "Failed to register device"; @@ -1289,37 +1292,74 @@ static void cowify_bitmap(__u64 io_offset, int length, unsigned long *cow_mask, *cow_offset += bitmap_offset; } -static void cowify_req(struct io_thread_req *req, unsigned long *bitmap, +static void cowify_req(struct io_thread_req *req, struct io_desc *segment, + unsigned long offset, unsigned long *bitmap, __u64 bitmap_offset, __u64 bitmap_len) { - __u64 sector = req->offset >> SECTOR_SHIFT; + __u64 sector = offset >> SECTOR_SHIFT; int i; - if (req->length > (sizeof(req->sector_mask) * 8) << SECTOR_SHIFT) + if (segment->length > (sizeof(segment->sector_mask) * 8) << SECTOR_SHIFT) panic("Operation too long"); if (req_op(req->req) == REQ_OP_READ) { - for (i = 0; i < req->length >> SECTOR_SHIFT; i++) { + for (i = 0; i < segment->length >> SECTOR_SHIFT; i++) { if(ubd_test_bit(sector + i, (unsigned char *) bitmap)) ubd_set_bit(i, (unsigned char *) - &req->sector_mask); + &segment->sector_mask); + } + } else { + cowify_bitmap(offset, segment->length, &segment->sector_mask, + &segment->cow_offset, bitmap, bitmap_offset, + segment->bitmap_words, bitmap_len); + } +} + +static void ubd_map_req(struct ubd *dev, struct io_thread_req *io_req, + struct request *req) +{ + struct bio_vec bvec; + struct req_iterator iter; + int i = 0; + unsigned long byte_offset = io_req->offset; + int op = req_op(req); + + if (op == REQ_OP_WRITE_ZEROES || op == REQ_OP_DISCARD) { + 
io_req->io_desc[0].buffer = NULL; + io_req->io_desc[0].length = blk_rq_bytes(req); + } else { + rq_for_each_segment(bvec, req, iter) { + BUG_ON(i >= io_req->desc_cnt); + + io_req->io_desc[i].buffer = + page_address(bvec.bv_page) + bvec.bv_offset; + io_req->io_desc[i].length = bvec.bv_len; + i++; + } + } + + if (dev->cow.file) { + for (i = 0; i < io_req->desc_cnt; i++) { + cowify_req(io_req, &io_req->io_desc[i], byte_offset, + dev->cow.bitmap, dev->cow.bitmap_offset, + dev->cow.bitmap_len); + byte_offset += io_req->io_desc[i].length; } + } - else cowify_bitmap(req->offset, req->length, &req->sector_mask, - &req->cow_offset, bitmap, bitmap_offset, - req->bitmap_words, bitmap_len); } -static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req, - u64 off, struct bio_vec *bvec) +static struct io_thread_req *ubd_alloc_req(struct ubd *dev, struct request *req, + int desc_cnt) { - struct ubd *dev = hctx->queue->queuedata; struct io_thread_req *io_req; - int ret; + int i; - io_req = kmalloc(sizeof(struct io_thread_req), GFP_ATOMIC); + io_req = kmalloc(sizeof(*io_req) + + (desc_cnt * sizeof(struct io_desc)), + GFP_ATOMIC); if (!io_req) - return -ENOMEM; + return NULL; io_req->req = req; if (dev->cow.file) @@ -1327,26 +1367,41 @@ static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req, else io_req->fds[0] = dev->fd; io_req->error = 0; - - if (bvec != NULL) { - io_req->buffer = page_address(bvec->bv_page) + bvec->bv_offset; - io_req->length = bvec->bv_len; - } else { - io_req->buffer = NULL; - io_req->length = blk_rq_bytes(req); - } - io_req->sectorsize = SECTOR_SIZE; io_req->fds[1] = dev->fd; - io_req->cow_offset = -1; - io_req->offset = off; - io_req->sector_mask = 0; + io_req->offset = (u64) blk_rq_pos(req) << SECTOR_SHIFT; io_req->offsets[0] = 0; io_req->offsets[1] = dev->cow.data_offset; - if (dev->cow.file) - cowify_req(io_req, dev->cow.bitmap, - dev->cow.bitmap_offset, dev->cow.bitmap_len); + for (i = 0 ; i < desc_cnt; i++) { + io_req->io_desc[i].sector_mask = 0; + io_req->io_desc[i].cow_offset = -1; + } + + return io_req; +} + +static int ubd_submit_request(struct ubd *dev, struct request *req) +{ + int segs = 0; + struct io_thread_req *io_req; + int ret; + int op = req_op(req); + + if (op == REQ_OP_FLUSH) + segs = 0; + else if (op == REQ_OP_WRITE_ZEROES || op == REQ_OP_DISCARD) + segs = 1; + else + segs = blk_rq_nr_phys_segments(req); + + io_req = ubd_alloc_req(dev, req, segs); + if (!io_req) + return -ENOMEM; + + io_req->desc_cnt = segs; + if (segs) + ubd_map_req(dev, io_req, req); ret = os_write_file(thread_fd, &io_req, sizeof(io_req)); if (ret != sizeof(io_req)) { @@ -1357,22 +1412,6 @@ static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req, return ret; } -static int queue_rw_req(struct blk_mq_hw_ctx *hctx, struct request *req) -{ - struct req_iterator iter; - struct bio_vec bvec; - int ret; - u64 off = (u64)blk_rq_pos(req) << SECTOR_SHIFT; - - rq_for_each_segment(bvec, req, iter) { - ret = ubd_queue_one_vec(hctx, req, off, &bvec); - if (ret < 0) - return ret; - off += bvec.bv_len; - } - return 0; -} - static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { @@ -1385,17 +1424,12 @@ static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx, spin_lock_irq(&ubd_dev->lock); switch (req_op(req)) { - /* operations with no lentgth/offset arguments */ case REQ_OP_FLUSH: - ret = ubd_queue_one_vec(hctx, req, 0, NULL); - break; case REQ_OP_READ: case REQ_OP_WRITE: - ret = queue_rw_req(hctx, req); 
- break; case REQ_OP_DISCARD: case REQ_OP_WRITE_ZEROES: - ret = ubd_queue_one_vec(hctx, req, (u64)blk_rq_pos(req) << 9, NULL); + ret = ubd_submit_request(ubd_dev, req); break; default: WARN_ON_ONCE(1); @@ -1483,22 +1517,22 @@ static int map_error(int error_code) * will result in unpredictable behaviour and/or crashes. */ -static int update_bitmap(struct io_thread_req *req) +static int update_bitmap(struct io_thread_req *req, struct io_desc *segment) { int n; - if(req->cow_offset == -1) + if (segment->cow_offset == -1) return map_error(0); - n = os_pwrite_file(req->fds[1], &req->bitmap_words, - sizeof(req->bitmap_words), req->cow_offset); - if (n != sizeof(req->bitmap_words)) + n = os_pwrite_file(req->fds[1], &segment->bitmap_words, + sizeof(segment->bitmap_words), segment->cow_offset); + if (n != sizeof(segment->bitmap_words)) return map_error(-n); return map_error(0); } -static void do_io(struct io_thread_req *req) +static void do_io(struct io_thread_req *req, struct io_desc *desc) { char *buf = NULL; unsigned long len; @@ -1513,21 +1547,20 @@ static void do_io(struct io_thread_req *req) return; } - nsectors = req->length / req->sectorsize; + nsectors = desc->length / req->sectorsize; start = 0; do { - bit = ubd_test_bit(start, (unsigned char *) &req->sector_mask); + bit = ubd_test_bit(start, (unsigned char *) &desc->sector_mask); end = start; while((end < nsectors) && - (ubd_test_bit(end, (unsigned char *) - &req->sector_mask) == bit)) + (ubd_test_bit(end, (unsigned char *) &desc->sector_mask) == bit)) end++; off = req->offset + req->offsets[bit] + start * req->sectorsize; len = (end - start) * req->sectorsize; - if (req->buffer != NULL) - buf = &req->buffer[start * req->sectorsize]; + if (desc->buffer != NULL) + buf = &desc->buffer[start * req->sectorsize]; switch (req_op(req->req)) { case REQ_OP_READ: @@ -1567,7 +1600,8 @@ static void do_io(struct io_thread_req *req) start = end; } while(start < nsectors); - req->error = update_bitmap(req); + req->offset += len; + req->error = update_bitmap(req, desc); } /* Changed in start_io_thread, which is serialized by being called only @@ -1600,8 +1634,13 @@ int io_thread(void *arg) } for (count = 0; count < n/sizeof(struct io_thread_req *); count++) { + struct io_thread_req *req = (*io_req_buffer)[count]; + int i; + io_count++; - do_io((*io_req_buffer)[count]); + for (i = 0; !req->error && i < req->desc_cnt; i++) + do_io(req, &(req->io_desc[i])); + } written = 0; From patchwork Mon Jan 4 15:57:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356762 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30620C433E9 for ; Mon, 4 Jan 2021 16:05:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0B71320769 for ; Mon, 4 Jan 2021 16:05:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728176AbhADQE5 (ORCPT ); Mon, 4 Jan 2021 11:04:57 -0500 Received: from 
mail.kernel.org ([198.145.29.99]:40970 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728095AbhADQDe (ORCPT ); Mon, 4 Jan 2021 11:03:34 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0B86122516; Mon, 4 Jan 2021 16:02:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776173; bh=TBFURPubX4lJg4Bp/uUQfXFe64zJLuJrE0DcFxmIhis=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ULQIIaa2watUVEbDzaf1+BE70vlxwGOTKPULJG2ghpIduIxdloeHjdE68CHBEytSY jyaqVyd1tPDN34yB2sjusNIMsvoENeiLfZ9Ds1WRVAPAA6DzmuKjQVoLbaok6d9q3e ZKm71xeIIaSOKc7MTQFYV8iCPuHUPs4/Kg+3wu1c= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Trond Myklebust , Sasha Levin Subject: [PATCH 5.10 54/63] NFSv4.2: Dont error when exiting early on a READ_PLUS buffer overflow Date: Mon, 4 Jan 2021 16:57:47 +0100 Message-Id: <20210104155711.434974524@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Trond Myklebust [ Upstream commit 503b934a752f7e789a5f33217520e0a79f3096ac ] Expanding the READ_PLUS extents can cause the read buffer to overflow. If it does, then don't error, but just exit early. Signed-off-by: Trond Myklebust Signed-off-by: Sasha Levin --- fs/nfs/nfs42xdr.c | 36 +++++++++++++++++------------------- 1 file changed, 17 insertions(+), 19 deletions(-) diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c index 8432bd6b95f08..c078f88552695 100644 --- a/fs/nfs/nfs42xdr.c +++ b/fs/nfs/nfs42xdr.c @@ -1019,29 +1019,24 @@ static int decode_deallocate(struct xdr_stream *xdr, struct nfs42_falloc_res *re return decode_op_hdr(xdr, OP_DEALLOCATE); } -static int decode_read_plus_data(struct xdr_stream *xdr, struct nfs_pgio_res *res, - uint32_t *eof) +static int decode_read_plus_data(struct xdr_stream *xdr, + struct nfs_pgio_res *res) { uint32_t count, recvd; uint64_t offset; __be32 *p; p = xdr_inline_decode(xdr, 8 + 4); - if (unlikely(!p)) - return -EIO; + if (!p) + return 1; p = xdr_decode_hyper(p, &offset); count = be32_to_cpup(p); recvd = xdr_align_data(xdr, res->count, count); res->count += recvd; - if (count > recvd) { - dprintk("NFS: server cheating in read reply: " - "count %u > recvd %u\n", count, recvd); - *eof = 0; + if (count > recvd) return 1; - } - return 0; } @@ -1052,18 +1047,16 @@ static int decode_read_plus_hole(struct xdr_stream *xdr, struct nfs_pgio_res *re __be32 *p; p = xdr_inline_decode(xdr, 8 + 8); - if (unlikely(!p)) - return -EIO; + if (!p) + return 1; p = xdr_decode_hyper(p, &offset); p = xdr_decode_hyper(p, &length); recvd = xdr_expand_hole(xdr, res->count, length); res->count += recvd; - if (recvd < length) { - *eof = 0; + if (recvd < length) return 1; - } return 0; } @@ -1088,12 +1081,12 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res) for (i = 0; i < segments; i++) { p = xdr_inline_decode(xdr, 4); - if (unlikely(!p)) - return -EIO; + if (!p) + goto early_out; type = be32_to_cpup(p++); if (type == NFS4_CONTENT_DATA) - status = decode_read_plus_data(xdr, res, &eof); + status = decode_read_plus_data(xdr, res); else if (type == NFS4_CONTENT_HOLE) status = decode_read_plus_hole(xdr, res, &eof); else @@ -1102,12 +1095,17 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res) 
if (status < 0) return status; if (status > 0) - break; + goto early_out; } out: res->eof = eof; return 0; +early_out: + if (unlikely(!i)) + return -EIO; + res->eof = 0; + return 0; } static int decode_seek(struct xdr_stream *xdr, struct nfs42_seek_res *res) From patchwork Mon Jan 4 15:57:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 411B7C433E0 for ; Mon, 4 Jan 2021 16:05:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0BD8C2072D for ; Mon, 4 Jan 2021 16:05:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727634AbhADQEm (ORCPT ); Mon, 4 Jan 2021 11:04:42 -0500 Received: from mail.kernel.org ([198.145.29.99]:41022 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728471AbhADQDi (ORCPT ); Mon, 4 Jan 2021 11:03:38 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 723282072D; Mon, 4 Jan 2021 16:02:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776177; bh=7ig1rKJIVwtSJ8LzofyjMuBDe2lGk1Ll4z/G22TAaMI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nWePgD/HB/Zvg0tSTYnAIbBhXqDbkcW0bNBFfB95NEjD1dwTQoU1z63OuMxEJgrvo SV+0ybxcFPOPBdjCcFr0rTdYsl2V9WkEh3FFSQLZJa8RxslMM6GFIZaVctuKex8Yj5 wNxhk7LvTKiNKv4qO+rNuyJc0XY8P4L39+ZooKL4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Jake Wang , Yongqiang Sun , Qingqing Zhuo , Alex Deucher , Sasha Levin Subject: [PATCH 5.10 56/63] drm/amd/display: updated wm table for Renoir Date: Mon, 4 Jan 2021 16:57:49 +0100 Message-Id: <20210104155711.521915510@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jake Wang [ Upstream commit 410066d24cfc1071be25e402510367aca9db5cb6 ] [Why] For certain timings, Renoir may underflow due to sr exit latency being too slow. [How] Updated wm table for renoir. 
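The diff below only raises the assumed self-refresh exit latencies. Conceptually, if the watermark calculation understates how long memory takes to exit self-refresh, the display FIFO can drain before data returns and the pipe underflows. A toy sketch of that relationship, with hypothetical numbers rather than DC's actual watermark math:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model: entering self-refresh is only safe if the data already buffered
 * for the display covers the time memory needs to exit self-refresh.
 * Understating sr_exit_time_us lets the hardware enter self-refresh with too
 * little data buffered.
 */
static bool self_refresh_safe(double buffered_us, double sr_exit_us)
{
        return buffered_us >= sr_exit_us;
}

int main(void)
{
        const double buffered_us = 10.5;  /* made-up figure for one timing */

        printf("old table (10.12 us exit): %s\n",
               self_refresh_safe(buffered_us, 10.12) ? "allowed" : "blocked");
        printf("new table (11.12 us exit): %s\n",
               self_refresh_safe(buffered_us, 11.12) ? "allowed" : "blocked");
        return 0;
}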
Signed-off-by: Jake Wang Reviewed-by: Yongqiang Sun Acked-by: Qingqing Zhuo Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- .../drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c index 6b431db146cd9..1c6e401dd4cce 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c @@ -704,24 +704,24 @@ static struct wm_table ddr4_wm_table_rn = { .wm_inst = WM_B, .wm_type = WM_TYPE_PSTATE_CHG, .pstate_latency_us = 11.72, - .sr_exit_time_us = 10.12, - .sr_enter_plus_exit_time_us = 11.48, + .sr_exit_time_us = 11.12, + .sr_enter_plus_exit_time_us = 12.48, .valid = true, }, { .wm_inst = WM_C, .wm_type = WM_TYPE_PSTATE_CHG, .pstate_latency_us = 11.72, - .sr_exit_time_us = 10.12, - .sr_enter_plus_exit_time_us = 11.48, + .sr_exit_time_us = 11.12, + .sr_enter_plus_exit_time_us = 12.48, .valid = true, }, { .wm_inst = WM_D, .wm_type = WM_TYPE_PSTATE_CHG, .pstate_latency_us = 11.72, - .sr_exit_time_us = 10.12, - .sr_enter_plus_exit_time_us = 11.48, + .sr_exit_time_us = 11.12, + .sr_enter_plus_exit_time_us = 12.48, .valid = true, }, } From patchwork Mon Jan 4 15:57:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1FA4DC43332 for ; Mon, 4 Jan 2021 16:05:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE1B6207AE for ; Mon, 4 Jan 2021 16:05:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728784AbhADQFt (ORCPT ); Mon, 4 Jan 2021 11:05:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:40012 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729091AbhADQDU (ORCPT ); Mon, 4 Jan 2021 11:03:20 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 092862245C; Mon, 4 Jan 2021 16:03:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776184; bh=+5kp/6j33OLmu6miCw7AecR6CM8H+7zPqlyEF1lYGXs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pUbtnXaaTKNbzKSwbt6aSbTaDvh1VQIWF94+ocPJpQ0zgqHwyS2buvcSt6BxcZL+u yE7Y7C7YDtlMa0Tkkfx/Ci6jDgHodgXQceQwI6xk0Gtq5qWxxDhlxo9TKp5be3k8U0 8MDcRn6A+8ZK3H/lG5Nblg9jo4KvZ+QCsS/bKR/Y= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Pavel Begunkov , Jens Axboe , Sasha Levin Subject: [PATCH 5.10 59/63] io_uring: remove racy overflow list fast checks Date: Mon, 4 Jan 2021 16:57:52 +0100 Message-Id: <20210104155711.666098637@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> References: <20210104155708.800470590@linuxfoundation.org> User-Agent: 
quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Pavel Begunkov [ Upstream commit 9cd2be519d05ee78876d55e8e902b7125f78b74f ] list_empty_careful() is not racy only if some conditions are met, i.e. no re-adds after del_init. io_cqring_overflow_flush() does list_move(), so it's actually racy. Remove those checks, we have ->cq_check_overflow for the fast path. Signed-off-by: Pavel Begunkov Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- fs/io_uring.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index e28eedab5365f..1f798c5c4213e 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -1636,8 +1636,6 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, LIST_HEAD(list); if (!force) { - if (list_empty_careful(&ctx->cq_overflow_list)) - return true; if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) == rings->cq_ring_entries)) return false; @@ -6579,8 +6577,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr) /* if we have a backlog and couldn't flush it all, return BUSY */ if (test_bit(0, &ctx->sq_check_overflow)) { - if (!list_empty(&ctx->cq_overflow_list) && - !io_cqring_overflow_flush(ctx, false, NULL, NULL)) + if (!io_cqring_overflow_flush(ctx, false, NULL, NULL)) return -EBUSY; } From patchwork Mon Jan 4 15:57:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 356764 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD40EC4332D for ; Mon, 4 Jan 2021 16:04:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7637A2072D for ; Mon, 4 Jan 2021 16:04:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728120AbhADQDv (ORCPT ); Mon, 4 Jan 2021 11:03:51 -0500 Received: from mail.kernel.org ([198.145.29.99]:41380 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728036AbhADQDt (ORCPT ); Mon, 4 Jan 2021 11:03:49 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 8A5C822507; Mon, 4 Jan 2021 16:03:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1609776189; bh=KWk5pWbTEHZ6TTGGZbFPEBpBo6ewhVXlMbkXg2F/BRQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RPRhoN+6iZwxwBczB+iU87PtDBERsJknPoOyIK/6d8GfG5IMiMdG94ji/hvMUS/lH CHVzy7FPZqKnqrfVxQBsdvwe/+/Du4E41nxr4g5cjJGRg6WCI6Hbl0d9vAf6IBEAB7 efmVVrmWlb1VtUg5cH+ic9RjQb4jcfiC26drw3+M= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Hyeongseok Kim , Sami Tolvanen , Mike Snitzer , Sasha Levin Subject: [PATCH 5.10 61/63] dm verity: skip verity work if I/O error when system is shutting down Date: Mon, 4 Jan 2021 16:57:54 +0100 Message-Id: <20210104155711.761767339@linuxfoundation.org> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210104155708.800470590@linuxfoundation.org> 
References: <20210104155708.800470590@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Hyeongseok Kim [ Upstream commit 252bd1256396cebc6fc3526127fdb0b317601318 ] If emergency system shutdown is called, like by thermal shutdown, a dm device could be alive when the block device couldn't process I/O requests anymore. In this state, the handling of I/O errors by new dm I/O requests or by those already in-flight can lead to a verity corruption state, which is a misjudgment. So, skip verity work in response to I/O error when system is shutting down. Signed-off-by: Hyeongseok Kim Reviewed-by: Sami Tolvanen Signed-off-by: Mike Snitzer Signed-off-by: Sasha Levin --- drivers/md/dm-verity-target.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index f74982dcbea0d..6b8e5bdd8526d 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -537,6 +537,15 @@ static int verity_verify_io(struct dm_verity_io *io) return 0; } +/* + * Skip verity work in response to I/O error when system is shutting down. + */ +static inline bool verity_is_system_shutting_down(void) +{ + return system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF + || system_state == SYSTEM_RESTART; +} + /* * End one "io" structure with a given error. */ @@ -564,7 +573,8 @@ static void verity_end_io(struct bio *bio) { struct dm_verity_io *io = bio->bi_private; - if (bio->bi_status && !verity_fec_is_enabled(io->v)) { + if (bio->bi_status && + (!verity_fec_is_enabled(io->v) || verity_is_system_shutting_down())) { verity_finish_io(io, bio->bi_status); return; }