From patchwork Sat Mar 12 04:43:13 2022
X-Patchwork-Submitter: Michael Wu <michael@allwinnertech.com>
X-Patchwork-Id: 550871
From: Michael Wu <michael@allwinnertech.com>
To: ulf.hansson@linaro.org, adrian.hunter@intel.com, avri.altman@wdc.com,
	beanhuo@micron.com, porzio@gmail.com, michael@allwinnertech.com
Cc: Michael Wu, Ulf Hansson, Adrian Hunter, Avri Altman, Luca Porzio,
	lixiang, Bean Huo,
	linux-mmc@vger.kernel.org (open list:MULTIMEDIA CARD (MMC), SECURE
	DIGITAL (SD) AND...), linux-kernel@vger.kernel.org (open list)
Subject: [PATCH] mmc: block: enable cache-flushing when mmc cache is on
Date: Sat, 12 Mar 2022 12:43:13 +0800
Message-Id: <20220312044315.7994-1-michael@allwinnertech.com>
X-Mailer: git-send-email 2.29.0
List-ID: linux-mmc@vger.kernel.org

The mmc core enables the cache by default, but it only enables
cache-flushing when the host supports CMD23 and the eMMC supports
reliable write. For hosts that do not support CMD23, or eMMC devices
that do not support reliable write, the cache cannot be flushed by the
`sync` command, which may lead to loss of cached data.

This patch enables cache-flushing as long as the cache is enabled,
regardless of whether the host supports CMD23 or the eMMC supports
reliable write.
Signed-off-by: Michael Wu <michael@allwinnertech.com>
---
 drivers/mmc/core/block.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 689eb9afeeed..1e508c079c1e 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2279,6 +2279,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	struct mmc_blk_data *md;
 	int devidx, ret;
 	char cap_str[10];
+	bool enable_cache = false;
+	bool enable_fua = false;

 	devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
 	if (devidx < 0) {
@@ -2375,12 +2377,18 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		md->flags |= MMC_BLK_CMD23;
 	}

-	if (mmc_card_mmc(card) &&
-	    md->flags & MMC_BLK_CMD23 &&
-	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
-	     card->ext_csd.rel_sectors)) {
-		md->flags |= MMC_BLK_REL_WR;
-		blk_queue_write_cache(md->queue.queue, true, true);
+	if (mmc_card_mmc(card)) {
+		if (md->flags & MMC_BLK_CMD23 &&
+		    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+		     card->ext_csd.rel_sectors)) {
+			md->flags |= MMC_BLK_REL_WR;
+			enable_fua = true;
+		}
+
+		if (mmc_cache_enabled(card->host))
+			enable_cache = true;
+
+		blk_queue_write_cache(md->queue.queue, enable_cache, enable_fua);
 	}

 	string_get_size((u64)size, 512, STRING_UNITS_2,