From patchwork Fri Sep 18 20:19:42 2020
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 263780
Subject: [PATCH v3] dm: Call proper helper to determine dax support
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: stable@vger.kernel.org, Adrian Huang, Jan Kara, Mike Snitzer,
    kernel test robot, dm-devel@redhat.com, linux-kernel@vger.kernel.org,
    ira.weiny@intel.com, mpatocka@redhat.com, snitzer@redhat.com
Date: Fri, 18 Sep 2020 13:19:42 -0700
Message-ID: <160046028990.22670.15271558589864899328.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
X-Mailing-List: stable@vger.kernel.org

From: Jan Kara

DM was calling generic_fsdax_supported() to determine whether a device
referenced in the DM table supports DAX. However, that is a helper for
"leaf" device drivers so that they don't have to duplicate common
generic checks. High-level code should call the dax_supported() helper,
which in turn calls into the appropriate helper for the particular
device. This problem manifested itself as kernel messages:

    dm-3: error: dax access failed (-95)

when the lvm2-testsuite was run in cases where a DM device was stacked
on top of another DM device.
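As a minimal sketch of the caller-side pattern described above (the
function name below is hypothetical and not part of this patch; the
helpers are the ones this patch exports and stubs out), a stacking
driver would probe DAX support roughly like this:

    /*
     * Sketch only: high-level code takes dax_read_lock() to pin the
     * dax_device, asks dax_supported(), which dispatches to the
     * device's own ->dax_supported() operation, and then drops the
     * lock, instead of calling generic_fsdax_supported() directly.
     */
    static bool example_stacked_dax_supported(struct dax_device *dax_dev,
                    struct block_device *bdev, int blocksize,
                    sector_t start, sector_t len)
    {
            bool rc;
            int id;

            id = dax_read_lock();
            rc = dax_supported(dax_dev, bdev, blocksize, start, len);
            dax_read_unlock(id);

            return rc;
    }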
Fixes: 7bf7eac8d648 ("dax: Arrange for dax_supported check to span multiple devices")
Cc: stable@vger.kernel.org
Tested-by: Adrian Huang
Signed-off-by: Jan Kara
Acked-by: Mike Snitzer
Reported-by: kernel test robot
Signed-off-by: Dan Williams
---
Changes since v2 [1]:
- Add dummy definitions for dax_read_{lock,unlock} in the CONFIG_DAX=n
  case (0day robot)

[1]: http://lore.kernel.org/r/160040692945.25320.13233625491405115889.stgit@dwillia2-desk3.amr.corp.intel.com

 drivers/dax/super.c   |    4 ++++
 drivers/md/dm-table.c |   10 +++++++---
 include/linux/dax.h   |   22 ++++++++++++++++++++--
 3 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index e5767c83ea23..b6284c5cae0a 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -325,11 +325,15 @@ EXPORT_SYMBOL_GPL(dax_direct_access);
 bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
 		int blocksize, sector_t start, sector_t len)
 {
+	if (!dax_dev)
+		return false;
+
 	if (!dax_alive(dax_dev))
 		return false;
 
 	return dax_dev->ops->dax_supported(dax_dev, bdev, blocksize, start, len);
 }
+EXPORT_SYMBOL_GPL(dax_supported);
 
 size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
 		size_t bytes, struct iov_iter *i)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 5edc3079e7c1..229f461e7def 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -860,10 +860,14 @@ EXPORT_SYMBOL_GPL(dm_table_set_type);
 int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
 			sector_t start, sector_t len, void *data)
 {
-	int blocksize = *(int *) data;
+	int blocksize = *(int *) data, id;
+	bool rc;
 
-	return generic_fsdax_supported(dev->dax_dev, dev->bdev, blocksize,
-				       start, len);
+	id = dax_read_lock();
+	rc = dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len);
+	dax_read_unlock(id);
+
+	return rc;
 }
 
 /* Check devices support synchronous DAX */
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 6904d4e0b2e0..d0af16b23122 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -130,6 +130,8 @@ static inline bool generic_fsdax_supported(struct dax_device *dax_dev,
 	return __generic_fsdax_supported(dax_dev, bdev, blocksize, start,
 			sectors);
 }
+bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
+		int blocksize, sector_t start, sector_t len);
 
 static inline void fs_put_dax(struct dax_device *dax_dev)
 {
@@ -157,6 +159,13 @@ static inline bool generic_fsdax_supported(struct dax_device *dax_dev,
 	return false;
 }
 
+static inline bool dax_supported(struct dax_device *dax_dev,
+		struct block_device *bdev, int blocksize, sector_t start,
+		sector_t len)
+{
+	return false;
+}
+
 static inline void fs_put_dax(struct dax_device *dax_dev)
 {
 }
@@ -189,14 +198,23 @@ static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
 }
 #endif
 
+#ifdef CONFIG_DAX
 int dax_read_lock(void);
 void dax_read_unlock(int id);
+#else
+static inline int dax_read_lock(void)
+{
+	return 0;
+}
+
+static inline void dax_read_unlock(int id)
+{
+}
+#endif /* CONFIG_DAX */
 bool dax_alive(struct dax_device *dax_dev);
 void *dax_get_private(struct dax_device *dax_dev);
 long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
 		void **kaddr, pfn_t *pfn);
-bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
-		int blocksize, sector_t start, sector_t len);
 size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
 		size_t bytes, struct iov_iter *i);
 size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,