From patchwork Wed Mar 24 19:28:37 2021
X-Patchwork-Submitter: Bodo Stroesser
X-Patchwork-Id: 409219
Petersen" Cc: Bodo Stroesser , Mike Christie Subject: [PATCH 1/6] scsi: target: tcmu: Adjust names of variables and definitions Date: Wed, 24 Mar 2021 20:28:37 +0100 Message-Id: <20210324192842.30446-2-bostroesser@gmail.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com> References: <20210324192842.30446-1-bostroesser@gmail.com> Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org From: Bodo Stroesser Some definitions and members of struct tcmu_dev had misleading names. Examples: - ring_size was used for the size of mailbox + cmd ring + data area - CMDR_SIZE was used for size of mailbox + cmd ring I added the new definition MB_CMDR_SIZE (mailbox + command ring), changed CMDR_SIZE to hold the size of the command ring only and replaced in struct tcmu_dev the member ring_size with mmap_pages, because the member is now used in tcmu_mmap() only, where we need page count, not size. I also added the new struct tcmu_dev member 'cmdr' which is used to replace some occurences of '(void *)mb + CMDR_OFF' with 'udev->cmdr' for better readability. Signed-off-by: Bodo Stroesser --- drivers/target/target_core_user.c | 39 ++++++++++++++++++++++----------------- 1 file changed, 22 insertions(+), 17 deletions(-) diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c index bdfc057f000c..35975dd75dde 100644 --- a/drivers/target/target_core_user.c +++ b/drivers/target/target_core_user.c @@ -60,8 +60,11 @@ #define TCMU_TIME_OUT (30 * MSEC_PER_SEC) -/* For cmd area, the size is fixed 8MB */ -#define CMDR_SIZE (8 * 1024 * 1024) +/* For mailbox plus cmd ring, the size is fixed 8MB */ +#define MB_CMDR_SIZE (8 * 1024 * 1024) +/* Offset of cmd ring is size of mailbox */ +#define CMDR_OFF sizeof(struct tcmu_mailbox) +#define CMDR_SIZE (MB_CMDR_SIZE - CMDR_OFF) /* * For data area, the block size is PAGE_SIZE and @@ -126,8 +129,10 @@ struct tcmu_dev { struct inode *inode; - struct tcmu_mailbox *mb_addr; uint64_t dev_size; + + struct tcmu_mailbox *mb_addr; + void *cmdr; u32 cmdr_size; u32 cmdr_last_cleaned; /* Offset of data area from start of mb */ @@ -135,7 +140,7 @@ struct tcmu_dev { size_t data_off; size_t data_size; uint32_t max_blocks; - size_t ring_size; + size_t mmap_pages; struct mutex cmdr_lock; struct list_head qfull_queue; @@ -166,8 +171,6 @@ struct tcmu_dev { #define TCMU_DEV(_se_dev) container_of(_se_dev, struct tcmu_dev, se_dev) -#define CMDR_OFF sizeof(struct tcmu_mailbox) - struct tcmu_cmd { struct se_cmd *se_cmd; struct tcmu_dev *tcmu_dev; @@ -941,7 +944,7 @@ static uint32_t ring_insert_padding(struct tcmu_dev *udev, size_t cmd_size) if (head_to_end(cmd_head, udev->cmdr_size) < cmd_size) { size_t pad_size = head_to_end(cmd_head, udev->cmdr_size); - hdr = (void *) mb + CMDR_OFF + cmd_head; + hdr = udev->cmdr + cmd_head; tcmu_hdr_set_op(&hdr->len_op, TCMU_OP_PAD); tcmu_hdr_set_len(&hdr->len_op, pad_size); hdr->cmd_id = 0; /* not used for PAD */ @@ -1065,7 +1068,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err) cmd_head = ring_insert_padding(udev, command_size); - entry = (void *) mb + CMDR_OFF + cmd_head; + entry = udev->cmdr + cmd_head; memset(entry, 0, command_size); tcmu_hdr_set_op(&entry->hdr.len_op, TCMU_OP_CMD); @@ -1157,7 +1160,7 @@ queue_tmr_ring(struct tcmu_dev *udev, struct tcmu_tmr *tmr) cmd_head = ring_insert_padding(udev, cmd_size); - entry = (void *)mb + CMDR_OFF + cmd_head; + entry = udev->cmdr + cmd_head; memset(entry, 0, cmd_size); tcmu_hdr_set_op(&entry->hdr.len_op, 
TCMU_OP_TMR); tcmu_hdr_set_len(&entry->hdr.len_op, cmd_size); @@ -1412,7 +1415,7 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev) while (udev->cmdr_last_cleaned != READ_ONCE(mb->cmd_tail)) { - struct tcmu_cmd_entry *entry = (void *) mb + CMDR_OFF + udev->cmdr_last_cleaned; + struct tcmu_cmd_entry *entry = udev->cmdr + udev->cmdr_last_cleaned; /* * Flush max. up to end of cmd ring since current entry might @@ -1851,7 +1854,7 @@ static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma) vma->vm_private_data = udev; /* Ensure the mmap is exactly the right size */ - if (vma_pages(vma) != (udev->ring_size >> PAGE_SHIFT)) + if (vma_pages(vma) != udev->mmap_pages) return -EINVAL; tcmu_vma_open(vma); @@ -2100,20 +2103,22 @@ static int tcmu_configure_device(struct se_device *dev) goto err_bitmap_alloc; } - udev->mb_addr = vzalloc(CMDR_SIZE); - if (!udev->mb_addr) { + mb = vzalloc(MB_CMDR_SIZE); + if (!mb) { ret = -ENOMEM; goto err_vzalloc; } /* mailbox fits in first part of CMDR space */ - udev->cmdr_size = CMDR_SIZE - CMDR_OFF; - udev->data_off = CMDR_SIZE; + udev->mb_addr = mb; + udev->cmdr = (void *)mb + CMDR_OFF; + udev->cmdr_size = CMDR_SIZE; + udev->data_off = MB_CMDR_SIZE; udev->data_size = udev->max_blocks * DATA_BLOCK_SIZE; + udev->mmap_pages = (udev->data_size + MB_CMDR_SIZE) >> PAGE_SHIFT; udev->dbi_thresh = 0; /* Default in Idle state */ /* Initialise the mailbox of the ring buffer */ - mb = udev->mb_addr; mb->version = TCMU_MAILBOX_VERSION; mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC | TCMU_MAILBOX_FLAG_CAP_READ_LEN | @@ -2129,7 +2134,7 @@ static int tcmu_configure_device(struct se_device *dev) info->mem[0].name = "tcm-user command & data buffer"; info->mem[0].addr = (phys_addr_t)(uintptr_t)udev->mb_addr; - info->mem[0].size = udev->ring_size = udev->data_size + CMDR_SIZE; + info->mem[0].size = udev->data_size + MB_CMDR_SIZE; info->mem[0].memtype = UIO_MEM_NONE; info->irqcontrol = tcmu_irqcontrol; From patchwork Wed Mar 24 19:28:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bodo Stroesser X-Patchwork-Id: 408563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.9 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BD63C433EC for ; Wed, 24 Mar 2021 19:29:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 24C8F61A0F for ; Wed, 24 Mar 2021 19:29:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237886AbhCXT33 (ORCPT ); Wed, 24 Mar 2021 15:29:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58940 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237780AbhCXT3F (ORCPT ); Wed, 24 Mar 2021 15:29:05 -0400 Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com [IPv6:2a00:1450:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DFB9AC061763; Wed, 24 Mar 2021 12:29:04 -0700 (PDT) Received: by 
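To make the new layout concrete, here is a minimal standalone C sketch
(not part of the series; the mailbox size of 104 bytes is an assumed
placeholder purely for illustration, the real value is
sizeof(struct tcmu_mailbox)):

	/* Region layout after patch 1: mailbox, then command ring,
	 * then data area starting at MB_CMDR_SIZE. */
	#include <stdio.h>

	#define MB_CMDR_SIZE (8 * 1024 * 1024)        /* mailbox + cmd ring */
	#define CMDR_OFF     104                      /* assumed stand-in for sizeof(struct tcmu_mailbox) */
	#define CMDR_SIZE    (MB_CMDR_SIZE - CMDR_OFF) /* cmd ring only, per this patch */

	int main(void)
	{
		printf("mailbox:   [0, %d)\n", CMDR_OFF);
		printf("cmd ring:  [%d, %d)\n", CMDR_OFF, MB_CMDR_SIZE);
		printf("data area: [%d, ...)\n", MB_CMDR_SIZE);
		return 0;
	}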
From patchwork Wed Mar 24 19:28:38 2021
X-Patchwork-Submitter: Bodo Stroesser
X-Patchwork-Id: 408563
From: Bodo Stroesser
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, "Martin K. Petersen"
Cc: Bodo Stroesser, Mike Christie
Subject: [PATCH 2/6] scsi: target: tcmu: Prepare for PAGE_SIZE != DATA_BLOCK_SIZE
Date: Wed, 24 Mar 2021 20:28:38 +0100
Message-Id: <20210324192842.30446-3-bostroesser@gmail.com>
In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com>
References: <20210324192842.30446-1-bostroesser@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

This patch renames some variables and definitions as a first
preparation for DATA_BLOCK_SIZE != PAGE_SIZE and adds the new
DATA_PAGES_PER_BLK definition holding the number of pages per data
block.

I also renamed tcmu_try_get_block_page() to tcmu_try_get_data_page(),
but kept the name tcmu_get_block_page(), since it will go away in a
following patch, when there is only one caller left.

The following patches will then add full support for
DATA_PAGES_PER_BLK != 1, which also means
DATA_BLOCK_SIZE = DATA_PAGES_PER_BLK * PAGE_SIZE.

Signed-off-by: Bodo Stroesser
---
 drivers/target/target_core_user.c | 82 +++++++++++++++++++--------------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 35975dd75dde..f42d38873aaf 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -71,17 +71,17 @@
  * the total size is 256K * PAGE_SIZE.
  */
 #define DATA_BLOCK_SIZE PAGE_SIZE
-#define DATA_BLOCK_SHIFT PAGE_SHIFT
+#define DATA_PAGES_PER_BLK 1
 #define DATA_BLOCK_BITS_DEF (256 * 1024)
 
-#define TCMU_MBS_TO_BLOCKS(_mbs) (_mbs << (20 - DATA_BLOCK_SHIFT))
-#define TCMU_BLOCKS_TO_MBS(_blocks) (_blocks >> (20 - DATA_BLOCK_SHIFT))
+#define TCMU_MBS_TO_PAGES(_mbs) (_mbs << (20 - PAGE_SHIFT))
+#define TCMU_PAGES_TO_MBS(_pages) (_pages >> (20 - PAGE_SHIFT))
 
 /*
  * Default number of global data blocks(512K * PAGE_SIZE)
  * when the unmap thread will be started.
  */
-#define TCMU_GLOBAL_MAX_BLOCKS_DEF (512 * 1024)
+#define TCMU_GLOBAL_MAX_PAGES_DEF (512 * 1024)
 
 static u8 tcmu_kern_cmd_reply_supported;
 static u8 tcmu_netlink_blocked;
@@ -149,7 +149,7 @@ struct tcmu_dev {
 	uint32_t dbi_max;
 	uint32_t dbi_thresh;
 	unsigned long *data_bitmap;
-	struct xarray data_blocks;
+	struct xarray data_pages;
 
 	struct xarray commands;
 
@@ -219,9 +219,9 @@ static LIST_HEAD(timed_out_udevs);
 
 static struct kmem_cache *tcmu_cmd_cache;
 
-static atomic_t global_db_count = ATOMIC_INIT(0);
+static atomic_t global_page_count = ATOMIC_INIT(0);
 static struct delayed_work tcmu_unmap_work;
-static int tcmu_global_max_blocks = TCMU_GLOBAL_MAX_BLOCKS_DEF;
+static int tcmu_global_max_pages = TCMU_GLOBAL_MAX_PAGES_DEF;
 
 static int tcmu_set_global_max_data_area(const char *str,
 					 const struct kernel_param *kp)
@@ -237,8 +237,8 @@ static int tcmu_set_global_max_data_area(const char *str,
 		return -EINVAL;
 	}
 
-	tcmu_global_max_blocks = TCMU_MBS_TO_BLOCKS(max_area_mb);
-	if (atomic_read(&global_db_count) > tcmu_global_max_blocks)
+	tcmu_global_max_pages = TCMU_MBS_TO_PAGES(max_area_mb);
+	if (atomic_read(&global_page_count) > tcmu_global_max_pages)
 		schedule_delayed_work(&tcmu_unmap_work, 0);
 	else
 		cancel_delayed_work_sync(&tcmu_unmap_work);
@@ -249,7 +249,7 @@ static int tcmu_set_global_max_data_area(const char *str,
 static int tcmu_get_global_max_data_area(char *buffer,
 					 const struct kernel_param *kp)
 {
-	return sprintf(buffer, "%d\n", TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
+	return sprintf(buffer, "%d\n", TCMU_PAGES_TO_MBS(tcmu_global_max_pages));
 }
 
 static const struct kernel_param_ops tcmu_global_max_data_area_op = {
@@ -510,10 +510,10 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev,
 	if (dbi == udev->dbi_thresh)
 		return -1;
 
-	page = xa_load(&udev->data_blocks, dbi);
+	page = xa_load(&udev->data_pages, dbi);
 	if (!page) {
-		if (atomic_add_return(1, &global_db_count) >
-				      tcmu_global_max_blocks)
+		if (atomic_add_return(1, &global_page_count) >
+				      tcmu_global_max_pages)
 			schedule_delayed_work(&tcmu_unmap_work, 0);
 
 		/* try to get new page from the mm */
@@ -521,7 +521,7 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev,
 		if (!page)
 			goto err_alloc;
 
-		if (xa_store(&udev->data_blocks, dbi, page, GFP_NOIO))
+		if (xa_store(&udev->data_pages, dbi, page, GFP_NOIO))
 			goto err_insert;
 	}
 
@@ -538,7 +538,7 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev,
 err_insert:
 	__free_page(page);
 err_alloc:
-	atomic_dec(&global_db_count);
+	atomic_dec(&global_page_count);
 	return -1;
 }
 
@@ -558,9 +558,9 @@ static int tcmu_get_empty_blocks(struct tcmu_dev *udev,
 }
 
 static inline struct page *
-tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
+tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dpi)
 {
-	return xa_load(&udev->data_blocks, dbi);
+	return xa_load(&udev->data_pages, dpi);
 }
 
 static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
@@ -1454,7 +1454,7 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 	if (free_space)
 		free_space = tcmu_run_tmr_queue(udev);
 
-	if (atomic_read(&global_db_count) > tcmu_global_max_blocks &&
+	if (atomic_read(&global_page_count) > tcmu_global_max_pages &&
 	    xa_empty(&udev->commands) && list_empty(&udev->qfull_queue)) {
 		/*
 		 * Allocated blocks exceeded global block limit, currently no
@@ -1583,7 +1583,7 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
 	timer_setup(&udev->qfull_timer, tcmu_qfull_timedout, 0);
 	timer_setup(&udev->cmd_timer, tcmu_cmd_timedout, 0);
 
-	xa_init(&udev->data_blocks);
+	xa_init(&udev->data_pages);
 
 	return &udev->se_dev;
 }
@@ -1617,7 +1617,7 @@ static void tcmu_blocks_release(struct xarray *blocks, unsigned long first,
 	xas_for_each(&xas, page, last) {
 		xas_store(&xas, NULL);
 		__free_page(page);
-		atomic_dec(&global_db_count);
+		atomic_dec(&global_page_count);
 	}
 	xas_unlock(&xas);
 }
@@ -1661,7 +1661,7 @@ static void tcmu_dev_kref_release(struct kref *kref)
 	xa_destroy(&udev->commands);
 	WARN_ON(!all_expired);
 
-	tcmu_blocks_release(&udev->data_blocks, 0, udev->dbi_max);
+	tcmu_blocks_release(&udev->data_pages, 0, udev->dbi_max);
 	bitmap_free(udev->data_bitmap);
 	mutex_unlock(&udev->cmdr_lock);
 
@@ -1759,12 +1759,12 @@ static int tcmu_find_mem_index(struct vm_area_struct *vma)
 	return -1;
 }
 
-static struct page *tcmu_try_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
+static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 {
 	struct page *page;
 
 	mutex_lock(&udev->cmdr_lock);
-	page = tcmu_get_block_page(udev, dbi);
+	page = tcmu_get_block_page(udev, dpi);
 	if (likely(page)) {
 		mutex_unlock(&udev->cmdr_lock);
 		return page;
@@ -1774,12 +1774,11 @@ static struct page *tcmu_try_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
 	 * Userspace messed up and passed in a address not in the
 	 * data iov passed to it.
	 */
-	pr_err("Invalid addr to data block mapping (dbi %u) on device %s\n",
-	       dbi, udev->name);
-	page = NULL;
+	pr_err("Invalid addr to data page mapping (dpi %u) on device %s\n",
+	       dpi, udev->name);
 	mutex_unlock(&udev->cmdr_lock);
 
-	return page;
+	return NULL;
 }
 
 static void tcmu_vma_open(struct vm_area_struct *vma)
@@ -1824,11 +1823,11 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 		addr = (void *)(unsigned long)info->mem[mi].addr + offset;
 		page = vmalloc_to_page(addr);
 	} else {
-		uint32_t dbi;
+		uint32_t dpi;
 
 		/* For the dynamically growing data area pages */
-		dbi = (offset - udev->data_off) / DATA_BLOCK_SIZE;
-		page = tcmu_try_get_block_page(udev, dbi);
+		dpi = (offset - udev->data_off) / PAGE_SIZE;
+		page = tcmu_try_get_data_page(udev, dpi);
 		if (!page)
 			return VM_FAULT_SIGBUS;
 	}
@@ -2344,7 +2343,7 @@ static int tcmu_set_dev_attrib(substring_t *arg, u32 *dev_attrib)
 
 static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 {
-	int val, ret;
+	int val, ret, blks;
 
 	ret = match_int(arg, &val);
 	if (ret < 0) {
@@ -2353,7 +2352,8 @@ static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 		return ret;
 	}
 
-	if (val <= 0) {
+	blks = TCMU_MBS_TO_PAGES(val) / DATA_PAGES_PER_BLK;
+	if (blks <= 0) {
 		pr_err("Invalid max_data_area %d.\n", val);
 		return -EINVAL;
 	}
@@ -2365,11 +2365,11 @@ static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 		goto unlock;
 	}
 
-	udev->max_blocks = TCMU_MBS_TO_BLOCKS(val);
-	if (udev->max_blocks > tcmu_global_max_blocks) {
+	udev->max_blocks = blks;
+	if (udev->max_blocks * DATA_PAGES_PER_BLK > tcmu_global_max_pages) {
 		pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
-		       val, TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
-		udev->max_blocks = tcmu_global_max_blocks;
+		       val, TCMU_PAGES_TO_MBS(tcmu_global_max_pages));
+		udev->max_blocks = tcmu_global_max_pages / DATA_PAGES_PER_BLK;
 	}
 
 unlock:
@@ -2449,7 +2449,7 @@ static ssize_t tcmu_show_configfs_dev_params(struct se_device *dev, char *b)
 		     udev->dev_config[0] ? udev->dev_config : "NULL");
 	bl += sprintf(b + bl, "Size: %llu ", udev->dev_size);
 	bl += sprintf(b + bl, "MaxDataAreaMB: %u\n",
-		      TCMU_BLOCKS_TO_MBS(udev->max_blocks));
+		      TCMU_PAGES_TO_MBS(udev->max_blocks * DATA_PAGES_PER_BLK));
 
 	return bl;
 }
@@ -2544,7 +2544,7 @@ static ssize_t tcmu_max_data_area_mb_show(struct config_item *item, char *page)
 	struct tcmu_dev *udev = TCMU_DEV(da->da_dev);
 
 	return snprintf(page, PAGE_SIZE, "%u\n",
-			TCMU_BLOCKS_TO_MBS(udev->max_blocks));
+			TCMU_PAGES_TO_MBS(udev->max_blocks * DATA_PAGES_PER_BLK));
 }
 CONFIGFS_ATTR_RO(tcmu_, max_data_area_mb);
 
@@ -2904,7 +2904,7 @@ static void find_free_blocks(void)
 	loff_t off;
 	u32 start, end, block, total_freed = 0;
 
-	if (atomic_read(&global_db_count) <= tcmu_global_max_blocks)
+	if (atomic_read(&global_page_count) <= tcmu_global_max_pages)
 		return;
 
 	mutex_lock(&root_udev_mutex);
@@ -2949,7 +2949,7 @@ static void find_free_blocks(void)
 		unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
 
 		/* Release the block pages */
-		tcmu_blocks_release(&udev->data_blocks, start, end - 1);
+		tcmu_blocks_release(&udev->data_pages, start, end - 1);
 		mutex_unlock(&udev->cmdr_lock);
 
 		total_freed += end - start;
@@ -2958,7 +2958,7 @@ static void find_free_blocks(void)
 	}
 	mutex_unlock(&root_udev_mutex);
 
-	if (atomic_read(&global_db_count) > tcmu_global_max_blocks)
+	if (atomic_read(&global_page_count) > tcmu_global_max_pages)
 		schedule_delayed_work(&tcmu_unmap_work, msecs_to_jiffies(5000));
 }
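As a side note on the new conversion macros: a minimal standalone
sketch (not part of the series), assuming 4 KiB pages
(PAGE_SHIFT == 12), shows how megabytes and page counts relate:

	#include <stdio.h>

	#define PAGE_SHIFT 12  /* assumed: 4 KiB pages */
	#define TCMU_MBS_TO_PAGES(_mbs)   ((_mbs) << (20 - PAGE_SHIFT))
	#define TCMU_PAGES_TO_MBS(_pages) ((_pages) >> (20 - PAGE_SHIFT))

	int main(void)
	{
		/* 1 MB is 1 << 20 bytes, so 1 MB == 256 4-KiB pages */
		printf("1024 MB = %d pages\n", TCMU_MBS_TO_PAGES(1024));     /* 262144 */
		printf("262144 pages = %d MB\n", TCMU_PAGES_TO_MBS(262144)); /* 1024 */
		return 0;
	}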
From patchwork Wed Mar 24 19:28:39 2021
X-Patchwork-Submitter: Bodo Stroesser
X-Patchwork-Id: 409218
From: Bodo Stroesser
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, "Martin K. Petersen"
Cc: Bodo Stroesser, Mike Christie
Subject: [PATCH 3/6] scsi: target: tcmu: Support DATA_BLOCK_SIZE = N * PAGE_SIZE
Date: Wed, 24 Mar 2021 20:28:39 +0100
Message-Id: <20210324192842.30446-4-bostroesser@gmail.com>
In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com>
References: <20210324192842.30446-1-bostroesser@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

This patch changes tcmu to support DATA_BLOCK_SIZE being a multiple of
PAGE_SIZE. There are two reasons why one would like to have a bigger
DATA_BLOCK_SIZE:

1) If userspace - e.g. due to data compression, encryption or
   deduplication - needs to receive or transmit data in a consecutive
   buffer, we can define DATA_BLOCK_SIZE to the max. size of a SCSI
   READ/WRITE to enforce that userspace sees just one consecutive
   buffer. That way we can avoid the need for data copy in userspace.

2) Using a bigger data block size can speed up command processing in
   tcmu. The number of free data blocks to look up in the bitmap is
   reduced by far. The lookup for data pages in the xarray can be done
   more efficiently if there are multiple pages in a data block. The
   max. number of IOVs to set up is lower, so cmd entries in the ring
   become smaller.

Note: This patch still keeps DATA_BLOCK_SIZE as a preprocessor
definition. So testing this patch with DATA_BLOCK_SIZE > PAGE_SIZE is
possible only by changing the code and building target_core_user.ko
again. (Just change DATA_PAGES_PER_BLK to a higher value than the
default of 1.)

The next patches will replace the definitions DATA_PAGES_PER_BLK and
DATA_BLOCK_SIZE with new struct tcmu_dev members data_blk_size and
data_pages_per_blk, and finally will make data_pages_per_blk
changeable via configFS. The block-to-page index arithmetic introduced
here is illustrated in the sketch after this patch.

Signed-off-by: Bodo Stroesser
---
 drivers/target/target_core_user.c | 215 +++++++++++++++++++++-----------------
 1 file changed, 121 insertions(+), 94 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index f42d38873aaf..3596346f362e 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -67,14 +67,14 @@
 #define CMDR_SIZE (MB_CMDR_SIZE - CMDR_OFF)
 
 /*
- * For data area, the block size is PAGE_SIZE and
- * the total size is 256K * PAGE_SIZE.
+ * For data area, the default block size is PAGE_SIZE and
+ * the default total size is 256K * PAGE_SIZE.
  */
-#define DATA_BLOCK_SIZE PAGE_SIZE
 #define DATA_PAGES_PER_BLK 1
-#define DATA_BLOCK_BITS_DEF (256 * 1024)
+#define DATA_BLOCK_SIZE (DATA_PAGES_PER_BLK * PAGE_SIZE)
+#define DATA_AREA_PAGES_DEF (256 * 1024)
 
-#define TCMU_MBS_TO_PAGES(_mbs) (_mbs << (20 - PAGE_SHIFT))
+#define TCMU_MBS_TO_PAGES(_mbs) ((size_t)_mbs << (20 - PAGE_SHIFT))
 #define TCMU_PAGES_TO_MBS(_pages) (_pages >> (20 - PAGE_SHIFT))
 
 /*
@@ -138,7 +138,7 @@ struct tcmu_dev {
 	/* Offset of data area from start of mb */
 	/* Must add data_off and mb_addr to get the address */
 	size_t data_off;
-	size_t data_size;
+	int data_area_mb;
 	uint32_t max_blocks;
 	size_t mmap_pages;
 
@@ -501,31 +501,39 @@ static void tcmu_cmd_free_data(struct tcmu_cmd *tcmu_cmd, uint32_t len)
 
 static inline int tcmu_get_empty_block(struct tcmu_dev *udev,
 				       struct tcmu_cmd *tcmu_cmd,
-				       int prev_dbi, int *iov_cnt)
+				       int prev_dbi, int length, int *iov_cnt)
 {
+	XA_STATE(xas, &udev->data_pages, 0);
 	struct page *page;
-	int dbi;
+	int i, cnt, dbi;
+	int page_cnt = DIV_ROUND_UP(length, PAGE_SIZE);
 
 	dbi = find_first_zero_bit(udev->data_bitmap, udev->dbi_thresh);
 	if (dbi == udev->dbi_thresh)
 		return -1;
 
-	page = xa_load(&udev->data_pages, dbi);
-	if (!page) {
-		if (atomic_add_return(1, &global_page_count) >
-				      tcmu_global_max_pages)
-			schedule_delayed_work(&tcmu_unmap_work, 0);
+	/* Count the number of already allocated pages */
+	xas_set(&xas, dbi * DATA_PAGES_PER_BLK);
+	for (cnt = 0; xas_next(&xas) && cnt < page_cnt;)
+		cnt++;
 
+	for (i = cnt; i < page_cnt; i++) {
 		/* try to get new page from the mm */
 		page = alloc_page(GFP_NOIO);
 		if (!page)
-			goto err_alloc;
+			break;
 
-		if (xa_store(&udev->data_pages, dbi, page, GFP_NOIO))
-			goto err_insert;
+		if (xa_store(&udev->data_pages, dbi * DATA_PAGES_PER_BLK + i,
+			     page, GFP_NOIO)) {
+			__free_page(page);
+			break;
+		}
 	}
+	if (atomic_add_return(i - cnt, &global_page_count) >
+			      tcmu_global_max_pages)
+		schedule_delayed_work(&tcmu_unmap_work, 0);
 
-	if (dbi > udev->dbi_max)
+	if (i && dbi > udev->dbi_max)
 		udev->dbi_max = dbi;
 
 	set_bit(dbi, udev->data_bitmap);
@@ -534,23 +542,19 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev,
 	if (dbi != prev_dbi + 1)
 		*iov_cnt += 1;
 
-	return dbi;
-err_insert:
-	__free_page(page);
-err_alloc:
-	atomic_dec(&global_page_count);
-	return -1;
+	return i == page_cnt ? dbi : -1;
 }
 
 static int tcmu_get_empty_blocks(struct tcmu_dev *udev,
-				 struct tcmu_cmd *tcmu_cmd, int dbi_cnt)
+				 struct tcmu_cmd *tcmu_cmd, int length)
 {
 	/* start value of dbi + 1 must not be a valid dbi */
 	int dbi = -2;
-	int i, iov_cnt = 0;
+	int blk_len, iov_cnt = 0;
 
-	for (i = 0; i < dbi_cnt; i++) {
-		dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, &iov_cnt);
+	for (; length > 0; length -= DATA_BLOCK_SIZE) {
+		blk_len = min_t(int, length, DATA_BLOCK_SIZE);
+		dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, blk_len, &iov_cnt);
 		if (dbi < 0)
 			return -1;
 	}
@@ -698,9 +702,11 @@ static inline void tcmu_copy_data(struct tcmu_dev *udev,
 				  struct scatterlist *sg, unsigned int sg_nents,
 				  struct iovec **iov, size_t data_len)
 {
+	XA_STATE(xas, &udev->data_pages, 0);
 	/* start value of dbi + 1 must not be a valid dbi */
 	int dbi = -2;
-	size_t block_remaining, cp_len;
+	size_t page_remaining, cp_len;
+	int page_cnt, page_inx;
 	struct sg_mapping_iter sg_iter;
 	unsigned int sg_flags;
 	struct page *page;
@@ -718,37 +724,48 @@ static inline void tcmu_copy_data(struct tcmu_dev *udev,
 					       data_len);
 		else
 			dbi = tcmu_cmd_get_dbi(tcmu_cmd);
-		page = tcmu_get_block_page(udev, dbi);
-		if (direction == TCMU_DATA_AREA_TO_SG)
-			flush_dcache_page(page);
-		data_page_start = kmap_atomic(page);
-		block_remaining = DATA_BLOCK_SIZE;
-
-		while (block_remaining && data_len) {
-			if (!sg_miter_next(&sg_iter)) {
-				/* set length to 0 to abort outer loop */
-				data_len = 0;
-				pr_debug("tcmu_move_data: aborting data copy due to exhausted sg_list\n");
-				break;
+
+		page_cnt = DIV_ROUND_UP(data_len, PAGE_SIZE);
+		if (page_cnt > DATA_PAGES_PER_BLK)
+			page_cnt = DATA_PAGES_PER_BLK;
+
+		xas_set(&xas, dbi * DATA_PAGES_PER_BLK);
+		for (page_inx = 0; page_inx < page_cnt && data_len; page_inx++) {
+			page = xas_next(&xas);
+
+			if (direction == TCMU_DATA_AREA_TO_SG)
+				flush_dcache_page(page);
+			data_page_start = kmap_atomic(page);
+			page_remaining = PAGE_SIZE;
+
+			while (page_remaining && data_len) {
+				if (!sg_miter_next(&sg_iter)) {
+					/* set length to 0 to abort outer loop */
+					data_len = 0;
+					pr_debug("%s: aborting data copy due to exhausted sg_list\n",
+						 __func__);
+					break;
+				}
+				cp_len = min3(sg_iter.length, page_remaining,
+					      data_len);
+
+				data_addr = data_page_start +
+					    PAGE_SIZE - page_remaining;
+				if (direction == TCMU_SG_TO_DATA_AREA)
+					memcpy(data_addr, sg_iter.addr, cp_len);
+				else
+					memcpy(sg_iter.addr, data_addr, cp_len);
+
+				data_len -= cp_len;
+				page_remaining -= cp_len;
+				sg_iter.consumed = cp_len;
 			}
-			cp_len = min3(sg_iter.length, block_remaining, data_len);
+			sg_miter_stop(&sg_iter);
 
-			data_addr = data_page_start +
-				    DATA_BLOCK_SIZE - block_remaining;
+			kunmap_atomic(data_page_start);
 			if (direction == TCMU_SG_TO_DATA_AREA)
-				memcpy(data_addr, sg_iter.addr, cp_len);
-			else
-				memcpy(sg_iter.addr, data_addr, cp_len);
-
-			data_len -= cp_len;
-			block_remaining -= cp_len;
-			sg_iter.consumed = cp_len;
+				flush_dcache_page(page);
 		}
-		sg_miter_stop(&sg_iter);
-
-		kunmap_atomic(data_page_start);
-		if (direction == TCMU_SG_TO_DATA_AREA)
-			flush_dcache_page(page);
 	}
 }
 
@@ -858,13 +875,12 @@ static int tcmu_alloc_data_space(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 		udev->dbi_thresh = udev->max_blocks;
 	}
 
-	iov_cnt = tcmu_get_empty_blocks(udev, cmd,
-					cmd->dbi_cnt - cmd->dbi_bidi_cnt);
+	iov_cnt = tcmu_get_empty_blocks(udev, cmd, cmd->se_cmd->data_length);
 	if (iov_cnt < 0)
 		return -1;
 
 	if (cmd->dbi_bidi_cnt) {
-		ret = tcmu_get_empty_blocks(udev, cmd, cmd->dbi_bidi_cnt);
+		ret = tcmu_get_empty_blocks(udev, cmd, cmd->data_len_bidi);
 		if (ret < 0)
 			return -1;
 	}
@@ -1020,9 +1036,9 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
 	if (!list_empty(&udev->qfull_queue))
 		goto queue;
 
-	if (data_length > udev->data_size) {
+	if (data_length > udev->max_blocks * DATA_BLOCK_SIZE) {
 		pr_warn("TCMU: Request of size %zu is too big for %zu data area\n",
-			data_length, udev->data_size);
+			data_length, udev->max_blocks * DATA_BLOCK_SIZE);
 		*scsi_err = TCM_INVALID_CDB_FIELD;
 		return -1;
 	}
@@ -1570,7 +1586,8 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
 	udev->cmd_time_out = TCMU_TIME_OUT;
 	udev->qfull_time_out = -1;
 
-	udev->max_blocks = DATA_BLOCK_BITS_DEF;
+	udev->max_blocks = DATA_AREA_PAGES_DEF / DATA_PAGES_PER_BLK;
+	udev->data_area_mb = TCMU_PAGES_TO_MBS(DATA_AREA_PAGES_DEF);
 	mutex_init(&udev->cmdr_lock);
 
 	INIT_LIST_HEAD(&udev->node);
@@ -1607,19 +1624,24 @@ static int tcmu_check_and_free_pending_cmd(struct tcmu_cmd *cmd)
 	return -EINVAL;
 }
 
-static void tcmu_blocks_release(struct xarray *blocks, unsigned long first,
+static u32 tcmu_blocks_release(struct xarray *blocks, unsigned long first,
 				unsigned long last)
 {
-	XA_STATE(xas, blocks, first);
+	XA_STATE(xas, blocks, first * DATA_PAGES_PER_BLK);
 	struct page *page;
+	u32 pages_freed = 0;
 
 	xas_lock(&xas);
-	xas_for_each(&xas, page, last) {
+	xas_for_each(&xas, page, (last + 1) * DATA_PAGES_PER_BLK - 1) {
 		xas_store(&xas, NULL);
 		__free_page(page);
-		atomic_dec(&global_page_count);
+		pages_freed++;
 	}
 	xas_unlock(&xas);
+
+	atomic_sub(pages_freed, &global_page_count);
+
+	return pages_freed;
 }
 
 static void tcmu_remove_all_queued_tmr(struct tcmu_dev *udev)
@@ -2086,6 +2108,7 @@ static int tcmu_configure_device(struct se_device *dev)
 	struct tcmu_dev *udev = TCMU_DEV(dev);
 	struct uio_info *info;
 	struct tcmu_mailbox *mb;
+	size_t data_size;
 	int ret = 0;
 
 	ret = tcmu_update_uio_info(udev);
@@ -2113,8 +2136,8 @@ static int tcmu_configure_device(struct se_device *dev)
 	udev->cmdr = (void *)mb + CMDR_OFF;
 	udev->cmdr_size = CMDR_SIZE;
 	udev->data_off = MB_CMDR_SIZE;
-	udev->data_size = udev->max_blocks * DATA_BLOCK_SIZE;
-	udev->mmap_pages = (udev->data_size + MB_CMDR_SIZE) >> PAGE_SHIFT;
+	data_size = TCMU_MBS_TO_PAGES(udev->data_area_mb) << PAGE_SHIFT;
+	udev->mmap_pages = (data_size + MB_CMDR_SIZE) >> PAGE_SHIFT;
 	udev->dbi_thresh = 0; /* Default in Idle state */
 
 	/* Initialise the mailbox of the ring buffer */
@@ -2126,14 +2149,13 @@ static int tcmu_configure_device(struct se_device *dev)
 	mb->cmdr_size = udev->cmdr_size;
 
 	WARN_ON(!PAGE_ALIGNED(udev->data_off));
-	WARN_ON(udev->data_size % PAGE_SIZE);
-	WARN_ON(udev->data_size % DATA_BLOCK_SIZE);
+	WARN_ON(data_size % PAGE_SIZE);
 
 	info->version = __stringify(TCMU_MAILBOX_VERSION);
 
 	info->mem[0].name = "tcm-user command & data buffer";
 	info->mem[0].addr = (phys_addr_t)(uintptr_t)udev->mb_addr;
-	info->mem[0].size = udev->data_size + MB_CMDR_SIZE;
+	info->mem[0].size = data_size + MB_CMDR_SIZE;
 	info->mem[0].memtype = UIO_MEM_NONE;
 
 	info->irqcontrol = tcmu_irqcontrol;
@@ -2343,7 +2365,7 @@ static int tcmu_set_dev_attrib(substring_t *arg, u32 *dev_attrib)
 
 static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 {
-	int val, ret, blks;
+	int val, ret;
 
 	ret = match_int(arg, &val);
 	if (ret < 0) {
@@ -2351,26 +2373,30 @@ static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 		       ret);
 		return ret;
 	}
-
-	blks = TCMU_MBS_TO_PAGES(val) / DATA_PAGES_PER_BLK;
-	if (blks <= 0) {
+	if (val <= 0) {
 		pr_err("Invalid max_data_area %d.\n", val);
 		return -EINVAL;
 	}
-
-	mutex_lock(&udev->cmdr_lock);
-	if (udev->data_bitmap) {
-		pr_err("Cannot set max_data_area_mb after it has been enabled.\n");
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	udev->max_blocks = blks;
-	if (udev->max_blocks * DATA_PAGES_PER_BLK > tcmu_global_max_pages) {
+	if (val > TCMU_PAGES_TO_MBS(tcmu_global_max_pages)) {
 		pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
 		       val, TCMU_PAGES_TO_MBS(tcmu_global_max_pages));
-		udev->max_blocks = tcmu_global_max_pages / DATA_PAGES_PER_BLK;
+		val = TCMU_PAGES_TO_MBS(tcmu_global_max_pages);
 	}
+	if (TCMU_MBS_TO_PAGES(val) < DATA_PAGES_PER_BLK) {
+		pr_err("Invalid max_data_area %d (%zu pages): smaller than data_pages_per_blk (%d pages).\n",
+		       val, TCMU_MBS_TO_PAGES(val), DATA_PAGES_PER_BLK);
+		return -EINVAL;
+	}
+
+	mutex_lock(&udev->cmdr_lock);
+	if (udev->data_bitmap) {
+		pr_err("Cannot set max_data_area_mb after it has been enabled.\n");
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	udev->data_area_mb = val;
+	udev->max_blocks = TCMU_MBS_TO_PAGES(val) / DATA_PAGES_PER_BLK;
 
 unlock:
 	mutex_unlock(&udev->cmdr_lock);
@@ -2448,8 +2474,7 @@ static ssize_t tcmu_show_configfs_dev_params(struct se_device *dev, char *b)
 	bl = sprintf(b + bl, "Config: %s ",
		     udev->dev_config[0] ? udev->dev_config : "NULL");
 	bl += sprintf(b + bl, "Size: %llu ", udev->dev_size);
-	bl += sprintf(b + bl, "MaxDataAreaMB: %u\n",
-		      TCMU_PAGES_TO_MBS(udev->max_blocks * DATA_PAGES_PER_BLK));
+	bl += sprintf(b + bl, "MaxDataAreaMB: %u\n", udev->data_area_mb);
 
 	return bl;
 }
@@ -2543,8 +2568,7 @@ static ssize_t tcmu_max_data_area_mb_show(struct config_item *item, char *page)
 					    struct se_dev_attrib, da_group);
 	struct tcmu_dev *udev = TCMU_DEV(da->da_dev);
 
-	return snprintf(page, PAGE_SIZE, "%u\n",
-			TCMU_PAGES_TO_MBS(udev->max_blocks * DATA_PAGES_PER_BLK));
+	return snprintf(page, PAGE_SIZE, "%u\n", udev->data_area_mb);
 }
 CONFIGFS_ATTR_RO(tcmu_, max_data_area_mb);
 
@@ -2902,7 +2926,8 @@ static void find_free_blocks(void)
 {
 	struct tcmu_dev *udev;
 	loff_t off;
-	u32 start, end, block, total_freed = 0;
+	u32 pages_freed, total_pages_freed = 0;
+	u32 start, end, block, total_blocks_freed = 0;
 
 	if (atomic_read(&global_page_count) <= tcmu_global_max_pages)
 		return;
@@ -2949,12 +2974,14 @@ static void find_free_blocks(void)
 		unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
 
 		/* Release the block pages */
-		tcmu_blocks_release(&udev->data_pages, start, end - 1);
+		pages_freed = tcmu_blocks_release(&udev->data_pages, start, end - 1);
 		mutex_unlock(&udev->cmdr_lock);
 
-		total_freed += end - start;
-		pr_debug("Freed %u blocks (total %u) from %s.\n", end - start,
-			 total_freed, udev->name);
+		total_pages_freed += pages_freed;
+		total_blocks_freed += end - start;
+		pr_debug("Freed %u pages (total %u) from %u blocks (total %u) from %s.\n",
+			 pages_freed, total_pages_freed, end - start,
+			 total_blocks_freed, udev->name);
 	}
 	mutex_unlock(&root_udev_mutex);
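The core index arithmetic of this patch - data block dbi starts at
page index dbi * DATA_PAGES_PER_BLK, and a block carrying 'length'
bytes needs DIV_ROUND_UP(length, PAGE_SIZE) pages - can be checked
with a small standalone sketch (DATA_PAGES_PER_BLK of 4 is an assumed
example value; the patch itself still defaults to 1):

	#include <stdio.h>

	#define PAGE_SIZE 4096            /* assumed */
	#define DATA_PAGES_PER_BLK 4      /* example value; default is 1 */
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		int dbi = 3, length = 9000;
		int first_dpi = dbi * DATA_PAGES_PER_BLK;
		int page_cnt = DIV_ROUND_UP(length, PAGE_SIZE);

		/* block 3 owns pages [12..16); 9000 bytes touch 3 of them */
		printf("dbi %d -> first page index %d, %d page(s) used\n",
		       dbi, first_dpi, page_cnt);
		return 0;
	}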
From patchwork Wed Mar 24 19:28:40 2021
X-Patchwork-Submitter: Bodo Stroesser
X-Patchwork-Id: 408562
From: Bodo Stroesser
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, "Martin K. Petersen"
Cc: Bodo Stroesser, Mike Christie
Subject: [PATCH 4/6] scsi: target: tcmu: Remove function tcmu_get_block_page
Date: Wed, 24 Mar 2021 20:28:40 +0100
Message-Id: <20210324192842.30446-5-bostroesser@gmail.com>
In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com>
References: <20210324192842.30446-1-bostroesser@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

After the previous patch there is only one caller of
tcmu_get_block_page() left. Since it is a one-liner, we can remove the
function.

Signed-off-by: Bodo Stroesser
---
 drivers/target/target_core_user.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 3596346f362e..9b2bff450510 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -561,12 +561,6 @@ static int tcmu_get_empty_blocks(struct tcmu_dev *udev,
 	return iov_cnt;
 }
 
-static inline struct page *
-tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dpi)
-{
-	return xa_load(&udev->data_pages, dpi);
-}
-
 static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
 {
 	kfree(tcmu_cmd->dbi);
@@ -1786,7 +1780,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	struct page *page;
 
 	mutex_lock(&udev->cmdr_lock);
-	page = tcmu_get_block_page(udev, dpi);
+	page = xa_load(&udev->data_pages, dpi);
 	if (likely(page)) {
 		mutex_unlock(&udev->cmdr_lock);
 		return page;
wjuR1gIW9AYhQusnds3JoCaKcrYUIY4p6D7vqW++0fgmbwiNZxxCyQyybgd6xmxewAte WrnCaxkuO09qN9/3xB7ooxc5txQjXWSYP61dJiAXNbAWIWZS+D4FfTGTqW0qk+L6dW1n VeJMbanKsUw5XO7litCqGzXmT2Klcbx54DTQufREQwyG2kKVQxEHmmmSDU90VONMMcNZ PEMQ== X-Gm-Message-State: AOAM5338xGCfX2CLAx2R8NqfCn6asKk5iXB53x0nwJIqkkF4FFv1qUL9 l4MMP3/LBv66I7b67kd+2zU0r6JQXHA= X-Google-Smtp-Source: ABdhPJyAyd/FgUBMckPiuZsJ1nJoVR/yIG2HnukKy39oJTWKV31TiIeedJlheAgbPzYdSik5l6N9WA== X-Received: by 2002:a05:6402:270c:: with SMTP id y12mr5262653edd.284.1616614152793; Wed, 24 Mar 2021 12:29:12 -0700 (PDT) Received: from localhost (ipbcc11466.dynamic.kabel-deutschland.de. [188.193.20.102]) by smtp.gmail.com with ESMTPSA id m14sm1572787edd.63.2021.03.24.12.29.12 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Wed, 24 Mar 2021 12:29:12 -0700 (PDT) From: Bodo Stroesser To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, "Martin K. Petersen" Cc: Bodo Stroesser , Mike Christie Subject: [PATCH 5/6] scsi: target: tcmu: Replace block size definitions with new udev members Date: Wed, 24 Mar 2021 20:28:41 +0100 Message-Id: <20210324192842.30446-6-bostroesser@gmail.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com> References: <20210324192842.30446-1-bostroesser@gmail.com> Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Replace DATA_PAGES_PER_BLK and DATA_BLOCK_SIZE with new struct elements tcmu_dev->data_pages_per_blk and tcmu_dev->data_blk_size. These new variables are still loaded with constant definition DATA_PAGES_PER_BLK_DEF (= 1) and DATA_PAGES_PER_BLK_DEF * PAGE_SIZE There is no way yet to set the values via configFS, so for testing please modify definition of DATA_PAGES_PER_BLK_DEF in source. Next patch will add ConfigFS support. Signed-off-by: Bodo Stroesser --- drivers/target/target_core_user.c | 82 +++++++++++++++++++++------------------ 1 file changed, 45 insertions(+), 37 deletions(-) diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c index 9b2bff450510..3de66db06438 100644 --- a/drivers/target/target_core_user.c +++ b/drivers/target/target_core_user.c @@ -70,8 +70,7 @@ * For data area, the default block size is PAGE_SIZE and * the default total size is 256K * PAGE_SIZE. 
*/ -#define DATA_PAGES_PER_BLK 1 -#define DATA_BLOCK_SIZE (DATA_PAGES_PER_BLK * PAGE_SIZE) +#define DATA_PAGES_PER_BLK_DEF 1 #define DATA_AREA_PAGES_DEF (256 * 1024) #define TCMU_MBS_TO_PAGES(_mbs) ((size_t)_mbs << (20 - PAGE_SHIFT)) @@ -150,6 +149,8 @@ struct tcmu_dev { uint32_t dbi_thresh; unsigned long *data_bitmap; struct xarray data_pages; + uint32_t data_pages_per_blk; + uint32_t data_blk_size; struct xarray commands; @@ -505,15 +506,16 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev, { XA_STATE(xas, &udev->data_pages, 0); struct page *page; - int i, cnt, dbi; + int i, cnt, dbi, dpi; int page_cnt = DIV_ROUND_UP(length, PAGE_SIZE); dbi = find_first_zero_bit(udev->data_bitmap, udev->dbi_thresh); if (dbi == udev->dbi_thresh) return -1; + dpi = dbi * udev->data_pages_per_blk; /* Count the number of already allocated pages */ - xas_set(&xas, dbi * DATA_PAGES_PER_BLK); + xas_set(&xas, dpi); for (cnt = 0; xas_next(&xas) && cnt < page_cnt;) cnt++; @@ -523,8 +525,7 @@ static inline int tcmu_get_empty_block(struct tcmu_dev *udev, if (!page) break; - if (xa_store(&udev->data_pages, dbi * DATA_PAGES_PER_BLK + i, - page, GFP_NOIO)) { + if (xa_store(&udev->data_pages, dpi + i, page, GFP_NOIO)) { __free_page(page); break; } @@ -550,11 +551,13 @@ static int tcmu_get_empty_blocks(struct tcmu_dev *udev, { /* start value of dbi + 1 must not be a valid dbi */ int dbi = -2; - int blk_len, iov_cnt = 0; + int blk_data_len, iov_cnt = 0; + uint32_t blk_size = udev->data_blk_size; - for (; length > 0; length -= DATA_BLOCK_SIZE) { - blk_len = min_t(int, length, DATA_BLOCK_SIZE); - dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, blk_len, &iov_cnt); + for (; length > 0; length -= blk_size) { + blk_data_len = min_t(uint32_t, length, blk_size); + dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, blk_data_len, + &iov_cnt); if (dbi < 0) return -1; } @@ -571,14 +574,15 @@ static inline void tcmu_cmd_set_block_cnts(struct tcmu_cmd *cmd) { int i, len; struct se_cmd *se_cmd = cmd->se_cmd; + uint32_t blk_size = cmd->tcmu_dev->data_blk_size; - cmd->dbi_cnt = DIV_ROUND_UP(se_cmd->data_length, DATA_BLOCK_SIZE); + cmd->dbi_cnt = DIV_ROUND_UP(se_cmd->data_length, blk_size); if (se_cmd->se_cmd_flags & SCF_BIDI) { BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents)); for (i = 0, len = 0; i < se_cmd->t_bidi_data_nents; i++) len += se_cmd->t_bidi_data_sg[i].length; - cmd->dbi_bidi_cnt = DIV_ROUND_UP(len, DATA_BLOCK_SIZE); + cmd->dbi_bidi_cnt = DIV_ROUND_UP(len, blk_size); cmd->dbi_cnt += cmd->dbi_bidi_cnt; cmd->data_len_bidi = len; } @@ -590,9 +594,8 @@ static int new_block_to_iov(struct tcmu_dev *udev, struct tcmu_cmd *cmd, /* Get the next dbi */ int dbi = tcmu_cmd_get_dbi(cmd); - /* Do not add more than DATA_BLOCK_SIZE to iov */ - if (len > DATA_BLOCK_SIZE) - len = DATA_BLOCK_SIZE; + /* Do not add more than udev->data_blk_size to iov */ + len = min_t(int, len, udev->data_blk_size); /* * The following code will gather and map the blocks to the same iovec @@ -604,7 +607,7 @@ static int new_block_to_iov(struct tcmu_dev *udev, struct tcmu_cmd *cmd, (*iov)++; /* write offset relative to mb_addr */ (*iov)->iov_base = (void __user *) - (udev->data_off + dbi * DATA_BLOCK_SIZE); + (udev->data_off + dbi * udev->data_blk_size); } (*iov)->iov_len += len; @@ -618,7 +621,7 @@ static void tcmu_setup_iovs(struct tcmu_dev *udev, struct tcmu_cmd *cmd, int dbi = -2; /* We prepare the IOVs for DMA_FROM_DEVICE transfer direction */ - for (; data_length > 0; data_length -= DATA_BLOCK_SIZE) + for (; data_length > 0; 
data_length -= udev->data_blk_size) dbi = new_block_to_iov(udev, cmd, iov, dbi, data_length); } @@ -720,10 +723,10 @@ static inline void tcmu_copy_data(struct tcmu_dev *udev, dbi = tcmu_cmd_get_dbi(tcmu_cmd); page_cnt = DIV_ROUND_UP(data_len, PAGE_SIZE); - if (page_cnt > DATA_PAGES_PER_BLK) - page_cnt = DATA_PAGES_PER_BLK; + if (page_cnt > udev->data_pages_per_blk) + page_cnt = udev->data_pages_per_blk; - xas_set(&xas, dbi * DATA_PAGES_PER_BLK); + xas_set(&xas, dbi * udev->data_pages_per_blk); for (page_inx = 0; page_inx < page_cnt && data_len; page_inx++) { page = xas_next(&xas); @@ -858,9 +861,9 @@ static int tcmu_alloc_data_space(struct tcmu_dev *udev, struct tcmu_cmd *cmd, (udev->max_blocks - udev->dbi_thresh) + space; if (blocks_left < cmd->dbi_cnt) { - pr_debug("no data space: only %lu available, but ask for %lu\n", - blocks_left * DATA_BLOCK_SIZE, - cmd->dbi_cnt * DATA_BLOCK_SIZE); + pr_debug("no data space: only %lu available, but ask for %u\n", + blocks_left * udev->data_blk_size, + cmd->dbi_cnt * udev->data_blk_size); return -1; } @@ -1012,8 +1015,9 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err) int iov_cnt, iov_bidi_cnt; uint32_t cmd_id, cmd_head; uint64_t cdb_off; + uint32_t blk_size = udev->data_blk_size; /* size of data buffer needed */ - size_t data_length = (size_t)tcmu_cmd->dbi_cnt * DATA_BLOCK_SIZE; + size_t data_length = (size_t)tcmu_cmd->dbi_cnt * blk_size; *scsi_err = TCM_NO_SENSE; @@ -1030,9 +1034,9 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err) if (!list_empty(&udev->qfull_queue)) goto queue; - if (data_length > udev->max_blocks * DATA_BLOCK_SIZE) { + if (data_length > (size_t)udev->max_blocks * blk_size) { pr_warn("TCMU: Request of size %zu is too big for %zu data area\n", - data_length, udev->max_blocks * DATA_BLOCK_SIZE); + data_length, (size_t)udev->max_blocks * blk_size); *scsi_err = TCM_INVALID_CDB_FIELD; return -1; } @@ -1580,8 +1584,10 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name) udev->cmd_time_out = TCMU_TIME_OUT; udev->qfull_time_out = -1; - udev->max_blocks = DATA_AREA_PAGES_DEF / DATA_PAGES_PER_BLK; + udev->data_pages_per_blk = DATA_PAGES_PER_BLK_DEF; + udev->max_blocks = DATA_AREA_PAGES_DEF / udev->data_pages_per_blk; udev->data_area_mb = TCMU_PAGES_TO_MBS(DATA_AREA_PAGES_DEF); + mutex_init(&udev->cmdr_lock); INIT_LIST_HEAD(&udev->node); @@ -1618,15 +1624,15 @@ static int tcmu_check_and_free_pending_cmd(struct tcmu_cmd *cmd) return -EINVAL; } -static u32 tcmu_blocks_release(struct xarray *blocks, unsigned long first, +static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first, unsigned long last) { - XA_STATE(xas, blocks, first * DATA_PAGES_PER_BLK); + XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk); struct page *page; u32 pages_freed = 0; xas_lock(&xas); - xas_for_each(&xas, page, (last + 1) * DATA_PAGES_PER_BLK - 1) { + xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) { xas_store(&xas, NULL); __free_page(page); pages_freed++; @@ -1677,7 +1683,7 @@ static void tcmu_dev_kref_release(struct kref *kref) xa_destroy(&udev->commands); WARN_ON(!all_expired); - tcmu_blocks_release(&udev->data_pages, 0, udev->dbi_max); + tcmu_blocks_release(udev, 0, udev->dbi_max); bitmap_free(udev->data_bitmap); mutex_unlock(&udev->cmdr_lock); @@ -2132,6 +2138,7 @@ static int tcmu_configure_device(struct se_device *dev) udev->data_off = MB_CMDR_SIZE; data_size = TCMU_MBS_TO_PAGES(udev->data_area_mb) << PAGE_SHIFT; 
From patchwork Wed Mar 24 19:28:42 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bodo Stroesser
X-Patchwork-Id: 409217
From: Bodo Stroesser
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	"Martin K. Petersen"
Cc: Bodo Stroesser, Mike Christie
Subject: [PATCH 6/6] scsi: target: tcmu: Make data_pages_per_blk changeable
	via configFS
Date: Wed, 24 Mar 2021 20:28:42 +0100
Message-Id: <20210324192842.30446-7-bostroesser@gmail.com>
X-Mailer: git-send-email 2.12.3
In-Reply-To: <20210324192842.30446-1-bostroesser@gmail.com>
References: <20210324192842.30446-1-bostroesser@gmail.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org

From: Bodo Stroesser

This patch makes data_pages_per_blk changeable in the same way as is
already implemented for max_data_area_mb. One can change it by typing

  echo "data_pages_per_blk=N" >control

The current value is printed when doing

  cat info

Additionally, a new read-only attribute data_pages_per_blk delivers the
value when read.
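For illustration, the same configuration can be driven from a small userspace
program instead of echo/cat. This is only a sketch: the configfs path below
assumes a tcmu device named dev1 on HBA user_0 (adjust to your setup), and
data_pages_per_blk must be written before the device is enabled:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical device path; depends on how the backstore was created. */
#define DEV_DIR "/sys/kernel/config/target/core/user_0/dev1"

int main(void)
{
        const char *param = "data_pages_per_blk=4";
        char buf[256];
        ssize_t n;
        int fd;

        /* Set the parameter via the control file (device not yet enabled). */
        fd = open(DEV_DIR "/control", O_WRONLY);
        if (fd < 0) {
                perror("open control");
                return 1;
        }
        if (write(fd, param, strlen(param)) < 0)
                perror("write control");
        close(fd);

        /* Read it back via the new read-only attribute. */
        fd = open(DEV_DIR "/attrib/data_pages_per_blk", O_RDONLY);
        if (fd < 0) {
                perror("open attrib");
                return 1;
        }
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
                buf[n] = '\0';
                printf("data_pages_per_blk: %s", buf);
        }
        close(fd);
        return 0;
}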
Signed-off-by: Bodo Stroesser
---
 drivers/target/target_core_user.c | 55 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 3de66db06438..eec2fd573e2b 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -2331,7 +2331,8 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)

 enum {
 	Opt_dev_config, Opt_dev_size, Opt_hw_block_size, Opt_hw_max_sectors,
-	Opt_nl_reply_supported, Opt_max_data_area_mb, Opt_err,
+	Opt_nl_reply_supported, Opt_max_data_area_mb, Opt_data_pages_per_blk,
+	Opt_err,
 };

 static match_table_t tokens = {
@@ -2341,6 +2342,7 @@ static match_table_t tokens = {
 	{Opt_hw_max_sectors, "hw_max_sectors=%d"},
 	{Opt_nl_reply_supported, "nl_reply_supported=%d"},
 	{Opt_max_data_area_mb, "max_data_area_mb=%d"},
+	{Opt_data_pages_per_blk, "data_pages_per_blk=%d"},
 	{Opt_err, NULL}
 };
@@ -2405,6 +2407,39 @@ static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
 	return ret;
 }

+static int tcmu_set_data_pages_per_blk(struct tcmu_dev *udev, substring_t *arg)
+{
+	int val, ret;
+
+	ret = match_int(arg, &val);
+	if (ret < 0) {
+		pr_err("match_int() failed for data_pages_per_blk=. Error %d.\n",
+		       ret);
+		return ret;
+	}
+
+	if (val > TCMU_MBS_TO_PAGES(udev->data_area_mb)) {
+		pr_err("Invalid data_pages_per_blk %d: greater than max_data_area_mb %d -> %zd pages).\n",
+		       val, udev->data_area_mb,
+		       TCMU_MBS_TO_PAGES(udev->data_area_mb));
+		return -EINVAL;
+	}
+
+	mutex_lock(&udev->cmdr_lock);
+	if (udev->data_bitmap) {
+		pr_err("Cannot set data_pages_per_blk after it has been enabled.\n");
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	udev->data_pages_per_blk = val;
+	udev->max_blocks = TCMU_MBS_TO_PAGES(udev->data_area_mb) / val;
+
+unlock:
+	mutex_unlock(&udev->cmdr_lock);
+	return ret;
+}
+
 static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
 		const char *page, ssize_t count)
 {
@@ -2456,6 +2491,9 @@ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
 		case Opt_max_data_area_mb:
 			ret = tcmu_set_max_blocks_param(udev, &args[0]);
 			break;
+		case Opt_data_pages_per_blk:
+			ret = tcmu_set_data_pages_per_blk(udev, &args[0]);
+			break;
 		default:
 			break;
 		}
@@ -2476,7 +2514,8 @@ static ssize_t tcmu_show_configfs_dev_params(struct se_device *dev, char *b)
 	bl = sprintf(b + bl, "Config: %s ",
 		     udev->dev_config[0] ? udev->dev_config : "NULL");
 	bl += sprintf(b + bl, "Size: %llu ", udev->dev_size);
-	bl += sprintf(b + bl, "MaxDataAreaMB: %u\n", udev->data_area_mb);
+	bl += sprintf(b + bl, "MaxDataAreaMB: %u ", udev->data_area_mb);
+	bl += sprintf(b + bl, "DataPagesPerBlk: %u\n", udev->data_pages_per_blk);

 	return bl;
 }
@@ -2574,6 +2613,17 @@ static ssize_t tcmu_max_data_area_mb_show(struct config_item *item, char *page)
 }
 CONFIGFS_ATTR_RO(tcmu_, max_data_area_mb);

+static ssize_t tcmu_data_pages_per_blk_show(struct config_item *item,
+					    char *page)
+{
+	struct se_dev_attrib *da = container_of(to_config_group(item),
+						struct se_dev_attrib, da_group);
+	struct tcmu_dev *udev = TCMU_DEV(da->da_dev);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", udev->data_pages_per_blk);
+}
+CONFIGFS_ATTR_RO(tcmu_, data_pages_per_blk);
+
 static ssize_t tcmu_dev_config_show(struct config_item *item, char *page)
 {
 	struct se_dev_attrib *da = container_of(to_config_group(item),
@@ -2885,6 +2935,7 @@ static struct configfs_attribute *tcmu_attrib_attrs[] = {
 	&tcmu_attr_cmd_time_out,
 	&tcmu_attr_qfull_time_out,
 	&tcmu_attr_max_data_area_mb,
+	&tcmu_attr_data_pages_per_blk,
 	&tcmu_attr_dev_config,
 	&tcmu_attr_dev_size,
 	&tcmu_attr_emulate_write_cache,
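To make the size relationship behind tcmu_set_data_pages_per_blk() concrete,
here is a minimal userspace sketch of the same computation. It assumes 4 KiB
pages and the example settings max_data_area_mb=1024 and data_pages_per_blk=16;
the macro mirrors TCMU_MBS_TO_PAGES() but is not the kernel's definition:

#include <stdio.h>
#include <stdint.h>

/* Mirrors TCMU_MBS_TO_PAGES() for a 4 KiB page (PAGE_SHIFT == 12). */
#define PAGE_SHIFT 12
#define MBS_TO_PAGES(mbs) ((size_t)(mbs) << (20 - PAGE_SHIFT))

int main(void)
{
        uint32_t data_area_mb = 1024;      /* max_data_area_mb=1024 */
        uint32_t data_pages_per_blk = 16;  /* data_pages_per_blk=16 */

        /* same computation as tcmu_set_data_pages_per_blk() */
        size_t max_blocks = MBS_TO_PAGES(data_area_mb) / data_pages_per_blk;

        /* 1024 MiB = 262144 pages; 16 pages per block -> 16384 blocks */
        printf("%zu pages -> %zu blocks of %u KiB\n",
               MBS_TO_PAGES(data_area_mb), max_blocks,
               data_pages_per_blk * 4);
        return 0;
}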