From patchwork Mon Nov 11 07:33:57 2019
X-Patchwork-Submitter: (Exiting) Baolin Wang
X-Patchwork-Id: 179055
From: Baolin Wang
To: adrian.hunter@intel.com, ulf.hansson@linaro.org, asutoshd@codeaurora.org
Cc: orsonzhai@gmail.com, zhang.lyra@gmail.com, arnd@arndb.de,
    linus.walleij@linaro.org, vincent.guittot@linaro.org,
    baolin.wang@linaro.org, baolin.wang7@gmail.com,
    linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 1/4] mmc: Add MMC host software queue support
Date: Mon, 11 Nov 2019 15:33:57 +0800
X-Mailer: git-send-email 1.7.9.5

Currently the MMC read/write stack always waits for the previous request to
complete in mmc_blk_rw_wait() before sending a new request to the hardware,
or queues a work item to complete the request. That brings context-switching
overhead, especially at high I/O-per-second rates, and hurts I/O performance.

This patch therefore introduces an MMC software queue interface based on the
hardware command queue engine's interfaces, following the same idea as the
hardware command queue engine, which removes the context switching.

Moreover, the default queue depth of the software queue is set to 32, which
allows more requests to be prepared, merged and inserted into the I/O
scheduler to improve performance, but only 2 requests are allowed in flight;
that is enough to let the irq handler always trigger the next request without
a context switch, while also avoiding long latency.

From the fio test data in the cover letter, the software queue improves
performance with a 4K block size: random read improves by about 16% and
random write by about 90%, while sequential read and write show no obvious
improvement.

The software queue interface can also be expanded to support MMC packed
requests or packed commands in the future.

Signed-off-by: Baolin Wang
---
 drivers/mmc/core/block.c   |  61 ++++++++
 drivers/mmc/core/mmc.c     |  13 +-
 drivers/mmc/core/queue.c   |  33 ++++-
 drivers/mmc/host/Kconfig   |   7 +
 drivers/mmc/host/Makefile  |   1 +
 drivers/mmc/host/mmc_hsq.c | 344 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/mmc_hsq.h |  30 ++++
 include/linux/mmc/host.h   |   3 +
 8 files changed, 482 insertions(+), 10 deletions(-)
 create mode 100644 drivers/mmc/host/mmc_hsq.c
 create mode 100644 drivers/mmc/host/mmc_hsq.h
-- 
1.7.9.5

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 2c71a43..870462c 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -168,6 +168,11 @@ struct mmc_rpmb_data { static inline int mmc_blk_part_switch(struct mmc_card *card, unsigned int part_type); +static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, + struct mmc_card *card, + int disable_multi, + struct mmc_queue *mq); +static void mmc_blk_swq_req_done(struct mmc_request *mrq); static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk) { @@ -1569,9 +1574,30 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req) return mmc_blk_cqe_start_req(mq->card->host, mrq); } +static int mmc_blk_swq_issue_rw_rq(struct mmc_queue *mq, struct request *req) +{ + struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req); + struct mmc_host *host = mq->card->host; + int err; + + mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); + mqrq->brq.mrq.done = mmc_blk_swq_req_done; + mmc_pre_req(host, &mqrq->brq.mrq); + + err = mmc_cqe_start_req(host, &mqrq->brq.mrq); + if (err) + mmc_post_req(host, &mqrq->brq.mrq, err); + + return err; +} + static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req) { struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req); + struct mmc_host *host = mq->card->host; + + if (host->swq_enabled) + return mmc_blk_swq_issue_rw_rq(mq, req); mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL); @@ -1957,6 +1983,41 @@ static void mmc_blk_urgent_bkops(struct mmc_queue *mq, mmc_run_bkops(mq->card); } +static void mmc_blk_swq_req_done(struct mmc_request *mrq) +{ + struct mmc_queue_req *mqrq = + container_of(mrq, struct mmc_queue_req, brq.mrq); + struct request *req = mmc_queue_req_to_req(mqrq); + struct request_queue *q = req->q; + struct mmc_queue *mq = q->queuedata; + struct mmc_host *host = mq->card->host; + unsigned long flags; + + if (mmc_blk_rq_error(&mqrq->brq) || + mmc_blk_urgent_bkops_needed(mq, mqrq)) { + spin_lock_irqsave(&mq->lock, flags); +
mq->recovery_needed = true; + mq->recovery_req = req; + spin_unlock_irqrestore(&mq->lock, flags); + + host->cqe_ops->cqe_recovery_start(host); + + schedule_work(&mq->recovery_work); + return; + } + + mmc_blk_rw_reset_success(mq, req); + + /* + * Block layer timeouts race with completions which means the normal + * completion path cannot be used during recovery. + */ + if (mq->in_recovery) + mmc_blk_cqe_complete_rq(mq, req); + else + blk_mq_complete_request(req); +} + void mmc_blk_mq_complete(struct request *req) { struct mmc_queue *mq = req->q->queuedata; diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c index c880489..8eac1a2 100644 --- a/drivers/mmc/core/mmc.c +++ b/drivers/mmc/core/mmc.c @@ -1852,15 +1852,22 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr, */ card->reenable_cmdq = card->ext_csd.cmdq_en; - if (card->ext_csd.cmdq_en && !host->cqe_enabled) { + if (host->cqe_ops && !host->cqe_enabled) { err = host->cqe_ops->cqe_enable(host, card); if (err) { pr_err("%s: Failed to enable CQE, error %d\n", mmc_hostname(host), err); } else { host->cqe_enabled = true; - pr_info("%s: Command Queue Engine enabled\n", - mmc_hostname(host)); + + if (card->ext_csd.cmdq_en) { + pr_info("%s: Command Queue Engine enabled\n", + mmc_hostname(host)); + } else { + host->swq_enabled = true; + pr_info("%s: Software Queue enabled\n", + mmc_hostname(host)); + } } } diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 9edc086..d9086c1 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -62,7 +62,7 @@ enum mmc_issue_type mmc_issue_type(struct mmc_queue *mq, struct request *req) { struct mmc_host *host = mq->card->host; - if (mq->use_cqe) + if (mq->use_cqe && !host->swq_enabled) return mmc_cqe_issue_type(host, req); if (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_WRITE) @@ -124,12 +124,14 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req, { struct request_queue *q = req->q; struct mmc_queue *mq = q->queuedata; + struct mmc_card *card = mq->card; + struct mmc_host *host = card->host; unsigned long flags; int ret; spin_lock_irqsave(&mq->lock, flags); - if (mq->recovery_needed || !mq->use_cqe) + if (mq->recovery_needed || !mq->use_cqe || host->swq_enabled) ret = BLK_EH_RESET_TIMER; else ret = mmc_cqe_timed_out(req); @@ -144,12 +146,13 @@ static void mmc_mq_recovery_handler(struct work_struct *work) struct mmc_queue *mq = container_of(work, struct mmc_queue, recovery_work); struct request_queue *q = mq->queue; + struct mmc_host *host = mq->card->host; mmc_get_card(mq->card, &mq->ctx); mq->in_recovery = true; - if (mq->use_cqe) + if (mq->use_cqe && !host->swq_enabled) mmc_blk_cqe_recovery(mq); else mmc_blk_mq_recovery(mq); @@ -160,6 +163,9 @@ static void mmc_mq_recovery_handler(struct work_struct *work) mq->recovery_needed = false; spin_unlock_irq(&mq->lock); + if (host->swq_enabled) + host->cqe_ops->cqe_recovery_finish(host); + mmc_put_card(mq->card, &mq->ctx); blk_mq_run_hw_queues(q, true); @@ -279,6 +285,14 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx, } break; case MMC_ISSUE_ASYNC: + /* + * For MMC host software queue, we only allow 2 requests in + * flight to avoid a long latency. 
+ */ + if (host->swq_enabled && mq->in_flight[issue_type] > 2) { + spin_unlock_irq(&mq->lock); + return BLK_STS_RESOURCE; + } break; default: /* @@ -430,11 +444,16 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card) * The queue depth for CQE must match the hardware because the request * tag is used to index the hardware queue. */ - if (mq->use_cqe) - mq->tag_set.queue_depth = - min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth); - else + if (mq->use_cqe) { + if (host->swq_enabled) + mq->tag_set.queue_depth = host->cqe_qdepth; + else + mq->tag_set.queue_depth = + min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth); + } else { mq->tag_set.queue_depth = MMC_QUEUE_DEPTH; + } + mq->tag_set.numa_node = NUMA_NO_NODE; mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING; mq->tag_set.nr_hw_queues = 1; diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig index 49ea02c..efa4019 100644 --- a/drivers/mmc/host/Kconfig +++ b/drivers/mmc/host/Kconfig @@ -936,6 +936,13 @@ config MMC_CQHCI If unsure, say N. +config MMC_HSQ + tristate "MMC Host Software Queue support" + help + This selects the Software Queue support. + + If unsure, say N. + config MMC_TOSHIBA_PCI tristate "Toshiba Type A SD/MMC Card Interface Driver" depends on PCI diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile index 11c4598..c14b439 100644 --- a/drivers/mmc/host/Makefile +++ b/drivers/mmc/host/Makefile @@ -98,6 +98,7 @@ obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o obj-$(CONFIG_MMC_SDHCI_SPRD) += sdhci-sprd.o obj-$(CONFIG_MMC_CQHCI) += cqhci.o +obj-$(CONFIG_MMC_HSQ) += mmc_hsq.o ifeq ($(CONFIG_CB710_DEBUG),y) CFLAGS-cb710-mmc += -DDEBUG diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c new file mode 100644 index 0000000..f5a4f93 --- /dev/null +++ b/drivers/mmc/host/mmc_hsq.c @@ -0,0 +1,344 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * MMC software queue support based on command queue interfaces + * + * Copyright (C) 2019 Linaro, Inc. + * Author: Baolin Wang + */ + +#include +#include + +#include "mmc_hsq.h" + +#define HSQ_NUM_SLOTS 32 +#define HSQ_INVALID_TAG HSQ_NUM_SLOTS + +static void mmc_hsq_pump_requests(struct mmc_hsq *hsq) +{ + struct mmc_host *mmc = hsq->mmc; + struct hsq_slot *slot; + unsigned long flags; + + spin_lock_irqsave(&hsq->lock, flags); + + /* Make sure we are not already running a request now */ + if (hsq->mrq) { + spin_unlock_irqrestore(&hsq->lock, flags); + return; + } + + /* Make sure there are remaining requests that need to be pumped */ + if (!hsq->qcnt || !hsq->enabled) { + spin_unlock_irqrestore(&hsq->lock, flags); + return; + } + + slot = &hsq->slot[hsq->next_tag]; + hsq->mrq = slot->mrq; + hsq->qcnt--; + + spin_unlock_irqrestore(&hsq->lock, flags); + + mmc->ops->request(mmc, hsq->mrq); +} + +static void mmc_hsq_update_next_tag(struct mmc_hsq *hsq, int remains) +{ + struct hsq_slot *slot; + int tag; + + /* + * If there are no remaining requests in the software queue, then set an + * invalid tag. + */ + if (!remains) { + hsq->next_tag = HSQ_INVALID_TAG; + return; + } + + /* + * Increase the next tag and check if the corresponding request is + * available; if yes, then we have found a candidate request. + */ + if (++hsq->next_tag != HSQ_INVALID_TAG) { + slot = &hsq->slot[hsq->next_tag]; + if (slot->mrq) + return; + } + + /* Otherwise we should iterate over all slots to find an available tag.
*/ + for (tag = 0; tag < HSQ_NUM_SLOTS; tag++) { + slot = &hsq->slot[tag]; + if (slot->mrq) + break; + } + + if (tag == HSQ_NUM_SLOTS) + tag = HSQ_INVALID_TAG; + + hsq->next_tag = tag; +} + +static void mmc_hsq_post_request(struct mmc_hsq *hsq) +{ + unsigned long flags; + int remains; + + spin_lock_irqsave(&hsq->lock, flags); + + remains = hsq->qcnt; + hsq->mrq = NULL; + + /* Update the next available tag to be queued. */ + mmc_hsq_update_next_tag(hsq, remains); + + if (hsq->waiting_for_idle && !remains) { + hsq->waiting_for_idle = false; + wake_up(&hsq->wait_queue); + } + + /* Do not pump a new request in recovery mode. */ + if (hsq->recovery_halt) { + spin_unlock_irqrestore(&hsq->lock, flags); + return; + } + + spin_unlock_irqrestore(&hsq->lock, flags); + + /* + * Try to pump a new request to the host controller as fast as possible + * after completing the previous request. + */ + if (remains > 0) + mmc_hsq_pump_requests(hsq); +} + +/** + * mmc_hsq_finalize_request - finalize one request if the request is done + * @mmc: the host controller + * @mrq: the request that needs to be finalized + * + * Return true if we finalized the corresponding request in the software queue, + * otherwise return false. + */ +bool mmc_hsq_finalize_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + unsigned long flags; + + spin_lock_irqsave(&hsq->lock, flags); + + if (!hsq->enabled || !hsq->mrq || hsq->mrq != mrq) { + spin_unlock_irqrestore(&hsq->lock, flags); + return false; + } + + /* + * Clear the completed slot's request to make room for a new request. + */ + hsq->slot[hsq->next_tag].mrq = NULL; + + spin_unlock_irqrestore(&hsq->lock, flags); + + mmc_cqe_request_done(mmc, hsq->mrq); + + mmc_hsq_post_request(hsq); + + return true; +} +EXPORT_SYMBOL_GPL(mmc_hsq_finalize_request); + +static void mmc_hsq_recovery_start(struct mmc_host *mmc) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + unsigned long flags; + + spin_lock_irqsave(&hsq->lock, flags); + + hsq->recovery_halt = true; + + spin_unlock_irqrestore(&hsq->lock, flags); +} + +static void mmc_hsq_recovery_finish(struct mmc_host *mmc) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + int remains; + + spin_lock_irq(&hsq->lock); + + hsq->recovery_halt = false; + remains = hsq->qcnt; + + spin_unlock_irq(&hsq->lock); + + /* + * Try to pump a new request if there are requests pending in the software + * queue after finishing recovery. + */ + if (remains > 0) + mmc_hsq_pump_requests(hsq); +} + +static int mmc_hsq_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + int tag = mrq->tag; + + spin_lock_irq(&hsq->lock); + + if (!hsq->enabled) { + spin_unlock_irq(&hsq->lock); + return -ESHUTDOWN; + } + + /* Do not queue any new requests in recovery mode. */ + if (hsq->recovery_halt) { + spin_unlock_irq(&hsq->lock); + return -EBUSY; + } + + hsq->slot[tag].mrq = mrq; + + /* + * Set the next tag to the current request's tag if there is no + * available next tag. + */ + if (hsq->next_tag == HSQ_INVALID_TAG) + hsq->next_tag = tag; + + hsq->qcnt++; + + spin_unlock_irq(&hsq->lock); + + mmc_hsq_pump_requests(hsq); + + return 0; +} + +static void mmc_hsq_post_req(struct mmc_host *mmc, struct mmc_request *mrq) +{ + if (mmc->ops->post_req) + mmc->ops->post_req(mmc, mrq, 0); +} + +static bool mmc_hsq_queue_is_idle(struct mmc_hsq *hsq, int *ret) +{ + bool is_idle; + + spin_lock_irq(&hsq->lock); + + is_idle = (!hsq->mrq && !hsq->qcnt) || + hsq->recovery_halt; + + *ret = hsq->recovery_halt ?
-EBUSY : 0; + hsq->waiting_for_idle = !is_idle; + + spin_unlock_irq(&hsq->lock); + + return is_idle; +} + +static int mmc_hsq_wait_for_idle(struct mmc_host *mmc) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + int ret; + + wait_event(hsq->wait_queue, + mmc_hsq_queue_is_idle(hsq, &ret)); + + return ret; +} + +static void mmc_hsq_disable(struct mmc_host *mmc) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + u32 timeout = 500; + int ret; + + spin_lock_irq(&hsq->lock); + + if (!hsq->enabled) { + spin_unlock_irq(&hsq->lock); + return; + } + + spin_unlock_irq(&hsq->lock); + + ret = wait_event_timeout(hsq->wait_queue, + mmc_hsq_queue_is_idle(hsq, &ret), + msecs_to_jiffies(timeout)); + if (ret == 0) { + pr_warn("could not stop mmc software queue\n"); + return; + } + + spin_lock_irq(&hsq->lock); + + hsq->enabled = false; + + spin_unlock_irq(&hsq->lock); +} + +static int mmc_hsq_enable(struct mmc_host *mmc, struct mmc_card *card) +{ + struct mmc_hsq *hsq = mmc->cqe_private; + + spin_lock_irq(&hsq->lock); + + if (hsq->enabled) { + spin_unlock_irq(&hsq->lock); + return -EBUSY; + } + + hsq->enabled = true; + + spin_unlock_irq(&hsq->lock); + + return 0; +} + +static const struct mmc_cqe_ops mmc_hsq_ops = { + .cqe_enable = mmc_hsq_enable, + .cqe_disable = mmc_hsq_disable, + .cqe_request = mmc_hsq_request, + .cqe_post_req = mmc_hsq_post_req, + .cqe_wait_for_idle = mmc_hsq_wait_for_idle, + .cqe_recovery_start = mmc_hsq_recovery_start, + .cqe_recovery_finish = mmc_hsq_recovery_finish, +}; + +int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc) +{ + hsq->num_slots = HSQ_NUM_SLOTS; + hsq->next_tag = HSQ_INVALID_TAG; + mmc->cqe_qdepth = HSQ_NUM_SLOTS; + + hsq->slot = devm_kcalloc(mmc_dev(mmc), hsq->num_slots, + sizeof(struct hsq_slot), GFP_KERNEL); + if (!hsq->slot) + return -ENOMEM; + + hsq->mmc = mmc; + hsq->mmc->cqe_private = hsq; + mmc->cqe_ops = &mmc_hsq_ops; + + spin_lock_init(&hsq->lock); + init_waitqueue_head(&hsq->wait_queue); + + return 0; +} +EXPORT_SYMBOL_GPL(mmc_hsq_init); + +void mmc_hsq_suspend(struct mmc_host *mmc) +{ + mmc_hsq_disable(mmc); +} +EXPORT_SYMBOL_GPL(mmc_hsq_suspend); + +int mmc_hsq_resume(struct mmc_host *mmc) +{ + return mmc_hsq_enable(mmc, NULL); +} +EXPORT_SYMBOL_GPL(mmc_hsq_resume); diff --git a/drivers/mmc/host/mmc_hsq.h b/drivers/mmc/host/mmc_hsq.h new file mode 100644 index 0000000..d51beb7 --- /dev/null +++ b/drivers/mmc/host/mmc_hsq.h @@ -0,0 +1,30 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef LINUX_MMC_HSQ_H +#define LINUX_MMC_HSQ_H + +struct hsq_slot { + struct mmc_request *mrq; +}; + +struct mmc_hsq { + struct mmc_host *mmc; + struct mmc_request *mrq; + wait_queue_head_t wait_queue; + struct hsq_slot *slot; + spinlock_t lock; + + int next_tag; + int num_slots; + int qcnt; + + bool enabled; + bool waiting_for_idle; + bool recovery_halt; +}; + +int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc); +void mmc_hsq_suspend(struct mmc_host *mmc); +int mmc_hsq_resume(struct mmc_host *mmc); +bool mmc_hsq_finalize_request(struct mmc_host *mmc, struct mmc_request *mrq); + +#endif diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index ba70338..3931aa3 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -462,6 +462,9 @@ struct mmc_host { bool cqe_enabled; bool cqe_on; + /* Software Queue support */ + bool swq_enabled; + unsigned long private[0] ____cacheline_aligned; };
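[Editorial note: the later patches in this series, which are not included in this excerpt, hook a host controller driver up to the interface added above. As a rough orientation only, a host driver would typically allocate a struct mmc_hsq and register it with mmc_hsq_init(); the function and variable names below (my_sdhci_hsq_setup, pdev) are illustrative assumptions, not part of this patch.]

        #include <linux/mmc/host.h>
        #include <linux/platform_device.h>

        #include "mmc_hsq.h"

        static int my_sdhci_hsq_setup(struct platform_device *pdev,
                                      struct mmc_host *mmc)
        {
                struct mmc_hsq *hsq;

                hsq = devm_kzalloc(&pdev->dev, sizeof(*hsq), GFP_KERNEL);
                if (!hsq)
                        return -ENOMEM;

                /*
                 * Register the software queue as the host's command queue
                 * engine: this sets mmc->cqe_ops, mmc->cqe_private and a
                 * queue depth of 32 (HSQ_NUM_SLOTS).
                 */
                return mmc_hsq_init(hsq, mmc);
        }

With the mmc.c change above, mmc_init_card() then calls cqe_ops->cqe_enable() whenever host->cqe_ops is set, and marks host->swq_enabled when the card has no hardware command queue; the driver's suspend/resume paths would call mmc_hsq_suspend()/mmc_hsq_resume() so the queue is drained before the host is powered down.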
"\(Exiting\) Baolin Wang" X-Patchwork-Id: 179056 Delivered-To: patch@linaro.org Received: by 2002:a92:38d5:0:0:0:0:0 with SMTP id g82csp6294110ilf; Sun, 10 Nov 2019 23:35:22 -0800 (PST) X-Google-Smtp-Source: APXvYqw0EHJWgyA8Se2qAEmHYxr28/Kl0IDsjdddotJv/ZdwTuqudfnIm9ybEx0C30eeUQAZPioB X-Received: by 2002:a05:6402:1049:: with SMTP id e9mr24685908edu.91.1573457722194; Sun, 10 Nov 2019 23:35:22 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1573457722; cv=none; d=google.com; s=arc-20160816; b=fNg+a9joyYvjs3lH2qlfYbP6NBuOIzCuUFj4mdziTTFwVR1d5ABtnvj8qBzjdUVcEH 4Ko8dPjbw4NtxjkH+UwFHCcDcZUWoHZowfPcPME6bfCK/aCYkWqO3oJEFOxv3ymtdC4W SAhAiBx4W7q9SVVmW6cZa1GF3v0EuerLxD/SFT0Tibde+VaTtf5V2iJiL33DVf6muXeN yHdIqjyh5SIlzQz/kYxBiqLifzcjXo7nEc3IPNXlDTGn4J7DDxUj6Lpk2qieKHgMpvZ1 piQ4JKTOM91iD0MJqFZWqDAAgf8L8kUK6wkfkf6whaq4Te2zkv9RmpRKH8ihkclTIVzB I7JQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:dkim-signature; bh=ILB8a88XETD+WUMe0JJMqsMts/FAHmPFp1FBBYuuKZE=; b=L75LLFA1KQhrKc/173gwPJX9HYgrUqTio1mUq03bMYNm6s+uT766jyAadsUmNk1JfL +ZLAzmajqNsMNd0E97tB2QnkU60N44K6bygE8hKFJoY5JRe3aKaVgfItfsqCXL/eI9Nw HE44S3g+CCATv3BQXrddAIKn/AU+83lfy5pvXro69gQJtP+eamT1OAWhIu1Et5yU6Avj ouzwiGUC5yldgFS9MXFTr16espOdTopB9R5u3TRN0UqrRvgioqKxyznbXDqL9gTftDRy dPKvGsxzD05LydVPmm+HHYIrhZldpnb+eNlhwHW3F07GjmTSZ1uDyIpc2Pc4NMqtPSfo w3jw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=avMpbdpT; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Baolin Wang
To: adrian.hunter@intel.com, ulf.hansson@linaro.org, asutoshd@codeaurora.org
Cc: orsonzhai@gmail.com, zhang.lyra@gmail.com, arnd@arndb.de,
    linus.walleij@linaro.org, vincent.guittot@linaro.org,
    baolin.wang@linaro.org, baolin.wang7@gmail.com,
    linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 2/4] mmc: host: sdhci: Add request_done ops for struct sdhci_ops
Date: Mon, 11 Nov 2019 15:33:58 +0800
Message-Id: <94603120e6431f0ce35af78935bfe7dddda4850b.1573456284.git.baolin.wang@linaro.org>
X-Mailer: git-send-email 1.7.9.5

Add request_done ops for struct sdhci_ops as a preparation in case some host controllers
have a different method to complete one request, such as supporting request
completion via the MMC software queue.

Suggested-by: Adrian Hunter
Signed-off-by: Baolin Wang
---
 drivers/mmc/host/sdhci.c | 12 ++++++++++--
 drivers/mmc/host/sdhci.h |  2 ++
 2 files changed, 12 insertions(+), 2 deletions(-)
-- 
1.7.9.5

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index b056400..850241f 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -2729,7 +2729,10 @@ static bool sdhci_request_done(struct sdhci_host *host) spin_unlock_irqrestore(&host->lock, flags); - mmc_request_done(host->mmc, mrq); + if (host->ops->request_done) + host->ops->request_done(host, mrq); + else + mmc_request_done(host->mmc, mrq); return false; } @@ -3157,7 +3160,12 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id) /* Process mrqs ready for immediate completion */ for (i = 0; i < SDHCI_MAX_MRQS; i++) { - if (mrqs_done[i]) + if (!mrqs_done[i]) + continue; + + if (host->ops->request_done) + host->ops->request_done(host, mrqs_done[i]); + else mmc_request_done(host->mmc, mrqs_done[i]); } diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index 0ed3e0e..d89cdb9 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -644,6 +644,8 @@ struct sdhci_ops { void (*voltage_switch)(struct sdhci_host *host); void (*adma_write_desc)(struct sdhci_host *host, void **desc, dma_addr_t addr, int len, unsigned int cmd); + void (*request_done)(struct sdhci_host *host, + struct mmc_request *mrq); }; #ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
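[Editorial note: with this hook in place, an sdhci-based driver that enables the software queue can divert completed requests to mmc_hsq_finalize_request() and fall back to the normal completion path otherwise. A minimal sketch follows; the my_sdhci_* names are illustrative assumptions, and only sdhci_ops.request_done, mmc_hsq_finalize_request() and mmc_request_done() come from this series and the existing core.]

        static void my_sdhci_request_done(struct sdhci_host *host,
                                          struct mmc_request *mrq)
        {
                struct mmc_host *mmc = host->mmc;

                /*
                 * Let the software queue finalize the request first; it
                 * returns false if this is not the request it is currently
                 * tracking.
                 */
                if (mmc_hsq_finalize_request(mmc, mrq))
                        return;

                /* Fall back to the normal completion path. */
                mmc_request_done(mmc, mrq);
        }

        static const struct sdhci_ops my_sdhci_ops = {
                /* ... other callbacks ... */
                .request_done = my_sdhci_request_done,
        };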