From patchwork Thu Feb 9 15:33:59 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 93728
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter, Paolo Valente
Cc: Chunyan Zhang, Baolin Wang, linux-block@vger.kernel.org, Jens Axboe,
 Christoph Hellwig, Arnd Bergmann, Linus Walleij
Subject: [PATCH 12/16] mmc: queue: stop flushing the pipeline with NULL
Date: Thu, 9 Feb 2017 16:33:59 +0100
Message-Id: <20170209153403.9730-13-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170209153403.9730-1-linus.walleij@linaro.org>
References: <20170209153403.9730-1-linus.walleij@linaro.org>
Remove all pipeline flushing, i.e. repeatedly sending NULL down to the
core layer to flush out asynchronous requests, and also sending NULL
after "special" commands to achieve the same flush.

Instead, let the "special" commands wait for any ongoing asynchronous
transfers using the completion, and apart from that expect the core.c
and block.c layers to deal with the ongoing requests autonomously,
without any "push" from the queue.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c | 80 +++++++++++++++++-------------------------------
 drivers/mmc/core/core.c  | 37 ++++++++++------------
 drivers/mmc/core/queue.c | 18 ++++++++---
 include/linux/mmc/core.h |  5 ++-
 4 files changed, 60 insertions(+), 80 deletions(-)

-- 
2.9.3
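As an illustration of the scheme described above (a sketch only, not
part of the patch): instead of pushing a NULL request down the
pipeline, a "special" command now waits on the completion of whatever
asynchronous request is still in flight and then drops it. The patch
open-codes this in mmc_blk_issue_rq() below; the helper name here is
hypothetical.

#include <linux/completion.h>
#include <linux/mmc/host.h>

/* Hypothetical helper, distilled from the block.c hunks below. */
static void mmc_blk_wait_for_ongoing_areq(struct mmc_host *host)
{
	/* Finalize any asynchronous request the host still owns */
	if (host->areq) {
		wait_for_completion(&host->areq->complete);
		host->areq = NULL;
	}
}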
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 0bd9070f5f2e..4952a105780e 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1753,42 +1753,27 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 {
-	enum mmc_blk_status status;
-	struct mmc_async_req *new_areq;
-	struct mmc_async_req *old_areq;
 	struct mmc_card *card = mq->card;
 
-	if (!new_req && !mq->mqrq_prev->req)
+	if (!new_req) {
+		pr_err("%s: NULL request!\n", __func__);
 		return;
+	}
 
-	if (new_req) {
-		/*
-		 * When 4KB native sector is enabled, only 8 blocks
-		 * multiple read or write is allowed
-		 */
-		if (mmc_large_sector(card) &&
-		    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
-			pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-				new_req->rq_disk->disk_name);
-			mmc_blk_rw_cmd_abort(card, new_req);
-			return;
-		}
-
-		mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-		new_areq = &mq->mqrq_cur->areq;
-	} else
-		new_areq = NULL;
-
-	old_areq = mmc_start_areq(card->host, new_areq, &status);
-	if (!old_areq) {
-		/*
-		 * We have just put the first request into the pipeline
-		 * and there is nothing more to do until it is
-		 * complete.
-		 */
+	/*
+	 * When 4KB native sector is enabled, only 8 blocks
+	 * multiple read or write is allowed
+	 */
+	if (mmc_large_sector(card) &&
+	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
+			new_req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(card, new_req);
 		return;
 	}
-	/* FIXME: yes, we just disregard the old_areq */
+
+	mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
+	mmc_start_areq(card->host, &mq->mqrq_cur->areq);
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
@@ -1796,48 +1781,39 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	int ret;
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
-	bool req_is_special = mmc_req_is_special(req);
-
-	if (req && !mq->mqrq_prev->req)
-		/* claim host only for the first request */
-		mmc_get_card(card);
 
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
 		if (req) {
 			blk_end_request_all(req, -EIO);
 		}
-		goto out;
+		return;
 	}
 
 	if (req && req_op(req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
-		if (card->host->areq)
-			mmc_blk_issue_rw_rq(mq, NULL);
+		if (card->host->areq) {
+			wait_for_completion(&card->host->areq->complete);
+			card->host->areq = NULL;
+		}
 		mmc_blk_issue_discard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
 		/* complete ongoing async transfer before issuing secure erase*/
-		if (card->host->areq)
-			mmc_blk_issue_rw_rq(mq, NULL);
+		if (card->host->areq) {
+			wait_for_completion(&card->host->areq->complete);
+			card->host->areq = NULL;
+		}
 		mmc_blk_issue_secdiscard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
-		if (card->host->areq)
-			mmc_blk_issue_rw_rq(mq, NULL);
+		if (card->host->areq) {
+			wait_for_completion(&card->host->areq->complete);
+			card->host->areq = NULL;
+		}
 		mmc_blk_issue_flush(mq, req);
 	} else {
 		mmc_blk_issue_rw_rq(mq, req);
 	}
-
-out:
-	if (!req || req_is_special)
-		/*
-		 * Release host when there are no more requests
-		 * and after special request(discard, flush) is done.
-		 * In case sepecial request, there is no reentry to
-		 * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'.
-		 */
-		mmc_put_card(card);
 }
 
 static inline int mmc_blk_readonly(struct mmc_card *card)
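The core.c hunk below changes the contract of mmc_start_areq(): it no
longer hands back the previous request (and a status through ret_stat);
it simply starts the given request and returns 0 or a negative error.
A minimal caller-side sketch, assuming only the new signature (the
function and its error handling are illustrative, not from the patch):

#include <linux/printk.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

/* Illustrative caller of the reworked mmc_start_areq(). */
static void example_start_areq(struct mmc_host *host,
			       struct mmc_async_req *areq)
{
	/* Prepares and starts areq, waiting internally on any previous one */
	int ret = mmc_start_areq(host, areq);

	if (ret)
		pr_err("%s: failed to start async request: %d\n",
		       __func__, ret);
}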
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 34337ef6705e..03c290e5e2c9 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -667,42 +667,37 @@ EXPORT_SYMBOL(mmc_restart_areq);
  * return the completed request. If there is no ongoing request, NULL
  * is returned without waiting. NULL is not an error condition.
  */
-struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
-				     struct mmc_async_req *areq,
-				     enum mmc_blk_status *ret_stat)
+int mmc_start_areq(struct mmc_host *host,
+		   struct mmc_async_req *areq)
 {
-	enum mmc_blk_status status;
-	int start_err = 0;
+	int ret;
 	struct mmc_async_req *previous = host->areq;
 
 	/* Prepare a new request */
-	if (areq)
-		mmc_pre_req(host, areq->mrq);
+	if (!areq) {
+		pr_err("%s: NULL asynchronous request!\n", __func__);
+		return -EIO;
+	}
+
+	mmc_pre_req(host, areq->mrq);
 
 	/* Finalize previous request, if there is one */
 	if (previous)
 		wait_for_completion(&previous->complete);
 
-	status = MMC_BLK_SUCCESS;
-	if (ret_stat)
-		*ret_stat = status;
-
-	/* Fine so far, start the new request! */
-	if (status == MMC_BLK_SUCCESS && areq) {
-		init_completion(&areq->complete);
-		start_err = __mmc_start_data_req(host, areq->mrq);
-	}
+	init_completion(&areq->complete);
+	ret = __mmc_start_data_req(host, areq->mrq);
 
 	/* Cancel a prepared request if it was not started. */
-	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
+	if (ret) {
 		mmc_post_req(host, areq->mrq, -EINVAL);
-
-	if (status != MMC_BLK_SUCCESS)
 		host->areq = NULL;
-	else
-		host->areq = areq;
+		pr_err("%s: failed to start request\n", __func__);
+	}
+	host->areq = areq;
 
-	return previous;
+	return ret;
 }
 EXPORT_SYMBOL(mmc_start_areq);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index ae6837317fe0..c9f28de7b0f4 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -53,6 +53,7 @@ static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
+	bool claimed_host = false;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -67,9 +68,11 @@ static int mmc_queue_thread(void *d)
 		mq->mqrq_cur->req = req;
 		spin_unlock_irq(q->queue_lock);
 
-		if (req || mq->mqrq_prev->req) {
+		if (req) {
 			bool req_is_special = mmc_req_is_special(req);
 
+			if (!claimed_host)
+				mmc_get_card(mq->card);
 			set_current_state(TASK_RUNNING);
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
@@ -78,11 +81,14 @@ static int mmc_queue_thread(void *d)
 			 * and vice versa.
 			 * In case of special requests, current request
 			 * has been finished. Do not assign it to previous
-			 * request.
+			 * request. Always unclaim the host after special
+			 * commands.
 			 */
-			if (req_is_special)
+			if (req_is_special) {
 				mq->mqrq_cur->req = NULL;
-
+				mmc_put_card(mq->card);
+				claimed_host = false;
+			}
 			mq->mqrq_prev->brq.mrq.data = NULL;
 			mq->mqrq_prev->req = NULL;
 			swap(mq->mqrq_prev, mq->mqrq_cur);
@@ -97,6 +103,10 @@ static int mmc_queue_thread(void *d)
 			down(&mq->thread_sem);
 		}
 	} while (1);
+
+	if (claimed_host)
+		mmc_put_card(mq->card);
+
 	up(&mq->thread_sem);
 
 	return 0;
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 55b45dcddee6..af651e723ba2 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -160,9 +160,8 @@ struct mmc_async_req;
 
 void mmc_finalize_areq(struct kthread_work *work);
 int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq);
-struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
-				     struct mmc_async_req *areq,
-				     enum mmc_blk_status *ret_stat);
+int mmc_start_areq(struct mmc_host *host,
+		   struct mmc_async_req *areq);
 void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq);
 int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
 	int retries);
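The queue.c hunks above replace the old "claim the host for the first
request, release it when the queue drains" logic with a simple flag in
mmc_queue_thread(): the host is claimed lazily when a request arrives
and released again after a special command, plus once more when the
thread exits if it is still held. A condensed sketch of that flow, in
the context of drivers/mmc/core/queue.c where these helpers are
visible (the helper function and the explicit claimed_host update
after mmc_get_card() are this sketch's own):

/* Condensed paraphrase of the claim/unclaim flow in mmc_queue_thread(). */
static void mmc_queue_issue_one(struct mmc_queue *mq, struct request *req,
				bool *claimed_host)
{
	if (!*claimed_host) {
		/* Claim the host lazily for the first request */
		mmc_get_card(mq->card);
		*claimed_host = true;
	}

	mmc_blk_issue_rq(mq, req);

	if (mmc_req_is_special(req)) {
		/* Always unclaim the host after special commands */
		mmc_put_card(mq->card);
		*claimed_host = false;
	}
}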