From patchwork Fri Nov 10 10:01:41 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118525
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 10/12 v5] mmc: queue/block: pass around struct mmc_queue_req*s
Date: Fri, 10 Nov 2017 11:01:41 +0100
Message-Id: <20171110100143.12256-11-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

Instead of passing several pointers (struct mmc_queue_req, struct request, struct mmc_queue) around and reassigning them left and right, pass a single struct mmc_queue_req * and dereference the queue and the request from the mmc_queue_req where needed. The struct mmc_queue_req is the thing that has a lifecycle after all: it is what we keep in our queue, and it is what the block layer helps us manage. Augment a bunch of functions to take this single argument so we can see the trees and not just a big jungle of arguments.

Signed-off-by: Linus Walleij
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c | 128 ++++++++++++++++++++++++-----------------------
 drivers/mmc/core/block.h |   5 +-
 drivers/mmc/core/queue.c |   2 +-
 3 files changed, 69 insertions(+), 66 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index c7a57006e27f..2cd9fe5a8c9b 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1208,9 +1208,9 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
  * processed it with all other requests and then they get issued in this
  * function.
  */
-static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_queue_req *mq_rq;
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_blk_ioc_data **idata;
@@ -1220,7 +1220,6 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
 	int ret;
 	int i;
 
-	mq_rq = req_to_mmc_queue_req(req);
 	rpmb_ioctl = (mq_rq->drv_op == MMC_DRV_OP_IOCTL_RPMB);
 
 	switch (mq_rq->drv_op) {
@@ -1264,12 +1263,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
 		break;
 	}
 	mq_rq->drv_op_result = ret;
-	blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
+	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
+			    ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
-static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_DISCARD;
@@ -1310,10 +1311,10 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 	blk_end_request(req, status, blk_rq_bytes(req));
 }
 
-static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
-					struct request *req)
+static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_SECDISCARD;
@@ -1380,14 +1381,15 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 	blk_end_request(req, status, blk_rq_bytes(req));
 }
 
-static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	int ret = 0;
 
 	ret = mmc_flush_cache(card);
-	blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
+	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
+			    ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 /*
@@ -1698,18 +1700,18 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 	*do_data_tag_p = do_data_tag;
 }
 
-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-			       struct mmc_card *card,
-			       bool disable_multi,
-			       struct mmc_queue *mq)
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq,
+			       bool disable_multi)
 {
 	u32 readcmd, writecmd;
-	struct mmc_blk_request *brq = &mqrq->brq;
-	struct request *req = mmc_queue_req_to_req(mqrq);
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_card *card = mq->card;
+	struct mmc_blk_request *brq = &mq_rq->brq;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
 	struct mmc_blk_data *md = mq->blkdata;
 	bool do_rel_wr, do_data_tag;
 
-	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+	mmc_blk_data_prep(mq, mq_rq, disable_multi, &do_rel_wr, &do_data_tag);
 
 	brq->mrq.cmd = &brq->cmd;
 	brq->mrq.areq = NULL;
@@ -1764,9 +1766,9 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		brq->mrq.sbc = &brq->sbc;
 	}
 
-	mqrq->areq.err_check = mmc_blk_err_check;
-	mqrq->areq.host = card->host;
-	INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq);
+	mq_rq->areq.err_check = mmc_blk_err_check;
+	mq_rq->areq.host = card->host;
+	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
@@ -1798,10 +1800,12 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 	return req_pending;
 }
 
-static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
-				 struct request *req,
-				 struct mmc_queue_req *mqrq)
+static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_card *card = mq->card;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
@@ -1809,16 +1813,15 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 
 /**
  * mmc_blk_rw_try_restart() - tries to restart the current async request
- * @mq: the queue with the card and host to restart
- * @mqrq: the mmc_queue_request containing the areq to be restarted
+ * @mq_rq: the mmc_queue_request containing the areq to be restarted
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq,
-				   struct mmc_queue_req *mqrq)
+static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_async_req *areq = &mqrq->areq;
+	struct mmc_async_req *areq = &mq_rq->areq;
+	struct mmc_queue *mq = mq_rq->mq;
 
 	/* Proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_blk_rw_rq_prep(mq_rq, 0);
 	areq->disable_multi = false;
 	areq->retry = 0;
 	mmc_restart_areq(mq->card->host, areq);
@@ -1867,7 +1870,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 			pr_err("%s BUG rq_tot %d d_xfer %d\n",
 			       __func__, blk_rq_bytes(old_req),
 			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			mmc_blk_rw_cmd_abort(mq_rq);
 			return;
 		}
 		break;
@@ -1875,12 +1878,12 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
-				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, mq_rq);
+				mmc_blk_rw_cmd_abort(mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq, mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
@@ -1892,8 +1895,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 	case MMC_BLK_ABORT:
 		if (!mmc_blk_reset(md, card->host, type))
 			break;
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	case MMC_BLK_DATA_ERR: {
 		int err;
@@ -1901,8 +1904,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (!err)
 			break;
 		if (err == -ENODEV) {
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, mq_rq);
+			mmc_blk_rw_cmd_abort(mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		/* Fall through */
@@ -1923,19 +1926,19 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 			req_pending = blk_end_request(old_req, BLK_STS_IOERR,
 						      brq->data.blksz);
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, mq_rq);
+				mmc_blk_rw_try_restart(mq_rq);
 				return;
 			}
 		break;
 	case MMC_BLK_NOMEDIUM:
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
 		       old_req->rq_disk->disk_name, status);
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	}
 
@@ -1944,25 +1947,25 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		 * In case of a incomplete request
 		 * prepare it again and resend.
 		 */
-		mmc_blk_rw_rq_prep(mq_rq, card,
-				   areq->disable_multi, mq);
+		mmc_blk_rw_rq_prep(mq_rq, areq->disable_multi);
 		mmc_start_areq(card->host, areq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
 	}
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
+static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 {
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
-	struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req);
-	struct mmc_async_req *areq = &mqrq_cur->areq;
+	struct mmc_async_req *areq = &mq_rq->areq;
 
 	/*
 	 * If the card was removed, just cancel everything and return.
 	 */
 	if (mmc_card_removed(card)) {
-		new_req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(new_req, BLK_STS_IOERR);
+		req->rq_flags |= RQF_QUIET;
+		blk_end_request_all(req, BLK_STS_IOERR);
 		return;
 	}
 
@@ -1971,24 +1974,25 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	 * multiple read or write is allowed
 	 */
 	if (mmc_large_sector(card) &&
-	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+	    !IS_ALIGNED(blk_rq_sectors(req), 8)) {
 		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-			new_req->rq_disk->disk_name);
-		mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
+			req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(mq_rq);
 		return;
 	}
 
-	mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+	mmc_blk_rw_rq_prep(mq_rq, 0);
 	areq->disable_multi = false;
 	areq->retry = 0;
 	areq->report_done_status = mmc_blk_rw_done;
 	mmc_start_areq(card->host, areq);
 }
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 
 	if (!req) {
@@ -2010,7 +2014,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * ioctl()s
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_drv_op(mq, req);
+		mmc_blk_issue_drv_op(mq_rq);
 		break;
 	case REQ_OP_DISCARD:
 		/*
@@ -2018,7 +2022,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * discard.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_discard_rq(mq, req);
+		mmc_blk_issue_discard_rq(mq_rq);
 		break;
 	case REQ_OP_SECURE_ERASE:
 		/*
@@ -2026,7 +2030,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * secure erase.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_secdiscard_rq(mq, req);
+		mmc_blk_issue_secdiscard_rq(mq_rq);
 		break;
 	case REQ_OP_FLUSH:
 		/*
@@ -2034,11 +2038,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * flush.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_flush(mq, req);
+		mmc_blk_issue_flush(mq_rq);
 		break;
 	default:
 		/* Normal request, just issue it */
-		mmc_blk_issue_rw_rq(mq, req);
+		mmc_blk_issue_rw_rq(mq_rq);
 		break;
 	}
 }
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 860ca7c8df86..bbc1c8029b3b 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -1,9 +1,8 @@
 #ifndef _MMC_CORE_BLOCK_H
 #define _MMC_CORE_BLOCK_H
 
-struct mmc_queue;
-struct request;
+struct mmc_queue_req;
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index cf43a2d5410d..5511e323db31 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -62,7 +62,7 @@ static int mmc_queue_thread(void *d)
 			claimed_card = true;
 		}
 		set_current_state(TASK_RUNNING);
-		mmc_blk_issue_rq(mq, req);
+		mmc_blk_issue_rq(req_to_mmc_queue_req(req));
 		cond_resched();
 	} else {
 		mq->asleep = true;