From patchwork Wed Aug 16 19:53:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714301 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC3A3C25B5F for ; Wed, 16 Aug 2023 19:55:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345919AbjHPTzW (ORCPT ); Wed, 16 Aug 2023 15:55:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345907AbjHPTzD (ORCPT ); Wed, 16 Aug 2023 15:55:03 -0400 Received: from mail-pg1-f181.google.com (mail-pg1-f181.google.com [209.85.215.181]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 15648198E; Wed, 16 Aug 2023 12:55:02 -0700 (PDT) Received: by mail-pg1-f181.google.com with SMTP id 41be03b00d2f7-564b8e60ce9so4078648a12.2; Wed, 16 Aug 2023 12:55:02 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215701; x=1692820501; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=b8soXFjhbaqMNIvZK3ljDXn5iJDlvOmt84AIDipCptw=; b=hdt7em/mtzNyOn6JIQVHe8KnzOuHwg64y7F3vnken0/t7pdcWX5x6sU5v8lro/UifV vGyzfKg7mycUuMRMVs95q0EqEOcYL5SnPTH7q620dTKK2ELL2nvKgluQxNGK9TGOBZHk gaMiNGpBg9pbld2MVVqC7R3K60uKRFC54dFo8rkE2vTjnACIrG62ViY2FU+JPAZxtDXv h6cObPvVMEBUGM3WfObBRrhtJH6/+jQQxNwxE3fgfp3mKWsrldTuRpb3clX+gbaAhbNU 73COdU//B0rtwMZmL6OhAh8m3qyDhFUdSN2gdmmziZZbUz4Cr761jeQtOQtL72AHLOeu FTFw== X-Gm-Message-State: AOJu0YzQ5mX25s4iOBPa/CmDSjVXu8/a7MJ/TvwkLuaY5Akv3cJokBGU 1GqyMEG2yo92RNbli5xwXezZ4+LJ3n4= X-Google-Smtp-Source: AGHT+IE/p+obXzCwXtyPgArb6nhlCM5x/1jZTEMziMUspvydIk5hij6MyTiGw0HSHRjIfv4wasi2oA== X-Received: by 2002:a05:6a21:35c9:b0:140:a273:db48 with SMTP id ba9-20020a056a2135c900b00140a273db48mr2534507pzc.62.1692215701388; Wed, 16 Aug 2023 12:55:01 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:00 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v9 01/17] block: Introduce more member variables related to zone write locking Date: Wed, 16 Aug 2023 12:53:13 -0700 Message-ID: <20230816195447.3703954-2-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Many but not all storage controllers require serialization of zoned writes. Introduce two new request queue limit member variables related to write serialization. 'driver_preserves_write_order' allows block drivers to indicate that the order of write commands is preserved and hence that serialization of writes per zone is not required. 
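As an illustration only (hypothetical driver code, not part of this series), a block driver whose hardware preserves the order of write commands could advertise that through the new limit:

	static void exampledrv_set_limits(struct request_queue *q)
	{
		/* Writes are never reordered between submission and the device. */
		q->limits.driver_preserves_write_order = true;
	}

Patch 09 of this series sets this limit in the scsi_debug driver behind a new module parameter.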
'use_zone_write_lock' is set by disk_set_zoned() if and only if the block device has zones and if the block driver does not preserve the order of write requests. Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Damien Le Moal --- block/blk-settings.c | 15 +++++++++++++++ block/blk-zoned.c | 1 + include/linux/blkdev.h | 10 ++++++++++ 3 files changed, 26 insertions(+) diff --git a/block/blk-settings.c b/block/blk-settings.c index 0046b447268f..4c776c08f190 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -56,6 +56,8 @@ void blk_set_default_limits(struct queue_limits *lim) lim->alignment_offset = 0; lim->io_opt = 0; lim->misaligned = 0; + lim->driver_preserves_write_order = false; + lim->use_zone_write_lock = false; lim->zoned = BLK_ZONED_NONE; lim->zone_write_granularity = 0; lim->dma_alignment = 511; @@ -82,6 +84,8 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->max_dev_sectors = UINT_MAX; lim->max_write_zeroes_sectors = UINT_MAX; lim->max_zone_append_sectors = UINT_MAX; + /* Request-based stacking drivers do not reorder requests. */ + lim->driver_preserves_write_order = true; } EXPORT_SYMBOL(blk_set_stacking_limits); @@ -685,6 +689,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, b->max_secure_erase_sectors); t->zone_write_granularity = max(t->zone_write_granularity, b->zone_write_granularity); + t->driver_preserves_write_order = t->driver_preserves_write_order && + b->driver_preserves_write_order; + t->use_zone_write_lock = t->use_zone_write_lock || + b->use_zone_write_lock; t->zoned = max(t->zoned, b->zoned); return ret; } @@ -949,6 +957,13 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model) } q->limits.zoned = model; + /* + * Use the zone write lock only for zoned block devices and only if + * the block driver does not preserve the order of write commands. + */ + q->limits.use_zone_write_lock = model != BLK_ZONED_NONE && + !q->limits.driver_preserves_write_order; + if (model != BLK_ZONED_NONE) { /* * Set the zone write granularity to the device logical block diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 619ee41a51cc..112620985bff 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -631,6 +631,7 @@ void disk_clear_zone_settings(struct gendisk *disk) q->limits.chunk_sectors = 0; q->limits.zone_write_granularity = 0; q->limits.max_zone_append_sectors = 0; + q->limits.use_zone_write_lock = false; blk_mq_unfreeze_queue(q); } diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 4feed1fc141f..e6d5450c8488 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -316,6 +316,16 @@ struct queue_limits { unsigned char misaligned; unsigned char discard_misaligned; unsigned char raid_partial_stripes_expensive; + /* + * Whether or not the block driver preserves the order of write + * requests. Set by the block driver. + */ + bool driver_preserves_write_order; + /* + * Whether or not zone write locking should be used. Set by + * disk_set_zoned(). 
+ */ + bool use_zone_write_lock; enum blk_zoned_model zoned; /* From patchwork Wed Aug 16 19:53:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2363C25B5E for ; Wed, 16 Aug 2023 19:55:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345926AbjHPTzW (ORCPT ); Wed, 16 Aug 2023 15:55:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345910AbjHPTzE (ORCPT ); Wed, 16 Aug 2023 15:55:04 -0400 Received: from mail-pf1-f179.google.com (mail-pf1-f179.google.com [209.85.210.179]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 446041BEE; Wed, 16 Aug 2023 12:55:03 -0700 (PDT) Received: by mail-pf1-f179.google.com with SMTP id d2e1a72fcca58-6887b3613e4so1179377b3a.3; Wed, 16 Aug 2023 12:55:03 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215703; x=1692820503; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3+wK2b/T5ahLT/169wPA2eew56bTI2jrIsHOq12bMQA=; b=EhLzdUnZ5CDX0wooop5A0Qy2LNwf48H63clRs5608awpNCv2M87DxxLJIcbsw9Abbh 2n6PM2J7ZTPLIHA99mdCFkezIQuRUmVrccH9S44X5TW05J+p2WplKxiWfABDMsZKGCL9 0YMWRa4CZygiDG+TJbd7e86Lyek8NOOvUqgCtoqQpuY/BHyWqJEh9vJ5LpzQYjEGoEQG 9aWkglbqpaifVgYYnjZD8gbGoapV4Y8Ut4b0gKGRH7TkV7lomOPWAHvoMjybC4F7Y930 TXLy9UGm7AogVMiWYnfN3zEsl32ruZ/ArqnxNkgoz6ub3hCuZvxfy7PYBE+RHq1p7wTW ThBg== X-Gm-Message-State: AOJu0YwxrOUhQ2zjQqs14E7cWt7VFgPAWh/howTIqCAA394bL/ECBKlG dV8CBZsGhHJxT0BPyqPGCwA= X-Google-Smtp-Source: AGHT+IEuPF57GoiVHy80p8JN6Kr+gPanM9h0ZyGHzuX3I8WzdqLR9CvUVl4AOtI95bYJNiaGxHdubQ== X-Received: by 2002:a05:6a00:3490:b0:687:9909:3c75 with SMTP id cp16-20020a056a00349000b0068799093c75mr3170690pfb.4.1692215702559; Wed, 16 Aug 2023 12:55:02 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:02 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v9 02/17] block: Only use write locking if necessary Date: Wed, 16 Aug 2023 12:53:14 -0700 Message-ID: <20230816195447.3703954-3-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Make blk_req_needs_zone_write_lock() return false if q->limits.use_zone_write_lock is false. Inline this function because it is short and because it is called from the hot path of the mq-deadline I/O scheduler. 
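For reference, after this change the helper reads as follows (reconstructed from the hunk below):

	bool blk_req_needs_zone_write_lock(struct request *rq)
	{
		struct request_queue *q = rq->q;

		if (!q->limits.use_zone_write_lock)
			return false;

		if (!q->disk->seq_zones_wlock)
			return false;

		return blk_rq_is_seq_zoned_write(rq);
	}

For queues whose driver preserves the write order, the hot path thus reduces to a single queue_limits flag test.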
Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/blk-zoned.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 112620985bff..d8a80cce832f 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -53,11 +53,16 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond) EXPORT_SYMBOL_GPL(blk_zone_cond_str); /* - * Return true if a request is a write requests that needs zone write locking. + * Return true if a request is a write request that needs zone write locking. */ bool blk_req_needs_zone_write_lock(struct request *rq) { - if (!rq->q->disk->seq_zones_wlock) + struct request_queue *q = rq->q; + + if (!q->limits.use_zone_write_lock) + return false; + + if (!q->disk->seq_zones_wlock) return false; return blk_rq_is_seq_zoned_write(rq); From patchwork Wed Aug 16 19:53:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714548 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48E3AC25B77 for ; Wed, 16 Aug 2023 19:55:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345910AbjHPTzW (ORCPT ); Wed, 16 Aug 2023 15:55:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42464 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345921AbjHPTzF (ORCPT ); Wed, 16 Aug 2023 15:55:05 -0400 Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com [209.85.210.170]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58E071BE8; Wed, 16 Aug 2023 12:55:04 -0700 (PDT) Received: by mail-pf1-f170.google.com with SMTP id d2e1a72fcca58-688731c6331so1414907b3a.3; Wed, 16 Aug 2023 12:55:04 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215704; x=1692820504; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=buz7yG1gH/FErqP7TNn0hyeyJ8/RFoYWVgESu90bQm4=; b=IcYCz/zM7HhTekSUS5KMyls1MLUE5JZaUgZ6752Cxn2oim/SSyyAN2h13ZBug4ajQU S/EW2LPX3sn94fkQ8bgKCsNIcbuNZJtpbWV9RMu4tOt0DsMb9YEKDzY+2vOBevHeigu3 kxiBqsmpDtsn7FmKxADorL69caOXQb9kHhc7TGEvgmRaCQs48m3J46JMqYM8lvjpcoHI B0HH6ReS3lWWHlDRwKvtEGvutOyielTlnTVbRoiNq3jLUmsbSPwKgvM4/pN1g4GTxLLz PfWVzlFbq/TUrdmVv4+km986yj+ejsx9XnAzOfR0IdKT1zAWAu5/0xe/FKMaSKdFwSju bD4g== X-Gm-Message-State: AOJu0YzyPS+to+kUCUMPvOwGTxeBSsncT9vFEct+8Ihd4PVEoZfrA7hu 9XFePDKIaNhaBfs+Aa4yhgE= X-Google-Smtp-Source: AGHT+IGwjZopgZO6O5VKQZLlXIj8gkZLt/kCZCLpW9pgzDH1/vpW75h8eUOF0QD5ThxaYpg3i7j60w== X-Received: by 2002:a05:6a00:84b:b0:666:81ae:fec0 with SMTP id q11-20020a056a00084b00b0066681aefec0mr3258142pfk.25.1692215703782; Wed, 16 Aug 2023 12:55:03 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:03 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . 
Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v9 03/17] block/mq-deadline: Only use zone locking if necessary Date: Wed, 16 Aug 2023 12:53:15 -0700 Message-ID: <20230816195447.3703954-4-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Measurements have shown that limiting the queue depth to one per zone for zoned writes has a significant negative performance impact on zoned UFS devices. Hence this patch that disables zone locking by the mq-deadline scheduler if the storage controller preserves the command order. This patch is based on the following assumptions: - It happens infrequently that zoned write requests are reordered by the block layer. - The I/O priority of all write requests is the same per zone. - Either no I/O scheduler is used or an I/O scheduler is used that serializes write requests per zone. Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Damien Le Moal --- block/mq-deadline.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/block/mq-deadline.c b/block/mq-deadline.c index f958e79277b8..082ccf3186f4 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -353,7 +353,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio, return NULL; rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next); - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -398,7 +398,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio, if (!rq) return NULL; - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -526,8 +526,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd, } /* - * For a zoned block device, if we only have writes queued and none of - * them can be dispatched, rq will be NULL. + * For a zoned block device that requires write serialization, if we + * only have writes queued and none of them can be dispatched, rq will + * be NULL. 
*/ if (!rq) return NULL; @@ -934,7 +935,7 @@ static void dd_finish_request(struct request *rq) atomic_inc(&per_prio->stats.completed); - if (blk_queue_is_zoned(q)) { + if (rq->q->limits.use_zone_write_lock) { unsigned long flags; spin_lock_irqsave(&dd->zone_lock, flags); From patchwork Wed Aug 16 19:53:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A81E8C25B7A for ; Wed, 16 Aug 2023 19:55:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345931AbjHPTzX (ORCPT ); Wed, 16 Aug 2023 15:55:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42480 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345923AbjHPTzH (ORCPT ); Wed, 16 Aug 2023 15:55:07 -0400 Received: from mail-pf1-f182.google.com (mail-pf1-f182.google.com [209.85.210.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C36A0E56; Wed, 16 Aug 2023 12:55:05 -0700 (PDT) Received: by mail-pf1-f182.google.com with SMTP id d2e1a72fcca58-6887480109bso1598471b3a.0; Wed, 16 Aug 2023 12:55:05 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215705; x=1692820505; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=kJ7yasQRI5LCeMCk9S4bEElLyvdl/s1ciHklGiKKwos=; b=gpIFL9dRbcFodWY6hUrlPtbMs+L/HZIJsw6hTmgbFs1zNLBerIsWCsROehxstPqQtG zrHwGsimv3lsqdyKdDHdtXYBcNasyPiRn/5OMEuOlboFNm9XbveBozVv4dKNuhNeBR+S 6AKtnqBYVXBLBMHnljzrVxQABr9ht6t31OZLiiay8Ubo+2TUSlubzOMyEvFm9XO03z7E 567zgDgl18B6x/g6W4/xzZpH1uQKiNcR9X1CLQ3kKpYJCJwoHRLwPIEKlssXQ3CCuNCD ugMzzM3+LCRE11Napqoov24Z2twDZmnsNTSocjLT3x16FwZZea2SXubvpSkB2iC1L/VE twHg== X-Gm-Message-State: AOJu0YzZDFlaS+eSU5ht7yN9alyyxDDiMNOlgGdXRn3FPF8O5wBiG+Dp P+vAv1SZ8vaZyPjnwgjEcfI= X-Google-Smtp-Source: AGHT+IEMn7bJGo0ID6tU4JMQRpsZExRripHFhWgK+z0vHtzebVYQuN4INkj0HwmgdVU4GLQWWT+PjQ== X-Received: by 2002:a05:6a00:887:b0:686:babd:f5c1 with SMTP id q7-20020a056a00088700b00686babdf5c1mr3793668pfj.25.1692215705183; Wed, 16 Aug 2023 12:55:05 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:04 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 04/17] scsi: core: Call .eh_prepare_resubmit() before resubmitting Date: Wed, 16 Aug 2023 12:53:16 -0700 Message-ID: <20230816195447.3703954-5-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Introduce the .eh_prepare_resubmit function pointer in struct scsi_driver. 
Make the error handler call .eh_prepare_resubmit() before resubmitting commands. A later patch will use this functionality to sort SCSI commands by LBA from inside the SCSI disk driver. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 65 ++++++++++++++++++++++++++++++++++++++ drivers/scsi/scsi_priv.h | 1 + include/scsi/scsi_driver.h | 1 + 3 files changed, 67 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c67cdcdc3ba8..4393a7fd8a07 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -2186,6 +2187,68 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost, } EXPORT_SYMBOL_GPL(scsi_eh_ready_devs); +/* + * Returns true if the commands in @done_q should be sorted in LBA order + * before being resubmitted. + */ +static bool scsi_needs_sorting(struct list_head *done_q) +{ + struct scsi_cmnd *scmd; + + list_for_each_entry(scmd, done_q, eh_entry) { + struct request *rq = scsi_cmd_to_rq(scmd); + + if (!rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq)) + return true; + } + + return false; +} + +/* + * Comparison function that allows to sort SCSI commands by ULD driver. + */ +static int scsi_cmp_uld(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + + /* See also the comment above the list_sort() definition. */ + return scsi_cmd_to_driver(a) > scsi_cmd_to_driver(b); +} + +void scsi_call_prepare_resubmit(struct list_head *done_q) +{ + struct scsi_cmnd *scmd, *next; + + if (!scsi_needs_sorting(done_q)) + return; + + /* Sort pending SCSI commands by ULD. */ + list_sort(NULL, done_q, scsi_cmp_uld); + + /* + * Call .eh_prepare_resubmit for each range of commands with identical + * ULD driver pointer. + */ + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { + struct scsi_driver *uld = scsi_cmd_to_driver(scmd); + struct list_head *prev, uld_cmd_list; + + while (&next->eh_entry != done_q && + scsi_cmd_to_driver(next) == uld) + next = list_next_entry(next, eh_entry); + if (!uld->eh_prepare_resubmit) + continue; + prev = scmd->eh_entry.prev; + list_cut_position(&uld_cmd_list, prev, next->eh_entry.prev); + uld->eh_prepare_resubmit(&uld_cmd_list); + list_splice(&uld_cmd_list, prev); + } +} + /** * scsi_eh_flush_done_q - finish processed commands or retry them. * @done_q: list_head of processed commands. 
@@ -2194,6 +2257,8 @@ void scsi_eh_flush_done_q(struct list_head *done_q) { struct scsi_cmnd *scmd, *next; + scsi_call_prepare_resubmit(done_q); + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { list_del_init(&scmd->eh_entry); if (scsi_device_online(scmd->device) && diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index f42388ecb024..df4af4645430 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -101,6 +101,7 @@ int scsi_eh_get_sense(struct list_head *work_q, struct list_head *done_q); bool scsi_noretry_cmd(struct scsi_cmnd *scmd); void scsi_eh_done(struct scsi_cmnd *scmd); +void scsi_call_prepare_resubmit(struct list_head *done_q); /* scsi_lib.c */ extern int scsi_maybe_unblock_host(struct scsi_device *sdev); diff --git a/include/scsi/scsi_driver.h b/include/scsi/scsi_driver.h index 4ce1988b2ba0..2b11be896eee 100644 --- a/include/scsi/scsi_driver.h +++ b/include/scsi/scsi_driver.h @@ -18,6 +18,7 @@ struct scsi_driver { int (*done)(struct scsi_cmnd *); int (*eh_action)(struct scsi_cmnd *, int); void (*eh_reset)(struct scsi_cmnd *); + void (*eh_prepare_resubmit)(struct list_head *cmd_list); }; #define to_scsi_driver(drv) \ container_of((drv), struct scsi_driver, gendrv) From patchwork Wed Aug 16 19:53:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714300 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6BDBAC25B78 for ; Wed, 16 Aug 2023 19:55:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345934AbjHPTzY (ORCPT ); Wed, 16 Aug 2023 15:55:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345936AbjHPTzO (ORCPT ); Wed, 16 Aug 2023 15:55:14 -0400 Received: from mail-pg1-f169.google.com (mail-pg1-f169.google.com [209.85.215.169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C568E55; Wed, 16 Aug 2023 12:55:13 -0700 (PDT) Received: by mail-pg1-f169.google.com with SMTP id 41be03b00d2f7-564b6276941so5223603a12.3; Wed, 16 Aug 2023 12:55:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215712; x=1692820512; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=cNIT/AH6U/3H/4Easw6T2cqeSY/Vrxw0YPM041/czqs=; b=ievchkL3U/Y/Eys91y936vnKWGoaCZRsB5PAT1c6OvIAnXW7c5DqBBE1Lhu8zO0vFx 6L98UqA3n7SMzasS125bHxuPqkZcqBN2pnLM6C/hv09Nx+YRt7Y1P03Z2h3V4VXiSOSH xEfeXExakcUwpJX/mH+vPtmGIl34UcEvZLg26CWt7Xz/Blw0sudd0aY6rhN/lVKlDwSv c9PPIbMRWALOvtldfMC9uIGMxZvKzaBewgsZ0Hhz3VbjB6lTeI+R5n5K/Gw9gAjKoB+m mQxDccOEfVMZglYCmuBuWfC4rIr4d8Ztr2IaegTC1Vra7NTj/2Vjb/g5ZLi9o8TWK87C 92qg== X-Gm-Message-State: AOJu0YxIbjBqdsTOr+MlyV3xKQxckGMNBveHfCj+Erb9qMrB62DGdM6d b8X4zpLlGY1dLRch6cm4IIY= X-Google-Smtp-Source: AGHT+IEiuGHtdFRuOftauodyOz06rD1vZTz9TKKjq3T/+q+9r8Ywc+LI1yVT2z48oyqaNK4VTEpKUA== X-Received: by 2002:a05:6a20:13da:b0:132:8620:8d21 with SMTP id ho26-20020a056a2013da00b0013286208d21mr2943145pzc.58.1692215712405; Wed, 16 Aug 2023 12:55:12 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with 
ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:11 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 05/17] scsi: core: Add unit tests for scsi_call_prepare_resubmit() Date: Wed, 16 Aug 2023 12:53:17 -0700 Message-ID: <20230816195447.3703954-6-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Triggering all code paths in scsi_call_prepare_resubmit() via manual testing is difficult. Hence add unit tests for this function. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/Kconfig | 2 + drivers/scsi/Kconfig.kunit | 4 + drivers/scsi/Makefile | 2 + drivers/scsi/Makefile.kunit | 1 + drivers/scsi/scsi_error_test.c | 196 +++++++++++++++++++++++++++++++++ 5 files changed, 205 insertions(+) create mode 100644 drivers/scsi/Kconfig.kunit create mode 100644 drivers/scsi/Makefile.kunit create mode 100644 drivers/scsi/scsi_error_test.c diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 4962ce989113..fc288f8fb800 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -232,6 +232,8 @@ config SCSI_SCAN_ASYNC Note that this setting also affects whether resuming from system suspend will be performed asynchronously. +source "drivers/scsi/Kconfig.kunit" + menu "SCSI Transports" depends on SCSI diff --git a/drivers/scsi/Kconfig.kunit b/drivers/scsi/Kconfig.kunit new file mode 100644 index 000000000000..90984a6ec7cc --- /dev/null +++ b/drivers/scsi/Kconfig.kunit @@ -0,0 +1,4 @@ +config SCSI_ERROR_TEST + tristate "scsi_error.c unit tests" if !KUNIT_ALL_TESTS + depends on SCSI && KUNIT + default KUNIT_ALL_TESTS diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index f055bfd54a68..1c5c3afb6c6e 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -168,6 +168,8 @@ scsi_mod-$(CONFIG_PM) += scsi_pm.o scsi_mod-$(CONFIG_SCSI_DH) += scsi_dh.o scsi_mod-$(CONFIG_BLK_DEV_BSG) += scsi_bsg.o +include $(srctree)/drivers/scsi/Makefile.kunit + hv_storvsc-y := storvsc_drv.o sd_mod-objs := sd.o diff --git a/drivers/scsi/Makefile.kunit b/drivers/scsi/Makefile.kunit new file mode 100644 index 000000000000..3e98053b2709 --- /dev/null +++ b/drivers/scsi/Makefile.kunit @@ -0,0 +1 @@ +obj-$(CONFIG_SCSI_ERROR_TEST) += scsi_error_test.o diff --git a/drivers/scsi/scsi_error_test.c b/drivers/scsi/scsi_error_test.c new file mode 100644 index 000000000000..c35ac628065e --- /dev/null +++ b/drivers/scsi/scsi_error_test.c @@ -0,0 +1,196 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2023 Google LLC + */ +#include +#include +#include +#include "scsi_priv.h" + +static struct kunit *kunit_test; + +static void uld_prepare_resubmit(struct list_head *cmd_list) +{ + /* This function must not be called. */ + KUNIT_EXPECT_TRUE(kunit_test, false); +} + +/* + * Verify that .eh_prepare_resubmit() is not called if use_zone_write_lock is + * true. 
+ */ +static void test_prepare_resubmit1(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = false, + .use_zone_write_lock = true, + .zoned = BLK_ZONED_HM, + } + }; + static struct scsi_driver uld = { + .eh_prepare_resubmit = uld_prepare_resubmit, + }; + static struct scsi_device dev = { + .request_queue = &q, + .sdev_gendev.driver = &uld.gendrv, + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + disk.queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 2, + }, + .cmd.device = &dev, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 1; + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); +} + +static struct scsi_driver *uld1, *uld2, *uld3; + +static void uld1_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld1); +} + +static void uld2_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld2); +} + +static void test_prepare_resubmit2(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = true, + .use_zone_write_lock = false, + .zoned = BLK_ZONED_HM, + } + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2, cmd3, cmd4, cmd5, cmd6; + static struct scsi_device dev1, dev2, dev3; + struct scsi_driver *uld; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + uld = kzalloc(3 * sizeof(uld), GFP_KERNEL); + uld1 = &uld[0]; + uld1->eh_prepare_resubmit = uld1_prepare_resubmit; + uld2 = &uld[1]; + uld2->eh_prepare_resubmit = uld2_prepare_resubmit; + uld3 = &uld[2]; + disk.queue = &q; + dev1.sdev_gendev.driver = &uld1->gendrv; + dev1.request_queue = &q; + dev2.sdev_gendev.driver = &uld2->gendrv; + dev2.request_queue = &q; + dev3.sdev_gendev.driver = &uld3->gendrv; + dev3.request_queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 3, + }, + .cmd.device = &dev1, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 4; + cmd3 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 1, + }, + .cmd.device = &dev2, + }; + cmd4 = cmd3; + cmd4.rq.__sector = 2, + cmd5 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 5, + }, + .cmd.device = &dev3, + }; + cmd6 = cmd5; + cmd6.rq.__sector = 6; + list_add_tail(&cmd3.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd5.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd6.cmd.eh_entry, 
&cmd_list); + list_add_tail(&cmd4.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + KUNIT_EXPECT_TRUE(test, uld1 < uld2); + KUNIT_EXPECT_TRUE(test, uld2 < uld3); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next, + &cmd3.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next, + &cmd4.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next, + &cmd5.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next->next, + &cmd6.cmd.eh_entry); + kfree(uld); +} + +static struct kunit_case prepare_resubmit_test_cases[] = { + KUNIT_CASE(test_prepare_resubmit1), + KUNIT_CASE(test_prepare_resubmit2), + {} +}; + +static struct kunit_suite prepare_resubmit_test_suite = { + .name = "prepare_resubmit", + .test_cases = prepare_resubmit_test_cases, +}; +kunit_test_suite(prepare_resubmit_test_suite); + +MODULE_DESCRIPTION("scsi_call_prepare_resubmit() unit tests"); +MODULE_AUTHOR("Bart Van Assche"); +MODULE_LICENSE("GPL"); From patchwork Wed Aug 16 19:53:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F4D6C25B7F for ; Wed, 16 Aug 2023 19:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345939AbjHPTzZ (ORCPT ); Wed, 16 Aug 2023 15:55:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345937AbjHPTzP (ORCPT ); Wed, 16 Aug 2023 15:55:15 -0400 Received: from mail-pg1-f169.google.com (mail-pg1-f169.google.com [209.85.215.169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4861FE55; Wed, 16 Aug 2023 12:55:14 -0700 (PDT) Received: by mail-pg1-f169.google.com with SMTP id 41be03b00d2f7-565403bda57so4299148a12.3; Wed, 16 Aug 2023 12:55:14 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215714; x=1692820514; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=odcxs1ViPnFxwilUGJzlR+a6yqgvZFgbIQUNsV7/ZVM=; b=SHbxCecBWhHy1y1reJaR/IQ51i7T0TdRe5QPfyaUezctGLgfGVLhuijoBs7UE/KvJW lN6yIgYP+hRiuvGt8KLe2Xk29+TjTF87oWp8GxDo1EPc26qAy+v1toOmiSlXaCMzuk7R sSknGTEp+dNKR6ueWkf4+859Ou3+3X8Y4vQR/27rAuSy2hfpgLAy9fvqnHI+kauHjtf2 qgil8H8ygOG23e70v/ceUGSrUas8mwF1NQtyqsQutKGB27Iz1n4dfHlcntoU2AMu1H2g 6K0QSQ7HAiuHeW70bJJMq+4U+LIFZgiRIeoN5iV4BPstwmysKi4Y5OgoN/3rl1f7MbGd 01jw== X-Gm-Message-State: AOJu0YyVmytO9eH4qTyxi/v3AirQy65loBQjdr4oeI1AwnX2YH5BDwUo 4PRP13YeNpgG4QotRrOroJ0= X-Google-Smtp-Source: AGHT+IGaQfx1kh4sa48mT3nrUEn4EzJ/T2yw4N+p410g+vF4PdUeGp6Uoaf8/BfG1OklG4tZ0pZCBQ== X-Received: by 2002:a05:6a20:748b:b0:131:4808:d5a1 with SMTP id p11-20020a056a20748b00b001314808d5a1mr3349403pzd.28.1692215713690; Wed, 16 Aug 2023 12:55:13 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com 
([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:13 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 06/17] scsi: sd: Sort commands by LBA before resubmitting Date: Wed, 16 Aug 2023 12:53:18 -0700 Message-ID: <20230816195447.3703954-7-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Sort SCSI commands by LBA before the SCSI error handler resubmits these commands. This is necessary when resubmitting zoned writes (REQ_OP_WRITE) if multiple writes have been queued for a single zone. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd.c | 35 +++++++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 3c668cfb146d..8a4b0874e7fe 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -47,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -117,6 +118,7 @@ static void sd_uninit_command(struct scsi_cmnd *SCpnt); static int sd_done(struct scsi_cmnd *); static void sd_eh_reset(struct scsi_cmnd *); static int sd_eh_action(struct scsi_cmnd *, int); +static void sd_prepare_resubmit(struct list_head *cmd_list); static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer); static void scsi_disk_release(struct device *cdev); @@ -617,6 +619,7 @@ static struct scsi_driver sd_template = { .done = sd_done, .eh_action = sd_eh_action, .eh_reset = sd_eh_reset, + .eh_prepare_resubmit = sd_prepare_resubmit, }; /* @@ -2018,6 +2021,38 @@ static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp) return eh_disp; } +static int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + struct request *rq_a = scsi_cmd_to_rq(a); + struct request *rq_b = scsi_cmd_to_rq(b); + bool use_zwl_a = rq_a->q->limits.use_zone_write_lock; + bool use_zwl_b = rq_b->q->limits.use_zone_write_lock; + + /* + * Order the commands that need zone write locking after the commands + * that do not need zone write locking. Order the commands that do not + * need zone write locking by LBA. Do not reorder the commands that + * need zone write locking. See also the comment above the list_sort() + * definition. + */ + if (use_zwl_a || use_zwl_b) + return use_zwl_a > use_zwl_b; + return blk_rq_pos(rq_a) > blk_rq_pos(rq_b); +} + +static void sd_prepare_resubmit(struct list_head *cmd_list) +{ + /* + * Sort pending SCSI commands in starting sector order. This is + * important if one of the SCSI devices associated with @shost is a + * zoned block device for which zone write locking is disabled. 
+ */ + list_sort(NULL, cmd_list, sd_cmp_sector); +} + static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd) { struct request *req = scsi_cmd_to_rq(scmd); From patchwork Wed Aug 16 19:53:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D08FC27C44 for ; Wed, 16 Aug 2023 19:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345940AbjHPTzZ (ORCPT ); Wed, 16 Aug 2023 15:55:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39098 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345941AbjHPTzQ (ORCPT ); Wed, 16 Aug 2023 15:55:16 -0400 Received: from mail-pg1-f176.google.com (mail-pg1-f176.google.com [209.85.215.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB9E1E56; Wed, 16 Aug 2023 12:55:15 -0700 (PDT) Received: by mail-pg1-f176.google.com with SMTP id 41be03b00d2f7-565e395e7a6so1092211a12.0; Wed, 16 Aug 2023 12:55:15 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215715; x=1692820515; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/oInJY7oEtcOeIfKLOjrIBYHnkmAcJFxzJJlVQKBQ14=; b=lIhzx8X5/SAUQuhcA8RC9MaJA4V5wFRapElprzRoxCdYT8Gb2/oUW3MJYbKcHpeL8b sEDxz4VEZMbJlh5Ocy6sDzzYaQnY5rSLpJDdVK+m5SZyBFUP4VEQUjRoOVMdnDhlejKE edLDpHyr1fypzzwvD7G4AZrZDa4iQ+KFDZAdb/vzrj+pvvTt/jo5kALfg1ulxCRI24jS HT8t0NnfI0uSNnfgSBZPdrxG/G6W3QNP8d8mhqMmgx/CbjfFZCxWV0Il+2Xb016ZXSew 5bUGq8ezyGmEnFDAJOKPKfBuGdJ2iDyVzwuS6MsMNfhsXkLC5jg6kW2QDBaGImMvLCTY mkKw== X-Gm-Message-State: AOJu0Yx8nB+5YFdvCcAcD76xFeysIrFcWgCwkROBXDd+grGSrTfjBohr m+D2YkMLUznn3dY/PkYb01L1eJRREMs= X-Google-Smtp-Source: AGHT+IHtZN+JT/2RuwPnOrfBgV6CL97RG33flGSt7q3QDp8nGmLSUNhu1zYkektCCjkI25FKx/Zt9A== X-Received: by 2002:a05:6a20:5497:b0:134:11c9:46bd with SMTP id i23-20020a056a20549700b0013411c946bdmr3256493pzk.3.1692215715196; Wed, 16 Aug 2023 12:55:15 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:14 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 07/17] scsi: core: Retry unaligned zoned writes Date: Wed, 16 Aug 2023 12:53:19 -0700 Message-ID: <20230816195447.3703954-8-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org If zoned writes (REQ_OP_WRITE) for a sequential write required zone have a starting LBA that differs from the write pointer, e.g. because zoned writes have been reordered, then the storage device will respond with an UNALIGNED WRITE COMMAND error. 
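A short worked example (illustrative only): suppose three writes W1, W2 and W3 target consecutive LBAs in the same sequential write required zone and the write pointer points at the LBA of W1. If the device receives them in the order W3, W2, W1, it fails W3 and W2 with ILLEGAL REQUEST / ASC 0x21 / ASCQ 0x04 (UNALIGNED WRITE COMMAND) because their starting LBAs are ahead of the write pointer, while W1 succeeds. Resubmitting the failed writes in LBA order then lets W2 and W3 complete; in the worst case the last write is retried (number of outstanding writes for the zone) - 1 times, which is why the retry limit is raised as described below.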
Send commands that failed with an unaligned write error to the SCSI error handler if zone write locking is disabled. The SCSI error handler will sort SCSI commands per LBA before resubmitting these. If zone write locking is disabled, increase the number of retries for write commands sent to a sequential zone to the maximum number of outstanding commands because in the worst case the number of times reordered zoned writes have to be retried is (number of outstanding writes per sequential zone) - 1. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 16 ++++++++++++++++ drivers/scsi/scsi_lib.c | 1 + drivers/scsi/sd.c | 6 ++++++ include/scsi/scsi.h | 1 + 4 files changed, 24 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index 4393a7fd8a07..69510e99ccfd 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -699,6 +699,22 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd) fallthrough; case ILLEGAL_REQUEST: + /* + * Unaligned write command. This may indicate that zoned writes + * have been received by the device in the wrong order. If zone + * write locking is disabled, retry after all pending commands + * have completed. + */ + if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 && + !req->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(req)) { + SCSI_LOG_ERROR_RECOVERY(3, + sdev_printk(KERN_INFO, scmd->device, + "Retrying unaligned write at LBA %#llx.\n", + scsi_get_lba(scmd))); + return NEEDS_DELAYED_RETRY; + } + if (sshdr.asc == 0x20 || /* Invalid command operation code */ sshdr.asc == 0x21 || /* Logical block address out of range */ sshdr.asc == 0x22 || /* Invalid function */ diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 59176946ab56..69da8aee13df 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq) case ADD_TO_MLQUEUE: scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY); break; + case NEEDS_DELAYED_RETRY: default: scsi_eh_scmd_add(cmd); break; diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 8a4b0874e7fe..05baf5d1c24c 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1238,6 +1238,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = sdp->sector_size; cmd->underflow = nr_blocks << 9; cmd->allowed = sdkp->max_retries; + /* + * Increase the number of allowed retries for zoned writes if zone + * write locking is disabled. + */ + if (!rq->q->limits.use_zone_write_lock && blk_rq_is_seq_zoned_write(rq)) + cmd->allowed += rq->q->nr_requests; cmd->sdb.length = nr_blocks * sdp->sector_size; SCSI_LOG_HLQUEUE(1, diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h index ec093594ba53..6600db046227 100644 --- a/include/scsi/scsi.h +++ b/include/scsi/scsi.h @@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status) * Internal return values. 
*/ enum scsi_disposition { + NEEDS_DELAYED_RETRY = 0x2000, NEEDS_RETRY = 0x2001, SUCCESS = 0x2002, FAILED = 0x2003, From patchwork Wed Aug 16 19:53:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714546 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6534FC27C41 for ; Wed, 16 Aug 2023 19:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345943AbjHPTz0 (ORCPT ); Wed, 16 Aug 2023 15:55:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39106 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345942AbjHPTzR (ORCPT ); Wed, 16 Aug 2023 15:55:17 -0400 Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com [209.85.210.171]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06921E56; Wed, 16 Aug 2023 12:55:17 -0700 (PDT) Received: by mail-pf1-f171.google.com with SMTP id d2e1a72fcca58-686be3cbea0so152869b3a.0; Wed, 16 Aug 2023 12:55:17 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215716; x=1692820516; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OG1Q4E4gS04WVynI05SnxfmTGAZJINVqD2qG/eTVmwQ=; b=jgfmVdR2IaI5d25E3eweM2OAPv3/QBXjIQkOJxZx6dzUfeYITDiHXap1R4v8ic8t7/ OxPbr5e1OeiIwvBsCkuJzQcGEoEppxfMJ8Fd1I1sUEqrSaZSuY3D3h/lljCpzVE3yznC EJuUZA2ZEkhYdY0JlFVPy1kd6Ie9+Y8ILjaiSpRpOSkqa+ECqT7WBuLo1oo4tGY1hpIe XU8ys0HpgFF+E79ZAb5qA/nLOxiyOO7u27VaGQQE/73w12JB7gOa7jSWB/8FOIup3aw8 t/uXKsSCIWC86icGJl2O3L9ads97TDmd4qYSdrRX14uUnIFxIaKsdViIuDrCdJdaxuiI jA/Q== X-Gm-Message-State: AOJu0Yz8GqvdFaPTqPdDDDLCfmMAHeoRYIZmWfrHjyE0CvRRnmqGqAE2 UOyulTVIWOlGw/DqoqpKlHM= X-Google-Smtp-Source: AGHT+IF1/pgihdH+GKUoHEgKc6Pfhp21iQTd+gezAbn8GmkAicJjwRZP0fR8cnW3KfjZm9vhWCOLgg== X-Received: by 2002:a05:6a00:148f:b0:666:b22d:c6e0 with SMTP id v15-20020a056a00148f00b00666b22dc6e0mr679847pfu.11.1692215716402; Wed, 16 Aug 2023 12:55:16 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:16 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 08/17] scsi: sd_zbc: Only require an I/O scheduler if needed Date: Wed, 16 Aug 2023 12:53:20 -0700 Message-ID: <20230816195447.3703954-9-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org An I/O scheduler that serializes zoned writes is only needed if the SCSI LLD does not preserve the write order. Hence only set ELEVATOR_F_ZBD_SEQ_WRITE if the LLD does not preserve the write order. Cc: Martin K. 
Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Damien Le Moal --- drivers/scsi/sd_zbc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index a25215507668..718b31bed878 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -955,7 +955,9 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE]) /* The drive satisfies the kernel restrictions: set it up */ blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); - blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); + if (!q->limits.driver_preserves_write_order) + blk_queue_required_elevator_features(q, + ELEVATOR_F_ZBD_SEQ_WRITE); if (sdkp->zones_max_open == U32_MAX) disk_set_max_open_zones(disk, 0); else From patchwork Wed Aug 16 19:53:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94F89C27C43 for ; Wed, 16 Aug 2023 19:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345946AbjHPTz0 (ORCPT ); Wed, 16 Aug 2023 15:55:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39118 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345944AbjHPTzT (ORCPT ); Wed, 16 Aug 2023 15:55:19 -0400 Received: from mail-pg1-f177.google.com (mail-pg1-f177.google.com [209.85.215.177]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F597E56; Wed, 16 Aug 2023 12:55:18 -0700 (PDT) Received: by mail-pg1-f177.google.com with SMTP id 41be03b00d2f7-55b0e7efb1cso4096918a12.1; Wed, 16 Aug 2023 12:55:18 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215718; x=1692820518; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=f1sJsHjb25Up82wy0+rLFEE09ikEWVEcaOgckYbsEWo=; b=NYoKrxeNmKAe+LRndYf49EUfUJfUCj+Nhx9pHinsctwRB7jfe7z/nrGETOS4sFGfg0 EvpMiKCDlVi+LDXH3glwpWtqVVH9xwqMZfq69e5a2ooU854WPFfp6qJgNib+IWQxScA+ myAvMt8BaQCTVCx25ZRT5XdBmp3gpMGzR887PgDlAuOkzlbQZKnLcs4x/dWYfYIC5Gee MoiBtaHa5LTX9A6mtm8PPcioQrpTQk5sJYZAZrahQVJEIZRA92hFzRgfvynIg/t8rGqL QYlNBgzkThvAieAhA9xaqczGQpGT4aTWms4gtrAOL89krp1qqIYLpGvJNwb5wCkuHfaG T1zQ== X-Gm-Message-State: AOJu0YwEc1AS8xy8Qj9LeR9W2uvLeikHoZJ+g15x9ozq2QyaoZJaoIWY NuSEp301JPaQATwgX2n/ZLU= X-Google-Smtp-Source: AGHT+IF3d32FeeP5+cAcJ4ILETznyQKerW76vwvgMk9VMZ64yPLkvLT6Be4pXoHC7jcPaBcq1Pjdxg== X-Received: by 2002:a05:6a00:1389:b0:688:7ccb:5ad1 with SMTP id t9-20020a056a00138900b006887ccb5ad1mr3236591pfg.1.1692215717707; Wed, 16 Aug 2023 12:55:17 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:17 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v9 09/17] scsi: scsi_debug: Support disabling zone write locking Date: Wed, 16 Aug 2023 12:53:21 -0700 Message-ID: <20230816195447.3703954-10-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Make it easier to test not using zone write locking by supporting disabling zone write locking in the scsi_debug driver. Cc: Martin K. Petersen Cc: Douglas Gilbert Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 9c0af50501f9..c44c523bde2c 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -832,6 +832,7 @@ static int dix_reads; static int dif_errors; /* ZBC global data */ +static bool sdeb_no_zwrl; static bool sdeb_zbc_in_use; /* true for host-aware and host-managed disks */ static int sdeb_zbc_zone_cap_mb; static int sdeb_zbc_zone_size_mb; @@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev) static int scsi_debug_slave_alloc(struct scsi_device *sdp) { + struct request_queue *q = sdp->request_queue; + if (sdebug_verbose) pr_info("slave_alloc <%u %u %u %llu>\n", sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); + if (sdeb_no_zwrl) + q->limits.driver_preserves_write_order = true; return 0; } @@ -5738,6 +5743,7 @@ module_param_named(ndelay, sdebug_ndelay, int, S_IRUGO | S_IWUSR); module_param_named(no_lun_0, sdebug_no_lun_0, int, S_IRUGO | S_IWUSR); module_param_named(no_rwlock, sdebug_no_rwlock, bool, S_IRUGO | S_IWUSR); module_param_named(no_uld, sdebug_no_uld, int, S_IRUGO); +module_param_named(no_zone_write_lock, sdeb_no_zwrl, bool, S_IRUGO); module_param_named(num_parts, sdebug_num_parts, int, S_IRUGO); module_param_named(num_tgts, sdebug_num_tgts, int, S_IRUGO | S_IWUSR); module_param_named(opt_blks, sdebug_opt_blks, int, S_IRUGO); @@ -5812,6 +5818,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)"); MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)"); MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)"); MODULE_PARM_DESC(no_uld, "stop ULD (e.g. 
sd driver) attaching (def=0))"); +MODULE_PARM_DESC(no_zone_write_lock, + "Disable serialization of zoned writes (def=0)"); MODULE_PARM_DESC(num_parts, "number of partitions(def=0)"); MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)"); MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)"); From patchwork Wed Aug 16 19:53:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7364C19F4F for ; Wed, 16 Aug 2023 19:56:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345964AbjHPTzx (ORCPT ); Wed, 16 Aug 2023 15:55:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57934 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345902AbjHPTzV (ORCPT ); Wed, 16 Aug 2023 15:55:21 -0400 Received: from mail-pf1-f173.google.com (mail-pf1-f173.google.com [209.85.210.173]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC184E55; Wed, 16 Aug 2023 12:55:19 -0700 (PDT) Received: by mail-pf1-f173.google.com with SMTP id d2e1a72fcca58-686f94328a4so158093b3a.0; Wed, 16 Aug 2023 12:55:19 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215719; x=1692820519; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=PniZFVm0KVVX2uS5WNDISfoYsmok7mUzRbqporfC4VQ=; b=J3sMhDWVqq7fmdpFcWKDTs7i90PubmupA1qINMbWrzNv6EZju5NY8Vg2HEsqFmpceF G+r7YWTVxyGs/BOvxrNQYAEEB+AHOvKpwXjvj2RSIypEMM+3sjM7s8zKebbpWOnBoNhx 5Eb+Kkr0erQ360YuxCI7x4kQfPfe0jGHTnjsBMIAjodGxJleZg5nCv29Zdb4UDGjxgAx l6fcMkXipJhCHFf43wUrKEnkUD8kUpd5rzwoBeOZxxmvj1jbrhjnET4QQmhkPO9ToRgp 5q7jUhWzcwWg9hke/jEEPcmJYJgkL1RaGZbXdKqHIaHms1OQ22oAw+BqYxdnDz5O+a3B P0EQ== X-Gm-Message-State: AOJu0YzuCLEYpN8DjwtWfTghmV4KIudEZscDbT7Rhp1pUhYyEn0e9+Xc zjyvqTw7xGc0cVPqSwrS9sY= X-Google-Smtp-Source: AGHT+IHjyb4MplqOB5ZI7uN2J6oIHoppcFqeMU9iVyvBkiXadvXwekCM0l2rnmBosM03Z1rMtVd9NA== X-Received: by 2002:a05:6a00:847:b0:67a:72d5:3365 with SMTP id q7-20020a056a00084700b0067a72d53365mr825477pfk.6.1692215719003; Wed, 16 Aug 2023 12:55:19 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:18 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v9 10/17] scsi: scsi_debug: Support injecting unaligned write errors Date: Wed, 16 Aug 2023 12:53:22 -0700 Message-ID: <20230816195447.3703954-11-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Allow user space software, e.g. 
a blktests test, to inject unaligned write errors. Acked-by: Douglas Gilbert Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Damien Le Moal --- drivers/scsi/scsi_debug.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index c44c523bde2c..c92bd6d00249 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520"; #define SDEBUG_OPT_NO_CDB_NOISE 0x4000 #define SDEBUG_OPT_HOST_BUSY 0x8000 #define SDEBUG_OPT_CMD_ABORT 0x10000 +#define SDEBUG_OPT_UNALIGNED_WRITE 0x20000 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \ SDEBUG_OPT_RESET_NOISE) #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \ @@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520"; SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \ SDEBUG_OPT_SHORT_TRANSFER | \ SDEBUG_OPT_HOST_BUSY | \ - SDEBUG_OPT_CMD_ABORT) + SDEBUG_OPT_CMD_ABORT | \ + SDEBUG_OPT_UNALIGNED_WRITE) #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \ SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR) @@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) struct sdeb_store_info *sip = devip2sip(devip, true); u8 *cmd = scp->cmnd; + if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE && + atomic_read(&sdeb_inject_pending))) { + atomic_set(&sdeb_inject_pending, 0); + mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE, + UNALIGNED_WRITE_ASCQ); + return check_condition_result; + } + switch (cmd[0]) { case WRITE_16: ei_lba = 0; From patchwork Wed Aug 16 19:53:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714544 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 146DAC41513 for ; Wed, 16 Aug 2023 19:56:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345947AbjHPTzz (ORCPT ); Wed, 16 Aug 2023 15:55:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345948AbjHPTz1 (ORCPT ); Wed, 16 Aug 2023 15:55:27 -0400 Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com [209.85.210.171]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF7C8198E; Wed, 16 Aug 2023 12:55:26 -0700 (PDT) Received: by mail-pf1-f171.google.com with SMTP id d2e1a72fcca58-6889078ee66so799240b3a.0; Wed, 16 Aug 2023 12:55:26 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215726; x=1692820526; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FVQD0WbqqvycXZAf7qyOpSGFUndWIL1qRVpGPKP4Vtc=; b=gXMQyUreHONEHAywrSPgbtry6b+28MGXdEOAIi01ZoJP8ilDD7JdcZ81ul04hb0lWK DU+gWtbf6TypP50N23XIo7xuOR5Zil0riI8zM19xQv63cbny8FnmDbPNw4/K6+WKsm+i 1MN4KdLmbhlh2dbcbhe69M89VhuI5V3JU+aSij5saM/s4lCpYprkZg9MAAuDrFrtcY0I /r2CJ7xYj84ykrNpBE5nsAwwNaMjMhiWfj7gdEqbWIowvChpEV034YUzNb4CgPNWF8Uf TnTlw5kyuOpDY12GtinOq/vnb4DTalf7NNkt9TlQAQ5Li6QEzbaniLNsOS/vg6vqfo8D Ifzg== X-Gm-Message-State: 
AOJu0YyF9A/jnN1DH9I8UkylimVSDKGYnJeyrIy595CeDTWvvXzhGiTW ZJqEtZWjGQBcymKubscYSew= X-Google-Smtp-Source: AGHT+IHDh+zVS1dLun5267dYF1CvXgHrofHBqKfTvTdBVcoTitcd3YtENXj2/CGcIBNF2olanM8YBA== X-Received: by 2002:a05:6a00:240a:b0:668:6eed:7c0f with SMTP id z10-20020a056a00240a00b006686eed7c0fmr3180512pfh.12.1692215726177; Wed, 16 Aug 2023 12:55:26 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:25 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "James E.J. Bottomley" , Bean Huo , Keoseong Park , Krzysztof Kozlowski , Avri Altman Subject: [PATCH v9 11/17] scsi: ufs: hisi: Rework the code that disables auto-hibernation Date: Wed, 16 Aug 2023 12:53:23 -0700 Message-ID: <20230816195447.3703954-12-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org The host driver link startup callback is called indirectly by ufshcd_probe_hba(). That function applies the auto-hibernation settings by writing hba->ahit into the auto-hibernation control register. Simplify the code for disabling auto-hibernation by setting hba->ahit instead of writing into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. 
Petersen Signed-off-by: Bart Van Assche --- drivers/ufs/host/ufs-hisi.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c index 5b3060cd0ab8..f2ec687121bb 100644 --- a/drivers/ufs/host/ufs-hisi.c +++ b/drivers/ufs/host/ufs-hisi.c @@ -142,7 +142,6 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) struct ufs_hisi_host *host = ufshcd_get_variant(hba); int err; uint32_t value; - uint32_t reg; /* Unipro VS_mphy_disable */ ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0xD0C1, 0x0), 0x1); @@ -232,9 +231,7 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) ufshcd_writel(hba, UFS_HCLKDIV_NORMAL_VALUE, UFS_REG_HCLKDIV); /* disable auto H8 */ - reg = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER); - reg = reg & (~UFS_AHIT_AH8ITV_MASK); - ufshcd_writel(hba, reg, REG_AUTO_HIBERNATE_IDLE_TIMER); + hba->ahit = 0; /* Unipro PA_Local_TX_LCC_Enable */ ufshcd_disable_host_tx_lcc(hba); From patchwork Wed Aug 16 19:53:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714296 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 413FFC25B5F for ; Wed, 16 Aug 2023 19:56:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345954AbjHPTz5 (ORCPT ); Wed, 16 Aug 2023 15:55:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58008 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345949AbjHPTz3 (ORCPT ); Wed, 16 Aug 2023 15:55:29 -0400 Received: from mail-pg1-f178.google.com (mail-pg1-f178.google.com [209.85.215.178]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2B14CE55; Wed, 16 Aug 2023 12:55:28 -0700 (PDT) Received: by mail-pg1-f178.google.com with SMTP id 41be03b00d2f7-565f24a24c4so993311a12.1; Wed, 16 Aug 2023 12:55:28 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215727; x=1692820527; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=bxmh1ZZBBxGsevqYm7Hoejol5PAGBkpBUksXBvv8y1U=; b=bXFV6S8Gs/xmcbdJAo5qrT48RuA4j9rlRim8+yxBG6RSdHuUiUhgnnHq2Z4whLbd4k F4b3Yc7K7NkP5NtOz5EGq5Ba5/ZvccU3bulCKJeYhsCie2gk1t9pH/oHNMyHWuCeAPTe VaBKcHETmW0DZQmB9oUxR4C384I9NO55eAdpBI9QIcepHWFV6/kcf6D6ls4dpXRi1Bkr 4iITYiB41z0J4OqKq/9+EaRw/MOvJCStpt536yG8c5HzlzepOK7+H/yIJVyUqD9aOpBj csouGtDvwT6oU9mWV+IuH7vEEIJ/YUL/FEFdnOq+gKuT+L66LQ2tu2uwEJi1rVnjoyq0 S50A== X-Gm-Message-State: AOJu0YzXYMBG9Nt7F/E6/qhgPJOwldoBE0irujgVyKDRdItfm4bNyNDH YxV+5GCHQt2VmWavZfBbB9Y= X-Google-Smtp-Source: AGHT+IE5cVldksZVSEi9QTyWuADmKmyVKqjdO82EHUvRdCcSQ5snc9qjey3+DzRCHkh/wPWJZBSRLg== X-Received: by 2002:a05:6a21:7888:b0:12f:382d:2a37 with SMTP id bf8-20020a056a21788800b0012f382d2a37mr4212498pzc.15.1692215727576; Wed, 16 Aug 2023 12:55:27 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:27 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, 
linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Stanley Chu , "James E.J. Bottomley" , Matthias Brugger Subject: [PATCH v9 12/17] scsi: ufs: mediatek: Rework the code for disabling auto-hibernation Date: Wed, 16 Aug 2023 12:53:24 -0700 Message-ID: <20230816195447.3703954-13-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Call ufshcd_auto_hibern8_update() instead of writing directly into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. Petersen Signed-off-by: Bart Van Assche --- drivers/ufs/host/ufs-mediatek.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c index e68b05976f9e..a3cf30e603ca 100644 --- a/drivers/ufs/host/ufs-mediatek.c +++ b/drivers/ufs/host/ufs-mediatek.c @@ -1252,7 +1252,7 @@ static void ufs_mtk_auto_hibern8_disable(struct ufs_hba *hba) int ret; /* disable auto-hibern8 */ - ufshcd_writel(hba, 0, REG_AUTO_HIBERNATE_IDLE_TIMER); + ufshcd_auto_hibern8_update(hba, 0); /* wait host return to idle state when auto-hibern8 off */ ufs_mtk_wait_idle_state(hba, 5); From patchwork Wed Aug 16 19:53:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3566C25B75 for ; Wed, 16 Aug 2023 19:56:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345958AbjHPTz6 (ORCPT ); Wed, 16 Aug 2023 15:55:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32978 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345955AbjHPTzj (ORCPT ); Wed, 16 Aug 2023 15:55:39 -0400 Received: from mail-pf1-f176.google.com (mail-pf1-f176.google.com [209.85.210.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16381E55; Wed, 16 Aug 2023 12:55:39 -0700 (PDT) Received: by mail-pf1-f176.google.com with SMTP id d2e1a72fcca58-6889288a31fso149997b3a.1; Wed, 16 Aug 2023 12:55:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215738; x=1692820538; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=02q7BnxcFpL3p2sPDrxUxFMijGfj0ettab/jJOKKOXQ=; b=gZaKkaR1SwoEbryrUtxKtNEH7dxHrCvWEIoDp9xG8JN6EqattCLYmELiSUaIaKLeoz fQZH+8gJCcIl2cFILhTArh4FXfjG1v0zSA12EWYFr1YOCFjd4Zt2Of7AdtZr9RpgG5m2 dOisGAmu4wcSt2vk1HdaeTu6jU288SNwSWJj8coC1dz9YzSHQdfX79qUWZjPjtxzkAX/ N/P/v7KPhEnAc9AVYTeYeLe+s8i7pv6mpEmibPLDdIlqBXwr7dxNZf16FQ2Eu/noixxC t4mzT32eK6kxUh7Cacj54sCddkTFbvj/OwfbWssTpPXnvkJB/p0oebxVQzyhlzoyLy5o Ziqg== X-Gm-Message-State: AOJu0Yz30om3jhksG+S+1QytLU/FB+7wATzaigziWMrQWOxOyD5YLK11 iEyGn1GdqQOttFaKXe0mbR7fOkg06Sw= X-Google-Smtp-Source: AGHT+IFcbLgVF3ok3DWOjHY4ZizSbxx0CwP13mvUhonSS/IK4bW5HMOmYtv13P+HWANzm7csnjanjA== X-Received: by 2002:a05:6a00:a14:b0:67f:830f:b809 with SMTP id 
p20-20020a056a000a1400b0067f830fb809mr750958pfh.3.1692215738470; Wed, 16 Aug 2023 12:55:38 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:38 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "James E.J. Bottomley" , Orson Zhai , Baolin Wang , Chunyan Zhang , Zhe Wang , Adrian Hunter Subject: [PATCH v9 13/17] scsi: ufs: sprd: Rework the code for disabling auto-hibernation Date: Wed, 16 Aug 2023 12:53:25 -0700 Message-ID: <20230816195447.3703954-14-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Call ufshcd_auto_hibern8_update() instead of writing directly into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. Petersen Signed-off-by: Bart Van Assche --- drivers/ufs/host/ufs-sprd.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/drivers/ufs/host/ufs-sprd.c b/drivers/ufs/host/ufs-sprd.c index 2bad75dd6d58..a8e631bb695b 100644 --- a/drivers/ufs/host/ufs-sprd.c +++ b/drivers/ufs/host/ufs-sprd.c @@ -180,15 +180,8 @@ static int sprd_ufs_pwr_change_notify(struct ufs_hba *hba, static int ufs_sprd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op, enum ufs_notify_change_status status) { - unsigned long flags; - - if (status == PRE_CHANGE) { - if (ufshcd_is_auto_hibern8_supported(hba)) { - spin_lock_irqsave(hba->host->host_lock, flags); - ufshcd_writel(hba, 0, REG_AUTO_HIBERNATE_IDLE_TIMER); - spin_unlock_irqrestore(hba->host->host_lock, flags); - } - } + if (status == PRE_CHANGE) + ufshcd_auto_hibern8_update(hba, 0); return 0; } From patchwork Wed Aug 16 19:53:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714295 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25479C19F4F for ; Wed, 16 Aug 2023 19:56:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345953AbjHPT40 (ORCPT ); Wed, 16 Aug 2023 15:56:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46766 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239012AbjHPTzx (ORCPT ); Wed, 16 Aug 2023 15:55:53 -0400 Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEDE5E55; Wed, 16 Aug 2023 12:55:52 -0700 (PDT) Received: by mail-pg1-f175.google.com with SMTP id 41be03b00d2f7-5657add1073so155813a12.0; Wed, 16 Aug 2023 12:55:52 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215752; x=1692820552; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CdLWDLWt8NwQDtu8yUMOEnz/bTnZV/4Gh7yYHqpvQl4=; b=MB2v+YwswhXbAuaEHJHFuAsyFiU+quPj7d7gBOzq7eGY0j9Z+8rpGYzi6Ve195mZsK M7IDbB2w803b1pe5zjuziZe/5xq6c/wzg1wmvPcd+dX/qKY43bcn7wqxMvK2gx1rj2NM PrJxMTQD/v6c7rH60wUI3Xs6n0fMXjuPXMp5F+tBhPeGSBe2FnTnaqnxgc/gBbGI+Uc4 rPiwuJtRUtfrcfOD03KXfeGJOpGSGxys6emfyDssu4XsRDcGeOqttbILvU5e8T5oZczT ZTScpVIRlyaZRXcaL9wUqdcjqdvacFRuLz6oA6eP3EMJVeMtMNti+FGEYgZFv+Pq3M8U b89Q== X-Gm-Message-State: AOJu0YxW6V8GkEIaaIrmhSBBpQuediuHSMQur3h+N/BNlpD9Sn+RvHpQ EauwRNSHm1abXA1mLtfMOzM= X-Google-Smtp-Source: AGHT+IF4fyslhORuHPxwzF3rbco8HT/KcV5rTTkvRqSxQvQhTgZBvUVDA7WUSCc0fW4CoBfzfCzF6Q== X-Received: by 2002:a05:6a20:1587:b0:125:928d:6745 with SMTP id h7-20020a056a20158700b00125928d6745mr841551pzj.15.1692215752184; Wed, 16 Aug 2023 12:55:52 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:51 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , "James E.J. Bottomley" , Stanley Chu , Can Guo , Bean Huo , Asutosh Das , Arthur Simchaev , Manivannan Sadhasivam , Po-Wen Kao , Eric Biggers , Keoseong Park , Daniil Lunev Subject: [PATCH v9 14/17] scsi: ufs: Rename ufshcd_auto_hibern8_enable() and make it static Date: Wed, 16 Aug 2023 12:53:26 -0700 Message-ID: <20230816195447.3703954-15-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Rename ufshcd_auto_hibern8_enable() into ufshcd_configure_auto_hibern8() since this function can enable or disable auto-hibernation. Since ufshcd_auto_hibern8_enable() is only used inside the UFSHCI driver core, declare it static. Additionally, move the definition of this function to just before its first caller. Suggested-by: Bao D. 
Nguyen Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 24 +++++++++++------------- include/ufs/ufshcd.h | 1 - 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 129446775796..f1bba459b46f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,14 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) +{ + if (!ufshcd_is_auto_hibern8_supported(hba)) + return; + + ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); +} + void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; @@ -4356,21 +4364,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) -{ - if (!ufshcd_is_auto_hibern8_supported(hba)) - return; - - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); -} - /** * ufshcd_init_pwr_info - setting the POR (power on reset) * values in hba power info @@ -8815,8 +8815,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) if (hba->ee_usr_mask) ufshcd_write_ee_control(hba); - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_toggle_state(hba, HPB_RESET, HPB_PRESENT); out: @@ -9809,8 +9808,7 @@ static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); } - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_resume(hba); goto out; diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 6dc11fa0ebb1..040d66d99912 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1363,7 +1363,6 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba); void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); From patchwork Wed Aug 16 19:53:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 672E3C25B5E for ; Wed, 16 Aug 2023 19:56:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345960AbjHPT41 (ORCPT ); Wed, 16 Aug 2023 15:56:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51222 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345989AbjHPT4M (ORCPT ); Wed, 16 Aug 2023 15:56:12 -0400 Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com [209.85.210.170]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18DC82D50; Wed, 16 Aug 2023 12:56:01 -0700 (PDT) Received: by 
mail-pf1-f170.google.com with SMTP id d2e1a72fcca58-6889656eb58so486538b3a.1; Wed, 16 Aug 2023 12:56:01 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215760; x=1692820560; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JezTmlByBxxwVL/c+JEFvGZtx2Wv2zOwYtr9WnFCI4Q=; b=FAVnHv+TY52CRAGTAq87i+E0sl+IDlHebTvM60FbImy/zHUD027io9iV3Qw4U6JlmB NPrD8RmX4uF4LPQtTJW7ZxtGdfeZjGbjnl/SfnKbwqH15Q/njVyXQjavKQ7wZoir1liV kbal2JsFH0oDoZWafr0WBGGCSPDPiCIpdyY5p88hawwecLvnY1quZc9Utz+rfs1x77Bx Yu3ZjrSaQUzydR/QMVzZg8zbnkaImPg89s3jyHtS+gksz6hmK32PoFYhqZJ08iNZpbqT TwjiiHWjEjYlrdQRzUbcJ9Q6SayWAaPR8rY3JDZChyI+lLHFgX7XdAogEvshU3iIfRGR zlog== X-Gm-Message-State: AOJu0YwiFlJ0t9SpIlBj/qlShWNkKOs9I5a197fPfPsaz58zMN74yZ8B THOHf8LyBJO89CyaQztC1Vs= X-Google-Smtp-Source: AGHT+IG7ypN2mncc9Kbis3KBEAngXyHI4rQgea1EUr4Nb+qS+zhJv1LRKuwAN8QFNY5mB/GzALOH5Q== X-Received: by 2002:a05:6a00:1792:b0:682:f529:6d69 with SMTP id s18-20020a056a00179200b00682f5296d69mr3499666pfg.7.1692215760229; Wed, 16 Aug 2023 12:56:00 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.55.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:55:59 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Avri Altman , Damien Le Moal , Ming Lei , "James E.J. Bottomley" , Stanley Chu , Can Guo , Bean Huo , Asutosh Das , "Bao D. Nguyen" , Arthur Simchaev Subject: [PATCH v9 15/17] scsi: ufs: Simplify ufshcd_auto_hibern8_update() Date: Wed, 16 Aug 2023 12:53:27 -0700 Message-ID: <20230816195447.3703954-16-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Calls to ufshcd_auto_hibern8_update() are already serialized: this function is either called if user space software is not running (preparing to suspend) or from a single sysfs store callback function. Kernfs serializes sysfs .store() callbacks. No functionality is changed. This patch makes the next patch in this series easier to read. Cc: Martin K. 
Petersen Cc: Avri Altman Cc: Christoph Hellwig Cc: Damien Le Moal Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index f1bba459b46f..39000c018d8b 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4347,21 +4347,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { - unsigned long flags; - bool update = false; + const u32 cur_ahit = READ_ONCE(hba->ahit); - if (!ufshcd_is_auto_hibern8_supported(hba)) + if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return; - spin_lock_irqsave(hba->host->host_lock, flags); - if (hba->ahit != ahit) { - hba->ahit = ahit; - update = true; - } - spin_unlock_irqrestore(hba->host->host_lock, flags); - - if (update && - !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { + WRITE_ONCE(hba->ahit, ahit); + if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); ufshcd_configure_auto_hibern8(hba); From patchwork Wed Aug 16 19:53:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 714294 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C59EC25B5E for ; Wed, 16 Aug 2023 19:57:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345955AbjHPT46 (ORCPT ); Wed, 16 Aug 2023 15:56:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346042AbjHPT4z (ORCPT ); Wed, 16 Aug 2023 15:56:55 -0400 Received: from mail-pf1-f176.google.com (mail-pf1-f176.google.com [209.85.210.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C618E2D5F; Wed, 16 Aug 2023 12:56:27 -0700 (PDT) Received: by mail-pf1-f176.google.com with SMTP id d2e1a72fcca58-68890d565b5so803310b3a.2; Wed, 16 Aug 2023 12:56:27 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692215787; x=1692820587; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BTSMOfTx0pMCvUPspES6iLE7qSqH956qQYDrRNFNXN0=; b=WUf9F7iCQZNGUqmxClTqpb4m9FSI4pvuiAftttQH+H2qjTV1h2PrMjJphE7PQWtQ+u DDO20b/B+tzexWrYDvTm2XNOAdsgWHjfU6al7x/dQzOxLriMvUlLC53z7cXE8YYFqpwZ UZ7OWYKoeIB08lLei+ZE6Z/y/3xBjx1l5++LG/uuI5WqKA1jWPxKTCRqNc7pqPmKtpqA JgsnZzgYyaQAvSvtZQdsCUSoSTcKIBYEEFwG7gr149vWpaSEN3Gx8DjiUVfakBpsGD4P WYrJX6PtE3EvBfRu5wZeQv/Tz+/SPt69rBsO+eZqbDJ+/znQMtiXxgI0TLynxX2WfGk0 3Oaw== X-Gm-Message-State: AOJu0YyyKW4CeTZaoPBF42rfq3MmOYIF803ySU7QrhEVUWgAgkVOVxZy QDNDoJxGJRnFw+AMNybXk2k= X-Google-Smtp-Source: AGHT+IEDc8pSVKHfw7i1ypJBfLFz4tYdsFxrGYr8CsQdd5+U8nKNL/D8ivw6SLkFT0gnT9UG6kAlFQ== X-Received: by 2002:a05:6a00:438a:b0:688:2253:ce07 with SMTP id bt10-20020a056a00438a00b006882253ce07mr2703662pfb.2.1692215787161; Wed, 16 Aug 2023 12:56:27 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:7141:e456:f574:7de0]) by smtp.gmail.com with ESMTPSA id 
r26-20020a62e41a000000b0068890c19c49sm1588508pfh.180.2023.08.16.12.56.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Aug 2023 12:56:26 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , Damien Le Moal , Ming Lei , "James E.J. Bottomley" , Bean Huo , Jinyoung Choi , Lu Hongfei , Daniil Lunev , Peter Wang , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , "Bao D. Nguyen" , Arthur Simchaev , zhanghui , Po-Wen Kao , Eric Biggers , Keoseong Park Subject: [PATCH v9 16/17] scsi: ufs: Forbid auto-hibernation without I/O scheduler Date: Wed, 16 Aug 2023 12:53:28 -0700 Message-ID: <20230816195447.3703954-17-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog In-Reply-To: <20230816195447.3703954-1-bvanassche@acm.org> References: <20230816195447.3703954-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org UFSHCI 3.0 controllers do not preserve the write order if auto-hibernation is enabled. If the write order is not preserved, an I/O scheduler is required to serialize zoned writes. Hence do not allow auto-hibernation to be enabled without I/O scheduler if a zoned logical unit is present and if the controller is operating in legacy mode. This patch has been tested with the following shell script: show_ah8() { echo -n "auto_hibern8: " adb shell "cat /sys/devices/platform/13200000.ufs/auto_hibern8" } set_ah8() { local rc adb shell "echo $1 > /sys/devices/platform/13200000.ufs/auto_hibern8" rc=$? show_ah8 return $rc } set_iosched() { adb shell "echo $1 >/sys/class/block/$zoned_bdev/queue/scheduler && echo -n 'I/O scheduler: ' && cat /sys/class/block/sde/queue/scheduler" } adb root zoned_bdev=$(adb shell grep -lvw 0 /sys/class/block/sd*/queue/chunk_sectors |& sed 's|/sys/class/block/||g;s|/queue/chunk_sectors||g') [ -n "$zoned_bdev" ] show_ah8 set_ah8 0 set_iosched none if set_ah8 150000; then echo "Error: enabled AH8 without I/O scheduler" fi set_iosched mq-deadline set_ah8 150000 Cc: Martin K. 
Petersen Cc: Can Guo Cc: Avri Altman Cc: Christoph Hellwig Cc: Damien Le Moal Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufs-sysfs.c | 2 +- drivers/ufs/core/ufshcd-priv.h | 1 - drivers/ufs/core/ufshcd.c | 60 ++++++++++++++++++++++++++++++++-- include/ufs/ufshcd.h | 2 +- 4 files changed, 60 insertions(+), 5 deletions(-) diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c index 6c72075750dd..a693dea1bd18 100644 --- a/drivers/ufs/core/ufs-sysfs.c +++ b/drivers/ufs/core/ufs-sysfs.c @@ -203,7 +203,7 @@ static ssize_t auto_hibern8_store(struct device *dev, goto out; } - ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); + ret = ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); out: up(&hba->host_sem); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 0f3bd943b58b..a2b74fbc2056 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -60,7 +60,6 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, u32 *attr_val); int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, struct cq_entry *cqe); int ufshcd_mcq_init(struct ufs_hba *hba); diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 39000c018d8b..37d430d20939 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,29 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, + bool preserves_write_order) +{ + struct scsi_device *sdev; + + if (!preserves_write_order) { + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + /* + * This code does not check whether the attached I/O + * scheduler serializes zoned writes + * (ELEVATOR_F_ZBD_SEQ_WRITE) because this cannot be + * checked from outside the block layer core. + */ + if (blk_queue_is_zoned(q) && !q->elevator) + return -EPERM; + } + } + + return 0; +} + static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) { if (!ufshcd_is_auto_hibern8_supported(hba)) @@ -4345,13 +4368,37 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) +/** + * ufshcd_auto_hibern8_update() - Modify the auto-hibernation control register + * @hba: per-adapter instance + * @ahit: New auto-hibernate settings. Includes the scale and the value of the + * auto-hibernation timer. See also the UFSHCI_AHIBERN8_TIMER_MASK and + * UFSHCI_AHIBERN8_SCALE_MASK constants. + * + * Note: enabling auto-hibernation if a zoned logical unit is present without + * attaching the mq-deadline scheduler first to the zoned logical unit may cause + * unaligned write errors for the zoned logical unit. 
+ */ +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { const u32 cur_ahit = READ_ONCE(hba->ahit); + bool prev_state, new_state; + int ret; if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) - return; + return 0; + prev_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, cur_ahit); + new_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, ahit); + + if (!is_mcq_enabled(hba) && !prev_state && new_state) { + /* + * Auto-hibernation will be enabled for legacy UFSHCI mode. + */ + ret = ufshcd_update_preserves_write_order(hba, false); + if (ret) + return ret; + } WRITE_ONCE(hba->ahit, ahit); if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); @@ -4360,6 +4407,15 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + if (!is_mcq_enabled(hba) && prev_state && !new_state) { + /* + * Auto-hibernation has been disabled. + */ + ret = ufshcd_update_preserves_write_order(hba, true); + WARN_ON_ONCE(ret); + } + + return 0; } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 040d66d99912..7ae071a6811c 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1363,7 +1363,7 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); #define SD_ASCII_STD true
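The unaligned write errors mentioned in the ufshcd_auto_hibern8_update() kernel-doc comment above are what the scsi_debug changes in patches 09 and 10 of this series make reproducible without real zoned hardware. A minimal sketch of driving those knobs from user space, e.g. from a blktests-style test, is shown below; only the no_zone_write_lock parameter and the SDEBUG_OPT_UNALIGNED_WRITE flag (0x20000) come from the diffs above, while the zbc= value, the opts=/every_nth= arming of the injection and the device lookup are assumptions based on existing scsi_debug conventions.

# Load scsi_debug with one emulated host-managed zoned LU, with zone write
# locking disabled (patch 09) and one unaligned write error armed (patch 10).
modprobe scsi_debug zbc=managed no_zone_write_lock=1 opts=0x20000 every_nth=1

# Find the emulated zoned block device, using the same chunk_sectors check as
# the shell script in the patch 16/17 commit message.
dev=$(grep -lvw 0 /sys/class/block/sd*/queue/chunk_sectors |
      sed 's|/sys/class/block/||;s|/queue/chunk_sectors||')

# A write submitted to that device should then complete once with the injected
# ILLEGAL REQUEST / LBA OUT OF RANGE (unaligned write) sense data.
dd if=/dev/zero of=/dev/$dev bs=4096 count=1 oflag=direct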