From patchwork Thu Mar 6 14:49:41 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871082
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Niklas Neronin, Mathias Nyman
Subject: [PATCH 02/15] usb: xhci: remove redundant update_ring_for_set_deq_completion() function
Date: Thu, 6 Mar 2025 16:49:41 +0200
Message-ID: <20250306144954.3507700-3-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
From: Niklas Neronin

The function is a remnant from a previous implementation and is now
redundant. There is no longer a need to search for the dequeue pointer, as
both the TRB and segment dequeue pointers are saved within 'queued_deq_seg'
and 'queued_deq_ptr'.

Signed-off-by: Niklas Neronin
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 41 ++----------------------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 965bffce301e..23cf20026359 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -1332,43 +1332,6 @@ void xhci_hc_died(struct xhci_hcd *xhci)
 	usb_hc_died(xhci_to_hcd(xhci));
 }
 
-static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
-		struct xhci_virt_device *dev,
-		struct xhci_ring *ep_ring,
-		unsigned int ep_index)
-{
-	union xhci_trb *dequeue_temp;
-
-	dequeue_temp = ep_ring->dequeue;
-
-	/* If we get two back-to-back stalls, and the first stalled transfer
-	 * ends just before a link TRB, the dequeue pointer will be left on
-	 * the link TRB by the code in the while loop. So we have to update
-	 * the dequeue pointer one segment further, or we'll jump off
-	 * the segment into la-la-land.
-	 */
-	if (trb_is_link(ep_ring->dequeue)) {
-		ep_ring->deq_seg = ep_ring->deq_seg->next;
-		ep_ring->dequeue = ep_ring->deq_seg->trbs;
-	}
-
-	while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {
-		/* We have more usable TRBs */
-		ep_ring->dequeue++;
-		if (trb_is_link(ep_ring->dequeue)) {
-			if (ep_ring->dequeue ==
-					dev->eps[ep_index].queued_deq_ptr)
-				break;
-			ep_ring->deq_seg = ep_ring->deq_seg->next;
-			ep_ring->dequeue = ep_ring->deq_seg->trbs;
-		}
-		if (ep_ring->dequeue == dequeue_temp) {
-			xhci_dbg(xhci, "Unable to find new dequeue pointer\n");
-			break;
-		}
-	}
-}
-
 /*
  * When we get a completion for a Set Transfer Ring Dequeue Pointer command,
  * we need to clear the set deq pending flag in the endpoint ring state, so that
@@ -1473,8 +1436,8 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
 		/* Update the ring's dequeue segment and dequeue pointer
 		 * to reflect the new position.
 		 */
-		update_ring_for_set_deq_completion(xhci, ep->vdev,
-				ep_ring, ep_index);
+		ep_ring->deq_seg = ep->queued_deq_seg;
+		ep_ring->dequeue = ep->queued_deq_ptr;
 	} else {
 		xhci_warn(xhci, "Mismatch between completed Set TR Deq Ptr command & xHCI internal state.\n");
 		xhci_warn(xhci, "ep deq seg = %p, deq ptr = %p\n",
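The simplification above rests on the fact that the new dequeue position is already recorded when the Set TR Dequeue Pointer command is queued, so the completion handler can restore it directly instead of walking the ring. A stand-alone sketch of that save-then-restore pattern, with illustrative names rather than the driver's structures:

/*
 * Toy model only: remember the target dequeue position when the command is
 * queued, then restore it in two assignments on completion. Names are
 * illustrative, not the xhci driver's.
 */
#include <stdio.h>

struct toy_seg { int id; };
struct toy_ring { struct toy_seg *deq_seg; int dequeue; };
struct toy_ep {
	struct toy_seg *queued_deq_seg;	/* saved when the command was queued */
	int queued_deq_ptr;
};

static void set_deq_completion(struct toy_ring *ring, struct toy_ep *ep)
{
	/* the former search loop collapses into two assignments */
	ring->deq_seg = ep->queued_deq_seg;
	ring->dequeue = ep->queued_deq_ptr;
}

int main(void)
{
	struct toy_seg seg = { .id = 2 };
	struct toy_ring ring = { 0 };
	struct toy_ep ep = { .queued_deq_seg = &seg, .queued_deq_ptr = 7 };

	set_deq_completion(&ring, &ep);
	printf("deq_seg=%d dequeue=%d\n", ring.deq_seg->id, ring.dequeue);
	return 0;
}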
From patchwork Thu Mar 6 14:49:43 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871081
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Michal Pecio, Mathias Nyman
Subject: [PATCH 04/15] usb: xhci: Complete 'error mid TD' transfers when handling Missed Service
Date: Thu, 6 Mar 2025 16:49:43 +0200
Message-ID: <20250306144954.3507700-5-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

From: Michal Pecio

Missed Service Error after an error mid TD means that the failed TD has
already been passed by the xHC without acknowledgment of the final TRB, a
known hardware bug. So don't wait any more and give back the TD.

Reproduced on NEC uPD720200 under conditions of ludicrously bad USB link
quality, confirmed to behave as expected using dynamic debug.

Signed-off-by: Michal Pecio
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 6fb48d30ec21..47aaaf4eb92a 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2752,7 +2752,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 		xhci_dbg(xhci,
 			 "Miss service interval error for slot %u ep %u, set skip flag\n",
 			 slot_id, ep_index);
-		return 0;
+		break;
 	case COMP_NO_PING_RESPONSE_ERROR:
 		ep->skip = true;
 		xhci_dbg(xhci,
@@ -2800,6 +2800,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 		xhci_dequeue_td(xhci, td, ep_ring, td->status);
 	}
 
+	/* Missed TDs will be skipped on the next event */
+	if (trb_comp_code == COMP_MISSED_SERVICE_ERROR)
+		return 0;
+
 	if (list_empty(&ep_ring->td_list)) {
 		/*
 		 * Don't print wanings if ring is empty due to a stopped endpoint generating an
From patchwork Thu Mar 6 14:49:45 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871080
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Michal Pecio, Mathias Nyman
Subject: [PATCH 06/15] usb: xhci: Expedite skipping missed isoch TDs on modern HCs
Date: Thu, 6 Mar 2025 16:49:45 +0200
Message-ID: <20250306144954.3507700-7-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

From: Michal Pecio

xHCI spec rev. 1.0 allowed the TRB pointer of Missed Service events to be
NULL. Having no idea which of the queued TDs were missed and which are still
waiting, we can only set a flag to skip the missed TDs later.

But HCs are also allowed to give us a pointer to the last missed TRB, and
this became mandatory in spec rev. 1.1 and later. Use this pointer, if
available, to immediately skip all missed TDs. This reduces latency and the
risk of skipping-related bugs, because we can now leave the skip flag
cleared for future events.

Handle Missed Service Error events as 'error mid TD', if applicable, because
the rev. 1.0 spec explicitly says so in notes to 4.10.3.2, and later revs in
4.10.3.2 and 4.11.2.5.2. Notes to 4.9.1 seem to apply.

Tested on ASM1142 and ASM3142 v1.1 xHCs which provide TRB pointers.
Tested on AMD, Etron, Renesas v1.0 xHCs which provide TRB pointers.
Tested on NEC v0.96 and VIA v1.0 xHCs which send a NULL pointer.

Change inspired by a discussion about realtime USB audio.
Link: https://lore.kernel.org/linux-usb/76e1a191-020d-4a76-97f6-237f9bd0ede0@gmx.net/T/

Signed-off-by: Michal Pecio
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index d34f46b63006..e871dd61a636 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2439,6 +2439,12 @@ static void process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
 		if (ep_trb != td->end_trb)
 			td->error_mid_td = true;
 		break;
+	case COMP_MISSED_SERVICE_ERROR:
+		frame->status = -EXDEV;
+		sum_trbs_for_length = true;
+		if (ep_trb != td->end_trb)
+			td->error_mid_td = true;
+		break;
 	case COMP_INCOMPATIBLE_DEVICE_ERROR:
 	case COMP_STALL_ERROR:
 		frame->status = -EPROTO;
@@ -2749,8 +2755,8 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 		 */
 		ep->skip = true;
 		xhci_dbg(xhci,
-			 "Miss service interval error for slot %u ep %u, set skip flag\n",
-			 slot_id, ep_index);
+			 "Miss service interval error for slot %u ep %u, set skip flag%s\n",
+			 slot_id, ep_index, ep_trb_dma ? ", skip now" : "");
 		break;
 	case COMP_NO_PING_RESPONSE_ERROR:
 		ep->skip = true;
 		xhci_dbg(xhci,
@@ -2799,8 +2805,8 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 		xhci_dequeue_td(xhci, td, ep_ring, td->status);
 	}
 
-	/* Missed TDs will be skipped on the next event */
-	if (trb_comp_code == COMP_MISSED_SERVICE_ERROR)
+	/* If the TRB pointer is NULL, missed TDs will be skipped on the next event */
+	if (trb_comp_code == COMP_MISSED_SERVICE_ERROR && !ep_trb_dma)
 		return 0;
 
 	if (list_empty(&ep_ring->td_list)) {
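The decision this patch adds can be summarized in a stand-alone sketch: if the Missed Service event carries a TRB pointer (mandatory since rev. 1.1), missed TDs can be given back right away; a NULL pointer (allowed by rev. 1.0) forces the driver to fall back to the skip flag and wait for the next event. The names below are illustrative, not the driver's own:

/*
 * Hedged sketch of the NULL-vs-valid TRB pointer decision, in plain C.
 * Returns true when missed TDs can be processed immediately.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_ep { bool skip; };

static bool handle_missed_service(struct toy_ep *ep, unsigned long long ep_trb_dma)
{
	ep->skip = true;	/* note that TDs were missed */
	if (!ep_trb_dma)
		return false;	/* rev 1.0 NULL pointer: defer to the next event */
	return true;		/* pointer known: skip missed TDs now */
}

int main(void)
{
	struct toy_ep ep = { false };

	printf("NULL pointer  -> process now? %d\n", handle_missed_service(&ep, 0));
	printf("valid pointer -> process now? %d\n", handle_missed_service(&ep, 0x1000));
	return 0;
}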
From patchwork Thu Mar 6 14:49:47 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871079
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Niklas Neronin, Mathias Nyman
Subject: [PATCH 08/15] usb: xhci: correct debug message page size calculation
Date: Thu, 6 Mar 2025 16:49:47 +0200
Message-ID: <20250306144954.3507700-9-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

From: Niklas Neronin

The ffs() function returns the index of the first set bit, starting from 1.
If no bits are set, it returns zero. This behavior causes an off-by-one page
size in the debug message, as the page size calculation [1] is zero-based,
while ffs() is one-based.

Fix this by subtracting one from the result of ffs(). Note that since the
variable 'val' is unsigned, subtracting one from zero will result in the
maximum unsigned integer value. Consequently, the condition 'if (val < 16)'
will still function correctly.

[1] Page size: 2^(n+12), where 'n' is the set page size bit.

Fixes: 81720ec5320c ("usb: host: xhci: use ffs() in xhci_mem_init()")
Signed-off-by: Niklas Neronin
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 92703efda1f7..dc5bcd8db4c0 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2391,10 +2391,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	page_size = readl(&xhci->op_regs->page_size);
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"Supported page size register = 0x%x", page_size);
-	i = ffs(page_size);
-	if (i < 16)
+	val = ffs(page_size) - 1;
+	if (val < 16)
 		xhci_dbg_trace(xhci, trace_xhci_dbg_init,
-			"Supported page size of %iK", (1 << (i+12)) / 1024);
+			"Supported page size of %iK", (1 << (val + 12)) / 1024);
 	else
 		xhci_warn(xhci, "WARN: no supported page size\n");
 	/* Use 4K pages, since that's common and the minimum the HC supports */
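The off-by-one is easy to reproduce outside the driver: ffs() is 1-based while the PAGESIZE encoding 2^(n+12) expects a 0-based bit index n, so for the common register value 0x1 (4K pages) the unfixed formula reports 8K. A small stand-alone check, not driver code:

/* Demonstrates the ffs() off-by-one fixed above (POSIX ffs from strings.h). */
#include <stdio.h>
#include <strings.h>

int main(void)
{
	unsigned int page_size = 0x1;		/* bit 0 set: 4K page support */
	unsigned int val = ffs(page_size) - 1;	/* 0-based bit index, as in the fix */

	printf("without -1: %dK\n", (1 << (ffs(page_size) + 12)) / 1024);	/* 8K, wrong */
	printf("with    -1: %dK\n", (1 << (val + 12)) / 1024);			/* 4K, right */
	return 0;
}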
From patchwork Thu Mar 6 14:49:49 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871078
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Niklas Neronin, Mathias Nyman
Subject: [PATCH 10/15] usb: xhci: refactor trb_in_td() to be static
Date: Thu, 6 Mar 2025 16:49:49 +0200
Message-ID: <20250306144954.3507700-11-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

From: Niklas Neronin

Relocate trb_in_td() and mark it static, as it is exclusively used in
xhci-ring.c. This adjustment lays the groundwork for future rework of the
function.

The function's logic remains unchanged; only its access specifier is altered
to static, and a redundant "else" is removed on line 325 (due to
checkpatch.pl complaining).

Signed-off-by: Niklas Neronin
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 122 +++++++++++++++++------------------
 drivers/usb/host/xhci.h     |   2 -
 2 files changed, 61 insertions(+), 63 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 70b896297494..8c7258afb6bf 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -277,6 +277,67 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
 	}
 }
 
+/*
+ * If the suspect DMA address is a TRB in this TD, this function returns that
+ * TRB's segment. Otherwise it returns 0.
+ */
+static struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td,
+				      dma_addr_t suspect_dma, bool debug)
+{
+	dma_addr_t start_dma;
+	dma_addr_t end_seg_dma;
+	dma_addr_t end_trb_dma;
+	struct xhci_segment *cur_seg;
+
+	start_dma = xhci_trb_virt_to_dma(td->start_seg, td->start_trb);
+	cur_seg = td->start_seg;
+
+	do {
+		if (start_dma == 0)
+			return NULL;
+		/* We may get an event for a Link TRB in the middle of a TD */
+		end_seg_dma = xhci_trb_virt_to_dma(cur_seg,
+				&cur_seg->trbs[TRBS_PER_SEGMENT - 1]);
+		/* If the end TRB isn't in this segment, this is set to 0 */
+		end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb);
+
+		if (debug)
+			xhci_warn(xhci,
+				  "Looking for event-dma %016llx trb-start %016llx trb-end %016llx seg-start %016llx seg-end %016llx\n",
+				  (unsigned long long)suspect_dma,
+				  (unsigned long long)start_dma,
+				  (unsigned long long)end_trb_dma,
+				  (unsigned long long)cur_seg->dma,
+				  (unsigned long long)end_seg_dma);
+
+		if (end_trb_dma > 0) {
+			/* The end TRB is in this segment, so suspect should be here */
+			if (start_dma <= end_trb_dma) {
+				if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma)
+					return cur_seg;
+			} else {
+				/* Case for one segment with
+				 * a TD wrapped around to the top
+				 */
+				if ((suspect_dma >= start_dma &&
+				     suspect_dma <= end_seg_dma) ||
+				    (suspect_dma >= cur_seg->dma &&
+				     suspect_dma <= end_trb_dma))
+					return cur_seg;
+			}
+			return NULL;
+		}
+		/* Might still be somewhere in this segment */
+		if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma)
+			return cur_seg;
+
+		cur_seg = cur_seg->next;
+		start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]);
+	} while (cur_seg != td->start_seg);
+
+	return NULL;
+}
+
 /*
  * Return number of free normal TRBs from enqueue to dequeue pointer on ring.
  * Not counting an assumed link TRB at end of each TRBS_PER_SEGMENT sized segment.
@@ -2079,67 +2140,6 @@ static void handle_port_status(struct xhci_hcd *xhci, union xhci_trb *event)
 	spin_lock(&xhci->lock);
 }
 
-/*
- * If the suspect DMA address is a TRB in this TD, this function returns that
- * TRB's segment. Otherwise it returns 0.
- */
-struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td, dma_addr_t suspect_dma,
-			       bool debug)
-{
-	dma_addr_t start_dma;
-	dma_addr_t end_seg_dma;
-	dma_addr_t end_trb_dma;
-	struct xhci_segment *cur_seg;
-
-	start_dma = xhci_trb_virt_to_dma(td->start_seg, td->start_trb);
-	cur_seg = td->start_seg;
-
-	do {
-		if (start_dma == 0)
-			return NULL;
-		/* We may get an event for a Link TRB in the middle of a TD */
-		end_seg_dma = xhci_trb_virt_to_dma(cur_seg,
-				&cur_seg->trbs[TRBS_PER_SEGMENT - 1]);
-		/* If the end TRB isn't in this segment, this is set to 0 */
-		end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb);
-
-		if (debug)
-			xhci_warn(xhci,
-				  "Looking for event-dma %016llx trb-start %016llx trb-end %016llx seg-start %016llx seg-end %016llx\n",
-				  (unsigned long long)suspect_dma,
-				  (unsigned long long)start_dma,
-				  (unsigned long long)end_trb_dma,
-				  (unsigned long long)cur_seg->dma,
-				  (unsigned long long)end_seg_dma);
-
-		if (end_trb_dma > 0) {
-			/* The end TRB is in this segment, so suspect should be here */
-			if (start_dma <= end_trb_dma) {
-				if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma)
-					return cur_seg;
-			} else {
-				/* Case for one segment with
-				 * a TD wrapped around to the top
-				 */
-				if ((suspect_dma >= start_dma &&
-				     suspect_dma <= end_seg_dma) ||
-				    (suspect_dma >= cur_seg->dma &&
-				     suspect_dma <= end_trb_dma))
-					return cur_seg;
-			}
-			return NULL;
-		} else {
-			/* Might still be somewhere in this segment */
-			if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma)
-				return cur_seg;
-		}
-		cur_seg = cur_seg->next;
-		start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]);
-	} while (cur_seg != td->start_seg);
-
-	return NULL;
-}
-
 static void xhci_clear_hub_tt_buffer(struct xhci_hcd *xhci, struct xhci_td *td,
 				     struct xhci_virt_ep *ep)
 {
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 5b8751b86008..cd96e0a8c593 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1884,8 +1884,6 @@ int xhci_set_interrupter_moderation(struct xhci_interrupter *ir,
 
 /* xHCI ring, segment, TRB, and TD functions */
 dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
-struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td,
-			       dma_addr_t suspect_dma, bool debug);
 int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code);
 void xhci_ring_cmd_db(struct xhci_hcd *xhci);
 int xhci_queue_slot_control(struct xhci_hcd *xhci, struct xhci_command *cmd,
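The least obvious part of the relocated function is the wrap-around branch, where a TD's start DMA address is higher than its end address within a single segment. A self-contained model of that containment test, with toy addresses rather than xhci types:

/* Toy model of the wrapped-TD containment check, not driver code. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long dma_t;

/* Is 'suspect' inside a TD covering [start, seg_end] plus [seg_start, end]? */
static bool in_wrapped_td(dma_t suspect, dma_t start, dma_t end,
			  dma_t seg_start, dma_t seg_end)
{
	if (start <= end)	/* normal case: one contiguous range */
		return suspect >= start && suspect <= end;
	/* wrapped case: TD runs to the end of the segment and continues at its top */
	return (suspect >= start && suspect <= seg_end) ||
	       (suspect >= seg_start && suspect <= end);
}

int main(void)
{
	/* segment spans 0x1000..0x13f0, TD wraps from 0x1300 around to 0x1080 */
	printf("%d\n", in_wrapped_td(0x1340, 0x1300, 0x1080, 0x1000, 0x13f0)); /* 1 */
	printf("%d\n", in_wrapped_td(0x1040, 0x1300, 0x1080, 0x1000, 0x13f0)); /* 1 */
	printf("%d\n", in_wrapped_td(0x1200, 0x1300, 0x1080, 0x1000, 0x13f0)); /* 0 */
	return 0;
}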
From patchwork Thu Mar 6 14:49:51 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871077
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Mathias Nyman
Subject: [PATCH 12/15] xhci: Prevent early endpoint restart when handling STALL errors.
Date: Thu, 6 Mar 2025 16:49:51 +0200
Message-ID: <20250306144954.3507700-13-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

Ensure that an endpoint halted due to a device STALL is not restarted before
a Clear_Feature(ENDPOINT_HALT) request is sent to the device.

The host side of the endpoint may otherwise be started early by the
'Set TR Deq' command completion handler, which is called if dequeue is moved
past a cancelled or halted TD.

Prevent this with a new flag set for bulk and interrupt endpoints when a
Stall Error is received. Clear it in hcd->endpoint_reset(), which is called
after Clear_Feature(ENDPOINT_HALT) is sent.

Also add a debug message if a class driver queues a new URB after the STALL.
Note that the class driver might not be aware of the STALL yet when it
submits the URB, as URBs are given back in BH.

Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 7 +++++--
 drivers/usb/host/xhci.c     | 6 ++++++
 drivers/usb/host/xhci.h     | 3 ++-
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index c2e15a27338b..7643ab9ec3b4 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -556,8 +556,8 @@ void xhci_ring_ep_doorbell(struct xhci_hcd *xhci,
 	 * pointer command pending because the device can choose to start any
 	 * stream once the endpoint is on the HW schedule.
 	 */
-	if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) ||
-	    (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT))
+	if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED |
+			EP_CLEARING_TT | EP_STALLED))
 		return;
 
 	trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id));
@@ -2555,6 +2555,9 @@ static void process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
 		xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET);
 		return;
+	case COMP_STALL_ERROR:
+		ep->ep_state |= EP_STALLED;
+		break;
 	default:
 		/* do nothing */
 		break;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 3f2cd546a7a2..0c22b78358b9 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -1604,6 +1604,11 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
 		goto free_priv;
 	}
 
+	/* Class driver might not be aware ep halted due to async URB giveback */
+	if (*ep_state & EP_STALLED)
+		dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n",
+			urb);
+
 	switch (usb_endpoint_type(&urb->ep->desc)) {
 
 	case USB_ENDPOINT_XFER_CONTROL:
@@ -3202,6 +3207,7 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
 		return;
 
 	ep = &vdev->eps[ep_index];
+	ep->ep_state &= ~EP_STALLED;
 
 	/* Bail out if toggle is already being cleared by a endpoint reset */
 	spin_lock_irqsave(&xhci->lock, flags);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index cd96e0a8c593..4ee14f651d36 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -664,7 +664,7 @@ struct xhci_virt_ep {
 	unsigned int err_count;
 	unsigned int ep_state;
 #define SET_DEQ_PENDING		(1 << 0)
-#define EP_HALTED		(1 << 1)	/* For stall handling */
+#define EP_HALTED		(1 << 1)	/* Halted host ep handling */
 #define EP_STOP_CMD_PENDING	(1 << 2)	/* For URB cancellation */
 /* Transitioning the endpoint to using streams, don't enqueue URBs */
 #define EP_GETTING_STREAMS	(1 << 3)
@@ -675,6 +675,7 @@ struct xhci_virt_ep {
 #define EP_SOFT_CLEAR_TOGGLE	(1 << 7)
 /* usb_hub_clear_tt_buffer is in progress */
 #define EP_CLEARING_TT		(1 << 8)
+#define EP_STALLED		(1 << 9)	/* For stall handling */
 /* ---- Related to URB cancellation ---- */
 	struct list_head	cancelled_td_list;
 	struct xhci_hcd		*xhci;
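The doorbell change above folds the individual flag tests into one mask and adds EP_STALLED to it. A stand-alone sketch of that gating and of the flag's set/clear lifecycle; the flag values mirror the defines shown in the hunk, everything else is illustrative:

/* Toy demonstration of the combined bitmask check and EP_STALLED lifecycle. */
#include <stdio.h>

#define SET_DEQ_PENDING		(1 << 0)
#define EP_HALTED		(1 << 1)
#define EP_STOP_CMD_PENDING	(1 << 2)
#define EP_CLEARING_TT		(1 << 8)
#define EP_STALLED		(1 << 9)

static int may_ring_doorbell(unsigned int ep_state)
{
	/* one combined test, equivalent to OR-ing the individual checks */
	if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED |
			EP_CLEARING_TT | EP_STALLED))
		return 0;
	return 1;
}

int main(void)
{
	unsigned int ep_state = 0;

	ep_state |= EP_STALLED;		/* Stall Error received */
	printf("stalled: ring? %d\n", may_ring_doorbell(ep_state));	/* 0 */

	ep_state &= ~EP_STALLED;	/* endpoint_reset() after Clear_Feature(ENDPOINT_HALT) */
	printf("cleared: ring? %d\n", may_ring_doorbell(ep_state));	/* 1 */
	return 0;
}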
From patchwork Thu Mar 6 14:49:53 2025
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 871076
X-Mailing-List: linux-usb@vger.kernel.org
From: Mathias Nyman
Cc: Michal Pecio, Mathias Nyman
Subject: [PATCH 14/15] usb: xhci: Unify duplicate inc_enq() code
Date: Thu, 6 Mar 2025 16:49:53 +0200
Message-ID: <20250306144954.3507700-15-mathias.nyman@linux.intel.com>
In-Reply-To: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>
References: <20250306144954.3507700-1-mathias.nyman@linux.intel.com>

From: Michal Pecio

Extract a block of code copied from inc_enq() into a separate function and
call it from inc_enq() and the other function that used this code.

Remove the pointless 'next' variable, which only aliases ring->enqueue.

Note: I don't know if any 0.95 xHC ever reached series production, but the
"AMD 0.96 host" appears to be the "Llano" family APU. Example dmesg at
https://linux-hardware.org/?probe=79d5cfd4fd&log=dmesg:

pci 0000:00:10.0: [1022:7812] type 00 class 0x0c0330
hcc params 0x014042c3 hci version 0x96 quirks 0x0000000000000608

Signed-off-by: Michal Pecio
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 130 +++++++++++++++--------------------
 1 file changed, 55 insertions(+), 75 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 7643ab9ec3b4..2df94ed3152c 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -204,79 +204,84 @@ void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring)
 }
 
 /*
- * See Cycle bit rules. SW is the consumer for the event ring only.
- *
- * If we've just enqueued a TRB that is in the middle of a TD (meaning the
- * chain bit is set), then set the chain bit in all the following link TRBs.
- * If we've enqueued the last TRB in a TD, make sure the following link TRBs
- * have their chain bit cleared (so that each Link TRB is a separate TD).
- *
- * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit
- * set, but other sections talk about dealing with the chain bit set. This was
- * fixed in the 0.96 specification errata, but we have to assume that all 0.95
- * xHCI hardware can't handle the chain bit being cleared on a link TRB.
- *
- * @more_trbs_coming: Will you enqueue more TRBs before calling
- *			prepare_transfer()?
+ * If enqueue points at a link TRB, follow links until an ordinary TRB is reached.
+ * Toggle the cycle bit of passed link TRBs and optionally chain them.
  */
-static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
-		    bool more_trbs_coming)
+static void inc_enq_past_link(struct xhci_hcd *xhci, struct xhci_ring *ring, u32 chain)
 {
-	u32 chain;
-	union xhci_trb *next;
 	unsigned int link_trb_count = 0;
 
-	chain = le32_to_cpu(ring->enqueue->generic.field[3]) & TRB_CHAIN;
-
-	if (last_trb_on_seg(ring->enq_seg, ring->enqueue)) {
-		xhci_err(xhci, "Tried to move enqueue past ring segment\n");
-		return;
-	}
-
-	next = ++(ring->enqueue);
-
-	/* Update the dequeue pointer further if that was a link TRB */
-	while (trb_is_link(next)) {
+	while (trb_is_link(ring->enqueue)) {
 		/*
-		 * If the caller doesn't plan on enqueueing more TDs before
-		 * ringing the doorbell, then we don't want to give the link TRB
-		 * to the hardware just yet. We'll give the link TRB back in
-		 * prepare_ring() just before we enqueue the TD at the top of
-		 * the ring.
-		 */
-		if (!chain && !more_trbs_coming)
-			break;
-
-		/* If we're not dealing with 0.95 hardware or isoc rings on
-		 * AMD 0.96 host, carry over the chain bit of the previous TRB
-		 * (which may mean the chain bit is cleared).
+		 * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit
+		 * set, but other sections talk about dealing with the chain bit set. This was
+		 * fixed in the 0.96 specification errata, but we have to assume that all 0.95
+		 * xHCI hardware can't handle the chain bit being cleared on a link TRB.
+		 *
+		 * On 0.95 and some 0.96 HCs the chain bit is set once at segment initalization
+		 * and never changed here. On all others, modify it as requested by the caller.
 		 */
 		if (!xhci_link_chain_quirk(xhci, ring->type)) {
-			next->link.control &= cpu_to_le32(~TRB_CHAIN);
-			next->link.control |= cpu_to_le32(chain);
+			ring->enqueue->link.control &= cpu_to_le32(~TRB_CHAIN);
+			ring->enqueue->link.control |= cpu_to_le32(chain);
 		}
+
 		/* Give this link TRB to the hardware */
 		wmb();
-		next->link.control ^= cpu_to_le32(TRB_CYCLE);
+		ring->enqueue->link.control ^= cpu_to_le32(TRB_CYCLE);
 
 		/* Toggle the cycle bit after the last ring segment. */
-		if (link_trb_toggles_cycle(next))
+		if (link_trb_toggles_cycle(ring->enqueue))
 			ring->cycle_state ^= 1;
 
 		ring->enq_seg = ring->enq_seg->next;
 		ring->enqueue = ring->enq_seg->trbs;
-		next = ring->enqueue;
 
 		trace_xhci_inc_enq(ring);
 
 		if (link_trb_count++ > ring->num_segs) {
-			xhci_warn(xhci, "%s: Ring link TRB loop\n", __func__);
+			xhci_warn(xhci, "Link TRB loop at enqueue\n");
 			break;
 		}
 	}
 }
 
+/*
+ * See Cycle bit rules. SW is the consumer for the event ring only.
+ *
+ * If we've just enqueued a TRB that is in the middle of a TD (meaning the
+ * chain bit is set), then set the chain bit in all the following link TRBs.
+ * If we've enqueued the last TRB in a TD, make sure the following link TRBs
+ * have their chain bit cleared (so that each Link TRB is a separate TD).
+ *
+ * @more_trbs_coming: Will you enqueue more TRBs before calling
+ *			prepare_transfer()?
+ */
+static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		    bool more_trbs_coming)
+{
+	u32 chain;
+
+	chain = le32_to_cpu(ring->enqueue->generic.field[3]) & TRB_CHAIN;
+
+	if (last_trb_on_seg(ring->enq_seg, ring->enqueue)) {
+		xhci_err(xhci, "Tried to move enqueue past ring segment\n");
+		return;
+	}
+
+	ring->enqueue++;
+
+	/*
+	 * If we are in the middle of a TD or the caller plans to enqueue more
+	 * TDs as one transfer (eg. control), traverse any link TRBs right now.
+	 * Otherwise, enqueue can stay on a link until the next prepare_ring().
+	 * This avoids enqueue entering deq_seg and simplifies ring expansion.
+	 */
+	if (trb_is_link(ring->enqueue) && (chain || more_trbs_coming))
+		inc_enq_past_link(xhci, ring, chain);
+}
+
 /*
  * If the suspect DMA address is a TRB in this TD, this function returns that
  * TRB's segment. Otherwise it returns 0.
@@ -3213,7 +3218,6 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
 static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 		u32 ep_state, unsigned int num_trbs, gfp_t mem_flags)
 {
-	unsigned int link_trb_count = 0;
 	unsigned int new_segs = 0;
 
 	/* Make sure the endpoint has been added to xHC schedule */
@@ -3261,33 +3265,9 @@ static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 		}
 	}
 
-	while (trb_is_link(ep_ring->enqueue)) {
-		/* If we're not dealing with 0.95 hardware or isoc rings
-		 * on AMD 0.96 host, clear the chain bit.
-		 */
-		if (!xhci_link_chain_quirk(xhci, ep_ring->type))
-			ep_ring->enqueue->link.control &=
-				cpu_to_le32(~TRB_CHAIN);
-		else
-			ep_ring->enqueue->link.control |=
-				cpu_to_le32(TRB_CHAIN);
-
-		wmb();
-		ep_ring->enqueue->link.control ^= cpu_to_le32(TRB_CYCLE);
-
-		/* Toggle the cycle bit after the last ring segment. */
-		if (link_trb_toggles_cycle(ep_ring->enqueue))
-			ep_ring->cycle_state ^= 1;
-
-		ep_ring->enq_seg = ep_ring->enq_seg->next;
-		ep_ring->enqueue = ep_ring->enq_seg->trbs;
-
-		/* prevent infinite loop if all first trbs are link trbs */
-		if (link_trb_count++ > ep_ring->num_segs) {
-			xhci_warn(xhci, "Ring is an endless link TRB loop\n");
-			return -EINVAL;
-		}
-	}
+	/* Ensure that new TRBs won't overwrite a link */
+	if (trb_is_link(ep_ring->enqueue))
+		inc_enq_past_link(xhci, ep_ring, 0);
 
 	if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue)) {
 		xhci_warn(xhci, "Missing link TRB at end of ring segment\n");
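For reference, a stand-alone toy model of the behaviour that inc_enq_past_link() centralizes: when the enqueue pointer lands on a link TRB, hand that TRB to the hardware (cycle bit toggle), hop to the next segment, and flip the producer cycle state at the end of the ring. Simplified structures for illustration only, not the driver's own:

/* Toy segmented ring: each segment has 3 normal TRBs and 1 trailing link TRB. */
#include <stdbool.h>
#include <stdio.h>

#define TRBS 4

struct seg { unsigned int trbs[TRBS]; struct seg *next; bool last; };
struct ring { struct seg *enq_seg; int enqueue; int cycle_state; };

static void inc_enq_past_link(struct ring *r)
{
	while (r->enqueue == TRBS - 1) {		/* enqueue sits on the link TRB */
		r->enq_seg->trbs[r->enqueue] ^= 1;	/* toggle the link TRB's cycle bit */
		if (r->enq_seg->last)			/* link back to the first segment */
			r->cycle_state ^= 1;		/* flip producer cycle state */
		r->enq_seg = r->enq_seg->next;
		r->enqueue = 0;
	}
}

int main(void)
{
	struct seg s0 = { .last = false }, s1 = { .last = true };
	struct ring r = { .enq_seg = &s0, .enqueue = TRBS - 1, .cycle_state = 1 };

	s0.next = &s1;
	s1.next = &s0;

	inc_enq_past_link(&r);
	printf("now in segment %s, cycle_state=%d\n",
	       r.enq_seg == &s1 ? "s1" : "s0", r.cycle_state);	/* s1, 1 */
	return 0;
}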